Druid in Practice, Part 2: Installation and Configuration

Author: zfylin | Published 2017-08-22 14:53

Installation Preparation

Preparing the installation packages

  • Build from source
  • Official release package
  • The imply bundle

The production Hadoop cluster runs on Java 7 while the official packages are built with Java 8, so we need to download the source and recompile it with Java 7 (see the environment sketch after the build output below).
This installation uses Druid 0.9.2 + imply 2.0.0.

  • Build

    git clone https://github.com/druid-io/druid.git # fetch the source
    cd druid
    git checkout 0.9.2                              # switch to the 0.9.2 release
    mvn clean package                               # build the packages
    

    The build generates a distribution directory; the Druid package lands in distribution/target:

    [imply@85-195-119-23 target]$ ll distribution/target/
    total 924
    drwxrwxr-x  2 imply imply      6 Aug  1 07:49 archive-tmp
    drwxrwxr-x  9 imply imply    141 Aug  1 07:59 druid-0.9.2.1-SNAPSHOT
    drwxrwxr-x 16 imply imply   4096 Aug  1 07:48 extensions
    drwxrwxr-x  3 imply imply     42 Aug  1 07:49 generated-resources
    drwxrwxr-x  3 imply imply     27 Aug  1 07:48 hadoop-dependencies
    -rw-rw-r--  1 imply imply 941721 Aug  1 07:49 mysql-metadata-storage-0.9.2.1-SNAPSHOT.tar
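
    To ensure Maven really compiles with Java 7, point JAVA_HOME at a JDK 7 installation before running the build. A minimal sketch, assuming a typical JDK 7 path (the path is hypothetical; adjust it to your machine):

    export JAVA_HOME=/usr/lib/jvm/java-1.7.0    # hypothetical JDK 7 location
    export PATH=$JAVA_HOME/bin:$PATH
    mvn -version                                # confirm Maven now reports Java 1.7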
    
  • Replace

    imply directory layout:

    [imply@85-195-119-23 imply-2.2.3]$ ll
    total 6
    drwxr-xr-x 2 imply imply 4096 Jul 31 05:35 bin                # scripts for running the components
    drwxr-xr-x 7 imply imply   78 Jul 26 08:59 conf               # production cluster configuration
    drwxr-xr-x 6 imply imply   61 Jul 31 05:35 conf-quickstart    # single-machine quickstart configuration
    drwxr-xr-x 6 imply imply   80 Jul 31 05:35 dist               # bundled software packages
    drwxr-xr-x 2 imply imply  226 May 26 18:23 quickstart
    drwxrwxr-x 5 imply imply   40 Jul 31 05:42 var
    

    Replace the imply-2.0.0/dist/druid directory with druid-0.9.2.1-SNAPSHOT (a sketch of the swap follows the listing below):

    [root@85-195-119-23 imply-2.0.0]# ll dist/
    total 142380
    lrwxrwxrwx 1 imply imply        22 Aug  1 08:05 druid -> druid-0.9.2.1-SNAPSHOT
    drwxrwxr-x 9 imply imply       141 Aug  1 08:04 druid-0.9.2.1-SNAPSHOT
    -rw-rw-r-- 1 imply imply 145790825 Aug  1 07:49 druid-0.9.2.1-SNAPSHOT-bin.tar.gz
    drwxr-xr-x 9 imply imply       141 Dec  1  2016 druid-bak
    drwxr-xr-x 6 imply imply        84 Dec  1  2016 pivot
    drwxr-xr-x 5 imply imply        40 Dec  1  2016 tranquility
    -rw-r--r-- 1 imply imply         6 Dec  1  2016 VERSION.txt
    drwxr-xr-x 3 imply imply        44 Dec  1  2016 zk
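
    A minimal sketch of the swap itself, assuming the rebuilt tarball has already been copied into dist/ (matching the listing above, where druid is a symlink and the bundled version was kept as druid-bak):

    cd imply-2.0.0/dist
    mv druid druid-bak                          # keep the bundled Druid as a backup
    tar -xzf druid-0.9.2.1-SNAPSHOT-bin.tar.gz
    ln -s druid-0.9.2.1-SNAPSHOT druid          # point 'druid' at the rebuilt version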
    

Environment Requirements

  • Java 7 or later (Java 8 recommended; the latest Druid versions require Java 8+)
  • Node.js 4.x or later
  • Linux or another Unix-like OS
  • At least 4 GB of RAM

External Dependencies

  • Deep Storage: stores and loads Druid's data files (segments)
  • Metadata Storage: stores and manages configuration and bookkeeping records for the whole system
  • Zookeeper (cluster state management): manages and synchronizes node state, and provides service discovery when nodes are added

Planning and Deployment

Druid has a distributed design in which each node type has its own responsibility, so when deploying a cluster the various node types need to be planned together. Functionally they fall into three groups:

  • Master: management nodes, comprising the Coordinator and the Overlord; responsible for managing data ingestion and fault-tolerance handling;
  • Data: data nodes, comprising Historicals and MiddleManagers; responsible for ingestion processing and for loading and querying historical data;
  • Query: query nodes, comprising Brokers and the Pivot web UI; responsible for the query API and interactive web-based querying.

In practice, deploy at least two management nodes so they can fail over to each other. Druid scales horizontally, so with limited machine resources the management and query nodes can be co-located on the same physical machines. To speed up queries on hot data, a Historical node can also run there: using Druid's tiering feature, a small set of hot datasources can be placed on the Historical nodes that share the management machines (a tiering sketch follows below).
For hardware, pick multi-core machines with large memory for the management and Historical nodes; data nodes hold historical data and local data caches, so they need more disk space. SSDs are recommended.
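
A minimal sketch of that tiering setup (the tier name "hot" and the priority value are assumptions, not taken from this cluster): the Historical co-located with the management nodes declares its tier in its runtime.properties, and the Coordinator's load rules can then pin the hot datasources to that tier.

druid.server.tier=hot        # hypothetical tier name for the co-located Historical
druid.server.priority=10     # higher priority: prefer this node when it holds a queried segment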

Master node supervise configuration file:

cp conf/supervise/master-no-zk.conf conf/supervise/master-with-query.conf
vim conf/supervise/master-with-query.conf
:verify bin/verify-java
:verify bin/verify-node

broker bin/run-druid broker conf
historical bin/run-druid historical conf
pivot bin/run-pivot conf
coordinator bin/run-druid coordinator conf
!p80 overlord bin/run-druid overlord conf
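
The :verify lines run pre-flight checks (Java and Node.js versions) before any service starts; each remaining line names a service and the script that launches it.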

Start command:

nohup ./bin/supervise -c conf/supervise/master-with-query.conf > master-with-query.log &
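
The supervise script keeps the listed services running and restarts them if they exit; each service writes its log under var/sv/ (e.g. var/sv/broker.log), which is the first place to check when a component fails to come up.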

Data node supervise configuration file:

vim conf/supervise/data.conf
:verify bin/verify-java

historical bin/run-druid historical conf
middleManager bin/run-druid middleManager conf

# Uncomment to use Tranquility Server
#!p95 tranquility-server bin/tranquility server -configFile conf/tranquility/server.json

# Uncomment to use Tranquility Kafka
#!p95 tranquility-kafka bin/tranquility kafka -configFile conf/tranquility/kafka.json

Start command:

nohup ./bin/supervise -c conf/supervise/data.conf > data.log &

Basic Configuration

Base dependency configuration

The configuration file is conf/druid/_common/common.runtime.properties.

  • Zookeeper

    druid.zk.service.host=${ZooKeeper cluster hosts}
    druid.zk.paths.base=/druid
    
  • Metadata Storage

    # For MySQL:
    druid.extensions.loadList=["mysql-metadata-storage"]
    druid.metadata.storage.type=mysql
    druid.metadata.storage.connector.connectURI=jdbc:mysql://{IP:PORT}/druid
    druid.metadata.storage.connector.user=${USER}
    druid.metadata.storage.connector.password=${PASSWORD}
    
    # For PostgreSQL:
    #druid.metadata.storage.type=postgresql
    #druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
    #druid.metadata.storage.connector.user=...
    #druid.metadata.storage.connector.password=.....
    
  • Deep Storage

    # For HDFS:
    druid.extensions.loadList=["druid-hdfs-storage"]
    druid.storage.type=hdfs
    druid.storage.storageDirectory=hdfs://${namenode:port}/druid/segments
    
    # Indexing service logs on HDFS:
    druid.indexer.logs.type=hdfs
    druid.indexer.logs.directory=hdfs://ip:port/druid/indexing-logs
    

Note: when HDFS is used as Deep Storage, offline batch ingestion tasks use MapReduce to speed up writes, so the Hadoop client configuration files from the production cluster (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) must be placed in conf/druid/_common, as sketched below.
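
A minimal sketch of that copy step, assuming the Hadoop client configs live in /etc/hadoop/conf (the source path is an assumption; adjust it to your Hadoop installation):

cp /etc/hadoop/conf/core-site.xml   conf/druid/_common/
cp /etc/hadoop/conf/hdfs-site.xml   conf/druid/_common/
cp /etc/hadoop/conf/yarn-site.xml   conf/druid/_common/
cp /etc/hadoop/conf/mapred-site.xml conf/druid/_common/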

Data node configuration tuning

Query node configuration tuning

Node Configuration

Node layout

Machine    IP           Roles
druid-01   10.1.12.76   master node, query node, pivot
druid-02   10.1.12.77   data node
druid-03   10.1.12.78   master node, query node
druid-04   10.1.12.79   data node
druid-05   10.1.12.80   data node

Global common configuration

#
# Extensions
#

druid.extensions.directory=dist/druid/extensions
druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
druid.extensions.loadList=["druid-histogram","druid-datasketches","mysql-metadata-storage","druid-hdfs-storage"]

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=datanode1:2181,datanode2:2181,datanode3:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://master.example.com:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=master.example.com
#druid.metadata.storage.connector.port=1527

# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://druid-01:3306/druid?characterEncoding=utf8&useSSL=false&serverTimezone=UTC
druid.metadata.storage.connector.user=username
druid.metadata.storage.connector.password=password

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://ip:port/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://ip:port/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=debug

Master machine configuration

There are two Master machines; as management nodes they back each other up for HA, and they also handle part of the query load.

  • Coordinator

jvm.config

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=var/druid/derby.log

runtime.properties

druid.host=druid-01
druid.service=druid/coordinator
druid.port=8081

druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S
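
druid.coordinator.startDelay gives the cluster state in Zookeeper time to settle before the Coordinator begins making segment-management decisions; druid.coordinator.period is how often it re-evaluates them.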

  • Overlord

jvm.config

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

druid.host=druid-01
druid.service=druid/overlord
druid.port=8090

druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata
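
druid.indexer.runner.type=remote makes the Overlord dispatch tasks to the MiddleManagers rather than running them in-process, and druid.indexer.storage.type=metadata persists task state in the metadata store so it survives an Overlord restart.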

  • Broker

jvm.config

-server
-Xms12g
-Xmx12g
-XX:MaxDirectMemorySize=3072m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

druid.host=druid-01
druid.service=druid/broker
druid.port=8082

# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=3
druid.processing.tmpDir=var/druid/processing

# Query cache disabled -- push down caching and merging instead
druid.broker.cache.useCache=false
druid.broker.cache.populateCache=false

# SQL
druid.sql.enable=true
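
As a sanity check on the sizing above: Druid requires -XX:MaxDirectMemorySize to be at least druid.processing.buffer.sizeBytes * (druid.processing.numThreads + druid.processing.numMergeBuffers + 1). Here that is 268435456 * (3 + 2 + 1) = 1.5 GB, which fits within the 3072 MB configured in jvm.config.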

  • Historical

jvm.config

-server
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=3072m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

druid.host=druid-01
druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=20

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=3
druid.processing.tmpDir=var/druid/processing

# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.server.maxSize=130000000000

# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000
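
Note that druid.server.maxSize (130 GB here) matches the total maxSize declared in druid.segmentCache.locations; the Coordinator uses druid.server.maxSize to decide how much segment data it may assign to this node, so the two values should stay in sync.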

Data machine configuration

There are three Data machines; as data nodes they handle data processing in a shared-nothing architecture.

  • Historical

jvm.config

-server
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=3072m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

druid.host=druid-02
druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=20

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=3
druid.processing.tmpDir=var/druid/processing

# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.server.maxSize=130000000000

# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000

  • MiddleManager

jvm.config

-server
-Xms64m
-Xmx64m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

druid.host=druid-02
druid.service=druid/middlemanager
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true

# HTTP server threads
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=268435456
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]
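
Two sizing notes on the settings above: each task (peon) is launched with -Xmx2g via druid.indexer.runner.javaOpts, so with druid.worker.capacity=3 a data machine should reserve roughly 6 GB of heap for tasks on top of the MiddleManager's own 64 MB heap. The hadoop-client:2.3.0 coordinates are Druid's default; if the production Hadoop cluster runs a different client version, adjust druid.indexer.task.defaultHadoopCoordinates (or the per-task hadoopDependencyCoordinates) to match.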
