
Flume Deployment and Usage

Author: Sx_Ren | Published 2018-03-19 16:08

    Flume is a distributed, highly reliable, and highly available framework for efficiently collecting, aggregating, and moving large amounts of log data (as the project describes itself: "Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data."). Its design goals are reliability, scalability, and manageability. With Flume, we can conveniently collect logs from a source (a web server, etc.) and deliver them to a destination (such as HDFS or Kafka).

    • Frameworks similar to Flume include:

    Flume: Cloudera/Apache, Java
    Scribe: Facebook, C/C++, no longer maintained
    Chukwa: Yahoo/Apache, Java, no longer maintained
    Kafka: Apache; it does not quite belong in this list, since it is mainly a data buffer
    Fluentd: Ruby
    Logstash: part of the ELK stack (Elasticsearch, Kibana)
    The two worth focusing on are Flume and Logstash, which are the most widely used in industry.

    • Architecture and core components

    Flume's unit of work is the Agent. Each Agent consists of three core components: a Source (the input side, which collects data), a Channel (an aggregation point used for buffering data), and a Sink (which writes the data out).


    [Figure: Flume agent architecture (Source → Channel → Sink)]
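
    As a sketch of how these pieces fit together, every Flume configuration follows the same generic shape (the agent, source, channel, and sink names below are placeholders, not from this post):

    # declare the components of an agent named <agent>
    <agent>.sources = <source>
    <agent>.channels = <channel>
    <agent>.sinks = <sink>

    # configure each component via <agent>.<kind>.<name>.<property>
    <agent>.sources.<source>.type = ...
    <agent>.channels.<channel>.type = ...
    <agent>.sinks.<sink>.type = ...

    # wire them together: a source may feed several channels, a sink drains exactly one
    <agent>.sources.<source>.channels = <channel>
    <agent>.sinks.<sink>.channel = <channel>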
    • Installing Flume (version 1.6.0)
    1. Prerequisites
      Java Runtime Environment - Java 1.7 or later
      Memory - sufficient memory for the configurations used by sources, channels, or sinks
      Disk Space - sufficient disk space for the configurations used by channels or sinks
      Directory Permissions - read/write permissions for the directories used by the agent
    2. Install the JDK
      Download the JDK
      Extract it to ~/app
      Add Java to the system environment variables in ~/.bash_profile:
      export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144
      export PATH=$JAVA_HOME/bin:$PATH
      Run source ~/.bash_profile to apply the configuration
      Verify: java -version
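
      In shell form, the steps above might look like this (the tarball name is an assumption; use the archive you actually downloaded):

      # extract the JDK into ~/app (tarball name is hypothetical)
      tar -zxvf jdk-8u144-linux-x64.tar.gz -C ~/app

      # append the environment variables to ~/.bash_profile
      echo 'export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144' >> ~/.bash_profile
      echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bash_profile

      # apply and verify
      source ~/.bash_profile
      java -version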
    3. Install Flume
      Download Flume
      Extract it to ~/app
      Add Flume to the system environment variables in ~/.bash_profile:
      export FLUME_HOME=/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin
      export PATH=$FLUME_HOME/bin:$PATH
      Run source ~/.bash_profile to apply the configuration
      In flume-env.sh, configure: export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144
      Verify: flume-ng version
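
      Again as a sketch (the tarball name is an assumption):

      # extract Flume into ~/app (tarball name is hypothetical)
      tar -zxvf apache-flume-1.6.0-cdh5.7.0-bin.tar.gz -C ~/app

      # append the environment variables to ~/.bash_profile
      echo 'export FLUME_HOME=/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin' >> ~/.bash_profile
      echo 'export PATH=$FLUME_HOME/bin:$PATH' >> ~/.bash_profile
      source ~/.bash_profile

      # flume-env.sh ships as a template; copy it, then point it at the JDK
      cp $FLUME_HOME/conf/flume-env.sh.template $FLUME_HOME/conf/flume-env.sh
      echo 'export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144' >> $FLUME_HOME/conf/flume-env.sh

      # verify
      flume-ng version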
    • Flume example 1 (netcat source + memory channel + logger sink)
    1. The key to using Flume is writing the configuration file: configure the Source, the Channel, and the Sink, then wire the three together.
      For example, create a configuration file $FLUME_HOME/conf/example.conf that uses a netcat source, a memory channel, and a logger sink. The contents of example.conf:
    # name the components of agent a1
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    
    # netcat source: listens on hadoop000:44444 and turns each line of text into an event
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = hadoop000
    a1.sources.r1.port = 44444
    
    # logger sink: logs events (to the console when run with -Dflume.root.logger=INFO,console)
    a1.sinks.k1.type = logger
    
    # memory channel: buffers events in memory
    a1.channels.c1.type = memory
    
    # bind the source and the sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    
    2. Start the agent:
    # --name must match the agent name used in the configuration file (a1 here)
    flume-ng agent \
    --name a1  \
    --conf $FLUME_HOME/conf  \
    --conf-file $FLUME_HOME/conf/example.conf \
    -Dflume.root.logger=INFO,console
    
    3. Start telnet and send data to verify
      After starting telnet hadoop000 44444, typing 123 produces output like this on the Flume side (the body hex 31 32 33 0D is the ASCII for "123" plus the carriage return from telnet's line ending):
      Event: { headers:{} body: 31 32 33 0D 123. }
      An Event is the basic unit of data transfer in Flume:
      Event = optional headers + byte array
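
      A minimal end-to-end check from a second terminal might look like this (output abbreviated):

      $ telnet hadoop000 44444
      123
      OK        # the netcat source acknowledges each received line with "OK" by default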
    • Flume example 2 (exec source + memory channel + logger sink)
    1. Create a configuration file exec-memory-logger.conf
      with the following contents:
      a1.sources = r1
      a1.sinks = k1
      a1.channels = c1
      
      a1.sources.r1.type = exec
      a1.sources.r1.command = tail -F /home/hadoop/data/data.log
      a1.sources.r1.shell = /bin/sh -c
      
      a1.sinks.k1.type = logger
      
      a1.channels.c1.type = memory
      
      a1.sources.r1.channels = c1
      a1.sinks.k1.channel = c1
      
    2. Start the agent
    flume-ng agent \
    --name a1  \
    --conf $FLUME_HOME/conf  \
    --conf-file $FLUME_HOME/conf/exec-memory-logger.conf \
    -Dflume.root.logger=INFO,console
    
    3. Append data to the log file /home/hadoop/data/data.log to verify
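    For example (the message text is arbitrary):
    
      echo "hello flume" >> /home/hadoop/data/data.log
    
    Because the exec source runs tail -F, every appended line should appear as an Event in the agent's console output.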
    • Flume example 3 (chaining two agents)

    Consider this situation:

    [Figure: agent-to-agent data flow]

    If a web server produces logs on one machine, an Agent on that machine can sink the data to a Source on another machine, which then uses a logger sink to print it to the console. Printing logs to the console is not useful by itself; in practice the final output would go to HDFS or be handed to Kafka for processing. This is only an example.
    First agent (exec source + memory channel + avro sink)
    Second agent (avro source + memory channel + logger sink)


    1. Create the configuration files exec-memory-avro.conf and avro-memory-logger.conf
      Since I don't have two machines at hand, I simulate the two-machine setup on a single machine (hadoop000).
      exec-memory-avro.conf:
      exec-memory-avro.sources = exec-source
      exec-memory-avro.sinks = avro-sink
      exec-memory-avro.channels = memory-channel
      
      exec-memory-avro.sources.exec-source.type = exec
      exec-memory-avro.sources.exec-source.command = tail -F /home/hadoop/data/data.log
      exec-memory-avro.sources.exec-source.shell = /bin/sh -c
      
      # the avro sink must point at the host/port where the second agent's avro source listens
      exec-memory-avro.sinks.avro-sink.type = avro
      exec-memory-avro.sinks.avro-sink.hostname = hadoop000
      exec-memory-avro.sinks.avro-sink.port = 44444
      
      exec-memory-avro.channels.memory-channel.type = memory
      
      exec-memory-avro.sources.exec-source.channels = memory-channel
      exec-memory-avro.sinks.avro-sink.channel = memory-channel
      
      avro-memory-logger.conf:
      avro-memory-logger.sources = avro-source
      avro-memory-logger.sinks = logger-sink
      avro-memory-logger.channels = memory-channel
      
      avro-memory-logger.sources.avro-source.type = avro
      avro-memory-logger.sources.avro-source.bind = hadoop000
      avro-memory-logger.sources.avro-source.port = 44444
      
      avro-memory-logger.sinks.logger-sink.type = logger
      
      avro-memory-logger.channels.memory-channel.type = memory
      
      avro-memory-logger.sources.avro-source.channels = memory-channel
      avro-memory-logger.sinks.logger-sink.channel = memory-channel
      
    2. Start the agents
      Start avro-memory-logger first, so that the avro source is already listening when the avro sink tries to connect:
      flume-ng agent \
      --name avro-memory-logger  \
      --conf $FLUME_HOME/conf  \
      --conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
      -Dflume.root.logger=INFO,console
      
      Then start exec-memory-avro:
      flume-ng agent \
      --name exec-memory-avro  \
      --conf $FLUME_HOME/conf  \
      --conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
      -Dflume.root.logger=INFO,console
      
    3. Append data to the log file /home/hadoop/data/data.log to verify
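
    As in example 2, append a line and watch the console of the avro-memory-logger agent (the message text is arbitrary):

      echo "hello agent chain" >> /home/hadoop/data/data.log

    The line travels exec source → memory channel → avro sink → avro source → memory channel → logger sink before it shows up on the second agent's console, so a short delay is normal.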
