Atguigu (尚硅谷) Big Data Technology: Flume


By Atguigu Education | Published 2018-12-06 14:58

    4. Run the configuration files
    Start an agent for each configuration file, in this order: flume3-flume-logger.conf, flume2-netcat-flume.conf, flume1-logger-flume.conf.
    [atguigu@hadoop104 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group3/flume3-flume-logger.conf -Dflume.root.logger=INFO,console

    [atguigu@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group3/flume2-netcat-flume.conf

    [atguigu@hadoop103 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group3/flume1-logger-flume.conf

    5. On hadoop103, append content to group.log in the /opt/module directory
    [atguigu@hadoop103 module]$ echo 'hello' > group.log
    6. On hadoop102, send data to port 44444
    [atguigu@hadoop102 flume]$ telnet hadoop102 44444
    7. Check the data on hadoop104: agent a3 uses a logger sink, so the events arrive on its console.

    Chapter 4: Monitoring Flume with Ganglia
    4.1 Installing and Deploying Ganglia

    1. Install the httpd service and PHP
      [atguigu@hadoop102 flume]$ sudo yum -y install httpd php
    2. Install other dependencies
      [atguigu@hadoop102 flume]$ sudo yum -y install rrdtool perl-rrdtool rrdtool-devel
      [atguigu@hadoop102 flume]$ sudo yum -y install apr-devel
    3. Install Ganglia
      [atguigu@hadoop102 flume]$ sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
      [atguigu@hadoop102 flume]$ sudo yum -y install ganglia-gmetad
      [atguigu@hadoop102 flume]$ sudo yum -y install ganglia-web
      [atguigu@hadoop102 flume]$ sudo yum install -y ganglia-gmond
    4. Edit the configuration file /etc/httpd/conf.d/ganglia.conf
      [atguigu@hadoop102 flume]$ sudo vim /etc/httpd/conf.d/ganglia.conf
      Modify the access directives so they read as follows (in the original course notes the changed lines are highlighted in red; with `Order deny,allow` the later `Allow from all` overrides the deny, opening the page to all hosts):

    # Ganglia monitoring system php web frontend
    Alias /ganglia /usr/share/ganglia
    <Location /ganglia>
      Order deny,allow
      Deny from all
      Allow from all
      Allow from 127.0.0.1
      Allow from ::1
      Allow from .example.com
    </Location>

    5. Edit the configuration file /etc/ganglia/gmetad.conf
      [atguigu@hadoop102 flume]$ sudo vim /etc/ganglia/gmetad.conf
      Change it to:
      data_source "hadoop102" 192.168.1.102
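The general form of a `data_source` line is a quoted cluster name, an optional polling interval in seconds, and one or more `host[:port]` entries that gmetad will try in order. A hedged illustration (the second host and the explicit ports below are hypothetical additions, not part of this setup):

```
# data_source "cluster name" [polling interval in seconds] host1[:port] host2[:port] ...
data_source "hadoop102" 15 192.168.1.102:8649 192.168.1.103:8649
```

When no port is given, gmetad polls the default gmond port 8649, which is why the single-host line above works without one.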
    6. Edit the configuration file /etc/ganglia/gmond.conf
      [atguigu@hadoop102 flume]$ sudo vim /etc/ganglia/gmond.conf
      Change it to:
      cluster {
        name = "hadoop102"
        owner = "unspecified"
        latlong = "unspecified"
        url = "unspecified"
      }
      udp_send_channel {
        #bind_hostname = yes # Highly recommended, soon to be default.
                             # This option tells gmond to use a source address
                             # that resolves to the machine's hostname.  Without
                             # this, the metrics may appear to come from any
                             # interface and the DNS names associated with
                             # those IPs will be used to create the RRDs.
        # mcast_join = 239.2.11.71
        host = 192.168.1.102
        port = 8649
        ttl = 1
      }
      udp_recv_channel {
        # mcast_join = 239.2.11.71
        port = 8649
        bind = 192.168.1.102
        retry_bind = true
        # Size of the UDP buffer. If you are handling lots of metrics you really
        # should bump it up to e.g. 10MB or even higher.
        buffer = 10485760
      }
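Once gmond is running, it serves an XML snapshot of all current metrics on its TCP accept channel (port 8649 by default), which is handy for checking the setup before the web frontend is up. A rough sketch of reading that snapshot; the sample document below is our own heavily trimmed illustration of the gmond XML shape, and the function name is ours:

```python
import xml.etree.ElementTree as ET

def parse_metrics(xml_text: str) -> dict:
    """Return {metric name: value string} for every METRIC element
    anywhere in a gmond XML dump."""
    root = ET.fromstring(xml_text)
    return {m.get("NAME"): m.get("VAL") for m in root.iter("METRIC")}

# Illustrative, heavily trimmed gmond output (real dumps carry many
# more attributes and metrics).
SAMPLE = """\
<GANGLIA_XML VERSION="3.7.2" SOURCE="gmond">
  <CLUSTER NAME="hadoop102" OWNER="unspecified">
    <HOST NAME="hadoop102" IP="192.168.1.102">
      <METRIC NAME="load_one" VAL="0.16" TYPE="float" UNITS=""/>
      <METRIC NAME="mem_free" VAL="811236" TYPE="float" UNITS="KB"/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>
"""

if __name__ == "__main__":
    print(parse_metrics(SAMPLE))
```

Against a live daemon you would fetch the XML with a TCP read of port 8649 (a plain `telnet hadoop102 8649` also dumps it) and pass the result to the same parser.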

    This tutorial is produced by the Atguigu Education Big Data Research Institute. Please credit the source when reposting, and follow the Atguigu WeChat official account (atguigu) to learn more.
