1. ELK6 + kafka + filebeat Basic Installation

Author: yaoyao妖妖 | Published 2020-07-17 17:34

    1. Install Elasticsearch
    1.) Disable the firewall and SELinux

    service iptables stop
    chkconfig iptables off
    chkconfig iptables --list
    vim /etc/sysconfig/selinux
    SELINUX=disabled
    setenforce 0
    

    2.) Configure the JDK environment
    vim /etc/profile.d/java.sh

    export JAVA_HOME=/home/admin/jdk1.8.0_172/
    export PATH=$JAVA_HOME/bin:$PATH
    export CLASSPATH=.:$JAVA_HOME/lib/tools.jar
    source /etc/profile.d/java.sh
    

    3.) Install Elasticsearch 6.x
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
    tar -zxvf elasticsearch-6.2.4.tar.gz -C /home/admin/project/elk
    cd /home/admin/project/elk/elasticsearch-6.2.4
    vim config/elasticsearch.yml

    cluster.name: yaoyao-test
    node.name: node-1
    path.data: /home/admin/project/elk/elasticsearch-6.2.4/data
    path.logs: /home/admin/project/elk/elasticsearch-6.2.4/logs
    bootstrap.memory_lock: false
    network.host: 127.0.0.1
    network.bind_host: 0.0.0.0
    bootstrap.system_call_filter: false
    http.port: 9200
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
    http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
    
    

    4.) Start Elasticsearch (it must not be started as root)
    Create a dedicated user:

    useradd elk
    passwd elk
    chown -R elk.elk /home/admin/project/elk/elasticsearch-6.2.4
    su - elk                       # switch to the non-root user first
    ./bin/elasticsearch -d         # start in the background
    netstat -luntp | grep 9200     # check that port 9200 is listening
    curl 10.2.151.203:9200
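
If Elasticsearch came up correctly, the curl above returns a small JSON banner similar to the following (illustrative; values such as the build hash differ per install):

```json
{
  "name" : "node-1",
  "cluster_name" : "yaoyao-test",
  "version" : {
    "number" : "6.2.4"
  },
  "tagline" : "You Know, for Search"
}
```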
    

    5.) Common startup errors

    uncaught exception in thread [main]
    org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
    Cause: Elasticsearch cannot be run as root.
    Fix: switch to a non-root user and start it again.
    
    unable to install syscall filter:
    java.lang.UnsupportedOperationException: seccomp unavailable:
    Cause: this is only a warning, caused by an old Linux kernel (seccomp is unavailable).
    Fix: the warning does not affect operation and can be ignored.
    
    ERROR: bootstrap checks failed
    memory locking requested for elasticsearch process but memory is not locked
    Cause: memory locking failed.
    Fix: as root, edit the limits.conf configuration file:
    vim /etc/security/limits.conf
    
    * hard nproc 65536
    * soft nproc 65536
    * hard nofile 65536
    * soft nofile 65536
    max number of threads [1024] for user [es] is too low, increase to at least [2048]
    Cause: the maximum number of threads the user may create is too small.
    Fix: as root, edit the 90-nproc.conf file under limits.d:
    vim /etc/security/limits.d/90-nproc.conf
    
    * soft nofile 65536
    * soft nproc 65536
    max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    Cause: the maximum virtual memory map count is too small.
    Fix: as root, edit sysctl.conf:
    vim /etc/sysctl.conf
    vm.max_map_count=655360
    sysctl -p
    
    system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
    Cause: CentOS 6 does not support seccomp.
    Fix: set bootstrap.system_call_filter to false in elasticsearch.yml, below the memory setting:
    bootstrap.memory_lock: false
    bootstrap.system_call_filter: false
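
All of the bootstrap-check fixes above change kernel and per-user limits; before restarting Elasticsearch it is worth confirming they took effect. A minimal check script (a sketch; the thresholds in the comments are the values the checks above require):

```shell
#!/bin/sh
# Print the limits that Elasticsearch's bootstrap checks look at.
echo "open files (nofile):   $(ulimit -n)"                    # want >= 65536
echo "max processes (nproc): $(ulimit -u 2>/dev/null || echo n/a)"  # want >= 2048
# vm.max_map_count; want >= 262144
sysctl -n vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count
```

Run it as the elk user, since limits are applied per user.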
    

    2. Install the elasticsearch-head plugin
    Purpose: inspect Elasticsearch cluster status through a web UI.

    1.) Download and install Node.js

    wget https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-x64.tar.xz
    tar -xJf node-v8.11.3-linux-x64.tar.xz -C /home/admin/project/elk/
    cd /home/admin/project/elk/
    mv node-v8.11.3-linux-x64 node-v8.11.3
    

    Configure the Node.js environment variables:

    vim /etc/profile.d/node.sh
    export NODE_HOME=/home/admin/project/elk/node-v8.11.3
    export PATH=$PATH:$NODE_HOME/bin
    export NODE_PATH=$NODE_HOME/lib/node_modules
    source /etc/profile.d/node.sh
    
    # verify that nodejs works
    [admin@localhost node-v8.11.3]$ node -v
    v8.11.3
    [admin@localhost node-v8.11.3]$ npm -v
    5.6.0
    

    2.) Install grunt

    npm config set registry https://registry.npm.taobao.org
    vim ~/.npmrc
    registry=https://registry.npm.taobao.org
    strict-ssl = false
    npm install -g grunt-cli
    # symlink grunt into the system PATH
    ln -s /home/admin/project/elk/node-v8.11.3/lib/node_modules/grunt-cli/bin/grunt /usr/bin/grunt
    

    3.) Download the head package

    wget https://codeload.github.com/mobz/elasticsearch-head/zip/master -O elasticsearch-head-master.zip
    unzip elasticsearch-head-master.zip
    cd elasticsearch-head-master
    npm install
    # if this is slow or fails, use a domestic mirror
    npm install --ignore-scripts -g cnpm --registry=https://registry.npm.taobao.org
    

    4.) Modify the elasticsearch configuration file
    vi ./config/elasticsearch.yml

    Add these parameters so the head plugin can access ES:
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    

    5.) Modify the Gruntfile.js configuration
    vim Gruntfile.js

    Add a hostname entry above port: 9100:
    hostname: "0.0.0.0",
    

    6.) Modify the _site/app.js configuration
    vim _site/app.js

    # replace localhost with the server's IP address
    this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";
    

    7.) Start grunt

    grunt server
    

    If it starts successfully, you can run it in the background and keep using the shell (to stop it later, kill the process yourself):

    grunt server &
    nohup grunt server &
    exit # leave the shell; grunt keeps running in the background
    

    If startup reports a missing module:

    Local Npm module “grunt-contrib-jasmine” not found. Is it installed?
    npm install grunt-contrib-jasmine # install the missing module

    3. Install Kibana
    1.) Download and install

    wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-linux-x86_64.tar.gz
    tar -zxvf kibana-6.2.4-linux-x86_64.tar.gz -C /home/admin/project/elk/
    cd /home/admin/project/elk/kibana-6.2.4-linux-x86_64
    

    2.) Modify the configuration
    vim config/kibana.yml

    server.port: 5601
    server.host: "IP"
    elasticsearch.url: "http://IP:9200"
    

    3.) Start Kibana

    nohup ./bin/kibana &
    

    4. Install Logstash
    1.) Download and install

    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.tar.gz
    tar -zxvf logstash-6.2.4.tar.gz -C /home/admin/project/elk/
    cd /home/admin/project/elk/logstash-6.2.4
    

    2.) Create a pipeline config (pay close attention to the format)
    vim config/test.conf

    input
    {
      kafka
      {
        bootstrap_servers => "127.0.0.1:9092"
        topics => "tt1"
        codec => "json"
      }
    }
    output
    {
        elasticsearch {
            #action => "index"
            hosts => ["127.0.0.1:9200"]
            index =>  "my_log"
            template => "/home/admin/project/elk/logstash-6.2.4/config/template/logstash.json"
            manage_template => false # disable logstash's automatic template management
            template_name => "crawl" # name of the mapping template
            template_overwrite => true
         }
    
    
         if [level] == "ERROR" {
             elasticsearch {
                #action => "index"
                hosts => ["127.0.0.1:9200"]
                index =>  "mlog"
                template => "/home/admin/project/elk/logstash-6.2.4/config/template/logstash.json"
                manage_template => false # disable logstash's automatic template management
                template_name => "crawl" # name of the mapping template
                template_overwrite => true
             }
         }
    }
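
Since the kafka input above uses codec => "json" and the output branches on [level], each message on topic tt1 is expected to be a JSON object carrying a level field. An illustrative event (field names other than level are assumptions):

```json
{"@timestamp":"2020-07-17T17:34:00.000Z","level":"ERROR","message":"db connection refused","serverip":"localhost"}
```

Events whose level is ERROR are indexed into both my_log and mlog; everything else goes only to my_log.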
    
    

    ES dynamic template for logstash (/home/admin/project/elk/logstash-6.2.4/config/template/logstash.json):

    {  
      "template" : "crawl-*",  
      "settings" : {
        "index.number_of_shards": 5,
        "number_of_replicas": 0
      },
      "mappings" : {  
        "_default_" : {  
          "_all" : {"enabled" : true, "omit_norms" : true},  
          "dynamic_templates" : [ {  
            "message_field" : {  
              "match" : "message",  
              "match_mapping_type" : "string",  
              "mapping" : {  
                "type" : "string", "index" : "analyzed", "omit_norms" : true,  
                "fielddata" : { "format" : "disabled" }  
              }  
            }  
          }, {  
            "string_fields" : {  
              "match" : "*",  
              "match_mapping_type" : "string",  
              "mapping" : {  
                "type" : "string", "index" : "not_analyzed", "doc_values" : true  
              }  
            }  
          } ],  
          "properties" : {  
            "@timestamp": { "type": "date" },  
            "@version": { "type": "string", "index": "not_analyzed" },  
            "geoip"  : {  
              "dynamic": true,  
              "properties" : {  
                "ip": { "type": "ip" },  
                "location" : { "type" : "geo_point" },  
                "latitude" : { "type" : "float" },  
                "longitude" : { "type" : "float" }  
              }  
            }  
          }  
        }  
      }  
    }
    

    3.) Start Logstash

    ./bin/logstash -f config/test.conf -t   # validate the config first (-t = test and exit)
    nohup ./bin/logstash -f config/test.conf &
    # -f specifies the config file
    

    5. Install Kafka
    1.) Download and install

    wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
    wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
    tar -zxvf kafka_2.11-1.0.0.tgz -C /home/admin/project/elk/
    tar -zxvf zookeeper-3.4.14.tar.gz -C /home/admin/project/elk/
    cd /home/admin/project/elk/kafka_2.11-1.0.0/
    

    2.) Modify the ZooKeeper parameters and start it

    vim config/zookeeper.properties

    # data persistence path
    dataDir=/tmp/zookeeper/data
    # client connection port
    clientPort=2181
    # maximum number of client connections
    maxClientCnxns=100
    # transaction log path
    dataLogDir=/tmp/zookeeper/logs
    # ZooKeeper heartbeat interval, in milliseconds
    tickTime=2000
    # time (in ticks) allowed for followers to connect and sync when electing a new leader
    initLimit=10
    

    Start ZooKeeper

    ./bin/zookeeper-server-start.sh config/zookeeper.properties
    

    Start ZooKeeper in the background

    nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties &

    3.) Modify the Kafka parameters and start it
    vim config/server.properties

    # broker id; identifies this host
    broker.id=0
    # listen port
    port=9092
    listeners=PLAINTEXT://localhost:9092
    advertised.listeners=PLAINTEXT://localhost:9092
    host.name=10.2.151.203
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    # path where kafka stores received messages
    log.dirs=/data/logs/kafka
    num.partitions=2
    num.recovery.threads.per.data.dir=1
    log.retention.check.interval.ms=300000
    # address of the zookeeper ensemble to connect to
    zookeeper.connect=localhost:2181
    # max zookeeper session timeout (heartbeat interval); if no heartbeat arrives, the broker is considered dead, so do not set it too high
    #zookeeper.session.timeout.ms=6000
    # zookeeper connection timeout
    #zookeeper.connection.timeout.ms=6000
    # how long a zookeeper follower may take to sync with the leader
    zookeeper.sync.time.ms=2000
    

    The default is advertised.listeners=PLAINTEXT://your.host.name:9092

    Change it to advertised.listeners=PLAINTEXT://ip:9092

    Example: advertised.listeners=PLAINTEXT://192.168.244.128:9092

    Here ip is the server's IP address.
    The advertised hostname and port are what producers and consumers are told to connect to. If unset, the value of listeners is used; if listeners is also unset, java.net.InetAddress.getCanonicalHostName() is used, which for IPv4 is usually just localhost.
    "PLAINTEXT" is the protocol; PLAINTEXT and SSL are available. The hostname may be an IP address, or "0.0.0.0" to listen on all network interfaces; an empty hostname listens only on the default interface. In short, if advertised.listeners is not configured, the listeners value is what gets advertised to producers and consumers when they fetch metadata.
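
For example, to let producers and consumers on other machines reach this broker, the two listener lines might be set like this (10.2.151.203 is the host.name used earlier; substitute your server's IP):

```properties
# bind all local interfaces, but advertise the routable address to clients
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.2.151.203:9092
```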

    Start Kafka

    ./bin/kafka-server-start.sh config/server.properties
    

    Start it in the background

    nohup bin/kafka-server-start.sh config/server.properties &
    

    4.) Test Kafka

    Create a topic (test)

    bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
    

    List topics

    bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
    

    Start a test producer process

    bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test
    

    Start a consumer process

    bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic test --from-beginning
    
    6. Install kafka-eagle, a web UI for Kafka
      Note: you must create the database yourself and import it into the specified location.
      https://www.jianshu.com/p/db9f37bb7f98
      If the kafka-eagle web UI is unreachable after setup, see:
      https://blog.csdn.net/Dreamy_zsy/article/details/105026083?utm_medium=distribute.pc_aggpage_search_result.none-task-blog-2

    7. Install Filebeat
    1.) Download and install

    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
    tar -zxvf filebeat-6.2.4-linux-x86_64.tar.gz -C /home/admin/project/elk
    cd /home/admin/project/elk/filebeat-6.2.4-linux-x86_64
    

    2.) Configure Filebeat
    vim filebeat.yml

    
    filebeat.prospectors:
    
    # Each - is a prospector. Most options can be set at the prospector level, so
    # you can use different prospectors for various configurations.
    # Below are the prospector specific configurations.
    - type: log
    
      enabled: true
      paths:
        - /home/filebeatlog/*.log
      fields:  ## extra fields attached to each event
        serverip: "localhost"
        logtopic: "tt1"
     # scan_frequency: 1s
    
    - type: log
    
      enabled: true
      paths:
        - /home/topictest/*.log
      fields:
        serverip: "localhost"
        logtopic: "tt2"
     # scan_frequency: 1s
    
    
    output.kafka:
      enabled: true
      hosts: ["127.0.0.1:9092"]
      topic: '%{[fields.logtopic]}' ## resolves to the logtopic value under fields
      required_acks: 1
      compression: gzip
      max_message_bytes: 10000000
    
    processors:
    - drop_fields: # fields that should not be shipped
       fields: ["beat", "input", "source", "offset"]
    
    logging.level: error
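
The topic line in the output section uses Filebeat's format-string syntax: %{[fields.logtopic]} is replaced per event with the logtopic value added under fields, so logs from /home/filebeatlog go to topic tt1 and logs from /home/topictest go to tt2. A simplified sketch of that substitution (an illustration, not Filebeat's actual code):

```shell
#!/bin/sh
# One event as filebeat would ship it, with the fields added by the config above.
event='{"message":"hello","fields":{"serverip":"localhost","logtopic":"tt1"}}'
# Resolve %{[fields.logtopic]} by pulling that value out of the event.
topic=$(printf '%s' "$event" | sed -n 's/.*"logtopic":"\([^"]*\)".*/\1/p')
echo "$topic"   # -> tt1
```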
    
    

    3.) Start Filebeat

    nohup ./filebeat -e -c filebeat.yml &
    

    Common commands

    
    1. Start zookeeper & kafka

    Modify zookeeper params:  vim config/zookeeper.properties
    Modify kafka params:      vim config/server.properties
    ./bin/zookeeper-server-start.sh config/zookeeper.properties

    # start zookeeper in the background
    nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties &

    # start kafka; use the nohup form to run it in the background
    ./bin/kafka-server-start.sh config/server.properties
    nohup bin/kafka-server-start.sh config/server.properties &
    2. Start filebeat
    Configure filebeat:  vim filebeat.yml
    nohup ./filebeat -e -c filebeat.yml &

    3. Start kafka-eagle
    cd ../bin/
    chmod +x ke.sh
    ./ke.sh start

    4. Start Elasticsearch (must be started as a non-root user)
    vim config/elasticsearch.yml  # edit the elasticsearch config
    chown -R elk.elk /home/admin/project/elk/elasticsearch-6.2.4
    ./bin/elasticsearch -d # start in the background
    netstat -luntp |grep 9200 # check that port 9200 is listening
    curl 10.2.151.203:9200

    5. Start grunt (elasticsearch-head, the ES web UI, depends on grunt)
    vim Gruntfile.js  # edit the grunt config
    grunt server &
    nohup grunt server &  # start in the background

    6. Start kibana
    nohup ./bin/kibana &

    7. Start logstash
    nohup ./bin/logstash -f config/test.conf &  # -f specifies the config file
    
    

    Create a topic (test)

    bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
    

    List topics

    bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
    

    Start a test producer process

    bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test
    

    Start a consumer process

    bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic test --from-beginning
    

    If Linux kernel/limit parameters are too small, see:
    https://www.cnblogs.com/gentle-awen/p/10114759.html
    Adapted from https://blog.csdn.net/szchenchao/article/details/83008359

    Original article: https://www.haomeiwen.com/subject/lgighktx.html