1) Deploying and Testing an ELK 5.1 + Redis + Filebeat Log System

Author: Jane_5W | Published 2017-01-16 12:23

    1. Files Used

    • centos7
    • jdk-8u112-linux-x64.tar.gz
    • elasticsearch-5.1.1.tar.gz
    • filebeat-5.1.1-x86_64.rpm
    • kibana-5.1.1-linux-x86_64.tar.gz
    • logstash-5.1.1.tar.gz
    • redis-3.2.6.tar.gz

    2. Environment Setup

    ELK 5.1 and later requires JDK 1.8 or newer, so set up the JDK environment variables first.

    1. Run java -version to check whether CentOS ships with OpenJDK; if it does, remove it first with:
    rpm -e --nodeps $(rpm -qa | grep java)
    2. Extract the JDK, move it into /usr/local/java, rename the directory, and note the JDK root directory:

    # tar -zxf jdk-8u112-linux-x64.tar.gz
    # mv jdk1.8.0_112/ jdk1.8/
    # pwd
    /usr/local/java/jdk1.8
    

    3. Configure the Java environment variables

    # vi /etc/profile
    Add the following:
    export NODE_HOME=/opt/soft/node
    export PATH=$NODE_HOME/bin:$PATH # Node.js environment variables (not part of the JDK setup)
    
    export JAVA_HOME=/usr/local/java/jdk1.8
    export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$PATH:$JAVA_HOME/bin
    
    # source /etc/profile
    # java -version
    java version "1.8.0_112"
    Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
    Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
    
    

    3. Extract the ELK Archives and Grant Permissions

    # tar -zxf elasticsearch-5.1.1.tar.gz
    # tar -zxf kibana-5.1.1-linux-x86_64.tar.gz
    # tar -zxf logstash-5.1.1.tar.gz
    
    # chmod -R 777 elasticsearch-5.1.1
    # chmod -R 777 kibana-5.1.1-linux-x86_64
    # chmod -R 777 logstash-5.1.1
    

    4. Create a System User, Grant Root Privileges, and Disable the Firewall

    # useradd elkuser
    # passwd elkuser
    
    

    Grant root privileges

    # vi /etc/sudoers   and below the line "root    ALL=(ALL)       ALL" add:
    elkuser    ALL=(ALL)       ALL
    Note: the file is read-only, so force the save with :wq!
    The change takes effect immediately; the file does not need to be sourced or reloaded.
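
    To confirm the new entry works (a minimal sanity check using the elkuser account created above):

    # su - elkuser
    $ sudo whoami
    root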
    

    Disable the firewall

    Check the status:
    # firewall-cmd --state
    # service iptables status
    On CentOS 7 the second command only works if iptables-services is installed.
    If the firewall is running, stop it as shown below.
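
    A minimal sketch for CentOS 7, where firewalld is the default firewall:

    # stop firewalld for this session
    systemctl stop firewalld
    # keep it from coming back on reboot
    systemctl disable firewalld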
    

    5. Install and Configure Filebeat

    Change into the directory containing the Filebeat package and install it; if the command fails, drop the sudo:

    sudo rpm -vi filebeat-5.1.1-x86_64.rpm
    

    To see where Filebeat was installed, run: rpm -ql filebeat

    Configure Filebeat (/etc/filebeat/filebeat.yml):

    ###################### Filebeat Configuration Example #########################
    #=========================== Filebeat prospectors =============================
    
    filebeat.prospectors:
    
    - input_type: log
    
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /var/elklog/Linux/*.log
      document_type: linux
    - input_type: log  
      paths:
        - /var/elklog/openstackMitakaLog/*.log
      document_type: api
    
    #================================ Outputs =============================
    
    #----------------------------- Logstash output --------------------------------
    output.logstash:
      # The Logstash hosts
      hosts: ["192.168.1.19:5043"]
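
    The directories listed under paths above must exist before Filebeat can harvest from them. If you are reproducing this layout, create them first (a sketch assuming the same paths as above):

    mkdir -p /var/elklog/Linux /var/elklog/openstackMitakaLog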
    

    Start Filebeat

    sudo /etc/init.d/filebeat start
    or /etc/init.d/filebeat start
    
    Result: Starting filebeat: 2017/01/16 11:19:40.838639 beat.go:267: INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
    2017/01/16 11:19:40.838687 beat.go:177: INFO Setup Beat: filebeat; Version: 5.1.1
    2017/01/16 11:19:40.838827 logp.go:219: INFO Metrics logging every 30s
    2017/01/16 11:19:40.838824 logstash.go:90: INFO Max Retries set to: 3
    2017/01/16 11:19:40.839014 outputs.go:106: INFO Activated logstash as output plugin.
    2017/01/16 11:19:40.839337 publish.go:291: INFO Publisher name: localhost.localdomain
    2017/01/16 11:19:40.847865 async.go:63: INFO Flush Interval set to: 1s
    2017/01/16 11:19:40.847879 async.go:64: INFO Max Bulk Size set to: 2048
    Config OK                                                         [  OK  ]
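
    To confirm Filebeat stays up and is shipping events, check the service and its log (the log directory comes from the Logs path in the output above; the exact file name is an assumption):

    /etc/init.d/filebeat status
    tail -f /var/log/filebeat/filebeat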
    
    

    6. Configure and Run Elasticsearch

    Configure ES (short for Elasticsearch from here on) by editing config/elasticsearch.yml; anything not listed below is left at its default.

    # ======================== Elasticsearch Configuration =========================
    # ---------------------------------- Cluster -----------------------------------
    cluster.name: bjhit-cluster
    # ------------------------------------ Node ------------------------------------
    node.name: node-bjhit
    # ---------------------------------- Network -----------------------------------
    network.host: 192.168.1.19
    http.port: 9200
    

    Run ES: change into the extracted Elasticsearch directory.
    ES refuses to start as root; run it as the elkuser account created earlier, otherwise it exits with an error.

    ./bin/elasticsearch
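
    Because network.host is set to a non-loopback address, ES 5.x runs its bootstrap checks at startup. If it exits complaining about vm.max_map_count or the open-file limit (common on a stock CentOS 7 install), raise them first; the values below are the documented minimums:

    # as root: raise the mmap count limit Elasticsearch 5.x requires
    sysctl -w vm.max_map_count=262144
    # and in /etc/security/limits.conf, allow elkuser enough open files:
    # elkuser soft nofile 65536
    # elkuser hard nofile 65536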

    7. Install and Run Redis

    Building Redis depends on gcc, so confirm it is installed first (and install it if not):

    Check:   gcc -v
    Install: yum install -y gcc gcc-c++ make
    Once gcc is in place:
    
    $ tar xzf redis-3.2.6.tar.gz
    $ cd redis-3.2.6
    $ make
    Start Redis for a quick test
    $ src/redis-server
    Test it from another terminal
    $ src/redis-cli
    redis> set foo bar
    OK
    redis> get foo
    "bar"
    

    Once the test passes, start Redis for real. If Redis runs on a different Linux host from Logstash, disable protected mode so remote clients can connect:

    ./src/redis-server --protected-mode no
    Otherwise simply run ./src/redis-server
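
    To verify that the Logstash host can actually reach this Redis instance, a quick check from the Logstash machine (using the Redis host address from the configs below) should answer PONG:

    src/redis-cli -h 192.168.1.211 ping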

    8. Install the Logstash Plugin, Then Configure and Start Logstash

    For Filebeat and Logstash to talk to each other, the beats input plugin must be installed from the Logstash directory:

    From the Logstash directory, run: ./bin/logstash-plugin install logstash-input-beats
    The installation may not succeed on the first attempt; keep retrying and it will eventually go through. A successful install prints:

    Validating logstash-input-beats
    Installing logstash-input-beats
    WARNING: SSLSocket#session= is not supported
    Installation successful
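
    To double-check that the plugin is now available (a quick verification with the same plugin tool):

    ./bin/logstash-plugin list | grep beats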
    

    Configure Logstash: create the two files below and place them in the Logstash bin directory:

    1. redis-input.conf (receives events from Filebeat and pushes them into Redis)

    #input { stdin { } }
    #input {
    #    file {
    #        path => ["/var/elklog/*.log"]
    #        type => "system"
    #        start_position => "beginning"
    #    }
    #}
    input {
        beats {
            port => "5043"
        }
    }
    # The filter part of this file is commented out to indicate that it is
    # optional.
    # filter {
    #
    # }
    output {
        # elasticsearch {
        #     hosts => [ "192.168.1.19:9200" ]
        #     index => "logstash-%{type}-%{+YYYY.MM.dd}"
        #     document_type => "%{type}"
        #     flush_size => 20000
        #     idle_flush_time => 10
        #     template_overwrite => true
        # }

        redis {
            host => "192.168.1.211"
            port => 6379
            data_type => "list"
            key => "logstash:redis"
        }
        stdout { codec => rubydebug }
    }
    

    2. redis-output.conf (pulls events out of Redis and indexes them into Elasticsearch)

    #input { stdin { } }
    #input {
    #    file {
    #        path => ["/var/elklog/*.log"]
    #        type => "system"
    #        start_position => "beginning"
    #    }
    #}
    #input {
    # beats {
    #    port => "5043"
    # }
    #}
    # The filter part of this file is commented out to indicate that it is
    # optional.
    # filter {
    #
    # }
    input {
        redis {
            data_type => "list"
            key => "logstash:redis"
            host => "192.168.1.211"
            port => 6379
        }
    }
    
    output {
        if [type] == "linux" {
            elasticsearch {
                hosts => [ "192.168.1.19:9200" ]
                index => "linux-%{+YYYY.MM.dd}"
                document_type => "%{[@metadata][type]}"
                flush_size => 20000
                idle_flush_time => 10
                template_overwrite => true
            }
        } else if [type] == "api" {
            elasticsearch {
                hosts => [ "192.168.1.19:9200" ]
                index => "api-%{+YYYY.MM.dd}"
                document_type => "%{[@metadata][type]}"
                flush_size => 20000
                idle_flush_time => 10
                template_overwrite => true
            }
        }

        stdout { codec => rubydebug }
    }
    
    
    

    Start Logstash
    From the bin directory, start one instance per config file:
    ./logstash -f redis-output.conf
    ./logstash -f redis-input.conf
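
    Each file can also be syntax-checked before starting, and once both pipelines are running you can watch the Redis list used as the broker (a sketch; the key and host come from the configs above):

    # check a pipeline configuration without starting it (Logstash 5.x)
    ./logstash -f redis-input.conf --config.test_and_exit
    # length of the broker list: it grows while only redis-input.conf runs and drains once redis-output.conf runs
    redis-cli -h 192.168.1.211 llen logstash:redis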

    9. Configure and Start Kibana

    Configure Kibana (config/kibana.yml):

    server.port: 5601
    server.host: "192.168.1.19"
    server.name: "localhost"
    elasticsearch.url: "http://192.168.1.19:9200"
    
    

    Start Kibana
    Change into the Kibana directory (in my tests, all of the ELK components were run as elkuser) and run:
    ./bin/kibana
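
    With everything running, the Kibana UI is reachable at the host and port configured above. A quick check from another shell that both services respond (addresses taken from this setup):

    # Elasticsearch should answer with its cluster and version info as JSON
    curl http://192.168.1.19:9200
    # Kibana should answer on its configured port; open the same address in a browser for the UI
    curl -I http://192.168.1.19:5601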
