ELK 6.2 Deployment and Practice


Author: 天夭夭 | Published 2018-11-14 00:00

    Architecture:

    [Architecture diagram]

    I built this after joining a new company, with a target of storing roughly 100 GB of logs per day. On the 5.x series I usually used Redis in list mode as the buffer, but Redis clusters have some compatibility issues, so this time the message queue in the middle was replaced with Kafka, which sustains high throughput and raises the collection rate.


    Setup:

    OS: CentOS 7.3

    IP             hostname    spec       services
    10.10.97.64    elk-1       16C/16G    zookeeper, kafka, es, logstash
    10.10.97.65    elk-2       8C/16G     zookeeper, kafka, es, kibana
    10.10.97.66    elk-3       16C/16G    zookeeper, kafka, es, logstash

    Note: in a real deployment, co-locating this many services on one machine is not recommended.

    I. Environment initialization

    1. Set the hostname (use elk-2 / elk-3 on the other nodes)

    hostnamectl --static set-hostname elk-1

    2. Disable the firewall and SELinux

    systemctl stop firewalld.service

    systemctl disable firewalld.service

    vi /etc/selinux/config

        SELINUX=disabled

    3. Install JDK 1.8

    rpm -vih jdk-8u151-linux-x64.rpm

    4. System limits and kernel parameters

    vi + /etc/security/limits.conf

    * soft nofile 65536

    * hard nofile 131072

    * soft nproc 2048

    * hard nproc 4096

    vi + /etc/sysctl.conf

    vm.max_map_count=655360

    sysctl -p

    5. Mount a data disk if needed

    mkdir /data

    mkfs.xfs -f /dev/xvdb

    mount -t xfs /dev/xvdb /data

    echo "/dev/xvdb              /data                  xfs    defaults        0 0" >> /etc/fstab

    6. Create the working directories

    mkdir /data/es /data/logs /data/zookeeper /data/logs/kafka /data/logs/logstash -p

            /data/logs             general logs
            /data/es               elasticsearch data
            /data/logs/kafka       kafka data
            /data/zookeeper        zookeeper data
            /data/logs/logstash    logstash logs

    7. Download the packages

    cd /tmp

    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.2.tar.gz

    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.2.tar.gz

    wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.2-linux-x86_64.tar.gz

    wget https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.11-0.10.0.0.tgz

    wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz

    for h in elk-2 elk-3; do scp /tmp/* "$h:/tmp/"; done    # sync the packages to the other nodes

    8. Extract:

    tar zxf elasticsearch-6.2.2.tar.gz

    tar zxf logstash-6.2.2.tar.gz

    tar zxf kibana-6.2.2-linux-x86_64.tar.gz

    tar zxf kafka_2.11-0.10.0.0.tgz

    tar zxf zookeeper-3.4.8.tar.gz

    cd /opt

    mv /tmp/elasticsearch-6.2.2  ./elasticsearch

    mv /tmp/logstash-6.2.2  ./logstash

    mv /tmp/kibana-6.2.2-linux-x86_64 ./kibana

    mv /tmp/zookeeper-3.4.8 ./zookeeper

    mv /tmp/kafka_2.11-0.10.0.0 ./kafka

    9. Add hosts entries

    vi + /etc/hosts

    10.10.97.64    elk-1

    10.10.97.65    elk-2

    10.10.97.66    elk-3

    II. ZooKeeper cluster

    1. Main configuration

    vim  /opt/zookeeper/conf/zoo.cfg

    ===============Identical on all nodes===============

    tickTime=2000

    initLimit=10

    syncLimit=5

    dataDir=/data/zookeeper

    dataLogDir=/data/logs

    clientPort=2181

    maxClientCnxns=1000

    autopurge.snapRetainCount=2

    autopurge.purgeInterval=1

    server.1=10.10.97.64:2888:3888

    server.2=10.10.97.65:2888:3888

    server.3=10.10.97.66:2888:3888

    ===============END===============

    2. Log configuration

    vim /opt/zookeeper/conf/log4j.properties

    zookeeper.log.dir=/data/logs/

    vim +124 /opt/zookeeper/bin/zkServer.sh    # add the line below at line 125

    export ZOO_LOG_DIR=/data/logs

    3. Create each node's unique id file (it must match the server.N entries in zoo.cfg)

    on elk-1:  echo "1" > /data/zookeeper/myid

    on elk-2:  echo "2" > /data/zookeeper/myid

    on elk-3:  echo "3" > /data/zookeeper/myid
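    Since the ids are encoded in the hostnames, the three commands above can also be derived mechanically; a small sketch assuming the elk-N naming convention from the table (the redirect into /data/zookeeper is left as a comment):

```shell
#!/bin/sh
# Derive a ZooKeeper myid from an elk-N style hostname (assumed naming convention).
node_id() {
    echo "${1##*-}"    # strip everything up to the last '-': elk-2 -> 2
}

node_id elk-1    # prints 1
node_id elk-3    # prints 3
# On each node: node_id "$(hostname)" > /data/zookeeper/myid
```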

    4. Start

    sh /opt/zookeeper/bin/zkServer.sh start

    5. Verify:

    sh /opt/zookeeper/bin/zkServer.sh status    # one node reports leader (only it listens on port 2888), the rest report follower
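    The role check can also be scripted; a sketch that pulls the Mode line out of the `zkServer.sh status` output (a sample of that output is inlined so the snippet runs without a live ensemble):

```shell
#!/bin/sh
# Extract "leader" / "follower" from zkServer.sh status output.
zk_mode() {
    sed -n 's/^Mode: //p'
}

sample='ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower'
printf '%s\n' "$sample" | zk_mode    # prints follower
# Live check: sh /opt/zookeeper/bin/zkServer.sh status 2>/dev/null | zk_mode
```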

    III. Kafka cluster

    Note: kafka_2.11-0.10 is used here; kafka_2.10-0.9 did not run correctly in this setup.

    1. Limit the heap

    vim +16 /opt/kafka/bin/kafka-server-start.sh    # tune to the machine

    export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"

    2. Main configuration

    mv  /opt/kafka/config/server.properties  /opt/kafka/config/server.properties.bak

    vim /opt/kafka/config/server.properties

    ===============elk-1===============

    broker.id=0

    port=9092

    host.name=10.10.97.64

    num.network.threads=3

    num.io.threads=8

    socket.send.buffer.bytes=102400

    socket.receive.buffer.bytes=102400

    socket.request.max.bytes=104857600

    log.dirs=/data/logs/kafka

    log.retention.hours=72

    message.max.bytes=5242880

    default.replication.factor=2

    replica.fetch.max.bytes=5242880

    num.partitions=2

    num.recovery.threads.per.data.dir=1

    log.segment.bytes=1073741824

    log.retention.check.interval.ms=300000

    log.cleaner.enable=false

    zookeeper.connect=10.10.97.64:2181,10.10.97.65:2181,10.10.97.66:2181

    zookeeper.connection.timeout.ms=6000

    ===============elk-2===============

    broker.id=1

    port=9092

    host.name=10.10.97.65

    num.network.threads=3

    num.io.threads=8

    socket.send.buffer.bytes=102400

    socket.receive.buffer.bytes=102400

    socket.request.max.bytes=104857600

    log.dirs=/data/logs/kafka

    log.retention.hours=72

    message.max.bytes=5242880

    default.replication.factor=2

    replica.fetch.max.bytes=5242880

    num.partitions=2

    num.recovery.threads.per.data.dir=1

    log.segment.bytes=1073741824

    log.retention.check.interval.ms=300000

    log.cleaner.enable=false

    zookeeper.connect=10.10.97.64:2181,10.10.97.65:2181,10.10.97.66:2181

    zookeeper.connection.timeout.ms=6000

    ===============elk-3===============

    broker.id=2

    port=9092

    host.name=10.10.97.66

    num.network.threads=3

    num.io.threads=8

    socket.send.buffer.bytes=102400

    socket.receive.buffer.bytes=102400

    socket.request.max.bytes=104857600

    log.dirs=/data/logs/kafka

    log.retention.hours=72

    message.max.bytes=5242880

    default.replication.factor=2

    replica.fetch.max.bytes=5242880

    num.partitions=2

    num.recovery.threads.per.data.dir=1

    log.segment.bytes=1073741824

    log.retention.check.interval.ms=300000

    log.cleaner.enable=false

    zookeeper.connect=10.10.97.64:2181,10.10.97.65:2181,10.10.97.66:2181

    zookeeper.connection.timeout.ms=6000

    ===============END===============

    3. Start:

    nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties >> /data/logs/kafka-nohup.out 2>&1 &

    4. Verify:

    4.1. Check the port:  netstat -lntp | grep 9092

    4.2. Create a topic

    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

    4.3. Check that the topic was created

    /opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181

    4.4. Verify end to end

    1. Produce a message on a broker

    /opt/kafka/bin/kafka-console-producer.sh --broker-list 10.10.97.64:9092 --topic test

    2. Consume it from a client

    /opt/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
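    The topic check in 4.3 can also be scripted against `--describe` output; a hedged sketch with sample 0.10-style describe output inlined, so it runs without a live broker:

```shell
#!/bin/sh
# Count the per-partition rows in `kafka-topics.sh --describe` output.
partition_count() {
    grep -c 'Partition: '    # each per-partition row contains "Partition: N"
}

sample='Topic:test  PartitionCount:2  ReplicationFactor:2  Configs:
  Topic: test  Partition: 0  Leader: 0  Replicas: 0,1  Isr: 0,1
  Topic: test  Partition: 1  Leader: 1  Replicas: 1,2  Isr: 1,2'
printf '%s\n' "$sample" | partition_count    # prints 2
# Live check: /opt/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test | partition_count
```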

    IV. Elasticsearch cluster

    useradd es

    chown es: -R /opt/elasticsearch

    chown es: -R /data/es

    chown root:es /data/logs

    chmod 775 /data/logs

    1. Main configuration

    mv  /opt/elasticsearch/config/elasticsearch.yml  /opt/elasticsearch/config/elasticsearch.yml.bak

    vi /opt/elasticsearch/config/elasticsearch.yml

    ===============elk-1===============

    cluster.name: elk-es

    node.name: elk-1

    node.attr.rack: r1

    path.data: /data/es

    path.logs: /data/logs

    network.host: 10.10.97.64

    http.port: 9200

    transport.tcp.port: 9300

    discovery.zen.ping.unicast.hosts: ["10.10.97.65", "10.10.97.66","[::1]"]

    discovery.zen.minimum_master_nodes: 2

    action.destructive_requires_name: true

    thread_pool.bulk.queue_size: 1000

    ===============elk-2===============

    cluster.name: elk-es

    node.name: elk-2

    node.attr.rack: r1

    path.data: /data/es

    path.logs: /data/logs

    network.host: 10.10.97.65

    http.port: 9200

    transport.tcp.port: 9300

    discovery.zen.ping.unicast.hosts: ["10.10.97.64", "10.10.97.66","[::1]"]

    discovery.zen.minimum_master_nodes: 2

    action.destructive_requires_name: true

    ===============elk-3===============

    cluster.name: elk-es

    node.name: elk-3

    node.attr.rack: r1

    path.data: /data/es

    path.logs: /data/logs

    network.host: 10.10.97.66

    http.port: 9200

    transport.tcp.port: 9300

    discovery.zen.ping.unicast.hosts: ["10.10.97.64", "10.10.97.65","[::1]"]

    discovery.zen.minimum_master_nodes: 2

    action.destructive_requires_name: true

    ===============END===============

    2. Limit the heap (tune to the machine)

    vim /opt/elasticsearch/config/jvm.options

    -Xms6g

    -Xmx6g

    3. Start:

    sudo su - es -c  "/opt/elasticsearch/bin/elasticsearch -d"

    4. Verify:

    tail -fn111 /data/logs/elk-es.log

    4.1. Check the listeners

    netstat -lntp|grep "9200\|9300"

    4.2. Check the node info

    curl  localhost:9200

    4.3. Check cluster health

    curl  http://10.10.97.64:9200/_cat/health?v
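    For monitoring scripts it helps to pull just the status column out of `_cat/health?v`; a sketch with a sample 6.x response inlined (status is the fourth column, after the header row):

```shell
#!/bin/sh
# Extract the cluster status (green/yellow/red) from `_cat/health?v` output.
es_health() {
    awk 'NR==2 {print $4}'    # row 1 is the header; column 4 is "status"
}

sample='epoch      timestamp cluster status node.total node.data shards
1542124800 00:00:00  elk-es  green           3         3     10'
printf '%s\n' "$sample" | es_health    # prints green
# Live check: curl -s "http://10.10.97.64:9200/_cat/health?v" | es_health
```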

    V. Kibana

    mv /opt/kibana/config/kibana.yml /opt/kibana/config/kibana.yml.bak

    vi /opt/kibana/config/kibana.yml

    server.port: 5601

    server.host: "0.0.0.0"

    elasticsearch.url: "http://10.10.97.64:9200"

    1. Start:

    nohup /opt/kibana/bin/kibana >> /data/logs/kibana-nohup.out 2>&1 &

    2. Verify:

    curl localhost:5601 -I    # a 200 response means it is up
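    A scripted version of the same check can pull the status code from the response line (a sketch; the status line is always the first line of `curl -I` output):

```shell
#!/bin/sh
# Extract the HTTP status code from `curl -I` output.
http_code() {
    awk 'NR==1 {print $2}'    # status line: "HTTP/1.1 200 OK"
}

printf 'HTTP/1.1 200 OK\r\n' | http_code    # prints 200
# Live check: curl -sI localhost:5601 | http_code
```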

    VI. Logstash

    1. Limit the heap logstash may use (tune to the machine)

    vi /opt/logstash/config/jvm.options

    -Xms4g

    -Xmx4g

    vi /opt/logstash/bin/logstash.conf    (6.x drops the beats document_type field, so type-based filtering changes)

    ===================logstash config reference=======================

    input {

            kafka {

                    codec => "json"

                    topics => ["test"]

                    bootstrap_servers => "10.10.97.64:9092,10.10.97.65:9092,10.10.97.66:9092"

                    auto_offset_reset => "latest"

                    group_id => "logstash-gl"

            }

    }

    filter {

            mutate {

              remove_field => "beat.name"

              remove_field => "beat.version"

              remove_field => "@version"

              remove_field => "offset"

              remove_field => "fields.service"

            }

    }

    output {

            elasticsearch {

                    hosts => ["10.10.97.64:9200","10.10.97.65:9200","10.10.97.66:9200"]

                    index => "logstash-%{[fields][service]}-%{+YYYY.MM.dd}"

                    document_type => "%{[fields][service]}"

                    workers => 1

            }

            stdout {codec => rubydebug}

    }

    ==================================END======================================
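    The output block writes one index per service and day, driven by the `fields.service` value that filebeat ships (section VII). How the index name is composed can be sketched as:

```shell
#!/bin/sh
# Mirror logstash's index pattern "logstash-%{[fields][service]}-%{+YYYY.MM.dd}".
service="jetty_srm"                              # value set by filebeat in section VII
index="logstash-${service}-$(date +%Y.%m.%d)"
echo "$index"    # e.g. logstash-jetty_srm-2018.11.14
```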

    2. Start:

    nohup /opt/logstash/bin/logstash -f /opt/logstash/bin/logstash.conf >> /data/logs/logstash/nohup.out 2>&1 &

    3. Verify:

    netstat -lntp |grep 9600

    tail -f /data/logs/logstash/nohup.out

    VII. Client setup (log collection):

    cd /opt

    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-linux-x86_64.tar.gz

    tar zxf filebeat-6.2.2-linux-x86_64.tar.gz

    mv filebeat-6.2.2-linux-x86_64 filebeat

    rm -fr filebeat-6.2.2-linux-x86_64.tar.gz

    mkdir /data/logs/filebeat

    >/opt/filebeat/filebeat.yml

    vi  /opt/filebeat/filebeat.yml

    ======================filebeat config reference========================

    filebeat.prospectors:

    - input_type: log

      paths:

        - /data/logs/jetty_9504/run*.log

      ignore_older: "24h"

      fields:

          service: jetty_srm

      multiline.pattern:  '^[0-9]{4}-[0-9]{2}-[0-9]{2}'

      multiline.negate: true

      multiline.match: after

      multiline.timeout: 10s

      multiline.max_lines: 1000

    - input_type: log

      paths:

        - /data/logs/jetty_9505/run*.log

      ignore_older: "24h"

      fields:

          service: jetty_crm

      multiline.pattern:  '^[0-9]{4}-[0-9]{2}-[0-9]{2}'

      multiline.negate: true

      multiline.match: after

      multiline.timeout: 10s

      multiline.max_lines: 1000

    max_bytes: 104857600

    tail_files: true

    output.kafka:

        enabled: true

        hosts: ["10.10.97.64:9092","10.10.97.65:9092","10.10.97.66:9092"]

        topic: test

        timeout: 10

    =============================END===============================
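    The multiline settings above stitch stack traces back onto the log line they belong to: a line matching the date pattern starts a new event (negate: true), and non-matching lines are appended after it (match: after). A quick sketch of which lines would start a new event:

```shell
#!/bin/sh
# Same regex as multiline.pattern: a leading YYYY-MM-DD date starts a new event.
pat='^[0-9]{4}-[0-9]{2}-[0-9]{2}'

starts_event() {
    grep -Eq "$pat" && echo yes || echo no
}

echo '2018-11-14 00:00:01 ERROR boom' | starts_event              # prints yes
echo '    at com.example.Foo.bar(Foo.java:42)' | starts_event     # prints no
```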

    Start:

    nohup /opt/filebeat/filebeat -e -c /opt/filebeat/filebeat.yml >> /data/logs/filebeat-nohup.out 2>&1 &

    Verify:

    1. netstat -antp | grep 9092    # filebeat does not listen on a port; look for its outbound connections to Kafka

    2. tail -f /data/logs/filebeat-nohup.out

    That's it for the deployment.

    I just finished visualizing the nginx logs today; sharing two screenshots:

    [Map of request origins]  [Heat map of 30 minutes of traffic]

    Summary: ELK 6.x is a much better experience than 5.x. Kibana 6 feels noticeably smoother, and it adds plenty of new features such as APM and machine learning that I'd like to try, but I've been too busy lately. If you've read this far, this article has probably been of some use to you. The config files above can be found on my GitHub; just change the IPs. Happy to discuss more: QQ 546020152
