filebeat + kafka + ELK Cluster Deployment


    ELK architecture design (diagram)



    ELK deployment guide (Kafka-buffered pipeline)
    1. ELK Environment Overview
    I. What is ELK
    ELK is a complete log collection and visualization solution from Elastic. The name is an acronym for its three products: ElasticSearch, Logstash, and Kibana.

    ElasticSearch (ES for short) is a real-time distributed search and analytics engine used for full-text search, structured search, and analytics. It is built on top of the Apache Lucene full-text search library and written in Java.
    Logstash is a data collection engine with real-time pipeline capabilities; it collects data (for example by reading text files), parses it, and ships it to ES.
    Kibana provides a web platform for analyzing and visualizing Elasticsearch data. It can query and interact with data in Elasticsearch indices and build tables and charts across many dimensions.
    

    II. What ELK is used for
    ELK is an open-source log analysis solution. Log analysis is not limited to system error logs and exceptions; it also covers business events and any other text-based data, and many solutions can be built on top of it, for example:

    1. Troubleshooting. Quickly locate problems, or even catch them before they grow. Log analysis is clearly the foundation of troubleshooting.
    2. Monitoring and alerting. Logs, monitoring, and alerting complement each other. Log-based monitoring and alerting give the operations team an automated early-warning system and save a great deal of manual effort.
    3. Data analysis. Useful for data analysts.
    

    III. ELK stack architecture

    Data source → filebeat

        filebeat collects the data.

    filebeat → kafka

        filebeat forwards events to Kafka, which acts as a buffer, routing them to different topics according to its configuration.

    kafka → logstash

        logstash runs several pipelines that each read data from Kafka, parse it with filters, and route it to different Elasticsearch indices. Logstash is covered in detail in a later article.

    logstash → elasticsearch

        When Logstash ships data to Elasticsearch, indices are created automatically. To keep the newly created indices in a consistent format, I use index templates in ES to define the mapping of new indices. This is covered in detail in a later article.

    elasticsearch → kibana

        Kibana runs statistics and analysis on the collected data to monitor applications in real time.
    

    2. Elasticsearch Installation and Cluster Deployment
    I. Environment

    OS: CentOS 7.6 64-bit

    Elasticsearch version: 7.7.1

    10.10.0.147 es-master1  port: 9200

    10.10.0.220 es-master2  port: 9200

    10.10.0.221 es-master3  port: 9200

    10.10.0.224 es-data1    port: 9200

    10.10.0.186 es-data2    port: 9200

    10.10.0.188 es-data3    port: 9200
    
    

    II. Install Elasticsearch
    Add the elasticsearch yum repository on each of the servers above and install elasticsearch:

    vim /etc/yum.repos.d/CentOS6_7_Base_tset.repo
    
    [elasticsearch]
    name=Elasticsearch repository for 7.x packages
    baseurl=https://mirrors.tset.com/repository/elk
    gpgcheck=0
    enabled=1
    autorefresh=1
    type=rpm-md
    
    yum clean all && yum makecache
    yum install elasticsearch -y
    

    III. Preparation on all ES servers
    1. Back up the original configuration file

    cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.original
    

    2. Configure the heap size. Elasticsearch sets a 1 GB heap by default after installation. Edit the jvm.options file and set the heap to half of the server's memory, but no more than 32 GB.

    vim /etc/elasticsearch/jvm.options
    
    -Xms2g
    
    -Xmx2g
    
    The maximum heap you can allocate is the smaller of 32 GB and the memory of the ES host.
    

    3. Disable swapping

    In the Memory section of elasticsearch.yml, set:

    vim /etc/elasticsearch/elasticsearch.yml
    
    bootstrap.memory_lock: true
    
    The core reason: swapping memory to disk is fatal for server performance.
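
    Once the cluster is up (and the passwords from section V are set), you can confirm the lock actually took effect; this is a minimal check against one node, and every node should report "mlockall" : true:

    curl -u elastic:<password> "http://10.10.0.147:9200/_nodes?filter_path=**.mlockall&pretty"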
    

    4. Raise the file descriptor limit

    vim /etc/profile
    
    ulimit -n 65535
    
    Apply the change:
    
    source /etc/profile
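
    Note that an ulimit set in /etc/profile only applies to login shells; the systemd unit shipped with the elasticsearch RPM already raises LimitNOFILE for the service. A quick sanity check (sketch, assuming elasticsearch has already been started):

    # max open files seen by a new shell
    ulimit -n

    # max open files of the running elasticsearch process
    grep "open files" /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)/limits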
    

    5. Raise the maximum map count (vm.max_map_count)

    Elasticsearch uses a mix of NioFS (non-blocking file system) and MMapFS (memory-mapped file system) for its files. Make sure the configured maximum map count is high enough that there is plenty of virtual memory available for mmapped files. Set vm.max_map_count permanently in /etc/sysctl.conf.

    vim /etc/sysctl.conf
    
    vm.max_map_count=262144
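
    Apply the change without rebooting and confirm the value:

    sysctl -p

    sysctl vm.max_map_count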
    

    6. Enable start on boot

    vim /usr/lib/systemd/system/elasticsearch.service
    
    Add the following two lines at the end of the [Service] section:
    
    #tset custom settings about memory
    
    LimitMEMLOCK=infinity
    
    
    
    systemctl daemon-reload  && systemctl enable elasticsearch  &&  systemctl start elasticsearch
    

    7. Fix ownership of the path.data directories

    On master nodes:
    
    chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
    
    On data nodes:
    
    chown -R elasticsearch:elasticsearch /es_data/elasticsearch
    

    IV. Build the cluster

    1. Configuration file on 10.10.0.147
    cluster.name: my-elk
    
    node.name: es-master1
    
    node.attr.rack: elk
    
    node.master: true
    
    node.data: false
    
    path.data: /var/lib/elasticsearch
    
    path.logs: /var/log/elasticsearch
    
    bootstrap.memory_lock: true
    
    transport.tcp.compress: true
    
    network.host: 10.10.0.147
    
    http.port: 9200
    
    discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
    
    cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
    
    #gateway.recover_after_nodes: 2
    
    #gateway.expected_nodes: 2
    
    http.cors.enabled: true
    
    http.cors.allow-origin: "*"
    
    indices.memory.index_buffer_size: 40%
    
    thread_pool.write.size: 5
    
    thread_pool.write.queue_size: 1000
    
    xpack.security.enabled: true
    
    xpack.security.transport.ssl.enabled: true
    
    xpack.security.transport.ssl.verification_mode: certificate
    
    xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
    
    xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
    
    xpack:
    
      security:
    
        authc:
    
          realms:
    
            ldap:
    
              ldap1:
    
                order: 0
    
                url: "ldap://ldap.tset.com:389"
    
                bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
    
                user_search:
    
                  base_dn: "dc=chinatset,dc=com"
    
                  filter: "(cn={0})"
    
                group_search:
    
                  base_dn: "dc=chinatset,dc=com"
    
                files:
    
                  role_mapping: "/etc/elasticsearch/role_mapping.yml"
    
                unmapped_groups_as_roles: false
    
    Restart elasticsearch:
    
    systemctl restart elasticsearch
    

    2. Configuration file on 10.10.0.220

    cat /etc/elasticsearch/elasticsearch.yml
    
    
    
    cluster.name: my-elk
    
    node.name: es-master2
    
    node.attr.rack: elk
    
    node.master: true
    
    node.data: false
    
    path.data: /var/lib/elasticsearch
    
    path.logs: /var/log/elasticsearch
    
    bootstrap.memory_lock: true
    
    transport.tcp.compress: true
    
    network.host: 10.10.0.220
    
    http.port: 9200
    
    discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
    
    cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
    
    #gateway.recover_after_nodes: 2
    
    #gateway.expected_nodes: 2
    
    http.cors.enabled: true
    
    http.cors.allow-origin: "*"
    
    indices.memory.index_buffer_size: 40%
    
    thread_pool.write.size: 5
    
    thread_pool.write.queue_size: 1000
    
    xpack.security.enabled: true
    
    xpack.security.transport.ssl.enabled: true
    
    xpack.security.transport.ssl.verification_mode: certificate
    
    xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
    
    xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
    
    xpack:
    
     security:
    
       authc:
    
         realms:
    
           ldap:
    
             ldap1:
    
               order: 0
    
               url: "ldap://ldap.tset.com:389"
    
               bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
    
               user_search:
    
                 base_dn: "dc=chinatset,dc=com"
    
                 filter: "(cn={0})"
    
               group_search:
    
                 base_dn: "dc=chinatset,dc=com"
    
               files:
    
                 role_mapping: "/etc/elasticsearch/role_mapping.yml"
    
               unmapped_groups_as_roles: false
    
    Restart elasticsearch:
    
    systemctl restart elasticsearch
    

    3. Configuration file on 10.10.0.221

    cluster.name: my-elk
    
    node.name: es-master3
    
    node.attr.rack: elk
    
    node.master: true
    
    node.data: false
    
    path.data: /var/lib/elasticsearch
    
    path.logs: /var/log/elasticsearch
    
    bootstrap.memory_lock: true
    
    transport.tcp.compress: true
    
    network.host: 10.10.0.221
    
    http.port: 9200
    
    discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
    
    cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
    
    #gateway.recover_after_nodes: 2
    
    #gateway.expected_nodes: 2
    
    http.cors.enabled: true
    
    http.cors.allow-origin: "*"
    
    indices.memory.index_buffer_size: 40%
    
    thread_pool.write.size: 5
    
    thread_pool.write.queue_size: 1000
    
    xpack.security.enabled: true
    
    xpack.security.transport.ssl.enabled: true
    
    xpack.security.transport.ssl.verification_mode: certificate
    
    xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
    
    xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
    
    xpack:
    
      security:
    
        authc:
    
          realms:
    
            ldap:
    
              ldap1:
    
                order: 0
    
                url: "ldap://ldap.tset.com:389"
    
                bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
    
                user_search:
    
                  base_dn: "dc=chinatset,dc=com"
    
                  filter: "(cn={0})"
    
                group_search:
    
                  base_dn: "dc=chinatset,dc=com"
    
                files:
    
                  role_mapping: "/etc/elasticsearch/role_mapping.yml"
    
                unmapped_groups_as_roles: false
    

    4. Data node configuration

    Every data node uses the configuration below; only node.name and network.host need to be changed per node.

    Taking 10.10.0.224 as an example:
    
    cluster.name: my-elk
    
    node.name: es-data1
    
    node.attr.rack: elk
    
    node.master: false            # not a master node
    
    node.data: true               # data node
    
    path.data: /es_data/elasticsearch
    
    path.logs: /var/log/elasticsearch
    
    bootstrap.memory_lock: true
    
    transport.tcp.compress: true
    
    network.host: 10.10.0.224
    
    http.port: 9200
    
    discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
    
    cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
    
    gateway.recover_after_nodes: 2
    
    gateway.expected_nodes: 2
    
    http.cors.enabled: true
    
    http.cors.allow-origin: "*"
    
    indices.memory.index_buffer_size: 40%
    
    indices.breaker.total.limit: 80%
    
    indices.fielddata.cache.size: 10%
    
    indices.breaker.fielddata.limit: 60%
    
    indices.breaker.request.limit: 60%
    
    #xpack.security.enabled: true
    
    #xpack.security.transport.ssl.enabled: true
    
    #xpack.security.transport.ssl.verification_mode: certificate
    
    #xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
    
    #xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
    
    xpack:
    
      security:
    
        authc:
    
          realms:
    
            ldap:
    
              ldap1:
    
                order: 0
    
                url: "ldap://ldap.tset.com:389"
    
                bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
    
                user_search:
    
                  base_dn: "dc=chinatset,dc=com"
    
                  filter: "(cn={0})"
    
                group_search:
    
                  base_dn: "dc=chinatset,dc=com"
    
                files:
    
                  role_mapping: "/etc/elasticsearch/role_mapping.yml"
    
                unmapped_groups_as_roles: false
    
    
    
    After updating the configuration, restart es:
    
    systemctl restart elasticsearch
    

    You can check the cluster status in a browser (for example via the elasticsearch-head plugin) at:

    10.10.0.147:9200
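
    As an alternative to the browser plugin, cluster and node status can also be checked from the command line (a quick sketch; add -u elastic:<password> once the passwords are set in section V below):

    curl "http://10.10.0.147:9200/_cluster/health?pretty"

    curl "http://10.10.0.147:9200/_cat/nodes?v"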

    V. Configure TLS encryption and authentication
    1. Steps

    1.1 Generate the CA certificate
    
    cd /usr/share/elasticsearch    # executable path for a yum-based install
    
    bin/elasticsearch-certutil ca    # produces the CA certificate: elastic-stack-ca.p12
    
    
    
    1.2 Generate the node certificate
    
    bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12    # produces the node certificate: elastic-certificates.p12
    
    chmod 644 elastic-certificates.p12
    
    
    
    1.3 Copy the certificate to /etc/elasticsearch/
    
    bin/elasticsearch-certutil cert -out /etc/elasticsearch/elastic-certificates.p12 --pass
    
    
    

    2. Update the configuration file to enable certificate-based access
    The configuration files shown above already contain these settings, so no further change is needed here.

    Edit /etc/elasticsearch/elasticsearch.yml:
    
    xpack.security.enabled: true
    
    xpack.security.transport.ssl.enabled: true
    
    xpack.security.transport.ssl.verification_mode: certificate # certificate verification level
    
    xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
    
    xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
    
    
    

    3. Copy elastic-certificates.p12 to the other nodes and update their configuration

    scp elastic-certificates.p12 root@es-master2:/etc/elasticsearch
    
    scp elastic-certificates.p12 root@es-master3:/etc/elasticsearch
    
    
    
    scp elastic-certificates.p12 es-data1:/etc/elasticsearch
    
    scp elastic-certificates.p12 es-data2:/etc/elasticsearch
    
    scp elastic-certificates.p12 es-data3:/etc/elasticsearch
    

    4. After distributing the certificate, restart all nodes (rolling restart as needed)

    systemctl restart elasticsearch
    

    5. Set passwords
    Start all nodes. Once they are all up, go to the elasticsearch directory on the first node and run the following command to set the passwords:

    cd /usr/share/elasticsearch
    
    bin/elasticsearch-setup-passwords interactive
    
    
    
    # output
    
    Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
    
    You will be prompted to enter passwords as the process progresses.
    
    Please confirm that you would like to continue [y/N]y # enter y
    
    
    
    # type each password and repeat it to confirm; the account name is shown in brackets
    
    Enter password for [elastic]:
    
    Reenter password for [elastic]:
    
    Enter password for [apm_system]:
    
    Reenter password for [apm_system]:
    
    Enter password for [kibana]:
    
    Reenter password for [kibana]:
    
    Enter password for [logstash_system]:
    
    Reenter password for [logstash_system]:
    
    Enter password for [beats_system]:
    
    Reenter password for [beats_system]:
    
    Enter password for [remote_monitoring_user]:
    
    Reenter password for [remote_monitoring_user]:
    
    Changed password for user [apm_system]
    
    Changed password for user [kibana]
    
    Changed password for user [logstash_system]
    
    Changed password for user [beats_system]
    
    Changed password for user [remote_monitoring_user]
    
    Changed password for user [elastic]
    

    6. Notes

    1) All passwords were set to elk123456.

    Verify the cluster account and password:

    Open the address of any cluster node in a browser; if you are prompted for a username and password, the setup was successful.
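
    For example, the same check from the command line (elk123456 is the password chosen above): an unauthenticated request should return 401, and an authenticated one should succeed:

    curl -i "http://10.10.0.147:9200/"

    curl -u elastic:elk123456 "http://10.10.0.147:9200/_cluster/health?pretty"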
    
    
    
    2) Credentials for elasticsearch-head when accessing the es cluster

    Once security is enabled, the elasticsearch-head plugin can no longer browse the cluster; the browser console shows the error: 401 unauthorized.

    The fix is to add the following setting to elasticsearch.yml:
    
    http.cors.allow-headers: Authorization,content-type
    
    Apply this on every es node and restart; the es cluster can then be accessed again by appending the credentials to the URL.
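
    For example, elasticsearch-head accepts the credentials as URL parameters (the head host and port below are placeholders):

    http://<head-host>:9100/?auth_user=elastic&auth_password=elk123456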
    
    
    
    3) When Logstash pushes filtered data to es, it also needs to authenticate. Add the user and password for the es cluster to the output, in the following format:
    
    
    
    output {
    
    
    
      if [fields][log_source] == 'messages' {
    
        elasticsearch {
    
          hosts => ["http://192.168.x.x:9200", "http://192.168.x.x:9200","http://192.168.x.x:9200"]
    
          index => "messages-%{+YYYY.MM.dd}"
    
          user => "elastic"
    
          password => "elk123456"
    
        }
    
      }
    
    
    
      if [fields][log_source] == "secure" {
    
        elasticsearch {
    
          hosts => ["http://192.168.x.x:9200", "http://192.168.x.x:9200","http://192.168.x.x:9200"]
    
          index => "secure-%{+YYYY.MM.dd}"
    
          user => "elastic"
    
          password => "elk123456"
    
        }
    
      }
    
    
    
    }
    
    
    
    
    

    4) Kibana access to the Elasticsearch cluster with security enabled

    Add the following settings to kibana.yml:

    
    elasticsearch.username: "kibana"  # note: use the built-in kibana account for the Kibana-to-ES connection, not the elastic superuser
    
    elasticsearch.password: "elk123456"
    

    Then restart kibana; after that you must log in with the username and password above to access it.

    3. Kafka Cluster Deployment
    I. Environment

    OS: CentOS 7.6 64-bit

    JDK version: 1.8.0

    ZooKeeper version: v3.6.3

    Kafka version: v2.8.1

    Primary server: 10.10.0.181, port 9092

    Secondary server: 10.10.0.182, port 9092

    Secondary server: 10.10.0.183, port 9092

    1. Install the Java environment

    yum -y install java-1.8.0-openjdk-devel
    
    #check the version

    java -version
    
    
    
    openjdk version "1.8.0_332"
    
    OpenJDK Runtime Environment (build 1.8.0_332-b09)
    
    OpenJDK 64-Bit Server VM (build 25.332-b09, mixed mode)
    

    2. Install Zookeeper

    #create the target directory and extract the zookeeper package
    
    cd /data/nfs_share/zk-kafka
    
    mkdir /usr/local/zookeeper
    
    tar -xzvf apache-zookeeper-3.6.3-bin.tar.gz -C /usr/local/zookeeper
    

    Create the data and log directories:

    cd  /usr/local/zookeeper
    
    mkdir data
    
    mkdir logs
    
    #rename the directory, copy the sample configuration, and edit it
    
    mv apache-zookeeper-3.6.3-bin zookeeper-3.6.3
    
    cd zookeeper-3.6.3/conf/
    
    cp zoo_sample.cfg zoo.cfg
    
    #set the data and log directories in zoo.cfg:
    
    vim zoo.cfg
    
    #add or update
    
    dataDir=/usr/local/zookeeper/data
    
    dataLogDir=/usr/local/zookeeper/logs
    
    

    Because this is a cluster, each zookeeper node needs an id. Create a myid file in the data directory containing only that node's id.

    cd /usr/local/zookeeper/data
    
    
    
    #this id can be set to the last octet of the server ip
    
    vim  myid
    
    1
    
    #the three nodes are set to 1, 2, and 3, matching the server.N entries below
    
    
    
    After all three machines are configured, also add all cluster nodes to zoo.cfg under conf:
    
    cd /usr/local/zookeeper/zookeeper-3.6.3/conf
    
    vim zoo.cfg
    
    #add these entries on every zookeeper node
    
    server.1=10.10.0.181:2888:3888
    
    server.2=10.10.0.182:2888:3888
    
    server.3=10.10.0.183:2888:3888
    

    Configure zookeeper's environment variables so you do not have to go into zookeeper's bin directory every time you start it.

    vim /etc/profile
    
    #append the following to the end of the file (adjust the first path to your own install location; use pwd to check):
    
    # zk env
    
    export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-3.6.3
    
    export PATH=$ZOOKEEPER_HOME/bin:$PATH
    
    export PATH
    
    
    
    source /etc/profile
    
    Configure all three machines this way, then start zookeeper:
    
    zkServer.sh start
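
    Verify that the ensemble has formed; one node should report "leader" and the other two "follower":

    zkServer.sh status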
    

    3. Install Kafka

    Upload and extract the package:

    tar -xzvf kafka_2.13-2.8.1.tgz -C /usr/local/
    
    
    
    mv /usr/local/kafka_2.13-2.8.1 /usr/local/kafka2.8.1
    
    
    
    Configure the environment variables (the heredoc delimiter is quoted so that $PATH and $KAFKA_HOME are written literally and only expanded when the profile script is sourced):

    cat <<'EOF' > /etc/profile.d/kafka.sh
    
    export KAFKA_HOME=/usr/local/kafka2.8.1
    
    export PATH=$PATH:$KAFKA_HOME/bin
    
    EOF
    
    source /etc/profile.d/kafka.sh
    
    Edit the stop script:
    
    
    
    vim bin/kafka-server-stop.sh
    
    
    
    #kill -s $SIGNAL $PIDS
    
    #change to
    
    kill -9 $PIDS
    
    For monitoring, edit bin/kafka-server-start.sh and add JMX_PORT so more metrics can be collected.

    Make sure the chosen port is not already in use.
    
    if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    
        export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    
        export JMX_PORT="9198"
    
    fi
    
    Edit the configuration file. Each of the three nodes needs a different broker.id (1, 2, 3 can be used) and its own listeners address (the local ip).
    
    [root@zk-node1 kafka2.8.1]# grep -Ev "^#" /usr/local/kafka2.8.1/config/server.properties|grep -v "^$"
    
    #content after editing
    
    broker.id=1
    
    listeners=PLAINTEXT://10.10.0.181:9092
    
    num.network.threads=3
    
    num.io.threads=8
    
    socket.send.buffer.bytes=102400
    
    socket.receive.buffer.bytes=102400
    
    socket.request.max.bytes=104857600
    
    log.dirs=/usr/local/kafka2.8.1/logs
    
    num.partitions=1
    
    num.recovery.threads.per.data.dir=1
    
    offsets.topic.replication.factor=1
    
    transaction.state.log.replication.factor=1
    
    transaction.state.log.min.isr=1
    
    log.retention.hours=168
    
    log.segment.bytes=1073741824
    
    log.retention.check.interval.ms=300000
    
    zookeeper.connect=10.10.0.181:2181,10.10.0.182:2181,10.10.0.183:2181
    
    zookeeper.connection.timeout.ms=18000
    
    group.initial.rebalance.delay.ms=0
    
    The settings above are the baseline configuration.
    
    
    
    
    

    By default Kafka uses 1 partition and 1 replica, which leaves a single point of failure.
    Fix:

    Edit the kafka configuration file:

    vim /usr/local/kafka2.8.1/config/server.properties
    
    
    
    #change the following settings

    #num.partitions: default number of partitions, used when a topic is created without specifying one

    # default.replication.factor: default number of replicas
    
    num.partitions=3
    
    offsets.topic.replication.factor=3
    
    transaction.state.log.replication.factor=3
    
    transaction.state.log.min.isr=3
    
    default.replication.factor=3
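
    After the brokers have been restarted with these settings, an optional sanity check is to create a throwaway topic and confirm it gets 3 partitions and 3 replicas spread across the brokers (the topic name here is just an example):

    kafka-topics.sh --bootstrap-server 10.10.0.181:9092 --create --topic replication-test --partitions 3 --replication-factor 3

    kafka-topics.sh --bootstrap-server 10.10.0.181:9092 --describe --topic replication-test

    kafka-topics.sh --bootstrap-server 10.10.0.181:9092 --delete --topic replication-test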
    
    
    
    Create a start script (this is the kafka_start_my.sh referenced by the service file below):
    
    
    
    #!/bin/bash
    
    nohup /usr/local/kafka2.8.1/bin/kafka-server-start.sh /usr/local/kafka2.8.1/config/server.properties >> /usr/local/kafka2.8.1/nohup.out 2>&1 &
    
    Create a restart script:
    
    #!/bin/bash
    
    /usr/local/kafka2.8.1/bin/kafka-server-stop.sh
    
    nohup /usr/local/kafka2.8.1/bin/kafka-server-start.sh /usr/local/kafka2.8.1/config/server.properties >> /usr/local/kafka2.8.1/nohup.out 2>&1 &
    
    
    
    Write the systemd service file:
    
    [root@zk-node1 kafka2.8.1]# cat /usr/lib/systemd/system/kafka.service
    
    [Unit]
    
    Description=kafka
    
    After=network.target remote-fs.target nss-lookup.target zookeeper.service
    
    
    
    [Service]
    
    Type=forking
    
    Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/java/jdk1.8.0_161/bin"
    
    ExecStart=/usr/local/kafka2.8.1/kafka_start_my.sh -daemon /usr/local/kafka2.8.1/config/server.properties
    
    ExecReload=/bin/kill -s HUP $MAINPID
    
    ExecStop=/usr/local/kafka2.8.1/bin/kafka-server-stop.sh
    
    #PrivateTmp=true
    
    
    
    [Install]
    
    WantedBy=multi-user.target
    
    Start kafka and enable it on boot:
    
    systemctl restart kafka.service
    
    systemctl status kafka.service
    
    systemctl enable kafka.service
    

    Install the web UI (kowl) on 10.10.0.181

    Upload the image archive and load it into docker:

     docker load -i kowl.tar
    

    The web UI can be started on any server (4.162):

    docker run -d -p 19092:8080 -e KAFKA_BROKERS="10.10.0.181:9092,10.10.0.182:9092,10.10.0.183:9092" rsmnarts/kowl:latest

    # KAFKA_BROKERS: list of broker address:port pairs
    
    #to view the UI, open
    
    10.10.0.181:19092
    

    Log in to the web UI to check the kafka cluster status.

    4. Installing Kibana
    I. Environment
    OS: CentOS 7.6 64-bit

    Kibana version: 7.7.1

    Server: 10.10.0.16, port 5601

    II. Install Kibana
    Add the kibana yum repository:
    
    vim /etc/yum.repos.d/CentOS6_7_Base_tset.repo
    
    
    
    [kibana-7.x]
    name=Kibana repository for 7.x packages
    baseurl=https://mirrors.tset.com/repository/Kibana-CentOS/
    gpgcheck=0
    enabled=1
    
    yum clean all && yum makecache
    yum install kibana -y
    
    
    
    Back up the original configuration file:
    
    cp /etc/kibana/kibana.yml /etc/kibana/kibana.yml.original
    
    
    
    Edit the configuration file:
    
    [root@elk-kibana ~]# grep -Ev "#|^$" /etc/kibana/kibana.yml
    
    server.port: 5601
    
    server.host: "10.10.0.16"
    
    server.name: "10.10.0.16"
    
    elasticsearch.hosts: ["http://10.10.0.230:9200", "http://10.10.0.220:9200", "http://10.10.0.147:9200", "http://10.10.0.188:9200", "http://10.10.0.186:9200"]
    
    kibana.index: ".kibana"
    
    elasticsearch.username: "kibana"
    
    elasticsearch.password: "elk123456"
    
    logging.dest: /var/log/kibana/kibana.log
    
    logging.verbose: true
    
    i18n.locale: "zh-CN"

    Enable kibana on boot and start it:
    
    
    
    systemctl enable kibana
    systemctl start kibana
    
    
    
    
    
    Access kibana at:
    
    10.10.0.16:5601
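
    A quick command-line check of the service (a sketch; the status API may ask for the credentials configured above once security is enabled):

    curl -s -u elastic:elk123456 "http://10.10.0.16:5601/api/status"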
    
    
    
    5. Installing Logstash
    I. Environment
    OS: CentOS 7.6 64-bit

    Logstash version: 7.7.1

    JDK version: 1.8.0_202

    IP: 10.10.0.184

    II. Configure the JDK
    
    
    wget https://mirrors.tset.com/package/jdk/jdk-8u202-linux-x64_2019.tar.gz
    mkdir /usr/local/java
    tar -zxvf jdk-8u202-linux-x64_2019.tar.gz -C /usr/local/java/
    vim /etc/profile
    
    #Custom settings
    export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
    export JAVA_HOME=/usr/local/java/jdk1.8.0_202
    
    export PATH=$PATH:${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin
    export CLASSPATH=.:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar:${JAVA_HOME}/jre/lib
    
    source /etc/profile
    
    
    

    III. Install Logstash

    #add the logstash yum repository
    
    vim /etc/yum.repos.d/CentOS6_7_Base_tset.repo
    
    [logstash-7.x]
    name=Elastic repository for 7.x packages
    baseurl=https://mirrors.tset.com/repository/logstash-centos/
    gpgcheck=0
    enabled=1
    
    Install it:
    
    yum clean all && yum makecache && yum install logstash -y
    
    Create a new pipeline configuration file and define the input and output:
    
    cd /etc/logstash/conf.d/
    
    vim esb.conf
    
    input {
    
      kafka {
    
        bootstrap_servers => "10.10.0.181:9092,10.10.0.182:9092,10.10.0.183:9092"
    
        auto_offset_reset => "latest"
    
        consumer_threads => 5
    
        topics_pattern => ".*"
    
        decorate_events => true
    
        topics => "esb_log"
    
        codec => json
    
    #        charset => "UTF-8"
    
    
    
      }
    
    }
    
    
    
    
    
    output {
    
        if "esb_log" in [tags] {
    
            elasticsearch {
    
            hosts => "http://10.10.0.147:9200"
    
            index => "elg-log-%{[host][ip][0]}-%{+YYYY.MM}"
    
            user => "elastic"
    
            password => "elk123456"
    
                }
    
      }
    
    }
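
    Before starting, the pipeline syntax can optionally be validated (assuming the file above was saved as esb.conf):

    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/esb.conf --config.test_and_exit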
    
    Start logstash:

    cd /root/ && nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/esb.conf  --config.reload.automatic &
    
    Check the log:
    
    tail -f nohup.out
    
    6. Filebeat Deployment
      I. Environment
      OS: CentOS 7.6 64-bit

    10.10.0.128 (collects the esb_log source)

    II. Install filebeat

    #download the rpm package and install it
    
    wget https://mirrors.tset.com/package/ELK/filebeat-7.7.0-x86_64.rpm
    
    rpm -ivh filebeat-7.7.0-x86_64.rpm
    
    cp -a /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.original
    

    III. Edit the configuration file

    vim /etc/filebeat/filebeat.yml
    
    filebeat.inputs:
    
    - type: log
    
      enabled: true
    
      paths:
    
        - /data/esblog/*.log
    
      tags: ["esb_log"]
    
      fields:
    
        filebeat_tag: esb_log
    
      fields_under_root: true
    
    filebeat.config.modules:
    
      path: ${path.config}/modules.d/*.yml
    
      reload.enabled: false
    
    setup.template.settings:
    
      index.number_of_shards: 2
    
    setup.kibana:
    
      host: "10.10.0.16:5601"
    
    
    
    output.kafka:
    
      hosts: ["10.10.0.181:9092", "10.10.0.182:9092", "10.10.0.183:9092"]
    
      compression: gzip
    
      max_message_bytes: 100000000
    
      topic: esb_log
    
    processors:
    
      - add_host_metadata: ~
    
      - add_cloud_metadata: ~
    
      - add_docker_metadata: ~
    
      - add_kubernetes_metadata: ~
    
    
    
    
    
    Restart the service:
    
    systemctl restart filebeat
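
    Optionally, filebeat can validate its configuration and test connectivity to the kafka output:

    filebeat test config

    filebeat test output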
    

    7. Starting and Stopping the Services

    ES:

    systemctl start elasticsearch   # start

    systemctl stop elasticsearch    # stop
    
    
    
    Zookeeper:
    
    zkServer.sh start   # start

    zkServer.sh stop    # stop
    
    
    
    
    
    Kafka:
    
    systemctl start kafka.service   # start

    systemctl stop kafka.service    # stop
    
    
    
    Kibana:
    
    systemctl start kibana
    
    systemctl stop kibana
    
    
    
    Logstash:
    
    
    
    cd /root/ && nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/esb.conf  --config.reload.automatic &
    
    
    
    Filebeat:
    
    systemctl restart filebeat
    
