
Docker-ELK Cluster Setup

Author: 南城忆往 | Published 2020-11-19 22:49

    ELK Cluster Setup

    ELK is an acronym for three open-source projects:
    Elasticsearch, Logstash, and Kibana.

    • Elasticsearch is a search and analytics engine.
    • Logstash is a server-side data processing pipeline that can ingest data from multiple sources at once, transform it, and ship it to a store such as Elasticsearch.
    • Kibana lets users visualize the data in Elasticsearch with graphs and charts.
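
    A quick way to see this division of labour once the stack below is running (a sketch; it assumes the cluster is reachable on the host at localhost:9200, Kibana at port 5601, and uses an illustrative index name demo-logs):

    # Index a sample document into Elasticsearch
    curl -X POST "http://localhost:9200/demo-logs/_doc?pretty" \
      -H 'Content-Type: application/json' \
      -d '{"message": "hello elk", "level": "INFO"}'

    # Search it back; Kibana visualizes the same data at http://<host-IP>:5601
    curl "http://localhost:9200/demo-logs/_search?q=message:hello&pretty"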

    Cluster Setup

    # Docker version:
    Docker version 19.03.13, build 4484c46d9d
    

    Prepare the Images

    # Pull the images
    docker pull elasticsearch:7.7.0
    docker pull kibana:7.7.0
    docker pull logstash:7.7.0
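
    To confirm the pulls succeeded, you can list the local images (optional check):

    # Should show elasticsearch, kibana and logstash at tag 7.7.0
    docker images | grep -E 'elasticsearch|kibana|logstash'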
    

    Create the Container Mount Directories and Config Files

    The main deployment directory is /home/elasticsearch/v7.7/.

    # Main directory: /home/elasticsearch/v7.7/
    
    # Switch to the main directory
    cd /home/elasticsearch/v7.7/
    
    # Config files
    mkdir -p node-1/config
    mkdir -p node-2/config
    mkdir -p node-3/config
    
    # Data storage
    mkdir -p node-1/data
    mkdir -p node-2/data
    mkdir -p node-3/data
    
    # Log storage
    mkdir -p node-1/logs
    mkdir -p node-2/logs
    mkdir -p node-3/logs
    
    # Plugin management
    mkdir -p node-1/plugins
    mkdir -p node-2/plugins
    mkdir -p node-3/plugins
    
    # Open up the permissions
    chmod 777 /home/elasticsearch/v7.7/node-1/data
    chmod 777 /home/elasticsearch/v7.7/node-2/data
    chmod 777 /home/elasticsearch/v7.7/node-3/data
    
    chmod 777 /home/elasticsearch/v7.7/node-1/logs
    chmod 777 /home/elasticsearch/v7.7/node-2/logs
    chmod 777 /home/elasticsearch/v7.7/node-3/logs
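
    The same directories and permissions can also be created with a compact loop (equivalent shorthand for the commands above):

    # Create config/data/logs/plugins for all three nodes and open up data/logs
    for n in 1 2 3; do
      mkdir -p node-$n/{config,data,logs,plugins}
      chmod 777 node-$n/data node-$n/logs
    done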
    
    

    Writing the Elasticsearch Configuration Files

    We deploy three containers on a single physical machine to form the Elasticsearch cluster. A private Docker network is created with fixed IP addresses, so make sure each node's IP address and port settings are correct.

    ## Node 1 configuration:
    # File path: /home/elasticsearch/v7.7/node-1/config/elasticsearch.yml
    
    cluster.name: elk-v7
    node.name: node-1
    node.master: true
    node.data: true
    node.max_local_storage_nodes: 3
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/log
    bootstrap.memory_lock: true
    network.host: 10.10.10.11
    http.port: 9200
    transport.tcp.port: 9300
    discovery.seed_hosts: ["10.10.10.12:9300","10.10.10.13:9300"]
    cluster.initial_master_nodes: ["node-1","node-2","node-3"]
    
    ## Node 2 configuration:
    # File path: /home/elasticsearch/v7.7/node-2/config/elasticsearch.yml
    
    cluster.name: elk-v7
    node.name: node-2
    node.master: true
    node.data: true
    node.max_local_storage_nodes: 3
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/log
    bootstrap.memory_lock: true
    network.host: 10.10.10.12
    http.port: 9200
    transport.tcp.port: 9300
    discovery.seed_hosts: ["10.10.10.11:9300","10.10.10.13:9300"]
    cluster.initial_master_nodes: ["node-1","node-2","node-3"]
    
    ## Node 3 configuration:
    # File path: /home/elasticsearch/v7.7/node-3/config/elasticsearch.yml
    
    cluster.name: elk-v7
    node.name: node-3
    node.master: true
    node.data: true
    node.max_local_storage_nodes: 3
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/log
    bootstrap.memory_lock: true
    network.host: 10.10.10.13
    http.port: 9200
    transport.tcp.port: 9300
    discovery.seed_hosts: ["10.10.10.11:9300","10.10.10.12:9300"]
    cluster.initial_master_nodes: ["node-1","node-2","node-3"]
    
    

    Configuration parameter notes:

    cluster.name: cluster name
    
    node.name: name of this node
    node.master: true  # whether this node is eligible to be elected master
    node.data: true    # whether this node stores data
    
    node.max_local_storage_nodes: 3  # max number of nodes allowed to share the same data path on this host
    bootstrap.memory_lock: true # lock the process memory at startup to prevent swapping
    
    # Note: these two paths must be paths INSIDE the container, not host paths!
    path.data: /usr/share/elasticsearch/data # where data is stored
    path.logs: /usr/share/elasticsearch/log   # where logs are stored
    
    # Used together with network.publish_host; see the tip further below.
    network.host: 10.10.10.11 # the address this node binds to
    # The IP address other nodes use to reach this node. If unset it is determined automatically; the value must be a real, reachable IP.
    # With a Docker deployment the node's address will be the container IP configured above rather than the Docker gateway IP; set it to the physical machine's address if needed.
    # network.publish_host: 10.10.10.11
    
    http.port: 9200  # HTTP port
    transport.tcp.port: 9300  # transport port for node-to-node communication
    
    # Seed hosts used for discovery (the other nodes' transport addresses)
    discovery.seed_hosts: ["10.10.10.12:9300","10.10.10.13:9300"]
    # New in ES 7.x: the master-eligible nodes that may be elected master when the cluster first forms
    cluster.initial_master_nodes: ["node-1","node-2","node-3"]
    
    

    Note: you can also use the physical machine's IP address as the cluster address.
    Modify the following settings in each node's configuration file:

    network.host: 0.0.0.0
    network.publish_host: <physical machine IP> (e.g. 192.168.10.100)
    # Node-to-node transport port. Each node needs different ports because they all share the same IP address.
    http.port: <port> # must differ per node, e.g. 9200, 9201, 9202
    transport.tcp.port: <port> # must differ per node, e.g. 9300, 9301, 9302
    # The ports below must match what each node is configured with above. These are just examples; use your actual configuration.
    discovery.seed_hosts: ["192.168.10.100:9300","192.168.10.100:9301","192.168.10.100:9302"]
    # Copy the modified settings into each of the three nodes' configuration files
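
    With this variant each container must also publish its own HTTP and transport ports to the host. A sketch for node-2 under these assumptions (http.port 9201 and transport.tcp.port 9301 inside the container; the IP 192.168.10.100 is only an example):

    # Hypothetical run command for node-2 when using the physical machine's IP
    docker run -d --name es-node-2 \
      -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
      -p 9201:9201 \
      -p 9301:9301 \
      -v $PWD/node-2/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
      -v $PWD/node-2/data:/usr/share/elasticsearch/data \
      -v $PWD/node-2/logs:/usr/share/elasticsearch/logs \
      --privileged=true elasticsearch:7.7.0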
    

    Open the Ports (recommended)

    You could of course disable the firewall instead, but note that with the firewall disabled, creating the private network will fail.

    firewall-cmd --zone=public --add-port=9200/tcp --permanent
    firewall-cmd --zone=public --add-port=9201/tcp --permanent
    firewall-cmd --zone=public --add-port=9202/tcp --permanent
    firewall-cmd --zone=public --add-port=9300/tcp --permanent
    firewall-cmd --zone=public --add-port=9301/tcp --permanent
    firewall-cmd --zone=public --add-port=9302/tcp --permanent
    
    # Kibana port
    firewall-cmd --zone=public --add-port=5601/tcp --permanent
    
    # Reload the firewall rules so the ports take effect
    firewall-cmd --complete-reload
    
    # List the currently open ports
    firewall-cmd --zone=public --list-ports
    

    Create the Private Network

    # Create the private network:
    docker network create \
    --driver=bridge \
    --subnet=10.10.0.0/16 \
    --ip-range=10.10.10.0/24 \
    --gateway=10.10.10.254 \
    es-net
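    
    You can verify the network and its address range afterwards (optional check):
    
    # Show the subnet, IP range, gateway and any attached containers
    docker network inspect es-net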
    

    Start the Containers

    Run these from the main directory, otherwise you will get path errors.

    cd /home/elasticsearch/v7.7/
    
    • Start node 1
    docker run -d --name es-node-1 \
    --network=es-net \
    --ip=10.10.10.11 \
    -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
    -p 9200:9200 \
    -v $PWD/node-1/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v $PWD/node-1/plugins:/usr/share/elasticsearch/plugins \
    -v $PWD/node-1/data:/usr/share/elasticsearch/data \
    -v $PWD/node-1/logs:/usr/share/elasticsearch/logs \
    --privileged=true elasticsearch:7.7.0
    
    • Start node 2
    docker run -d --name es-node-2 \
    --network=es-net \
    --ip=10.10.10.12 \
    -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
    -p 9201:9200 \
    -v $PWD/node-2/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v $PWD/node-2/plugins:/usr/share/elasticsearch/plugins \
    -v $PWD/node-2/data:/usr/share/elasticsearch/data \
    -v $PWD/node-2/logs:/usr/share/elasticsearch/logs \
    --privileged=true elasticsearch:7.7.0
    
    • Start node 3
    docker run -d --name es-node-3 \
    --network=es-net \
    --ip=10.10.10.13 \
    -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
    -p 9202:9200 \
    -v $PWD/node-3/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v $PWD/node-3/plugins:/usr/share/elasticsearch/plugins \
    -v $PWD/node-3/data:/usr/share/elasticsearch/data \
    -v $PWD/node-3/logs:/usr/share/elasticsearch/logs \
    --privileged=true elasticsearch:7.7.0
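    
    Once all three containers are up (cluster formation can take a minute), you can check cluster health and membership from the host through the published port:
    
    # Cluster health should eventually report "status" : "green" with 3 nodes
    curl "http://localhost:9200/_cluster/health?pretty"
    
    # List the nodes that joined the cluster
    curl "http://localhost:9200/_cat/nodes?v"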
    

    Update 2020-12-14: when deploying on a new machine with a newer version, two errors were reported:

    ...
    An error occurred...
    ERROR: [2] bootstrap checks failed
    [1]: memory locking requested for elasticsearch process but memory is not locked
    [2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/log/elk-v7.log
    {"type": "server", "timestamp": "2020-12-14T09:05:19,181Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopping ..." }
    {"type": "server", "timestamp": "2020-12-14T09:05:19,199Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopped" }
    {"type": "server", "timestamp": "2020-12-14T09:05:19,199Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closing ..." }
    {"type": "server", "timestamp": "2020-12-14T09:05:19,216Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closed" }
    ...
    

    Fix: memory locking requested for elasticsearch process but memory is not locked

    The suggestions found online boil down to two approaches:

    • Approach 1
    # This applies to Linux distributions not managed by systemd; on CentOS 6 and earlier this alone is enough
    
    # Temporary fix, fine for testing
    ulimit -l unlimited
    
    # Permanent fix: edit /etc/security/limits.conf as root
    vim /etc/security/limits.conf
    # Add the following lines
    * soft memlock unlimited
    * hard memlock unlimited
    # PS:
    # The * means all users; replace it with a specific username if needed
    # Gotcha: any config files under /etc/security/limits.d
    # will override what you just changed, so make sure to remove them
    
    
    # Modify /etc/sysctl.conf
    echo "vm.swappiness=0" >> /etc/sysctl.conf
    
    # Takes effect after logging in again or rebooting the server
    # However, this did not solve my problem, so let's look at the other approach.
    
    • Approach 2
    Since we deploy with Docker, the approach above may not apply. Add the following configuration instead.
    # Apply globally:
    sudo vim /etc/systemd/system.conf
    # Add:
    DefaultLimitNOFILE=65536
    DefaultLimitNPROC=32000
    DefaultLimitMEMLOCK=infinity
    
    # Save and reboot.
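    
    A commonly used alternative with Docker is to set the memlock ulimit directly on the run command, so the limit applies only to the Elasticsearch containers. A sketch based on the node-1 command above:
    
    # Variant of the node-1 run command with an unlimited memlock ulimit
    docker run -d --name es-node-1 \
    --network=es-net --ip=10.10.10.11 \
    --ulimit memlock=-1:-1 \
    -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
    -p 9200:9200 \
    -v $PWD/node-1/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v $PWD/node-1/data:/usr/share/elasticsearch/data \
    -v $PWD/node-1/logs:/usr/share/elasticsearch/logs \
    --privileged=true elasticsearch:7.7.0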
    
    ERROR: [1] bootstrap checks failed
    [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/log/elk-v7.log
    {"type": "server", "timestamp": "2020-12-14T09:15:31,072Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopping ..." }
    {"type": "server", "timestamp": "2020-12-14T09:15:31,093Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopped" }
    {"type": "server", "timestamp": "2020-12-14T09:15:31,094Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closing ..." }
    {"type": "server", "timestamp": "2020-12-14T09:15:31,116Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closed" }
    {"type": "server", "timestamp": "2020-12-14T09:15:31,123Z", "level": "INFO", "component": "o.e.x.m.p.NativeController", "cluster.name": "elk-v7", "node.name": "node-1", "message": "Native controller process has stopped - no new native processes can be started" }
    [root@k8s-node-3 ~]#
    

    Fix: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

    # Add the following line at the end of /etc/sysctl.conf
    vm.max_map_count=262144
    # Apply it (takes effect without a reboot)
    sysctl -p
    # Then restart the containers
    

    Kibana Deployment

    Set up the Kibana mount directory

    mkdir -p /home/kibana/config
    

    Create the config file

    vim /home/kibana/config/kibana.yml
    

    Configuration

    # Kibana port
    server.port: 5601
    # Address Kibana binds to
    server.host: "0.0.0.0"
    # Display name of this Kibana instance
    server.name: "kibana"
    # Elasticsearch cluster addresses, i.e. all the cluster node IPs
    # (inside the es-net network every node listens on 9200)
    elasticsearch.hosts: ["http://10.10.10.11:9200","http://10.10.10.12:9200","http://10.10.10.13:9200"]
    # UI language: zh-CN for Chinese, en for English
    i18n.locale: "zh-CN"
    xpack.monitoring.ui.container.elasticsearch.enabled: true
    
    

    Start the Kibana Container

    docker run -d --name kibana \
    --network=es-net --ip=10.10.10.14 -p 5601:5601 \
    -v /home/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
    --privileged=true kibana:7.7.0
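    
    After the container starts you can confirm Kibana is up before opening the browser (optional check; the status API responds once Kibana has finished booting):
    
    # Returns JSON describing Kibana's overall status
    curl -s http://localhost:5601/api/status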
    

    Open the Kibana URL

    http://<host-IP>:5601
    

    Deploy Logstash

    Logstash is usually deployed as one instance per server, so scale it out as needed.

    Create the mount directory

    mkdir -p /home/logstash/
    

    Configuration files

    # Start a temporary container
    docker run -d --name logstash logstash:7.7.0
    # Copy logstash's config files out of the container
    docker cp logstash:/usr/share/logstash/config /home/logstash/
    # Remove the temporary container so the name can be reused below
    docker rm -f logstash
    # Files under config:
    ➜  config ls
    jvm.options          log4j2.properties    logstash-sample.conf logstash.yml         pipelines.yml        startup.options
    

    Modify the configuration

    # Edit logstash.yml
    http.host: "0.0.0.0"
    # Multiple Elasticsearch addresses can be listed here
    xpack.monitoring.elasticsearch.hosts: [ "http://10.10.10.11:9200" ]
    

    Create logstash.conf in the pipeline directory (/home/logstash/pipeline/)

    # Sample Logstash configuration for creating a simple
    # Beats -> Logstash -> Elasticsearch pipeline.
    
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://10.10.10.11:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
      }
    }
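    
    Before wiring up Beats you can sanity-check the pipeline file. A sketch, assuming the file was saved as /home/logstash/pipeline/logstash.conf (the image's entrypoint passes flag-style arguments straight through to Logstash):
    
    # Validate the pipeline configuration and exit (no events are processed)
    docker run --rm \
    -v /home/logstash/pipeline:/usr/share/logstash/pipeline \
    logstash:7.7.0 --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf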
    

    Start the container

    docker run -d --name logstash \
    --network=es-net --ip=10.10.10.15 \
    -v /home/logstash/config/:/usr/share/logstash/config/ \
    -v /home/logstash/pipeline:/usr/share/logstash/pipeline \
    -p 5044:5044 \
    -p 9600:9600 \
    --privileged=true logstash:7.7.0
    

    In the Kibana UI you can see that Logstash has joined the cluster.


    IK Analyzer Installation

    Download the IK analyzer

    https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.7.0/elasticsearch-analysis-ik-7.7.0.zip

    Place the downloaded file in the host's mapped directory /home/elasticsearch/v7.7/node-1/plugins.

    # In node-1's plugins directory, unzip the archive into an ik folder
    unzip elasticsearch-analysis-ik-7.7.0.zip -d ik
    
    # Restart the container
    docker restart es-node-1
    
    

    Do the same on the other nodes, or simply copy the ik folder to the other nodes and restart their containers (see the sketch below).
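
    A sketch of that copy-and-restart step, plus a check that the plugin was loaded (assumes the directory layout used above):

    # Copy the unpacked ik plugin from node-1 to the other nodes' plugin directories
    cd /home/elasticsearch/v7.7/
    cp -r node-1/plugins/ik node-2/plugins/
    cp -r node-1/plugins/ik node-3/plugins/
    docker restart es-node-2 es-node-3

    # Each node should now list the IK plugin
    curl "http://localhost:9200/_cat/plugins?v"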

    Verify it works

    ES default analyzer

    GET _analyze
    {
      "text": "共和国国歌"
    }
    # Result
    {
      "tokens" : [
        {
          "token" : "共",
          "start_offset" : 0,
          "end_offset" : 1,
          "type" : "<IDEOGRAPHIC>",
          "position" : 0
        },
        {
          "token" : "和",
          "start_offset" : 1,
          "end_offset" : 2,
          "type" : "<IDEOGRAPHIC>",
          "position" : 1
        },
        {
          "token" : "国",
          "start_offset" : 2,
          "end_offset" : 3,
          "type" : "<IDEOGRAPHIC>",
          "position" : 2
        },
        {
          "token" : "国",
          "start_offset" : 3,
          "end_offset" : 4,
          "type" : "<IDEOGRAPHIC>",
          "position" : 3
        },
        {
          "token" : "歌",
          "start_offset" : 4,
          "end_offset" : 5,
          "type" : "<IDEOGRAPHIC>",
          "position" : 4
        }
      ]
    }
    

    IK analyzer: ik_smart tokenization

    GET _analyze 
    { 
     "analyzer":"ik_smart", 
     "text":"中华人民共和国中央人民政府万岁"
    }
    
    # Result
    {
      "tokens" : [
        {
          "token" : "中华人民共和国",
          "start_offset" : 0,
          "end_offset" : 7,
          "type" : "CN_WORD",
          "position" : 0
        },
        {
          "token" : "中央人民政府",
          "start_offset" : 7,
          "end_offset" : 13,
          "type" : "CN_WORD",
          "position" : 1
        },
        {
          "token" : "万岁",
          "start_offset" : 13,
          "end_offset" : 15,
          "type" : "CN_WORD",
          "position" : 2
        }
      ]
    }
    
    

    IK analyzer: ik_max_word tokenization

    GET _analyze 
    { 
     "analyzer":"ik_max_word", 
     "text":"中华人民共和国中央人民政府万岁"
    }
    # Result
    {
      "tokens" : [
        {
          "token" : "中华人民共和国",
          "start_offset" : 0,
          "end_offset" : 7,
          "type" : "CN_WORD",
          "position" : 0
        },
        {
          "token" : "中华人民",
          "start_offset" : 0,
          "end_offset" : 4,
          "type" : "CN_WORD",
          "position" : 1
        },
        {
          "token" : "中华",
          "start_offset" : 0,
          "end_offset" : 2,
          "type" : "CN_WORD",
          "position" : 2
        },
        {
          "token" : "华人",
          "start_offset" : 1,
          "end_offset" : 3,
          "type" : "CN_WORD",
          "position" : 3
        },
        {
          "token" : "人民共和国",
          "start_offset" : 2,
          "end_offset" : 7,
          "type" : "CN_WORD",
          "position" : 4
        },
        {
          "token" : "人民",
          "start_offset" : 2,
          "end_offset" : 4,
          "type" : "CN_WORD",
          "position" : 5
        },
        {
          "token" : "共和国",
          "start_offset" : 4,
          "end_offset" : 7,
          "type" : "CN_WORD",
          "position" : 6
        },
        {
          "token" : "共和",
          "start_offset" : 4,
          "end_offset" : 6,
          "type" : "CN_WORD",
          "position" : 7
        },
        {
          "token" : "国中",
          "start_offset" : 6,
          "end_offset" : 8,
          "type" : "CN_WORD",
          "position" : 8
        },
        {
          "token" : "中央人民政府",
          "start_offset" : 7,
          "end_offset" : 13,
          "type" : "CN_WORD",
          "position" : 9
        },
        {
          "token" : "中央",
          "start_offset" : 7,
          "end_offset" : 9,
          "type" : "CN_WORD",
          "position" : 10
        },
        {
          "token" : "人民政府",
          "start_offset" : 9,
          "end_offset" : 13,
          "type" : "CN_WORD",
          "position" : 11
        },
        {
          "token" : "人民",
          "start_offset" : 9,
          "end_offset" : 11,
          "type" : "CN_WORD",
          "position" : 12
        },
        {
          "token" : "民政",
          "start_offset" : 10,
          "end_offset" : 12,
          "type" : "CN_WORD",
          "position" : 13
        },
        {
          "token" : "政府",
          "start_offset" : 11,
          "end_offset" : 13,
          "type" : "CN_WORD",
          "position" : 14
        },
        {
          "token" : "万岁",
          "start_offset" : 13,
          "end_offset" : 15,
          "type" : "CN_WORD",
          "position" : 15
        },
        {
          "token" : "万",
          "start_offset" : 13,
          "end_offset" : 14,
          "type" : "TYPE_CNUM",
          "position" : 16
        },
        {
          "token" : "岁",
          "start_offset" : 14,
          "end_offset" : 15,
          "type" : "COUNT",
          "position" : 17
        }
      ]
    }
    
