
[ELK] elasticsearch + kibana + f

Author: 咦咦咦萨 | Published 2020-09-08 15:51

    This article is based on version 7.9.

    1. Setting up an Elasticsearch cluster with Docker

    https://www.elastic.co/guide/en/elasticsearch/reference/7.9/docker.html

    1.1 Machine allocation

    • 192.168.0.71
    • 192.168.0.72
    • 192.168.0.73

    1.2 Directory layout

    ~/elasticsearch
       - node
          - elasticsearch.yml
          - data
          - log
          - start.sh
          - stop.sh
    
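    The layout above can be created in one go; a minimal sketch (the base path is illustrative — substitute your own home directory). To my understanding, the official image runs Elasticsearch as uid 1000, so the bind-mounted data/log directories must be writable by that user:

```shell
# Sketch: create the per-node layout from section 1.2 (base path is illustrative).
BASE="$HOME/elasticsearch"
for node in node1 node2 node3; do
  mkdir -p "$BASE/$node/data" "$BASE/$node/log"
  touch "$BASE/$node/elasticsearch.yml" "$BASE/$node/start.sh" "$BASE/$node/stop.sh"
done
# The official image runs Elasticsearch as uid 1000, so the bind-mounted
# data/log directories must be writable by that user (e.g. chown -R 1000:1000).
```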

    1.3 Startup scripts

    # node1
    
    docker run -e ES_JAVA_OPTS="-Xms6g -Xmx6g" -d \
    -p 9200:9200 -p 9300:9300 \
    --restart=always \
    -v /yourhome/elasticsearch/node1/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v /yourhome/elasticsearch/node1/data:/var/lib/elasticsearch \
    -v /yourhome/elasticsearch/node1/log:/var/log/elasticsearch \
    --name ES01 elasticsearch:7.9.0
    
    
    # node2
    
    docker run -e ES_JAVA_OPTS="-Xms6g -Xmx6g" -d \
    -p 9200:9200 -p 9300:9300 \
    --restart=always \
    -v /yourhome/elasticsearch/node2/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v /yourhome/elasticsearch/node2/data:/var/lib/elasticsearch \
    -v /yourhome/elasticsearch/node2/log:/var/log/elasticsearch \
    --name ES02 elasticsearch:7.9.0
    
    
    # node3
    
    docker run -e ES_JAVA_OPTS="-Xms6g -Xmx6g" -d \
    -p 9200:9200 -p 9300:9300 \
    --restart=always \
    -v /yourhome/elasticsearch/node3/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v /yourhome/elasticsearch/node3/data:/var/lib/elasticsearch \
    -v /yourhome/elasticsearch/node3/log:/var/log/elasticsearch \
    --name ES03 elasticsearch:7.9.0
    
    

    1.4 Configuration file

    elasticsearch.yml

    cluster.name: elasticsearch-cluster   # cluster name, identical on every node
    node.name: es_node1                   # node name, must be unique per machine
    path.data: /var/lib/elasticsearch     # index/data directory (multiple paths can be configured)
    path.logs: /var/log/elasticsearch     # log directory
    network.host: 0.0.0.0                 # bind address; use the machine's real IP, or 0.0.0.0 to bind all interfaces
    network.publish_host: 192.168.0.71    # address other nodes use to reach this one; set per machine
    http.port: 9200                       # HTTP port, default 9200
    transport.port: 9300                  # TCP transport port, default 9300
    discovery.seed_hosts: ["192.168.0.71:9300","192.168.0.72:9300","192.168.0.73:9300"] # addresses of all cluster nodes
    cluster.initial_master_nodes: ["es_node3"] # nodes eligible to be elected master when bootstrapping a brand-new cluster
    http.cors.enabled: true     # enable cross-origin access
    http.cors.allow-origin: "*" # allowed origins for CORS; * means no restriction
    
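    Since the three nodes' files differ only in node.name and network.publish_host, they can be generated from one template; a sketch (keys copied from the config above; the template and output file names are illustrative):

```shell
# Sketch: generate the per-node elasticsearch.yml from a common template.
# Only node.name and network.publish_host differ across the three machines.
cat > es-template.yml <<'EOF'
cluster.name: elasticsearch-cluster
node.name: __NODE_NAME__
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
network.publish_host: __PUBLISH_HOST__
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["192.168.0.71:9300","192.168.0.72:9300","192.168.0.73:9300"]
cluster.initial_master_nodes: ["es_node3"]
EOF

i=0
for ip in 192.168.0.71 192.168.0.72 192.168.0.73; do
  i=$((i + 1))
  sed -e "s/__NODE_NAME__/es_node$i/" -e "s/__PUBLISH_HOST__/$ip/" \
      es-template.yml > "elasticsearch-node$i.yml"
done
```

    Each generated file is then copied to the matching machine as its elasticsearch.yml.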

    1.5 Verification

    After starting the three nodes in turn, visit http://192.168.0.71:9200/_cat; the output should look like this:

    =^.^=
    /_cat/allocation
    /_cat/shards
    /_cat/shards/{index}
    /_cat/master
    /_cat/nodes
    /_cat/tasks
    /_cat/indices
    /_cat/indices/{index}
    /_cat/segments
    /_cat/segments/{index}
    /_cat/count
    /_cat/count/{index}
    /_cat/recovery
    /_cat/recovery/{index}
    /_cat/health
    /_cat/pending_tasks
    /_cat/aliases
    /_cat/aliases/{alias}
    /_cat/thread_pool
    /_cat/thread_pool/{thread_pools}
    /_cat/plugins
    /_cat/fielddata
    /_cat/fielddata/{fields}
    /_cat/nodeattrs
    /_cat/repositories
    /_cat/snapshots/{repository}
    /_cat/templates
    /_cat/ml/anomaly_detectors
    /_cat/ml/anomaly_detectors/{job_id}
    /_cat/ml/trained_models
    /_cat/ml/trained_models/{model_id}
    /_cat/ml/datafeeds
    /_cat/ml/datafeeds/{datafeed_id}
    /_cat/ml/data_frame/analytics
    /_cat/ml/data_frame/analytics/{id}
    /_cat/transforms
    /_cat/transforms/{transform_id}
    
    

    To check the cluster membership, visit http://192.168.0.71:9200/_cat/nodes
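    The same checks work from the command line; appending ?v adds column headers. The sample line below is a hypothetical _cat/health response used only to show where the status column sits:

```shell
# Query the cluster from any node (?v adds column headers):
#   curl http://192.168.0.71:9200/_cat/nodes?v
#   curl http://192.168.0.71:9200/_cat/health?v
# A hypothetical one-line _cat/health response (illustrative values):
sample='1599546000 15:51:00 elasticsearch-cluster green 3 3 10 5 0 0 0 0 - 100.0%'
# Field 4 is the cluster status; anything other than "green" deserves a look.
status=$(echo "$sample" | awk '{print $4}')
echo "cluster status: $status"
```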

    2. Installing Kibana with Docker

    https://www.elastic.co/guide/en/kibana/current/docker.html

    2.1 Directory layout

    ~/kibana
       - conf
          - kibana.yml
       - start.sh
    

    2.2 Startup script

    start.sh

    docker run -d -p 5601:5601 \
    -v /yourhome/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml  \
    --name kibana kibana:7.9.0
    

    2.3 Configuration file

    conf/kibana.yml

    server.host: "0.0.0.0"
    
    elasticsearch.hosts: ["http://192.168.0.71:9200","http://192.168.0.72:9200", "http://192.168.0.73:9200"]
    
    i18n.locale: "zh-CN"
    
    

    2.4 Verification

    Run the startup script, then visit http://127.0.0.1:5601
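    Kibana also exposes a status API at /api/status, which is handy for scripted checks. The JSON below is a trimmed, hypothetical response body — real responses carry far more detail — used here only to show extracting the overall state:

```shell
# Quick health probe once the container is up:
#   curl -s http://127.0.0.1:5601/api/status
# Trimmed, hypothetical response body:
sample='{"status":{"overall":{"state":"green"}}}'
# Extract the overall state with basic tools (sketch):
state=$(echo "$sample" | grep -o '"state":"[a-z]*"' | head -1 | cut -d'"' -f4)
echo "kibana overall state: $state"
```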


    3. Installing and using Filebeat

    https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-starting.html

    3.1 Download Filebeat

    Linux is used as the example here; see the official documentation for other platforms.

    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.1-linux-x86_64.tar.gz
    tar xzvf filebeat-7.9.1-linux-x86_64.tar.gz
    
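    Elastic also publishes a .sha512 checksum alongside each artifact, so the download can be verified before unpacking. A sketch — the curl/verify commands are shown as comments, and the local demo lines illustrate how the -c flag works on any file:

```shell
# Verify the download against the published checksum (same URL plus .sha512):
#   curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.1-linux-x86_64.tar.gz.sha512
#   sha512sum -c filebeat-7.9.1-linux-x86_64.tar.gz.sha512
# How -c works, demonstrated on a local file:
echo 'demo contents' > demo.txt
sha512sum demo.txt > demo.txt.sha512
sha512sum -c demo.txt.sha512   # prints "demo.txt: OK" on success
```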

    3.2 Configuration

    Adjust filebeat.yml to your actual needs; the key part is the inputs section, for example:

    filebeat.inputs:
    - type: log
      # must be true to activate this input
      enabled: true
      # log file paths to collect
      paths:
        - /your/home/apps/logs/*.log
      # extra custom fields, useful for filtering in ES
      fields:
        app: myapp
      # store the custom fields at the root of the document
      fields_under_root: true
      # multiline handling: a line starting with 'yyyy-mm-dd' begins a new event
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
    
    

    The multiline settings are the crux of this configuration; for details, see the official configuration examples.
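    It is worth sanity-checking the "line starts with yyyy-mm-dd" pattern against sample log lines before deploying; lines matching it start a new event, and non-matching lines (such as stack-trace continuations) are appended to the previous one. A quick sketch with grep (the log lines are made up):

```shell
# Count how many sample lines would start a new event under the multiline pattern.
pattern='^[0-9]{4}-[0-9]{2}-[0-9]{2}'
printf '%s\n' \
  '2020-09-08 15:51:00 ERROR something failed' \
  '    at com.example.Foo.bar(Foo.java:42)' \
  '2020-09-08 15:51:01 INFO recovered' \
| grep -cE "$pattern"    # prints 2: the stack-trace line joins the first event
```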

    3.3 Starting

    sudo chown root filebeat.yml 
    sudo chown root modules.d/system.yml 
    sudo ./filebeat -e
    

    Filebeat is started as root here, which requires the config files to be root-owned; if you'd rather skip the strict permission checks, add the --strict.perms=false flag:

     ./filebeat -e --strict.perms=false
    

    Source: https://www.haomeiwen.com/subject/lkadektx.html