Learning ELK

Author: Megahorn | Published 2019-01-17 10:54

    Filebeat

    Reference

    installation: https://www.elastic.co/guide/en/beats/filebeat/6.5/filebeat-installation.html
    config: https://www.elastic.co/guide/en/logstash/6.5/advanced-pipeline.html#configuring-filebeat
    directory layout: https://www.elastic.co/guide/en/beats/filebeat/6.5/directory-layout.html

    working principle: https://www.jianshu.com/p/62fbde3f0a11
    comparison with Logstash: https://blog.csdn.net/u010871982/article/details/79035317

    filebeat settings(from log file to logstash):

    cd /app/filebeat
    cp filebeat.yml filebeat.yml_bak
    vi filebeat.yml

    filebeat.prospectors:
      - type: log
        paths:
          - /path/to/file/logstash-tutorial.log
    output.logstash:
      hosts: ["localhost:5044"]
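
    Optionally, sanity-check the config and the output connection before starting (assuming Filebeat 6.x, where the test subcommands are available):

    ./filebeat test config -c filebeat.yml   # validate filebeat.yml syntax
    ./filebeat test output -c filebeat.yml   # check the connection to localhost:5044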
    

    filebeat settings (example: multiple input files)

    #=========================== Filebeat inputs =============================
    filebeat.inputs:
    
    - type: log
      enabled: true
      paths:
        - /app/avcp-main/logs/info.log
      fields: {log_type: main_info}
      # keep enabled, paths and fields aligned at the same indentation level

    - type: log
      enabled: true
      paths:
        - /app/avcp-main/logs/error.log
      fields: {log_type: main_error}

    - type: log
      enabled: true
      paths:
        - /app/avcp-main/logs/up-info.log
      fields: {log_type: up_info}

    - type: log
      enabled: true
      paths:
        - /home/appdeploy/info1.log
      fields: {log_type: testdata}

    - type: log
      enabled: true
      paths:
        - /home/appdeploy/info2.log
      fields: {log_type: testdata}
    
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    #==================== Elasticsearch template setting ==========================
    setup.template.settings:
      index.number_of_shards: 3
    #================================ Outputs =====================================
    #----------------------------- Logstash output --------------------------------
    output.logstash:
      hosts: ["localhost:5044"]
    #================================ Processors =====================================
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
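
    Each input above attaches a custom field under fields.log_type; the event that reaches Logstash then looks roughly like the sketch below (other Beats metadata omitted), which is what the %{[fields][log_type]} reference in the Logstash output section relies on:

    {
      "@timestamp": "2019-03-27T02:19:37.885Z",
      "message": "2019-03-27 10:19:37.885 [http-nio-8080-exec-3] INFO ...",
      "fields": { "log_type": "main_info" },
      "beat": { "hostname": "your-host" }
    }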
    

    run filebeat

    cd /app/filebeat/
    ./filebeat -e -c filebeat.yml -d "publish"
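
    To keep Filebeat running after the shell session ends, one option (same paths assumed) is:

    cd /app/filebeat/
    nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &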


    Logstash

    Reference

    installation: download the tar file and unzip it to the /usr/share/logstash folder
    config:https://www.elastic.co/guide/en/logstash/6.5/advanced-pipeline.html#_configuring_logstash_for_filebeat_input
    config files:https://www.elastic.co/guide/en/logstash/6.5/config-setting-files.html
    directory layouts:https://www.elastic.co/guide/en/logstash/6.5/dir-layout.html

    grok debugger: http://grokdebug.herokuapp.com/?
    filter: https://blog.csdn.net/qq_34021712/article/details/79754356
    working principle: https://blog.csdn.net/u010739163/article/details/82022327

    logstash settings(from stdin to stdout):

    Run the following command without any config file:

    • cd /app/logstash
    • bin/logstash -e 'input { stdin { } } output { stdout {} }'
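
    Typing a test line such as hello world should echo an event back in roughly this shape (timestamp and host will differ, and exact formatting depends on the stdout codec):

    hello world
    {
          "@version" => "1",
              "host" => "localhost",
        "@timestamp" => 2019-01-17T02:54:00.000Z,
           "message" => "hello world"
    }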

    logstash settings(from filebeat to stdout):

    • cd /app/logstash/bin
    • vi test.conf
    input {
        beats {
            port => "5044"
        }
    }
    # The filter part of this file is commented out to indicate that it is
    # optional.
    # filter {
    #
    # }
    output {
        stdout { codec => rubydebug }
    }
    
    • cd /usr/share/logstash
    • /app/logstash/bin/logstash -f /app/logstash/bin/test.conf --config.reload.automatic
      OR
    • nohup /app/logstash/bin/logstash -f /app/logstash/bin/test.conf >/dev/null &
      # --config.reload.automatic reloads test.conf automatically, so edits take effect without restarting Logstash
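
    The pipeline file can also be syntax-checked without starting Logstash (same paths assumed):

    • /app/logstash/bin/logstash -f /app/logstash/bin/test.conf --config.test_and_exit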

    logstash settings(from filebeat to stdout, with filters):

    grok patterns for logstash:https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns

    • vi test.conf
    filter {
       grok {
           match => { "message" => "%{COMBINEDAPACHELOG}"}
       }
    }
    

    Customize your patterns by creating a patterns folder such as /usr/share/logstash/bin/patterns:
    mkdir patterns
    chmod 777 patterns
    cd patterns
    vi yourpattern

    ALL_CHAR [\s\S]*
    

    To match all characters, edit test.conf like this:

    filter {
       grok {
           patterns_dir => ["bin/patterns/"]  # a relative path from the directory Logstash is launched from
           match => { "message" => "%{ALL_CHAR}"}
       }
    }
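
    To check the custom pattern quickly without Filebeat, a minimal stdin pipeline can be used (a sketch; launch from /usr/share/logstash so the relative patterns_dir resolves):

    input { stdin { } }
    filter {
       grok {
           patterns_dir => ["bin/patterns/"]
           match => { "message" => "%{ALL_CHAR:raw}" }  # copies the whole line into a field named raw
       }
    }
    output { stdout { codec => rubydebug } }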
    

    logstash settings(from filebeat to elasticsearch):

    • vi test.conf
    output {
        elasticsearch {
            hosts => [ "localhost:9200" ]
            action => "index"
            codec => rubydebug
            index => "%{type}-%{+YYYY.MM.dd}"
            template_name => "%{type}"
        }
    }
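
    Once events are flowing, the resulting indices can be listed with (assuming Elasticsearch on localhost:9200):

    curl 'http://localhost:9200/_cat/indices?v'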
    

    logstash settings(from filebeat to es, examples):

    You can debug your patterns at http://grokdebug.herokuapp.com/?#
    vi test.conf

    input {
        beats {
            port => "5044"
        }
    }
    # The filter part below applies one grok rule per payload type and adds geoloc fields.
    filter {
            #All In Use
        #grok {
        #    match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*([\s\S]*小车上报日志[\s\S]*)\s*(payload:UgvData\()\s*(carRegisterStatus\=)(?<carRegisterStatus>((\d*)(\.\d*)?|0))[,\s]*(hardwareStatus\=)(?<hardwareStatus>((\d*)(\.\d*)?|0))[,\s]*(driveStatus\=)(?<driveStatus>((\d*)(\.\d*)?|0))[,\s]*(longitude\=)(?<longitude>((\d*)(\.\d*)?|0))[,\s]*(latitude\=)(?<latitude>((\d*)(\.\d*)?|0))[,\s]*(speed\=)(?<speed>([-]*(\d*)(\.\d*)?|0))[,\s]*(odometer\=)(?<odometer>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryPercentage\=)(?<batteryPercentage>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryVoltage\=)(?<batteryVoltage>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryCurrent\=)(?<batteryCurrent>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryAvgTemp\=)(?<batteryAvgTemp>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryHighTemp\=)(?<batteryHighTemp>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryLowTemp\=)(?<batteryLowTemp>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryHighVoltage\=)(?<batteryHighVoltage>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryLowVoltage\=)(?<batteryLowVoltage>([-]*(\d*)(\.\d*)?|0))[,\s]*(remoteTaskId\=)(?<remoteTaskId>([-]*(\d*)(\.\d*)?|0))[,\s]*(taskId\=)(?<taskId>([-]*(\d*)(\.\d*)?|0))[,\s]*(\))\s*"}
        #}
            #grok {
            #    match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*([\s\S]*小车上报日志[\s\S]*)\s*(payload:UgvData\()\s*(carRegisterStatus\=)(?<carRegisterStatus>((\d*)(\.\d*)?|0))[,\s]*(hardwareStatus\=)(?<hardwareStatus>((\d*)(\.\d*)?|0))[,\s]*(driveStatus\=)(?<driveStatus>((\d*)(\.\d*)?|0))[,\s]*(longitude\=)(?<longitude>((\d*)(\.\d*)?|0))[,\s]*(latitude\=)(?<latitude>((\d*)(\.\d*)?|0))[,\s]*(speed\=)(?<speed>([-]*(\d*)(\.\d*)?|0))[,\s]*(odometer\=)(?<odometer>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryPercentage\=)(?<batteryPercentage>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryVoltage\=)(?<batteryVoltage>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryCurrent\=)(?<batteryCurrent>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryAvgTemp\=)(?<batteryAvgTemp>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryHighTemp\=)(?<batteryHighTemp>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryLowTemp\=)(?<batteryLowTemp>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryHighVoltage\=)(?<batteryHighVoltage>([-]*(\d*)(\.\d*)?|0))[,\s]*(batteryLowVoltage\=)(?<batteryLowVoltage>([-]*(\d*)(\.[E|\d]*)?|0))[,\s]*(remoteTaskId\=)(?<remoteTaskId>([-]*(\d*)(\.\d*)?|0))[,\s]*(taskId\=)(?<taskId>([-]*(\d*)(\.\d*)?|0))[,\s]*(\))\s*"}
        #}
            #Group Battery info except BatteryPercentage
            #小车上报日志 (vehicle status report)
            grok {
                add_tag => ["geoloc"]
                match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:UgvData\()\s*carRegisterStatus=%{NUMBER:carRegisterStatus}, hardwareStatus=%{NUMBER:hardwareStatus}, driveStatus=%{NUMBER:driveStatus}, longitude=%{NUMBER:longitude}, latitude=%{NUMBER:latitude}, speed=%{NUMBER:speed}, odometer=%{NUMBER:odometer}, batteryPercentage=%{NUMBER:batteryPercentage}, [\s\S]*remoteTaskId=%{NUMBER:remoteTaskId}, taskId=%{NUMBER:taskId}"}
            }
            #小车上报心跳 (vehicle heartbeat report)
            grok {
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:HeartbeatData\()\s*status=%{NUMBER:status}\)"}
            }
            #开始行驶通知 (drive start notification)
            grok {
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:DriveStartData\()\s*remoteTaskId=%{NUMBER:remoteTaskId}, taskId=%{NUMBER:taskId}, taskType=%{NUMBER:taskType}, startTime=%{NUMBER:startTime}, startSpot=%{NUMBER:startSpot}, destination=%{NUMBER:destination}, odometer=%{NUMBER:odometer}\)"}
            }
            #行驶异常告警 (driving exception alarm)
            grok {
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:DriveExceptionData\()\s*carRegisterStatus=%{NUMBER:carRegisterStatus}, driveStatus=%{NUMBER:driveStatus}, exceptionLevel=%{NUMBER:exceptionLevel}\)"}
            }
            #接管操作指令 (take-over command)
            grok {
                add_tag => ["geoloc"]
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:TakeOverData\()\s*takeOverType=%{NUMBER:takeOverType}, remoteTaskId=%{NUMBER:remoteTaskId}, taskId=%{NUMBER:taskId}, carRegisterStatus=%{NUMBER:carRegisterStatus}, longitude=%{NUMBER:longitude}, latitude=%{NUMBER:latitude}\)"}
            }
            #设备诊断消息 (equipment diagnosis message)
            grok {
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:EquipmentDiagnosisData\()\s*diagnosisResult=%{NUMBER:diagnosisResult}, hardwareStatus=%{NUMBER:hardwareStatus}, electricInfo=%{NUMBER:electricInfo}, longitude=%{NUMBER:longitude}, latitude=%{NUMBER:latitude}\)"}
            }
            #任务结束通知 (task end notification)
            grok {
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:TaskOverData\()\s*overType=%{NUMBER:overType}, remoteTaskId=%{NUMBER:remoteTaskId}, taskId=%{NUMBER:taskId}, overReason=%{NUMBER:overReason}, overTime=%{NUMBER:overTime}, longitude=%{NUMBER:longitude}, latitude=%{NUMBER:latitude}, responseOperation=%{NUMBER:responseOperation}, odometer=%{NUMBER:odometer}\)"}
            }
            #上报规划路径 (planned path report)
            grok {
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:PlanningPathData\()\s*remoteTaskId=%{NUMBER:remoteTaskId}, taskId=%{NUMBER:taskId}, pointNum=%{NUMBER:pointNum}, planningPath=(?<planningPath>[\s\S]*)\)"}
            }
    
    
            if "geoloc" in [tags]
            {
                    mutate {
                                add_field => { "[geoloc][lon]" => "%{longitude}" }
                                add_field => { "[geoloc][lat]" => "%{latitude}" }
                    }
                    mutate {
                                convert => { "[geoloc][lon]" => "float" }
                                convert => { "[geoloc][lat]" => "float" }
                    }
            }
    
    }
    
    output {
        stdout { codec => rubydebug }
    }
    
    output {
        elasticsearch {
            hosts => [ "localhost:9200" ]
            action => "index"
            codec => rubydebug
            index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
            document_type => "log"
        }
    }
    

    logstash other config

    examples:https://www.elastic.co/guide/en/logstash/6.5/config-examples.html


    Elasticsearch

    Reference

    installation: download the gz file and unzip it into the /app/elasticsearch folder

    Config

    As a non-root user:
    vi /app/elasticsearch/config/elasticsearch.yml

    # any cluster name you define
    cluster.name: anyname_log_es
    # any node name you define
    node.name: node-1
    #es data path  
    path.data: /path/to/data
    #es log path
    path.logs: /path/to/logs
    #if run on centos refer to 'https://www.jianshu.com/p/89f8099a6d09'
    bootstrap.memory_lock: false
    #if run on centos refer to 'https://www.jianshu.com/p/89f8099a6d09'
    bootstrap.system_call_filter: false
    network.host: 0.0.0.0
    http.port: 9200
    

    Launch

    cd /app/elasticsearch/bin
    ./elasticsearch &
    or run the following command to start it as a daemon:
    ./elasticsearch -d
    If any error shows up, fix it. For example:

    #ex.
    [ERROR][o.e.b.Bootstrap          ] [kSH2rCN] node validation exception bootstrap checks failed
    max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    #resolve by
    su root
    vim /etc/sysctl.conf   # add the following line
    vm.max_map_count=262144
    sysctl -p
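
    Another bootstrap check that commonly fails is the file descriptor limit ("max file descriptors [4096] ... is too low, increase to at least [65536]"). A possible fix, assuming the Elasticsearch process runs as a user named esuser (re-login afterwards):

    su root
    vim /etc/security/limits.conf   # add the following lines; esuser is a placeholder for your ES user
    esuser soft nofile 65536
    esuser hard nofile 65536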
    

    Verify

    Run the following command:
    curl 'http://xx.xx.xx.xx:9200/?pretty'
    and confirm that JSON is returned.
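
    The response should look roughly like this (trimmed to the relevant fields; version and node name will differ):

    {
      "name" : "node-1",
      "cluster_name" : "anyname_log_es",
      "version" : { "number" : "6.5.4" },
      "tagline" : "You Know, for Search"
    }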

    Other references:
    https://www.oschina.net/translate/elasticsearch-getting-started
    Common pitfalls: https://www.jianshu.com/p/89f8099a6d09


    Kibana

    Reference

    installation: download the gz file and unzip it into the /app/kibana folder

    Config

    cd /app/kibana/config
    vi kibana.yml

    server.port: 8810
    server.host: "0.0.0.0"
    #(xx.xx.xx.xx is the ip of your elasticsearch server)
    elasticsearch.hosts: ["http://xx.xx.xx.xx:9200"]
    

    To use AutoNavi (高德地图) map tiles:
    vi kibana.yml

    tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
    

    Launch

    cd /app/kibana/bin
    ./kibana &
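
    To keep Kibana running after logout, a nohup variant works as well:

    nohup ./kibana >/dev/null 2>&1 &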

    Verify

    Check the server port and process with
    fuser -n tcp 8810 # server port of Kibana
    or open
    "http://yy.yy.yy.yy:8810" in your browser (yy.yy.yy.yy is the IP of your Kibana server).

    Searching on kibana

    timestamp filter example

    {
      "range": {
        "@timestamp": {
          "gte": "2019/03/13",
          "lt": "now",
          "format": "yyyy/MM/dd"
        }
      }
    }
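
    A variant with an explicit upper bound and time-of-day precision (same range filter, just a different format string):

    {
      "range": {
        "@timestamp": {
          "gte": "2019/03/13 00:00:00",
          "lte": "2019/03/13 23:59:59",
          "format": "yyyy/MM/dd HH:mm:ss"
        }
      }
    }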
    

    Visualization On kibana

    refer to:http://www.eryajf.net/2367.html

    Back to configuring Logstash rules:

    https://blog.csdn.net/napoay/article/details/62885899

    Using ELK : About log

    Refer:
    Introduction to MDC (MDC介绍) -- a log management practice for multi-threaded applications
    https://blog.csdn.net/sunzhenhua0608/article/details/29175283
    https://www.cnblogs.com/sealedbook/p/6227452.html

    Using ELK : About geo point(change latitude/longitude to type geo_point)

    https://discuss.elastic.co/t/create-geo-point-from-nested-coordinates/60835/10
    1. Update your logstash.conf:

    filter {
        grok {
            add_tag => ["geoloc"]
            match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:UgvData\()\s*carRegisterStatus=%{NUMBER:carRegisterStatus}, hardwareStatus=%{NUMBER:hardwareStatus}, driveStatus=%{NUMBER:driveStatus}, longitude=%{NUMBER:longitude}, latitude=%{NUMBER:latitude}, speed=%{NUMBER:speed}, odometer=%{NUMBER:odometer}, batteryPercentage=%{NUMBER:batteryPercentage}, [\s\S]*remoteTaskId=%{NUMBER:remoteTaskId}, taskId=%{NUMBER:taskId}"}
        }
        if "geoloc" in [tags]
        {
            mutate {
                add_field => { "[geoloc][lon]" => "%{longitude}" }
                add_field => { "[geoloc][lat]" => "%{latitude}" }
            }
            mutate {
                convert => { "[geoloc][lon]" => "float" }
                convert => { "[geoloc][lat]" => "float" }
            }
        }
    }
    

    2. In Kibana, go to Dev Tools > Console:

    PUT _template/logstash
    {
      "template": "up_info-*",
      "mappings": {
         "_default_" : { 
           "properties" : { 
             "geoloc": { "type": "geo_point" },
             "batteryPercentage":{ "type": "float" },
             "odometer":{ "type": "float" }
           }
         }
      }
    }
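
    The same template can also be created with curl instead of the Console (assuming Elasticsearch on localhost:9200):

    curl -XPUT 'http://localhost:9200/_template/logstash' -H 'Content-Type: application/json' -d '
    {
      "template": "up_info-*",
      "mappings": {
        "_default_": {
          "properties": {
            "geoloc": { "type": "geo_point" },
            "batteryPercentage": { "type": "float" },
            "odometer": { "type": "float" }
          }
        }
      }
    }'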
    

    replace "template": "up_info-*", to your own index name (on es)
    replace mappings as your location path
    3.create index for Kibana


    Sample results

    Logstash

    Source log

    2019-03-27 10:19:37.885 [http-nio-8080-exec-3] INFO UP_LOGGER - 小车上报日志,payload:UgvData(carRegisterStatus=3, hardwareStatus=0, driveStatus=2, longitude=113.96702, latitude=22.580927, speed=1.4961835, odometer=451.7, batteryPercentage=98, batteryVoltage=0.0, batteryCurrent=0.0, batteryAvgTemp=0.0, batteryHighTemp=0.0, batteryLowTemp=0.0, batteryHighVoltage=0.0, batteryLowVoltage=3.319, remoteTaskId=1553653174, taskId=9154)
    

    Grok rule

    \s*%{TIMESTAMP_ISO8601:time}\s*\[([\s\S]*)\]\s*%{LOGLEVEL:level}\s*(?<msgTag>[\s\S]*)\s*(payload:UgvData\()\s*carRegisterStatus=%{NUMBER:carRegisterStatus}, hardwareStatus=%{NUMBER:hardwareStatus}, driveStatus=%{NUMBER:driveStatus}, longitude=%{NUMBER:longitude}, latitude=%{NUMBER:latitude}, speed=%{NUMBER:speed}, odometer=%{NUMBER:odometer}, batteryPercentage=%{NUMBER:batteryPercentage}, [\s\S]*remoteTaskId=%{NUMBER:remoteTaskId}, taskId=%{NUMBER:taskId}
    

    Parsed result

    { 
    "odometer": "451.7",  
    "level": "INFO",  
    "latitude": "22.580927",  
    "hardwareStatus": "0",  
    "speed": "1.4961835",  
    "remoteTaskId": "1553653174",  
    "driveStatus": "2",  
    "carRegisterStatus": "3",  
    "batteryPercentage": "98",  
    "time": "2019-03-27 10:19:37.885",  
    "msgTag": "UP_LOGGER - 小车上报日志,",  
    "taskId": "9154",  
    "longitude": "113.96702"
    }
    

    Kibana

    [Kibana screenshots]
