
ELK Setup: The Complete Guide

Author: 长腿小西瓜 | Published 2017-11-15 20:39

    Logstash

    Reference

    A Logstash pipeline has two required plugin types, input and output, plus an optional one: filters.
    The input plugins consume data from a source, the filters modify the data as you specify, and the output plugins write the data to a destination such as Elasticsearch, as shown in the diagram below.

    (Figure: Logstash pipeline structure)
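    As a minimal sketch of that structure (the plugins chosen here are only illustrative), a pipeline configuration file contains the same three sections:

    input {
        stdin { }
    }
    filter {
        # optional: transform events here, e.g. with grok or mutate
    }
    output {
        stdout { codec => rubydebug }
    }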

    Installation

    ➜  wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.zip
    ➜  unzip logstash-6.0.0.zip
    

    Basic test command

    ➜  cd logstash-6.0.0
    ➜  bin/logstash -e 'input { stdin { } } output { stdout {} }'
    

    Startup output (the sample below was captured on a 5.6.4 install; 6.0.0 prints similar messages):

    ➜  logstash-5.6.4 bin/logstash -e 'input { stdin { } } output { stdout {} }'
    Sending Logstash's logs to /Users/zhangjh/work/software/logstash-5.6.4/logs which is now configured via log4j2.properties
    [2017-11-15T10:51:34,511][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/Users/zhangjh/work/software/logstash-5.6.4/modules/netflow/configuration"}
    [2017-11-15T10:51:34,515][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/Users/zhangjh/work/software/logstash-5.6.4/modules/fb_apache/configuration"}
    [2017-11-15T10:51:34,682][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
    [2017-11-15T10:51:34,696][INFO ][logstash.pipeline        ] Pipeline main started
    The stdin plugin is now waiting for input:
    [2017-11-15T10:51:34,764][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    
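    The last line shows Logstash's monitoring API listening on port 9600; from another terminal you can confirm the process is up by querying it (a quick optional check, not required for the tutorial):

    ➜ curl 'http://localhost:9600/?pretty'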

    Type hello world at the prompt:

    hello world
    2017-11-15T03:08:08.909Z localhost hello world
    

    Logstash echoes back the timestamp and the host, followed by the message hello world.

    Configuring Logstash

    Create first-pipeline.conf in the config directory with the following content:

    input {
        beats {
            port => "5043"
        }
    }
    filter {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}"}
        }
        geoip {
            source => "clientip"
        }
    }
    output {
        stdout { codec => rubydebug }
        elasticsearch {
            hosts => [ "localhost:9200" ]
        }
    }
    
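    Before starting the pipeline, it can help to let Logstash validate the file; the --config.test_and_exit flag parses the configuration and reports any errors without starting a pipeline:

    ➜ bin/logstash -f config/first-pipeline.conf --config.test_and_exit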

    Run Logstash

    ➜ bin/logstash -f config/first-pipeline.conf --config.reload.automatic

    Filebeat

    Official site

    Install Filebeat

    ➜ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-darwin-x86_64.tar.gz

    ➜ tar -zxvf filebeat-6.0.0-darwin-x86_64.tar.gz

    Download the test file

    Download the test log file:

    ➜ wget https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz

    Unzip logstash-tutorial.log.gz into a directory that Filebeat, once configured below, will watch.

    For example:
    /Users/zhangjh/work/project/elk/data/logstash-tutorial.log
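    A minimal way to do that, assuming the same directory layout as this walkthrough (adjust the paths to your machine):

    ➜ mkdir -p /Users/zhangjh/work/project/elk/data
    ➜ gunzip logstash-tutorial.log.gz
    ➜ mv logstash-tutorial.log /Users/zhangjh/work/project/elk/data/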

    Configure Filebeat

    ➜ cd filebeat-6.0.0-darwin-x86_64

    filebeat.yml must be owned by root, so change its ownership:

    ➜ su - root

    ➜ cd /Users/zhangjh/work/software/filebeat-6.0.0-darwin-x86_64

    ➜ chown root filebeat.yml

    ➜ vi filebeat.yml

    Make two changes:

    1. Set the path of the log file to harvest (this is the input source)
    #=========================== Filebeat prospectors =============================
    
    filebeat.prospectors:
    - type: log
    
      # Change to true to enable this prospector configuration.
      enabled: true
    
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /Users/zhangjh/work/project/elk/data/logstash-tutorial.log
    

    Note: enabled must be changed to true, and paths must point to the log file.

    2. Send the output to Logstash
    #----------------------------- Logstash output --------------------------------
    output.logstash:
      # The Logstash hosts
      hosts: ["127.0.0.1:5043"]
      worker: 2
      loadbalance: true
      index: filebeat
    
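    Before starting Filebeat, you can sanity-check the file and the connection to Logstash with Filebeat's test subcommands (available in 6.x; Logstash must already be running for the output test to succeed):

    ➜ sudo ./filebeat test config -c filebeat.yml
    ➜ sudo ./filebeat test output -c filebeat.yml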

    Run Filebeat

    Start it in the background; to stop it later, find the process id and kill it:

    ➜ sudo ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &

    ➜ ps -ef | grep filebeat

    ➜ kill -9 pid

    Parsing logs with Logstash

    The example above only verified the Logstash installation; real-world pipelines are more involved. A pipeline can contain multiple inputs and outputs as well as filters. Here Filebeat feeds Apache web logs in as input, Logstash parses each line into named fields, and the resulting data is written to an Elasticsearch cluster.

    Start Logstash

    ➜ bin/logstash -f config/first-pipeline.conf --config.reload.automatic

    Leave this window open after it starts.

    Append to the log file

    Open another terminal and append a line to the log file:

    ➜ echo "zhangjh" >> /Users/zhangjh/work/project/elk/data/logstash-tutorial.log

    Watch the Logstash window; output like the following shows that the file is being picked up:

    {
        "@timestamp" => 2017-11-15T13:08:30.850Z,
            "offset" => 24582,
          "@version" => "1",
              "beat" => {
                "name" => "localhost",
            "hostname" => "localhost",
             "version" => "6.0.0"
        },
              "host" => "localhost",
        "prospector" => {
            "type" => "log"
        },
            "source" => "/Users/zhangjh/work/project/elk/data/logstash-tutorial.log",
           "message" => "zhangjh",
              "tags" => [
            [0] "beats_input_codec_plain_applied",
            [1] "_grokparsefailure",
            [2] "_geoip_lookup_failure"
        ]
    }
    
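    Note the _grokparsefailure and _geoip_lookup_failure tags: the appended line "zhangjh" is not an Apache log line, so the %{COMBINEDAPACHELOG} grok pattern cannot parse it and geoip has no clientip field to look up. Appending a line in Apache combined format instead (the line below is only an illustrative sample) should yield an event with named fields such as clientip, verb, request, response and bytes, and no failure tags:

    ➜ echo '83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] "GET /style.css HTTP/1.1" 200 2690 "-" "Mozilla/5.0"' >> /Users/zhangjh/work/project/elk/data/logstash-tutorial.log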

    Elasticsearch

    Installation
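    By analogy with the Logstash install above, Elasticsearch can be downloaded and unpacked the same way (the URL below is assumed from the same artifacts.elastic.co pattern; verify it against the official download page):

    ➜ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.zip
    ➜ unzip elasticsearch-6.0.0.zip
    ➜ cd elasticsearch-6.0.0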

    Run Elasticsearch

    ➜ bin/elasticsearch
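    Once Elasticsearch is up, you can confirm that events from the pipeline are being indexed by listing the indices (the exact index name Logstash creates depends on the version, typically logstash-YYYY.MM.DD):

    ➜ curl 'http://localhost:9200/_cat/indices?v'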
