ELK

Author: XiaoMing丶 | Published 2019-03-21 02:58

    Contents

    1. Introduction to ELK
    2. Installation prerequisites
    3. Installing Elasticsearch
    4. Configuring Elasticsearch
    5. Querying Elasticsearch with curl
    6. Installing Kibana
    7. Installing Logstash
    8. Configuring Logstash
    9. Viewing logs in Kibana
    10. Collecting nginx logs
    11. Collecting logs with Beats
    12. Extras

    1. Introduction to ELK

    Background
    The business keeps growing, and so does the number of servers.
    Access logs, application logs, and error logs pile up accordingly.
    Developers have to log in to each server to dig through logs when troubleshooting, which is inconvenient.
    The operations team gets asked for data that requires us to analyze logs on the servers.

    About ELK
    Official site: https://www.elastic.co/cn/
    Chinese guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details
    ELK Stack (renamed Elastic Stack from version 5.0 on) = ELK Stack + Beats
    The ELK Stack consists of Elasticsearch, Logstash, and Kibana.
    Elasticsearch is a search engine used to search, analyze, and store logs. It is distributed: it scales horizontally, supports automatic node discovery and automatic index sharding; in short, very powerful. Docs: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
    Logstash collects logs, parses them into JSON, and hands them to Elasticsearch.
    Kibana is a data-visualization component that presents the processed results through a web UI.
    Beats serves here as a lightweight log shipper; the Beats family actually has five members.
    Early ELK architectures used Logstash for both collecting and parsing logs, but Logstash is fairly heavy on memory, CPU, and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.
    x-pack is a paid extension for the Elastic Stack that bundles security, alerting, monitoring, reporting, and graph features.
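    To make the division of labor concrete, here is a minimal sketch of the kind of transformation Logstash performs: turning a raw syslog line into a JSON document that Elasticsearch can index. The regex and field names below are illustrative only, not Logstash's actual grok patterns.

```python
import json
import re

# Sketch of Logstash's job: turn a raw syslog line into a JSON document.
# Field names mirror common syslog fields but are illustrative only.
LINE_RE = re.compile(
    r"(?P<timestamp>\w{3} +\d+ [\d:]+) (?P<logsource>\S+) "
    r"(?P<program>[\w\-/]+)(?:\[\d+\])?: (?P<message>.*)"
)

def parse_syslog_line(line: str) -> dict:
    m = LINE_RE.match(line)
    if not m:
        return {"message": line}   # fall back to shipping the raw line
    return m.groupdict()

doc = parse_syslog_line(
    "Mar 21 00:40:01 minglinux-02 systemd: Started Session 45 of user root."
)
print(json.dumps(doc, indent=2))
```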

    ELK architecture

    2. Installation prerequisites

    Prepare three machines: 130, 132, 128.
    Role assignment:
    All three run elasticsearch (abbreviated "es" below): 130 is the master node, 132 and 128 are data nodes.
    Kibana goes on the es master, 130.
    Logstash goes on one es data node, 132.
    All three machines need JDK 8 (openjdk is fine):
    yum install -y java-1.8.0-openjdk

    #Update /etc/hosts on all three machines for easier management
    [root@minglinux-01 ~] vim /etc/hosts
    ···
      8 192.168.162.130 minglinux-01
      9 192.168.162.132 minglinux-02
     10 192.168.162.128 minglinux-03
    
    #Check/install the JDK
    [root@minglinux-01 ~] java -version
    java version "1.8.0_191"
    Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
    Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
    [root@minglinux-01 ~] which java
    /usr/local/jdk1.8/bin/java
    
    [root@minglinux-02 ~] java -version
    java version "1.8.0_191"
    Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
    Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
    [root@minglinux-02 ~] which java
    /usr/local/jdk1.8/bin/java
    
    [root@minglinux-03 ~] java -version
    -bash: java: command not found
    [root@minglinux-03 ~] yum install -y java-1.8.0-openjdk
    [root@minglinux-03 ~] java -version
    openjdk version "1.8.0_201"
    OpenJDK Runtime Environment (build 1.8.0_201-b09)
    OpenJDK 64-Bit Server VM (build 25.201-b09, mixed mode)
    [root@minglinux-03 ~] which java
    /usr/bin/java
    

    3. Installing Elasticsearch

    Official docs: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html
    Run the following on all three machines:
    rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    vim /etc/yum.repos.d/elastic.repo //add the following
    [elasticsearch-6.x]
    name=Elasticsearch repository for 6.x packages
    baseurl=https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
    yum install -y elasticsearch //or download the rpm file and install it directly:
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
    rpm -ivh elasticsearch-6.0.0.rpm

    #Configure the yum repo on all three machines, then install elasticsearch via yum
    [root@minglinux-01 ~] rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    [root@minglinux-01 ~] vim /etc/yum.repos.d/elastic.repo
    #Add the following
      1 [elasticsearch-6.x]
      2 name=Elasticsearch repository for 6.x packages
      3 baseurl=https://artifacts.elastic.co/packages/6.x/yum
      4 gpgcheck=1
      5 gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      6 enabled=1
      7 autorefresh=1
      8 type=rpm-md
    [root@minglinux-01 ~] yum list |grep elastic
    apm-server.i686                         6.6.2-1                        elasticsearch-6.x
    apm-server.x86_64                       6.6.2-1                        elasticsearch-6.x
    auditbeat.i686                          6.6.2-1                        elasticsearch-6.x
    auditbeat.x86_64                        6.6.2-1                        elasticsearch-6.x
    elastic-curator.noarch                  3.2.3-1.el7                    epel     
    elasticdump.noarch                      2.2.0-2.el7                    epel     
    elasticsearch.noarch                    6.6.2-1                        elasticsearch-6.x
    filebeat.i686                           6.6.2-1                        elasticsearch-6.x
    filebeat.x86_64                         6.6.2-1                        elasticsearch-6.x
    heartbeat-elastic.i686                  6.6.2-1                        elasticsearch-6.x
    heartbeat-elastic.x86_64                6.6.2-1                        elasticsearch-6.x
    journalbeat.i686                        6.6.2-1                        elasticsearch-6.x
    journalbeat.x86_64                      6.6.2-1                        elasticsearch-6.x
    kibana.x86_64                           6.6.2-1                        elasticsearch-6.x
    kibana-oss.x86_64                       6.3.0-1                        elasticsearch-6.x
    logstash.noarch                         1:6.6.2-1                      elasticsearch-6.x
    metricbeat.i686                         6.6.2-1                        elasticsearch-6.x
    metricbeat.x86_64                       6.6.2-1                        elasticsearch-6.x
    packetbeat.i686                         6.6.2-1                        elasticsearch-6.x
    packetbeat.x86_64                       6.6.2-1                        elasticsearch-6.x
    pcp-pmda-elasticsearch.x86_64           4.1.0-5.el7_6                  updates  
    python-elasticsearch.noarch             1.9.0-1.el7                    epel     
    rsyslog-elasticsearch.x86_64            8.24.0-34.el7                  base     
    [root@minglinux-01 ~] yum install -y elasticsearch
    #The install failed:
    could not find java; set JAVA_HOME or ensure java is in PATH
    error: %pre(elasticsearch-0:6.6.2-1.noarch) scriptlet failed, exit status 1
    Error in PREIN scriptlet in rpm package elasticsearch-6.6.2-1.noarch
    #The rpm scriptlet cannot find java, so create a symlink and retry; this time it succeeds
    [root@minglinux-01 ~] ln -s /usr/local/jdk1.8/bin/java /usr/bin/java
    [root@minglinux-01 ~] yum install -y elasticsearch
    
    [root@minglinux-02 ~] rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    [root@minglinux-02 ~] vim /etc/yum.repos.d/elastic.repo
    [root@minglinux-02 ~] yum install -y elasticsearch
    [root@minglinux-02 ~] ln -s /usr/local/jdk1.8/bin/java /usr/bin/java
    [root@minglinux-02 ~] yum install -y elasticsearch
    
    [root@minglinux-03 ~] rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    [root@minglinux-03 ~] vim /etc/yum.repos.d/elastic.repo
    [root@minglinux-03 ~] yum install -y elasticsearch 
    
    

    4. Configuring Elasticsearch

    The elasticsearch configuration lives under /etc/elasticsearch/ and /etc/sysconfig/elasticsearch.
    Reference: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rpm.html
    On 130, edit the config file vim /etc/elasticsearch/elasticsearch.yml //add or change:
    cluster.name: minglinux
    node.name: minglinux-01
    node.master: true //this node is the master
    node.data: false //whether this is a data node
    network.host: 192.168.162.130 //0.0.0.0 would listen on all IPs, which is less safe
    discovery.zen.ping.unicast.hosts: ["192.168.162.130", "192.168.162.132", "192.168.162.128"]
    On 132 and 128, likewise edit vim /etc/elasticsearch/elasticsearch.yml //add or change:
    cluster.name: minglinux
    node.name: minglinux-02 (or minglinux-03)
    node.master: false
    node.data: true //data node
    network.host: 192.168.162.132 (or 192.168.162.128)
    discovery.zen.ping.unicast.hosts: ["192.168.162.130", "192.168.162.132", "192.168.162.128"]

    #Edit the config file on 130
    [root@minglinux-01 ~] vim /etc/elasticsearch/elasticsearch.yml
    #Add the following in the corresponding sections
     18 cluster.name: minglinux
     26 node.name: minglinux-01
     30 node.master: true
     31 node.data: false
     60 network.host: 192.168.162.130
     74 discovery.zen.ping.unicast.hosts: ["192.168.162.130", "192.168.162.132", "192.168.162.128"]
    
    #Copy the finished config from machine 01 to 02 and 03, then adjust it
    [root@minglinux-01 ~] scp /etc/elasticsearch/elasticsearch.yml minglinux-02:/tmp/
    elasticsearch.yml                                                      100% 3076     1.6MB/s   00:00    
    [root@minglinux-01 ~] scp /etc/elasticsearch/elasticsearch.yml minglinux-03:/tmp/
    elasticsearch.yml                                                      100% 3076     2.6MB/s   00:00    
    
    [root@minglinux-02 ~] cp /tmp/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml 
    cp: overwrite "/etc/elasticsearch/elasticsearch.yml"? y
    [root@minglinux-02 ~] vim /etc/elasticsearch/elasticsearch.yml
    ···
    cluster.name: minglinux
    node.name: minglinux-02 
    node.master: false
    node.data: true
    network.host: 192.168.162.132
    discovery.zen.ping.unicast.hosts: ["192.168.162.130", "192.168.162.132", "192.168.162.128"]
    
    [root@minglinux-03 ~] cp /tmp/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml
    cp: overwrite "/etc/elasticsearch/elasticsearch.yml"? y
    [root@minglinux-03 ~] vim !$
    vim /etc/elasticsearch/elasticsearch.yml
    ···
    cluster.name: minglinux
    node.name: minglinux-03
    node.master: false
    node.data: true
    network.host: 192.168.162.128
    discovery.zen.ping.unicast.hosts: ["192.168.162.130", "192.168.162.132", "192.168.162.128"]
    
    • ELK installation – installing x-pack (optional)

    Run on all three machines:
    cd /usr/share/elasticsearch/bin/ (optional)
    ./elasticsearch-plugin install x-pack //if this is slow, download the x-pack zip instead (optional)
    cd /tmp/; wget https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-6.0.0.zip (optional)
    ./elasticsearch-plugin install file:///tmp/x-pack-6.0.0.zip (optional)
    Start the elasticsearch service:
    systemctl enable elasticsearch.service
    systemctl start elasticsearch.service
    If startup fails, check the log /var/log/elasticsearch/minglinux.log
    The following only needs to run on 130.
    Once x-pack is installed you can set passwords for the built-in users:
    /usr/share/elasticsearch/bin/x-pack/setup-passwords interactive (optional)
    curl localhost:9200 -u elastic //enter the password to see the output (optional)

    #x-pack is a paid product, so we skip it; next, start the service
    [root@minglinux-01 ~] systemctl start elasticsearch
    [root@minglinux-01 ~] ps aux |grep elasticsearch
    elastic+  35666 34.5 52.4 3444748 977676 ?      Ssl  20:25   0:22 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-4079948052346171381 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
    root      35840  0.0  0.0 112720   984 pts/0    R+   20:26   0:00 grep --color=auto elasticsearch
    #On machine 01 the ports never showed up and the process kept exiting after start. There were iptables rules in place, so I flushed them, and also killed some processes because the server load was too high. Even then it takes quite a while after startup before ports 9200 and 9300 appear.
    [root@minglinux-01 ~] systemctl stop iptables.service 
    [root@minglinux-01 ~] iptables -nvL
    [root@minglinux-01 ~] pkill mongod
    [root@minglinux-01 ~] ps aux |grep mongo
    root      41950  0.0  0.0 112720   984 pts/0    S+   21:15   0:00 grep --color=auto mongo
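    Rather than re-running netstat by hand while waiting for the ports, a small polling loop can do the watching. A sketch (the host, port, and timeout values are examples):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 120.0) -> bool:
    """Poll until a TCP port accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)
    return False

# Example: wait for Elasticsearch's HTTP port on the master node
# wait_for_port("192.168.162.130", 9200)
```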
          
    
    [root@minglinux-02 ~] systemctl start elasticsearch
    [root@minglinux-02 ~] ps aux |grep elasticsearch
    elastic+   5101 37.8 60.5 1591216 696520 ?      Ssl  20:48   0:02 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-8969078006330607850 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
    root       5155 23.0  0.0 112720   984 pts/1    S+   20:48   0:00 grep --color=auto elasticsearch
    
    [root@minglinux-03 ~] systemctl start elasticsearch
    [root@minglinux-03 ~] ps aux |grep elasticsearch
    elastic+  21046 69.0 74.7 1550420 746264 ?      Ssl  20:49   0:02 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-7191065286840389837 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
    root      21093 36.0  0.0 112720   984 pts/0    S+   20:49   0:00 grep --color=auto elasticsearch
    
    

    5. Querying Elasticsearch with curl

    Run on 130:
    curl '192.168.162.130:9200/_cluster/health?pretty' //cluster health check
    curl '192.168.162.130:9200/_cluster/state?pretty' //detailed cluster state
    Reference: http://zhaoyanblog.com/archives/732.html

    [root@minglinux-01 ~] curl '192.168.162.130:9200/_cluster/health?pretty' 
    {
      "cluster_name" : "minglinux",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 3,
      "number_of_data_nodes" : 2,
      "active_primary_shards" : 0,
      "active_shards" : 0,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 100.0
    }
    
    [root@minglinux-01 ~] curl '192.168.162.130:9200/_cluster/state?pretty' 
    #output omitted here, it is very long
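    The health output is plain JSON, so it is easy to evaluate from a script, e.g. in a monitoring hook. A sketch fed with a canned response; in practice you would first fetch http://192.168.162.130:9200/_cluster/health, and the 3-node threshold matching this cluster is an assumption:

```python
import json

def cluster_ok(health: dict) -> bool:
    # "green": all shards allocated; "yellow": primaries only; "red": data missing
    return health.get("status") in ("green", "yellow") and health.get("number_of_nodes", 0) >= 3

sample = json.loads("""{
  "cluster_name": "minglinux",
  "status": "green",
  "number_of_nodes": 3,
  "number_of_data_nodes": 2
}""")
print(cluster_ok(sample))  # → True
```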
    

    6. Installing Kibana

    Run the following on 130.
    The yum repo was configured earlier, so there is no need to add it again:
    yum install -y kibana
    If that is too slow, download the rpm package directly:
    wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
    rpm -ivh kibana-6.0.0-x86_64.rpm
    Kibana also needs x-pack installed (optional),
    in the same way as for elasticsearch:
    cd /usr/share/kibana/bin (optional)
    ./kibana-plugin install x-pack //if installing this way is slow, download the zip file instead (optional)
    wget https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-6.0.0.zip //this is the same file downloaded earlier (optional)
    ./kibana-plugin install file:///tmp/x-pack-6.0.0.zip (optional)

    Also on 130:
    vim /etc/kibana/kibana.yml //add
    server.host: 192.168.162.130 //without x-pack there is no password login; for safety, listen only on the internal network
    elasticsearch.url: "http://192.168.162.130:9200"
    logging.dest: /var/log/kibana.log
    touch /var/log/kibana.log; chmod 777 /var/log/kibana.log
    systemctl restart kibana
    Browse to http://192.168.162.130:5601/
    Username elastic, password as set earlier (without x-pack no credentials are needed)
    If the username/password prompt does not work, check the log /var/log/kibana.log
    On the error "Status changed from uninitialized to red - Elasticsearch is still initializing the kibana index."
    the fix is: curl -XDELETE http://192.168.162.130:9200/.kibana -uelastic

    #Install
    [root@minglinux-01 ~] yum install -y kibana
    
    #Edit the config file
    [root@minglinux-01 ~] vim /etc/kibana/kibana.yml 
    #Add the following
      2 server.port: 5601
      8 server.host: 192.168.162.130
     28 elasticsearch.hosts: ["http://192.168.162.130:9200"]
     97 logging.dest: /var/log/kibana.log
    
    #Create the log file and adjust its permissions
    [root@minglinux-01 ~] touch /var/log/kibana.log; chmod 777 /var/log/kibana.log
    
    #Start kibana
    [root@minglinux-01 ~] systemctl restart kibana
    [root@minglinux-01 ~] ps aux |grep kibana
    kibana    49103  109  8.0 1236344 149788 ?      Rsl  22:13   0:14 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
    root      49142  0.0  0.0 112720   984 pts/0    S+   22:13   0:00 grep --color=auto kibana
    [root@minglinux-01 ~] netstat -lntp |grep 5601
    tcp        0      0 192.168.162.130:5601    0.0.0.0:*               LISTEN      49103/node          
    
    
    • Access it in the browser


    7. Installing Logstash

    Run the following on 132.
    Logstash does not currently support Java 9.
    Install directly via yum (same repo as configured for es earlier):
    yum install -y logstash //if slow, download the rpm package instead
    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
    cd /usr/share/logstash/bin/ (optional)
    ./logstash-plugin install file:///tmp/x-pack-6.0.0.zip (optional)

    #Install logstash on machine 02
    [root@minglinux-02 ~] yum install -y logstash  
    
    • First, a test of collecting syslog with Logstash

    The following is done on 132.
    Edit the config file vim /etc/logstash/conf.d/syslog.conf //add the following
    input {
      syslog {
        type => "system-syslog"  #syslog is sent to port 10514, where logstash listens
        port => 10514            #a file path could also be specified directly
      }
    }
    output {
      stdout {
        codec => rubydebug
      }
    }
    Check the config file for errors:
    cd /usr/share/logstash/bin
    ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit

    The following is done on 132.
    Start logstash in the foreground:
    ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf //log output appears on screen; the terminal can no longer take commands
    Open another terminal.
    Check that port 10514 is listening: netstat -lnp |grep 10514
    vi /etc/rsyslog.conf //below #### RULES add one line
    *.* @@127.0.0.1:10514
    systemctl restart rsyslog
    ssh from 130 to 132; the ssh login events show up in the terminal running logstash in the foreground.
    To stop logstash, press Ctrl-C in that foreground terminal.
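    The `@@` in the rsyslog rule means forward over TCP (a single `@` would be UDP), and what travels over that connection are plain RFC 3164 lines. A sketch that builds such a line by hand, to show what Logstash's syslog input actually receives (the PRI prefix encodes facility and severity):

```python
import socket
import time

def rfc3164_line(facility: int, severity: int, host: str, tag: str, msg: str) -> str:
    """Build a BSD-syslog (RFC 3164) line like the ones rsyslog forwards."""
    pri = facility * 8 + severity
    # Real syslog space-pads the day of month; strftime zero-pads, close enough here.
    timestamp = time.strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {host} {tag}: {msg}"

line = rfc3164_line(3, 6, "minglinux-02", "systemd", "Started Session 45 of user root.")
print(line)

# To actually deliver it to the listener (on a host that can reach 132):
# with socket.create_connection(("192.168.162.132", 10514)) as s:
#     s.sendall((line + "\n").encode())
```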

    [root@minglinux-02 ~] vim /etc/logstash/conf.d/syslog.conf
    #Add the following
    input {
      syslog {
        type => "system-syslog"
        port => 10514
      }
    }
    output {
      stdout {
        codec => rubydebug
      }
    }
    
    #Check the config file for errors
    [root@minglinux-02 ~] cd /usr/share/logstash/bin
    [root@minglinux-02 /usr/share/logstash/bin] ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
    Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
    [2019-03-20T23:28:03,361][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
    [2019-03-20T23:28:03,398][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
    [2019-03-20T23:28:05,583][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
    Configuration OK
    [2019-03-20T23:28:20,681][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
    
    #Start logstash in the foreground
    [root@minglinux-02 /usr/share/logstash/bin] ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
    #It now waits, printing log output to the screen; commands can no longer be typed here
    
    
    #In another terminal, perform some actions, then check whether the logs reach port 10514, i.e. appear in the foreground of the terminal that started logstash
    [root@minglinux-02 /usr/share/logstash/bin] vim /etc/rsyslog.conf
    #Add one line below RULES
    #### RULES ####
    *.* @@192.168.162.132:10514
    
    [root@minglinux-02 ~] systemctl restart rsyslog
    [root@minglinux-02 ~] netstat -lnp |grep 10514
    tcp6       0      0 :::10514                :::*                    LISTEN      5681/java           
    udp        0      0 0.0.0.0:10514           0.0.0.0:*                           5681/java           
    
    #Log in over ssh
    [root@minglinux-01 ~] ssh minglinux-02
    Last login: Wed Mar 20 23:33:54 2019 from 192.168.162.1
    [root@minglinux-02 ~] logout
    Connection to minglinux-02 closed.
    [root@minglinux-01 ~] ssh minglinux-02
    Last login: Wed Mar 20 23:40:30 2019 from gitlab.example.com
    [root@minglinux-02 ~] logout
    Connection to minglinux-02 closed.
    
    
    #The ssh login events are visible
    [root@minglinux-02 /usr/share/logstash/bin] ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
    Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
    [2019-03-20T23:49:07,728][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
    [2019-03-20T23:49:07,772][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.6.2"}
    [2019-03-20T23:49:20,782][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
    [2019-03-20T23:49:21,498][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x9b299a7 run>"}
    [2019-03-20T23:49:21,719][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
    [2019-03-20T23:49:21,728][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:10514"}
    [2019-03-20T23:49:21,749][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
    [2019-03-20T23:49:22,257][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    [2019-03-20T23:50:01,797][INFO ][logstash.inputs.syslog   ] new connection {:client=>"192.168.162.132:53582"}
    {
               "message" => "Started Session 38 of user root.\n",
              "@version" => "1",
              "priority" => 30,
             "timestamp" => "Mar 20 23:50:01",
              "severity" => 6,
                  "host" => "192.168.162.132",
                  "type" => "system-syslog",
        "facility_label" => "system",
        "severity_label" => "Informational",
            "@timestamp" => 2019-03-20T15:50:01.000Z,
               "program" => "systemd",
             "logsource" => "minglinux-02",
              "facility" => 3
    }
    {
               "message" => "action 'action 0' resumed (module 'builtin:omfwd') [v8.24.0 try http://www.rsyslog.com/e/2359 ]\n",
              "@version" => "1",
              "priority" => 46,
             "timestamp" => "Mar 20 23:50:01",
              "severity" => 6,
                  "host" => "192.168.162.132",
                  "type" => "system-syslog",
        "facility_label" => "syslogd",
        "severity_label" => "Informational",
            "@timestamp" => 2019-03-20T15:50:01.000Z,
               "program" => "rsyslogd",
             "logsource" => "minglinux-02",
              "facility" => 5
    }
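    In the events above, `priority`, `facility`, and `severity` are not independent: syslog encodes them as priority = facility * 8 + severity. The decoding can be checked against both events:

```python
def split_priority(priority: int) -> tuple:
    """Decode a syslog PRI value into (facility, severity)."""
    return priority // 8, priority % 8

# First event: priority 30 → facility 3 (labelled "system"), severity 6 (Informational)
print(split_priority(30))  # → (3, 6)
# Second event: priority 46 → facility 5 (labelled "syslogd"), severity 6
print(split_priority(46))  # → (5, 6)
```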
    
    

    8. Configuring Logstash

    The following is done on 132, to run logstash in the background.
    Edit the config file vim /etc/logstash/conf.d/syslog.conf //change the contents to
    input {
      syslog {
        type => "system-syslog"
        port => 10514
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.162.132:9200"]
        index => "system-syslog-%{+YYYY.MM}"
      }
    }
    systemctl start logstash //startup takes some time; once it completes, ports 9600 and 10514 are listening

    [root@minglinux-02 /usr/share/logstash/bin] vim /etc/logstash/conf.d/syslog.conf
    #Change the config file to the following
    input {
      syslog {
        type => "system-syslog"
        port => 10514
      } 
    } 
    output {
      elasticsearch {
        hosts => ["192.168.162.132:9200"]
        index => "system-syslog-%{+YYYY.MM}"
      }
    }
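    The `%{+YYYY.MM}` in the index name is a date-format pattern evaluated against each event's @timestamp, so Logstash writes one index per month. A sketch of the expansion, using Python's strftime as a stand-in for the Joda-style pattern:

```python
from datetime import datetime, timezone

def monthly_index(prefix: str, when: datetime) -> str:
    # Logstash's %{+YYYY.MM} formats the event's @timestamp in UTC;
    # strftime's %Y.%m produces the same text.
    return f"{prefix}-{when.astimezone(timezone.utc):%Y.%m}"

print(monthly_index("system-syslog", datetime(2019, 3, 21, tzinfo=timezone.utc)))
# → system-syslog-2019.03
```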
    
    #Check that the config file has no errors
    [root@minglinux-02 /usr/share/logstash/bin] ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
    Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
    [2019-03-21T00:07:23,462][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
    Configuration OK
    [2019-03-21T00:07:33,481][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
    
    #Start logstash in the background as a service
    [root@minglinux-02 /usr/share/logstash/bin] systemctl start logstash 
    Failed to start logstash.service: Unit not found.
    [root@minglinux-02 /usr/share/logstash/bin] /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd  #generate logstash.service
    Using provided startup.options file: /etc/logstash/startup.options
    Manually creating startup for specified platform: systemd
    Successfully created system startup script for Logstash
    [root@minglinux-02 /usr/share/logstash/bin] systemctl start logstash 
    [root@minglinux-02 /usr/share/logstash/bin] ps aux |grep logstash
    logstash   6481  190 33.7 3477180 387680 ?      SNsl 00:14   0:47 /bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-22.0.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/javassist-3.22.0-GA.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.j
ar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash
    root       6509  0.0  0.0 112720   980 pts/1    R+   00:14   0:00 grep --color=auto logstash
    
    #Startup takes quite a while; check the log for errors
    [root@minglinux-02 /usr/share/logstash/bin] less /var/log/logstash/logstash-plain.log 
    #The log is not being updated; looks like a permissions problem, so change the owner
    [root@minglinux-02 /usr/share/logstash/bin] chown logstash /var/log/logstash/logstash-plain.log
    [root@minglinux-02 /usr/share/logstash/bin] chown -R logstash /var/lib/logstash
    [root@minglinux-02 /usr/share/logstash/bin] systemctl restart logstash 
    #Success at last
    
    #Startup has only succeeded once ports 10514 and 9600 are being listened on
    [root@minglinux-02 /usr/share/logstash/bin] netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      4232/master         
    tcp        0      0 0.0.0.0:10050           0.0.0.0:*               LISTEN      3818/zabbix_agentd  
    tcp        0      0 192.168.162.132:27017   0.0.0.0:*               LISTEN      3981/mongod         
    tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      3981/mongod         
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3804/sshd           
    tcp6       0      0 ::1:25                  :::*                    LISTEN      4232/master         
    tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      7323/java           
    tcp6       0      0 :::10050                :::*                    LISTEN      3818/zabbix_agentd  
    tcp6       0      0 :::3306                 :::*                    LISTEN      4276/mysqld         
    tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
    tcp6       0      0 192.168.162.132:9200    :::*                    LISTEN      5101/java           
    tcp6       0      0 :::10514                :::*                    LISTEN      7323/java           
    tcp6       0      0 192.168.162.132:9300    :::*                    LISTEN      5101/java           
    tcp6       0      0 :::22                   :::*                    LISTEN      3804/sshd           
    
    

    9. Viewing logs in Kibana

    Run on 130: curl '192.168.162.130:9200/_cat/indices?v' to list the indices
    curl -XGET '192.168.162.130:9200/indexname?pretty' to get details of a specific index
    curl -XDELETE '192.168.162.130:9200/logstash-xxx-*' to delete specified indices
    Browse to 192.168.162.130:5601 and configure the index pattern in Kibana:
    in the left sidebar, click "Management" -> "Index Patterns" -> "Create Index Pattern".
    The Index pattern field must match an index name found with curl above, otherwise the button below it cannot be clicked.
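    Because the indices are monthly, old ones eventually need pruning with the DELETE call above (the elastic-curator package seen in the earlier yum listing automates this properly). A sketch that merely computes which system-syslog-* names fall outside a retention window; it does not call Elasticsearch, and the retention value is an example:

```python
from datetime import datetime

def expired_indices(names, keep_months, today):
    """Return monthly index names older than keep_months months."""
    cutoff = today.year * 12 + today.month - keep_months
    expired = []
    for name in names:
        stamp = name.rpartition("-")[2]      # e.g. "2019.03"
        try:
            d = datetime.strptime(stamp, "%Y.%m")
        except ValueError:
            continue                         # not a monthly index, skip it
        if d.year * 12 + d.month <= cutoff:
            expired.append(name)
    return expired

names = ["system-syslog-2018.11", "system-syslog-2019.02", "system-syslog-2019.03"]
print(expired_indices(names, keep_months=3, today=datetime(2019, 3, 21)))
# → ['system-syslog-2018.11']
```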

    #List the indices
    [root@minglinux-01 ~] curl '192.168.162.130:9200/_cat/indices?v' 
    health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   .kibana_1             IpJkISvGSBecrHJR5dNMtA   1   1          3            0       24kb           12kb
    green  open   system-syslog-2019.03 cWR1XimDQE6PkexAIHCYkA   5   1          5            0     81.5kb         40.7kb
    [root@minglinux-01 ~]  curl -XGET '192.168.162.130:9200/system-syslog-2019.03?pretty'
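    Since the `_cat` output is plain columnar text, simple index housekeeping can be scripted with awk and no extra tooling. A minimal sketch that prints (but does not run) delete commands for nginx-test indices dated before a cutoff; the captured sample and the 2019.02 index name are made up for illustration:

```shell
# Captured sample of `curl '192.168.162.130:9200/_cat/indices?v'` output;
# column 3 is the index name (the 2019.02 index is invented for this example)
sample='health status index                 uuid pri rep docs.count docs.deleted store.size pri.store.size
green  open   system-syslog-2019.03 cWR1XimDQE6PkexAIHCYkA 5 1 5 0 81.5kb 40.7kb
green  open   nginx-test-2019.02.28 AAAAAAAAAAAAAAAAAAAAAA 5 1 9 0 1.4mb 703.4kb'

# Print a DELETE command for every nginx-test index dated before 2019.03
cmds=$(echo "$sample" | awk 'NR > 1 && $3 ~ /^nginx-test-/ {
    split($3, a, "nginx-test-")        # a[2] is the YYYY.MM.DD suffix
    if (a[2] < "2019.03")
        print "curl -XDELETE \047192.168.162.130:9200/" $3 "\047"
}')
echo "$cmds"
```

    Review the printed commands first, then pipe them to sh to actually delete the indices.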
    
    
    • Configure the index in Kibana and view the logs




    [root@minglinux-02 /usr/share/logstash/bin] tail -f /var/log/messages
    Mar 21 00:38:33 minglinux-02 logstash: [2019-03-21T00:38:33,852][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1354ed0d run>"}
    Mar 21 00:38:34 minglinux-02 logstash: [2019-03-21T00:38:34,787][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:10514"}
    Mar 21 00:38:34 minglinux-02 logstash: [2019-03-21T00:38:34,792][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
    Mar 21 00:38:34 minglinux-02 logstash: [2019-03-21T00:38:34,789][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
    Mar 21 00:38:35 minglinux-02 logstash: [2019-03-21T00:38:35,908][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    Mar 21 00:40:01 minglinux-02 systemd: Started Session 45 of user root.
    Mar 21 00:40:01 minglinux-02 rsyslogd: action 'action 0' resumed (module 'builtin:omfwd') [v8.24.0 try http://www.rsyslog.com/e/2359 ]
    Mar 21 00:40:01 minglinux-02 rsyslogd: action 'action 0' resumed (module 'builtin:omfwd') [v8.24.0 try http://www.rsyslog.com/e/2359 ]
    Mar 21 00:40:01 minglinux-02 logstash: [2019-03-21T00:40:01,528][INFO ][logstash.inputs.syslog   ] new connection {:client=>"192.168.162.132:34460"}
    Mar 21 00:50:01 minglinux-02 systemd: Started Session 46 of user root.
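    The syslog listener on 10514 splits each forwarded line into timestamp, host, program, and message fields. The same split can be sketched in plain shell on a /var/log/messages-style line; the sample line is taken from the output above:

```shell
line='Mar 21 00:40:01 minglinux-02 systemd: Started Session 45 of user root.'

# Fields 1-3: timestamp; field 4: host; field 5: program (with a trailing colon);
# the remainder is the message body
ts=$(echo "$line" | awk '{print $1, $2, $3}')
host=$(echo "$line" | awk '{print $4}')
prog=$(echo "$line" | awk '{sub(/:$/, "", $5); print $5}')
echo "timestamp=$ts host=$host program=$prog"
```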
    

    十、Collecting nginx logs

    On 132, edit the config file vi /etc/logstash/conf.d/nginx.conf //add the following
    input {
      file {
        path => "/tmp/elk_access.log"
        start_position => "beginning"
        type => "nginx"
      }
    }
    filter {
      grok {
        match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
      }
      geoip {
        source => "clientip"
      }
    }
    output {
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => ["192.168.162.132:9200"]
        index => "nginx-test-%{+YYYY.MM.dd}"
      }
    }
    Check that the config file is valid
    cd /usr/share/logstash/bin
    ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
    yum install -y nginx
    vim /usr/local/nginx/conf/vhost/elk.conf //create the virtual host config file with the following content
    server {
        listen 80;
        server_name elk.ming.com;
        location / {
            proxy_pass http://192.168.162.130:5601;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        access_log /tmp/elk_access.log main2;
    }

    vim /usr/local/nginx/conf/nginx.conf //add the following inside the http block
    log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$upstream_addr" $request_time';
    /usr/local/nginx/sbin/nginx -t
    systemctl start nginx
    On Windows, add a hosts entry: 192.168.162.132 elk.ming.com
    Visit the site in a browser and check that log entries are being produced
    systemctl restart logstash

    [root@minglinux-02 /usr/share/logstash/bin] vi /etc/logstash/conf.d/nginx.conf
    #Add the following
    input {
      file {
        path => "/tmp/elk_access.log"
        start_position => "beginning"
        type => "nginx"
      }
    }
    filter {
        grok {
            match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
        }
        geoip {
            source => "clientip"
        }
    }
    output {
        stdout { codec => rubydebug }
        elasticsearch {
            hosts => ["192.168.162.132:9200"]
            index => "nginx-test-%{+YYYY.MM.dd}"
      }
    }
    #Validate the config
    [root@minglinux-02 /usr/share/logstash/bin] ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
    Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
    [2019-03-21T02:03:17,049][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
    Configuration OK
    [2019-03-21T02:03:33,526][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
    #Restart logstash
    [root@minglinux-02 /usr/share/logstash/bin] systemctl restart logstash 
    
    
    #Create the virtual host config file
    [root@minglinux-02 /usr/local/nginx/conf/vhost] vim elk.conf
    #Contents:
    server {
                listen 80;
                server_name elk.ming.com;
    
                location / {
                    proxy_pass      http://192.168.162.130:5601;
                    proxy_set_header Host   $host;
                    proxy_set_header X-Real-IP      $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                }
                access_log  /tmp/elk_access.log main2;
            }
    
    
    [root@minglinux-02 /usr/local/nginx/conf/vhost] vim /usr/local/nginx/conf/nginx.conf
    #Add the following
    ···
        log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$upstream_addr" $request_time';
    ···
    
    #Check the nginx config for errors
    [root@minglinux-02 /usr/local/nginx/conf/vhost] /usr/local/nginx/sbin/nginx -t
    nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
    nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
    [root@minglinux-02 /usr/local/nginx/conf/vhost] /usr/local/nginx/sbin/nginx -s reload
    
    

    Finally
    On 130, run curl '192.168.162.130:9200/_cat/indices?v'
    and check that an index starting with nginx-test has been created
    Only if it has can the index be configured in Kibana
    In the left sidebar, click "Management" -> "Index Patterns" -> "Create Index Pattern"
    Enter nginx-test-* as the Index pattern
    Then click Discover in the left sidebar

    #List the indices on 130
    [root@minglinux-01 ~] curl '192.168.162.130:9200/_cat/indices?v' 
    health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   .kibana_1             IpJkISvGSBecrHJR5dNMtA   1   1          4            0     39.4kb         19.7kb
    green  open   system-syslog-2019.03 cWR1XimDQE6PkexAIHCYkA   5   1       2887            0      2.2mb            1mb
    green  open   nginx-test-2019.03.20 hVWCcygQRvSfmE54H_vyIg   5   1       2821            0      1.4mb        703.4kb
    
    
    • Configure the new index in Kibana


    • The log page


    [root@minglinux-02 /usr/share/logstash/bin] tail -f /tmp/elk_access.log
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:11:54 +0800] "GET /bundles/kibana.bundle.js HTTP/1.1"304 0 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.038
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:11:59 +0800] "GET /api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=es_6_0 HTTP/1.1"200 19139 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.104
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:01 +0800] "GET /api/spaces/space HTTP/1.1"200 114 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.095
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:01 +0800] "GET /api/saved_objects/_find?type=index-pattern&fields=title&search=*&search_fields=title&per_page=1 HTTP/1.1"200 179 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.091
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:01 +0800] "GET /bundles/ebdca7741674eca4e1fadeca157f3ae6.svg HTTP/1.1"304 0 "http://elk.ming.com/bundles/commons.style.css""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.193
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:01 +0800] "GET /api/security/v1/me HTTP/1.1"200 2 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.175
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:01 +0800] "GET /ui/fonts/open_sans/open_sans_v15_latin_regular.woff2 HTTP/1.1"304 0 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.224
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:01 +0800] "GET /api/xpack/v1/info HTTP/1.1"200 679 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.079
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:02 +0800] "GET /ui/fonts/open_sans/open_sans_v15_latin_700.woff2 HTTP/1.1"200 14720 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.157
    elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:02 +0800] "GET /ui/fonts/open_sans/open_sans_v15_latin_600.woff2 HTTP/1.1"200 14544 "http://elk.ming.com/app/kibana""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" "192.168.162.130:5601" 0.161
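    Each line above is in the main2 format. Before relying on the grok pattern, the field layout can be sanity-checked with awk; a rough sketch using whitespace-split fields, so only the fields outside the quoted sections are addressed (the sample line is shortened from the output above):

```shell
line='elk.ming.com 192.168.162.1 - - [21/Mar/2019:02:12:01 +0800] "GET /api/spaces/space HTTP/1.1" 200 114 "http://elk.ming.com/app/kibana" "Mozilla/5.0" "192.168.162.130:5601" 0.095'

# $1 -> http_host, $2 -> clientip, last field -> request_time
http_host=$(echo "$line" | awk '{print $1}')
clientip=$(echo "$line" | awk '{print $2}')
request_time=$(echo "$line" | awk '{print $NF}')
echo "host=$http_host client=$clientip request_time=$request_time"
```

    Fields inside the quoted sections (the request line, referrer, user agent) contain spaces, which is exactly why the logstash config uses grok with %{QS:...} patterns instead of a plain split.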
    
    

    十一、Collecting logs with Beats

    https://www.elastic.co/cn/products/beats
    filebeat metricbeat packetbeat winlogbeat auditbeat heartbeat
    Extensible; custom Beats can also be built
    Run on 128:
    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm
    rpm -ivh filebeat-6.0.0-x86_64.rpm
    First edit the config file
    vim /etc/filebeat/filebeat.yml //add or change the following
    filebeat.prospectors:
    - type: log
      paths:
        - /var/log/messages
    output.console:
      enabled: true
    /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml //the matching log entries are printed to the screen
    Then edit the config file again
    vim /etc/filebeat/filebeat.yml //add or change the following
    filebeat.prospectors:
    - type: log
      paths:
        - /var/log/elasticsearch/minglinux.log
    output.elasticsearch:
      hosts: ["192.168.162.130:9200"]
    systemctl start filebeat
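    For reference, a consolidated sketch of the two filebeat.yml variants above. Filebeat accepts only one output at a time, so switch the comments when moving from the console test to shipping into Elasticsearch:

```yaml
filebeat.prospectors:
- type: log
  paths:
    - /var/log/messages

# Console output, for a quick on-screen test:
output.console:
  enabled: true

# Elasticsearch output (enable this and comment out
# output.console when done testing):
#output.elasticsearch:
#  hosts: ["192.168.162.130:9200"]
```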

    [root@minglinux-03 ~] wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm
    [root@minglinux-03 ~] rpm -ivh  filebeat-6.0.0-x86_64.rpm
    Preparing...                          ################################# [100%]
    Upgrading/installing...
       1:filebeat-6.0.0-1                 ################################# [100%]
    
    
    #Edit the config file
    [root@minglinux-03 ~] vim /etc/filebeat/filebeat.yml 
    #Change or add the following (numbers are vim line numbers)
    15 filebeat.prospectors:
     21 - type: log
     24 # enabled: false
     27   paths:
     28     - /var/log/messages
    141 output.console:
    142   enabled: true
    143 #-------------------------- Elasticsearch output ------------------------------
    144 #output.elasticsearch:
    
    #Log in remotely, to generate fresh entries in /var/log/messages
    [root@minglinux-02 ~] ssh minglinux-03
    root@minglinux-03's password: 
    Last login: Thu Mar 21 02:30:37 2019 from minglinux-02
    [root@minglinux-03 ~] logout
    Connection to minglinux-03 closed.
    
    #The matching log entries are printed to the screen
    ^C[root@minglinux-03 ~] /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml 
    ···
    ···
    {"@timestamp":"2019-03-20T18:35:59.219Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.0.0"},"source":"/var/log/messages","offset":13019,"message":"Mar 21 02:35:56 minglinux-03 systemd: Starting Session 73 of user root.","prospector":{"type":"log"},"beat":{"version":"6.0.0","name":"minglinux-03","hostname":"minglinux-03"}}
    {"@timestamp":"2019-03-20T18:35:59.219Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.0.0"},"prospector":{"type":"log"},"beat":{"name":"minglinux-03","hostname":"minglinux-03","version":"6.0.0"},"source":"/var/log/messages","offset":13084,"message":"Mar 21 02:35:57 minglinux-03 systemd-logind: Removed session 73."}
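    Each console event is one JSON object per line. Even without jq, individual fields can be pulled out with grep; a quick sketch on one captured event that extracts only the message field (it assumes the value contains no escaped quotes, which holds for plain syslog lines):

```shell
event='{"@timestamp":"2019-03-20T18:35:59.219Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.0.0"},"source":"/var/log/messages","offset":13019,"message":"Mar 21 02:35:56 minglinux-03 systemd: Starting Session 73 of user root."}'

# Grab the "message":"..." pair, then take the quoted value (4th "-delimited field)
msg=$(echo "$event" | grep -o '"message":"[^"]*"' | cut -d'"' -f4)
echo "$msg"
```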
    
    
    #Edit the config file again
    
    #Add or change the following
    [root@minglinux-03 ~] vim /etc/filebeat/filebeat.yml 
    ···
     27   paths:
     28     - /var/log/elasticsearch/minglinux.log  #the es log
    
    141 #output.console:
    142 #  enabled: true
    
    144 output.elasticsearch:
    145   # Array of hosts to connect to.
    146   hosts: ["192.168.162.130:9200"]
    ···
    
    #Start filebeat
    [root@minglinux-03 ~] systemctl start  filebeat
    [root@minglinux-03 ~] ps aux |grep filebeat
    root      21859 12.5  1.2 287348 12112 ?        Ssl  02:50   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
    root      21867  0.0  0.0 112720   984 pts/0    S+   02:50   0:00 grep --color=auto filebeat
    
    #List the indices
    [root@minglinux-01 ~] curl '192.168.162.130:9200/_cat/indices?v' 
    health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   .kibana_1                 IpJkISvGSBecrHJR5dNMtA   1   1          5            0     52.8kb         26.4kb
    green  open   system-syslog-2019.03     cWR1XimDQE6PkexAIHCYkA   5   1      16469            0        7mb          3.5mb
    green  open   filebeat-6.0.0-2019.03.21 WKCdeKzARK2npJwjkgHFtA   3   1          9            0     48.6kb         21.6kb
    green  open   nginx-test-2019.03.20     hVWCcygQRvSfmE54H_vyIg   5   1      16403            0      6.5mb          3.5mb
    
    
    • Configure the new index in Kibana


    • The log page


    十二、Extensions

    x-pack is paid; a free alternative write-up: http://www.jianshu.com/p/a49d93212eca
    https://www.elastic.co/subscriptions
    Evolution of the Elastic Stack http://70data.net/1505.html
    LinkedIn's real-time log analysis system built on Kafka and Elasticsearch http://t.cn/RYffDoE
    Using redis http://blog.lishiming.net/?p=463
    Building a large-scale log analysis platform with ELK+Filebeat+Kafka+ZooKeeper https://www.cnblogs.com/delgyd/p/elk.html
    http://www.jianshu.com/p/d65aed756587

    Title: ELK
    
    Link: https://www.haomeiwen.com/subject/fhjypqtx.html