Distributed Log Collection with ELK


Author: 执子之手_0a0f | Published 2019-05-08 23:13

    Integrating ELK with Spring Boot for Distributed Log Collection

    Author: Bob Zhang01
    A couple of days ago I wanted to use ELK to collect logs in a project. I spent a day digging through documentation and stepping into quite a few pitfalls before arriving at a working setup. This article covers installing Elasticsearch, Logstash, and Kibana, configuring them, and integrating them with Spring Boot. It is split into an installation part and a code part (the code comes last) and is aimed at beginners; corrections are welcome.
    1. ELK Overview

    Since the stack is written in Java and runs on the JVM, it depends on the JDK, so install a JDK first. My version is jdk 1.8.0_191.

    Elasticsearch (ES for short) is a distributed, Lucene-based full-text search engine with a RESTful web interface. It is also a document-oriented data store, and here it is used to store the log entries.

    Installation: I'm on a Mac, so I installed it with Homebrew:

    $ brew install elasticsearch
    

    Installation on Linux and Windows is just as straightforward. The main work is editing the config file: open /config/elasticsearch.yml under the install directory and change four settings (the lines that are not commented out with #):

    1. cluster.name: elasticsearch_toudaizhi (the default name is fine; mine defaulted to elasticsearch_toudaizhi)

    2. path.data: /usr/local/var/lib/elasticsearch_toudaizhi/

    3. path.logs: /usr/local/var/log/elasticsearch/

    4. network.host: 192.168.0.1 (replace this with your own IP)

    # ==================Elasticsearch Configuration =====================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    cluster.name: elasticsearch_toudaizhi
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    #node.name: node-1
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    path.data: /usr/local/var/lib/elasticsearch_toudaizhi/
    #
    # Path to log files:
    #
    path.logs: /usr/local/var/log/elasticsearch/
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: 192.168.0.1
    #
    # Set a custom port for HTTP:
    #
    #http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when new node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.zen.ping.unicast.hosts: ["host1", "host2"]
    #
    # Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
    #
    #discovery.zen.minimum_master_nodes:
    #
    # For more information, consult the zen discovery module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    

    Save the changes and start Elasticsearch:

    ./elasticsearch
    

    Open http://your-ip:9200 in a browser (use your own IP, and make sure this URL actually works, because it is needed in several later steps). A healthy node answers with a small JSON document containing the node name, cluster name, and version, which means the installation succeeded.

    (image: JSON response from Elasticsearch)
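    The same check works from the command line. This is just a sketch; your-ip is a placeholder for the network.host value you configured, and it assumes the node is already running:

    ```shell
    curl http://your-ip:9200
    ```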

    Some posts mention that Elasticsearch refuses to start as root; I didn't run into this, and there are plenty of documented fixes, so I'll skip it here.

    Logstash is an open-source server-side data processing pipeline that can ingest data from multiple sources simultaneously and transform it before shipping it on.

    Installation: $ brew install logstash

    After installation there is no existing config file to modify; instead you add one. Create a conf directory under /bin/ in the install directory, and inside it create a file named logstash-indexer.conf (the name is up to you) with the following contents:

    input {
      file {
        path => ["/users/toudaizhi/log/logstash/info.*.log","/users/toudaizhi/log/logstash/error.*.log"]
      }
    }
    output {
      elasticsearch {
        hosts => ["your-ip:9200"]
      }
      stdout {
        codec => rubydebug
      }
    }
    

    Here, replace the path => value ["/users/toudaizhi/log/logstash/info.*.log","/users/toudaizhi/log/logstash/error.*.log"] with the locations of your own log files (note the * wildcards).

    Because my logs roll over by date, I configured the Logstash input as file. A tcp input works as well:

    tcp {
      port => 4560
      codec => json_lines
    }
    

    Note: the file locations and names must match the log directory and file names configured in your Spring Boot application.
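    For the file input used here, the logging side only has to write to files that match those patterns. A minimal logback-spring.xml sketch that produces one daily file matching info.*.log; the directory is the hypothetical one from logstash-indexer.conf above, so adjust it to your own project:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <!-- With no <file> element, the active file itself follows this
               pattern, so every file Logback writes matches info.*.log -->
          <fileNamePattern>/users/toudaizhi/log/logstash/info.%d{yyyy-MM-dd}.log</fileNamePattern>
        </rollingPolicy>
        <encoder>
          <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="INFO_FILE"/>
      </root>
    </configuration>
    ```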

    Start it:

    ./logstash -f conf/logstash-indexer.conf
    

    When the terminal shows Successfully started Logstash API endpoint {:port=>9600}, Logstash is up.
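    If startup fails instead, Logstash can check the pipeline file for syntax errors without actually starting the pipeline (run from the same bin directory):

    ```shell
    ./logstash -f conf/logstash-indexer.conf --config.test_and_exit
    ```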

    Now open one of the files matched by path => in logstash-indexer.conf and append a line, for example: Hi,What are you doing?
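    For example, from the shell (the path below is a stand-in for the real watched path from logstash-indexer.conf; substitute your own):

    ```shell
    # Append a test line to a file watched by the Logstash file input.
    LOG=/tmp/logstash-demo/info.2019-04-29.log   # stand-in path
    mkdir -p "$(dirname "$LOG")"
    echo "Hi,What are you doing?" >> "$LOG"
    cat "$LOG"
    ```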

    Logstash prints the event:

    (image: Logstash console output for the new line)

    Kibana provides a searchable web visualization UI that makes browsing the data simple and convenient.

    Installation:

    $ brew install kibana
    

    Configure it: open /config/kibana.yml under the install directory and change:

    1. server.port: 5601 (uncomment it)

    2. server.host (uncomment it and set it to your own IP)

    3. elasticsearch.hosts: "http://your-ip:9200" (uncomment it and substitute your own IP)

    # Kibana is served by a back end server. This setting specifies the port to use.
    server.port: 5601
    
    # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
    # The default is 'localhost', which usually means remote machines will not be able to connect.
    # To allow connections from remote users, set this parameter to a non-loopback address.
    server.host: your-ip
    
    # Enables you to specify a path to mount Kibana at if you are running behind a proxy.
    # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
    # from requests it receives, and to prevent a deprecation warning at startup.
    # This setting cannot end in a slash.
    #server.basePath: ""
    
    # Specifies whether Kibana should rewrite requests that are prefixed with
    # `server.basePath` or require that they are rewritten by your reverse proxy.
    # This setting was effectively always `false` before Kibana 6.3 and will
    # default to `true` starting in Kibana 7.0.
    #server.rewriteBasePath: false
    
    # The maximum payload size in bytes for incoming server requests.
    #server.maxPayloadBytes: 1048576
    
    # The Kibana server's name.  This is used for display purposes.
    #server.name: "your-hostname"
    
    # The URLs of the Elasticsearch instances to use for all your queries.
    elasticsearch.hosts: "http://your-ip:9200"
    
    # When this setting's value is true Kibana uses the hostname specified in the server.host
    # setting. When the value of this setting is false, Kibana uses the hostname of the host
    # that connects to this Kibana instance.
    #elasticsearch.preserveHost: true
    
    # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
    # dashboards. Kibana creates a new index if the index doesn't already exist.
    #kibana.index: ".kibana"
    
    # The default application to load.
    #kibana.defaultAppId: "home"
    
    # If your Elasticsearch is protected with basic authentication, these settings provide
    # the username and password that the Kibana server uses to perform maintenance on the Kibana
    # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
    # is proxied through the Kibana server.
    #elasticsearch.username: "user"
    #elasticsearch.password: "pass"
    
    # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
    # These settings enable SSL for outgoing requests from the Kibana server to the browser.
    #server.ssl.enabled: false
    #server.ssl.certificate: /path/to/your/server.crt
    #server.ssl.key: /path/to/your/server.key
    
    # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
    # These files validate that your Elasticsearch backend uses the same key files.
    #elasticsearch.ssl.certificate: /path/to/your/client.crt
    #elasticsearch.ssl.key: /path/to/your/client.key
    
    # Optional setting that enables you to specify a path to the PEM file for the certificate
    # authority for your Elasticsearch instance.
    #elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
    
    # To disregard the validity of SSL certificates, change this setting's value to 'none'.
    #elasticsearch.ssl.verificationMode: full
    
    # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
    # the elasticsearch.requestTimeout setting.
    

    After saving the changes, open http://your-ip:5601 in a browser.

    On the first visit:

    (image: Kibana landing page)

    Click the first button to enter:

    (image: Kibana home page)

    On first entry, Kibana prompts:

    In order to visualize and explore data in Kibana, you'll need to create an index pattern to retrieve data from Elasticsearch.

    In other words, you need to create an index pattern.

    (image: Create index pattern screen)

    I used the second suggestion, logstash-2019.04.29, which follows Logstash's default index naming (one index per day).

    After typing logstash-2019.04.29, the Next step button becomes clickable.

    (image: index pattern name entered, Next step enabled)

    Next,

    (image: Configure settings step)

    Click Create index pattern,

    (image: index pattern created)

    Click Discover, and your log lines appear in the message field.

    (image: Discover view showing log messages)

    2. The Code

    This is my first post on Jianshu and I haven't got the formatting under control, so I pushed the code to GitHub:

    https://github.com/ZhangBob01/demo.git

    Just replace the IP placeholder in logback-spring.xml with your own.

    Source: https://www.haomeiwen.com/subject/ttiboqtx.html