ELK Installation

Author: 习惯了沉默乄 | Published 2019-11-22 16:33

    A record of the ELK deployment process.

    I. Deploy Elasticsearch

    Download and extract Elasticsearch

    wget -b https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.2-linux-x86_64.tar.gz
    tar -zxvf elasticsearch-7.4.2-linux-x86_64.tar.gz
    

    1. Deploy the master node

    Create the master directory

    mv elasticsearch-7.4.2 es_master
    cd es_master
    

    Edit the Elasticsearch configuration file

    vi config/elasticsearch.yml
    
    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    cluster.name: elasticsearch
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    node.name: master
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    #path.data: /path/to/data
    #
    # Path to log files:
    #
    #path.logs: /path/to/logs
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: <IP address>
    #
    # Set a custom port for HTTP:
    #
    http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when this node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.seed_hosts: ["host1", "host2"]
    #
    # Bootstrap the cluster using an initial set of master-eligible nodes:
    #
    cluster.initial_master_nodes: ["master"]
    #
    # For more information, consult the discovery and cluster formation module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    #
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    node.master: true
    
    

    If the machine has limited memory, you can reduce the JVM heap size

    vi config/jvm.options
    
    -Xms1g
    -Xmx1g
    

    Start the master node

    bin/elasticsearch -d
    

    Check whether the master node started successfully


    (Screenshot: master node)
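
    For example, you can confirm the master is up with a quick HTTP request (a minimal check; <IP address> stands for the host set in network.host above):

    curl http://<IP address>:9200
    curl http://<IP address>:9200/_cat/health?v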

    2. Deploy the slave1 node

    Create the slave directory

    mkdir es_slave
    cp elasticsearch-7.4.2-linux-x86_64.tar.gz ./es_slave
    cd es_slave
    tar -zxvf elasticsearch-7.4.2-linux-x86_64.tar.gz
    mv elasticsearch-7.4.2 es_slave1
    

    Edit the Elasticsearch configuration

    vi config/elasticsearch.yml
    
    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    cluster.name: elasticsearch
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    node.name: slave1
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    #path.data: /path/to/data
    #
    # Path to log files:
    #
    #path.logs: /path/to/logs
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: <IP address>
    #
    # Set a custom port for HTTP:
    #
    http.port: 8200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when this node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.seed_hosts: ["host1", "host2"]
    #
    # Bootstrap the cluster using an initial set of master-eligible nodes:
    #
    #cluster.initial_master_nodes: ["node-1", "node-2"]
    #
    # For more information, consult the discovery and cluster formation module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    #
    discovery.zen.ping.unicast.hosts: ["<IP address>"]
    

    Start the slave1 node

    bin/elasticsearch -d
    

    Check whether slave1 started successfully


    (Screenshot: slave1 node)
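
    A quick way to confirm slave1 has joined the cluster is to query cluster health on its own HTTP port (8200 here); the node count should now be 2 (<IP address> assumed):

    curl http://<IP address>:8200/_cluster/health?pretty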

    3. Deploy the slave2 node

    tar -zxvf elasticsearch-7.4.2-linux-x86_64.tar.gz
    mv elasticsearch-7.4.2 es_slave2
    

    Edit the Elasticsearch configuration file

    vi config/elasticsearch.yml
    
    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    cluster.name: elasticsearch
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    node.name: slave2
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    #path.data: /path/to/data
    #
    # Path to log files:
    #
    #path.logs: /path/to/logs
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: <IP address>
    #
    # Set a custom port for HTTP:
    #
    http.port: 7200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when this node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.seed_hosts: ["host1", "host2"]
    #
    # Bootstrap the cluster using an initial set of master-eligible nodes:
    #
    #cluster.initial_master_nodes: ["node-1", "node-2"]
    #
    # For more information, consult the discovery and cluster formation module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    #
    discovery.zen.ping.unicast.hosts: ["<IP address>"]
    

    Start the slave2 node

    bin/elasticsearch -d
    

    Check whether slave2 started successfully


    (Screenshot: slave2 node)
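
    Once all three nodes are running, you can list them through any node; the output should include master, slave1, and slave2 (<IP address> assumed):

    curl http://<IP address>:9200/_cat/nodes?v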

    II. Deploy the Elasticsearch-head plugin

    Install Node.js and npm

    wget -b https://nodejs.org/download/release/v10.14.1/node-v10.14.1-linux-x64.tar.gz
    tar -zxvf node-v10.14.1-linux-x64.tar.gz
    mv node-v10.14.1-linux-x64 /usr/local/node
    vi /etc/profile
    
    export NODE_HOME=/usr/local/node
    export PATH=$NODE_HOME/bin:$PATH
    
    source /etc/profile
    

    Install cnpm

    npm install -g cnpm --registry=https://registry.npm.taobao.org
    

    Verify

    npm -v
    6.4.1
    
    cnpm -v
    cnpm@6.1.0 (/usr/local/node/lib/node_modules/cnpm/lib/parse_argv.js)
    npm@6.13.0 (/usr/local/node/lib/node_modules/cnpm/node_modules/npm/lib/npm.js)
    node@10.14.1 (/usr/local/node/bin/node)
    npminstall@3.23.0 (/usr/local/node/lib/node_modules/cnpm/node_modules/npminstall/lib/index.js)
    prefix=/usr/local/node 
    linux x64 3.10.0-957.21.3.el7.x86_64 
    registry=https://r.npm.taobao.org
    

    Download the Elasticsearch-head plugin

    wget -b https://github.com/mobz/elasticsearch-head/archive/v5.0.0.tar.gz
    tar -zxvf v5.0.0.tar.gz
    

    Install elasticsearch-head

    cd elasticsearch-head-5.0.0
    cnpm install
    

    Start elasticsearch-head

    nohup cnpm run start >> elasticsearch-head.log &
    

    Verify


    (Screenshot: Elasticsearch-head)
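
    elasticsearch-head listens on port 9100 by default, so as a rough check you can request the page from the command line, or open http://<IP address>:9100 in a browser and connect it to http://<IP address>:9200:

    curl -I http://<IP address>:9100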

    III. Deploy Kibana

    Download Kibana

    wget -b https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-linux-x86_64.tar.gz
    tar -zxvf kibana-7.4.2-linux-x86_64.tar.gz
    

    Edit the Kibana configuration

    vi config/kibana.yml
    
    # Kibana is served by a back end server. This setting specifies the port to use.
    server.port: 5601
    
    # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
    # The default is 'localhost', which usually means remote machines will not be able to connect.
    # To allow connections from remote users, set this parameter to a non-loopback address.
    server.host: "<IP address>"
    
    # Enables you to specify a path to mount Kibana at if you are running behind a proxy.
    # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
    # from requests it receives, and to prevent a deprecation warning at startup.
    # This setting cannot end in a slash.
    #server.basePath: ""
    
    # Specifies whether Kibana should rewrite requests that are prefixed with
    # `server.basePath` or require that they are rewritten by your reverse proxy.
    # This setting was effectively always `false` before Kibana 6.3 and will
    # default to `true` starting in Kibana 7.0.
    #server.rewriteBasePath: false
    
    # The maximum payload size in bytes for incoming server requests.
    #server.maxPayloadBytes: 1048576
    
    # The Kibana server's name.  This is used for display purposes.
    #server.name: "your-hostname"
    
    # The URLs of the Elasticsearch instances to use for all your queries.
    elasticsearch.hosts: ["http://<IP address>:9200", "http://<IP address>:8200", "http://<IP address>:7200"]
    #
    # When this setting's value is true Kibana uses the hostname specified in the server.host
    # setting. When the value of this setting is false, Kibana uses the hostname of the host
    # that connects to this Kibana instance.
    #elasticsearch.preserveHost: true
    
    # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
    # dashboards. Kibana creates a new index if the index doesn't already exist.
    #kibana.index: ".kibana"
    
    # The default application to load.
    #kibana.defaultAppId: "home"
    
    # If your Elasticsearch is protected with basic authentication, these settings provide
    # the username and password that the Kibana server uses to perform maintenance on the Kibana
    # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
    # is proxied through the Kibana server.
    #elasticsearch.username: "kibana"
    #elasticsearch.password: "pass"
    
    # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
    # These settings enable SSL for outgoing requests from the Kibana server to the browser.
    #server.ssl.enabled: false
    #server.ssl.certificate: /path/to/your/server.crt
    #server.ssl.key: /path/to/your/server.key
    
    # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
    # These files validate that your Elasticsearch backend uses the same key files.
    #elasticsearch.ssl.certificate: /path/to/your/client.crt
    #elasticsearch.ssl.key: /path/to/your/client.key
    
    # Optional setting that enables you to specify a path to the PEM file for the certificate
    # authority for your Elasticsearch instance.
    #elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
    
    # To disregard the validity of SSL certificates, change this setting's value to 'none'.
    #elasticsearch.ssl.verificationMode: full
    
    # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
    # the elasticsearch.requestTimeout setting.
    #elasticsearch.pingTimeout: 1500
    
    # Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
    # must be a positive integer.
    #elasticsearch.requestTimeout: 30000
    
    # List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
    # headers, set this value to [] (an empty list).
    #elasticsearch.requestHeadersWhitelist: [ authorization ]
    
    # Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
    # by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
    #elasticsearch.customHeaders: {}
    
    # Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
    #elasticsearch.shardTimeout: 30000
    
    # Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
    #elasticsearch.startupTimeout: 5000
    
    # Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
    #elasticsearch.logQueries: false
    
    # Specifies the path where Kibana creates the process ID file.
    #pid.file: /var/run/kibana.pid
    
    # Enables you specify a file where Kibana stores log output.
    #logging.dest: stdout
    
    # Set the value of this setting to true to suppress all logging output.
    #logging.silent: false
    
    # Set the value of this setting to true to suppress all logging output other than error messages.
    #logging.quiet: false
    
    # Set the value of this setting to true to log all events, including system usage information
    # and all requests.
    #logging.verbose: false
    
    # Set the interval in milliseconds to sample system and process performance
    # metrics. Minimum is 100ms. Defaults to 5000.
    #ops.interval: 5000
    
    # Specifies locale to be used for all localizable strings, dates and number formats.
    # Supported languages are the following: English - en , by default , Chinese - zh-CN .
    #i18n.locale: "en"
    

    Start Kibana

    nohup bin/kibana >> kibana.log &
    

    Verify


    (Screenshot: Kibana)
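
    Besides opening http://<IP address>:5601 in a browser, a simple sanity check is Kibana's status API:

    curl http://<IP address>:5601/api/status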

    IV. Deploy Kafka

    Download Kafka

    wget -b https://archive.apache.org/dist/kafka/2.3.1/kafka_2.11-2.3.1.tgz
    tar -zxvf kafka_2.11-2.3.1.tgz
    

    Start ZooKeeper (this runs in the foreground; use a separate terminal or pass -daemon)

    cd kafka_2.11-2.3.1
    bin/zookeeper-server-start.sh config/zookeeper.properties
    

    Start Kafka (likewise in a separate terminal or with -daemon)

    bin/kafka-server-start.sh config/server.properties
    

    Verify


    (Screenshot: Kafka)
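
    As an extra check, you can create the topic that Filebeat will write to later and list the existing topics; the commands below assume the bundled scripts and a broker on localhost:9092:

    bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic nginx
    bin/kafka-topics.sh --list --bootstrap-server localhost:9092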

    V. Deploy Nginx

    Download Nginx

    wget -b http://nginx.org/download/nginx-1.17.5.tar.gz
    tar -zxvf nginx-1.17.5.tar.gz
    

    Compile and install Nginx

    cd nginx-1.17.5
    ./configure  --prefix=/home/elastic/nginx-1.17  --error-log-path=/home/elastic/nginx-1.17/logs/error.log  --http-log-path=/home/elastic/nginx-1.17/logs/access.log  --pid-path=/home/elastic/nginx-1.17/logs/nginx.pid --lock-path=/home/elastic/nginx-1.17/logs/nginx.lock  --user= --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
    make && make install
    

    Edit the Nginx configuration (change the listen port in the server block to 9999)

    cd nginx-1.17
    
    vi conf/nginx.conf
    
    listen       9999;
    

    Start Nginx

    sbin/nginx
    

    Verify


    (Screenshot: Nginx)
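
    With the listen port changed to 9999, a simple request should return the Nginx welcome page (<IP address> assumed):

    curl http://<IP address>:9999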

    VI. Deploy Filebeat

    Upload and extract Filebeat

    tar -zxvf filebeat-7.4.2-linux-x86_64.tar.gz
    

    Create the Filebeat configuration file nginx.yml

    ###################### Filebeat Configuration Example #########################
    
    # This file is an example configuration file highlighting only the most common
    # options. The filebeat.reference.yml file from the same directory contains all the
    # supported options with more comments. You can use it as a reference.
    #
    # You can find the full configuration reference here:
    # https://www.elastic.co/guide/en/beats/filebeat/index.html
    
    # For more available modules and options, please see the filebeat.reference.yml sample
    # configuration file.
    
    #=========================== Filebeat inputs =============================
    
    filebeat.inputs:
    
    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.
    
    - type: log
    
      # Change to true to enable this input configuration.
      enabled: true
    
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /app/udal/nginx-1.17/logs/*.log
        #- c:\programdata\elasticsearch\logs\*
    
    output.kafka:
      enabled: true
      hosts: ["IP地址:9092"]
      topic: nginx
      # Exclude lines. A list of regular expressions to match. It drops the lines that are
      # matching any regular expression from the list.
      #exclude_lines: ['^DBG']
    
      # Include lines. A list of regular expressions to match. It exports the lines that are
      # matching any regular expression from the list.
      #include_lines: ['^ERR', '^WARN']
    
      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      #exclude_files: ['.gz$']
    
      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering
      #fields:
      #  level: debug
      #  review: 1
    
      ### Multiline options
    
      # Multiline can be used for log messages spanning multiple lines. This is common
      # for Java Stack Traces or C-Line Continuation
    
      # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
      #multiline.pattern: ^\[
    
      # Defines if the pattern set under pattern should be negated or not. Default is false.
      #multiline.negate: false
    
      # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
      # that was (not) matched before or after or as long as a pattern is not matched based on negate.
      # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
      #multiline.match: after
    
    
    #============================= Filebeat modules ===============================
    
    #filebeat.config.modules:
      # Glob pattern for configuration loading
    #  path: ${path.config}/modules.d/*.yml
    
      # Set to true to enable config reloading
    #  reload.enabled: false
    
      # Period on which files under path should be checked for changes
      #reload.period: 10s
    
    #==================== Elasticsearch template setting ==========================
    
    #setup.template.settings:
    #  index.number_of_shards: 1
      #index.codec: best_compression
      #_source.enabled: false
    
    #================================ General =====================================
    
    # The name of the shipper that publishes the network data. It can be used to group
    # all the transactions sent by a single shipper in the web interface.
    #name:
    
    # The tags of the shipper are included in their own field with each
    # transaction published.
    #tags: ["service-X", "web-tier"]
    
    # Optional fields that you can specify to add additional information to the
    # output.
    #fields:
    #  env: staging
    
    
    #============================== Dashboards =====================================
    # These settings control loading the sample dashboards to the Kibana index. Loading
    # the dashboards is disabled by default and can be enabled either by setting the
    # options here or by using the `setup` command.
    #setup.dashboards.enabled: false
    
    # The URL from where to download the dashboards archive. By default this URL
    # has a value which is computed based on the Beat name and version. For released
    # versions, this URL points to the dashboard archive on the artifacts.elastic.co
    # website.
    #setup.dashboards.url:
    
    #============================== Kibana =====================================
    
    # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
    # This requires a Kibana endpoint configuration.
    #setup.kibana:
    
      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      #host: "localhost:5601"
    
      # Kibana Space ID
      # ID of the Kibana Space into which the dashboards should be loaded. By default,
      # the Default Space will be used.
      #space.id:
    
    #============================= Elastic Cloud ==================================
    
    # These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
    
    # The cloud.id setting overwrites the `output.elasticsearch.hosts` and
    # `setup.kibana.host` options.
    # You can find the `cloud.id` in the Elastic Cloud web UI.
    #cloud.id:
    
    # The cloud.auth setting overwrites the `output.elasticsearch.username` and
    # `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
    #cloud.auth:
    
    #================================ Outputs =====================================
    
    # Configure what output to use when sending the data collected by the beat.
    
    #-------------------------- Elasticsearch output ------------------------------
    #output.elasticsearch:
      # Array of hosts to connect to.
    #  hosts: ["localhost:9200"]
    
      # Optional protocol and basic auth credentials.
      #protocol: "https"
      #username: "elastic"
      #password: "changeme"
    
    #----------------------------- Logstash output --------------------------------
    #output.logstash:
      # The Logstash hosts
      #hosts: ["localhost:5044"]
    
      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
    
      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"
    
      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"
    
    #================================ Processors =====================================
    
    # Configure processors to enhance or manipulate events generated by the beat.
    
    #processors:
    #  - add_host_metadata: ~
    #  - add_cloud_metadata: ~
    
    #================================ Logging =====================================
    
    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    #logging.level: debug
    
    # At debug level, you can selectively enable logging only for some components.
    # To enable all selectors use ["*"]. Examples of other selectors are "beat",
    # "publish", "service".
    #logging.selectors: ["*"]
    
    #============================== X-Pack Monitoring ===============================
    # filebeat can export internal metrics to a central Elasticsearch monitoring
    # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
    # reporting is disabled by default.
    
    # Set to true to enable the monitoring reporter.
    #monitoring.enabled: false
    
    # Sets the UUID of the Elasticsearch cluster under which monitoring data for this
    # Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
    # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
    #monitoring.cluster_uuid:
    
    # Uncomment to send the metrics to Elasticsearch. Most settings from the
    # Elasticsearch output are accepted here as well.
    # Note that the settings should point to your Elasticsearch *monitoring* cluster.
    # Any setting that is not set is automatically inherited from the Elasticsearch
    # output configuration, so if you have the Elasticsearch output configured such
    # that it is pointing to your Elasticsearch monitoring cluster, you can simply
    # uncomment the following line.
    #monitoring.elasticsearch:
    
    #================================= Migration ==================================
    
    # This allows to enable 6.7 migration aliases
    #migration.6_to_7.enabled: true
    

    Start Filebeat

    nohup ./filebeat -e -c nginx.yml >> nginx.log &
    

    Verify: access Nginx so that it writes log entries, then confirm that the message count of the nginx topic in Kafka increases

    [udal@XNJFZCZX01 kafka_2.11-2.3.1]$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic nginx --time -1
    nginx:0:12
    [udal@XNJFZCZX01 kafka_2.11-2.3.1]$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic nginx --time -1
    nginx:0:15
    [udal@XNJFZCZX01 kafka_2.11-2.3.1]$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic nginx --partition 0 --offset 12 --max-messages 3                           
    {"@timestamp":"2019-11-22T09:36:02.785Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.2","topic":"nginx"},"log":{"file":{"path":"/app/udal/nginx-1.17/logs/access.log"},"offset":1661},"input":{"type":"log"},"ecs":{"version":"1.1.0"},"host":{"name":"XNJFZCZX01"},"agent":{"type":"filebeat","ephemeral_id":"4950c3ca-8e5b-448e-b975-ae7c34970096","hostname":"XNJFZCZX01","id":"13bad6ff-e748-4d12-a6a1-09f90705d71b","version":"7.4.2"},"message":"10.4.12.134 - - [22/Nov/2019:17:35:53 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\""}
    {"@timestamp":"2019-11-22T09:36:02.786Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.2","topic":"nginx"},"message":"10.4.12.134 - - [22/Nov/2019:17:35:53 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\"","input":{"type":"log"},"ecs":{"version":"1.1.0"},"host":{"name":"XNJFZCZX01"},"agent":{"id":"13bad6ff-e748-4d12-a6a1-09f90705d71b","version":"7.4.2","type":"filebeat","ephemeral_id":"4950c3ca-8e5b-448e-b975-ae7c34970096","hostname":"XNJFZCZX01"},"log":{"offset":1851,"file":{"path":"/app/udal/nginx-1.17/logs/access.log"}}}
    {"@timestamp":"2019-11-22T09:36:02.786Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.2","topic":"nginx"},"agent":{"hostname":"XNJFZCZX01","id":"13bad6ff-e748-4d12-a6a1-09f90705d71b","version":"7.4.2","type":"filebeat","ephemeral_id":"4950c3ca-8e5b-448e-b975-ae7c34970096"},"log":{"offset":2041,"file":{"path":"/app/udal/nginx-1.17/logs/access.log"}},"message":"10.4.12.134 - - [22/Nov/2019:17:35:54 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\"","input":{"type":"log"},"ecs":{"version":"1.1.0"},"host":{"name":"XNJFZCZX01"}}
    Processed a total of 3 messages
    

    VII. Deploy Logstash

    Download and extract the Logstash package

    tar -zxvf logstash-7.4.2.tar.gz
    

    Configure Logstash

    vi logstash-7.4.2/config/logstash.conf
    input {
        kafka {
            bootstrap_servers => "<IP address>:9092"
            codec => "json"
            group_id => "logstash"
            topics => ["nginx"]
        }
    }
    
    output {
        elasticsearch{
            hosts => ["IP地址:9200", "IP地址:8200", "IP地址:7200"]
            index => "nginx_logs_index_%{+YYYY.MM.dd}" 
        }
    }
    

    Start Logstash

    nohup bin/logstash -f config/logstash.conf >> logstash.log &
    

    Access Nginx to verify that logs flow into Elasticsearch


    (Screenshot: data in Elasticsearch)
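
    To confirm that Logstash is writing into Elasticsearch, you can also list the daily indices created by the pipeline (<IP address> assumed; the index name follows the pattern configured above):

    curl 'http://<IP address>:9200/_cat/indices/nginx_logs_index_*?v'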
