Shipping logs from Filebeat to Elasticsearch


Author: 没心没肺最开心 | Published 2021-09-29 18:09

    This is just a quick demo; for the full details, the official docs are still the place to go, and they are very good.

    Prerequisite: download and install the Filebeat release that matches your Elasticsearch version.
    The version used here is 6.7.
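
    For example, on Linux x86_64 the tar.gz distribution can be fetched from the Elastic artifacts site (6.7.0 shown here; adjust the version and platform to match your environment):

    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-linux-x86_64.tar.gz
    tar xzvf filebeat-6.7.0-linux-x86_64.tar.gz
    cd filebeat-6.7.0-linux-x86_64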

    Step 1: the configuration file, in full:

    filebeat.inputs:
    
    
    - type: log
    
      # Change to true to enable this input configuration.
      enabled: true
    
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /var/log/phpjob/sdk_partner-fpm-*.log
        #- c:\programdata\elasticsearch\logs\*
      # Decode each line as JSON and lift the decoded keys to the top
      # level of the event; record a parse error field on failure.
      json.keys_under_root: true
      json.add_error_key: true
    
      # Exclude lines. A list of regular expressions to match. It drops the lines that are
      # matching any regular expression from the list.
      #exclude_lines: ['^DBG']
    
      # Include lines. A list of regular expressions to match. It exports the lines that are
      # matching any regular expression from the list.
      #include_lines: ['^ERR', '^WARN']
    
      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      #exclude_files: ['.gz$']
    
      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering
      #fields:
      #  level: debug
      #  review: 1
    
      ### Multiline options
    
      # Multiline can be used for log messages spanning multiple lines. This is common
      # for Java Stack Traces or C-Line Continuation
    
      # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
      #multiline.pattern: ^\[
    
      # Defines if the pattern set under pattern should be negated or not. Default is false.
      #multiline.negate: false
    
      # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
      # that was (not) matched before or after or as long as a pattern is not matched based on negate.
      # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
      #multiline.match: after
    
    
    
    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml
    
      # Set to true to enable config reloading
      reload.enabled: false
    
      # Period on which files under path should be checked for changes
      #reload.period: 10s
    
    
    setup.template.settings:
      index.number_of_shards: 3
      #index.codec: best_compression
      #_source.enabled: false
    output.elasticsearch.index: "aisoulog-%{[beat.version]}-%{+yyyy.MM.dd}"
    setup.template.name: "aisoulog"
    setup.template.pattern: "aisoulog-*"

    setup.kibana:
      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify an additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      #host: "192.169.108.172:5601"

      # Kibana Space ID
      # ID of the Kibana Space into which the dashboards should be loaded. By default,
      # the Default Space will be used.
      #space.id:
    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["es_addr:9200"]
    
      # Enable ILM (beta) to use index lifecycle management instead of daily indices.
      #ilm.enabled: false
    
      # Optional protocol and basic auth credentials.
      protocol: "http"
      username: "elastic"
      password: "password"
    
    # The Logstash hosts (alternative output; left commented out since logs
    # are shipped directly to Elasticsearch above)
    #output.logstash:
      #hosts: ["localhost:5044"]
    
      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
    
      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"
    
      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"
    
    
    
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
    

    This is a simple configuration based on the official sample config file.
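
    Before starting, Filebeat can check the config syntax and the connection to Elasticsearch (assuming the file is saved as aisoulog.yml, the name used in the startup command below):

    ./filebeat test config -c aisoulog.yml
    ./filebeat test output -c aisoulog.yml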

    If your log files are JSON, setting either of the following options enables JSON parsing (specifying at least one json.* option turns on JSON decoding):

    json.keys_under_root: true
    json.add_error_key: true
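
    For instance, for a hypothetical log line such as the one below, json.keys_under_root places the decoded fields (level, msg, partner_id) at the top level of the event rather than under a json sub-object, and json.add_error_key adds an error field whenever a line fails to parse:

    {"level": "error", "msg": "sdk callback failed", "partner_id": 42}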

    I generally prefer to give the log index its own name; the fields configuration file (fields.yml) can be left at its default. Note that this requires Elasticsearch to allow automatic index creation.

    output.elasticsearch.index: "aisoulog-%{[beat.version]}-%{+yyyy.MM.dd}"
    setup.template.name: "aisoulog"
    setup.template.pattern: "aisoulog-*"
    aisoulog is just a name; replace it with your own.
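
    If automatic index creation is restricted on your cluster, it can be allowed for this pattern through the cluster settings API (the host and credentials below are the ones from the config above; also note that this value replaces the existing allow-list, so merge it with any patterns you already rely on):

    curl -u elastic:password -XPUT "http://es_addr:9200/_cluster/settings" \
      -H 'Content-Type: application/json' \
      -d '{"persistent": {"action.auto_create_index": "aisoulog-*"}}'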

    Startup command:

    ./filebeat -e -c aisoulog.yml
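
    Once it is running, you can confirm that the daily index exists and is receiving documents (again using the host and credentials from the config above):

    curl -u elastic:password "http://es_addr:9200/_cat/indices/aisoulog-*?v"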

    To run it in the background, nohup is enough; you can also register it as a system service, as sketched below.

    nohup ./filebeat -e -c aisoulog.yml 2>&1 &
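
    If Filebeat was installed from the official deb/rpm package rather than the tar.gz, it already ships with a systemd unit (expecting its config at /etc/filebeat/filebeat.yml), so it can be managed as a service:

    sudo systemctl enable filebeat
    sudo systemctl start filebeat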
