Overview
The requirement is to collect nginx, Apache, MySQL, system, and other logs into Elasticsearch, with a separate index per server (each server's logs have a different retention period).
This is implemented mainly with tags: a single Filebeat input can carry multiple tags.
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/www/logs/apache_log/access/*log*
  tags: ["10.244_apache_access","apache"]
- type: log
  enabled: true
  paths:
    - /data/www/logs/apache_log/error/*log*
  tags: ["10.244_apache_error","apache"]

output.logstash:
  hosts: ["192.168.12.33:5044"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
Each input carries two tags: 10.244_apache_access is used in the Logstash output to route events to their own index, while apache is used in the filter stage, e.g. one grok pattern for the apache class of logs and another for the nginx class.
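The nginx server (10.14) that appears in the Logstash output below would run a Filebeat config of the same shape, just with its own tags. A sketch only; the nginx log paths here are assumptions and must be replaced with the real locations on that host:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/www/logs/nginx_log/access/*log*   # assumed path
  tags: ["10.14_nginx_access","nginx"]
- type: log
  enabled: true
  paths:
    - /data/www/logs/nginx_log/error/*log*    # assumed path
  tags: ["10.14_nginx_error","nginx"]

output.logstash:
  hosts: ["192.168.12.33:5044"]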
logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  if "apache" in [tags] {
    grok {
      patterns_dir => "/data/elk/logstash-7.5.0/patterns"
      match => {
        "message" => "%{APACHEACCESS}"
      }
    }
  }
  if "nginx" in [tags] {
    grok {
      patterns_dir => "/data/elk/logstash-7.5.0/patterns"
      match => {
        "message" => "%{NGINXACCESS}"
      }
    }
    geoip {
      source => "remote_addr"
      target => "geoip"
      fields => ["city_name","region_name","country_name","ip"]
    }
  }
}

output {
  if "10.14_nginx_access" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.112.33:9200"]
      index => "192.168.10.14-nginx-access-logstash-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "xxxxx"
    }
  }
  if "10.14_nginx_error" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.112.33:9200"]
      index => "192.168.10.14-nginx-error-logstash-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "xxxxxx"
    }
  }
  if "10.244_apache_access" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.112.33:9200"]
      index => "192.168.10.244-apache-access-logstash-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "xxxxxx"
    }
  }
  if "10.244_apache_error" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.112.33:9200"]
      index => "192.168.10.244-apache-error-logstash-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "xxxxxx"
    }
  }
}
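%{APACHEACCESS} and %{NGINXACCESS} are not Logstash built-ins; they are custom patterns loaded from the patterns_dir shown above. A minimal sketch of such a pattern file, assuming the Apache combined log format and the default nginx access log_format (the captured remote_addr field is what the geoip filter above reads; adjust the fields to your actual log formats):

# /data/elk/logstash-7.5.0/patterns/custom  (file name is arbitrary; sketch only)
APACHEACCESS %{COMBINEDAPACHELOG}
NGINXACCESS %{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:status} %{NUMBER:body_bytes_sent} "%{DATA:http_referer}" "%{DATA:http_user_agent}"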
- In this way Elasticsearch ends up with a separate daily index per server, which makes scheduled cleanup easy (see the cron sketch after this list).
- Each log category is parsed with its own grok pattern.
- The geoip filter is added so the location of public IP addresses can be displayed.
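A scheduled cleanup can be as simple as a cron job that deletes a server's daily index once it passes that server's retention period. A sketch only, assuming GNU date, a 7-day retention for the 10.244 apache indices, and the Elasticsearch host and credentials from the output above (in a crontab, % must be escaped as \%):

# Run nightly at 02:00: delete the 7-day-old daily apache indices of 192.168.10.244
0 2 * * * curl -s -u elastic:xxxxxx -X DELETE "http://192.168.112.33:9200/192.168.10.244-apache-*-logstash-$(date -d '7 days ago' +\%Y.\%m.\%d)"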