ElastAlert is an alerting framework from Yelp, written in Python. The GitHub repository is
https://github.com/Yelp/elastalert
ElastAlert works by querying the records stored in Elasticsearch, comparing them against alert rules, and sending alert emails for the log entries that match. The rule types it supports include:
- blacklist: checks a chosen field against a blacklist and matches if the field's value is in the blacklist.
- whitelist: similar to blacklist, this rule compares a chosen field against a whitelist and matches if the value is not in the list.
- change: monitors a chosen field and matches when that field changes; the field must change with respect to the last event with the same query_key.
- frequency: matches when there are at least a certain number of events within a given time frame. The count can be kept separately per query_key.
- spike: matches when the volume of events in a time period is spike_height times larger or smaller than in the previous time period. It uses two sliding windows, called the "reference" and "current" windows, to compare the current and reference event frequency (a minimal sketch follows this list).
- flatline: matches when the total number of events within a time period falls below a given threshold.
- cardinality: matches when the total number of unique values of a particular field within a time frame is higher or lower than a threshold.
- percentage match: matches when the percentage of documents in the match bucket within the calculation window is higher or lower than a threshold. The calculation window defaults to buffer_time.
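For illustration, a minimal spike rule might look like the sketch below; the index name web_logs, the thresholds, and the recipient address are placeholders rather than values from this setup:
name: example_spike
type: spike
index: web_logs
timeframe:
  hours: 1
spike_height: 3        # current window must hold 3x the events of the reference window
spike_type: "up"       # only alert on increases
threshold_cur: 5       # require at least 5 events in the current window before alerting
alert:
  - "email"
email:
  - "admin@example.com"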
The alert methods ElastAlert supports include:
Email
JIRA
OpsGenie
Commands
HipChat
MS Teams
Slack
Telegram
AWS SNS
VictorOps
PagerDuty
Exotel
Twilio
Gitter
You can also develop your own alerter, for example pushing alerts to WeChat through the WeChat API; see:
https://segmentfault.com/a/1190000017553282
https://www.freebuf.com/sectool/164591.html
My goal here is to alert on the running state of our crawlers. The idea is to ship the crawler logs into Logstash with Filebeat, process them, store them in Elasticsearch, and then alert in different ways for the different sites.
Three ElastAlert rules are used: rule1 alerts as soon as the word "error" appears; rule2 alerts when "error" appears five times within one minute; rule3 alerts as soon as the text "已删除" (deleted) appears. All alerts are sent to the system administrators by email.
The steps are as follows:
ELK and ElastAlert
Required software: elasticsearch, logstash, kibana, elastalert
172.17.101.164: elasticsearch, logstash, kibana
172.17.101.166: elastalert
Elasticsearch port: 9200
Logstash port: 9600
Kibana port: 5601
Config file: /etc/*/*.yml
Software file: /usr/share/*/bin/
(* stands for the software name)
Elasticsearch
Version: 6.5.3
Download link: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.3.rpm
Config file: elasticsearch.yml
Path: /etc/elasticsearch/elasticsearch.yml
Log: /var/log/elasticsearch
Data: /var/lib/elasticsearch
Network settings (in elasticsearch.yml):
network.host: 172.17.101.164
http.port: 9200
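Once the service is running, curl http://172.17.101.164:9200 should return the cluster information as JSON, which is a quick way to confirm Elasticsearch is reachable before wiring up Logstash and ElastAlert.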
Logstash
Version: 6.5.3
Download link: https://artifacts.elastic.co/downloads/logstash/logstash-6.5.3.rpm
Config files: logstash.yml and logstash.conf
Path: /etc/logstash/logstash.yml and /etc/logstash/logstash.conf
Logstash.yml
path.data: /var/lib/logstash
http.host: "172.17.101.164"
Logstash.conf
input {
  beats {
    port => 9600               # port 9600 receives everything shipped by Filebeat
  }
}
filter {
  grok {                       # try the three message formats in turn
    match => {
      "message" => [
        "%{DATA:date}]%{DATA:index}]%{WORD:location}]%{GREEDYDATA:information}",
        "%{DATA:date}]%{DATA:index}]%{GREEDYDATA:information}",
        "%{DATA:date} %{DATA:time} -%{WORD:location}-%{WORD:status}-%{GREEDYDATA:information}"
      ]
    }
  }
  mutate {
    remove_field => ['tags','beat','host','message']   # drop fields we do not need
  }
}
output {
  stdout {
    codec => rubydebug { }
  }
  # Route each kind of log to its own index based on the value of fields.service,
  # so that ElastAlert and Kibana can query them separately.
  # Every output points to 172.17.101.164:9200.
  if [fields][service] == "fanwen" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "fanwen"
    }
  }
  if [fields][service] == "changshu" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "changshu"
    }
  }
  if [fields][service] == "gaoxin" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "gaoxin"
    }
  }
  if [fields][service] == "gusu" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "gusu"
    }
  }
  if [fields][service] == "gzjd" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "gzjd"
    }
  }
  if [fields][service] == "hswz" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "hswz"
    }
  }
  if [fields][service] == "kunshan" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "kunshan"
    }
  }
  if [fields][service] == "wujiang" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "wujiang"
    }
  }
  if [fields][service] == "taicang" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "taicang"
    }
  }
  if [fields][service] == "wuzhong" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "wuzhong"
    }
  }
  if [fields][service] == "xiangcheng" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "xiangcheng"
    }
  }
  if [fields][service] == "ylb" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "ylb"
    }
  }
  if [fields][service] == "yuanqu" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "yuanqu"
    }
  }
  if [fields][service] == "zhangjiagang" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "zhangjiagang"
    }
  }
  if [fields][service] == "ip" {
    elasticsearch {
      hosts => "172.17.101.164:9200"
      index => "ip"
    }
  }
}
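Before restarting the service, the pipeline can be syntax-checked with /usr/share/logstash/bin/logstash -f /etc/logstash/logstash.conf --config.test_and_exit (a standard Logstash option); this catches grok and bracket errors early.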
Grok patterns and sample matches
%{DATA:date}]%{DATA:index}]%{WORD:location}]%{GREEDYDATA:information}
  matches: [2018-12-12 23:56:56][0][hswz]ip:12131213232231
%{DATA:date}]%{DATA:index}]%{GREEDYDATA:information}
  matches: [2018-12-12 23:56:56][0]ip:12131213232231
%{DATA:date} %{DATA:time}-%{WORD:location}-%{WORD:status}-%{GREEDYDATA:information}
  matches: 2018-12-12 23:56:56-hswz-INFO-ip:127.1.1.0 (just an example)
Filebeat
Version: 6.5.3
Download link: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.3-x86_64.rpm
Config file: filebeat.yml
Path: /etc/filebeat/filebeat.yml
Filebeat.yml
ipProxy host: 172.17.101.164
filebeat.inputs:
- type: log
  encoding: utf-8
  paths:                       # log files to ship to Logstash
    - /data/ipProxy*
  fields:
    service: ip                # custom field; Logstash routes on it to pick the index
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
output.logstash:
  hosts: ["172.17.101.164:9600"]
yuqing and fanwen host: 172.17.101.240, password: Pass@1234
filebeat.inputs:
- type: log
  encoding: utf-8
  paths:
    - /data/fanwen/*/logs/*
  fields:
    service: fanwen
- type: log
  encoding: utf-8
  paths:
    - /data/yuqing/yuanqu/logs/*
  fields:
    service: yuqing
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
output.logstash:
  hosts: ["172.17.101.164:9600"]
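After editing, filebeat test config -c /etc/filebeat/filebeat.yml checks the syntax, and filebeat test output checks connectivity to the Logstash endpoint; both subcommands are part of Filebeat 6.x.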
Elastalert
Installation guide: http://blog.51cto.com/seekerwolf/2121070
Path: /usr/local/elastalert (on 172.17.101.166)
Run command: python -m elastalert.elastalert --config config.yaml (run from the elastalert directory)
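Before the first run, elastalert-create-index can be used to create the index where ElastAlert keeps its own metadata, and elastalert-test-rule example_rules/<rule>.yaml dry-runs a single rule against Elasticsearch without sending alerts; both utilities ship with the ElastAlert package.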
Config.yaml
# basic settings
rules_folder: /usr/local/elastalert/example_rules
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 1
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: 172.17.101.164
# The Elasticsearch port
es_port: 9200
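The example config.yaml shipped with ElastAlert also declares the index it uses for its own metadata; if it is not already present, it should be added here:
# The index on es_host which is used for metadata storage
writeback_index: elastalert_status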
Rules
- Rule1 fanwen
# alert as soon as status is ERROR
# (Optional)
# Elasticsearch host
es_host: 172.17.101.164
# (Optional)
# Elasticsearch port
es_port: 9200
# (OptionaL) Connect with SSL to Elasticsearch
#use_ssl: True
# (Optional) basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
name: fanwen
type: blacklist
index: fanwen
timeframe:
  minutes: 1
compare_key: status
blacklist:
- "error"
- "ERROR"
smtp_host: smtp.csztv.com
smtp_port: 25
smtp_auth_file: /usr/local/elastalert/example_rules/smtp_auth_file.yaml
email_reply_to: 648833723@qq.com
from_addr: mocha@csztv.com
alert:
- "email"
email:
- "648833723@qq.com"
- "tml@csztv.com"
- "zhangyongshu@csztv.com"
- "limingfeng@csztv.com"
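The smtp_auth_file referenced by the rules is a small YAML file holding the SMTP credentials, along the lines of the sketch below; the user name and password are placeholders:
user: someusername
password: somepassword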
- Rule2 yuqing
# alert when status is ERROR five times within one minute
es_host: 172.17.101.164
es_port: 9200
name: changshu
type: frequency
index: changshu
num_events: 5
timeframe:
  minutes: 1
filter:
- query:
    query_string:
      query: "status: (ERROR OR error)"
smtp_host: smtp.csztv.com
smtp_port: 25
smtp_auth_file: /usr/local/elastalert/example_rules/smtp_auth_file.yaml
email_reply_to: 648833723@qq.com
from_addr: mocha@csztv.com
alert:
- "email"
email:
- "648833723@qq.com"
- "tml@csztv.com"
- "zhangyongshu@csztv.com"
- "limingfeng@csztv.com"
- Rule3 ip
# alert immediately when "已删除" (deleted) appears
# (Optional)
# Elasticsearch host
es_host: 172.17.101.164
# (Optional)
# Elasticsearch port
es_port: 9200
# (OptionaL) Connect with SSL to Elasticsearch
#use_ssl: True
# (Optional) basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
name: ip has run out
type: blacklist
index: ip
timeframe:
  minutes: 1
compare_key: information
blacklist:
- "已删除"
smtp_host: smtp.csztv.com
smtp_port: 25
smtp_auth_file: /usr/local/elastalert/example_rules/smtp_auth_file.yaml
email_reply_to: 648833723@qq.com
from_addr: mocha@csztv.com
alert:
- "email"
email:
- "648833723@qq.com"
- "tml@csztv.com"
- "zhangyongshu@csztv.com"
- "limingfeng@csztv.com"