This time our goal is to build an ELK stack with the following structure:

Filebeat --> Logstash --> Elasticsearch <-- Kibana

- Filebeat reads the log file and ships it to Logstash
- Logstash forwards the received data to Elasticsearch
- Kibana queries the data from Elasticsearch
First, pull the required images:
docker pull docker.elastic.co/beats/filebeat:7.4.2
docker pull docker.elastic.co/logstash/logstash:7.4.2
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.4.2
docker pull docker.elastic.co/kibana/kibana:7.4.2
mkdir filebeat
cd filebeat
Create a dummy log file.
File name: jpx.log
95.213.177.126 - - [18/Jul/2017:00:01:09 +0800] "POST http://check.proxyradar.com/azenv.php HTTP/1.1" 404 326 "https://proxyradar.com/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)" "-"
202.108.211.56 - - [18/Jul/2017:00:03:23 +0800] "GET http://1.1.1.1/ HTTP/1.1" 200 6228 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.21 (KHTML, like Gecko) Chrome/19.0.1042.0 Safari/535.21" "-"
221.228.109.90 - - [18/Jul/2017:01:52:17 +0800] "GET http://www.sharkyun.com/ HTTP/1.1" 200 6228 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
221.228.109.90 - - [18/Jul/2017:01:52:17 +0800] "GET http://www.sharkyun.com/css/style_eeoweb.css HTTP/1.1" 200 11988 "https://www.sharkyun.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
221.228.109.90 - - [18/Jul/2017:01:52:18 +0800] "GET http://www.sharkyun.com/mobile/js/deviceType.js HTTP/1.1" 200 1055 "https://www.sharkyun.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
221.228.109.90 - - [18/Jul/2017:01:52:18 +0800] "GET http://www.sharkyun.com/js/jplayer/skin/black/css/style.css HTTP/1.1" 200 3339 "https://www.sharkyun.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
221.228.109.90 - - [18/Jul/2017:01:52:18 +0800] "GET http://www.sharkyun.com/js/index_eeoweb.js HTTP/1.1" 200 910 "https://www.sharkyun.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
221.228.109.90 - - [18/Jul/2017:01:52:18 +0800] "GET http://www.sharkyun.com/js/easySlider.js HTTP/1.1" 200 2431 "https://www.sharkyun.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
221.228.109.90 - - [18/Jul/2017:01:52:18 +0800] "GET http://www.sharkyun.com/js/require_eeoweb.js HTTP/1.1" 200 7161 "https://www.sharkyun.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
221.228.109.90 - - [18/Jul/2017:01:52:18 +0800] "GET http://www.sharkyun.com/js/jquery.js HTTP/1.1" 200 46467 "https://www.sharkyun.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0" "119.61.20.114"
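As a quick sanity check of the log format, the key fields of one of these combined-format lines can be pulled out positionally with awk (a rough sketch only; in the real pipeline Logstash would do the parsing):

```shell
# rough field extraction from one combined-log line (whitespace-split)
line='221.228.109.90 - - [18/Jul/2017:01:52:17 +0800] "GET http://www.sharkyun.com/ HTTP/1.1" 200 6228 "-" "Mozilla/5.0" "119.61.20.114"'
# $1 = client IP, $9 = status code, $10 = response bytes
echo "$line" | awk '{print $1, $9, $10}'
# → 221.228.109.90 200 6228
```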
Write the Filebeat configuration file.
The config file path inside the container is /usr/share/filebeat/filebeat.yml.
Here we mount the log file into the container as /jpx.log,
have Filebeat read it, and ship the entries to Logstash.
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
- add_cloud_metadata: ~

filebeat.inputs:
- type: log
  paths:
    - /*.log

output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5044"]

# For troubleshooting, you can send the output to the console instead and inspect it:
#output.console:
#  pretty: true
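The input path /*.log is a glob: any file ending in .log directly under the filesystem root is picked up, which is why mounting the log as /jpx.log works. A small sketch of the same glob behavior, using a temp directory standing in for / (file names here are hypothetical):

```shell
# create a stand-in directory with one matching and one non-matching file
tmp=$(mktemp -d)
touch "$tmp/jpx.log" "$tmp/notes.txt"
ls "$tmp"/*.log   # only jpx.log matches the *.log glob
rm -rf "$tmp"
```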
Configure Logstash
cd ..
mkdir logstash
cd logstash
Write the Logstash configuration file.
The Logstash pipeline config path is /usr/share/logstash/pipeline/xxx.conf.
File name: logstash_stdout.conf
input {
  beats {
    port => 5044        # port to listen on
    host => "0.0.0.0"   # local address to bind; 0.0.0.0 means all addresses
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }  # if something goes wrong, uncomment this line to debug
}
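The index setting above expands per event: %{[@metadata][beat]} and %{[@metadata][version]} come from the shipper (e.g. filebeat-7.4.2-...), and %{+YYYY.MM.dd} is the event date in Joda format, giving one index per day. Assuming GNU date, the date portion is equivalent to:

```shell
# Logstash's %{+YYYY.MM.dd} for an event dated 2017-07-18
# corresponds to this strftime pattern (GNU date assumed):
date -d "2017-07-18" +%Y.%m.%d
# → 2017.07.18
```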
Logstash now forwards everything it receives to Elasticsearch. Elasticsearch itself can run with the container defaults, so all that is left is to configure Kibana.
cd ..
mkdir kibana
cd kibana
Kibana configuration file
For testing purposes the Kibana container can actually run as-is,
because in the default configuration the cluster URL is already http://elasticsearch:9200.
Below is the default configuration file inside the container:
/usr/share/kibana/config/kibana.yml
---
# Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
#elasticsearch.url: http://elasticsearch:9200
xpack.monitoring.ui.container.elasticsearch.enabled: true
The complete corresponding docker-compose file:
version: "3.2"
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.4.2
    container_name: kibana
    networks:
      - "elk-net"
    volumes:
      - "./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml"
    ports:
      - "5601:5601"
    depends_on:
      - "elasticsearch"
      - "filebeat"
      - "logstash"
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.4.2
    container_name: filebeat
    networks:
      - "elk-net"
    volumes:
      - "./filebeat/jpx.log:/jpx.log"
      - "./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.4.2
    container_name: logstash
    networks:
      - "elk-net"
    volumes:
      - type: bind
        source: "./logstash/logstash_stdout.conf"
        target: "/usr/share/logstash/pipeline/logstash.conf"
    depends_on:
      - "filebeat"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
    container_name: elasticsearch
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - "elk-net"
    depends_on:
      - "logstash"
      - "filebeat"

volumes:
  data01:
    driver: local

networks:
  elk-net:
Note the startup order of the containers here.
Once everything is up, visit ip:9200 and ip:5601 to see the corresponding web interfaces.