Service architecture
Elasticsearch + Logstash + Kibana
ELK is a log engine that extracts, analyzes, stores, and visualizes data from log files.
Installing the images with Docker
Pull from the image registry
docker pull elasticsearch:8.7.0
docker pull kibana:8.7.0
docker pull logstash:8.7.0
Configure a China-based registry mirror for Docker to speed up pulls (a sketch of the daemon.json change follows below)
https://blog.csdn.net/just_for_that_moment/article/details/125308103
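A minimal sketch of the mirror setup, assuming you already have a mirror endpoint; the URL below is a placeholder, substitute a real mirror address.
/etc/docker/daemon.json contents:
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
Restart the Docker daemon afterwards:
sudo systemctl restart docker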
Packaging images from another server
On the source server, save each image as a tar archive; on the destination server, load the archive.
docker save elasticsearch:8.7.0 > elasticsearch-8.7.0.tar
docker load -i elasticsearch-8.7.0.tar
docker save kibana:8.7.0 > kibana-8.7.0.tar
docker load -i kibana-8.7.0.tar
docker save logstash:8.7.0 > logstash-8.7.0.tar
docker load -i logstash-8.7.0.tar
List the installed images
docker images
Docker-compose
Write docker-compose.yaml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:8.7.0          # Elasticsearch image
    container_name: elasticsearch       # container name
    restart: always                     # start on boot and keep restarting on failure
    environment:
      - "cluster.name=elasticsearch"    # cluster name
      - "discovery.type=single-node"    # run as a single node
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"  # JVM heap size
    volumes:
      - /data/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins  # plugin directory mount
      - /data/elk/elasticsearch/data:/usr/share/elasticsearch/data        # data directory mount (create on the host first; see the sketch after this file)
    ports:
      - 9200:9200
  kibana:
    image: kibana:8.7.0
    container_name: kibana
    restart: always
    depends_on:
      - elasticsearch                   # start Kibana after Elasticsearch
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200  # Elasticsearch address (ELASTICSEARCH_URL is ignored by Kibana 7+/8)
    ports:
      - 5601:5601
  logstash:
    image: logstash:8.7.0
    container_name: logstash
    restart: always
    volumes:
      - /data/elk/logstash/config/:/usr/share/logstash/config/
      - /data/elk/logstash/pipeline/:/usr/share/logstash/pipeline/
    depends_on:
      - elasticsearch                   # start Logstash after Elasticsearch
    links:
      - elasticsearch:es                # the alias "es" resolves to the elasticsearch service
    ports:
      - 4560:4560
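The compose file also bind-mounts host directories for Elasticsearch; a sketch to create them up front (the chown to 1000:1000 matches the uid/gid the official Elasticsearch image runs as; adjust if your environment differs):
mkdir -p /data/elk/elasticsearch/plugins /data/elk/elasticsearch/data
chown -R 1000:1000 /data/elk/elasticsearch   # the elasticsearch process runs as uid 1000 inside the container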
Create the Logstash mount directories
mkdir -p /data/elk/logstash/config
mkdir -p /data/elk/logstash/pipeline
Create the configuration files
logstash.yml
cd /data/elk/logstash/config
touch logstash.yml
vim logstash.yml
File contents
config:
  reload:
    automatic: true
    interval: 3s
xpack:
  management.enabled: false
  monitoring.enabled: false
pipelines.yml
cd /data/elk/logstash/config
touch pipelines.yml
vim pipelines.yml
File contents (multiple pipelines can be configured here to collect different kinds of input; see the sketch after this block)
- pipeline.id: logstash
  path.config: /usr/share/logstash/pipeline/logstash.conf
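A hedged sketch of what a second pipeline entry could look like; the id app-logs and its conf path are hypothetical and only illustrate the multi-pipeline layout:
- pipeline.id: logstash
  path.config: /usr/share/logstash/pipeline/logstash.conf
- pipeline.id: app-logs                                      # hypothetical second pipeline
  path.config: /usr/share/logstash/pipeline/app-logs.conf    # hypothetical pipeline definition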
logstash.conf
cd /data/elk/logstash/pipeline
touch logstash.conf
vi logstash.conf
File contents
input {                   # receive events from Filebeat; the port must match the Filebeat configuration (a Filebeat sketch follows after this file)
  beats {
    port => 4560
  }
}
filter {
  if [filetype] == "log_pd_cd" {                             # file type defined in Filebeat
    json {
      source => "message"                                    # field that holds the raw log line
      remove_field => ["log","offset","tags","instance_id"]  # drop fields that are not needed
    }
  }
}
output {
  elasticsearch {
    hosts => [ "es:9200" ]        # "es" is the link alias defined in docker-compose.yaml
    index => "pad_cd_server"      # index name, used to match the index pattern in Kibana
  }
}
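The pipeline above expects a Filebeat shipper that sends to port 4560 and tags events with a filetype field of log_pd_cd. A minimal filebeat.yml sketch under those assumptions (the log path is a placeholder):
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log      # placeholder; point this at your application log files
    fields:
      filetype: log_pd_cd       # matched by the if [filetype] condition in logstash.conf
    fields_under_root: true     # put filetype at the event root instead of under "fields"
output.logstash:
  hosts: ["{{host}}:4560"]      # host running the logstash container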
Start the Docker stack
docker-compose up -d
Stop the Docker stack
docker-compose down
Check the created containers
docker ps
docker exec -it -u 0 {container_id} bash #enter a container as root
Verify the services
http://{{host}}:5601/ #kibana
http://{{host}}:9200/ #elasticsearch
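A quick check from the shell, assuming the ports are exposed on the Docker host (replace {{host}} accordingly):
curl http://{{host}}:9200            # should return cluster and version info as JSON
curl -I http://{{host}}:5601         # Kibana should answer with an HTTP response once it has finished starting
docker-compose logs -f logstash      # follow the Logstash logs if events are not arriving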
Additional notes
Switching Kibana to Chinese
In the kibana container, edit /usr/share/kibana/config/kibana.yml
and add i18n.locale: zh-CN
docker exec -it {container_id} /bin/bash -c 'echo "i18n.locale: zh-CN" >> /usr/share/kibana/config/kibana.yml'
docker restart {container_id}
Disabling Elasticsearch authentication
In the elasticsearch container, edit /usr/share/elasticsearch/config/elasticsearch.yml
and add xpack.security.enabled: false (see the sketch below for one way to apply it)
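One hedged way to apply this without editing the file inside the container: the official Elasticsearch image maps setting-style environment variables onto elasticsearch.yml (the same mechanism the compose file already uses for discovery.type), so the flag can be added to the elasticsearch service's environment in docker-compose.yaml and the stack recreated with docker-compose up -d:
  elasticsearch:
    environment:
      - "cluster.name=elasticsearch"
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
      - "xpack.security.enabled=false"   # disables authentication and TLS; suitable for test environments only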
Appendix
Configuration files
elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.security.enabled: false
kibana.yml
/usr/share/kibana/config/kibana.yml
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: zh-CN
References
Installing Logstash with Docker | https://www.jianshu.com/p/88517124503d
Elastic official documentation | https://www.elastic.co/guide/en/elasticsearch/reference/6.0/getting-started.html
Installing ELK with docker-compose | https://www.jianshu.com/p/2d78ce6bc504
Logstash | https://www.elastic.co/guide/en/logstash/current/introduction.html