Environment
Step 1: Install Elasticsearch
https://www.elastic.co/cn/downloads/elasticsearch
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
1. Pull the image
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.1.1
2. Run in development mode
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.1
## Check that the node is running
curl http://127.0.0.1:9200/_cat/health
[root@elk ~]# curl http://127.0.0.1:9200/_cat/health
1560302053 01:14:13 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%
3. Production mode needs kernel parameter tuning
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
cat /etc/sysctl.conf
sysctl -w vm.max_map_count=262144
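To confirm the setting is active, read it back:
sysctl vm.max_map_count
## expected: vm.max_map_count = 262144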
4. Create the docker-compose.yml file
## Write docker-compose.yml
cat <<\EOF >docker-compose.yml
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=ELK7.1.1
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
networks:
  esnet:
EOF
## Run in the foreground
docker-compose up
## Run in the background
docker-compose up -d
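A quick check that the container is up and the single-node cluster answers:
docker-compose ps
curl http://127.0.0.1:9200/_cat/health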
5. Create a directory for data persistence
By default, Elasticsearch runs inside the container as user elasticsearch using uid:gid 1000:1000.
mkdir -p /usr/share/elasticsearch/data
chmod g+rwx /usr/share/elasticsearch/data
chgrp 1000 /usr/share/elasticsearch/data
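Note: the docker-compose.yml above keeps the data in the named volume esdata01, so this ownership fix only matters if a host directory is bind-mounted instead. A sketch of that variant for the es01 service, using the host path created above:
volumes:
  - /usr/share/elasticsearch/data:/usr/share/elasticsearch/data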
6. Access
http://192.168.0.1:9200
Step 2: Running Kibana on Docker
https://www.elastic.co/guide/cn/kibana/current/docker.html
1. Pull the image
docker pull docker.elastic.co/kibana/kibana:7.1.1
2. Create the Kibana.yml Compose file
## Write Kibana.yml
cat <<\EOF >Kibana.yml
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.1.1
    container_name: kibana
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_HOSTS: http://192.168.50.14:9200
    ports:
      - 80:5601
EOF
## Run in the foreground
docker-compose -f Kibana.yml up
## Run in the background
docker-compose -f Kibana.yml up -d
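Kibana takes a short while to start; once ready it should answer on the mapped port (80 above):
curl -s http://127.0.0.1:80/api/status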
Step 3: Install Logstash to collect Nginx logs
1. Download and install Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.1.1.rpm
rpm -ivh logstash-7.1.1.rpm
2. Download the GeoIP database
https://dev.maxmind.com/geoip/geoip2/geolite2/
cd /home/
wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar vxzf GeoLite2-City.tar.gz
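The archive unpacks into a versioned directory (its name carries the release date), while the Logstash config below expects the database at /home/GeoLite2-City.mmdb, so copy it up one level:
cp GeoLite2-City_*/GeoLite2-City.mmdb /home/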
3. Prepare the Logstash configuration file
Adjust 1: the log path
Adjust 2: the index name
Adjust 3: the Elasticsearch server address
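The configuration written below reads the log files with codec => json, so Nginx/Tengine has to write its access log as JSON with field names matching the filter (client, status, size, upstreatime). A minimal log_format sketch under that assumption; the field list and log file name are illustrative, adapt them to your own format:
# escape=json needs nginx >= 1.11.8; drop it on older Tengine builds
log_format json escape=json '{"@timestamp":"$time_iso8601",'
    '"client":"$remote_addr","domain":"$host","url":"$request_uri",'
    '"status":"$status","size":"$body_bytes_sent",'
    '"upstreatime":"$upstream_response_time",'
    '"referer":"$http_referer","agent":"$http_user_agent"}';
access_log /var/log/tengine/access_example.log json;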
cat <<\EOF >/etc/logstash/conf.d/nginx_access.conf
input {
  file {
    # Adjust the glob to match all of your domains' access logs
    path => [ "/var/log/tengine/access*.log" ]
    ignore_older => 0
    codec => json
  }
}
filter {
  mutate {
    convert => [ "status","integer" ]
    convert => [ "size","integer" ]
    convert => [ "upstreatime","float" ]
    remove_field => "message"
  }
  geoip {
    source => "client"
    target => "geoip"
    database => "/home/GeoLite2-City.mmdb"
    fields => ["location", "country_name", "city_name", "region_name"]
  }
}
output {
  elasticsearch {
    hosts => "http://192.168.50.14:9200"
    index => "logstash-nginx-test01-access-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }
}
EOF
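Before starting the service, the pipeline can be syntax-checked (paths as installed by the RPM):
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/nginx_access.conf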
4. Start the service
systemctl enable logstash.service
systemctl stop logstash.service
systemctl start logstash.service
systemctl status logstash.service
5. Check the indices
curl '192.168.50.14:9200/_cat/indices?v'
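Once the index appears, a sample document shows whether the JSON fields and the geoip data were parsed:
curl '192.168.50.14:9200/logstash-nginx-test01-access-*/_search?size=1&pretty'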
Step 4: Install Grafana
https://grafana.com/grafana/download?platform=linux
1. Download and install
wget https://dl.grafana.com/oss/release/grafana-6.2.2-1.x86_64.rpm
yum install -y grafana-6.2.2-1.x86_64.rpm
2. Start the service
systemctl start grafana-server.service
systemctl status grafana-server.service
3. Install the required plugins
## Pie chart panel
## http://192.168.50.14:3000/plugins/grafana-piechart-panel/
grafana-cli plugins install grafana-piechart-panel
## Worldmap panel
## https://grafana.com/plugins/grafana-worldmap-panel
grafana-cli plugins install grafana-worldmap-panel
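Restart Grafana so the newly installed panels are loaded:
systemctl restart grafana-server.service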
4. Log in and add the Elasticsearch data source
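The data source can also be created through Grafana's HTTP API instead of the UI. A sketch assuming the default admin:admin credentials, a hypothetical data source name, and the index created above; the esVersion value must be one your Grafana release actually offers in the data source settings:
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://192.168.50.14:3000/api/datasources -d '{
    "name": "ES-nginx-access",
    "type": "elasticsearch",
    "access": "proxy",
    "url": "http://192.168.50.14:9200",
    "database": "logstash-nginx-test01-access-*",
    "jsonData": { "esVersion": 70, "timeField": "@timestamp" }
  }'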
5. Set up the Nginx dashboard
Import a dashboard template.
Step 5: Install Filebeat to monitor MySQL
1. Download and install
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.1-x86_64.rpm
rpm -ivh filebeat-7.1.1-x86_64.rpm
2. Edit the configuration file filebeat.yml
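The contents of filebeat.yml are left out here. A minimal sketch, assuming Filebeat's built-in mysql module ships the MySQL error log and slow log to the same Elasticsearch node; the host and any log paths are assumptions to adapt. In /etc/filebeat/filebeat.yml point the output at Elasticsearch:
output.elasticsearch:
  hosts: ["192.168.50.14:9200"]
## enable the mysql module; log paths can be tuned in /etc/filebeat/modules.d/mysql.yml
filebeat modules enable mysql
## load the index template and sample Kibana dashboards (optional)
filebeat setup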
3. Start the service
systemctl start filebeat.service
systemctl status filebeat.service
4. Check the indices
curl '192.168.50.14:9200/_cat/indices?v'