1. Install the Docker environment
https://www.jianshu.com/p/bf2735f9f4d0
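If that guide is unreachable, Docker's official convenience script is a common alternative on a test VM (it installs the latest Docker Engine as root):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable --now docker   # start Docker and enable it on boot
docker version                       # confirm the daemon answers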
2. Install docker-compose (container orchestration tool)
1) Download the version you need from GitHub (or fetch it directly with curl; see the sketch after this step)
https://github.com/docker/compose/releases/tag/v2.0.1
2) Upload the docker-compose binary to the /usr/local/bin/ directory and make it executable
sudo chmod +x /usr/local/bin/docker-compose (grants execute permission to the docker-compose file)

3) Verify the installation
docker-compose -v
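If you would rather download the binary on the VM itself, something like the following should work (the release asset name is an assumption; it varies by Compose version and CPU architecture, so verify it on the releases page):
sudo curl -L "https://github.com/docker/compose/releases/download/v2.0.1/docker-compose-linux-x86_64" -o /usr/local/bin/docker-compose   # check the exact asset name on the release page
sudo chmod +x /usr/local/bin/docker-compose
docker-compose -v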

3. Write the docker-compose.yml file (docker-compose will automatically pull and run the zookeeper, kafka, kafka-manager, elasticsearch and kibana containers)
Replace the IP addresses in the yml file with your VM's actual IP address.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper ## image
    ports:
      - "2181:2181" ## port exposed to the host
  kafka:
    image: wurstmeister/kafka ## image
    volumes:
      - /etc/localtime:/etc/localtime ## mount so the kafka container and the host keep the same clock
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.100.200 ## change to your host IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 ## kafka runs on top of zookeeper
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: 120
      KAFKA_MESSAGE_MAX_BYTES: 10000000
      KAFKA_REPLICA_FETCH_MAX_BYTES: 10000000
      KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS: 60000
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DELETE_RETENTION_MS: 1000
  kafka-manager:
    image: sheepkiller/kafka-manager ## image: open-source web UI for managing kafka clusters
    environment:
      ZK_HOSTS: 192.168.100.200:2181 ## change to your host IP (kafka-manager expects the zookeeper address as host:port)
    ports:
      - "9001:9000" ## exposed port
  elasticsearch:
    image: daocloud.io/library/elasticsearch:6.5.4
    restart: always
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9200:9200
  kibana:
    image: daocloud.io/library/kibana:6.5.4
    restart: always
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_URL=http://192.168.100.200:9200 ## must be uppercase for the kibana image to pick it up
    depends_on:
      - elasticsearch
4. Create a directory under /usr (I created dcf) and upload the docker-compose.yml file into it; see the sketch below.
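A sketch of getting the file into place (the root user, IP and target directory are assumptions for this setup):
mkdir -p /usr/dcf                                        # on the VM
scp docker-compose.yml root@192.168.100.200:/usr/dcf/   # from your workstation
cd /usr/dcf && docker-compose config                     # parse and validate the file before starting anything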

5. Adjust the VM's kernel memory setting and reboot the VM
1) Set the vm.max_map_count kernel parameter (Elasticsearch requires it)
Append the following line to the end of /etc/sysctl.conf:
vm.max_map_count=262144
2) Stop the VM's firewall
systemctl stop firewalld.service
3) Reboot the VM
4) Restart Docker
service docker restart
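The same steps as one shell sequence; sysctl -p applies the kernel parameter immediately, so the reboot can in principle be skipped if firewalld is also disabled permanently:
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p                             # apply without rebooting
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service   # optional: keep it off across reboots
sudo service docker restart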
6. Go to the /usr/dcf directory and run the docker-compose.yml file
docker-compose up
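docker-compose up keeps all containers in the foreground; for day-to-day use the detached form is more convenient:
docker-compose up -d             # start everything in the background
docker-compose ps                # list the containers and their state
docker-compose logs -f kafka     # follow the logs of a single service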

7. Check the result
If the pages below load, kibana, elasticsearch and zookeeper are running successfully.
http://192.168.100.200:9200/

http://192.168.100.200:5601/app/kibana#/dev_tools/console?_g=()
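Elasticsearch can also be checked from the shell; it should answer with a small JSON document describing the node and cluster:
curl http://192.168.100.200:9200/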


8. Install Logstash
1) Upload the Logstash archive to the /usr/dcf directory and unpack it

2) Go to Logstash's config directory and edit the logstash-sample.conf file as follows:

input {
  kafka {
    bootstrap_servers => "192.168.100.200:9092"
    topics => "jyb-log"
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  elasticsearch {
    action => "index"
    hosts => "192.168.100.200:9200"
    index => "jyb_logs"
  }
}
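If the application writes its log messages as JSON, a codec on the kafka input saves a separate filter; a sketch under that assumption:
input {
  kafka {
    bootstrap_servers => "192.168.100.200:9092"
    topics => ["jyb-log"]
    codec => "json"   # parse each message into fields instead of one raw string
  }
}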
3) Install a JDK (skip this step if one is already installed)
https://www.jianshu.com/p/69883925350c
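A quick check that the JDK is installed and on the PATH:
java -version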
4) Wire Logstash up to Kafka and Elasticsearch
Run bin/logstash-plugin install logstash-input-kafka to install the Kafka input plugin

Run bin/logstash-plugin install logstash-output-elasticsearch to install the Elasticsearch output plugin
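You can confirm both plugins are registered before starting Logstash; recent Logstash releases already bundle them, in which case the install commands simply report they are present:
bin/logstash-plugin list | grep kafka
bin/logstash-plugin list | grep elasticsearch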

5) Run Logstash from its bin directory
输入 ./logstash -f ../config/logstash-sample.conf
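A quick end-to-end smoke test: publish a line to the jyb-log topic and look for it in the jyb_logs index. The container name dcf_kafka_1 is an assumption (compose derives it from the project directory; check docker ps):
docker exec -it dcf_kafka_1 /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic jyb-log
# type a test line, press Enter, then Ctrl+C to quit the producer
curl "http://192.168.100.200:9200/jyb_logs/_search?pretty"   # the line should show up as a document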

9. VM sizing
For the ELK + Kafka setup, the minimum configuration is 6 GB of RAM, 4 CPU cores and 30 GB of disk; with less than this the environment will not run properly.

10. Download links for the packages needed for this setup
Shared via Baidu Pan; if the link has expired, please search for and download the packages yourself.
Link: https://pan.baidu.com/s/1dgdpVigA876Rdg41Z-Iz8g
Extraction code: 3535