ELK (Part 4)


Author: 吃可爱长大鸭 | Published 2020-02-16 18:20

Chapter 15: Collecting Docker logs with Filebeat, promotion-and-raise edition

1. What is still lacking
Normal logs and error logs currently end up in the same index.

2. Desired index names
docker-nginx-access-6.6.0-2020.XX
docker-nginx-error-6.6.0-2020.XX
docker-db-access-6.6.0-2020.XX
docker-db-error-6.6.0-2020.XX
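
For the attrs.service field used in the config below to show up in the container JSON logs, each container needs a service label that the json-file log driver is told to record. A minimal sketch of how the containers might have been started (label name and ports are assumptions carried over from the earlier chapters):

docker run -d -p 80:80   --label service=nginx --log-opt labels=service nginx
docker run -d -p 8080:80 --label service=db    --log-opt labels=service nginx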

3. Filebeat configuration file
cat >/etc/filebeat/filebeat.yml <<EOF   
filebeat.inputs:
- type: log 
  enabled: true
  paths:
    - /var/lib/docker/containers/*/*-json.log
  json.keys_under_root: true
  json.overwrite_keys: true

output.elasticsearch:
  hosts: ["10.0.0.51:9200"]
  indices:
    - index: "docker-nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        attrs.service: "nginx"
        stream: "stdout"
    - index: "docker-nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        attrs.service: "nginx"
        stream: "stderr"

    - index: "docker-db-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        attrs.service: "db"
        stream: "stdout"
    - index: "docker-db-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        attrs.service: "db"
        stream: "stderr"

setup.template.name: "docker"
setup.template.pattern: "docker-*"
setup.template.enabled: false
setup.template.overwrite: true
EOF

4. Restart Filebeat
systemctl restart filebeat 

5. Generate test data
curl 127.0.0.1/nginxxxxxxxxxxx
curl 127.0.0.1:8080/dbbbbbbbbb
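
To confirm the four indices were actually created, a quick check against the ES cat API (host and index pattern assumed from the config above):

curl -s "10.0.0.51:9200/_cat/indices/docker-*?v"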

Chapter 16: Collecting Docker logs with Filebeat, the ultimate "Beast from Kung Fu Hustle" edition

1. Requirements
Logs must be in JSON format and generate the following indices:
docker-nginx-access-6.6.0-2020.02
docker-db-access-6.6.0-2020.02
docker-db-error-6.6.0-2020.02
docker-nginx-error-6.6.0-2020.02


2. Stop and remove the old containers
docker stop $(docker ps -qa)
docker rm $(docker ps -qa)

3. Create new containers (the second one simulates the "db" service but still runs the nginx image, on port 8080)
docker run -d -p 80:80 -v /opt/nginx:/var/log/nginx/ nginx
docker run -d -p 8080:80 -v /opt/mysql:/var/log/nginx/ nginx

4. Prepare an nginx configuration file with a JSON access log format
scp 10.0.0.51:/etc/nginx/nginx.conf /root/

[root@db01 ~]# grep "access_log" nginx.conf 
    access_log  /var/log/nginx/access.log  json;
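
The json log format referenced by that access_log line is defined in the http{} block of nginx.conf. A minimal sketch of what it might look like (the field names are assumptions; they just need to match what the Logstash filter later converts, e.g. request_time and upstream_time):

log_format json escape=json '{"time_local":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"request_uri":"$request_uri",'
    '"status":"$status",'
    '"request_time":"$request_time",'
    '"upstream_time":"$upstream_response_time"}';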

5. Copy it into the containers and restart them
docker cp nginx.conf <nginx-container-id>:/etc/nginx/
docker cp nginx.conf <mysql-container-id>:/etc/nginx/
docker stop $(docker ps -qa)
docker start <nginx-container-id>
docker start <mysql-container-id>


6. Delete the indices that already exist in ES
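No command is given for this step; one way to do it from the shell (assuming ES on 10.0.0.51 and the docker-* index pattern used above):

curl -s -XDELETE "10.0.0.51:9200/docker-*"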


7. Update the Filebeat configuration file
cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log 
  enabled: true
  paths:
    - /opt/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["nginx_access"]

- type: log 
  enabled: true
  paths:
    - /opt/nginx/error.log
  tags: ["nginx_err"]

- type: log 
  enabled: true
  paths:
    - /opt/mysql/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["db_access"]

- type: log 
  enabled: true
  paths:
    - /opt/mysql/error.log
  tags: ["db_err"]

output.elasticsearch:
  hosts: ["10.0.0.51:9200"]
  indices:
    - index: "docker-nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "nginx_access"

    - index: "docker-nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "nginx_err"

    - index: "docker-db-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "db_access"

    - index: "docker-db-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "db_err"

setup.template.name: "docker"
setup.template.pattern: "docker-*"
setup.template.enabled: false
setup.template.overwrite: true
EOF

8. Restart Filebeat
systemctl restart filebeat

9. Send requests and test
curl 127.0.0.1/oldboy
curl 127.0.0.1:8080/oldboy
cat /opt/nginx/access.log
cat /opt/mysql/access.log
Check in es-head

Chapter 17: Introducing a Redis cache for Filebeat

1. Install Redis
yum install redis 
sed -i 's#^bind 127.0.0.1#bind 127.0.0.1 10.0.0.51#' /etc/redis.conf
systemctl start redis 
netstat -lntup|grep redis 
redis-cli -h 10.0.0.51

2. Stop the Docker containers
docker stop $(docker ps -q)

3. Stop Filebeat
systemctl stop filebeat 

4. Delete the old ES indices
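As before, no command is given; a sketch with curl (index pattern assumed from the previous chapter):

curl -s -XDELETE "10.0.0.51:9200/docker-*"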

5. Confirm the nginx log is in JSON format
grep "access_log" nginx.conf

6. Modify the Filebeat configuration file
cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

output.redis:
  hosts: ["10.0.0.51"]
  keys:
    - key: "nginx_access"
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true
EOF

7. Restart Filebeat and nginx
systemctl restart nginx 
systemctl restart filebeat

8. Generate test data
curl 127.0.0.1/haha

9. Check
redis-cli -h 10.0.0.51
keys * 
TYPE nginx_access
LLEN nginx_access
LRANGE nginx_access 0 -1 
Confirm the entries are in JSON format

10. Install Logstash
rpm -ivh jdk-8u102-linux-x64.rpm 
rpm -ivh logstash-6.6.0.rpm


11. Configure Logstash
cat >/etc/logstash/conf.d/redis.conf<<EOF 
input {
  redis {
    host => "10.0.0.51"
    port => "6379"
    db => "0"
    key => "nginx_access"
    data_type => "list"
  }
  redis {
    host => "10.0.0.51"
    port => "6379"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
   stdout {}
   if "access" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM}"
      }
    }
}
EOF

12. Start in the foreground to test
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf 

13. Check (commands below)
Is the Logstash output being parsed as JSON?
Have the indices appeared in es-head?
Is the list in Redis draining?
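
A couple of commands that back up these checks (host and key/index names assumed from the configs above):

redis-cli -h 10.0.0.51 LLEN nginx_access          # should shrink toward 0 as Logstash consumes
curl -s "10.0.0.51:9200/_cat/indices/nginx_*?v"   # nginx_access-* / nginx_error-* should appear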

14. Run Logstash in the background
Ctrl+C
systemctl start logstash
Listen for the fans: when they start spinning, Logstash is up.
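
If you would rather not rely on the fans, the RPM install of Logstash also writes its own log file (path assumed for the 6.x RPM packages):

tail -f /var/log/logstash/logstash-plain.log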

Chapter 18: A more complete Redis setup for Filebeat

1. Preconditions
- Filebeat cannot ship to Redis Sentinel or Redis Cluster
- Logstash likewise cannot read from Redis Sentinel or Redis Cluster

2. Install and configure Redis
yum install redis -y
sed -i 's#^bind 127.0.0.1#bind 127.0.0.1 10.0.0.51#' /etc/redis.conf
systemctl start redis

3. Install and configure nginx
Configure the official nginx yum repository first
yum install nginx -y
Put the following after the closing } on the last line of nginx.conf; do not put it in conf.d:
stream {
  upstream redis {
      server 10.0.0.51:6379 max_fails=2 fail_timeout=10s;
      server 10.0.0.52:6379 max_fails=2 fail_timeout=10s backup;
  }
  
  server {
          listen 6380;
          proxy_connect_timeout 1s;
          proxy_timeout 3s;
          proxy_pass redis;
  }
}
nginx -t
systemctl start nginx 

4. Install and configure keepalived
yum install keepalived -y
Configuration on db01:
global_defs {
    router_id db01
}
vrrp_instance VI_1 {
    state MASTER
        interface eth0
        virtual_router_id 50
        priority 150
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            10.0.0.100
        }
}

Configuration on db02:
global_defs {
    router_id db02
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100
    }
}

systemctl start keepalived 
ip a

5. Test whether the proxy reaches Redis
redis-cli -h 10.0.0.100 -p 6380
Stop Redis on db01 and test whether you can still connect (see the sketch below)
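
A sketch of that failover test (IPs from the setup above; the systemctl commands run on db01):

systemctl stop redis                    # on db01: take the primary Redis down
redis-cli -h 10.0.0.100 -p 6380 ping    # should still answer PONG via the db02 backup
systemctl start redis                   # on db01: restore the primary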

6. Configure Filebeat
cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

output.redis:
  hosts: ["10.0.0.100:6380"]
  keys:
    - key: "nginx_access"
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true
EOF

7. Test whether Filebeat ships data to Redis
curl 10.0.0.51/haha
redis-cli -h 10.0.0.51 # should contain data
redis-cli -h 10.0.0.52 # should be empty
redis-cli -h 10.0.0.100 -p 6380 # should contain data

8. Configure Logstash
cat >/etc/logstash/conf.d/redis.conf<<EOF 
input {
  redis {
    host => "10.0.0.100"
    port => "6380"
    db => "0"
    key => "nginx_access"
    data_type => "list"
  }
  redis {
    host => "10.0.0.100"
    port => "6380"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
   stdout {}
   if "access" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM}"
      }
    }
}
EOF

9. Start and test
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf

10. Final test (sketch below)
ab -n 10000 -c 100 10.0.0.100/
Check in es-head that the index holds 10000 documents
Stop Redis on db01, send traffic again, and verify that Logstash still works
Bring Redis on db01 back up and test again
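
A sketch of the whole pass (host and index names assumed from the configs above; the systemctl commands run on db01):

ab -n 10000 -c 100 10.0.0.100/
curl -s "10.0.0.51:9200/_cat/count/nginx_access-*"   # document count should reach 10000
systemctl stop redis                                 # on db01
ab -n 1000 -c 10 10.0.0.100/                         # traffic should keep flowing via db02
systemctl start redis                                # on db01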

Chapter 19: An optimized Redis setup for Filebeat

1. Before optimizing, adding one new log path means changing 4 places:
- 2 places in Filebeat
- 2 places in Logstash

2. After optimizing, only 2 places need to change:
- 1 place in Filebeat
- 1 place in Logstash

3. Filebeat configuration file
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]


output.redis:
  hosts: ["10.0.0.100:6380"]
  key: "nginx_log"

setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true

4. Optimized Logstash configuration
input {
  redis {
    host => "10.0.0.100"
    port => "6380"
    db => "0"
    key => "nginx_log"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
   stdout {}
   if "access" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM}"
      }
    }
}

Chapter 20: Using Kafka as the cache for ELK

0. Configure host entries and SSH keys
cat >/etc/hosts<<EOF
10.0.0.51 db01
10.0.0.52 db02
10.0.0.53 db03
EOF
ssh-keygen
ssh-copy-id 10.0.0.52
ssh-copy-id 10.0.0.53

1. Install ZooKeeper
### On db01
cd /data/soft
tar zxf zookeeper-3.4.11.tar.gz -C /opt/
ln -s /opt/zookeeper-3.4.11/ /opt/zookeeper
mkdir -p /data/zookeeper
cat >/opt/zookeeper/conf/zoo.cfg<<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.0.0.51:2888:3888
server.2=10.0.0.52:2888:3888
server.3=10.0.0.53:2888:3888
EOF
echo "1" > /data/zookeeper/myid
cat /data/zookeeper/myid
rsync -avz /opt/zookeeper* 10.0.0.52:/opt/
rsync -avz /opt/zookeeper* 10.0.0.53:/opt/

### On db02
mkdir -p /data/zookeeper
echo "2" > /data/zookeeper/myid
cat /data/zookeeper/myid

### On db03
mkdir -p /data/zookeeper
echo "3" > /data/zookeeper/myid
cat /data/zookeeper/myid

2. Start ZooKeeper
/opt/zookeeper/bin/zkServer.sh start

3. Check that startup succeeded
/opt/zookeeper/bin/zkServer.sh status
If startup went well, the reported Mode values should be (a check across all nodes is sketched below):
2 followers
1 leader
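
One way to check all three nodes at once (host list assumed; relies on the SSH keys set up in step 0 and on java being on the remote PATH):

for ip in 10.0.0.51 10.0.0.52 10.0.0.53; do
    ssh $ip /opt/zookeeper/bin/zkServer.sh status
done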

4. Test that ZooKeeper communication works
On one node, create a test znode:
/opt/zookeeper/bin/zkCli.sh -server 10.0.0.51:2181
create /test "hello"

On another node, check that it can be read:
/opt/zookeeper/bin/zkCli.sh -server 10.0.0.52:2181
get /test

5. Install Kafka
### On db01
cd /data/soft/
tar zxf kafka_2.11-1.0.0.tgz -C /opt/
ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
mkdir /opt/kafka/logs
cat >/opt/kafka/config/server.properties<<EOF
broker.id=1
listeners=PLAINTEXT://10.0.0.51:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
EOF
rsync -avz /opt/kafka* 10.0.0.52:/opt/
rsync -avz /opt/kafka* 10.0.0.53:/opt/


### On db02
sed -i "s#10.0.0.51:9092#10.0.0.52:9092#g" /opt/kafka/config/server.properties
sed -i "s#broker.id=1#broker.id=2#g" /opt/kafka/config/server.properties

### On db03
sed -i "s#10.0.0.51:9092#10.0.0.53:9092#g" /opt/kafka/config/server.properties
sed -i "s#broker.id=1#broker.id=3#g" /opt/kafka/config/server.properties


6. Start Kafka in the foreground first to test
/opt/kafka/bin/kafka-server-start.sh  /opt/kafka/config/server.properties

7. Check that it started
jps

8. Test sending messages with the Kafka CLI
Create a topic:
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --partitions 3 --replication-factor 3 --topic  messagetest

List all topics:
/opt/kafka/bin/kafka-topics.sh  --list --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181

Send a test message:
/opt/kafka/bin/kafka-console-producer.sh --broker-list  10.0.0.51:9092,10.0.0.52:9092,10.0.0.53:9092 --topic  messagetest

Consume it on another node:
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --topic messagetest --from-beginning

9. Once the test succeeds, switch to background startup
Press Ctrl+C to stop the foreground Kafka, then start it as a daemon:
/opt/kafka/bin/kafka-server-start.sh  -daemon /opt/kafka/config/server.properties

10. Configure Filebeat
cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

output.kafka:
  hosts: ["10.0.0.51:9092", "10.0.0.52:9092", "10.0.0.53:9092"]
  topic: 'filebeat'

setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true
EOF

Restart Filebeat
systemctl restart filebeat 

11. Send a request and check whether the logs reached Kafka
curl 10.0.0.51

/opt/kafka/bin/kafka-topics.sh  --list --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181

/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --topic filebeat --from-beginning


12. Logstash configuration file
cat > /etc/logstash/conf.d/kafka.conf<<EOF 
input {
  kafka{
    bootstrap_servers=>["10.0.0.51:9092,10.0.0.52:9092,10.0.0.53:9092"]
    topics=>["filebeat"]
    group_id=>"logstash"
    codec => "json"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
   stdout {}
   if "access" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://10.0.0.51:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM}"
      }
    }
}
EOF

13. Start Logstash in the foreground to test
First delete the indices previously created in ES
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf 

Generate access logs:
curl 127.0.0.1
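
To confirm the documents made it all the way through Kafka into ES, the same kind of check as before (host and index pattern assumed):

curl -s "10.0.0.51:9200/_cat/indices/nginx_*?v"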

[Figure: ELK architecture diagram]
