ELK (Elasticsearch + Logstash + Filebeat + Kafka + Kibana) cluster setup


https://blog.51cto.com/11924224/2158203

https://www.clxz.top/2019/03/30/1124/

https://blog.csdn.net/weixin_38098312/article/details/80181415

https://blog.csdn.net/weixin_41047933/article/details/82699823

Official download page:

https://www.elastic.co/cn/downloads/

Environment: three CentOS 7.6 servers, JDK 1.8

192.168.77.128 ---- kafka, es, filebeat, logstash, kibana

192.168.77.129 ---- kafka, es, filebeat

192.168.77.130 ---- kafka, es, filebeat

Create the elk user: useradd elk

Change ownership of the install directory: chown -R elk:elk /opt/elk

# Open the required firewall ports

firewall-cmd --add-port=2181/tcp --permanent  #zookeeper

firewall-cmd --add-port=2888/tcp --permanent  #zookeeper

firewall-cmd --add-port=3888/tcp --permanent  #zookeeper

firewall-cmd --add-port=9092/tcp --permanent  #kafka

firewall-cmd --add-port=9200/tcp --permanent  #elasticsearch

firewall-cmd --add-port=9300/tcp --permanent  #elasticsearch

firewall-cmd --add-port=5601/tcp --permanent  #kibana

# Reload the firewall rules
firewall-cmd --reload
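To confirm the rules are in place, the opened ports can be listed (standard firewalld command):

firewall-cmd --list-ports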

# Raise the open-file and process limits

echo "* soft nofile 65536" >> /etc/security/limits.conf

echo "* hard nofile 65536" >> /etc/security/limits.conf

echo "* soft nproc 2048" >> /etc/security/limits.conf

echo "* hard nproc 4096" >> /etc/security/limits.conf

echo "* soft memlock unlimited" >> /etc/security/limits.conf

echo "* hard memlock unlimited" >> /etc/security/limits.conf

# Adjust the process limits

vi /etc/security/limits.d/20-nproc.conf

# Change the file to the following settings

*          soft    nproc    4096

*          hard  nproc    4096

root      soft    nproc    unlimited

# Adjust virtual memory and the maximum number of open file handles

vi /etc/sysctl.conf

# Add the following lines

vm.max_map_count=655360

fs.file-max=655360

# After saving, apply the settings
sysctl -p
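The limits.conf changes only apply to new login sessions. A quick check that the kernel and shell limits are in effect:

sysctl vm.max_map_count    # expect 655360

ulimit -n                  # expect 65536 in a fresh session of the elk user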

Install the Kafka cluster
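A minimal sketch of the Kafka side, assuming the Kafka 2.x binary tarball (for example kafka_2.12-2.3.0) is unpacked to /opt/elk/kafka on all three servers and the ZooKeeper bundled with Kafka is used on the ports opened above (2181/2888/3888 for ZooKeeper, 9092 for Kafka):

# config/zookeeper.properties -- identical on all three servers
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/elk/zookeeper/data
clientPort=2181
server.1=192.168.77.128:2888:3888
server.2=192.168.77.129:2888:3888
server.3=192.168.77.130:2888:3888

# each server writes its own id (1, 2 or 3) into the ZooKeeper data dir
echo 1 > /opt/elk/zookeeper/data/myid

# config/server.properties -- broker.id and listeners differ per server
broker.id=1
listeners=PLAINTEXT://192.168.77.128:9092
log.dirs=/opt/elk/kafka/kafka-logs
zookeeper.connect=192.168.77.128:2181,192.168.77.129:2181,192.168.77.130:2181

# start ZooKeeper first, then Kafka, on every server
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties

# create the topic used by Filebeat and Logstash below
# (on Kafka older than 2.2 use --zookeeper instead of --bootstrap-server)
bin/kafka-topics.sh --create --bootstrap-server 192.168.77.128:9092 --topic erp-web --partitions 3 --replication-factor 2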

Install Filebeat

tar zxvf filebeat-7.3.0-linux-x86_64.tar.gz

mv filebeat-7.3.0-linux-x86_64 filebeat

cd filebeat

mv filebeat.yml filebeat.yml.bak

vi filebeat.yml

filebeat.inputs:

- type: log

  enabled: true

  paths:

#    - /var/log/*.log

    - /opt/elk/log/*.log            # log files to watch

#processors:

  # drop fields that do not need to be shipped by Filebeat

#  - drop_fields:

#      fields: ["@metadata", "beat.version", "offset", "prospector.type", "source"]

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

#setup.template.settings:

#  index.number_of_shards: 3

output.kafka:

  enabled: true

  hosts: ["192.168.77.128:9092","192.168.77.129:9092","192.168.77.130:9092"]    # kafka集群配置

  topic: "erp-web"  #kafka 订阅的主题

chmod g-w filebeat.yml  # remove group write permission (Filebeat requires the config to be writable only by its owner)

Start Filebeat in the background

nohup ./filebeat -e -c filebeat.yml &
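To verify that events are reaching Kafka, attach a console consumer to the erp-web topic from the Kafka directory on any broker; appending a line to one of the watched log files should produce a JSON event here:

bin/kafka-console-consumer.sh --bootstrap-server 192.168.77.128:9092 --topic erp-web --from-beginning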

Install Logstash

tar zxvf logstash-7.3.0.tar.gz

mv logstash-7.3.0 logstash

cd logstash

vi config/erp-web.conf

input {

  kafka {

      bootstrap_servers => "192.168.77.128:9092,192.168.77.129:9092,192.168.77.130:9092"

      group_id => "erp-web"

      topics => ["erp-web"]

      consumer_threads => 4

      decorate_events => true

      codec => "json"

      type => "erp-web"

  }

}

filter {

  grok {

      patterns_dir => [ "/opt/elk/logstash/patterns" ]

      match => { "message" => "%{LOG_FAT}" }

      overwrite => [ "message" ]

  }

  date {

      match => ["logtime","ISO8601", "yyyy-MM-dd'T'HH:mm:ss.SSS" ]

      target => "@timestamp"

      timezone => "Asia/Shanghai"

  }

  mutate {

      remove_field => ["fields","prospector","@version"]

  }

}

output {

  if [type] == "erp-web" {

      elasticsearch {

        hosts => ["192.168.77.128:9200","192.168.77.129:9200","192.168.77.130:9200"]

        index => "erp-web-%{+YYYY-MM-dd}"

      }

      #stdout { codec => rubydebug }

  }

}

# Because the filter above uses a custom grok pattern (LOG_FAT), create the pattern file it references

vi patterns/applog

LOG_TIME %{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND})

SPACE \s*

LOG_THREAD [A-Za-z0-9\-\[\]\.\:]+

LOG_LEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)

LOG_CLASS ([a-zA-Z0-9-]+\.)+[A-Za-z0-9\(\)]+

LOG_MSG .*

LOG_FAT %{TIMESTAMP_ISO8601:logtime}%{SPACE}%{LOG_THREAD:thread}%{SPACE}%{LOG_LEVEL:level}%{SPACE}%{LOG_CLASS:class}%{SPACE}-%{SPACE}%{LOG_MSG:message}
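As an illustration (a hypothetical log line, not taken from the source), LOG_FAT expects application logs of roughly this shape, which grok splits into the logtime, thread, level, class and message fields used by the date filter above:

2019-11-04 17:31:00.123 [http-nio-8080-exec-1] INFO com.example.erp.OrderService - order 1001 created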

# Test that the configuration syntax is valid
bin/logstash -f config/erp-web.conf -t

# Start with a specific configuration file
nohup bin/logstash -f config/erp-web.conf &

# Start with every configuration file in the directory
nohup bin/logstash -f config/ &

Install Elasticsearch

tar zxvf elasticsearch-7.3.0-linux-x86_64.tar.gz

mv elasticsearch-7.3.0 elasticsearch

cd elasticsearch/config

mv elasticsearch.yml elasticsearch.yml.bak

vi elasticsearch.yml  # edit on each of the three servers; the node-specific values below differ per host

cluster.name: es-cluster  # cluster name must be identical on all nodes

node.name: node-128      # node name must be unique per node

node.master: true

node.data: true

path.data: /opt/elk/elasticsearch/data

path.logs: /opt/elk/elasticsearch/log

network.host: 0.0.0.0  # listen on all interfaces; without this the node is only reachable from localhost

network.publish_host: 192.168.77.128    # change to this node's own LAN IP

http.port: 9200

transport.tcp.port: 9300

discovery.seed_hosts: ["192.168.77.128:9300","192.168.77.129:9300","192.168.77.130:9300"]

cluster.initial_master_nodes: ["192.168.77.130"]  # initial master-eligible node for cluster bootstrap

http.cors.enabled: true

http.cors.allow-origin: "*"

# Start on each of the three servers (run as the elk user; Elasticsearch refuses to start as root)

nohup bin/elasticsearch &

# Or start Elasticsearch as a background daemon

bin/elasticsearch -d
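Once all three nodes are up, cluster formation can be checked with the health API against any node (expect status green and number_of_nodes 3):

curl http://192.168.77.128:9200/_cluster/health?pretty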

Install Kibana

tar zxvf kibana-7.3.0-linux-x86_64.tar.gz

mv kibana-7.3.0-linux-x86_64 kibana

cd kibana/config

mv kibana.yml kibana.yml.bak

vi kibana.yml

server.port: 5601

server.host: 0.0.0.0    # 0.0.0.0 makes Kibana reachable from other hosts

elasticsearch.hosts: ["http://192.168.77.128:9200", "http://192.168.77.129:9200", "http://192.168.77.130:9200"]

# Start Kibana in the background

nohup bin/kibana &

# Stop Kibana

ps -eaf | grep node    # find the Kibana PID (Kibana runs on Node.js)

kill -9 PID

List all indices in Elasticsearch:

http://192.168.77.128:9200/_cat/indices?v
