ELK: Elasticsearch + Logstash + Kibana
Elasticsearch: deployment
1. Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
2. Add the yum repository
[root@elk-node1 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=0
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
3. Install Elasticsearch and Java
yum install -y elasticsearch
yum install -y java
java -version
4. Create the data directory and edit the configuration
mkdir -p /data/es-data
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: wingcluster # cluster name (must be identical on every node of the cluster)
node.name: elk-node1 # node name; keeping it the same as the hostname is recommended
path.data: /data/es-data # data directory
path.logs: /var/log/elasticsearch/ # Elasticsearch's own log directory
bootstrap.mlockall: true # lock memory so it is never swapped out
network.host: 0.0.0.0 # listen on all local interfaces
http.port: 9200 # HTTP port
On the secondary node, append the following to the configuration file:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["172.16.113.155", "172.16.113.156"] # the IPs of the two nodes
chown -R elasticsearch.elasticsearch /data/
systemctl start elasticsearch
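Once the service is up, a quick sanity check is to query the REST API (a sketch; it assumes you run it on the node itself, so localhost:9200 is reachable):

```shell
# Query the node's root endpoint and the cluster health API
# (assumes Elasticsearch is listening on localhost:9200):
curl -s http://localhost:9200/
curl -s 'http://localhost:9200/_cluster/health?pretty'
```

A "status" of green or yellow in the health output means the node is serving requests; red means shards failed to allocate.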
1. Install the head plugin
[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
[root@elk-node1 head]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 _site]# systemctl restart elasticsearch
2. Install the kopf monitoring plugin
[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
[root@elk-node1 kopf]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 _site]# systemctl restart elasticsearch
http://172.16.113.155:9200/_plugin/kopf/#!/cluster
Logstash: deployment
1. Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
2. Add the yum repository
[root@hadoop-node1 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=0
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
3. Install Logstash
[root@elk-node1 ~]# yum install -y logstash
The difference between using rubydebug and writing to Elasticsearch lies entirely in the output stage: the former uses the stdout output with a codec, the latter the elasticsearch output plugin.
Logstash log collection
1) Logstash configuration
A simple configuration:
[root@elk-node1 ~]# vim /etc/logstash/conf.d/01-logstash.conf
input { stdin { } }
output {
elasticsearch { hosts => ["192.168.1.160:9200"]}
stdout { codec => rubydebug }
}
Run it:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf
Settings: Default filter workers: 1
Logstash startup completed
beijing # input you type
{ # the following output is printed
"message" => "beijing",
"@version" => "1",
"@timestamp" => "2016-11-11T07:41:48.401Z",
"host" => "elk-node1"
}
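Because the configuration also writes to Elasticsearch, the same event is indexed under the elasticsearch output plugin's default daily logstash-YYYY.MM.dd index; a hedged way to check (host taken from the config above, assumes Elasticsearch is reachable):

```shell
# Search for the test event in Elasticsearch (assumes the default
# logstash-* index naming of the elasticsearch output plugin):
curl -s 'http://192.168.1.160:9200/logstash-*/_search?q=message:beijing&pretty'
```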
2) Collecting system logs
[root@elk-node1 ~]# vim file.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["172.16.113.155:9200"] # the master's IP
index => "system-%{+YYYY.MM.dd}"
}
}
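The index => "system-%{+YYYY.MM.dd}" pattern makes Logstash create one index per day, named after each event's timestamp. For comparison, the same name can be produced in plain shell (illustrative only; Logstash performs this expansion itself):

```shell
# Build today's index name the way %{+YYYY.MM.dd} would (UTC, as Logstash uses):
index="system-$(date -u +%Y.%m.%d)"
echo "$index"   # e.g. system-2016.11.11
```

Daily indices keep each index small and make it easy to drop old data by deleting whole indices.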
Run the collection as shown below. The command keeps running the whole time, which means logs are being monitored and collected; if it is interrupted, collection stops, so it should be run in the background.
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &
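A plain & still ties the process to the login shell; a slightly more robust sketch (paths assumed from the example above, pid-file location is arbitrary) uses nohup so the collector survives logout:

```shell
# Keep logstash running after the shell exits; capture its own stdout/stderr:
nohup /opt/logstash/bin/logstash -f file.conf \
    > /var/log/logstash-file.out 2>&1 &
echo $! > /var/run/logstash-file.pid   # remember the PID for a later shutdown
```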
Kibana installation and configuration
1) Download the Kibana package:
wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
Extract it to /usr/local:
tar zxf kibana-4.3.1-linux-x64.tar.gz -C /usr/local
Create a symlink:
ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana
2)修改配置文件:
[root@elk-node1 config]# pwd
/usr/local/kibana/config
[root@elk-node1 config]# cp kibana.yml kibana.yml.bak
[root@elk-node1 config]# vim kibana.yml
server.port: 5601
server.host: "0.0.0.0" # host the server binds to; the default is localhost
elasticsearch.url: "http://192.168.1.160:9200" # URL of the Elasticsearch instance to query; default http://localhost:9200
kibana.index: ".kibana"
Because Kibana runs in the foreground, either keep a dedicated terminal open or use screen. Install screen and start Kibana inside it:
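A minimal screen workflow (the session name "kibana" is arbitrary; paths follow the symlink created above):

```shell
yum install -y screen
screen -S kibana             # open a new session named "kibana"
/usr/local/kibana/bin/kibana # runs in the foreground inside the session
# detach with Ctrl-a d; reattach later with: screen -r kibana
```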
3) Access Kibana: http://172.16.113.155:5601/