EFK Deployment

Author: 崔涂涂 | Published 2023-03-30 18:09
Preface

EFK refers to Elasticsearch, Filebeat, and Kibana, all of which will be deployed in the Kubernetes (k8s) environment.
Elasticsearch runs as a StatefulSet and therefore depends on a StorageClass; here GlusterFS + Heketi provide the shared storage.

Environment

Host    IP              Components deployed
master  192.168.66.100  filebeat
node1   192.168.66.101  filebeat
node2   192.168.66.102  filebeat
node3   192.168.66.103  filebeat, glusterfs (master), heketi
node4   192.168.66.104  filebeat, glusterfs
node5   192.168.66.105  filebeat, glusterfs
1. GlusterFS Deployment

Run on all nodes:

sudo apt install glusterfs-server -y
sudo systemctl start glusterfs-server
sudo systemctl enable glusterfs-server

Once the service is running, execute on the GlusterFS master node (node3):

sudo gluster peer probe node4
sudo gluster peer probe node5
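
Optionally confirm that both peers have joined the storage pool before creating a volume:

sudo gluster peer status
sudo gluster pool list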

Test it:

# Run on the two storage nodes (node4 and node5)
sudo mkdir -p /glusterfs/data

# Create the volume on the master node
sudo gluster volume create test-volume replica 2 node4:/glusterfs/data node5:/glusterfs/data
# Start it
sudo gluster volume start test-volume
# Check its status
sudo gluster volume info test-volume
# Mount it
sudo mkdir -p /mnt/mytest
sudo mount -t glusterfs node4:/test-volume /mnt/mytest
# Clean up
sudo umount /mnt/mytest
sudo gluster volume stop test-volume
sudo gluster volume delete test-volume
2. Heketi Deployment

Download the binaries (run on the Heketi node, node3):

wget https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-client-v5.0.1.linux.amd64.tar.gz
wget https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz

Extract and copy the files into place:

tar -zxvf heketi-v5.0.1.linux.amd64.tar.gz
sudo mkdir -p /etc/heketi
sudo cp ./heketi/heketi /usr/bin/
sudo cp ./heketi/heketi-cli /usr/bin/
sudo cp ./heketi/heketi.json /etc/heketi/

Edit the configuration file:

sudo vim /etc/heketi/heketi.json

Changes to make:

......
# Change the port to avoid conflicts
  "port": "18080",
......
# Enable authentication
  "use_auth": true,
......
# Change the admin user's key to adminkey
      "key": "adminkey"
......
# Switch the executor to ssh and configure the required SSH key. Heketi must be able to
# log in to every GlusterFS node via passwordless SSH, so use ssh-copy-id to copy the
# public key to each GlusterFS server (done in the next step).
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
......
# Location of the heketi database file
    "db": "/var/lib/heketi/heketi.db"
......
# Adjust the log level
    "loglevel" : "warning"

Configure the SSH key:

ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 600 /etc/heketi/heketi_key

# Copy the public key to the GlusterFS nodes
ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.66.104
ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.66.105

# Verify that key-based SSH to a GlusterFS node works
ssh -i /etc/heketi/heketi_key root@192.168.66.104

Start Heketi:

/usr/bin/heketi --config=/etc/heketi/heketi.json

Test it:

curl 192.168.66.103:18080/hello

Create a systemd service:

sudo vim /etc/systemd/system/heketi.service

Paste in:

[Unit]
Description=Heketi Server
 
[Service]
Type=simple
EnvironmentFile=-/etc/heketi/heketi.json
WorkingDirectory=/var/lib/heketi
User=root
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
 
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Enable and start it:

sudo systemctl daemon-reload
sudo systemctl start heketi
sudo systemctl enable heketi
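
If the service does not come up cleanly, the unit status and journal are the quickest places to look (standard systemd commands):

sudo systemctl status heketi
sudo journalctl -u heketi --no-pager -n 50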

Register GlusterFS with Heketi

Create a cluster:

heketi-cli --user admin --server http://192.168.66.103:18080 --secret adminkey --json cluster create

Add the two GlusterFS storage nodes (node4 and node5) to the cluster.
Because Heketi authentication is enabled, every heketi-cli invocation needs a set of authentication flags, which gets tedious. Create an alias to avoid repeating them:

alias heketi-cli='heketi-cli --server "http://192.168.66.103:18080" --user "admin" --secret "adminkey"'
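
With the alias in place, a quick sanity check that the server and credentials work (it should list the id of the cluster created above):

heketi-cli cluster list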

Write the topology file:

sudo vim /etc/heketi/topology.json

Paste in:

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.66.104"
                            ],
                            "storage": [
                                "192.168.66.104"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.66.105"
                            ],
                            "storage": [
                                "192.168.66.105"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb"
                    ]
                }
            ]
        }
    ]
}

Here devices lists the storage disks on the two GlusterFS nodes. Note that these must be bare disks without a file system; on a virtual machine, simply attach an additional virtual disk.
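
A quick way to confirm /dev/sdb really is a bare disk is to look for existing filesystem or RAID signatures; wipefs without options only reports them, while the commented-out -a flag would erase them (destroying any data on the disk):

lsblk -f /dev/sdb
sudo wipefs /dev/sdb        # list existing signatures, if any
# sudo wipefs -a /dev/sdb   # erase them (destroys data on the disk!)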

Load the topology:

heketi-cli topology load --json=/etc/heketi/topology.json

Test creating a volume through Heketi:

heketi-cli --json volume create --size 3 --replica 2

The create command failed with the following error:

Error: /usr/sbin/thin_check: execvp failed: No such file or directory
  WARNING: Integrity check of metadata for pool vg_90bf6daf935a1e117adca466e171ac0f/tp_82c5ac62341c4231936f26228761c1d9 failed.
  /usr/sbin/thin_check: execvp failed: No such file or directory
  Check of pool vg_90bf6daf935a1e117adca466e171ac0f/tp_82c5ac62341c4231936f26228761c1d9 failed (status:2). Manual repair required!
  Failed to activate thin pool vg_90bf6daf935a1e117adca466e171ac0f/tp_82c5ac62341c4231936f26228761c1d9.
Removal of pool metadata spare logical volume vg_90bf6daf935a1e117adca466e171ac0f/lvol0_pmspare disables automatic recovery attempts after damage to a thin or cache pool. Proceed? [y/n]: [n]
  Logical volume vg_90bf6daf935a1e117adca466e171ac0f/lvol0_pmspare not removed.

This requires installing the thin-provisioning-tools package on all GlusterFS nodes:

sudo apt -y install thin-provisioning-tools

Re-run the create command, then list the volume and delete it:

heketi-cli volume list
heketi-cli volume delete <id>
3. Elasticsearch Deployment

Create the kube-logging.yaml file:

apiVersion: v1
kind: Namespace
metadata:
  name: logging

Create the Namespace:

kubectl apply -f kube-logging.yaml

Create the elasticsearch-svc.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node

Create the elasticsearch-storageclass.yaml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: es-data-db
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: http://192.168.66.103:18080
  restuser: "admin"
  restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
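
Once the Namespace and this StorageClass have been applied (the apply step comes further below), you can optionally verify dynamic provisioning with a throwaway PVC before relying on it for ES. A minimal sketch; the name test-pvc is arbitrary:

cat <<EOF | kubectl apply -n logging -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: es-data-db
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should reach Bound shortly if Heketi provisioning works
kubectl get pvc -n logging test-pvc
kubectl delete pvc -n logging test-pvc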

Create the elasticsearch-statefulset.yaml file:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels: 
        app: elasticsearch
    spec:
      nodeSelector:
        es: log
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
        imagePullPolicy: IfNotPresent
        ports:
        - name: rest
          containerPort: 9200
        - name: inter
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 200m
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "es-0,es-1,es-2"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: discovery.seed_hosts
          value: "elasticsearch"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: network.host
          value: "0.0.0.0"
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: es-data-db
      resources:
        requests:
          storage: 3Gi

Note that a nodeSelector is set for the ES Pods here. I only labeled node4 and node5, which means only 2 ES instances come up in my setup.

kubectl label node node4 es=log
kubectl label node node5 es=log
kubectl get nodes --show-labels

Create a Makefile (recipe lines must be indented with tabs):

.PHONY: apply delete
apply:
    kubectl apply -n logging -f elasticsearch-svc.yaml
    kubectl apply -n logging -f elasticsearch-statefulset.yaml
delete:
    kubectl delete -n logging -f elasticsearch-svc.yaml
    kubectl delete -n logging -f elasticsearch-statefulset.yaml

Apply everything:

 kubectl apply -f kube-logging.yaml
 kubectl apply -n logging -f elasticsearch-storageclass.yaml
 make apply

Check the status with kubectl get all -n logging.
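
Note that kubectl get all does not list PersistentVolumeClaims, so also check that the claims were bound by the StorageClass:

kubectl get pvc -n logging
kubectl get pv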

Test it
Once the Pods are up and running, forward the port from a k8s node:

kubectl port-forward es-0 9200:9200 --namespace=logging

Open another terminal on the same node:

curl http://localhost:9200/_cluster/state?pretty
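
Beyond the cluster state, the health endpoint gives a quick pass/fail view; number_of_nodes should match the number of ES Pods that actually started:

curl http://localhost:9200/_cluster/health?pretty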
4. Kibana Deployment

Create the kibana.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    targetPort: 5601
    nodePort: 5601
  type: NodePort
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.6.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601

Create a Makefile:

.PHONY: apply delete
apply:
    kubectl apply -f kibana.yaml
delete:
    kubectl delete -f kibana.yaml

Apply it:

make apply

Open <any k8s node IP>:5601 in a browser. Note that nodePort 5601 lies outside the default NodePort range (30000-32767), so this only works if the apiserver's --service-node-port-range has been extended; otherwise pick a port inside the default range.
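
If extending the NodePort range is not an option, a port-forward to the Deployment works as well (a minimal alternative using the resource names from the manifest above):

kubectl port-forward -n logging deployment/kibana 5601:5601
# then open http://localhost:5601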

5. Filebeat Deployment

Download the Helm chart (add the elastic repo first if it is not already configured):

helm3 repo add elastic https://helm.elastic.co
helm3 fetch elastic/filebeat
tar -zxvf filebeat-xxxx.tgz

Modify the configuration section of values.yaml:

filebeatConfig:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/lib/docker/containers/*/*.log
      processors:
        - add_kubernetes_metadata: # enrich events with Kubernetes metadata
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/lib/docker/containers/"
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      indices:
        - index: "liaotian"
          when.regexp: # match using a when.regexp regular expression
            # kubernetes.labels.app: user-.*
            kubernetes.namespace: liaotian
            
extraEnvs:   # set the locale
  - name: LANG
    value: en_US.UTF-8
extraVolumeMounts:   # mount point inside the container
  - name: sysdate
    mountPath: /etc/localtime
extraVolumes:    # host path for the mount
  - name: sysdate
    hostPath:
      path: /etc/localtime

Create a Makefile:

.PHONY: apply delete
apply:
    helm3 install -n logging filebeat .
delete:
    helm3 uninstall -n logging filebeat

Start Filebeat:

make apply
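
Once the release is installed, you can confirm the Filebeat DaemonSet has a Pod on every node and, re-using the earlier port-forward to es-0, check that the liaotian index appears once matching logs arrive. A minimal check, assuming the chart's default resource names:

kubectl get daemonset -n logging
kubectl get pods -n logging -o wide | grep filebeat

# with the port-forward to es-0 still running:
curl 'http://localhost:9200/_cat/indices?v' | grep liaotian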

Once the Pods on all nodes are Running, open the Kibana page and click Management.



Click Index patterns, then click Create index pattern.



Enter liaotian, click Next, and once the index pattern is created, click Discover in the left sidebar to search the logs.


