Deploying a ZooKeeper Cluster on Kubernetes

Author: 余空啊 | Published 2020-07-26 13:28

Preface

The goal of this post is to stand up a three-node ZooKeeper cluster on Kubernetes. Because ZooKeeper needs durable storage, we first prepare three Persistent Volumes (PVs).

Creating the zk PVs

First create three shared directories on the NFS server:

mkdir -p /data/share/pv/{zk01,zk02,zk03}

These correspond to the persistence directories of the three pods in the ZooKeeper cluster. (Note that the manifest below actually uses a hostPath volume source rather than nfs, so the directories must exist on the node where each pod lands; also, PersistentVolumes are cluster-scoped objects, so the namespace field on them is effectively ignored.) With the directories in place, write the manifest zk-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk01
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk01
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk02
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk02
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk03
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk03
  persistentVolumeReclaimPolicy: Recycle
---

Create the PVs with the following command:

kubectl create -f zk-pv.yaml

If the command reports the three PersistentVolumes as created, everything went well. We can then inspect them:

kubectl get pv -o wide

Creating the ZooKeeper cluster

We deploy the three ZooKeeper nodes with a StatefulSet, using the PVs created above as the backing storage.
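With a StatefulSet behind the headless service zk-hs, each replica gets a stable DNS name of the form zk-&lt;ordinal&gt;.zk-hs.tools.svc.cluster.local, which is what lets the ZooKeeper ensemble members address each other. A small Python sketch of how a startup script like start-zookeeper could derive the zoo.cfg server list from this naming convention (the function is illustrative, not the image's actual implementation):

```python
def zk_server_entries(replicas, statefulset="zk", service="zk-hs", namespace="tools"):
    """Build zoo.cfg server.N lines from StatefulSet pod DNS names (illustrative)."""
    entries = []
    for ordinal in range(replicas):
        # Each StatefulSet pod gets a stable DNS name via the headless service.
        host = f"{statefulset}-{ordinal}.{service}.{namespace}.svc.cluster.local"
        # ZooKeeper server ids are 1-based; 2888 is the quorum port, 3888 the election port.
        entries.append(f"server.{ordinal + 1}={host}:2888:3888")
    return entries

for line in zk_server_entries(3):
    print(line)
```

This stability of names across pod restarts is the main reason a StatefulSet, rather than a Deployment, is the right fit for ZooKeeper.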

zk.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
  - name: server
    port: 2888
  - name: leader-election
    port: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 21811
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk # has to match .spec.template.metadata.labels
  serviceName: "zk-hs"
  replicas: 3 # by default is 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: zk
        imagePullPolicy: Always
        image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
        resources:
          requests:
            memory: "500Mi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
        --servers=3 \
        --data_dir=/var/lib/zookeeper/data \
        --data_log_dir=/var/lib/zookeeper/data/log \
        --conf_dir=/opt/zookeeper/conf \
        --client_port=2181 \
        --election_port=3888 \
        --server_port=2888 \
        --tick_time=2000 \
        --init_limit=10 \
        --sync_limit=5 \
        --heap=512M \
        --max_client_cnxns=60 \
        --snap_retain_count=3 \
        --purge_interval=12 \
        --max_session_timeout=40000 \
        --min_session_timeout=4000 \
        --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
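The volumeClaimTemplates section creates one PVC per replica rather than a single shared claim; each claim binds to one of the PVs created earlier because the storage-class annotation ("anything"), access mode, and requested size all match. The claims follow a fixed naming convention, sketched here (illustrative):

```python
def pvc_names(template, statefulset, replicas):
    """StatefulSet PVCs are named <template>-<statefulset>-<ordinal>."""
    return [f"{template}-{statefulset}-{i}" for i in range(replicas)]

print(pvc_names("datadir", "zk", 3))  # ['datadir-zk-0', 'datadir-zk-1', 'datadir-zk-2']
```

Because each PVC keeps its ordinal, a restarted or rescheduled pod reattaches to the same volume, so ZooKeeper's data directory survives pod churn.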

Deploy it with kubectl apply -f zk.yaml.

Check the pods with kubectl get pods -n tools.

Once all three pods are in the Running state, check the Service with kubectl get svc -n tools.

We can see that client port 2181 has been exposed outside the cluster through NodePort 21811.
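One caveat: 21811 lies outside Kubernetes' default NodePort range of 30000-32767, so the apiserver only accepts this Service if --service-node-port-range has been widened on the cluster. A tiny check of that constraint (illustrative):

```python
def in_default_nodeport_range(port):
    """Kubernetes' default --service-node-port-range is 30000-32767."""
    return 30000 <= port <= 32767

print(in_default_nodeport_range(21811))  # False: needs a widened port range
```

If your cluster uses the default range, pick a nodePort such as 32181 instead.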

Verifying that the ZooKeeper cluster started successfully

We can enter a container with kubectl exec -it zk-1 -n tools -- /bin/sh and run zkServer.sh status; a "Mode: follower" line shows that this node's ZooKeeper is a follower.

We can also query the status of every zk node directly:

for i in 0 1 2; do kubectl exec zk-$i -n tools -- zkServer.sh status; done

Two followers and one leader mean our ZooKeeper cluster was deployed successfully!
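The per-node check above is easy to automate; here is a sketch that tallies the Mode: lines from zkServer.sh status output (the sample strings are illustrative, not captured from this cluster):

```python
from collections import Counter

def summarize_modes(outputs):
    """Count 'Mode: leader/follower' lines from zkServer.sh status output."""
    modes = Counter()
    for text in outputs:
        for line in text.splitlines():
            if line.startswith("Mode:"):
                modes[line.split(":", 1)[1].strip()] += 1
    return modes

# Illustrative output shapes from the three nodes.
sample = [
    "ZooKeeper JMX enabled by default\nMode: follower",
    "ZooKeeper JMX enabled by default\nMode: leader",
    "ZooKeeper JMX enabled by default\nMode: follower",
]
modes = summarize_modes(sample)
print(modes)  # a healthy 3-node ensemble shows 1 leader and 2 followers
```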


Original link: https://www.haomeiwen.com/subject/rhpwlktx.html