Deploying a ZooKeeper cluster on k8s

Author: 余空啊 | Published 2020-07-26 13:28

    Preface

    The goal this time is to build a three-node ZooKeeper cluster on k8s. Since a ZooKeeper cluster needs storage, we first have to prepare three Persistent Volumes (PVs).

    Creating zk-pv

    First, create three shared directories on the NFS server:

    mkdir -p /data/share/pv/{zk01,zk02,zk03}
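
    The article doesn't show the NFS server setup itself. As a minimal sketch, assuming the share lives on a host reachable from every k8s node (the 192.168.0.0/24 subnet below is a placeholder for your cluster's network), the export could look like this:

    # append an export for the parent directory to /etc/exports
    echo "/data/share/pv 192.168.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
    # reload the export table
    exportfs -r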
    

    These correspond to the persistence directories of the three pods in the three-node zk cluster. Two notes on the manifest below: the PVs are declared as hostPath volumes, so each path must exist on whatever node a pod lands on (which holds if the NFS share is mounted at the same path on every node); and PersistentVolume is a cluster-scoped resource, so it takes no namespace. With the directories ready, write zk-pv.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: k8s-pv-zk01
      labels:
        app: zk
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      capacity:
        storage: 5Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /data/share/pv/zk01
      persistentVolumeReclaimPolicy: Recycle
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: k8s-pv-zk02
      labels:
        app: zk
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      capacity:
        storage: 5Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /data/share/pv/zk02
      persistentVolumeReclaimPolicy: Recycle
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: k8s-pv-zk03
      labels:
        app: zk
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      capacity:
        storage: 5Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /data/share/pv/zk03
      persistentVolumeReclaimPolicy: Recycle
    ---
    

    Create the PVs with the following command:

    kubectl create -f zk-pv.yaml
    

    The command should report each of the three PersistentVolumes as created.


    We can then inspect the newly created PVs; each should show the Available status until a claim binds to it:

    kubectl get pv -o wide
    

    Creating the ZK cluster

    We deploy the three zk nodes as a StatefulSet, using the PVs we just created as their storage.

    zk.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: zk-hs
      namespace: tools
      labels:
        app: zk
    spec:
      selector:
        app: zk
      clusterIP: None  # headless service: gives each pod a stable DNS record (zk-N.zk-hs)
      ports:
      - name: server
        port: 2888
      - name: leader-election
        port: 3888
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zk-cs
      namespace: tools
      labels:
        app: zk
    spec:
      selector:
        app: zk
      type: NodePort
      ports:
      - name: client
        port: 2181
        nodePort: 21811  # outside the default NodePort range (30000-32767); the apiserver's --service-node-port-range must be widened for this to be accepted
    ---
    apiVersion: policy/v1beta1  # use policy/v1 on Kubernetes 1.21+
    kind: PodDisruptionBudget
    metadata:
      name: zk-pdb
      namespace: tools
    spec:
      selector:
        matchLabels:
          app: zk
      maxUnavailable: 1
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: zk
      namespace: tools
    spec:
      selector:
        matchLabels:
          app: zk # has to match .spec.template.metadata.labels
      serviceName: "zk-hs"
      replicas: 3 # by default is 1
      updateStrategy:
        type: RollingUpdate
      podManagementPolicy: Parallel
      template:
        metadata:
          labels:
            app: zk # has to match .spec.selector.matchLabels
        spec:
          containers:
          - name: zk
            imagePullPolicy: Always
            image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
            resources:
              requests:
                memory: "500Mi"
                cpu: "0.5"
            ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
            command:
            - sh
            - -c
            - "start-zookeeper \
            --servers=3 \
            --data_dir=/var/lib/zookeeper/data \
            --data_log_dir=/var/lib/zookeeper/data/log \
            --conf_dir=/opt/zookeeper/conf \
            --client_port=2181 \
            --election_port=3888 \
            --server_port=2888 \
            --tick_time=2000 \
            --init_limit=10 \
            --sync_limit=5 \
            --heap=512M \
            --max_client_cnxns=60 \
            --snap_retain_count=3 \
            --purge_interval=12 \
            --max_session_timeout=40000 \
            --min_session_timeout=4000 \
            --log_level=INFO"
            readinessProbe:
              exec:
                command:
                - sh
                - -c
                - "zookeeper-ready 2181"
              initialDelaySeconds: 10
              timeoutSeconds: 5
            livenessProbe:
              exec:
                command:
                - sh
                - -c
                - "zookeeper-ready 2181"
              initialDelaySeconds: 10
              timeoutSeconds: 5
            volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
      volumeClaimTemplates:
      - metadata:
          name: datadir
          annotations:
            volume.beta.kubernetes.io/storage-class: "anything"
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 1Gi
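
    For reference (these objects are generated by Kubernetes, not applied by hand): the volumeClaimTemplates section makes the StatefulSet controller create one PVC per replica, named datadir-zk-0 through datadir-zk-2. Each generated claim is roughly equivalent to the sketch below, and binds to one of the 5Gi PVs because the storage-class annotation and access mode match:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: datadir-zk-0          # generated as <template name>-<pod name>
      namespace: tools
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi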
    

    Deploy it with kubectl apply -f zk.yaml.


    Watch the pods with kubectl get pods -n tools; all three (zk-0, zk-1, zk-2) should eventually reach the Running state.

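    Each replica gets a stable identity and a stable DNS name under the headless service (zk-N.zk-hs.tools.svc.cluster.local). A quick way to sanity-check this, and the configuration that start-zookeeper rendered from the flags above, is to read the generated zoo.cfg (the manifest sets conf_dir to /opt/zookeeper/conf):

    # the server.N entries should list all three pods by their
    # zk-N.zk-hs.tools.svc.cluster.local names
    kubectl exec zk-0 -n tools -- cat /opt/zookeeper/conf/zoo.cfg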

    Next, look at the Service status with kubectl get svc -n tools.


    You can see that client port 2181 is exposed outside the cluster through NodePort 21811.
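
    Since this image runs ZooKeeper 3.4.10, the four-letter-word commands are enabled on the client port, so you can probe a node from outside the cluster through the NodePort (<node-ip> is a placeholder for any cluster node's IP):

    # "ruok" should answer "imok"; "srvr" also prints the node's Mode
    echo ruok | nc <node-ip> 21811
    echo srvr | nc <node-ip> 21811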

    Verifying that the ZK cluster started successfully

    We can get a shell inside one of the containers with kubectl exec -it zk-1 -n tools /bin/sh and run zkServer.sh status there.

    The Mode line of the status output shows that this node's ZK is a follower.

    You can also check the status of every zk node at once: for i in 0 1 2; do kubectl exec zk-$i -n tools -- zkServer.sh status; done


    Two followers and one leader: our zk cluster is deployed successfully!
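
    As a final smoke test (a sketch; zkCli.sh ships in this image), write a znode through one member and read it back through another. Seeing the value on a different node confirms the ensemble is replicating:

    # write a test znode through zk-0 ...
    kubectl exec zk-0 -n tools -- zkCli.sh create /hello world
    # ... and read it back through zk-1
    kubectl exec zk-1 -n tools -- zkCli.sh get /hello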
