2. Running a ZooKeeper cluster on PV and PVC

Author: 哆啦A梦_ca52 | Published 2019-12-04 22:56

    PV and PVC in practice:
    By default, the files on a container's disk are not persistent, which raises two problems for applications running in containers: first, when a container crashes and the kubelet restarts it, the files are lost; second, containers running in the same Pod often need to share files. Kubernetes Volumes solve both problems.
    https://v1-14.docs.kubernetes.io/zh/docs/concepts/storage/ #official documentation

    A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator. Just as a node is a cluster resource, a PV is a storage resource of the cluster. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual Pod that uses them. The PV API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system. A PV is a description of storage added by an administrator; it is a cluster-wide resource that does not belong to any namespace, and it records the storage type, size, access modes, and so on. Its lifecycle is independent of Pods: deleting a Pod that uses a PV has no effect on the PV itself.
    A PersistentVolumeClaim (PVC) is a user's request for storage. It is similar to a Pod: Pods consume node resources, PVCs consume PV resources. Just as a Pod can request specific amounts of resources (CPU and memory), a PVC can request a specific size and access mode. PVCs are namespaced resources.
    Kubernetes has supported PersistentVolume and PersistentVolumeClaim since version 1.0.
    A PV is an abstraction over the underlying network storage: it exposes the network storage as a storage resource, so one large pool of storage can be carved into pieces and handed out to different workloads.
    A PVC is a request for, and consumer of, PV resources, just as a Pod consumes node resources: the Pod writes its data through the PVC to the PV, and the PV writes it to the backing storage.
    

    PersistentVolume parameters:

    # kubectl explain PersistentVolume
    capacity: #size of the PV, kubectl explain PersistentVolume.spec.capacity
    accessModes: #access modes, kubectl explain PersistentVolume.spec.accessModes
    ReadWriteOnce – the volume can be mounted read-write by a single node (RWO)
    ReadOnlyMany – the volume can be mounted read-only by many nodes (ROX)
    ReadWriteMany – the volume can be mounted read-write by many nodes (RWX)
    persistentVolumeReclaimPolicy #reclaim policy: what happens to the volume once it is released
    #kubectl explain PersistentVolume.spec.persistentVolumeReclaimPolicy
    Retain – the PV is kept after the claim is released and must be cleaned up manually by an administrator
    Recycle – the volume is scrubbed (all data, including directories and hidden files, is deleted); currently only NFS and hostPath support this
    Delete – the volume is deleted automatically
    volumeMode #volume mode, kubectl explain PersistentVolume.spec.volumeMode
    whether the volume is exposed as a raw block device or with a filesystem; the default is Filesystem
    mountOptions #a list of additional mount options, for finer-grained control
    ro #etc.
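
    To see how these fields fit together, here is a minimal PV sketch (an illustration added for this write-up, not from the original course material; the export path is a placeholder):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      volumeMode: Filesystem
      mountOptions:
        - noatime            # example NFS mount option
      nfs:
        server: 192.168.200.201
        path: /data/k8sdata/example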
    

    The official documentation lists, for each storage backend, which access modes a PV supports:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options

    PersistentVolumeClaim parameters:
    #kubectl explain PersistentVolumeClaim
    accessModes: #PVC access modes, kubectl explain PersistentVolumeClaim.spec.accessModes
    ReadWriteOnce – the claim can be mounted read-write by a single node (RWO)
    ReadOnlyMany – the claim can be mounted read-only by many nodes (ROX)
    ReadWriteMany – the claim can be mounted read-write by many nodes (RWX)
    resources: #the amount of storage the PVC requests
    selector: #label selector used to pick the PV to bind to
    matchLabels #match PVs by label key/value
    matchExpressions #match PVs with set-based label expressions (In, NotIn, Exists, ...)
    volumeName #the name of the PV to bind to
    volumeMode #volume mode
    whether the claim is consumed as a raw block device or with a filesystem; the default is Filesystem
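
    A matching PVC sketch (again an illustration, not from the original article; the names and labels are placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-pvc
      namespace: linux37
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      selector:
        matchLabels:
          release: example-pv   # binds only to PVs carrying this label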
    
    

    Hands-on case: a ZooKeeper cluster
    Run a ZooKeeper cluster that uses PVs and PVCs as its backing storage.
    Pull the JDK base image:

    #docker pull elevy/slim_java:8
    Check the Java version:
    root@master:~/metrics-server/deploy/1.8+# docker run -it --rm elevy/slim_java:8 sh 
    / # java -version
    java version "1.8.0_144"
    Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
    Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
    Tag the image for the local Harbor registry:
    root@master:/opt/k8s-data/yaml/web/linux37/nginx# docker tag elevy/slim_java:8 harbor.wyh.net/baseimages/slim_java:1.8.0_144
    Push the image:
    root@master:/opt/k8s-data/yaml/web/linux37/nginx# docker push harbor.wyh.net/baseimages/slim_java:1.8.0_144
    

    Prepare the ZooKeeper image:
    https://www.apache.org/dist/zookeeper #official download site for ZooKeeper release packages

    root@master:/opt/k8s-data/dockerfile# mkdir linux37
    root@master:/opt/k8s-data/dockerfile/linux37/zookeeper# ls
    bin  build-command.sh  conf  Dockerfile  entrypoint.sh  zookeeper-3.12-Dockerfile.tar.gz
    Point the Dockerfile at the base image in the local Harbor:
    root@master:/opt/k8s-data/dockerfile/linux37/zookeeper# vim Dockerfile 
    FROM harbor.wyh.net/baseimages/slim_java:1.8.0_144
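    Only the FROM line is changed here; the rest of the Dockerfile ships in the course archive and is not reproduced in the post. For orientation, a Dockerfile for this kind of image typically looks something like the sketch below (an assumption, not the actual file; it assumes the slim_java base image is Alpine-based):

    FROM harbor.wyh.net/baseimages/slim_java:1.8.0_144
    ENV ZK_VERSION=3.4.14
    # download and unpack ZooKeeper, then drop in the customized conf, bin and entrypoint
    RUN apk add --no-cache --virtual .build-deps wget tar ca-certificates \
     && wget -q https://archive.apache.org/dist/zookeeper/zookeeper-${ZK_VERSION}/zookeeper-${ZK_VERSION}.tar.gz \
     && tar -xzf zookeeper-${ZK_VERSION}.tar.gz \
     && mv zookeeper-${ZK_VERSION} /zookeeper \
     && rm zookeeper-${ZK_VERSION}.tar.gz \
     && apk del .build-deps
    COPY conf/ /zookeeper/conf/
    COPY bin/ /zookeeper/bin/
    COPY entrypoint.sh /
    EXPOSE 2181 2888 3888
    ENTRYPOINT ["/entrypoint.sh"]
    CMD ["zkServer.sh", "start-foreground"]
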
    Edit the build script:
    root@master:/opt/k8s-data/dockerfile/linux37/zookeeper# vim build-command.sh 
    #!/bin/bash
    TAG=$1
    docker build -t harbor.wyh.net/linux37/zookeeper:${TAG} .
    sleep 1
    docker push  harbor.wyh.net/linux37/zookeeper:${TAG}
    
    
    Run the script to build and push the image:
    root@master:/opt/k8s-data/dockerfile/linux37/zookeeper# bash build-command.sh v3.4.14
    Run the image locally to make sure ZooKeeper starts:
    docker run -it --rm -p 2181:2181  harbor.wyh.net/linux37/zookeeper:v3.4.14
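
    A quick way to confirm the standalone container is answering (not in the original post; ruok and stat are ZooKeeper's built-in four-letter-word commands, enabled by default in 3.4.x):

    # from another shell on the same host (some nc builds need -q 1)
    echo ruok | nc 127.0.0.1 2181    # should print: imok
    echo stat | nc 127.0.0.1 2181    # prints version, Mode: standalone and client stats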
    
    
    Connect to it to verify it responds, then create the directory that will hold the YAML manifests:
    root@master:/opt/k8s-data/yaml# mkdir linux37
    
    root@master:/opt/k8s-data/yaml/linux37/zookeeper/pv# cat zookeeper-persistentvolume.yaml 
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: zookeeper-datadir-pv-1
      namespace: linux37
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce 
      nfs:
        server: 192.168.200.201 
        path: /data/k8sdata/linux37/zookeeper-datadir-1 
    
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: zookeeper-datadir-pv-2
      namespace: linux37
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.200.201
        path: /data/k8sdata/linux37/zookeeper-datadir-2 
    
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: zookeeper-datadir-pv-3
      namespace: linux37
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        server: 192.168.200.201  
        path: /data/k8sdata/linux37/zookeeper-datadir-3 
    Create the three data directories on the NFS server (note: PersistentVolumes are cluster-scoped, so the namespace field in the manifests above is ignored):
    root@haproxy1:~# mkdir /data/k8sdata/linux37/zookeeper-datadir-3 -pv
    mkdir: created directory '/data/k8sdata/linux37/zookeeper-datadir-3'
    root@haproxy1:~# mkdir /data/k8sdata/linux37/zookeeper-datadir-2 -pv
    mkdir: created directory '/data/k8sdata/linux37/zookeeper-datadir-2'
    root@haproxy1:~# mkdir /data/k8sdata/linux37/zookeeper-datadir-1 -pv
    Export the directory over NFS:
    root@haproxy1:~# vim /etc/exports
    /data/k8sdata/linux37 *(rw,no_root_squash)
    root@haproxy1:~# systemctl restart nfs-server
    Check the export from the master:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper/pv# showmount -e 192.168.200.201
    Export list for 192.168.200.201:
    /data/k8sdata/linux37 *
    /data/linux37         *
    Create the PVs:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper/pv# kubectl apply -f zookeeper-persistentvolume.yaml 
    persistentvolume/zookeeper-datadir-pv-1 created
    persistentvolume/zookeeper-datadir-pv-2 created
    persistentvolume/zookeeper-datadir-pv-3 created
    Create the PVCs:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper/pv# kubectl apply -f zookeeper-persistentvolumeclaim.yaml
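
    The zookeeper-persistentvolumeclaim.yaml file is not reproduced in the original post. Judging from the PVs above and the kubectl get pvc output below, it most likely looks like the following sketch (one claim per PV, bound explicitly via volumeName; the requested size only needs to fit within the 20Gi PVs):

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: zookeeper-datadir-pvc-1
      namespace: linux37
    spec:
      accessModes:
        - ReadWriteOnce
      volumeName: zookeeper-datadir-pv-1
      resources:
        requests:
          storage: 10Gi
    ---
    # zookeeper-datadir-pvc-2 and zookeeper-datadir-pvc-3 follow the same pattern,
    # bound to zookeeper-datadir-pv-2 and zookeeper-datadir-pv-3 respectively.
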
    Check that the PVCs are bound:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper/pv# kubectl get pvc -n linux37
    NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   20Gi       RWO                           4m15s
    zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   20Gi       RWO                           4m14s
    zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   20Gi       RWO                           4m14s
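
    Optionally confirm the PVs themselves report Bound as well (PVs are cluster-scoped, so no -n flag is needed; output not shown in the original):

    root@master:~# kubectl get pv
    # expect zookeeper-datadir-pv-1/2/3 with STATUS Bound and CLAIM linux37/zookeeper-datadir-pvc-*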
    
    

    Edit the image address in the ZooKeeper deployment YAML:

    root@master:/opt/k8s-data/yaml/linux37/zookeeper# cat zookeeper.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper
      namespace: linux37
    spec:
      ports:
        - name: client
          port: 2181
      selector:
        app: zookeeper
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper1
      namespace: linux37
    spec:
      type: NodePort        
      ports:
        - name: client
          port: 2181
          nodePort: 42181
        - name: followers
          port: 2888
        - name: election
          port: 3888
      selector:
        app: zookeeper
        server-id: "1"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper2
      namespace: linux37
    spec:
      type: NodePort        
      ports:
        - name: client
          port: 2181
          nodePort: 42182
        - name: followers
          port: 2888
        - name: election
          port: 3888
      selector:
        app: zookeeper
        server-id: "2"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper3
      namespace: linux37
    spec:
      type: NodePort        
      ports:
        - name: client
          port: 2181
          nodePort: 42183
        - name: followers
          port: 2888
        - name: election
          port: 3888
      selector:
        app: zookeeper
        server-id: "3"
    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: zookeeper1
      namespace: linux37
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: zookeeper
            server-id: "1"
        spec:
          containers:
            - name: server
              image: harbor.wyh.net/linux37/zookeeper:v3.4.14  
              imagePullPolicy: Always
              env:
                - name: MYID
                  value: "1"
                - name: SERVERS
                  value: "zookeeper1,zookeeper2,zookeeper3"
                - name: JVMFLAGS
                  value: "-Xmx2G"
              ports:
                - containerPort: 2181
                - containerPort: 2888
                - containerPort: 3888
              volumeMounts:
              - mountPath: "/zookeeper/data"
                name: zookeeper-datadir-pvc-1 
          volumes:
            - name: data
              emptyDir: {}
            - name: wal
              emptyDir:
                medium: Memory
            - name: zookeeper-datadir-pvc-1
              persistentVolumeClaim:
                claimName: zookeeper-datadir-pvc-1
    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: zookeeper2
      namespace: linux37
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: zookeeper
            server-id: "2"
        spec:
          containers:
            - name: server
              image: harbor.wyh.net/linux37/zookeeper:v3.4.14 
              imagePullPolicy: Always
              env:
                - name: MYID
                  value: "2"
                - name: SERVERS
                  value: "zookeeper1,zookeeper2,zookeeper3"
                - name: JVMFLAGS
                  value: "-Xmx2G"
              ports:
                - containerPort: 2181
                - containerPort: 2888
                - containerPort: 3888
              volumeMounts:
              - mountPath: "/zookeeper/data"
                name: zookeeper-datadir-pvc-2 
          volumes:
            - name: data
              emptyDir: {}
            - name: wal
              emptyDir:
                medium: Memory
            - name: zookeeper-datadir-pvc-2
              persistentVolumeClaim:
                claimName: zookeeper-datadir-pvc-2
    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: zookeeper3
      namespace: linux37
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: zookeeper
            server-id: "3"
        spec:
          containers:
            - name: server
              image: harbor.wyh.net/linux37/zookeeper:v3.4.14 
              imagePullPolicy: Always
              env:
                - name: MYID
                  value: "3"
                - name: SERVERS
                  value: "zookeeper1,zookeeper2,zookeeper3"
                - name: JVMFLAGS
                  value: "-Xmx2G"
              ports:
                - containerPort: 2181
                - containerPort: 2888
                - containerPort: 3888
              volumeMounts:
              - mountPath: "/zookeeper/data"
                name: zookeeper-datadir-pvc-3
          volumes:
            - name: data
              emptyDir: {}
            - name: wal
              emptyDir:
                medium: Memory
            - name: zookeeper-datadir-pvc-3
              persistentVolumeClaim:
                claimName: zookeeper-datadir-pvc-3
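
    Note that each Deployment originally declared two volumes: keys in its pod spec; they are merged into one list above so the manifest is valid YAML. Also note that the nodePort values 42181-42183 lie outside the default NodePort range (30000-32767), so the kube-apiserver in this cluster must have been started with a wider range, for example (the exact value is an assumption):

    # kube-apiserver flag, set in its systemd unit or static pod manifest
    --service-node-port-range=30000-65000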
    
    

    Create the ZooKeeper services and deployments:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper# kubectl apply -f zookeeper.yaml
    Check the ZooKeeper pods:

    root@master:/opt/k8s-data/yaml/linux37/zookeeper# kubectl get pod -n linux37
    NAME                                             READY   STATUS    RESTARTS   AGE
    linux37-nginx-deployment-999888886-r7tm8         1/1     Running   0          44m
    linux37-tomcat-app1-deployment-d44f446c7-dcm9h   1/1     Running   0          179m
    linux37-tomcat-app1-deployment-d44f446c7-sh9jb   1/1     Running   0          44m
    linux37-tomcat-app2-deployment-6cfff645c-qp8md   1/1     Running   0          44m
    linux37-tomcat-app2-deployment-6cfff645c-tvgn2   1/1     Running   0          44m
    zookeeper1-65c6df65fd-hqhr9                      1/1     Running   0          31m
    zookeeper2-8c5c66547-xrghg                       1/1     Running   0          31m
    zookeeper3-7c9d855d64-zrpxp                      1/1     Running   0          31m
    

    Exec into a pod and confirm the NFS volume is mounted:

    root@master:/opt/k8s-data/yaml/linux37/zookeeper# kubectl exec -it zookeeper3-7c9d855d64-zrpxp sh -n linux37
    / # df -h | grep 192.168.200.201
    192.168.200.201:/data/k8sdata/linux37/zookeeper-datadir-3
    Check the status; what matters is that the node is not running in standalone mode:
    / # zkServer.sh  status
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 9010
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    Check another member's status:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper# kubectl exec -it zookeeper1-65c6df65fd-hqhr9 sh -n linux37
    / # zkServer.sh  status
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 9010
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /zookeeper/bin/../conf/zoo.cfg
    Mode: leader
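
    To go one step further than checking the mode, a znode created on one member should be readable on another (not part of the original post; zkCli.sh ships inside the image):

    # inside zookeeper1's pod
    / # zkCli.sh -server 127.0.0.1:2181 create /linux37-test "hello"
    # inside zookeeper3's pod - the znode should have been replicated
    / # zkCli.sh -server 127.0.0.1:2181 get /linux37-test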
    
    
    Check the persisted data on the NFS server

    The myid file shows which ZooKeeper instance a directory belongs to:

    root@haproxy1:~# cat  /data/k8sdata/linux37/zookeeper-datadir-1/myid 
    1
    

    Resolve the ZooKeeper service names through DNS:

    First try from the master host itself; this fails, because the service name only resolves inside the cluster:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper# nslookup zookeeper1
    Server:     127.0.0.53
    Address:    127.0.0.53#53
    
    ** server can't find zookeeper1: SERVFAIL
    
    From inside one of the pods, the name resolves correctly:
    root@master:/opt/k8s-data/yaml/linux37/zookeeper# kubectl exec -it zookeeper3-7c9d855d64-zrpxp sh -n linux37
    / # nslookup zookeeper1
    nslookup: can't resolve '(null)': Name does not resolve
    
    Name:      zookeeper1
    Address 1: 10.20.176.109 zookeeper1.linux37.svc.linux37.local
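
    The ensemble is also reachable from outside the cluster through the NodePort services (not shown in the original; <node-ip> is any Kubernetes node's address, and stat is a built-in four-letter-word command):

    echo stat | nc <node-ip> 42181    # should report Mode: leader or follower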
    
    
