12. Kubernetes Notes: Volume Storage (Part 3) Longhorn

Author: Bigyong | Published 2021-12-22 23:34

    Contents
    StorageClass overview
    Common fields
    The Longhorn StorageClass plugin
    Installing Longhorn
    Example 1: create a PVC and have the PV provisioned automatically
    Longhorn data locations and custom resources

    Foreword:

    The PV and PVC model introduced earlier decouples storage types from Pod mounts: developers consume PVCs directly, while the PVs backing the Volume storage system are maintained by administrators. The catch is that before a developer can use a PVC, an administrator must create a matching PV by hand, and as Pods multiply this becomes repetitive, tedious work. Static provisioning does not fit the trend toward automated operations. StorageClass arose to provide dynamically provisioned storage: it acts as a template for creating PVs, automating PV creation and enabling extended features such as backup.

    StorageClass overview

    Abbreviated SC; both PVs and PVCs can belong to a particular SC.
    A template for creating PVs: a storage service can be associated with an SC, and the service's management interface handed to the SC, allowing the SC to CRUD (Create, Read, Update, Delete) storage units on that service. When a PVC is declared against an SC and no existing PV matches it, the SC can call the management interface to create, on demand, a PV that satisfies the PVC's request. This mechanism of supplying PVs is called Dynamic Provisioning.

    Concretely, a StorageClass defines the following two things:

    1. The attributes of the PV, such as storage size and type;
    2. The storage plugin needed to create such PVs, for example Ceph.

    With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass and then call the storage plugin declared by that StorageClass to create the required PV.

    Note that not every storage system supports StorageClass: the storage system must expose an interface that the controller can call, passing in the PVC's parameters, to create the PV.
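
    A quick way to see which provisioners a cluster already offers is to list its StorageClasses (standard kubectl; <class-name> is a placeholder):

    [root@k8s-master storage]# kubectl get sc                     # list StorageClasses and their provisioners
    [root@k8s-master storage]# kubectl describe sc <class-name>   # show a class's provisioner, parameters, and policies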

    Common fields:

    The desired state of a StorageClass resource is defined at the same level as apiVersion, kind, and metadata, not nested inside a spec field. The supported fields include:

    • allowVolumeExpansion <boolean>: whether volume expansion is supported;
    • allowedTopologies <[]Object>: the node topologies in which volumes may be dynamically provisioned; only used when volume scheduling is enabled. Each volume plugin has its own supported topology specification, and an empty topology selector means no topology restriction;
    • provisioner <string>: required; identifies the storage provisioner. The storage class relies on this value to determine which storage plugin to use to adapt to the target storage system. Kubernetes has many built-in provisioners, all named with the kubernetes.io/ prefix, e.g. kubernetes.io/glusterfs;
    • parameters <map[string]string>: the parameters needed to connect to a particular storage service under the given provisioner; the available parameters differ from one provisioner to another;
    • reclaimPolicy <string>: the default reclaim policy for PVs dynamically created by this class; valid values are Delete (the default) and Retain. Statically created PVs keep the policy set in their own definitions;
    • volumeBindingMode <string>: how provisioning and binding are performed for PVCs; defaults to VolumeBindingImmediate. Only effective when volume scheduling is enabled;
    • mountOptions <[]string>: the default list of mount options for PVs dynamically created by this class.
    [root@k8s-master storage]# cat Storageclass-rdb-demo.yaml 
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast-rbd
    provisioner: kubernetes.io/rbd  # the parameters template differs per storage backend; consult the official docs (this one is Ceph RBD). Not every storage type supports StorageClass
    parameters:
      monitors: ceph01.ilinux.io:6789,ceph02.ilinux.io:6789,ceph03.ilinux.io:6789
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      pool: kube
      userId: kube
      userSecretName: ceph-kube-secret
      userSecretNamespace: kube-system
      fsType: ext4
      imageFormat: "2"
      imageFeatures: "layering"
    reclaimPolicy: Retain  # reclaim policy
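
    A PVC that specifies no storageClassName falls back to the cluster's default class. A class is marked as the default with a standard annotation; a sketch using the fast-rbd class defined above:

    [root@k8s-master storage]# kubectl patch storageclass fast-rbd -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'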
    
    The Longhorn StorageClass plugin: richer extended functionality

    Official documentation: https://longhorn.io

    Longhorn greatly improves developer and IT ops efficiency: persistent storage is one mouse click away, with no expensive proprietary solution to pay for. Longhorn also reduces the resources needed to manage data and the operating environment, helping teams focus on delivering code and applications faster.

    True to Rancher's 100% open-source philosophy, Longhorn is a distributed block-storage project built from microservices. Longhorn released its Beta in 2019 and was donated to the CNCF as a Sandbox project in October of the same year. It has drawn wide attention from developers, and thousands of users have stress-tested it and provided invaluable feedback.

    The Longhorn GA release provides a rich set of enterprise-grade storage features, including:

    • Automatic provisioning, snapshots, backup, and restore
    • Volume expansion with zero interruption
    • Cross-cluster disaster-recovery volumes with defined RTO and RPO
    • Live upgrades that do not affect volumes
    • Full-featured Kubernetes CLI integration and a standalone UI
    Installing Longhorn

    Preparation: at least 3 nodes are required, so that a leader can be elected.
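
    The Longhorn repository also ships an environment check script that verifies these prerequisites; the path below is assumed from the v1.1.2 release installed next:

    [root@k8s-master storage]# curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/scripts/environment_check.sh | bash  # checks iscsid, mount propagation, etc.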

    [root@k8s-master storage]#  yum -y install iscsi-initiator-utils  # the iSCSI initiator must be installed first (on every node)
    [root@k8s-master storage]# kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/deploy/longhorn.yaml  # install; pulling the images takes a while
    
    [root@k8s-master storage]# kubectl get pods -n longhorn-system --watch  # wait for all pods to become ready
    
    [root@k8s-master storage]# kubectl get pods --namespace longhorn-system -o wide   # all pods are ready
    NAME                                        READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
    csi-attacher-54c7586574-fvv4p               1/1     Running   0          37m     10.244.3.16    k8s-node3   <none>           <none>
    csi-attacher-54c7586574-swdsr               1/1     Running   0          43m     10.244.2.111   k8s-node2   <none>           <none>
    csi-attacher-54c7586574-zkzrg               1/1     Running   0          43m     10.244.3.10    k8s-node3   <none>           <none>
    csi-provisioner-5ff5bd6b88-bs687            1/1     Running   0          37m     10.244.3.17    k8s-node3   <none>           <none>
    csi-provisioner-5ff5bd6b88-gl4xn            1/1     Running   0          43m     10.244.2.112   k8s-node2   <none>           <none>
    csi-provisioner-5ff5bd6b88-qkzt4            1/1     Running   0          43m     10.244.3.11    k8s-node3   <none>           <none>
    csi-resizer-7699cdfc4-4w49w                 1/1     Running   0          37m     10.244.3.15    k8s-node3   <none>           <none>
    csi-resizer-7699cdfc4-l2j49                 1/1     Running   0          43m     10.244.3.12    k8s-node3   <none>           <none>
    csi-resizer-7699cdfc4-sndlm                 1/1     Running   0          43m     10.244.2.113   k8s-node2   <none>           <none>
    csi-snapshotter-8f58f46b4-6s89m             1/1     Running   0          37m     10.244.2.119   k8s-node2   <none>           <none>
    csi-snapshotter-8f58f46b4-qgv5r             1/1     Running   0          43m     10.244.3.13    k8s-node3   <none>           <none>
    csi-snapshotter-8f58f46b4-tf5ls             1/1     Running   0          43m     10.244.2.115   k8s-node2   <none>           <none>
    engine-image-ei-a5a44787-5ntlm              1/1     Running   0          44m     10.244.1.146   k8s-node1   <none>           <none>
    engine-image-ei-a5a44787-h45hr              1/1     Running   0          44m     10.244.3.6     k8s-node3   <none>           <none>
    engine-image-ei-a5a44787-phnjf              1/1     Running   0          44m     10.244.2.108   k8s-node2   <none>           <none>
    instance-manager-e-4384d6f1                 1/1     Running   0          44m     10.244.2.110   k8s-node2   <none>           <none>
    instance-manager-e-54f46256                 1/1     Running   0          34m     10.244.1.148   k8s-node1   <none>           <none>
    instance-manager-e-e008dd8a                 1/1     Running   0          44m     10.244.3.7     k8s-node3   <none>           <none>
    instance-manager-r-0ad3175d                 1/1     Running   0          44m     10.244.3.8     k8s-node3   <none>           <none>
    instance-manager-r-61277092                 1/1     Running   0          44m     10.244.2.109   k8s-node2   <none>           <none>
    instance-manager-r-d8a9eb0e                 1/1     Running   0          34m     10.244.1.149   k8s-node1   <none>           <none>
    longhorn-csi-plugin-5htsd                   2/2     Running   0          7m41s   10.244.2.123   k8s-node2   <none>           <none>
    longhorn-csi-plugin-hpjgl                   2/2     Running   0          16s     10.244.1.151   k8s-node1   <none>           <none>
    longhorn-csi-plugin-wtkcj                   2/2     Running   0          43m     10.244.3.14    k8s-node3   <none>           <none>
    longhorn-driver-deployer-5479f45d86-l4fpq   1/1     Running   0          57m     10.244.3.4     k8s-node3   <none>           <none>
    longhorn-manager-dgk4d                      1/1     Running   1          57m     10.244.1.145   k8s-node1   <none>           <none>
    longhorn-manager-hb7cl                      1/1     Running   0          57m     10.244.2.107   k8s-node2   <none>           <none>
    longhorn-manager-xrxll                      1/1     Running   0          57m     10.244.3.3     k8s-node3   <none>           <none>
    longhorn-ui-79f8976fbf-sb79r                1/1     Running   0          57m     10.244.3.5     k8s-node3   <none>           <none>
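
    The install also creates a StorageClass named longhorn backed by the driver.longhorn.io provisioner, and registers each cluster node as a Longhorn custom resource; both are worth verifying before moving on:

    [root@k8s-master storage]# kubectl get sc                                     # should list "longhorn"
    [root@k8s-master storage]# kubectl get nodes.longhorn.io -n longhorn-system   # every node should appear here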
    

    Example 1: create a PVC and have the PV provisioned automatically

    [root@k8s-master storage]# cat  pvc-dyn-longhorn-demo.yaml   
    apiVersion: v1
    kind: PersistentVolumeClaim   # resource type
    metadata:
      name: pvc-dyn-longhorn-demo
      namespace: default
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeMode: Filesystem
      resources:
        requests:
          storage: 2Gi   # the new PV is created at this requested minimum capacity
        limits:
          storage: 10Gi
      storageClassName: longhorn  # provision through the longhorn StorageClass
    
    [root@k8s-master storage]# kubectl apply -f pvc-dyn-longhorn-demo.yaml  # create the PVC
    
    • Changing the reclaim policy and replica count (optional; adjust as needed)
    [root@k8s-master storage]# kubectl get sc   # note the reclaim policy: Delete removes the data along with the PVC, which is risky
    NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    longhorn   driver.longhorn.io   Delete          Immediate           true                   48s
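
    Note that the class's reclaim policy is only a template value: it is stamped onto each PV at provisioning time, so changing the class affects newly created PVs only. An existing PV can be patched directly instead (standard Kubernetes; <pv-name> is a placeholder):

    [root@k8s-master storage]# kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'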
    
    [root@k8s-master storage]# vim longhorn.yaml  # change the reclaim policy by editing the manifest
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: longhorn-storageclass
      namespace: longhorn-system
    data:
      storageclass.yaml: |
        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
          name: longhorn
        provisioner: driver.longhorn.io   # Longhorn's own provisioner; data is stored locally, adjust if you need networked storage
        allowVolumeExpansion: true
        reclaimPolicy: Delete
        volumeBindingMode: Immediate
        parameters:
          numberOfReplicas: "3"   # number of data replicas; more replicas mean safer data but higher demands on disk capacity and performance
          staleReplicaTimeout: "2880"
          fromBackup: ""
        reclaimPolicy: Retain   # added field: changes the reclaim policy (overrides the Delete above)
    ...
    ---
    [root@k8s-master storage]# kubectl apply -f longhorn.yaml  # re-apply the configuration
    [root@k8s-master storage]# kubectl delete  -f  pvc-dyn-longhorn-demo.yaml
    [root@k8s-master storage]# kubectl apply   -f  pvc-dyn-longhorn-demo.yaml  # recreate the PVC so the new PV is provisioned with the new policy
    
    [root@k8s-master storage]# kubectl get sc  # the change took effect
    NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    longhorn   driver.longhorn.io   Retain          Immediate           true                   10m
    
    [root@k8s-master storage]# kubectl get pv # the PV was created automatically
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                           STORAGECLASS   REASON   AGE
    pv-nfs-demo002                             10Gi       RWX            Retain           Available                                                           2d5h
    pv-nfs-demo003                             1Gi        RWO            Retain           Available                                                           2d5h
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            Retain           Bound       default/pvc-dyn-longhorn-demo   longhorn                118s
    
    [root@k8s-master storage]# kubectl get pvc # the PVC is bound to the dynamically created PV
    NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-dyn-longhorn-demo   Bound     pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            longhorn       2m19s
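
    Because the class sets allowVolumeExpansion: true, the bound claim can later be grown by patching its storage request, as the sketch below shows; note that Longhorn v1.1.x expands volumes offline, i.e. while they are detached from any Pod:

    [root@k8s-master storage]# kubectl patch pvc pvc-dyn-longhorn-demo -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'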
    
    Modify the Service to open the Longhorn UI
    [root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE   SELECTOR
    csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP   17h   app=csi-attacher
    csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP   17h   app=csi-provisioner
    csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP   17h   app=csi-resizer
    csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP   17h   app=csi-snapshotter
    longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP    17h   app=longhorn-manager
    longhorn-frontend   ClusterIP   10.111.219.113   <none>        80/TCP      17h   app=longhorn-ui  # change this Service's type to NodePort
    
    [root@k8s-master ~]# kubectl edit  svc longhorn-frontend  --namespace longhorn-system 
    service/longhorn-frontend edited
    
    [root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
    csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP      17h   app=csi-attacher
    csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP      17h   app=csi-provisioner
    csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP      17h   app=csi-resizer
    csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP      17h   app=csi-snapshotter
    longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP       17h   app=longhorn-manager
    longhorn-frontend   NodePort    10.111.219.113   <none>        80:30745/TCP   17h   app=longhorn-ui   # open the UI at <node-ip>:30745
    
    

    The UI lets you view existing volumes, create new ones, inspect node information, run backups, and more.
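
    For scripted setups, the same change can be made non-interactively instead of through kubectl edit:

    [root@k8s-master ~]# kubectl patch svc longhorn-frontend -n longhorn-system -p '{"spec":{"type":"NodePort"}}'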


    • Create a redis Pod bound to the PVC
    [root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: volumes-pvc-longhorn-demo
      namespace: default
    spec:
      containers:
      - name: redis
        image: redis:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
          name: redisport
        volumeMounts:
        - mountPath: /data
          name: redis-data-vol
      volumes:
        - name: redis-data-vol
          persistentVolumeClaim:
            claimName: pvc-dyn-longhorn-demo  # the PVC provisioned through the StorageClass
    
    [root@k8s-master storage]# kubectl apply -f volumes-pvc-longhorn-demo.yaml
    
    [root@k8s-master storage]# kubectl get pod -o wide
    NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
    centos-deployment-66d8cd5f8b-95brg   1/1     Running   0          18h     10.244.2.117   k8s-node2   <none>           <none>
    my-grafana-7d788c5479-bpztz          1/1     Running   0          18h     10.244.2.120   k8s-node2   <none>           <none>
    volumes-pvc-longhorn-demo            1/1     Running   0          2m30s   10.244.1.172   k8s-node1   <none>           <none>
    
    [root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh
    /data # redis-cli
    127.0.0.1:6379> set mykey www.qq.com
    OK
    127.0.0.1:6379> bgsabe
    (error) ERR unknown command `bgsabe`, with args beginning with: 
    127.0.0.1:6379> bgsave
    Background saving started
    127.0.0.1:6379> exit
    /data # ls
    dump.rdb    lost+found
    /data # exit
    
    [root@k8s-master storage]# kubectl delete -f  volumes-pvc-longhorn-demo.yaml 
    pod "volumes-pvc-longhorn-demo" deleted
    
    [root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: volumes-pvc-longhorn-demo
      namespace: default
    spec:
      nodeName: k8s-node2  # pin the Pod to this node
      containers:
      - name: redis
        image: redis:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
          name: redisport
        volumeMounts:
        - mountPath: /data
          name: redis-data-vol
      volumes:
        - name: redis-data-vol
          persistentVolumeClaim:
            claimName: pvc-dyn-longhorn-demo
    
    [root@k8s-master storage]# kubectl get pod -o wide -w
    NAME                                 READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
    centos-deployment-66d8cd5f8b-95brg   1/1     Running             0          18h   10.244.2.117   k8s-node2   <none>           <none>
    my-grafana-7d788c5479-bpztz          1/1     Running             0          18h   10.244.2.120   k8s-node2   <none>           <none>
    volumes-pvc-longhorn-demo            1/1     Running             0          68s   10.244.2.124   k8s-node2   <none>           <none>
    
    [root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh # verify the data on the PVC survived
    /data # redis-cli
    127.0.0.1:6379> get mykey
    "www.qq.com"
    127.0.0.1:6379> exit
    /data # exit
    
    
    Longhorn data locations and custom resources
    • Because the replica count was left at its default of 3, each of the three nodes stores a copy of the data
    [root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/ # the default data path
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2
    [root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2/
    revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
    
    [root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/ 
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c
    [root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c/
    revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
    
    [root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0
    [root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0/
    revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
    
    [root@k8s-master storage]# kubectl api-resources --api-group=longhorn.io   # Longhorn's custom resource types
    NAME                   SHORTNAMES   APIGROUP      NAMESPACED   KIND
    backingimagemanagers   lhbim        longhorn.io   true         BackingImageManager
    backingimages          lhbi         longhorn.io   true         BackingImage
    engineimages           lhei         longhorn.io   true         EngineImage
    engines                lhe          longhorn.io   true         Engine
    instancemanagers       lhim         longhorn.io   true         InstanceManager
    nodes                  lhn          longhorn.io   true         Node
    replicas               lhr          longhorn.io   true         Replica
    settings               lhs          longhorn.io   true         Setting
    sharemanagers          lhsm         longhorn.io   true         ShareManager
    volumes                lhv          longhorn.io   true         Volume
    
    [root@k8s-master storage]# kubectl get replicas -n longhorn-system
    NAME                                                  STATE     NODE        DISK                                   INSTANCEMANAGER               IMAGE                               AGE
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-08f28c21   running   k8s-node3   49846d34-b5a8-4a86-96f4-f0d7ca191f2a   instance-manager-r-0ad3175d   longhornio/longhorn-engine:v1.1.2   3h54m
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-6694fc48   running   k8s-node2   ce1ed80b-43c9-4fc9-8266-cedb736bacaa   instance-manager-r-61277092   longhornio/longhorn-engine:v1.1.2   3h54m
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-86a35cd3   running   k8s-node1   3d40b18a-c0e9-459c-b37e-d878152d1261   instance-manager-r-d8a9eb0e   longhornio/longhorn-engine:v1.1.2   3h54m
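
    The same custom resources let you inspect the volume backing our PVC; lhv is the short name for volumes.longhorn.io from the api-resources table above, and the volume name matches the PV name:

    [root@k8s-master storage]# kubectl get lhv -n longhorn-system   # list Longhorn volumes
    [root@k8s-master storage]# kubectl describe lhv pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58 -n longhorn-system   # state, size, replica placement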
    
    
