130. Kubernetes Storage (GlusterFS): PV and PVC

Author: davisgao | Published 2018-11-13 10:47

    1. Installing GlusterFS

    See the earlier post on glusterfs.

    2. Creating a volume in GlusterFS to back a Kubernetes PV

    [root@host214 heketi]# gluster volume create kube-volume replica 2 transport tcp 10.20.16.227:/brick1/kube-volume 10.20.16.228:/brick1/kube-volume force
    volume create: kube-volume: success: please start the volume to access data
    [root@host214 heketi]# gluster volume info kube-volume 
     
    Volume Name: kube-volume
    Type: Replicate
    Volume ID: 22809750-a531-4347-a3ba-3bedd0686424
    Status: Created
    Snapshot Count: 0
    Xlator 1: BD
    Capability 1: thin
    Capability 2: offload_copy
    Capability 3: offload_snapshot
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: 10.20.16.227:/brick1/kube-volume
    Brick1 VG: 
    Brick2: 10.20.16.228:/brick1/kube-volume
    Brick2 VG: 
    Options Reconfigured:
    transport.address-family: inet
    nfs.disable: on
    performance.client-io-threads: off
    [root@host214 heketi]# gluster volume start kube-volume
    volume start: kube-volume: success
    # GlusterFS tuning
    # Enable quota on the volume
    [root@host214 heketi]# gluster volume quota kube-volume enable
    volume quota : success

    # Cap the volume's usage at 1TB
    [root@host214 heketi]# gluster volume quota kube-volume limit-usage / 1TB

    # Set the cache size (default 32MB)
    [root@host214 heketi]# gluster volume set kube-volume performance.cache-size 4GB

    # Set the number of io threads (too many can crash the process)
    [root@host214 heketi]# gluster volume set kube-volume performance.io-thread-count 16

    # Set the network ping timeout (default 42s)
    [root@host214 heketi]# gluster volume set kube-volume network.ping-timeout 10

    # Set the write-behind window size (default 1MB)
    [root@host214 heketi]# gluster volume set kube-volume performance.write-behind-window-size 1024MB
    volume set: success
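The tuning commands above all follow the same `gluster volume set` pattern; a small sketch (volume name and values taken from above) can generate them from a table so they can be reviewed before being piped to `sh`:

```shell
# Sketch: emit the "gluster volume set" tuning commands shown above from a
# key/value table. The volume name and values mirror the ones in this post.
emit_settings() {
  vol=$1
  while read -r key value; do
    echo "gluster volume set $vol $key $value"
  done
}

emit_settings kube-volume <<'EOF'
performance.cache-size 4GB
performance.io-thread-count 16
network.ping-timeout 10
performance.write-behind-window-size 1024MB
EOF
```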
    
    

    3. PersistentVolumes and PersistentVolumeClaims in Kubernetes

    How PersistentVolumes are provisioned

    • Static: an administrator creates a set of PVs in advance for applications to use.
    • Dynamic: when no existing PV matches a user's PVC, the cluster tries to provision one dynamically through a StorageClass. This only works if that StorageClass was created and configured in the cluster beforehand; if no StorageClass is set, nothing can be provisioned dynamically.

    1. A default StorageClass can be configured on the apiserver via --enable-admission-plugins=DefaultStorageClass,*
    2. A PV and a PVC bind one-to-one.
    3. If no existing PV satisfies the requested capacity or access mode, a new PV is created; for example, the PVC asks for 100G but every available PV is smaller.
    4. A PVC can specify both the capacity and the access modes it needs.
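The matching rules in points 2–4 can be pictured as a filter: among PVs of the requested storage class that offer the requested access mode, the binder prefers the smallest PV whose capacity still satisfies the claim. A simplified sketch of that rule (not the real controller; PV names and sizes below are made up):

```shell
# Sketch: pick a PV for a claim. Input lines: "name storageclass mode sizeGi".
# A PV qualifies if class and mode match and its size covers the request;
# among qualifiers, the smallest wins (a simplification of the real binder).
match_pv() {
  awk -v c="$1" -v m="$2" -v g="$3" '
    $2 == c && $3 == m && ($4 + 0) >= (g + 0) {
      if (best == "" || ($4 + 0) < size) { best = $1; size = $4 + 0 }
    }
    END { if (best == "") print "no-match"; else print best }'
}

printf '%s\n' \
  "pv-a gluster-storageclass RWX 5" \
  "pv-b gluster-storageclass RWX 20" \
  "pv-c other-class RWX 100" |
  match_pv gluster-storageclass RWX 2   # prints pv-a
```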

    PersistentVolume access modes

    • ReadWriteOnce (RWO) – the volume can be mounted read-write by a single node
    • ReadOnlyMany (ROX) – read-only by many nodes
    • ReadWriteMany (RWX) – read-write by many nodes

    PersistentVolume reclaim policies

    • Retain – keep the volume; an administrator reclaims it manually
    • Recycle – basic scrub (rm -rf /thevolume/*), after which the volume can be claimed again (deprecated)
    • Delete – delete the volume outright

    Volume states

    • Available – free, not yet bound to a claim
    • Bound – bound to a PVC
    • Released – the PVC was deleted, but the resource has not been reclaimed yet
    • Failed – automatic reclamation failed
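The states above form a simple lifecycle. As an illustrative sketch only (the real logic lives in kube-controller-manager), the transitions can be written as a lookup; note that what happens after Released depends on the reclaim policy:

```shell
# Sketch of the PV lifecycle described above; purely illustrative.
next_phase() {  # usage: next_phase <current-phase> <event>
  case "$1:$2" in
    Available:bound)          echo Bound ;;
    Bound:claim-deleted)      echo Released ;;
    Released:recycle-ok)      echo Available ;;   # Recycle policy only
    Released:reclaim-failed)  echo Failed ;;
    *)                        echo "$1" ;;        # Retain: stays Released
  esac
}

next_phase Available bound   # prints Bound
```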

    4. Dynamic storage in Kubernetes backed by GlusterFS

    • Creating the StorageClass
    [root@host229 gluster]# cat gluster-storageclass.yaml
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gluster-storageclass
    provisioner: kubernetes.io/glusterfs
    # default: Delete
    reclaimPolicy: Retain
    parameters:
      gidMax: "50000"
      gidMin: "40000"
      resturl: "http://10.20.16.214:8080"
      volumetype: replicate:2
      restauthenabled: "true"
      restuser: "admin"
      restuserkey: "password"
    #  secretNamespace: "default"
    #  secretName: "heketi-secret"
    allowVolumeExpansion: true
    [root@host229 gluster]# kubectl apply -f gluster-storageclass.yaml
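A note on the commented-out secretName/secretNamespace lines: putting restuserkey directly in the manifest works, but the heketi password can instead live in a Secret of type kubernetes.io/glusterfs that those two fields reference. A sketch of what that Secret would look like (the key is the base64 of the same "password" used above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret       # matches the commented-out secretName above
  namespace: default        # matches the commented-out secretNamespace
type: kubernetes.io/glusterfs
data:
  key: cGFzc3dvcmQ=         # base64("password")
```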
    
    • Creating the Endpoints
    [root@host229 gluster]# cat glusterfs-endpoints.json
    {
      "kind": "Endpoints",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-endpoints"
      },
      "subsets": [
        {
          "addresses": [
            {
              "ip": "10.20.16.214"
            }
          ],
          "ports": [
            {
              "port": 8080
            }
          ]
        }
      ]
    }
    [root@host229 gluster]# kubectl apply -f glusterfs-endpoints.json
    [root@host229 gluster]# kubectl get ep
    NAME                  ENDPOINTS                                                  AGE
    glusterfs-endpoints   10.20.16.214:8080                                          16h
    kubernetes            10.20.16.229:6443                                          5d
    nginx                 192.168.0.3:80,192.168.0.4:80,192.168.1.5:80 + 2 more...   2d
    tomcat                192.168.0.2:8080,192.168.1.2:8080,192.168.2.3:8080         4d
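One caveat with hand-made Endpoints: the upstream Kubernetes GlusterFS example pairs them with a Service of the same name (with no selector), so the endpoints stay addressable by name and are not treated as orphaned. A sketch, mirroring the port used above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-endpoints   # must match the Endpoints object's name
spec:
  ports:
    - port: 8080              # same port as in the Endpoints
```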
    
    
    • Creating the PersistentVolume
    [root@host229 gluster]# cat glusterfs-pv.yaml 
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: kube001
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Recycle
      storageClassName: gluster-storageclass
      glusterfs:
        endpoints: "glusterfs-endpoints"   # the Endpoints defined above
        path: "kube-volume"                # the volume created in step 2
        readOnly: false
    [root@host229 gluster]# kubectl apply -f  glusterfs-pv.yaml 
    [root@host229 gluster]# kubectl get pv -o wide
    NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM            STORAGECLASS           REASON    AGE
    kube001   5Gi        RWX            Recycle          Bound     default/pvc001   gluster-storageclass   
    
    • Creating the PersistentVolumeClaim

    When a PVC is created, it first looks for an existing PV that satisfies its size and access-mode requirements; failing that, one is provisioned through the StorageClass, and if that also fails the claim stays unbound.

    [root@host229 gluster]# cat glusterfs-pvc.yaml 
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc001
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: gluster-storageclass
      resources:
        requests:
          storage: 2Gi
    [root@host229 gluster]# kubectl apply -f  glusterfs-pvc.yaml 
    [root@host229 gluster]# kubectl get pv,pvc -o wide
    NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM            STORAGECLASS           REASON    AGE
    persistentvolume/kube001   5Gi        RWX            Recycle          Bound     default/pvc001   gluster-storageclass             16h
    
    NAME                           STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    persistentvolumeclaim/pvc001   Bound     kube001   5Gi        RWX            gluster-storageclass   16h
    # pvc001 is now Bound to PV kube001
    
    • Testing with nginx
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: gluster-nginx
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            name: nginx
        spec:
          containers:
            - name: nginx
              image: nginx
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 80
              volumeMounts:
                - name: kube001
                  mountPath: "/usr/share/nginx/html"
          volumes:
          - name: kube001
            persistentVolumeClaim:
               claimName: pvc001
    [root@host229 gluster]# kubectl apply -f nginx.yaml
    [root@host229 gluster]# kubectl get pod -o wide
    NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
    gluster-nginx-7465bd6456-bc7xq   1/1       Running   0          12s       192.168.2.9   host227
    gluster-nginx-7465bd6456-tm8nz   1/1       Running   0          12s       192.168.0.5   host229
    
    # check the mount
    [root@host229 gluster]# kubectl exec -it gluster-nginx-7465bd6456-bc7xq -- df -h|grep kube-volume
    10.20.16.214:kube-volume                     373G   12G  361G   4% /usr/share/nginx/html
    # expose the port
    [root@host229 gluster]#  kubectl expose deployment gluster-nginx --port=9090 --target-port=80 --type=NodePort
    # access test
    [root@host229 gluster]# curl 192.168.2.9
    
    [root@host229 gluster]# cat index.html 
    <!doctype html>
    <html lang="en">
     <head>
      <title>GlusterFS</title>
     </head>
     <body>
      <h1>This application has the gluster volume mounted!</h1>
     </body>
    </html>
    [root@host229 gluster]# kubectl cp index.html gluster-nginx-7465bd6456-bc7xq:/usr/share/nginx/html
    # check the bricks: the copied file has landed in gluster, one copy on each replica
    [root@host227 ~]# ls /brick1/kube-volume/
    index.html
    [root@host228 ~]# ls /brick1/kube-volume/
    index.html
    
    [root@host229 gluster]# kubectl exec -it gluster-nginx-7465bd6456-bc7xq -- ls /usr/share/nginx/html
    index.html
    
    

    [root@host214 bin]# ./heketi-cli --user admin --secret password --server http://10.20.16.228:30944 cluster list
    Clusters:

    [root@host229 gluster]# kubectl get pvc,pv,pod -owide
    NAME                           STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    persistentvolumeclaim/pvc001   Bound     kube001   5Gi        RWX            gluster-storageclass   1m

    NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM            STORAGECLASS           REASON    AGE
    persistentvolume/kube001   5Gi        RWX            Recycle          Bound     default/pvc001   gluster-storageclass             1m

    NAME                READY     STATUS    RESTARTS   AGE       IP            NODE
    pod/gluster-nginx   1/1       Running   0          1m        192.168.2.8   host227

Original link: https://www.haomeiwen.com/subject/sownxqtx.html