
Kubernetes: Dynamic Storage Management with a GlusterFS StorageClass

Author: YichenWong | Published 2018-12-13 17:01

    Overview

    I recently needed to deploy stateful services. Without a cloud environment there is no cloud provider through which to attach cloud disks such as EBS, so I deployed a glusterFS distributed file storage system on the node machines to provide unstructured data storage for things like images and large files.

    To use GlusterFS as persistent storage in Kubernetes through a StorageClass, you need the Heketi tool. Heketi is a glusterfs management program with a RESTful interface that serves as the external provisioner for Kubernetes storage. "Heketi provides a RESTful management interface which can be used to manage the lifecycle of GlusterFS volumes. With Heketi, cloud services like OpenStack Manila, Kubernetes, and OpenShift can dynamically provision GlusterFS volumes with any of the supported durability types. Heketi automatically determines brick locations across the whole cluster, making sure that bricks and their replicas are placed in different failure domains. Heketi also supports any number of GlusterFS clusters, allowing cloud services to provide network file storage without being limited to a single GlusterFS cluster."

    heketi: provides RESTful management of glusterfs, making it easy to create clusters and manage glusterfs nodes, devices and volumes; combined with k8s it enables dynamic PV creation and extends glusterfs with dynamic storage management. It is mainly used to manage the lifecycle of glusterFS volumes; the raw (unformatted) disk devices must be set aside at initialization time.

    Notes

    • Install the GlusterFS client: every kubernetes node needs the glusterfs client packages, e.g. glusterfs-cli and glusterfs-fuse. They are mainly used to mount volumes on each node.
    • Load the kernel module: run modprobe dm_thin_pool on every kubernetes node (see the sketch after this list).
    • High availability (at least three nodes): at least three nodes are needed to deploy the glusterfs cluster, and each of these 3 nodes needs at least one spare disk.
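
    A minimal sketch of the first two items, assuming a CentOS/yum environment (adjust the package manager to your distribution):

    $ sudo yum install -y glusterfs glusterfs-cli glusterfs-fuse
    $ sudo modprobe dm_thin_pool
    # make the module load persist across reboots
    $ echo dm_thin_pool | sudo tee /etc/modules-load.d/glusterfs.conf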

    Infrastructure requirements:

    • A running Kubernetes cluster with at least three node machines, each with at least one available raw (unformatted) block device, such as an EBS volume or a local disk.
    • The GlusterFS nodes must have the ports required for GlusterFS communication open (only relevant if a firewall is enabled; skip this if there is no firewall). Run the following on every node:
    $ sudo iptables -N heketi
    $ sudo iptables -A heketi -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
    $ sudo iptables -A heketi -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
    $ sudo iptables -A heketi -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
    $ sudo iptables -A heketi -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
    $ sudo service iptables save
    

    Installing Heketi

    Heketi is written in Go. You can run the statically compiled binary directly, install it via yum, or deploy it with docker. At runtime it produces a db file that stores the cluster, node, device and volume information.

    #!/bin/bash
    wget -c https://github.com/heketi/heketi/releases/download/v8.0.0/heketi-v8.0.0.linux.amd64.tar.gz
    tar xzf heketi-v8.0.0.linux.amd64.tar.gz
    mkdir -pv /data/heketi/{bin,conf,data}
    cp heketi/heketi.json /data/heketi/conf/
    cp heketi/{heketi,heketi-cli} /data/heketi/bin/
    mv heketi-v8.0.0.linux.amd64.tar.gz /tmp/
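
    The systemd unit below refers to the binary by absolute path, but the heketi-cli commands later in this article assume the binaries are on PATH. One way to arrange that (an assumption; adjust to taste):

    $ echo 'export PATH=$PATH:/data/heketi/bin' | sudo tee /etc/profile.d/heketi.sh
    $ source /etc/profile.d/heketi.sh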
    

    Create an SSH key

    Our glusterFS cluster is deployed outside the k8s cluster, so heketi manages glusterFS over ssh. Passwordless login to all glusterFS nodes must be set up.

    $ sudo ssh-keygen -f /data/heketi/conf/heketi_key -t rsa -N ''
    # copy the public key to every GlusterFS node
    $ sudo ssh-copy-id -i /data/heketi/conf/heketi_key.pub root@node1
    $ ...
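
    Before moving on, it is worth checking that key-based login actually works from the heketi host (repeat for each GlusterFS node; node1 is a stand-in for your hostnames):

    $ ssh -i /data/heketi/conf/heketi_key root@node1 'gluster --version'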
    
    

    Edit the configuration file

    Reference: https://github.com/heketi/heketi/blob/master/client/cli/go/topology-sample.json. Note that the # lines in the listing below are annotations for this article; JSON itself does not allow comments, so they must not appear in the real heketi.json.

    {
      "_port_comment": "Heketi Server Port Number",
      "port": "18080",
    
      "_use_auth": "Enable JWT authorization. Please enable for deployment",
      # enable authentication
      "use_auth": true,
    
      "_jwt": "Private keys for access",
      "jwt": {
        "_admin": "Admin has access to all APIs",
    # key for the admin user
        "admin": {
          "key": "adminkey"
        },
        "_user": "User only has access to /volumes endpoint",
        "user": {
          "key": "userkey"
        }
      },
    
      "_glusterfs_comment": "GlusterFS Configuration",
      "glusterfs": {
        "_executor_comment": [
          "Execute plugin. Possible choices: mock, ssh",
          "mock: This setting is used for testing and development.",
          "      It will not send commands to any node.",
          "ssh:  This setting will notify Heketi to ssh to the nodes.",
          "      It will need the values in sshexec to be configured.",
          "kubernetes: Communicate with GlusterFS containers over",
          "            Kubernetes exec api."
        ],
    # change the executor to ssh and configure the required ssh credentials; heketi must be able
    # to ssh into every machine in the cluster without a password (use ssh-copy-id to push the
    # pub key to each glusterfs server). heketi drives the glusterfs volume lifecycle over ssh.
        "executor": "ssh",
    
        "_sshexec_comment": "SSH username and private key file information",
        "sshexec": {
      # generated with ssh-keygen, then copied to the glusterfs machines
          "keyfile": "/data/heketi/conf/heketi_key",
          "user": "root",
          "port": "22",
      # for every volume created, heketi updates fstab so the brick is remounted automatically at boot
          "fstab": "/etc/fstab"
        },
    
        "_kubeexec_comment": "Kubernetes configuration",
        "kubeexec": {
          "host" :"https://kubernetes.host:8443",
          "cert" : "/path/to/crt.file",
          "insecure": false,
          "user": "kubernetes username",
          "password": "password for kubernetes user",
          "namespace": "OpenShift project or Kubernetes namespace",
          "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
        },
    
        "_db_comment": "Database file name",
    # change heketi's default db path
        "db": "/data/heketi/data/heketi.db",
    
        "_loglevel_comment": [
          "Set log level. Choices are:",
          "  none, critical, error, warning, info, debug",
          "Default is warning"
        ],
        "loglevel" : "debug"
      }
    }
    

    Note:

    • heketi has three executors: mock, ssh and kubernetes. mock is recommended for testing, ssh for production; kubernetes is only used when glusterfs itself runs as containers on kubernetes. Here glusterfs and heketi are deployed independently, so we use ssh.
    • When deploying heketi with docker, /var/lib/heketi/mounts must additionally be mounted into the container; heketi uses this directory as the mount point for gluster volumes.
    [root@k8s1 ~]# more /etc/fstab
    #
    # /etc/fstab
    # Created by anaconda on Sun Oct 15 15:19:00 2017
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    UUID=eb448abb-3012-4d8d-bcde-94434d586a31 /                       ext4    defaults        1 1
    UUID=63b64b3a-2961-4798-b7f1-cc97484ee49f /data                   ext4    defaults        1 1
    /dev/mapper/vg_fd3a11426117508af77aa38e9565ce65-brick_4b7854340fa8ee78106b1b1446abf078 /var/lib/heketi/mounts/vg_fd3a11426117508af77aa38e9565ce65/brick_4b7854340fa8ee78106b1b1446abf078 xfs rw,inode64,noatime,nouuid 1 2
    /dev/mapper/vg_fd3a11426117508af77aa38e9565ce65-brick_1904166cdf0846ee649ccefe66ce1e50 /var/lib/heketi/mounts/vg_fd3a11426117508af77aa38e9565ce65/brick_1904166cdf0846ee649ccefe66ce1e50 xfs rw,inode64,noatime,nouuid 1 2
    

    systemd configuration

    # cat /usr/lib/systemd/system/heketi.service
    [Unit]
    Description=RESTful based volume management framework for GlusterFS
    Wants=network-online.target
    After=network-online.target
    Documentation=https://github.com/heketi/heketi
    
    [Service]
    Type=simple
    LimitNOFILE=65536
    ExecStart=/data/heketi/bin/heketi --config=/data/heketi/conf/heketi.json
    KillMode=process
    Restart=on-failure
    RestartSec=5
    SuccessExitStatus=15
    StandardOutput=syslog
    StandardError=syslog
    
    [Install]
    WantedBy=multi-user.target
    

    Start the heketi service

    $ sudo systemctl daemon-reload
    $ sudo systemctl start heketi
    $ sudo systemctl enable heketi
    $ sudo systemctl status heketi
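
    As a quick sanity check that the service is listening on the configured port, heketi exposes an unauthenticated hello endpoint:

    $ curl http://localhost:18080/hello
    Hello from Heketi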
    

    Managing GlusterFS with Heketi

    Add a cluster

    
    $ sudo heketi-cli --user admin --server http://10.111.209.188:18080 --secret adminkey --json cluster create
    {"id":"d102a74079dd79aceb3c70d6a7e8b7c4","nodes":[],"volumes":[],"block":true,"file":true,"blockvolumes":[]}
    
    

    Add the 3 glusterfs nodes to the cluster

    Since heketi authentication is enabled, every heketi-cli call needs a pile of authentication flags, which is tedious. I create an alias here to avoid that. (An alias is a shell builtin, so it is defined without sudo and only applies to the current shell; the heketi-cli commands below therefore run without sudo.)

    
    $ alias heketi-cli='heketi-cli --server "http://10.111.209.188:18080" --user "admin" --secret "adminkey"'
    
    

    Add the nodes

    
    $ heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name k8s1 --storage-host-name 10.111.209.188 --zone 1

    $ heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name k8s2 --storage-host-name 10.111.209.189 --zone 1

    $ heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name k8s3 --storage-host-name 10.111.209.190 --zone 1
    
    

    Note: if heketi runs under a non-root account, make sure that account has sudo privileges.

    Add devices

    A machine is only the running unit for gluster; volumes are created on top of devices. Note in particular that heketi currently only accepts raw partitions or raw (unformatted) disks as devices; file systems are not supported.

    
    # The id passed to --node is the one returned when the node was added in the previous step.
    # Only one device is added here as an example; in a real deployment, add every storage disk
    # of every node.

    $ heketi-cli --json device add --name="/dev/vdc1" --node "2e4dc73fb2657586e7a9e64e39c8f01a"

    $ heketi-cli node list
    Id:2e4dc73fb2657586e7a9e64e39c8f01a     Cluster:d102a74079dd79aceb3c70d6a7e8b7c4
    Id:43a661a917c42dfe9b6659b8c9d848e9     Cluster:d102a74079dd79aceb3c70d6a7e8b7c4
    Id:ce5a9e2983d067051b6ccad0a4bd0988     Cluster:d102a74079dd79aceb3c70d6a7e8b7c4
    
    

    Production configuration

    All of the ad-hoc commands above can also be described in a topology file and loaded in one go:

    
    $ sudo cat /data/heketi/conf/topology.json
    
    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "k8s1"
                                ],
                                "storage": [
                                    "10.111.209.188"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vdc1"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "k8s2"
                                ],
                                "storage": [
                                    "10.111.209.189"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vdc1"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "k8s3"
                                ],
                                "storage": [
                                    "10.111.209.190"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vdc1"
                        ]
                    }             
                ]
            }
        ]
    }
    
    

    Load it:

    
    $ heketi-cli topology load --json=topology.json
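
    After loading, heketi-cli can print the resulting layout, which is an easy way to confirm that all three nodes and their devices were registered:

    $ heketi-cli topology info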
    
    

    Add a volume

    This is only a test; in actual use, kubernetes creates PVs automatically.

    Create a 3G volume with 2 replicas:

    
    $ heketi-cli volume create --size 3 --replica 2
    Name: vol_685f9aea1896f53f30a22a9d15de1680
    Size: 3
    Volume Id: 685f9aea1896f53f30a22a9d15de1680
    Cluster Id: d102a74079dd79aceb3c70d6a7e8b7c4
    Mount: 10.111.209.188:vol_685f9aea1896f53f30a22a9d15de1680
    Mount Options: backup-volfile-servers=10.111.209.190,10.111.209.189
    Block: false
    Free Size: 0
    Reserved Size: 0
    Block Hosting Restriction: (none)
    Block Volumes: []
    Durability Type: replicate
    Distributed+Replica: 2
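
    Since this volume only exists as a sanity check, it can be inspected and removed again with heketi-cli, using the Volume Id from the output above:

    $ heketi-cli volume info 685f9aea1896f53f30a22a9d15de1680
    $ heketi-cli volume delete 685f9aea1896f53f30a22a9d15de1680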
    
    

    Kubernetes StorageClass configuration

    Create the StorageClass

    Create a storageclass-glusterfs.yaml file with the following content:

    
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: heketi-secret
      namespace: default
    data:
      # base64 encoded password. E.g.: echo -n "mypassword" | base64
      key: YWRtaW5rZXk=
    type: kubernetes.io/glusterfs
    
    ---
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: glusterfs
    provisioner: kubernetes.io/glusterfs
    allowVolumeExpansion: true
    parameters:
      resturl: "http://10.111.209.188:18080"
      clusterid: "d102a74079dd79aceb3c70d6a7e8b7c4"
      restauthenabled: "true"
      restuser: "admin"
      #secretNamespace: "default"
      #secretName: "heketi-secret"
      restuserkey: "adminkey"
      gidMin: "40000"
      gidMax: "50000"
      volumetype: "replicate:2"
    # kubectl apply -f storageclass-glusterfs.yaml
    secret/heketi-secret created
    storageclass.storage.k8s.io/glusterfs created
    # kubectl get sc
    NAME                  PROVISIONER               AGE
    glusterfs (default)   kubernetes.io/glusterfs   2m19s
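
    One thing to note about the manifest above: as written, the StorageClass passes the admin key inline via restuserkey, so the heketi-secret object is created but never actually referenced. A variant that references the Secret instead (a sketch built from the glusterfs provisioner parameters documented at the link below) keeps the key out of the StorageClass:

    $ kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: glusterfs
    provisioner: kubernetes.io/glusterfs
    allowVolumeExpansion: true
    parameters:
      resturl: "http://10.111.209.188:18080"
      clusterid: "d102a74079dd79aceb3c70d6a7e8b7c4"
      restauthenabled: "true"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-secret"
      gidMin: "40000"
      gidMax: "50000"
      volumetype: "replicate:2"
    EOF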
    

    Note:

    For more detailed usage see: https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs

    Create a PVC (save the manifest below as glusterfs-pvc.yaml):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: glusterfs-mysql1
      namespace: default
      annotations:
        volume.beta.kubernetes.io/storage-class: "glusterfs"
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
          
    # kubectl create -f glusterfs-pvc.yaml
    persistentvolumeclaim/glusterfs-mysql1 created
    # kubectl get pv,pvc
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
    persistentvolume/pvc-11aa48c5-fe0f-11e8-9803-00163e13b711   1Gi        RWX            Retain           Bound    default/glusterfs-mysql1   glusterfs               2m
    
    NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/glusterfs-mysql1   Bound    pvc-11aa48c5-fe0f-11e8-9803-00163e13b711   1Gi        RWX            glusterfs      2m20s
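
    Behind the scenes the provisioner asked heketi for a new volume, which can be cross-checked on the heketi side (the volume name will differ in your environment):

    $ heketi-cli volume list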
    
    

    Create a pod that uses the PVC

    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: mysql
      namespace: default
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.7
            imagePullPolicy: IfNotPresent
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: root123456
            ports:
              - containerPort: 3306
            volumeMounts:
            - name: gluster-mysql-data
              mountPath: "/var/lib/mysql"
          volumes:
            - name: gluster-mysql-data
              persistentVolumeClaim:
                claimName: glusterfs-mysql1
                
    # kubectl create -f /etc/kubernetes/mysql-deployment.yaml
    deployment.extensions/mysql created
    # kubectl exec -it mysql-84786cf494-hb2ss -- df -PHT /var/lib/mysql
    Filesystem                                          Type            Size  Used Avail Use% Mounted on
    10.111.209.188:vol_426e62366141c789dac33f2e68dfb13b fuse.glusterfs  1.1G  266M  798M  25% /var/lib/mysql
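
    A quick way to convince yourself the data really lives on GlusterFS: write a marker file, delete the pod, and look for the file again once the Deployment has recreated it (the pod name is from this environment; substitute your own, and <new-mysql-pod> is a placeholder):

    $ kubectl exec -it mysql-84786cf494-hb2ss -- touch /var/lib/mysql/persist-test
    $ kubectl delete pod mysql-84786cf494-hb2ss
    # wait for the replacement pod, then:
    $ kubectl exec -it <new-mysql-pod> -- ls /var/lib/mysql/persist-test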
    
    

    Create a StatefulSet

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      ports:
      - port: 80
        name: web
      clusterIP: None
      selector:
        app: nginx
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nginx
    spec:
      selector:
        matchLabels:
          app: nginx # has to match .spec.template.metadata.labels
      serviceName: "nginx"
      replicas: 3 # by default is 1
      template:
        metadata:
          labels:
            app: nginx # has to match .spec.selector.matchLabels
        spec:
          terminationGracePeriodSeconds: 10
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:
      - metadata:
          name: www
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "glusterfs"
          resources:
            requests:
              storage: 1Gi
    # kubectl apply -f nginx-statefulset.yml
    service/nginx created
    statefulset.apps/nginx created
    
    
    # kubectl get pod,pv,pvc
    NAME           READY   STATUS    RESTARTS   AGE
    pod/nginx-0    1/1     Running   0          116s
    pod/nginx-1    1/1     Running   0          98s
    pod/nginx-2    1/1     Running   0          91s
    
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
    persistentvolume/pvc-5ac3eba9-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            Retain           Bound    default/www-nginx-0   glusterfs               99s
    persistentvolume/pvc-65f27519-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            Retain           Bound    default/www-nginx-1   glusterfs               93s
    persistentvolume/pvc-69b31512-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            Retain           Bound    default/www-nginx-2   glusterfs               86s
    
    NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/www-nginx-0   Bound    pvc-5ac3eba9-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            glusterfs      116s
    persistentvolumeclaim/www-nginx-1   Bound    pvc-65f27519-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            glusterfs      98s
    persistentvolumeclaim/www-nginx-2   Bound    pvc-69b31512-fe12-11e8-a19e-00163e14b18f   1Gi        RWO            glusterfs      91s
    

    We can see that the RECLAIM POLICY is Retain. Testing shows:

    • deleting the pvc moves the pv to Released status; the pv itself is not deleted
    • deleting the pv does not delete the underlying gluster volume, as heketi-cli volume list confirms (see the cleanup sketch after this list)
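
    A sketch of the manual cleanup this implies, using the PV/PVC names from the earlier mysql example (the volume id is whatever heketi-cli volume list reports for the orphaned volume):

    $ kubectl delete pvc glusterfs-mysql1
    $ kubectl delete pv pvc-11aa48c5-fe0f-11e8-9803-00163e13b711
    $ heketi-cli volume list
    $ heketi-cli volume delete <volume-id>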

    When kubernetes PVs and gluster volumes fall out of sync, heketi can serve as the single place to manage volumes. In this document, heketi and glusterfs are both deployed outside the kubernetes cluster. Where EBS-backed disks are available, glusterFS and heketi can instead be deployed in containers and managed there (provisioned through an EBS storageClass); see https://github.com/gluster/gluster-kubernetes
