Using LINSTOR PVs in Kubernetes

Author: TEYmL | Published 2020-12-24 11:12

Background

  • In the previously configured Kubernetes cluster, LINSTOR was consumed by running the LINSTOR controller, satellite, and related software as containers, so no LINSTOR software had to be installed on the hosts
  • The iSCSI feature in VersaSDS and its OpenStack integration both interact with LINSTOR, and those interactions require a LINSTOR cluster installed and configured on the hosts
  • Once Kubernetes support is added to VersaSDS, having both the hosts and Kubernetes set up their own LINSTOR clusters could cause conflicts, so the interaction between Kubernetes and a host-level LINSTOR cluster needs to be configured and tested

Environment

  • k8smaster node: IP 10.203.1.90, serving as the Kubernetes master node and the controller node of the LINSTOR cluster
  • k8snode1 node: IP 10.203.1.91, serving as a Kubernetes worker node and a satellite node of the LINSTOR cluster

Testing

Install and configure the Kubernetes cluster

Install and configure the LINSTOR cluster

Installation

All nodes

Run the following commands

apt install software-properties-common
add-apt-repository ppa:linbit/linbit-drbd9-stack
apt update
apt install drbd-utils drbd-dkms lvm2
modprobe drbd
echo drbd > /etc/modules-load.d/drbd.conf
apt install linstor-controller linstor-satellite  linstor-client

# What each command does:
# Install software-properties-common, which provides the add-apt-repository command used next
# Add the DRBD9 PPA repository
# Update the apt package index
# Install DRBD9 and its related packages
# Load the DRBD9 kernel module
# Load DRBD9 automatically at boot
# Install the LINSTOR packages
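
Before moving on, it is worth confirming that the DRBD module actually loaded; for example:

lsmod | grep drbd    # the drbd module should be listed
cat /proc/drbd       # prints the loaded DRBD version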

Configuration

k8smaster

Enable the linstor-controller and linstor-satellite services

Commands

systemctl enable linstor-controller
systemctl start linstor-controller
systemctl enable linstor-satellite
systemctl start linstor-satellite
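
Once started, the controller should answer on its REST port (3370 by default). A quick sanity check against the standard LINSTOR REST endpoint:

systemctl status linstor-controller               # should be active (running)
curl http://localhost:3370/v1/controller/version  # returns the controller version as JSON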

Create the LINSTOR node

Command

linstor node create k8smaster 10.203.1.90 --node-type combined

Result

root@k8smaster:~# linstor n c k8smaster 10.203.1.90 --node-type combined 
SUCCESS:
Description:
    New node 'k8smaster' registered.
Details:
    Node 'k8smaster' UUID is: 34169cca-a2b5-408e-b94e-f9c95b0388ce
SUCCESS:
Description:
    Node 'k8smaster' authenticated
Details:
    Supported storage providers: [diskless, lvm, lvm_thin, file, file_thin, openflex_target]
    Supported resource layers  : [drbd, cache, storage]
    Unsupported storage providers:
        ZFS: 'cat /sys/module/zfs/version' returned with exit code 1
        ZFS_THIN: 'cat /sys/module/zfs/version' returned with exit code 1
        SPDK: IO exception occured when running 'rpc.py get_spdk_version': Cannot run program "rpc.py": error=2, No such file or directory

    Unsupported resource layers:
        LUKS: IO exception occured when running 'cryptsetup --version': Cannot run program "cryptsetup": error=2, No such file or directory
        NVME: IO exception occured when running 'nvme version': Cannot run program "nvme": error=2, No such file or directory
        WRITECACHE: 'modprobe dm-writecache' returned with exit code 1
        OPENFLEX: IO exception occured when running 'nvme version': Cannot run program "nvme": error=2, No such file or directory

Create the storage pool poola using LVM

Commands

vgcreate vg1 /dev/sdb
lvcreate -l 100%FREE  --thinpool vg1/lvmthinpool
linstor storage-pool create lvmthin k8smaster poola vg1/lvmthinpool

Result

root@k8smaster:~# vgcreate vg1 /dev/sdb
  Physical volume "/dev/sdb" successfully created.
  Volume group "vg1" successfully created
root@k8smaster:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree  
  vg1   1   0   0 wz--n- <20.00g <20.00g
root@k8smaster:~# lvcreate -l 100%FREE  --thinpool vg1/lvmthinpool
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "lvmthinpool" created.
root@k8smaster:~# linstor storage-pool create lvmthin k8smaster poola vg1/lvmthinpool                
SUCCESS:
    Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
    New storage pool 'poola' on node 'k8smaster' registered.
Details:
    Storage pool 'poola' on node 'k8smaster' UUID is: 5e6f787b-4d3a-4ed3-8d84-8fa8c30e56a7
SUCCESS:
    (k8smaster) Changes applied to storage pool 'poola'
root@k8smaster:~# linstor storage-pool create lvmthin k8snode1 poola vg1/lvmthinpool      
SUCCESS:
    Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
    New storage pool 'poola' on node 'k8snode1' registered.
Details:
    Storage pool 'poola' on node 'k8snode1' UUID is: 03f1a762-4645-4ce0-8b78-c7f4c7135e34
SUCCESS:
    (k8snode1) Changes applied to storage pool 'poola'
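
The pools registered on both nodes can be verified at any time from the controller:

linstor storage-pool list    # short form: linstor sp l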

k8snode1

Enable the linstor-satellite service

Commands

systemctl enable linstor-satellite
systemctl start linstor-satellite

Add the linstor-client.conf file

Create linstor-client.conf under /etc/linstor to declare the controller node; the file content is as follows

[global]
controllers=10.203.1.90
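
With this file in place, the linstor client on k8snode1 talks to the controller at 10.203.1.90 instead of localhost. A quick check from k8snode1:

linstor node list    # should reach the controller and list the cluster nodes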

Create the LINSTOR node

Command

linstor node create k8snode1 10.203.1.91 --node-type Satellite

Result

The registration output for k8snode1 is analogous to the k8smaster output shown above.

Create storage pool poola

Same as for the k8smaster node

Configure the Kubernetes LINSTOR CSI service

CSI is short for Container Storage Interface. When Kubernetes uses third-party storage, it interacts with it through a CSI driver developed by that storage vendor.

Create the yaml file

The yaml file was downloaded from the LINBIT GitHub repository, but the images it references may be unreachable from mainland China, so they need to be replaced with images from your own registry, and the LINSTOR controller IP needs to be changed as well. The modified file is shown below.
To change the LINSTOR controller IP, search the file for "LINSTOR_IP".
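
A sed one-liner can make both substitutions in one pass; this is only a sketch: myregistry is a placeholder for your own registry, and the upstream image prefix may differ between yaml versions, so check the image: lines in your copy first.

# Placeholders: quay.io/k8scsi (assumed upstream prefix) and myregistry
sed -i \
  -e 's#image: quay.io/k8scsi/#image: myregistry/#g' \
  -e 's#linstor-controller.example.com#10.203.1.90#g' \
  linstor-csi-1.17.yaml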

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: linstor-csi-controller
  namespace: kube-system
spec:
  serviceName: "linstor-csi"
  replicas: 1
  selector:
    matchLabels:
      app: linstor-csi-controller
      role: linstor-csi
  template:
    metadata:
      labels:
        app: linstor-csi-controller
        role: linstor-csi
    spec:
      priorityClassName: system-cluster-critical
      serviceAccount: linstor-csi-controller-sa
      containers:
        - name: csi-provisioner
          image: teym88/csi-provisioner:v2.0.4
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--feature-gates=Topology=true"
            - "--timeout=120s"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-attacher
          image: teym88/csi-attacher:v3.0.2
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--timeout=120s"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-resizer
          image: teym88/csi-resizer:v1.0.1
          args:
          - "--v=5"
          - "--csi-address=$(ADDRESS)"
          env:
          - name: ADDRESS
            value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
          - mountPath: /var/lib/csi/sockets/pluginproxy/
            name: socket-dir
        - name: csi-snapshotter
          image: teym88/csi-snapshotter:v3.0.2
          args:
            - "-csi-address=$(ADDRESS)"
            - "-timeout=120s"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: linstor-csi-plugin
          image: teym88/linstor-csi:v0.10.1
          args:
            - "--csi-endpoint=$(CSI_ENDPOINT)"
            - "--node=$(KUBE_NODE_NAME)"
            - "--linstor-endpoint=$(LINSTOR_IP)"
            - "--log-level=debug"
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: LINSTOR_IP
              value: "http://10.203.1.90:3370"
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
      volumes:
        - name: socket-dir
          emptyDir: {}
---

kind: ServiceAccount
apiVersion: v1
metadata:
  name: linstor-csi-controller-sa
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get", "list"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-provisioner-binding
subjects:
  - kind: ServiceAccount
    name: linstor-csi-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: linstor-csi-provisioner-role
  apiGroup: rbac.authorization.k8s.io

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-attacher-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-attacher-binding
subjects:
  - kind: ServiceAccount
    name: linstor-csi-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: linstor-csi-attacher-role
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: linstor-csi-resizer-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-resizer-binding
subjects:
  - kind: ServiceAccount
    name: linstor-csi-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: linstor-csi-resizer-role
  apiGroup: rbac.authorization.k8s.io

---

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: linstor-csi-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: linstor-csi-node
      role: linstor-csi
  template:
    metadata:
      labels:
        app: linstor-csi-node
        role: linstor-csi
    spec:
      priorityClassName: system-node-critical
      serviceAccount: linstor-csi-node-sa
      containers:
        - name: csi-node-driver-registrar
          image: teym88/csi-node-driver-registrar:v2.0.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/linstor.csi.linbit.com /registration/linstor.csi.linbit.com-reg.sock"]
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/linstor.csi.linbit.com/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi/
            - name: registration-dir
              mountPath: /registration/
        - name: linstor-csi-plugin
          image: teym88/linstor-csi:v0.10.1
          args:
            - "--csi-endpoint=$(CSI_ENDPOINT)"
            - "--node=$(KUBE_NODE_NAME)"
            - "--linstor-endpoint=$(LINSTOR_IP)"
            - "--log-level=debug"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: LINSTOR_IP
              value: "http://10.203.1.90:3370"
          imagePullPolicy: "Always"
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: device-dir
              mountPath: /dev
      volumes:
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/linstor.csi.linbit.com/
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: device-dir
          hostPath:
            path: /dev
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: linstor-csi-node-sa
  namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-driver-registrar-role
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: linstor.csi.linbit.com
spec:
  attachRequired: true
  podInfoOnMount: true

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-driver-registrar-binding
subjects:
  - kind: ServiceAccount
    name: linstor-csi-node-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: linstor-csi-driver-registrar-role
  apiGroup: rbac.authorization.k8s.io

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linstor-csi-snapshotter-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create", "list", "watch", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["update"]

Apply the yaml file

Success means no errors are reported. Because an unmodified version of this yaml file had already been applied earlier, some resources show a status of unchanged; on a clean first apply, every resource would show created.

root@k8smaster:~# kubectl apply -f linstor.yaml 
statefulset.apps/linstor-csi-controller configured
serviceaccount/linstor-csi-controller-sa unchanged
clusterrole.rbac.authorization.k8s.io/linstor-csi-provisioner-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-provisioner-binding unchanged
clusterrole.rbac.authorization.k8s.io/linstor-csi-attacher-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-attacher-binding unchanged
clusterrole.rbac.authorization.k8s.io/linstor-csi-resizer-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-resizer-binding unchanged
daemonset.apps/linstor-csi-node configured
serviceaccount/linstor-csi-node-sa unchanged
clusterrole.rbac.authorization.k8s.io/linstor-csi-driver-registrar-role unchanged
Warning: storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
csidriver.storage.k8s.io/linstor.csi.linbit.com unchanged
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-driver-registrar-binding unchanged
clusterrole.rbac.authorization.k8s.io/linstor-csi-snapshotter-role unchanged

Check the status of the related resources

In the numbered output below, line 12 is the CSI controller pod, and lines 13 and 14 are the CSI node pods; a CSI node pod runs on each node of the LINSTOR cluster and is driven by the CSI controller.
If all resources are in the Running state, you can proceed to the next step of the test.

root@k8smaster:~# kubectl -n kube-system get all | cat -n
     1  NAME                                    READY   STATUS    RESTARTS   AGE
     2  pod/coredns-7f89b7bc75-qxggs            1/1     Running   0          5d6h
     3  pod/coredns-7f89b7bc75-r9ff5            1/1     Running   0          5d6h
     4  pod/etcd-k8smaster                      1/1     Running   0          5d6h
     5  pod/kube-apiserver-k8smaster            1/1     Running   0          5d6h
     6  pod/kube-controller-manager-k8smaster   1/1     Running   0          5d5h
     7  pod/kube-flannel-ds-v945j               1/1     Running   0          5d5h
     8  pod/kube-flannel-ds-vw2t8               1/1     Running   0          5d5h
     9  pod/kube-proxy-lp6cp                    1/1     Running   0          5d6h
    10  pod/kube-proxy-xdq9t                    1/1     Running   0          5d5h
    11  pod/kube-scheduler-k8smaster            1/1     Running   0          5d5h
    12  pod/linstor-csi-controller-0            5/5     Running   0          60m
    13  pod/linstor-csi-node-55w8t              2/2     Running   0          49m
    14  pod/linstor-csi-node-wwdz9              2/2     Running   0          59m
    15
    16  NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    17  service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   5d6h
    18
    19  NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    20  daemonset.apps/kube-flannel-ds    2         2         2       2            2           <none>                   5d5h
    21  daemonset.apps/kube-proxy         2         2         2       2            2           kubernetes.io/os=linux   5d6h
    22  daemonset.apps/linstor-csi-node   2         2         2       2            2           <none>                   81m
    23
    24  NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    25  deployment.apps/coredns   2/2     2            2           5d6h
    26
    27  NAME                                 DESIRED   CURRENT   READY   AGE
    28  replicaset.apps/coredns-7f89b7bc75   2         2         2       5d6h
    29
    30  NAME                                      READY   AGE
    31  statefulset.apps/linstor-csi-controller   1/1     81m

Create a LINSTOR resource from Kubernetes

Create the StorageClass resource

Edit a yaml file declaring the LINSTOR provisioner, the storage pool, the number of DRBD resource replicas (mirror ways, set via autoPlace), and so on; the file content is as follows

root@k8smaster:~# cat linstor_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: "poola"

Apply

root@k8smaster:~# kubectl apply -f linstor_sc.yaml
storageclass.storage.k8s.io/linstor created
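
The new StorageClass can be confirmed with:

kubectl get storageclass linstor    # should list provisioner linstor.csi.linbit.com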

Create the PVC

Edit a yaml file that creates a PVC using the StorageClass created above; the file content is as follows

root@k8smaster:~# cat linstor_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: linstor
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply

root@k8smaster:~# kubectl apply -f linstor_pvc.yaml      
persistentvolumeclaim/test-pvc created

Check status

Check the PVC status

A status of Bound is normal.

root@k8smaster:~# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-5960c246-8910-4d06-9d4a-348c888860ae   1Gi        RWO            linstor        46m

Check the LINSTOR resource

You can see that a LINSTOR resource with 2 replicas has been created

root@k8smaster:~# linstor r l
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName                             ┊ Node      ┊ Port ┊ Usage  ┊ Conns ┊    State ┊ CreatedOn           ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-5960c246-8910-4d06-9d4a-348c888860ae ┊ k8smaster ┊ 7000 ┊ Unused ┊ Ok    ┊ UpToDate ┊ 2020-12-22 00:03:49 ┊
┊ pvc-5960c246-8910-4d06-9d4a-348c888860ae ┊ k8snode1  ┊ 7000 ┊ Unused ┊ Ok    ┊ UpToDate ┊ 2020-12-22 00:03:49 ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
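
To actually consume the volume, a pod can reference the PVC. A minimal sketch (the pod name and busybox image are chosen purely for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod            # illustrative name
spec:
  containers:
    - name: app
      image: busybox        # any image able to write to the mount
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc  # the PVC created above

Once the pod is Running, linstor r l should show the resource as InUse on the node where the pod was scheduled.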

Problems encountered

linstor-csi version too old

Description: When first deploying the LINSTOR CSI service, the chosen version, v0.7.0, was too old for the current Kubernetes version, so some of the resources could not be created

root@k8smaster:~# curl https://raw.githubusercontent.com/LINBIT/linstor-csi/v0.7.0/examples/k8s/deploy/linstor-csi-1.14.yaml | sed "s/linstor-controller.example.com/10.203.1.90/g" | kubectl apply -f -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10917  100 10917    0     0    156      0  0:01:09  0:01:09 --:--:--  2355
serviceaccount/linstor-csi-controller-sa created
clusterrole.rbac.authorization.k8s.io/linstor-csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/linstor-csi-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-attacher-binding created
clusterrole.rbac.authorization.k8s.io/linstor-csi-cluster-driver-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-cluster-driver-registrar-binding created
serviceaccount/linstor-csi-node-sa created
clusterrole.rbac.authorization.k8s.io/linstor-csi-driver-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-driver-registrar-binding created
clusterrole.rbac.authorization.k8s.io/linstor-csi-snapshotter-role created
clusterrolebinding.rbac.authorization.k8s.io/linstor-csi-snapshotter-binding created
unable to recognize "STDIN": no matches for kind "StatefulSet" in version "apps/v1beta1"
unable to recognize "STDIN": no matches for kind "DaemonSet" in version "apps/v1beta2"

Solution: check the LINBIT GitHub repository for the latest version.
Latest version: https://github.com/piraeusdatastore/linstor-csi/blob/795a2242ff31e4b9266124253616e69821ba4f56/examples/k8s/deploy/linstor-csi-1.17.yaml
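
The blob URL above can be converted to its raw form (the same path on raw.githubusercontent.com) and applied the same way as before; a sketch, assuming the 1.17 yaml uses the same linstor-controller.example.com placeholder as v0.7.0:

curl https://raw.githubusercontent.com/piraeusdatastore/linstor-csi/795a2242ff31e4b9266124253616e69821ba4f56/examples/k8s/deploy/linstor-csi-1.17.yaml \
  | sed "s/linstor-controller.example.com/10.203.1.90/g" \
  | kubectl apply -f -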

Images unavailable

Description: Applying the latest yaml file directly still failed, because network problems caused image pull errors.
Solution: Find the images referenced in the yaml file, compare them with the images in your own registry, replace them, and apply again.
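
One way to mirror an image into your own registry is to pull it (through a reachable mirror if necessary), retag it, and push it; a sketch, where myregistry is a placeholder:

# myregistry is a placeholder for your own registry
docker pull k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
docker tag  k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4 myregistry/csi-provisioner:v2.0.4
docker push myregistry/csi-provisioner:v2.0.4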
