Using a Ceph RBD image across multiple clusters
Background
Several compute clusters share the same external Ceph cluster, and Pods (together with their RBD-backed volumes) need to be migrated from one compute cluster to another.
Approach
Assume we are migrating from compute cluster 1 to compute cluster 2. The PV in compute cluster 1 looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: kubesphere-rook-ceph.rbd.csi.ceph.com
  creationTimestamp: "2023-05-08T11:41:15Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-716a6f63-2b4e-4b42-bd90-a8231deab0fd
  resourceVersion: "239077529"
  uid: 853cb97b-6d5e-4ac2-b345-15592390e1df
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: www-nginxdemo-0
    namespace: kube-system
    resourceVersion: "239077512"
    uid: 716a6f63-2b4e-4b42-bd90-a8231deab0fd
  csi:
    controllerExpandSecretRef:
      name: ceph-wwq
      namespace: kubesphere-rook-ceph
    driver: kubesphere-rook-ceph.rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: ceph-wwq
      namespace: kubesphere-rook-ceph
    volumeAttributes:
      clusterID: 1319f9dd-2326-4297-a486-e57cc8c6df1b
      imageFeatures: layering
      imageFormat: "2"
      imageName: csi-vol-fb6c7473-ed94-11ed-a41e-027bc4f73bc2
      journalPool: rbd-wwq
      pool: rbd-wwq
      storage.kubernetes.io/csiProvisionerIdentity: 1672975589281-8081-kubesphere-rook-ceph.rbd.csi.ceph.com
    volumeHandle: 0001-0024-1319f9dd-2326-4297-a486-e57cc8c6df1b-000000000000001e-fb6c7473-ed94-11ed-a41e-027bc4f73bc2
  persistentVolumeReclaimPolicy: Delete
  storageClassName: cephblk-wwq
  volumeMode: Filesystem
Copy this PV manifest from compute cluster 1 and create it manually in compute cluster 2. The only field that needs to change is storage.kubernetes.io/csiProvisionerIdentity under volumeAttributes; set it to the value actually used by compute cluster 2 (provision any throwaway PV there and read the value from it).
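A minimal sketch of that workflow, assuming kubectl contexts named cluster1 and cluster2 and a placeholder <existing-rbd-pv>; these names are illustrative and not part of the original setup:

# On compute cluster 2, read csiProvisionerIdentity from any PV that was
# already provisioned by the same CSI driver (provision a throwaway PVC
# first if no such PV exists yet).
kubectl --context cluster2 get pv <existing-rbd-pv> -o yaml | grep csiProvisionerIdentity

# Export the PV from compute cluster 1.
kubectl --context cluster1 get pv pvc-716a6f63-2b4e-4b42-bd90-a8231deab0fd -o yaml > pv.yaml

# Edit pv.yaml: set storage.kubernetes.io/csiProvisionerIdentity to the value
# printed above (you may also need to strip server-populated fields such as
# metadata.uid and metadata.resourceVersion before the API server accepts the
# object), then create the PV in compute cluster 2.
kubectl --context cluster2 apply -f pv.yaml

Note that the copied PV carries a claimRef pointing at the original PVC, so depending on your workflow you may also need to recreate that PVC (www-nginxdemo-0 in kube-system in this example) in compute cluster 2, or clear the claimRef's uid so a newly created PVC can bind to the volume.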