Environment
| Cluster | Host | Notes |
| --- | --- | --- |
| cluster1 | 10.9.8.95-97 | ci-test |
| cluster2 | 10.9.8.72-75 | k1-k4 |
Procedure
- Create an RBD volume on cluster1, map and format it, and write a file to it
[root@ci-test1 ~]# rbd create pool/lun1 --size=1G
[root@ci-test1 ~]# rbd ls -p pool
lun1
[root@ci-test1 ~]# rbd feature disable pool/lun1 object-map fast-diff deep-flatten
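The object-map, fast-diff and deep-flatten features are disabled here because the kernel RBD client on many systems does not support them; with these features left enabled, the rbd map below would typically fail with an unsupported-features error.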
[root@ci-test1 ~]# rbd map pool/lun1
/dev/rbd0
[root@ci-test1 ~]# mkdir /mnt/rbd
[root@ci-test1 ~]# mount /dev/rbd0 /mnt/rbd/
mount: /dev/rbd0 is write-protected, mounting read-only
mount: unknown filesystem type '(null)'
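The mount fails because the freshly mapped RBD device does not contain a filesystem yet, so create one first (XFS is used here):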
[root@ci-test1 ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=
[root@ci-test1 ~]# mount /dev/rbd0 /mnt/rbd/
[root@ci-test1 ~]# cd /mnt/rbd/
[root@ci-test1 rbd]# ls
[root@ci-test1 rbd]# cp ~/go1.20.5.linux-amd64.tar.gz .
[root@ci-test1 rbd]# ls
go1.20.5.linux-amd64.tar.gz
[root@ci-test1 rbd]# md5sum go1.20.5.linux-amd64.tar.gz > md5
[root@ci-test1 rbd]# ls
go1.20.5.linux-amd64.tar.gz md5
- Create a snapshot
[root@ci-test1 ~]# rbd snap create pool/lun1@lun1_snap1
[root@ci-test1 ~]# rbd snap ls pool/lun1
SNAPID  NAME        SIZE   PROTECTED  TIMESTAMP
     4  lun1_snap1  1 GiB             Fri Nov 10 16:15:46 2023
[root@ci-test1 ~]# rbd ls -l -p pool
NAME             SIZE   PARENT  FMT  PROT  LOCK
lun1             1 GiB          2
lun1@lun1_snap1  1 GiB          2
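Note that the filesystem was still mounted and writable when the snapshot above was taken. For a crash-consistent snapshot, you can flush and freeze the filesystem around the snap create; a minimal sketch using the same mount point and image as above:
[root@ci-test1 ~]# sync
[root@ci-test1 ~]# fsfreeze -f /mnt/rbd
[root@ci-test1 ~]# rbd snap create pool/lun1@lun1_snap1
[root@ci-test1 ~]# fsfreeze -u /mnt/rbd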
- Export the RBD snapshot
[root@ci-test1 ~]# rbd export pool/lun1@lun1_snap1 lun1.export
Exporting image: 100% complete...done.
[root@ci-test1 ~]# ll lun1.export -h
-rw-r--r-- 1 root root 1.0G Nov 10 16:20 lun1.export
- Copy the exported file to a cluster2 node
~/go/src/deeproute/smd (udev ✗) scp 10.9.8.95:~/lun1.export .
lun1.export
- Import lun1.export on cluster2
~/go/src/deeproute/smd (udev ✗) rbd import --image-format 2 --image-feature layering lun1.export new-pool/new-image
Importing image: 100% complete...done.
~/go/src/deeproute/smd (udev ✗) rbd ls -p new-pool
new-image
- Map and mount the imported image on cluster2 and verify the data
~/go/src/deeproute/smd (udev ✗) rbd map new-pool/new-image
/dev/rbd0
~/go/src/deeproute/smd (udev ✗) mount /dev/rbd0 /mnt/ceph
~/go/src/deeproute/smd (udev ✗) ls /mnt/ceph
go1.20.5.linux-amd64.tar.gz md5
~/go/src/deeproute/smd (udev ✗) cd /mnt/ceph
/mnt/ceph cat md5
4504f55404e8021531fcbcfc669ebf87 go1.20.5.linux-amd64.tar.gz
/mnt/ceph md5sum go1.20.5.linux-amd64.tar.gz
4504f55404e8021531fcbcfc669ebf87 go1.20.5.linux-amd64.tar.gz
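After the data has been verified, the mount and the mapping on cluster2 can be cleaned up; a routine sketch (leave the mount point before unmounting):
cd /
umount /mnt/ceph
rbd unmap /dev/rbd0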
NOTE: If the RBD volume is large, exporting it to a file may take up a lot of storage space. Instead, you can pipe the export over SSH, streaming directly from cluster1 into an import on cluster2, by running the following command on cluster1.
[root@ci-test1 ~]# rbd export pool/lun1@lun1_snap1 - | ssh root@10.9.8.72 "rbd import --image-format 2 --image-feature layering,exclusive-lock - new-pool/new-image"
Exporting image: 100% complete...done.
Importing image: 100% complete...done.
The feature list given to --image-feature can simply be kept consistent with the features of the source volume.
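To keep the features identical, you can check them on the source image first; for example, on cluster1 (output omitted, it depends on the image):
[root@ci-test1 ~]# rbd info pool/lun1 | grep features
In this test it should report layering and exclusive-lock, which matches the --image-feature list used in the pipe command above.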
References
MachineNIX - How to export a ceph RBD image from one cluster to another without using a bridge server