Ceph deployed by Rook
- RBD creation process
kubectl create -f pg64pool.yaml
[root@node1 nvme]# cat pg64pool.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: pg64pool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
  parameters:
    pg_num: "64"
    pgp_num: "64"
    compression_mode: none
  annotations:
  deviceClass: nvme
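To confirm the pool was created, you can check the CephBlockPool resource and, assuming the standard rook-ceph-tools toolbox Deployment from the Rook examples is installed (not shown in the original notes), list the pool from inside the cluster:
kubectl -n rook-ceph get cephblockpool pg64pool
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls detail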
rbd create pg64poolrbd --size 20480 --image-feature layering --pool pg64pool
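Optionally, confirm the image's size and enabled features before mapping it:
rbd info pg64pool/pg64poolrbd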
rbd map pg64poolrbd --name client.admin -p pg64pool
This step maps the rbd image to a local block device, which you can verify:
[root@node1 nvme]# rbd device list
id pool namespace image snap device
0 pg64pool pg64poolrbd - /dev/rbd0
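At this point /dev/rbd0 is an ordinary block device. As a side note (not part of the original walkthrough; the mount point /mnt/pg64poolrbd and the choice of ext4 are arbitrary), you could format and mount it:
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/pg64poolrbd
mount /dev/rbd0 /mnt/pg64poolrbd
If you do mount it, run umount /mnt/pg64poolrbd before the unmap step in the teardown below.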
That completes the creation process.
- Teardown process
2.1 When tearing down, first unmap the device:
rbd device unmap /dev/rbd0
2.2 Then delete the rbd image:
rbd rm pg64pool/pg64poolrbd
2.3 Finally, delete the pool:
kubectl delete -f pg64pool.yaml
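To verify the teardown, the local device list should now be empty and the CephBlockPool resource gone:
rbd device list
kubectl -n rook-ceph get cephblockpool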
If you skip the unmap step, deleting the image fails with the following error:
rbd rm pg64pool/pg64poolrbd
2020-11-12T10:02:57.334+0800 7f1af67fc700 -1 librbd::image::PreRemoveRequest: 0x557d46671da0 check_image_watchers: image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
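To find out which client still has the image open (for example, a node where it is still mapped), rbd status lists the watchers:
rbd status pg64pool/pg64poolrbd
Unmap the image on that node (or wait ~30s for a crashed client to time out) and retry rbd rm.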