Creating Ceph-backed PVCs through statically provisioned PVs is relatively simple, but every new PVC requires manually creating an RBD image and then a PV. That is tedious and works against automated management, so this article walks through dynamically provisioning PVCs with Ceph.
Environment
Cluster | Nodes |
---|---|
k8s cluster | master-192, node-193, node-194 |
ceph cluster | node-193, node-194 |
Hands-on
Prerequisites
The k8s cluster is up and healthy
The Ceph cluster is up and healthy, and an rbd pool has already been created (if not, see the sketch after the ceph df output below)
[root@node-194 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
40 GiB 38 GiB 2.1 GiB 5.19
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 4 37 MiB 0.20 12 GiB 19
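If the rbd pool did not exist yet, it could be created from a monitor node. A sketch; the PG count of 64 is just an illustrative value for a small test cluster, not from the original setup:
[root@node-194 ~]# ceph osd pool create rbd 64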
Using the built-in kubernetes.io/rbd provisioner
1. Create a Secret holding the Ceph admin key
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCSWplcGJXR29nRmhBQWhwRlZxSlgwZktNcDA3S3RacmJlNmc9PQ==
[root@master-192 ceph]# kubectl create -f ceph-secret.yaml
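The key field above is the base64-encoded Ceph admin key. Assuming a standard Ceph deployment, it can be generated on a monitor node with:
[root@node-194 ~]# ceph auth get-key client.admin | base64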
2. Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.30.81.194:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
  imageFeatures: layering
  imageFormat: "2"
[root@master-192 ceph]# kubectl create -f ceph-storageclass.yaml
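You can confirm that the StorageClass was registered:
[root@master-192 ceph]# kubectl get storageclass ceph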
3. Create a PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph
  resources:
    requests:
      storage: 1Gi
[root@master-192 ceph]# kubectl create -f ceph-pvc.yaml
4. Check the result
[root@master-192 ceph]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim2 Pending ceph 17s
The claim stays Pending; let's look at the exact reason:
[root@master-192 ceph]# kubectl describe pvc claim2
Name: claim2
Namespace: default
StorageClass: ceph
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/rbd
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 11s (x2 over 26s) persistentvolume-controller Failed to provision volume with StorageClass "ceph": failed to create rbd image: fork/exec /usr/local/bin/rbd: no such file or directory, command output:
Mounted By: <none>
The error shows that the rbd binary could not be found. It is the kube-controller-manager component that invokes rbd, and when that component runs inside a container, the rbd binary is indeed not present in its filesystem.
Let's test running controller-manager directly on a host that has rbd installed.
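On CentOS the rbd CLI ships in the ceph-common package; if it is not already present on the master, a hedged example (Ceph repo setup not shown):
[root@master-192 ceph]# yum install -y ceph-common
[root@master-192 ceph]# which rbd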
[root@master-192 ceph]# kubectl exec -it kube-controller-manager-master-192 sh -n kube-system
/ # ps
PID USER TIME COMMAND
1 root 11:27 kube-controller-manager --leader-elect=true --use-service-account-credentials=true --cluster-signing-key-file=/et
31 root 0:00 tar xf - -C /usr/bin
46 root 0:00 tar xf - -C /usr/bin
82 root 0:00 sh
89 root 0:00 ps
/ # which kube-controller-manager
/usr/local/bin/kube-controller-manager
/ # cat /proc/1/cmdline
kube-controller-manager--leader-elect=true--use-service-account-credentials=true--cluster-signing-key-file=/etc/kubernetes/pki/ca.key--address=127.0.0.1--controllers=*,bootstrapsigner,tokencleaner--kubeconfig=/etc/kubernetes/controller-manager.conf--root-ca-file=/etc/kubernetes/pki/ca.crt--service-account-private-key-file=/etc/kubernetes/pki/sa.key--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt--allocate-node-cidrs=true--cluster-cidr=10.254.0.0/16--node-cidr-mask-size=24/ #
Copy the controller-manager binary out to the host:
[root@master-192 ceph]# kubectl cp kube-controller-manager-master-192:/usr/local/bin/kube-controller-manager /opt -n kube-system
Make it executable:
[root@master-192 ceph]# chmod +x /opt/kube-controller-manager
Remove the containerized kube-controller-manager by moving its static pod manifest out of the manifests directory:
[root@master-192 ceph]# mv /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/
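kubelet watches the manifests directory, so moving the file out makes it tear down the static pod. You can confirm the pod is gone:
[root@master-192 ceph]# kubectl get pods -n kube-system | grep controller-manager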
Run kube-controller-manager on the host:
[root@master-192 ceph]# /opt/kube-controller-manager --leader-elect=true --use-service-account-credentials=true --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --address=127.0.0.1 --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
I1114 14:34:08.410010 8994 controllermanager.go:116] Version: v1.10.0
W1114 14:34:08.411787 8994 authentication.go:55] Authentication is disabled
I1114 14:34:08.411830 8994 insecure_serving.go:44] Serving insecurely on 127.0.0.1:10252
I1114 14:34:08.412401 8994 leaderelection.go:175] attempting to acquire leader lease kube-system/kube-controller-manager...
Checking the PVC again now shows it was provisioned successfully:
[root@master-192 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim2 Bound pvc-d085bc45-e7d5-11e8-8fe7-5254003ceebc 1Gi RWO ceph 11m
Verify on the Ceph cluster:
[root@node-194 ~]# rbd ls
kubernetes-dynamic-pvc-5692af0e-e7d7-11e8-87fb-5254003ceebc
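To consume the claim from a workload, reference it in a pod spec. A minimal sketch; the pod name, busybox image, and mount path are illustrative, not from the original setup:
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test   # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # RBD volume mounted here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: claim2    # the PVC created above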
Using the third-party rbd provisioner
1. Create the Ceph key Secret
Same as above.
2. Create a StorageClass with provisioner ceph.com/rbd
[root@master-192 ceph]# cat ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: ceph.com/rbd
parameters:
  monitors: 172.30.81.194:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
  imageFeatures: layering
  imageFormat: "2"
[root@master-192 ceph]# kubectl create -f ceph-storageclass.yaml
3. Deploy the pod that provides ceph.com/rbd
git clone https://github.com/xiaotech/ceph-pvc
[root@master-192 rbac]# kubectl create -f .
clusterrole.rbac.authorization.k8s.io/rbd-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.extensions/rbd-provisioner created
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
serviceaccount/rbd-provisioner created
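Before moving on, check that the provisioner pod is actually running (rbd-provisioner is the deployment name created above; this assumes it landed in the default namespace):
[root@master-192 rbac]# kubectl get pods | grep rbd-provisioner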
4. Create a PVC
[root@master-192 ceph]# cat ceph-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph
  resources:
    requests:
      storage: 1Gi
[root@master-192 ceph]# kubectl create -f ceph-pvc.yaml
5. Verify
[root@master-192 ceph]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim2 Bound pvc-afd0fb78-e7d9-11e8-8fe7-5254003ceebc 1Gi RWO ceph 2m
[root@node-194 ~]# rbd ls
kubernetes-dynamic-pvc-cc49b712-e7d9-11e8-9fe9-be6d73ce589a
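The size and features of the provisioned image can be inspected with rbd info:
[root@node-194 ~]# rbd info kubernetes-dynamic-pvc-cc49b712-e7d9-11e8-9fe9-be6d73ce589a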