Dynamic PVC with Ceph

Author: xiao_b4b1 | Published 2018-11-14 14:54

Creating PVCs against statically provisioned PVs on Ceph is straightforward, but every new PVC requires manually creating an RBD image and then a PV, which is tedious and does not lend itself to automation. This article walks through dynamic PVC provisioning with Ceph.
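For contrast, the static approach just described boils down to something like the following sketch; the image name, file name, and PV name here are placeholders rather than values from the original environment:

[root@node-194 ~]# rbd create manual-image --size 1024 --pool rbd --image-feature layering

[root@master-192 ceph]# cat ceph-pv-manual.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv-manual            # placeholder name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 172.30.81.194:6789
    pool: rbd
    image: manual-image           # must match the image created above
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
[root@master-192 ceph]# kubectl create -f ceph-pv-manual.yaml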

Environment

Cluster        Nodes
k8s cluster    master-192, node-193, node-194
Ceph cluster   node-193, node-194

Hands-on

Prerequisites

The k8s cluster is healthy.
The Ceph cluster is healthy, and an rbd pool has already been created (a pool-creation sketch follows the output below).

[root@node-194 ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    40 GiB     38 GiB      2.1 GiB          5.19 
POOLS:
    NAME     ID     USED       %USED     MAX AVAIL     OBJECTS 
    rbd      4      37 MiB      0.20        12 GiB          19 
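If the rbd pool does not exist yet, it can be created on the Ceph side first; the placement-group count of 64 below is an assumption and should be sized to the cluster:

[root@node-194 ~]# ceph osd pool create rbd 64
[root@node-194 ~]# ceph osd pool application enable rbd rbd    # Luminous and later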

Using the in-tree kubernetes.io/rbd provisioner

1. Create a Secret holding the Ceph admin key
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCSWplcGJXR29nRmhBQWhwRlZxSlgwZktNcDA3S3RacmJlNmc9PQ==

[root@master-192 ceph]# kubectl create -f ceph-secret.yaml
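The base64 value of key above comes from the Ceph admin keyring; it can be produced with, for example:

[root@node-194 ~]# ceph auth get-key client.admin | base64
QVFCSWplcGJXR29nRmhBQWhwRlZxSlgwZktNcDA3S3RacmJlNmc9PQ==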

2. Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.30.81.194:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
  imageFeatures: layering
  imageFormat: "2"

[root@master-192 ceph]# kubectl create -f ceph-storageclass.yaml

3. Create the PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph
  resources:
    requests:
      storage: 1Gi

[root@master-192 ceph]# kubectl create -f ceph-pvc.yaml

4. Check the result

[root@master-192 ceph]# kubectl get pvc
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim2   Pending                                      ceph           17s

The PVC stays Pending, so inspect the reason:

[root@master-192 ceph]# kubectl describe  pvc claim2
Name:          claim2
Namespace:     default
StorageClass:  ceph
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type       Reason              Age                From                         Message
  ----       ------              ----               ----                         -------
  Warning    ProvisioningFailed  11s (x2 over 26s)  persistentvolume-controller  Failed to provision volume with StorageClass "ceph": failed to create rbd image: fork/exec /usr/local/bin/rbd: no such file or directory, command output:
Mounted By:  <none>

The error shows that the rbd binary could not be found. The component that invokes rbd here is kube-controller-manager; when it runs inside a container whose image does not ship the rbd client, the binary really is missing.
Below, we test running kube-controller-manager directly on a host that has rbd installed.
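Before doing that, it is worth confirming the host really has the rbd client available; if not, installing ceph-common is one way to provide it (package name assumed for a CentOS/RHEL host):

[root@master-192 ceph]# which rbd || yum install -y ceph-common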

[root@master-192 ceph]# kubectl exec -it kube-controller-manager-master-192 sh -n kube-system
/ # ps 
PID   USER     TIME  COMMAND
    1 root     11:27 kube-controller-manager --leader-elect=true --use-service-account-credentials=true --cluster-signing-key-file=/et
   31 root      0:00 tar xf - -C /usr/bin
   46 root      0:00 tar xf - -C /usr/bin
   82 root      0:00 sh
   89 root      0:00 ps
/ # which kube-controller-manager
/usr/local/bin/kube-controller-manager
/ # cat /proc/1/cmdline 
kube-controller-manager--leader-elect=true--use-service-account-credentials=true--cluster-signing-key-file=/etc/kubernetes/pki/ca.key--address=127.0.0.1--controllers=*,bootstrapsigner,tokencleaner--kubeconfig=/etc/kubernetes/controller-manager.conf--root-ca-file=/etc/kubernetes/pki/ca.crt--service-account-private-key-file=/etc/kubernetes/pki/sa.key--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt--allocate-node-cidrs=true--cluster-cidr=10.254.0.0/16--node-cidr-mask-size=24/ #
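The arguments above run together because /proc/<pid>/cmdline separates them with NUL bytes; if the image's shell has tr, they can be made readable with:

/ # cat /proc/1/cmdline | tr '\0' '\n'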

Copy the kube-controller-manager binary to the host:
[root@master-192 ceph]# kubectl cp kube-controller-manager-master-192:/usr/local/bin/kube-controller-manager /opt -n kube-system

Make it executable:
[root@master-192 ceph]# chmod +x /opt/kube-controller-manager

Remove the containerized kube-controller-manager (moving the static-pod manifest out of /etc/kubernetes/manifests makes kubelet stop the pod):
[root@master-192 ceph]# mv /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/

Run kube-controller-manager on the host:

[root@master-192 ceph]# /opt/kube-controller-manager --leader-elect=true --use-service-account-credentials=true --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --address=127.0.0.1 --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
I1114 14:34:08.410010    8994 controllermanager.go:116] Version: v1.10.0
W1114 14:34:08.411787    8994 authentication.go:55] Authentication is disabled
I1114 14:34:08.411830    8994 insecure_serving.go:44] Serving insecurely on 127.0.0.1:10252
I1114 14:34:08.412401    8994 leaderelection.go:175] attempting to acquire leader lease  kube-system/kube-controller-manager...

Checking the PVC again, it has now been provisioned successfully:

[root@master-192 ~]# kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim2   Bound    pvc-d085bc45-e7d5-11e8-8fe7-5254003ceebc   1Gi        RWO            ceph           11m

Check on the Ceph cluster:

[root@node-194 ~]# rbd ls
kubernetes-dynamic-pvc-5692af0e-e7d7-11e8-87fb-5254003ceebc
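To confirm the dynamically provisioned volume is actually usable, a pod can mount the claim; the pod name and image below are placeholders, not part of the original test:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pvc-test             # placeholder name
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: claim2           # the PVC created above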

Using a third-party rbd provisioner (ceph.com/rbd)

1. Create the Ceph key Secret
Same as in the first approach.

2. Create a StorageClass using the ceph.com/rbd provisioner

[root@master-192 ceph]# cat ceph-storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: ceph
provisioner: ceph.com/rbd
parameters:
  monitors: 172.30.81.194:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
  imageFeatures: layering
  imageFormat: "2"

[root@master-192 ceph]# kubectl create -f ceph-storageclass.yaml

3. Deploy the pod that implements the ceph.com/rbd provisioner

git clone https://github.com/xiaotech/ceph-pvc

[root@master-192 rbac]# kubectl create -f .
clusterrole.rbac.authorization.k8s.io/rbd-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.extensions/rbd-provisioner created
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
serviceaccount/rbd-provisioner created
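The contents of that repository are not reproduced here, but its core is typically a Deployment along these lines (a sketch based on the upstream external-storage rbd-provisioner; the exact image and manifest layout in the cloned repo may differ):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      serviceAccountName: rbd-provisioner
      containers:
      - name: rbd-provisioner
        image: quay.io/external_storage/rbd-provisioner:latest   # assumed image
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd     # must match the provisioner in the StorageClass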

4. Create the PVC

[root@master-192 ceph]# cat ceph-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph
  resources:
    requests:
      storage: 1Gi

[root@master-192 ceph]# kubectl create -f ceph-pvc.yaml

5. Verify

[root@master-192 ceph]# kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim2   Bound    pvc-afd0fb78-e7d9-11e8-8fe7-5254003ceebc   1Gi        RWO            ceph           2m
[root@node-194 ~]# rbd ls
kubernetes-dynamic-pvc-cc49b712-e7d9-11e8-9fe9-be6d73ce589a

