Storage in Kubernetes is either statically or dynamically provisioned. Kubernetes provides a mechanism that can create PVs automatically, called Dynamic Provisioning, and the core of this mechanism is the StorageClass API object.
When a user deploys an application and the PVC references the corresponding dynamic storage class, the PV is created automatically. The naming conventions on the NFS server are:
1. An automatically created PV appears in the shared directory on the NFS server under the name ${namespace}-${pvcName}-${pvName}.
2. When that PV is reclaimed, the directory is kept on the NFS server and renamed to archived-${namespace}-${pvcName}-${pvName}.
A StorageClass object defines two things:
1. The attributes of the PV, such as the storage type and the size of the volume.
2. The storage plugin needed to create that kind of PV.
With these two pieces of information, Kubernetes can match a user-submitted PVC to a StorageClass and then call the storage plugin declared by that StorageClass to create the required PV.
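For illustration, a minimal StorageClass carrying both pieces of information might look like the sketch below. This is not the manifest used later in this article (that is pub-nfs-sc.yaml); the provisioner name is a placeholder, and archiveOnDelete is a parameter understood by the nfs-client provisioner that controls whether reclaimed directories get the archived- prefix described above.
# Minimal sketch only; example.com/nfs and the StorageClass name are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs-sc
provisioner: example.com/nfs   # which storage plugin creates the PVs
parameters:
  archiveOnDelete: "true"      # attribute passed to the plugin: keep archived- copies on reclaim
reclaimPolicy: Retain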
The process is as follows:
1) The cluster administrator creates the StorageClass in advance;
2) The user creates a PersistentVolumeClaim (PVC) that uses the storage class;
3) The claim tells the system that it needs a PersistentVolume (PV);
4) The system reads the information in the storage class;
5) Based on that information, the system automatically creates the PV the PVC needs in the background;
6) The user creates a Pod that uses the PVC;
7) The application in the Pod persists its data through the PVC;
8) The PVC in turn uses the PV for the actual persistence.
1. Deploying dynamic volume provisioning
First, prepare an NFS server and a Kubernetes cluster (v1.15.3).
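The NFS export itself is only a prerequisite here. A minimal sketch of preparing it on the NFS server might look like the following; the export options and the client network are assumptions, while the path matches the one used below:
# On the NFS server (10.192.52.140): create the shared directory
mkdir -p /mnt/nfs/pub-nfs-sc
# Export it; adjust the options and allowed network to your environment
echo "/mnt/nfs/pub-nfs-sc 10.192.52.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -r              # re-read /etc/exports
showmount -e localhost   # verify the export is visible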
Dynamic provisioning consists of three parts: (1) the RBAC objects, (2) the NFS provisioner, and (3) the StorageClass for the NFS resources. All three are combined in a single manifest, pub-nfs-sc.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: storageclass  # set the namespace according to your environment; same below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
  namespace: storageclass  # set the namespace according to your environment; same below
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
  namespace: storageclass  # set the namespace according to your environment; same below
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: storageclass
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: storageclass
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: storageclass  # set the namespace according to your environment; same below
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: storageclass
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: storageclass  # must match the namespace used in the RBAC objects above
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: docker.in.zwxict.com/official/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: pub-nfs-storage  # provisioner name; must match the provisioner field in the StorageClass below
            - name: NFS_SERVER
              value: 10.192.52.140  # NFS server IP address
            - name: NFS_PATH
              value: /mnt/nfs/pub-nfs-sc  # exported NFS path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.192.52.140  # NFS server IP address
            path: /mnt/nfs/pub-nfs-sc  # exported NFS path
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pub-nfs-sc
provisioner: pub-nfs-storage  # must match the PROVISIONER_NAME environment variable in the provisioner Deployment
reclaimPolicy: Retain  # reclaim policy: when the PV and PVC are deleted, the files on the NFS server are retained
Deploy it:
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl apply -f pub-nfs-sc.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created
storageclass.storage.k8s.io/pub-nfs-sc created
Check the StorageClass that was created:
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl get sc
NAME PROVISIONER AGE
pub-nfs-sc pub-nfs-storage 6m1s
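Before creating any PVCs, it is also worth confirming that the provisioner Pod itself is running; for example, using the storageclass namespace and the app label from the Deployment above:
kubectl -n storageclass get pods -l app=nfs-client-provisioner
kubectl -n storageclass rollout status deployment/nfs-client-provisioner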
2. Verification
Create a PVC with test-claim.yaml (it will be mounted into a Pod afterwards):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: pub-nfs-sc  # name of the StorageClass created above
  resources:
    requests:
      storage: 1Mi
Check that the PVC status is Bound:
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-b06ba929-bb98-42cf-93f8-465cb9c64520 1Mi RWX pub-nfs-sc 7s
test-po.yaml uses the PVC created above:
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: docker.in.zwxict.com/tools/busybox:latest
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"  # create a SUCCESS file, then exit
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim  # must match the PVC name
Check on the NFS server that the SUCCESS file was created:
[root@nfs-140 pub-nfs-sc]# ls default-test-claim-pvc-b06ba929-bb98-42cf-93f8-465cb9c64520/ # directories are named following the ${namespace}-${pvcName}-${pvName} pattern
SUCCESS
Note: if a PVC stays in Pending after it is created, check the provisioner's logs to confirm the cause; in most cases the permissions granted by the role bindings have not taken effect, so make sure the roles, the bindings, and the provisioner Deployment are all in the same namespace (see the commands below).
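A minimal troubleshooting sketch, assuming the storageclass namespace and the test-claim PVC used above:
# Why is the PVC still Pending?
kubectl describe pvc test-claim
# What does the provisioner log?
kubectl -n storageclass logs deployment/nfs-client-provisioner
# Any RBAC or provisioning errors reported as events?
kubectl -n storageclass get events --sort-by=.metadata.creationTimestamp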
Next, create an nginx application, nginx.yaml, that serves its web pages from dynamically provisioned volumes backed by the StorageClass:
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None  # None makes this a headless service
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-headless"  # governing headless Service defined above
  replicas: 2  # two replicas
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: docker.in.zwxict.com/official/nginx:1.19.2
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: pub-nfs-sc
        resources:
          requests:
            storage: 1Mi
The results are as follows:
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl apply -f nginx.yaml
service/nginx-headless created
statefulset.apps/web created
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl get pvc # check the PVCs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pvc-4a12ddef-8c37-4c9d-ba35-e200595e1d16 1Mi RWO pub-nfs-sc 92s
www-web-1 Bound pvc-f7a17f18-21b7-44da-bf23-48671bf8a0f2 1Mi RWO pub-nfs-sc 88s
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl get pv|grep pub-nfs-sc # check the PVs
pvc-4a12ddef-8c37-4c9d-ba35-e200595e1d16 1Mi RWO Retain Bound default/www-web-0 pub-nfs-sc 2m58s
pvc-b06ba929-bb98-42cf-93f8-465cb9c64520 1Mi RWX Retain Bound default/test-claim pub-nfs-sc 12m
pvc-f7a17f18-21b7-44da-bf23-48671bf8a0f2 1Mi RWO Retain Bound default/www-web-1 pub-nfs-sc 2m54s
The automatically created directories as seen on the NFS server:
[root@nfs-140 pub-nfs-sc]# ll
total 0
drwxrwxrwx 2 root root 6 Aug 25 14:31 default-www-web-0-pvc-4a12ddef-8c37-4c9d-ba35-e200595e1d16
drwxrwxrwx 2 root root 6 Aug 25 14:31 default-www-web-1-pvc-f7a17f18-21b7-44da-bf23-48671bf8a0f2
On the NFS server, write different content for the two nginx replicas, then exec into a Pod and access them:
[root@nfs-140 pub-nfs-sc]# echo "web-00" > default-www-web-0-pvc-4a12ddef-8c37-4c9d-ba35-e200595e1d16/index.html
[root@nfs-140 pub-nfs-sc]# echo "web-11" > default-www-web-1-pvc-f7a17f18-21b7-44da-bf23-48671bf8a0f2/index.html
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl exec -it web-0 -- /bin/sh
/ # nslookup nginx-headless
Server: 192.168.0.10
Address: 192.168.0.10:53
Name: nginx-headless.default.svc.cluster.local
Address: 192.168.58.207
Name: nginx-headless.default.svc.cluster.local
Address: 192.168.90.42
/ # curl 192.168.58.207
web-00
/ # curl 192.168.90.42
web-11
Setting the default StorageClass
[root@k8s-master-155-221 classStorage]# kubectl patch storageclass pub-nfs-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' # mark pub-nfs-sc as the default storage backend
storageclass.storage.k8s.io/pub-nfs-sc patched
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl get sc
NAME PROVISIONER AGE
pub-nfs-sc (default) pub-nfs-storage 30s
[root@k8s-master-155-221 deploy]# kubectl patch storageclass pub-nfs-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' # unset the default storage backend
storageclass.storage.k8s.io/pub-nfs-sc patched
[root@k8s-10-192-52-123 pub-nfs-sc]# kubectl get sc
NAME PROVISIONER AGE
pub-nfs-sc pub-nfs-storage 2m44s
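With a default StorageClass set, a PVC that omits storageClassName is provisioned by it automatically. A hypothetical claim such as the following would then be served by pub-nfs-sc for as long as it is marked as the default:
# Hypothetical PVC: no storageClassName, so the cluster's default StorageClass is used.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: default-sc-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi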