Table of Contents
Volumes
Overview of common volume types: emptyDir, hostPath, NFS
Example 1: emptyDir, mounting an ephemeral volume directly in a Pod
Example 2: node-local persistent storage with hostPath (cannot cross nodes)
Example 3: network-persistent storage with an NFS volume (cross-node)
Preface:
A volume in Kubernetes has a well-defined lifetime: the same as the Pod that encloses it. A volume therefore outlives any individual container in the Pod, so its data survives container restarts. Of course, once the Pod ceases to exist, the volume ceases to exist as well.
To guarantee durability, data must live in external storage. In Docker, persistence is achieved by mapping a host directory into the container, so the data outlives the container's lifecycle. In Kubernetes, however, Pods are spread across different nodes, so host-level mapping alone cannot share persistent data between nodes, and a node failure can mean permanent data loss. For this reason Kubernetes introduces external storage volumes.
Kubernetes supports the following volume types:
- awsElasticBlockStore、azureDisk、azureFile、cephfs、csi、downwardAPI、emptyDir
- fc、flocker、gcePersistentDisk、gitRepo、glusterfs、hostPath、iscsi、local、nfs
- persistentVolumeClaim、projected、portworxVolume、quobyte、rbd、scaleIO、secret
- storageos、vsphereVolume
Classified by storage type:
- Ephemeral storage: emptyDir. When the Pod is deleted, the data is cleared as well; this type of storage is called emptyDir and is used for temporary data.
- Host level: hostPath, local. Data cannot follow a Pod across nodes.
- Network level: NFS, GlusterFS, rbd (block device), cephFS (file system); storage with genuine persistence.
- CSI: Container Storage Interface, e.g. Longhorn.
- Cloud storage (EBS, Azure Disk)
Shared storage devices can be further divided by access mode (a sketch follows this list):
- single-path read-write (one node)
- multi-path read-only (many nodes)
- multi-path parallel read-write (many nodes)
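In Kubernetes these access modes surface as the accessModes of PersistentVolumes and PersistentVolumeClaims (ReadWriteOnce, ReadOnlyMany, ReadWriteMany). A minimal sketch of a claim requesting single-node read-write access; the claim name and size are hypothetical, not taken from this article:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim               # hypothetical name, for illustration only
spec:
  accessModes:
  - ReadWriteOnce                # single-node read-write; alternatives: ReadOnlyMany, ReadWriteMany
  resources:
    requests:
      storage: 1Gi               # assumed size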
Overview of common volume types: emptyDir, hostPath, NFS
emptyDir
An emptyDir volume is created when the Pod is assigned to a node, and it exists for as long as that Pod runs on that node. As the name suggests, it starts out empty. Containers in the Pod can read and write the same files in the emptyDir volume, even though the volume may be mounted at the same or different paths in each container. When the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.
Uses of emptyDir include (see the sketch after this list):
- scratch space, e.g. for a disk-based merge sort
- checkpointing a long computation for recovery after a crash
- holding files that a content-manager container fetches while a web-server container serves them
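The last use case can be sketched as two containers sharing one emptyDir; the image names, paths, and Pod name below are assumptions chosen for illustration, not taken from this article:
apiVersion: v1
kind: Pod
metadata:
  name: shared-emptydir-sketch      # hypothetical Pod name
spec:
  containers:
  - name: web                       # serves the files the fetcher writes
    image: nginx:alpine
    volumeMounts:
    - name: htdocs
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: fetcher                   # stands in for a content-manager container
    image: busybox
    command: ["/bin/sh","-c","echo hello > /pub/index.html && sleep 3600"]
    volumeMounts:
    - name: htdocs
      mountPath: /pub
  volumes:
  - name: htdocs
    emptyDir: {}                    # both containers see the same files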
View the volume types supported by default
[root@k8s-master ~]# kubectl explain pods.spec.volumes   # view the volume types supported by default
KIND:     Pod
VERSION:  v1

RESOURCE: volumes <[]Object>

DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes

     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.

FIELDS:
   awsElasticBlockStore <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount to
     the pod.

   azureFile <Object>
     AzureFile represents an Azure File Service mount on the host and bind mount
     to the pod.

   cephfs <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime
...
Example 1: emptyDir, mounting an ephemeral volume directly in a Pod
[root@k8s-master storage]# cat volumes-emptydir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-emptydir-demo
  namespace: default
spec:
  initContainers:
  - name: config-file-downloader
    image: ikubernetes/admin-box
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","wget -O /data/envoy.yaml http://192.168.4.254/envoy.yaml"]   # init container downloads the config file
    volumeMounts:
    - name: config-file-store
      mountPath: /data               # mount path
  containers:
  - name: envoy
    image: envoyproxy/envoy-alpine:v1.13.1
    command: ['/bin/sh','-c']
    args: ['envoy -c /etc/envoy/envoy.yaml']
    volumeMounts:
    - name: config-file-store
      mountPath: /etc/envoy          # mount path
      readOnly: true
  volumes:
  - name: config-file-store          # volume name referenced by volumeMounts
    emptyDir:                        # volume plugin type: ephemeral
      medium: Memory                 # back the volume with memory (tmpfs)
      sizeLimit: 16Mi
[root@k8s-master storage]# kubectl apply -f volumes-emptydir-demo.yaml
pod/volumes-emptydir-demo created
[root@k8s-master storage]# kubectl get pod
NAME READY STATUS RESTARTS AGE
centos-deployment-66d8cd5f8b-fkhft 1/1 Running 0 2m
volumes-emptydir-demo 1/1 Running 0 4s
[root@k8s-master storage]# kubectl exec -it volumes-emptydir-demo -- /bin/sh
/ # cat /etc/envoy/envoy.yaml   # inspect the mounted file
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 127.0.0.1, port_value: 9999 }
# define static resources
static_resources:
  ......
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    # backend endpoint addresses
    hosts: [{ socket_address: { address: 127.0.0.1, port_value: 8080 }}]
/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:9999 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into the Pod.
Uses of hostPath include:
- running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a container; use a hostPath of /dev/cgroups
- letting a Pod specify whether a given hostPath should exist before the Pod runs, whether it should be created, and what form it should take
hostPath type options (a minimal sketch follows this list):
- DirectoryOrCreate: if the given path does not exist, it is created as an empty directory with 0755 permissions, owned by kubelet;
- Directory: a directory that must already exist at the given path;
- FileOrCreate: if the given path does not exist, it is created as an empty file with 0644 permissions, owned by kubelet;
- File: a file that must already exist at the given path;
- Socket: a UNIX socket that must already exist at the given path;
- CharDevice: a character device that must already exist at the given path;
- BlockDevice: a block device that must already exist at the given path;
- "": the empty string (default); no checks are performed before binding the hostPath volume
Note the following when using this volume type:
- Because the files on each node differ, Pods with identical configuration (for example, created from the same podTemplate) may behave differently on different nodes. And when Kubernetes adds resource-aware scheduling as planned, it will not be able to account for resources consumed through a hostPath.
- Files or directories created on the underlying host are writable only by root. You either need to run your process as root in a privileged container, or modify the file permissions on the host, to be able to write to a hostPath volume (see the sketch below).
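One way to satisfy the second point is to run the container as root, possibly privileged. This is only a sketch of the relevant fields, not a recommendation; the container name is hypothetical:
containers:
- name: host-writer-sketch       # hypothetical container name
  image: busybox
  securityContext:
    runAsUser: 0                 # run the process as root
    privileged: true             # grants broad access to the host; use sparingly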
Example 2: node-local persistent storage with hostPath (cannot cross nodes)
[root@k8s-master storage]# cat volumes-hostpath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-hostpath-demo
spec:
  containers:
  - name: filebeat
    image: ikubernetes/filebeat:5.6.7-alpine
    env:
    - name: REDIS_HOST
      value: redis.ilinux.io:6379
    - name: LOG_LEVEL
      value: info
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: socket
      mountPath: /var/run/docker.sock
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log                         # path on the host
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers       # path on the host
      type: Directory                        # the directory must already exist
  - name: socket
    hostPath:
      path: /var/run/docker.sock
[root@k8s-master storage]# kubectl describe pod volumes-hostpath-demo
...
    Mounts:                                  # mount points
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
      /var/run/docker.sock from socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fsshk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)   # "bare host directory volume" means no type check was performed
    Path:          /var/log
    HostPathType:
  varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:
  socket:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:
  default-token-fsshk:
    Type:          Secret (a volume populated by a Secret)
[root@k8s-master storage]# kubectl exec -it volumes-hostpath-demo -- /bin/sh
~ # ls /var/log/   # the host's /var/log
anaconda chrony journal messages-20210801 spooler
boot.log containers lastlog ntpstats spooler-20210711
boot.log-20210616 cron maillog pods spooler-20210718
boot.log-20210629 cron-20210711 maillog-20210711 qemu-ga spooler-20210725
boot.log-20210701 cron-20210718 maillog-20210718 rhsm spooler-20210801
boot.log-20210704 cron-20210725 maillog-20210725 sa tallylog
boot.log-20210706 cron-20210801 maillog-20210801 secure wtmp
boot.log-20210717 dmesg messages secure-20210711 yum.log
boot.log-20210723 dmesg.old messages-20210711 secure-20210718 yum.log-20210402
btmp grubby messages-20210718 secure-20210725
btmp-20210801 grubby_prune_debug messages-20210725 secure-20210801
~ # ls /var/lib/docker/containers/   # the host's /var/lib/docker/containers
11520a271e7ab8e3e3e4a06738429087a489491f4c4fa3ccb9e3eae49dbcc805
1ad9339e7b61e0b6538470dd40fd89cc409d7594403f7a908aae9ff14f9e197f
28694971823e1c126ea0c3616f29a3ee29c5cbb709fddf3a1989c277190915da
32ac0bd0d3f7dd28a1b8b2a53b0690e2e28d59b4952a60b9603e786fa0cf7e2f
336f21fc7ec8d980c232ad5a96f5df80fd9afd820579a7e0569f465910ec5ddb
37ab98ed1001ab94053d24ba2c87ae8863f2731760a5602ea85b04fb3e2a5d40
3a3a845172627fcd8bd5c887c1917d6e2f5d80e1447696a3018e6d1e9faa44db
...
Example 3: network-persistent storage with an NFS volume (cross-node)
1. Prepare the NFS server, then test mounting and permissions from the nodes
2. Mount the NFS volume in a container and verify data persistence across nodes
View the nfs resource specification
[root@k8s-master ~]# kubectl explain pods.spec.volumes.nfs
KIND:     Pod
VERSION:  v1

RESOURCE: nfs <Object>

DESCRIPTION:
     NFS represents an NFS mount on the host that shares a pod's lifetime More
     info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

     Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do
     not support ownership management or SELinux relabeling.

FIELDS:
   path <string> -required-
     Path that is exported by the NFS server. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#nfs

   readOnly <boolean>
     ReadOnly here will force the NFS export to be mounted with read-only
     permissions. Defaults to false. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#nfs

   server <string> -required-
     Server is the hostname or IP address of the NFS server. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#nfs
- Deploy the NFS service to provide the storage volume
[root@nfs ~]# mkdir -pv /data/redis
[root@nfs redis]# cat /etc/exports
/data/redis 192.168.4.0/24(rw)
[root@nfs ~]# useradd -u 1010 redis
[root@nfs ~]# chown 1010 /data/redis
[root@nfs redis]# systemctl restart nfs   # install nfs-utils first if the service is not present
[root@nfs redis]# ss -anput|grep 2049
udp UNCONN 0 0 *:2049 *:*
udp UNCONN 0 0 :::2049 :::*
tcp LISTEN 0 64 *:2049 *:*
tcp TIME-WAIT 0 0 192.168.4.100:2049 192.168.4.171:772
tcp ESTAB 0 0 192.168.4.100:2049 192.168.4.172:997
- On master, node1, and node2: create the redis user, install nfs-utils, and test mounting and permissions
[root@k8s-node1 ~]# yum -y install nfs-utils
[root@k8s-node1 ~]# useradd -u 1010 redis
[root@k8s-node1 ~]# showmount -e 192.168.4.100
Export list for 192.168.4.100:
/data/redis 192.168.4.0/24
[root@k8s-node1 ~]# mount -t nfs 192.168.4.100:/data/redis /mnt   # mount it
[root@k8s-node1 ~]# df -h|grep redis
192.168.4.100:/data/redis 20G 13G 7.2G 65% /mnt
[root@k8s-node1 ~]# su - redis   # test write permission
[redis@k8s-node1 ~]$ cd /mnt/
[redis@k8s-node1 mnt]$ mkdir test
[redis@k8s-node1 mnt]$ ls
test
- Deploy the Redis service
[root@k8s-master storage]# cat volumes-nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-nfs-demo
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: redis:alpine
    ports:
    - containerPort: 6379
      name: redisport
    securityContext:
      runAsUser: 1010              # run as UID 1010, the redis user
    volumeMounts:
    - mountPath: /data
      name: redisdata
  volumes:
  - name: redisdata
    nfs:
      server: 192.168.4.100        # remote NFS server address
      path: /data/redis
      readOnly: false
[root@k8s-master storage]# kubectl apply -f volumes-nfs-demo.yaml
[root@k8s-master storage]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
centos-deployment-66d8cd5f8b-fkhft 1/1 Running 0 22h 10.244.2.101 k8s-node2 <none> <none>
my-grafana-7d788c5479-zwlhm 1/1 Running 0 22h 10.244.1.109 k8s-node1 <none> <none>
volumes-emptydir-demo 1/1 Running 0 22h 10.244.1.110 k8s-node1 <none> <none>
volumes-hostpath-demo 1/1 Running 0 21h 10.244.1.114 k8s-node1 <none> <none>
volumes-nfs-demo 1/1 Running 0 4s 10.244.1.117 k8s-node1 <none> <none>   # on node1
[root@k8s-node1 ~]# df -h|grep redis   # on node1 the NFS mount is visible
192.168.4.100:/data/redis 20G 13G 7.2G 65% /var/lib/kubelet/pods/5b6cb5b5-0350-4be5-b129-985911b5a2f7/volumes/kubernetes.io~nfs/redisdata
[root@k8s-master storage]# kubectl exec volumes-nfs-demo -it -- /bin/sh
/data $ netstat -tnl   # check the Redis service
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN
tcp 0 0 :::6379 :::* LISTEN
/data $ redis-cli -h 127.0.0.1
127.0.0.1:6379> set mykey www.google.com   # write data
OK
127.0.0.1:6379> get mykey
"www.google.com"
127.0.0.1:6379> BGSAVE   # trigger a background save to disk
Background saving started
127.0.0.1:6379> exit
/data $ ls
dump.rdb
/data $ exit
[root@nfs redis]# ls   # the data is now visible on the NFS server
dump.rdb
[root@k8s-master storage]# kubectl delete pod volumes-nfs-demo
pod "volumes-nfs-demo" deleted
[root@k8s-node1 ~]# systemctl stop kubelet   # stop kubelet on node1 to simulate a node failure, so the pod runs on node2
[root@k8s-master storage]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 34d v1.19.9
k8s-node1 NotReady <none> 34d v1.19.9 #node1 NotReady
k8s-node2 Ready <none> 34d v1.19.9
[root@k8s-master storage]# kubectl apply -f volumes-nfs-demo.yaml
pod/volumes-nfs-demo created
[root@k8s-master storage]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volumes-hostpath-demo 1/1 Running 1 22h 10.244.1.120 k8s-node1 <none> <none>
volumes-nfs-demo 1/1 Running 0 6s 10.244.2.104 k8s-node2 <none> <none>   # running on node2
[root@k8s-master storage]# kubectl exec volumes-nfs-demo -it -- /bin/sh   # verify cross-node data persistence
/data $ ls
dump.rdb
/data $ redis-cli -h 127.0.0.1
127.0.0.1:6379> get mykey
"www.google.com"