CentOS 7 / k8s: Installing an NFS Server and Client, with NFS Dynamic Storage

Author: andrewkk | Published 2021-10-13 17:15

    NFS Introduction
    Overview
      Network File System (NFS) is a kernel-based file system. NFS transfers data between server and client over the network using the Remote Procedure Call (RPC) mechanism, letting different machine nodes share file directories. Once the directory shared by the NFS server is mounted on an NFS client, the client can read and write files on the remote server as if they were local.
      One NFS server can serve many NFS clients. Thanks to the RPC mechanism, users access the remote shared directory just like a local one, which makes it very convenient to use.

    How It Works
    Mount principle


    [figure: NFS server shared directory mounted by multiple clients]

    As shown in the figure, a shared directory /nfs is created and configured on the NFS server. Any NFS client that can reach it over the network can mount that directory onto a mount point of its choice in its local file system: for example, client A mounts it at /nfs-a/data and client B at /nfs-b/data. Each client then sees all the data inside the server's shared directory /nfs through its local mount point. The effective permissions (read-only, read-write, and so on) depend on the server's export configuration.

    Communication principle


    [figure: NFS/RPC communication flow]

    As shown in the figure, data is transferred through the RPC and NFS services:

    The NFS server starts the RPC service (rpcbind), which listens on port 111; this can be checked with netstat.
    The NFS server starts the NFS service and registers its port information with the RPC service.
    The NFS client starts its own RPC service and asks the server's RPC service for the NFS service port.
    The server's RPC service returns the NFS port information to the client.
    Using the returned port, the client establishes an NFS connection to the server, and data is transferred via RPC over the underlying TCP/IP protocol.
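
    To see what rpcbind has actually registered (a quick sanity check, not a step from the original article), query the portmapper directly:

    rpcinfo -p localhost        # lists registered RPC services: the portmapper on 111, plus nfs (2049) and mountd on a server
    netstat -lntup | grep 111   # confirm rpcbind is listening on port 111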

    NFS Setup
    
    1. Disable the firewall
    systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
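
    If you prefer to keep firewalld running instead of disabling it, an alternative (not used in this article) is to open only the NFS-related services:

    firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
    firewall-cmd --reload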
    
    2. Check and disable SELinux
    cat /etc/selinux/config
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
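
    setenforce 0 only applies until the next reboot; the sed edit makes the change permanent. Verify the runtime state with:

    getenforce   # should print Permissive now, or Disabled after a reboot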
    
    Deploying the NFS Client
    yum install -y nfs-utils rpcbind 
    systemctl restart rpcbind && systemctl enable rpcbind && systemctl status rpcbind
    systemctl restart nfs && systemctl enable nfs && systemctl status nfs
    ------ Note: when deploying Jenkins (or anything else) on NFS-backed storage in k8s, you only need to install and start these services on every node; no manual mounting is required.
    
    Configuration steps
    Every host that uses the NFS shared directory needs to go through the following steps once.
    Install nfs-utils and rpcbind
    $ yum install -y nfs-utils rpcbind
    Create the mount point
    $ mkdir -p /nfs/data
    Mount the NFS share
    $ mount -t nfs 10.1.1.1:/nfs /nfs/data
    Check the mount
    $ df -Th
    Test the mount
    Go into the local /nfs/data directory and create a file, then check whether that file shows up in the /nfs directory on the NFS server. Conversely, change something under /nfs on the server and check that the client's mounted directory reflects it. If both directions work, the share is functioning.
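    A minimal version of that test, assuming the server exports /nfs as shown above:

    echo hello > /nfs/data/test-from-client.txt   # on the client: write a file through the mount point
    ls -l /nfs/test-from-client.txt               # on the server: the file should appear in the shared directory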
    Unmount
    $ umount /nfs/data
    To mount the share automatically at every boot, add a line like the following to /etc/fstab (the last two fields are 0 0, since dump and fsck do not apply to network file systems):
    10.1.1.1:/nfs /nfs/data nfs defaults 0 0
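    To check the fstab entry without rebooting, unmount and then remount everything from fstab:

    umount /nfs/data
    mount -a            # mounts all fstab entries; an error here means the new line is wrong
    df -Th | grep nfs   # the share should be mounted again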
    
    
    Deploying the NFS Server -------- Note: any Linux node in the k8s cluster can serve as the NFS server.
    yum install -y nfs-utils rpcbind
    
    1. Create the shared storage directories
    mkdir -p /nfs/data/k8s/ ...   # e.g. subdirectories for jenkins, grafana, prometheus
    
    2. Configure the NFS exports
    vi /etc/exports
    The format is: shared-directory client-address-1(param1,param2,...) client-address-2(param1,param2,...)
    /nfs/data/k8s 192.168.64.204/24(rw,async,no_root_squash)
    Here rw grants read-write access, async lets the server acknowledge writes before they reach disk (faster but slightly less safe), and no_root_squash lets root on the client act as root on the export, which many provisioners need but which is a security trade-off.
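
    After editing /etc/exports, you can reload the export table without restarting the services and inspect the active exports:

    exportfs -rav   # re-export everything in /etc/exports, verbosely
    exportfs -v     # list the active exports and their effective options

    Note that the exported path must match the NFS_SERVER and NFS_PATH values used by the provisioner deployment later in this article; the sample showmount output below lists /data/k8s, so make sure you export the directory you actually created.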
    
    3. Start the RPC service first, then the NFS service
    systemctl restart rpcbind && systemctl enable rpcbind && systemctl status rpcbind
    systemctl restart nfs && systemctl enable nfs && systemctl status nfs
    
    4. Check the available NFS exports
    showmount -e 127.0.0.1   # or: showmount -e localhost
    [root@k8s-node4 grafana]# showmount -e localhost
    Export list for localhost:
    /data/k8s 192.168.64.204/24
    

    =======================================
    Creating a StorageClass Backed by NFS in k8s
    =======================================

    [root@k8s-master1 jenkins]# cat rbac.yaml 
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
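
    Apply the RBAC objects before deploying the provisioner (the article implies this step but does not show the command):

    kubectl apply -f rbac.yaml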
    

    =======================================
    Deploying the nfs-client-provisioner
    =======================================

    [root@k8s-master1 jenkins]# cat nfs-client-provisioner-deployment.yaml 
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 192.168.64.204
                - name: NFS_PATH
                  value: /data/k8s/
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.64.204
                path: /data/k8s/
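
    Apply the deployment and confirm the provisioner pod comes up; the label selector below matches the app label defined in the manifest:

    kubectl apply -f nfs-client-provisioner-deployment.yaml
    kubectl get pods -l app=nfs-client-provisioner -n default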
    

    =======================================
    Deploying the StorageClass
    =======================================

    [root@k8s-master1 jenkins]# cat class.yaml 
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    # must match PROVISIONER_NAME in the provisioner deployment above
    provisioner: fuseim.pri/ifs # or choose another name, but it must match the deployment's PROVISIONER_NAME env var
    parameters:
      archiveOnDelete: "false"
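
    Apply the class, then, as a quick end-to-end test, create a PVC against it and watch the provisioner create a PV automatically. The claim name and size below are illustrative, not from the original setup:

    kubectl apply -f class.yaml

    cat test-claim.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: managed-nfs-storage
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

    kubectl apply -f test-claim.yaml
    kubectl get pvc test-claim   # should become Bound once the provisioner creates the PV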
    

    Handy commands for verifying the result:
    kubectl get sc,pv,pvc -A -o wide
    kubectl get all -n ns-monitor
    kubectl get deployments.apps -o wide -A
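
    Optionally (not part of the original steps), mark the new class as the cluster default so that PVCs without an explicit storageClassName use it:

    kubectl patch storageclass managed-nfs-storage \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'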
