
Kubernetes Exploration Series 001: Quick Cluster Deployment with kubeadm

Author: StarDustMrsu | Published 2018-12-06 18:13

    Quickly deploying a Kubernetes cluster on CentOS 7.6 with kubeadm

    Why use kubeadm to deploy Kubernetes in the first place? Because kubeadm is Kubernetes' native deployment tool: it is simple, fast, and convenient, which makes it ideal for beginners who want to get a learning cluster up quickly. A cluster deployed with kubeadm plus the Docker images of the Kubernetes components is essentially no different from one built from the binaries. Note, however, that this approach is not recommended for direct use in production; here it is used purely for studying Kubernetes. About kubeadm: Easily bootstrap a secure Kubernetes cluster

    1.1. Server planning

    Hostname      Internal IP   Role     OS version
    kubernetes01 10.5.0.206 Master CentOS Linux release 7.6.1810 (Core)
    kubernetes02 10.5.0.207 Worker CentOS Linux release 7.6.1810 (Core)
    kubernetes03 10.5.0.208 Worker CentOS Linux release 7.6.1810 (Core)
    kubernetes04 10.5.0.209 Worker CentOS Linux release 7.6.1810 (Core)
    kubernetes05 10.5.0.210 Worker CentOS Linux release 7.6.1810 (Core)
    kubernetes06 10.5.0.213 Worker CentOS Linux release 7.6.1810 (Core)
    kubernetes07 10.5.0.214 Worker CentOS Linux release 7.6.1810 (Core)
    kubernetes08 10.5.0.218 Worker CentOS Linux release 7.6.1810 (Core)
    kubernetes09 10.5.0.219 Worker CentOS Linux release 7.6.1810 (Core)

    1.2. Master node

    The Master node runs the three most important components of the Kubernetes project: apiserver, scheduler, and controller-manager.
    apiserver: exposes the API used to manage the cluster
    scheduler: assigns Pods to the worker nodes in the cluster
    controller-manager: a collection of controllers that watch the state of the whole cluster through the apiserver
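    With kubeadm, these control-plane components run as static Pods managed by the kubelet on the master. As a quick illustration (only meaningful after the init step in 1.2.10), you can see their manifests and the resulting Pods like this:
    # Static Pod manifests written by kubeadm on the master
    ls /etc/kubernetes/manifests/
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    # The kubelet turns each manifest into a kube-system Pod named <component>-<hostname>
    kubectl -n kube-system get pods -o wide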

    1.2.1. Confirm the OS version and set the hostname
    1. Check the OS version
    [root@iZ2ze7ftggknd1fplnxygqZ ~]# cat /etc/redhat-release 
    CentOS Linux release 7.6.1810 (Core)
    2. Set the hostname
    hostnamectl set-hostname kubernetes01
    3. Don't forget to update /etc/hosts
    [root@kubernetes01 ~]# cat /etc/hosts
    127.0.0.1       localhost       localhost.localdomain   localhost4      localhost4.localdomain4
    ::1     localhost       localhost.localdomain   localhost6      localhost6.localdomain6
    # kubernetes-cluster
    10.5.0.206 kubernetes01
    ...
    
    1.2.2. Disable the firewall
    systemctl stop firewalld && systemctl disable firewalld
    
    1.2.3. Check that SELinux is disabled
    [root@kubernetes01 ~]# setenforce 0
    setenforce: SELinux is disabled
    
    1.2.4. Handle the routing/bridging settings in advance
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1    
    vm.swappiness=0
    EOF
    Then apply the settings:
    sysctl --system
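    Two related preparations are easy to miss here: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded (this is exactly the [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables] issue mentioned in 1.4), and kubeadm's preflight checks also expect swap to be off (vm.swappiness=0 alone is not enough). A small supplement; the sed pattern below is only a common example and may need adjusting for your /etc/fstab:
    # Load the bridge netfilter module so the net.bridge.* sysctls exist
    modprobe br_netfilter
    # Turn swap off now and keep it off across reboots
    swapoff -a
    sed -i '/ swap / s/^/#/' /etc/fstab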
    
    1.2.5. Install docker-ce; pay attention to docker-ce / Kubernetes version compatibility!
    Install docker-ce v18.06.1 with yum
    [root@kubernetes01 ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
    [root@kubernetes01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@kubernetes01 ~]# yum -y install docker-ce-18.06.1.ce
    [root@kubernetes01 ~]# /bin/systemctl start docker.service 
    [root@kubernetes01 ~]# docker --version 
    Docker version 18.06.1-ce, build e68fc7a
    
    1.2.6. Install kubelet, kubeadm and kubectl
    1. Configure the Aliyun (domestic mirror) yum repository
    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
    2. Import the GPG key
    wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    rpm --import rpm-package-key.gpg
    3. Install the packages
    yum install -y kubelet-1.12.1
    yum install -y kubectl-1.12.1
    yum install -y kubeadm-1.12.1
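    One small addition worth making here: enable both services at boot. The kubelet will restart in a loop until kubeadm init/join hands it a configuration, which is expected at this stage.
    systemctl enable docker
    systemctl enable kubelet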
    
    1.2.7. Verify the installed versions
    [root@kubernetes01 ~]# kubelet --version
    Kubernetes v1.12.1
    [root@kubernetes01 ~]# kubectl version
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    [root@kubernetes01 ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    
    Docker image versions of the Kubernetes components required by kubeadm v1.12.1:
    k8s.gcr.io/kube-apiserver:v1.12.1
    k8s.gcr.io/kube-controller-manager:v1.12.1
    k8s.gcr.io/kube-scheduler:v1.12.1
    k8s.gcr.io/kube-proxy:v1.12.1
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/coredns:1.2.2
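    As a side note, kubeadm itself can print this exact list, which is handy when you work with a different version:
    kubeadm config images list --kubernetes-version v1.12.1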
    
    1.2.8. Download the Docker images of the Kubernetes components
    Because of the "special" network environment of nodes inside China, we take a detour and pull the images from a mirror instead of k8s.gcr.io directly.
    [root@kubernetes01 ~]# cat pull_k8s_images.sh 
    #!/bin/bash
    # Pull the control-plane images from the anjia0532 mirror, re-tag them as
    # k8s.gcr.io/* so kubeadm finds them locally, then drop the mirror tags.
    images=(kube-proxy:v1.12.1 kube-scheduler:v1.12.1 kube-controller-manager:v1.12.1
    kube-apiserver:v1.12.1 etcd:3.2.24 coredns:1.2.2 pause:3.1)
    for imageName in "${images[@]}" ; do
        docker pull anjia0532/google-containers.${imageName}
        docker tag anjia0532/google-containers.${imageName} k8s.gcr.io/${imageName}
        docker rmi anjia0532/google-containers.${imageName}
    done
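    To use it, something along these lines:
    chmod +x pull_k8s_images.sh && ./pull_k8s_images.sh
    docker images | grep 'k8s.gcr.io'   # confirm the re-tagged images are present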
    
    1.2.9. Check the image list
    Remember the roles of scheduler, controller-manager and apiserver mentioned at the beginning? 😂 Don't forget them~
    [root@kubernetes01 ~]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-proxy                v1.12.1             61afff57f010        5 months ago        96.6MB
    k8s.gcr.io/kube-apiserver            v1.12.1             dcb029b5e3ad        5 months ago        194MB
    k8s.gcr.io/kube-scheduler            v1.12.1             d773ad20fd80        5 months ago        58.3MB
    k8s.gcr.io/kube-controller-manager   v1.12.1             aa2dd57c7329        5 months ago        164MB
    k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        6 months ago        220MB
    k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        7 months ago        39.2MB
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        15 months ago       742kB
    
    1.2.10. Deploy the Kubernetes cluster master node with kubeadm
    [root@kubernetes01 ~]# kubeadm init --kubernetes-version=v1.12.1 
    Once the preflight checks pass and after a short wait, a message like the following means the deployment of the Kubernetes master node is complete.
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 10.5.0.206:6443 --token bh3pih.cuir6xpjl7zn7pf2 --discovery-token-ca-cert-hash sha256:ae00fc1ad4a680c01be4deaae6f6e4cf554867664bc5c16e0b3f98d4f2adcf2c
    
    As the output above explains, before you start using the cluster you need to run the following commands as a regular user, because access to a Kubernetes cluster is authenticated (TLS) by default.
    So run these commands 👇
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
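    If you are working as root, the Kubernetes documentation also allows simply pointing kubectl at the admin kubeconfig instead:
    export KUBECONFIG=/etc/kubernetes/admin.conf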
    
    1.2.11. Health checks
    1. Check the health of the core components
    [root@kubernetes01 ~]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok                   
    controller-manager   Healthy   ok                   
    etcd-0               Healthy   {"health": "true"}   
    2. Check the master node status
    [root@kubernetes01 ~]# kubectl get nodes
    NAME           STATUS     ROLES    AGE     VERSION
    kubernetes01   NotReady   master   4m15s   v1.12.1
    
    1.2.12. Deploy the Weave network plugin

    Weave is a fairly popular container networking solution; it is easy to use and quite powerful.

    [root@kubernetes01 ~]# kubectl apply -f https://git.io/weave-kube-1.6
    serviceaccount/weave-net created
    clusterrole.rbac.authorization.k8s.io/weave-net created
    clusterrolebinding.rbac.authorization.k8s.io/weave-net created
    role.rbac.authorization.k8s.io/weave-net created
    rolebinding.rbac.authorization.k8s.io/weave-net created
    daemonset.extensions/weave-net created
    Wait a moment and check the master node again: its STATUS has changed, because the network plugin we just deployed has taken effect
    [root@kubernetes01 ~]# kubectl get nodes
    NAME                STATUS   ROLES    AGE   VERSION
    kubernetes-master   Ready    master   21m   v1.12.1
    
    1.2.13. Check the Weave Pods on the master node
    [root@kubernetes01 ~]# kubectl get pods -n kube-system -l name=weave-net -o wide
    NAME              READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE
    weave-net-vhs56   2/2     Running   0          6m59s   10.5.0.206   kubernetes-master   <none>
    
    1.2.14. Deploy the dashboard (visualization plugin)
    1. Pull the dashboard Docker image from the mirror and re-tag it to the v1.10.1 tag that the YAML below expects
    docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
    docker tag  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0   k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    docker rmi  anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 
    2. Download the dashboard YAML and modify its final section so that you can later log in to the dashboard with a token. Note in particular that this exposes NodePort 30001, which would be extremely unsafe in a production environment!
    [root@kubernetes01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    [root@kubernetes01 ~]# tail -n 20 kubernetes-dashboard.yaml
            effect: NoSchedule
    
    ---
    # ------------------- Dashboard Service ------------------- #
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30001
      selector:
        k8s-app: kubernetes-dashboard
    3. Deploy the dashboard
    [root@kubernetes01 ~]# kubectl apply -f kubernetes-dashboard.yaml
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard configured
    4. Check the status of the dashboard Pod
    [root@kubernetes01 ~]# kubectl get pods -n kube-system |  grep dash
    kubernetes-dashboard-65c76f6c97-f29nm   1/1     Running   0          3m8s
    5. Get a token
    [root@kubernetes01 ~]# kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
    Name:         namespace-controller-token-mt4sh
    Type:  kubernetes.io/service-account-token
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi1tdDRzaCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY5YzE3YWQzLTUxYzItMTFlOS05NWZiLTAwMTYzZTBlNDRiYyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.W2flckBO8CrzGyJzw2aJH5obQSjy4PNSll7uHOiIXPk4dnOTEzI-BfM4C9QrNDjbNTu8gIdLHntLj1181Sf_sRMidB_vhUPg6CFA1zy3XmYH21eVqjSxEBNXMSfrJHBgXnBzaHieaXqF55_etABB0j4xLM7V-bRsQ9AB0G3cv1IYU_gYG3BozksvAObmDEY4GgCI7f0-nu2YRqOMPJPhXWzKOGUvBBPyj171Xo06QvF6p9zpTMSoLa3aV-gU4XA2nMf2_aDdgFrGVI4p95ziewyu0o-W-DiEnXW1hRtwgg-PRe3QPU9ps3TALlr3U8rwh3xVmlqnRuNGVDqzmclVdQ
    Visit https://10.5.0.206:30001 and log in to the dashboard with the token; note that it must be HTTPS!
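    The namespace-controller token grabbed above is simply the first secret the grep happens to match, and its permissions are limited. For a lab cluster, a common (and, again, not production-safe) alternative is a dedicated admin ServiceAccount; the name dashboard-admin below is my own choice, not part of the original setup:
    # Lab only: a ServiceAccount bound to cluster-admin for dashboard login
    kubectl -n kube-system create serviceaccount dashboard-admin
    kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret -o name | grep dashboard-admin) | grep token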
    
    1.2.15. Deploy a container storage plugin

    The Rook project is a Ceph-based Kubernetes storage plugin, a production-grade option for persistent storage that is well worth playing with.

    cd /usr/local/src
    yum -y install git
    git clone https://github.com/rook/rook.git
    cd /usr/local/src/rook/cluster/examples/kubernetes/ceph
    kubectl apply -f common.yaml
    kubectl apply -f operator.yaml
    kubectl apply -f cluster.yaml 
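    A quick way to confirm that the operator and the Ceph daemons come up (the rook-ceph namespace is what the example manifests in this directory use; it may differ in other Rook versions):
    kubectl -n rook-ceph get pods -o wide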
    

    1.3. Worker nodes

    This is similar to the master setup: first do the preparation work (set the hostname, disable the firewall, handle the routing/bridging settings in advance, configure the yum repositories, and so on). Since there are 9 machines in the cluster, a simple Ansible playbook driving a shell script was used to save time (a sketch of the Ansible invocation follows the script).
    
    #!/bin/bash
    #pre config
    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1    
    vm.swappiness=0
    EOF
    sysctl --system
    
    #install docker-ce
    yum -y install yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum -y install docker-ce-18.06.1.ce
    /bin/systemctl start docker.service 
    
    # install kubeadm
    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
    wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    rpm --import rpm-package-key.gpg
    yum install -y kubelet-1.12.1
    yum install -y kubectl-1.12.1
    yum install -y kubeadm-1.12.1
    
    # install kube-proxy and pause
    images=(kube-proxy:v1.12.1 pause:3.1 )
    for imageName in ${images[@]} ; do
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
    done
    
    # join cluster
    kubeadm join 10.5.0.206:6443 --token bh3pih.cuir6xpjl7zn7pf2 --discovery-token-ca-cert-hash sha256:ae00fc1ad4a680c01be4deaae6f6e4cf554867664bc5c16e0b3f98d4f2adcf2c
    

    Note that the script does not set the hostname!
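    A minimal sketch of pushing the script to all workers with Ansible's script module; the inventory group name "workers", the inventory path and the script path are my own assumptions, not from the original setup:
    # Run the setup script on every worker (group name and paths are assumed)
    ansible workers -i /etc/ansible/hosts -m script -a "/root/worker_setup.sh" -b
    # Hostnames still have to be set per node, e.g. for 10.5.0.207:
    ansible workers -i /etc/ansible/hosts -m hostname -a "name=kubernetes02" -l 10.5.0.207 -b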

    1.4. Miscellaneous

    A few problems encountered along the way: kubeadm v1.12.1 failing to install correctly; nodes reporting [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]; and image pulls from k8s.gcr.io failing. All of these are easy to resolve; don't be afraid when you get stuck, just work through them one at a time.
    
    (Figure: Kubernetes cluster)

    1.5. Supplementary notes

    1.5.1. Setup on nodes outside mainland China
    Here I used three Hong Kong nodes. Below is the basic procedure on the master node.
    cat /etc/redhat-release 
    CentOS Linux release 7.6.1810 (Core)
    1. Set the hostname
    hostname kubernetes001
    vim /etc/hosts
    2. Disable the firewall and SELinux (as in 1.2.2 and 1.2.3)
    setenforce 0
    3. Adjust the kernel parameters
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1    
    vm.swappiness=0
    EOF
    sysctl --system
    4. Install docker-ce with yum
    yum -y install yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum -y install docker-ce
    /bin/systemctl start docker.service 
    [root@kubernetes001 ~]# docker --version
    Docker version 18.09.5, build e8ff056
    5. Install the kubeadm-related packages with yum
    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
    yum install -y kubelet
    yum install -y kubectl
    yum install -y kubeadm
    [root@kubernetes001 ~]# kubelet --version
    Kubernetes v1.14.1
    [root@kubernetes001 ~]# kubectl version
    Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    [root@kubernetes001 ~]# kubeadm  version 
    kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    6. Pull the Kubernetes component images with docker
    docker pull k8s.gcr.io/kube-proxy:v1.14.1
    docker pull k8s.gcr.io/kube-apiserver:v1.14.1
    docker pull k8s.gcr.io/kube-scheduler:v1.14.1
    docker pull k8s.gcr.io/kube-controller-manager:v1.14.1
    docker pull k8s.gcr.io/etcd:3.2.24
    docker pull k8s.gcr.io/coredns:1.2.2
    docker pull k8s.gcr.io/pause
    7. Initialize the master node
    kubeadm init --kubernetes-version=v1.14.1
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    How the worker nodes join the master and the steps performed on them are omitted here!
    8. Install the network plugin
    kubectl apply -f https://git.io/weave-kube-1.6
    9. Check the status
    [root@kubernetes001 ~]# kubectl get nodes
    NAME            STATUS   ROLES    AGE    VERSION
    kubernetes001   Ready    master   165m   v1.14.1
    kubernetes002   Ready    <none>   116m   v1.14.1
    kubernetes003   Ready    <none>   115m   v1.14.1
    [root@kubernetes001 ~]# kubectl get pods -n kube-system -l name=weave-net -o wide
    NAME              READY   STATUS    RESTARTS   AGE    IP             NODE            NOMINATED NODE   READINESS GATES
    weave-net-48kv8   2/2     Running   0          119m   172.31.5.117   kubernetes002   <none>           <none>
    weave-net-pchlk   2/2     Running   0          118m   172.31.5.118   kubernetes003   <none>           <none>
    weave-net-wcbr5   2/2     Running   0          167m   172.31.5.116   kubernetes001   <none>           <none>
    10. Install the dashboard
    wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    vim kubernetes-dashboard.yaml    # edit and modify the last few lines, as in 1.2.14
    kubectl apply -f kubernetes-dashboard.yaml
    kubectl get pods -n kube-system |  grep dash
    kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
    11. Install the storage plugin
    cd /usr/local/src
    yum -y install git
    git clone https://github.com/rook/rook.git
    cd /usr/local/src/rook/cluster/examples/kubernetes/ceph
    kubectl apply -f common.yaml
    kubectl apply -f operator.yaml
    kubectl apply -f cluster.yaml 
    
    1.5.2. What if you forget the generated kubeadm join command?
    [root@kubernetes001 ~]# kubeadm token list
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    nvv5wu.7e1v9oniyak5se3a   23h       2019-05-06T11:29:59+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    [root@kubernetes001 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    a90d6683aef10b826041a21de487d5274fc80a5aa6edb67abe638251ce59e3ed
    Run the join operation on node 2:
    [root@kubernetes002 ~]# kubeadm join 172.31.5.116:6443 --token nvv5wu.7e1v9oniyak5se3a --discovery-token-ca-cert-hash sha256:a90d6683aef10b826041a21de487d5274fc80a5aa6edb67abe638251ce59e3ed
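    With recent kubeadm versions (including the v1.14 used here) there is also a shortcut that creates a fresh token and prints a ready-to-use join command in one step:
    kubeadm token create --print-join-command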
    

    1.6. Summary

    In this article, kubeadm was used to deploy 1 Kubernetes master node and 8 worker nodes (9 machines in total), plus the dashboard, a container storage plugin, and the container network plugin. Overall, kubeadm is very convenient to play with 😄, but its drawbacks are just as obvious: no master high availability, insufficient hardening, and so on 😭... so it does not meet the bar for production use. For production, I would personally recommend looking into kubeasz or kubespray for deployment. Last of all, what learning Kubernetes really takes is a spirit of exploration! ☀️
    PS: the servers are machines from a certain domestic cloud ☁️ provider
    Feel free to leave comments and discuss~
