Deploying a k8s cluster with kubeadm

Author: 杏壳 | Published 2019-12-20 17:41

    Environment preparation

    Three virtual machines are created here with Parallels Desktop, using shared networking. Make sure the three hosts have three distinct NIC MAC addresses and that the VMs can ping one another. The three VMs are configured as follows:

    Hostname   IP
    master1    192.168.1.27
    node1      192.168.1.28
    node2      192.168.1.29

    Set the root password and change the hostname

    sudo passwd root      # set the root password
    vim /etc/hostname     # master1 / node1 / node2 on the respective host
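
    On CentOS 7 the hostname can also be set in one step with hostnamectl, which takes effect immediately instead of requiring a reboot:

    hostnamectl set-hostname master1   # use node1 / node2 on the other two VMs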
    

    Configure local name resolution (/etc/hosts) on the master

    [root@master1 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.1.27 hongyi.master1.com master1
    192.168.1.28 hongyi.node1.com node1
    192.168.1.29 hongyi.node2.com node2
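
    The join step later uses the master's IP directly, so the nodes do not strictly need these entries, but if you want the same names resolvable everywhere, a small sketch (assuming root SSH access from the master to both nodes):

    for host in node1 node2; do
        scp /etc/hosts ${host}:/etc/hosts   # overwrite each node's hosts file with the master's
    done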
    

    Permanently disable swap

    free -h
    swapoff -a
    # edit /etc/fstab and comment out the swap mount entry so the change survives reboots
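
    A non-interactive way to make that edit (a sketch; double-check /etc/fstab afterwards):

    # comment out any active fstab line that mounts swap
    sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab
    free -h   # the Swap row should now read 0B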
    

    Configure SELinux and firewalld

    [root@master1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # permanent change, takes effect after a reboot
    [root@master1 ~]# setenforce 0
    # stop and disable the firewall
    [root@master1 ~]# systemctl stop firewalld
    [root@master1 ~]# systemctl disable firewalld
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
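
    A quick check that both changes took effect:

    getenforce                      # Permissive now, Disabled after a reboot
    systemctl is-active firewalld   # inactive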
    

    yum repository configuration

    Since the packages and images have to be downloaded, the Aliyun mirrors are used here.
    Fetch the docker-ce repo file, then create one for Kubernetes:

    [root@master1 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@master1 yum.repos.d]# touch kubernetes.repo
    [root@master1 yum.repos.d]# vim kubernetes.repo
    [root@master1 yum.repos.d]# cat kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
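
    Before installing anything, you can confirm the repo works and that the pinned version is available:

    yum repolist                                    # the kubernetes repo should be listed
    yum list kubeadm --showduplicates | tail -n 5   # kubeadm-1.15.2 should appear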
    

    Copy both repo files to the nodes

    scp ./docker-ce.repo   node1:/etc/yum.repos.d/docker-ce.repo
    scp ./docker-ce.repo   node2:/etc/yum.repos.d/docker-ce.repo
    scp ./kubernetes.repo  node1:/etc/yum.repos.d/kubernetes.repo
    scp ./kubernetes.repo  node2:/etc/yum.repos.d/kubernetes.repo
    

    Master node setup

    The components to install on the master are:

    • docker-ce
    • kubeadm
    • kubectl
    • kubelet

    Install the components and start docker

    yum install kubeadm-1.15.2 kubectl-1.15.2 kubelet-1.15.2  docker-ce  -y
    systemctl start docker
    systemctl enable docker
    docker info
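
    One step worth adding here (the post does it for the nodes later): enable the kubelet service on the master too, since kubeadm init's preflight checks expect it.

    systemctl enable kubelet   # kubelet will crash-loop until kubeadm init writes its config; that is expected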
    

    Because k8s.gcr.io is blocked by the GFW, we first prepare the required images with a script: pull them from a mirror, then retag them to the names kubeadm expects. (Some components are pulled as v1.15.2-beta.0, presumably the closest tag available on the mirror, and retagged as v1.15.2.)

    [parallels@master1 ~]$ cat /dockerpull.sh 
    #!/bin/bash
    echo "Hello World !"
    # pick the mirror tags that match what the kubeadm error output asks for
    docker pull mirrorgooglecontainers/kube-apiserver:v1.15.2
    docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.2-beta.0
    docker pull mirrorgooglecontainers/kube-scheduler:v1.15.2-beta.0
    docker pull mirrorgooglecontainers/kube-proxy:v1.15.2
    docker pull mirrorgooglecontainers/pause:3.1
    docker pull mirrorgooglecontainers/etcd:3.3.10
    docker pull coredns/coredns:1.3.1 
    # retag to the k8s.gcr.io names kubeadm expects
    docker tag mirrorgooglecontainers/kube-apiserver:v1.15.2 k8s.gcr.io/kube-apiserver:v1.15.2
    docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.2-beta.0 k8s.gcr.io/kube-controller-manager:v1.15.2
    docker tag mirrorgooglecontainers/kube-scheduler:v1.15.2-beta.0 k8s.gcr.io/kube-scheduler:v1.15.2
    docker tag mirrorgooglecontainers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
    docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
    # remove the mirror-named images
    docker rmi mirrorgooglecontainers/kube-apiserver:v1.15.2
    docker rmi mirrorgooglecontainers/kube-controller-manager:v1.15.2-beta.0
    docker rmi mirrorgooglecontainers/kube-scheduler:v1.15.2-beta.0
    docker rmi mirrorgooglecontainers/kube-proxy:v1.15.2
    docker rmi mirrorgooglecontainers/pause:3.1
    docker rmi mirrorgooglecontainers/etcd:3.3.10
    docker rmi coredns/coredns:1.3.1 
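
    Equivalently, the script can be written as a loop over image pairs; a sketch with the same images and tags as above:

    #!/bin/bash
    # each entry is "source-image target-image": pull from the mirror, retag, drop the mirror name
    pairs=(
        "mirrorgooglecontainers/kube-apiserver:v1.15.2 k8s.gcr.io/kube-apiserver:v1.15.2"
        "mirrorgooglecontainers/kube-controller-manager:v1.15.2-beta.0 k8s.gcr.io/kube-controller-manager:v1.15.2"
        "mirrorgooglecontainers/kube-scheduler:v1.15.2-beta.0 k8s.gcr.io/kube-scheduler:v1.15.2"
        "mirrorgooglecontainers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2"
        "mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1"
        "mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10"
        "coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1"
    )
    for pair in "${pairs[@]}"; do
        set -- $pair   # split into $1 = source image, $2 = target image
        docker pull "$1" && docker tag "$1" "$2" && docker rmi "$1"
    done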
    

    Check the images:

    [root@master1 yum.repos.d]# docker image ls
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-proxy                v1.15.2             167bbf6c9338        4 months ago        82.4MB
    k8s.gcr.io/kube-apiserver            v1.15.2             34a53be6c9a7        4 months ago        207MB
    k8s.gcr.io/kube-controller-manager   v1.15.2             575346c7506b        5 months ago        159MB
    k8s.gcr.io/kube-scheduler            v1.15.2             38d61dd6e105        5 months ago        81.1MB
    k8s.gcr.io/coredns                   1.3.1               eb516548c180        11 months ago       40.3MB
    k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        12 months ago       258MB
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        24 months ago       742kB
    

    Initialize the cluster control plane with kubeadm init. Note that --pod-network-cidr has to match the network the pod network add-on will use (flannel defaults to 10.244.0.0/16):

    kubeadm init --kubernetes-version=v1.15.2  --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
    

    After running through a series of initialization phases, it prints the next steps:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.1.27:6443 --token 8bzhtc.c97lhb44g0y7l4uy \
        --discovery-token-ca-cert-hash sha256:8dd9c308281a85175b2ef9106110c2a81e2b2374e230717d0882b04f31b9bc69 
    

    Run the commands it suggests

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    # since we are already working as root, the third command (chown) can be skipped
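
    For root there is also the shortcut mentioned in the Kubernetes docs: point KUBECONFIG at the admin config directly.

    export KUBECONFIG=/etc/kubernetes/admin.conf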
    

    Next the output tells us to deploy a pod network. Check the node status first; NotReady here means the network is not yet available:

    [root@master1 yum.repos.d]# kubectl get nodes
    NAME      STATUS     ROLES    AGE     VERSION
    master1   NotReady   master   6m20s   v1.15.2
    

    flannel is used as the pod network add-on here; deploy it manually:

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    If the yml file cannot be downloaded from inside the VM, download it on the host machine first and then copy it into the VM.

    [root@master1 yum.repos.d]# kubectl apply -f /kube-flannel.yml 
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds-amd64 created
    daemonset.apps/kube-flannel-ds-arm64 created
    daemonset.apps/kube-flannel-ds-arm created
    daemonset.apps/kube-flannel-ds-ppc64le created
    daemonset.apps/kube-flannel-ds-s390x created
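
    You can watch the flannel DaemonSet pods come up before re-checking the node:

    kubectl get pods -n kube-system -w   # Ctrl-C once the kube-flannel-ds pods are Running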
    

    After a short wait, the master node's status changes to Ready

    [root@master1 yum.repos.d]# kubectl get nodes
    NAME      STATUS   ROLES    AGE   VERSION
    master1   Ready    master   12m   v1.15.2
    

    Joining the worker nodes

    The components to install manually on each node are:

    • kubeadm
    • kubelet
    • docker-ce

    [root@node1 ~]# yum install docker-ce kubelet-1.15.2 kubeadm-1.15.2  -y
    [root@node1 ~]# systemctl start docker
    [root@node1 ~]# systemctl enable docker
    [root@node1 ~]# systemctl start kubelet
    [root@node1 ~]# systemctl enable  kubelet
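
    A quick sanity check that the pinned versions landed (kubelet will restart in a loop until the node joins; that is expected):

    rpm -q kubeadm kubelet   # both should report 1.15.2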
    

    Before joining the cluster, the nodes also need the relevant images ready, again because of the GFW. The nodes only run kube-proxy and pause, so they use this smaller dockerpull.sh script:

    [root@centos-7-k8s-node1 ~]# cat /dockerpull.sh 
    #!/bin/bash
    docker pull mirrorgooglecontainers/kube-proxy:v1.15.2
    docker pull mirrorgooglecontainers/pause:3.1
    # retag to the k8s.gcr.io names
    docker tag mirrorgooglecontainers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
    docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
    # remove the mirror-named images
    docker rmi mirrorgooglecontainers/kube-proxy:v1.15.2
    docker rmi mirrorgooglecontainers/pause:3.1
    

    Once the images are ready, join the cluster with kubeadm join (the command kubeadm init printed on the master):

    kubeadm join 192.168.1.27:6443 --token 8bzhtc.c97lhb44g0y7l4uy \
        --discovery-token-ca-cert-hash sha256:8dd9c308281a85175b2ef9106110c2a81e2b2374e230717d0882b04f31b9bc69 
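
    If the token has expired (the default lifetime is 24 hours), generate a fresh join command on the master:

    kubeadm token create --print-join-command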
    

    After a short wait, once the nodes have joined successfully, check all node statuses from the master

    [root@master1 yum.repos.d]# kubectl get nodes
    NAME      STATUS   ROLES    AGE   VERSION
    master1   Ready    master   40m   v1.15.2
    node1     Ready    <none>   13m   v1.15.2
    node2     Ready    <none>   47s   v1.15.2
    
    [root@master1 yum.repos.d]# kubectl get pods -n kube-system
    NAME                              READY   STATUS    RESTARTS   AGE
    coredns-5c98db65d4-fstfc          1/1     Running   0          43m
    coredns-5c98db65d4-wmqln          1/1     Running   0          43m
    etcd-master1                      1/1     Running   0          42m
    kube-apiserver-master1            1/1     Running   0          42m
    kube-controller-manager-master1   1/1     Running   0          42m
    kube-flannel-ds-amd64-rhw8v       1/1     Running   0          31m
    kube-flannel-ds-amd64-wrts8       1/1     Running   0          3m41s
    kube-flannel-ds-amd64-zmvdv       1/1     Running   0          15m
    kube-proxy-9d2jq                  1/1     Running   0          15m
    kube-proxy-l2kd8                  1/1     Running   0          3m41s
    kube-proxy-sh8wp                  1/1     Running   0          43m
    kube-scheduler-master1            1/1     Running   0          42m
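
    As a quick smoke test (a hypothetical example, not from the original post), run an nginx deployment and confirm the pod gets scheduled onto one of the nodes:

    kubectl create deployment nginx --image=nginx
    kubectl get pods -o wide   # the pod should land on node1 or node2 and reach Running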
    

    At this point, a cluster with no applications installed yet is fully deployed.
    have fun
