03 Installing Kubernetes (K8s) on CentOS 7.9

Author: 逸章 | Published 2021-10-17 16:28

    1. Host environment preparation

    1. Hostname configuration

    1.1 Set each host's own hostname (one command per host)

    hostnamectl set-hostname k8s-master
    
    hostnamectl set-hostname k8s-node1
    
    hostnamectl set-hostname k8s-node2
    

    1.2 On all three hosts, add the master and node entries to the hosts file

    # Append the cluster entries to /etc/hosts
    cat >> /etc/hosts << EOF
    192.168.100.48 k8s-master
    192.168.100.49 k8s-node1
    192.168.100.50 k8s-node2
    EOF
    

    2. On all three machines, disable the firewall, SELinux, and swap

    Disable the firewall and related services:

    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    swapoff -a                         # turn off the swap partition
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab      # comment out swap in fstab so it stays disabled after reboot
    
    # As a single line:
    # systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    

    Pass bridged IPv4 traffic to the iptables chains

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    sysctl --system                        # apply the configuration above
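
    If sysctl --system complains that the net.bridge.* keys are unknown, the br_netfilter kernel module is probably not loaded yet. A minimal sketch to load it now and on every boot:

    modprobe br_netfilter
    cat > /etc/modules-load.d/br_netfilter.conf << EOF
    br_netfilter
    EOF
    lsmod | grep br_netfilter          # verify the module is loaded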
    

    NTP time synchronization

    yum install -y ntpdate        # install ntpdate to sync the clock so master and nodes agree on the time
    ntpdate time.windows.com    # sync the clock now
    

    2. Install Docker on all three machines

    yum install wget -y
    
    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    
    yum -y install docker-ce-20.10.9
    
    # The following command configures a Docker registry mirror
    curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
    systemctl enable docker
    systemctl start docker
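
    A quick sanity check after the install (a sketch; the version string is simply what this walkthrough installed):

    systemctl is-active docker                        # expect: active
    docker version --format '{{.Server.Version}}'     # expect: 20.10.9
    docker info | grep -A1 'Registry Mirrors'         # should list the mirror configured above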
    

    Appendix: how to remove an existing Docker installation:

     [root@k8s-master ~]# yum list installed|grep docker
    containerd.io.x86_64                  1.4.11-3.1.el7                 @docker-ce-stable
    docker.x86_64                         2:1.13.1-208.git7d71120.el7_9  @extras
    docker-client.x86_64                  2:1.13.1-208.git7d71120.el7_9  @extras
    docker-common.x86_64                  2:1.13.1-208.git7d71120.el7_9  @extras
    [root@k8s-master ~]# yum -y remove containerd.io.x86_64 docker.x86_64 docker-client.x86_64 docker-common.x86_64
    

    3. Install Kubernetes on all three hosts

    Configure a domestic Kubernetes yum mirror
    The official Kubernetes package repository is not reachable from inside China without a proxy, so the Aliyun mirror is configured instead:

    [root@k82-node2 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    Install Kubernetes (install and enable kubelet)

    [root@k82-node2 ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
    
    
    
    ...
    Installed:
      kubeadm.x86_64 0:1.15.0-0                         kubectl.x86_64 0:1.15.0-0                         kubelet.x86_64 0:1.15.0-0
    
    Dependency Installed:
      conntrack-tools.x86_64 0:1.4.4-7.el7               cri-tools.x86_64 0:1.13.0-0                        kubernetes-cni.x86_64 0:0.8.7-0
      libnetfilter_cthelper.x86_64 0:1.0.0-11.el7        libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7        libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
      socat.x86_64 0:1.7.3.2-2.el7
    
    Complete!
    
    
    
    [root@k82-node2 ~]# systemctl enable kubelet
    

    Appendix: how to remove an existing Kubernetes installation:

    [root@k8s-node-88 ~]# yum list installed|grep kube
    cri-tools.x86_64                     1.19.0-0                       @kubernetes
    kubeadm.x86_64                       1.22.2-0                       @kubernetes
    kubectl.x86_64                       1.22.2-0                       @kubernetes
    kubelet.x86_64                       1.22.2-0                       @kubernetes
    kubernetes-cni.x86_64                0.8.7-0                        @kubernetes
    [root@k8s-node-88 ~]# yum -y remove cri-tools.x86_64 kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 kubernetes-cni.x86_64
    

    Configure cgroupdriver=systemd
    By default Docker uses the cgroupfs cgroup driver while kubelet uses systemd. The two must match, so both are configured to use systemd here:

    [root@k8s-master-86 ~]# cat /etc/docker/daemon.json
    {
      "registry-mirrors": [
      "http://f1361db2.m.daocloud.io",
      "https://registry.docker-cn.com",
      "https://hub-mirror.c.163.com",
      "https://docker.mirrors.ustc.edu.cn" ],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    
    
    [root@k8s-master-86 ~]# cat > /var/lib/kubelet/config.yaml <<EOF
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
    
    
    [root@k8s-master-86 ~]# systemctl daemon-reload && \
                            systemctl restart docker && \
                            systemctl restart kubelet
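
    To confirm that both sides now agree on the cgroup driver (a quick check, not part of the original post):

    docker info --format '{{.CgroupDriver}}'          # expect: systemd
    grep cgroupDriver /var/lib/kubelet/config.yaml    # expect: cgroupDriver: systemd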
    

    4. Create the K8s cluster

    1. On the master node only

    1.1 Initialization

    kubeadm init \
    --apiserver-advertise-address=192.168.100.48 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.15.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
    

    Then follow the prompt printed by kubeadm init and run:

    [root@k8s-master ~]# mkdir -p $HOME/.kube && \ 
                         sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  && \ 
                         sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    If these commands are skipped, kubectl get node will fail because kubectl has no kubeconfig to use.

    If the initialization fails, run:

    rm -rf  /var/lib/etcd  && \
    rm -rf /etc/kubernetes/* && \
    rm -rf ~/.kube/* &&\
    echo y | kubeadm reset
    

    and then initialize again.

    1.2 Install and configure the flannel network on the master node

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml
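
    To watch flannel roll out before moving on (the app=flannel label matches the DaemonSet in this revision of the manifest; adjust if yours differs):

    kubectl get pods -n kube-system -l app=flannel -o wide
    kubectl get pods -n kube-system -w                 # Ctrl-C once everything is Running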
    

    Then check the cluster health:

    [root@k8s-master-86 ~]# kubectl get pods -n kube-system
    NAME                                    READY   STATUS    RESTARTS   AGE
    coredns-7f6cbbb7b8-9l2mt                1/1     Running   0          26m
    coredns-7f6cbbb7b8-jl2qm                1/1     Running   0          26m
    etcd-k8s-master-86                      1/1     Running   1          26m
    kube-apiserver-k8s-master-86            1/1     Running   1          26m
    kube-controller-manager-k8s-master-86   1/1     Running   1          26m
    kube-flannel-ds-clldj                   1/1     Running   0          11m
    kube-proxy-shxg5                        1/1     Running   0          26m
    kube-scheduler-k8s-master-86            1/1     Running   1          26m
    [root@k8s-master-86 ~]# kubectl get  cs
    Warning: v1 ComponentStatus is deprecated in v1.19+
    NAME                 STATUS      MESSAGE                                                                                       ERROR
    scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
    controller-manager   Healthy     ok
    etcd-0               Healthy     {"health":"true","reason":""}
    
    
    [root@k8s-master-86 ~]# ps -ef|grep flannel|grep -v grep    # it can take about 10 minutes for flannel to finish starting before this returns anything; wait, then continue
    [root@k8s-master-86 ~]# kubectl get nodes              # it can take ten-odd minutes for every node to turn Ready; wait, then continue
    

    The kubectl get cs output above reports the scheduler as Unhealthy. The fix is to comment out the line - --port=0 (prefix it with #) in the two manifests below, then restart kubelet:

    # vi /etc/kubernetes/manifests/kube-controller-manager.yaml
    # vi /etc/kubernetes/manifests/kube-scheduler.yaml
    # systemctl restart kubelet.service
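
    The same edit can be scripted; a sketch (back up the manifests first), after which the static pods restart on their own:

    sed -i 's/^\([[:space:]]*\)- --port=0/\1#- --port=0/' \
        /etc/kubernetes/manifests/kube-scheduler.yaml \
        /etc/kubernetes/manifests/kube-controller-manager.yaml
    systemctl restart kubelet.service
    kubectl get cs                                     # scheduler and controller-manager should now report Healthy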
    
    

    I downloaded a specific revision of the kube-flannel.yml file myself and made two local edits:

    wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
    
    vi kube-flannel.yml
    

    Edit kube-flannel.yml as follows (in vi, use :set number to show line numbers, then :106 to jump to line 106):

        Line 106:   image: lizhenliang/flannel:v0.11.0-amd64
        Line 120:   image: lizhenliang/flannel:v0.11.0-amd64
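
    The same two lines can be changed non-interactively; this sketch assumes the original image reference in this revision of kube-flannel.yml is quay.io/coreos/flannel:v0.11.0-amd64:

    sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml
    grep -n 'image:' kube-flannel.yml                  # both image lines should now point at lizhenliang/flannel:v0.11.0-amd64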
    
    Note that one node may end up stuck in NotReady. In my case the NotReady node had never pulled the flannel Docker image, while the other node was Ready and had the image locally. Digging further, that node could not pull any image at all.

    The fix was to edit /etc/docker/daemon.json on that node and switch to a different domestic registry mirror, after which the pull succeeded.
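
    A sketch of the recovery steps on the NotReady node, assuming /etc/docker/daemon.json has just been updated with a working mirror:

    systemctl daemon-reload && systemctl restart docker
    docker pull lizhenliang/flannel:v0.11.0-amd64      # the image name used in the edited kube-flannel.yml
    docker images | grep flannel                       # confirm the image is now present locally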

    2. On the two worker nodes only

    Note: the join command below comes from the output of kubeadm init on the master.

    [root@k8s-node2 ~]# kubeadm join 192.168.100.48:6443 --token qgg72c.up6e96wieswtdod1 \
    >     --discovery-token-ca-cert-hash 
    
    ......
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    [root@k8s-node2 ~]#
    
    

    If a node wants to join later and the token is no longer known, generate a new one as follows and join the cluster with the printed command:

    [root@k8s-master03 wal]# kubeadm token create --print-join-command
    kubeadm join 192.168.108.222:6443 --token pduv8e.dc5z5iu84f8lyrfh     --discovery-token-ca-cert-hash sha256:44bb94c467bdcaa79d3128af9872c1296757f993e404e1751e1662c7de3faddb
    

    If you want to add another master instead, remember to append --control-plane; the multi-master setup is covered in the high-availability section below.

    5. A quick test: deploying nginx on K8s (optional)

    [root@k8s-master ~]# kubectl create deployment nginx --image=nginx
    
    [root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
    
    [root@k8s-master ~]# kubectl get pods,svc
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/nginx-554b9c67f9-2xwhw   1/1     Running   0          43s
    
    NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        28h
    service/nginx        NodePort    10.1.170.52   <none>        80:30385/TCP   24s
    [root@k8s-master ~]#
    
    
    1. Scaling test
    1.1 Scale out so that two nginx replicas are running
    [root@k8s-master ~]# kubectl scale deployment nginx --replicas=2
    
    1.2 Scale in: first scale up to 4 replicas, then scale back down
    [root@k8s-master ~]# kubectl get pod -o wide
    NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
    nginx-554b9c67f9-jhkgt   1/1     Running   0          15m   10.244.1.7    k8s-node2   <none>           <none>
    nginx-554b9c67f9-qchcx   1/1     Running   1          15m   10.244.2.12   k8s-node1   <none>           <none>
    [root@k8s-master ~]# kubectl scale deployment nginx --replicas=4
    deployment.extensions/nginx scaled
    [root@k8s-master ~]# kubectl get pod -o wide
    NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
    nginx-554b9c67f9-jhkgt   1/1     Running   0          15m   10.244.1.7    k8s-node2   <none>           <none>
    nginx-554b9c67f9-qchcx   1/1     Running   1          15m   10.244.2.12   k8s-node1   <none>           <none>
    nginx-554b9c67f9-vns7w   1/1     Running   0          3s    10.244.2.14   k8s-node1   <none>           <none>
    nginx-554b9c67f9-zs27t   1/1     Running   0          3s    10.244.1.9    k8s-node2   <none>           <none>
    [root@k8s-master ~]# kubectl scale deployment nginx --replicas=3
    deployment.extensions/nginx scaled
    [root@k8s-master ~]# kubectl get pod -o wide
    NAME                     READY   STATUS        RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
    nginx-554b9c67f9-jhkgt   1/1     Running       0          15m   10.244.1.7    k8s-node2   <none>           <none>
    nginx-554b9c67f9-qchcx   1/1     Running       1          15m   10.244.2.12   k8s-node1   <none>           <none>
    nginx-554b9c67f9-vns7w   1/1     Running       0          14s   10.244.2.14   k8s-node1   <none>           <none>
    nginx-554b9c67f9-zs27t   0/1     Terminating   0          14s   10.244.1.9    k8s-node2   <none>           <none>
    [root@k8s-master ~]# kubectl get pod -o wide
    NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
    nginx-554b9c67f9-jhkgt   1/1     Running   0          15m   10.244.1.7    k8s-node2   <none>           <none>
    nginx-554b9c67f9-qchcx   1/1     Running   1          16m   10.244.2.12   k8s-node1   <none>           <none>
    nginx-554b9c67f9-vns7w   1/1     Running   0          40s   10.244.2.14   k8s-node1   <none>           <none>
    [root@k8s-master ~]#
    
    

    2. Availability test
    Next, we delete a pod and stop a container to verify that the Deployment recovers automatically:

    [root@k8s-master ~]# kubectl get pod
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-554b9c67f9-2xwhw   1/1     Running   0          18h
    nginx-554b9c67f9-qchcx   1/1     Running   0          23s
    [root@k8s-master ~]# kubectl delete pod nginx-554b9c67f9-2xwhw
    pod "nginx-554b9c67f9-2xwhw" deleted
    [root@k8s-master ~]# kubectl get pod
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-554b9c67f9-jhkgt   1/1     Running   0          15s
    nginx-554b9c67f9-qchcx   1/1     Running   0          51s
    [root@k8s-master ~]# ssh k8s-node1
    root@k8s-node1's password:
    Last login: Wed Oct 20 06:15:25 2021 from k8s-master
    [root@k8s-node1 ~]# docker ps | grep nginx
    f15ab7d492b3   nginx                                               "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes             k8s_nginx_nginx-554b9c67f9-qchcx_default_5553e8b9-a20a-48c6-a0b5-bb514fb59fcc_0
    a9a0506fa724   registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 2 minutes ago   Up 2 minutes             k8s_POD_nginx-554b9c67f9-qchcx_default_5553e8b9-a20a-48c6-a0b5-bb514fb59fcc_0
    [root@k8s-node1 ~]# docker stop f15ab7d492b3
    f15ab7d492b3
    [root@k8s-node1 ~]# docker ps | grep nginx
    5b809693112f   nginx                                               "/docker-entrypoint.…"   6 seconds ago   Up 5 seconds             k8s_nginx_nginx-554b9c67f9-qchcx_default_5553e8b9-a20a-48c6-a0b5-bb514fb59fcc_1
    a9a0506fa724   registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 2 minutes ago   Up 2 minutes             k8s_POD_nginx-554b9c67f9-qchcx_default_5553e8b9-a20a-48c6-a0b5-bb514fb59fcc_0
    [root@k8s-node1 ~]# exit
    logout
    Connection to k8s-node1 closed.
    [root@k8s-master ~]# kubectl get pod
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-554b9c67f9-jhkgt   1/1     Running   0          2m40s
    nginx-554b9c67f9-qchcx   1/1     Running   1          3m16s
    [root@k8s-master ~]#
    
    

    6. Set up the K8s dashboard UI (optional)

    1. Installation

    [root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    --2021-10-20 03:20:43--  https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
    Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 4577 (4.5K) [text/plain]
    Saving to: ‘kubernetes-dashboard.yaml’
    
    100%[==============================================================================================================>] 4,577       --.-K/s   in 0.002s
    
    2021-10-20 03:20:43 (2.33 MB/s) - ‘kubernetes-dashboard.yaml’ saved [4577/4577]
    
    [root@k8s-master ~]# vi kubernetes-dashboard.yaml
    
    [root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard created
    [root@k8s-master ~]#
    
    

    The edits made to kubernetes-dashboard.yaml (in the vi step above) are:

    Line 112:        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # replace the image with this
    
    Line 158:   type: NodePort     # add this line
    
    Line 162:       nodePort: 30001   # add this line
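
    If the stock manifest was applied without the Service edits, a roughly equivalent change can be patched in afterwards; this sketch assumes the v1.10.1 Service keeps its default port 443 / targetPort 8443 in the kube-system namespace (the image change on line 112 still has to be made in the yaml itself):

    kubectl -n kube-system patch svc kubernetes-dashboard \
      -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'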
    

    2. Create an admin account (note the token that is generated)

    [root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
    serviceaccount/dashboard-admin created
    [root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
    [root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
    Name:         dashboard-admin-token-xnv89
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: ee729a1d-691a-4fee-899e-968cd8622fb5
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4teG52ODkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZWU3MjlhMWQtNjkxYS00ZmVlLTg5OWUtOTY4Y2Q4NjIyZmI1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Gt9TpRex7_Z7-fsrmMns_ijVzkEoFdUWEOHR-6R5ui0kOGTAjILCnxKqF2FrJPwp0R593cgnOOa_ygvC1ljUE69bw4HX-MCVQzf3f1WPPoLiR_ToXpQfVtUJq7FLir-46WmygStD2VWDb8auQuCIOiKyC1c1-JR6GwYTpQ55-gQoF7Orp4ijQaOjuKrKnmINrNtwAMzOZvyn2CIg7eIZ6ARk9_iv_xhUW4LBWdrkTaH5a4NR1C3yvlf5z2AaI549LJaYAP-DhS28Nb-PpqMfJxj38RJW0nPHwEOVrxe5xUgK6fPj_KJWwjb7JxG9cujHdrKjoXjdAximM90xvlMaVw
    [root@k8s-master ~]#
    
    

    3. Log in

    Use Firefox to open https://192.168.2.130:30001 (i.e. https://<any-node-IP>:30001, the NodePort configured above).



    7. Master high availability and load balancing (required for production)

    The keepalived + haproxy approach is used:
    1. keepalived itself has no load-balancing capability; it only provides host-level high availability (a floating VIP);
    2. here haproxy reverse-proxies the apiserver, forwarding requests round-robin across all master nodes. With keepalived alone in master/backup mode only a single master carries traffic; adding haproxy gives the system higher throughput.

    7.1 Three k8s-master nodes in total

    Three masters in total:
    master: 192.168.108.88 k8s-master03
    backup: 192.168.108.87 k8s-master02
    backup: 192.168.108.86 k8s-master04
    Two worker nodes:
    k8s-node1: 192.168.100.49
    k8s-node2: 192.168.100.50

    # hostnamectl set-hostname k8s-master02
    # hostnamectl set-hostname k8s-master03
    # hostnamectl set-hostname k8s-master04
    # hostnamectl set-hostname k8s-node1
    # hostnamectl set-hostname k8s-node2
    

    7.2 Update the /etc/hosts file on all nodes

    (Screenshot omitted: every master and node hostname/IP pair is added to /etc/hosts on every node.)

    7.3 Passwordless SSH between all masters

    Each master sends its public key to a single master for aggregation (k8s-master01 is used here), which then distributes the combined authorized_keys to every master:

    [root@k8s-master03 ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa && cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys && ssh-copy-id -i ~/.ssh/id_dsa.pub k8s-master
    

    Next, distribute the aggregated keys to the other master hosts (k8s-master02 and k8s-master03):

    [root@k8s-master01 ~]# scp -r /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/
    [root@k8s-master01 ~]# scp -r /root/.ssh/authorized_keys root@192.168.108.88:/root/.ssh/
    

    On each master, fix the permissions of the file:

    [root@k8s-master03 ~]# chmod 600 ~/.ssh/authorized_keys
    [root@k8s-master02 ~]# chmod 600 ~/.ssh/authorized_keys
    

    7.4 Install keepalived and haproxy on all master nodes

    7.4.1 Install keepalived and haproxy

    # yum install -y socat keepalived haproxy ipvsadm conntrack
    

    7.4.2 Configure keepalived (edit /etc/keepalived/keepalived.conf)

    The environment in this example:
    master: 192.168.108.88
    backup: 192.168.108.87
    backup: 192.168.108.86
    VIP: 192.168.108.222

    # vi /etc/keepalived/keepalived.conf
    

    Each master uses almost the same configuration; the per-node changes are:
    1. its election priority
    2. its network interface name

    In this example, the primary master's configuration (/etc/keepalived/keepalived.conf) is:

    global_defs {
     router_id master01
    }
    vrrp_instance VI_1 {
        state MASTER # primary
        interface ens160  # network interface name; use your actual NIC, check it with `ip addr`
        virtual_router_id 50 # must be identical on all three masters
        priority 100 # election weight for the VIP; the other two masters must use different (lower) values
        advert_int 1
        authentication {  # authentication; the password must be the same on every node
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.108.222 # the VIP (virtual IP) to create
        }
        }
        track_script {
            check_apiserver
        }
    
    }
    

    The first backup master's configuration (/etc/keepalived/keepalived.conf):

    global_defs {
        router_id master01
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface ens160
        virtual_router_id 50
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.108.222
        }
        track_script {
            check_apiserver
        }
    }
    

    The second backup master's configuration (/etc/keepalived/keepalived.conf):

    global_defs {
        router_id master01
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 50
        priority 80
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.108.222
        }
     track_script {
            check_apiserver
        }
    }
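
    All three configurations above reference a track_script called check_apiserver, but its definition never appears in the original post, so keepalived will warn about it (or ignore the block). A minimal sketch of such a block, placed above vrrp_instance in /etc/keepalived/keepalived.conf on every master; the haproxy process check used here is an assumption, substitute whatever health check you prefer:

    vrrp_script check_apiserver {
        script "/usr/bin/killall -0 haproxy"    # succeeds only while haproxy is running on this node
        interval 3
        fall 3
        rise 2
    }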
    

    7.4.3 Configure haproxy (edit /etc/haproxy/haproxy.cfg)

    All three nodes use exactly the same configuration:
    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    global
        log /dev/log local0
        log /dev/log local1 notice
        daemon
    
    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
        mode                    http
        log                     global
        option                  httplog
        option                  dontlognull
        option http-server-close
        option forwardfor       except 127.0.0.0/8
        option                  redispatch
        retries                 1
        timeout http-request    10s
        timeout queue           20s
        timeout connect         5s
        timeout client          20s
        timeout server          20s
        timeout http-keep-alive 10s
        timeout check           10s
    
    #---------------------------------------------------------------------
    # apiserver frontend which proxys to the masters
    #---------------------------------------------------------------------
    frontend apiserver
        bind *:16443
        mode tcp
        option tcplog
        default_backend apiserver
    
    #---------------------------------------------------------------------
    # round robin balancing for apiserver
    #---------------------------------------------------------------------
    backend apiserver
        option httpchk GET /healthz
        http-check expect status 200
        mode tcp
        option ssl-hello-chk
        balance     roundrobin
            server master02 192.168.108.87:6443 check
            server master03 192.168.108.88:6443 check
            server master04 192.168.108.86:6443 check
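
    Before (re)starting haproxy it is worth validating the file, and once the service is up (section 7.5) the frontend should be listening on 16443:

    haproxy -c -f /etc/haproxy/haproxy.cfg             # "Configuration file is valid" expected
    ss -lntp | grep 16443                              # run after systemctl start haproxy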
    
    

    7.4.4 Test that keepalived is working

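    A simple test once keepalived is running (section 7.5): check which master holds the VIP, stop keepalived there, and watch the address move to the next-highest priority node (interface name and VIP taken from the configs above):

    ip addr show ens160 | grep 192.168.108.222         # only the current MASTER shows the VIP
    systemctl stop keepalived                          # on the VIP holder; the VIP should move to a BACKUP
    systemctl start keepalived                         # the VIP moves back once the higher-priority node returns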

    7.5 Start the keepalived and haproxy services on all masters

    Note: if kubeadm init has already been run on a master, run kubeadm reset there first, then start keepalived.

    # systemctl enable keepalived && systemctl start keepalived
    # systemctl status keepalived 
    
    # systemctl enable haproxy && systemctl start haproxy 
    # systemctl status haproxy
    
    
    
    (Screenshots omitted: systemctl status keepalived as seen on the primary and both backup masters, and the haproxy status on two of them.)

    7.6 Enable IPVS on all masters (optional; usually not needed)

    # cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
    for kernel_module in \${ipvs_modules}; do
     /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
     if [ $? -eq 0 ]; then
     /sbin/modprobe \${kernel_module}
     fi
    done
    EOF
    
    # chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
    

    7.7 Configure K8s

    Repeat the node and master preparation from the earlier sections (for the masters, everything up to but not including kubeadm init). The details are omitted here; refer back to the sections above.

    7.8 Run the initialization on one master

    7.8.1 On one master, create /etc/kubernetes/kubeadm-config.yaml

    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.0
    controlPlaneEndpoint: "192.168.108.222:6443"
    apiServer:
      certSANs:
      - 192.168.108.222
      - 192.168.108.88
      - 192.168.108.87
      - 192.168.108.86
    networking:
      podSubnet: 10.244.0.0/16
    imageRepository: "registry.aliyuncs.com/google_containers"
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
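
    (Note that controlPlaneEndpoint above points at the VIP on 6443, the apiserver's own port; if API traffic is meant to flow through the haproxy frontend from 7.4.3, the endpoint would have to be 192.168.108.222:16443 instead.) Optionally, the control-plane images described by this config can be pre-pulled before running init:

    kubeadm config images pull --config /etc/kubernetes/kubeadm-config.yaml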
    

    7.8.2 Run init on the same master

    Since this HA setup is built on top of the earlier single-master cluster, kubeadm init has already been run on the original master, so that node must be reset first (this is only needed on that one original master node):

    [root@k8s-master03 ~]# kubeadm reset && \
                         rm -rf  /var/lib/etcd  && \
                         rm -rf /etc/kubernetes/* && \
                         rm -rf ~/.kube/* 
    [root@k8s-master03 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    [root@k8s-master03 ~]# systemctl stop kubelet && \
                           systemctl stop docker && \
                           rm -rf /var/lib/cni/* && \
                           rm -rf /var/lib/kubelet/* && \
                           rm -rf /etc/cni/* && \
                           systemctl start docker && \
                           systemctl start kubelet
                                            
    [root@k8s-master03 ~]# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
    
    
    ..................
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join 192.168.108.222:6443 --token t3l781.rf2tc20q312xi6hb \
        --discovery-token-ca-cert-hash sha256:348f4b1c3f458a70b96674da3d0d5d4fead1f79a6a0f2cf83261a4518b093695 \
        --experimental-control-plane
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.108.222:6443 --token t3l781.rf2tc20q312xi6hb \
        --discovery-token-ca-cert-hash sha256:348f4b1c3f458a70b96674da3d0d5d4fead1f79a6a0f2cf83261a4518b093695
    
    
    # mkdir -p $HOME/.kube &&  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config &&  sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    7.9 Copy the certificates from the MASTER to the other BACKUP masters (important)

    Create the following script on the MASTER and run it:

    [root@k8s-master03 ~]# vi cert-master.sh
    [root@k8s-master03 ~]# cat cert-master.sh
    USER=root # customizable
    CONTROL_PLANE_IPS="192.168.108.87 192.168.100.48"
    for host in ${CONTROL_PLANE_IPS}; do
        scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
        scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
        scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
        scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
        scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
        scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
        scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
     # Quote this line if you are using external etcd
        scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    done
    [root@k8s-master03 ~]# sh cert-master.sh
    ca.crt                                                                                                                                                                    100% 1025     1.8MB/s   00:00
    ca.key                                                                                                                                                                    100% 1679     3.4MB/s   00:00
    sa.key                                                                                                                                                                    100% 1675     2.9MB/s   00:00
    sa.pub                                                                                                                                                                    100%  451   942.8KB/s   00:00
    front-proxy-ca.crt                                                                                                                                                        100% 1038     2.6MB/s   00:00
    front-proxy-ca.key                                                                                                                                                        100% 1679     3.9MB/s   00:00
    ca.crt                                                                                                                                                                    100% 1017     1.9MB/s   00:00
    ca.key                                                                                                                                                                    100% 1675     2.2MB/s   00:00
    ca.crt                                                                                                                                                                    100% 1025    48.8KB/s   00:00
    ca.key                                                                                                                                                                    100% 1679     1.5MB/s   00:00
    sa.key                                                                                                                                                                    100% 1675     1.3MB/s   00:00
    sa.pub                                                                                                                                                                    100%  451   463.5KB/s   00:00
    front-proxy-ca.crt                                                                                                                                                        100% 1038   176.0KB/s   00:00
    front-proxy-ca.key                                                                                                                                                        100% 1679   600.6KB/s   00:00
    ca.crt                                                                                                                                                                    100% 1017   878.1KB/s   00:00
    ca.key                                                                                                                                                                    100% 1675     1.5MB/s   00:00
    [root@k8s-master03 ~]#
    
    

    On every other BACKUP master, create the following script and run it:

    [root@k8s-master01 ~]# cat mv-cert.sh
    USER=root # customizable
    mkdir -p /etc/kubernetes/pki/etcd
    mv /${USER}/ca.crt /etc/kubernetes/pki/
    mv /${USER}/ca.key /etc/kubernetes/pki/
    mv /${USER}/sa.pub /etc/kubernetes/pki/
    mv /${USER}/sa.key /etc/kubernetes/pki/
    mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
    mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
    mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
    # Quote this line if you are using external etcd
    mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
    
    [root@k8s-master01 ~]# sh mv-cert.sh
    
    [root@k8s-master02 ~]# cat mv-cert.sh
    USER=root # customizable
    mkdir -p /etc/kubernetes/pki/etcd
    mv /${USER}/ca.crt /etc/kubernetes/pki/
    mv /${USER}/ca.key /etc/kubernetes/pki/
    mv /${USER}/sa.pub /etc/kubernetes/pki/
    mv /${USER}/sa.key /etc/kubernetes/pki/
    mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
    mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
    mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
    # Quote this line if you are using external etcd
    mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
    
    [root@k8s-master02 ~]# sh mv-cert.sh
    

    7.10 Join the remaining masters to the cluster with kubeadm join

    Every other (backup) master needs to run:

    [root@k8s-master02 ~]# kubeadm join 192.168.108.222:6443 --token 711yyq.kauqnz8sbpqy5mck     --discovery-token-ca-cert-hash sha256:ba3cd3361994078d2d48634e1ec2fca64b95135460c4be1f64d3fd220721b8a9     --control-plane
    ......
    This node has joined the cluster and a new control plane instance was created:
    
    * Certificate signing request was sent to apiserver and approval was received.
    * The Kubelet was informed of the new secure connection details.
    * Control plane (master) label and taint were applied to the new node.
    * The Kubernetes control plane instances scaled up.
    * A new etcd member was added to the local/stacked etcd cluster.
    
    To start administering your cluster from this node, you need to run the following as a regular user:
    
            mkdir -p $HOME/.kube
            sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
            sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Run 'kubectl get nodes' to see this node join the cluster.
    
    
    [root@k8s-master02 ~]# mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    kubeadm join accepts --v=2 to print verbose logs.
    Token information can be listed on the primary master with kubeadm token list.

    If the join prints a warning like [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd", fix it the same way as in section 3: add "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json and restart Docker.

    7.11 Reinstall the flannel network plugin

    Apply it again on every node as described earlier (if this is skipped, the symptoms described below appear):

    # kubectl apply -f kube-flannel.yml  # see the earlier section for downloading and editing kube-flannel.yml
    

    If one master is missing the plugin, kubectl get node will later show that node as NotReady,
    and kubectl describe node on it will report network plugin is not ready: cni config uninitialized.

    Note: I once hit an error on a node that had previously been configured into a different keepalived cluster; the process kept failing with a connection error against an IP that turned out to be the previous cluster's VIP. In that situation, try the following:

    # kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
    # mkdir -p $HOME/.kube
    # sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    # sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # kubectl apply -f kube-flannel.yml 
    
    The commands below prepare the node for the join that follows:
    # kubeadm reset
    # rm -rf  /var/lib/etcd 
    

    etcd's data is stored on the master node under /var/lib/etcd.

    7.12 Join all worker nodes to the cluster with kubeadm join

    Run the following on every node (if a node has previously joined another cluster, run kubeadm reset first):

    [root@k8s-node1 ~]# kubeadm reset
    
    [root@k8s-node1 ~]# kubeadm join 192.168.108.222:6443 --token 711yyq.kauqnz8sbpqy5mck --discovery-token-ca-cert-hash 
    

    The final result:



    7.13 High-availability test

    On any master, create an nginx-deployment.yaml file and create the Deployment from it:

     #cat nginx-deployment.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-ingress-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx-ingress-test
      template:
        metadata:
          labels:
            app: nginx-ingress-test
        spec:
          containers:
            - name: nginx
              image: nginx
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 80
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
          nodePort: 80          # note: 80 is outside the default NodePort range (30000-32767); widen --service-node-port-range on the apiserver, or use e.g. 30080
      selector:
        app: nginx-ingress-test
    
    # kubectl apply -f nginx-deployment.yaml
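
    To verify the rollout and reach the service through the VIP (a sketch; the curl only works if the nodePort above was actually admitted by the apiserver):

    kubectl get pods -l app=nginx-ingress-test -o wide
    kubectl get svc nginx-svc
    curl -I http://192.168.108.222:80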
    

    Below you can see that the VIP sits on master02 (the node we gave the highest keepalived priority).


    After rebooting master02, the VIP fails over to master03.
    Once master02 is back up, the VIP moves back to master02.
