Deploying a Highly Available Kubernetes Cluster with kubeadm

Author: 每天进步一典 | Published 2019-11-22 11:23

    Overview

    Deploy a highly available (HA) Kubernetes cluster with kubeadm.

    Architecture

    OS: CentOS 7
    Kernel: 3.10.0-957.el7.x86_64
    Kubernetes: v1.16.0
    Docker-ce: 18.06
    Recommended hardware: 2 cores, 4 GB RAM

    Keepalived keeps the apiserver IP highly available.
    Haproxy load-balances the apiserver.
    To keep the server count down, haproxy and keepalived run on master-01 and master-02.

    Node        Role    IP          Installed software
    VIP         VIP     10.1.1.16
    master-01   master  10.1.1.10   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
    master-02   master  10.1.1.3    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
    master-03   master  10.1.1.4    kubeadm, kubelet, kubectl, docker
    node-01     node    10.1.1.5    kubeadm, kubelet, kubectl, docker
    Pod subnet:     10.244.0.0/16
    Service subnet: 10.96.0.0/12

    Pre-deployment preparation

    1. Disable SELinux and the firewall
    sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
    setenforce 0
    systemctl disable firewalld
    systemctl stop firewalld
    
    2. Disable swap
    swapoff -a
    The swap entry in /etc/fstab must be commented out as well.
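Commenting the fstab entry can be scripted; a minimal sketch (the `&` in the replacement repeats the whole matched line):

```shell
# comment out every active swap line in /etc/fstab
# so swap stays disabled after a reboot
sed -ri 's/^[^#].*[[:space:]]swap[[:space:]].*/#&/' /etc/fstab
```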
    
    3. Add host entries on every server
    cat >> /etc/hosts <<EOF
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.1.1.10 master-01
    10.1.1.3 master-02
    10.1.1.4 master-03
    10.1.1.5 node-01
    EOF
    
    4. Create and distribute SSH keys
      Create the SSH key pair on master-01.
    [root@master-01 ~]# ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:+QAoEdGjIo9NWonLEzM9LMEQjposaeuB9Oc+ShJUXNk root@master-01
    The key's randomart image is:
    +---[RSA 2048]----+
    |=.==..o          |
    |oo.ooo E         |
    |.+=o...          |
    |BB==   . .       |
    |B#= .   S        |
    |Oo*      o       |
    |.+.o .    .      |
    |. + o.           |
    | . .oo.          |
    +----[SHA256]-----+
    [root@master-01 ~]# ls .ssh/
    id_rsa  id_rsa.pub  known_hosts
    Distribute the key to the other servers:
    
    [root@master-01 ~]# for n in `seq -w 01 03`;do ssh-copy-id -p43999 master-$n;done
    
    [root@master-01 ~]# ssh-copy-id node-01
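A note on `seq -w`: with two arguments it counts by one and zero-pads to equal width, but a third argument is treated as an increment, so `seq -w 01 02 03` prints only 01 and 03 and would silently skip master-02:

```shell
# two args: FIRST LAST, step 1; -w zero-pads to equal width
seq -w 01 03        # prints 01 02 03
# three args: FIRST INCREMENT LAST -- skips 02
seq -w 01 02 03     # prints 01 03
```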
    
    5. Deploy keepalived and haproxy

    Install keepalived and haproxy on master-01 and master-02:

    yum install -y keepalived haproxy
    

    Edit the configuration.
    keepalived configuration:
    master-01 uses priority 100 and master-02 uses priority 90; everything else is identical.

    
    [root@master-01 k8s]# cat  <<EOF  >  /etc/keepalived/keepalived.conf 
    
    ! Configuration File for keepalived
    
    global_defs {
       notification_email {
            mrli@163.com
       }
       notification_email_from Alexandre.Cassen@firewall.loc
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id LVS_1
    }
    
    vrrp_instance VI_1 {
        state MASTER          
        interface ens33                    # change to your actual NIC
        lvs_sync_daemon_interface ens33
        virtual_router_id 88
        advert_int 1
        priority 100         
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
          10.1.1.16/24
        }
    }
    EOF
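Since only `priority` differs between the two nodes, the file can be rendered from a single template instead of being maintained twice; a sketch (the helper name is illustrative, not from the original post):

```shell
# render the per-node vrrp_instance; only priority differs
# between master-01 (100) and master-02 (90)
render_vrrp() {
  priority=$1
  cat <<EOF
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 88
    advert_int 1
    priority $priority
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      10.1.1.16/24
    }
}
EOF
}
render_vrrp 100   # master-01
render_vrrp 90    # master-02
```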
    

    haproxy configuration:
    The configuration is identical on master-01 and master-02. Because haproxy runs on the same servers as the kube-apiserver, both cannot bind port 6443, so haproxy listens on port 8443 of the VIP 10.1.1.16 instead. Note that on the node that does not currently hold the VIP, binding 10.1.1.16 requires net.ipv4.ip_nonlocal_bind=1 (or bind on *:8443 instead).

    [root@master-01 k8s]# cat  <<EOF > /etc/haproxy/haproxy.cfg
    
    global
            chroot  /var/lib/haproxy
            daemon
            group haproxy
            user haproxy
            log 127.0.0.1:514 local0 warning
            pidfile /var/lib/haproxy.pid
            maxconn 20000
            spread-checks 3
            nbproc 8
    
    defaults
            log     global
            mode    tcp
            retries 3
            option redispatch
    
    listen https-apiserver
            bind 10.1.1.16:8443
            mode tcp
            balance roundrobin
            timeout server 900s
            timeout connect 15s
            server apiserver01 10.1.1.10:6443 check port 6443 inter 5000 fall 5
            server apiserver02 10.1.1.3:6443 check port 6443 inter 5000 fall 5
            server apiserver03 10.1.1.4:6443 check port 6443 inter 5000 fall 5
    EOF
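To keep the backend in step with the node table when masters are added or removed, the `server` lines can be generated from a name=ip list; a sketch (the list variable is illustrative):

```shell
# emit one haproxy "server" line per control-plane node
masters="apiserver01=10.1.1.10 apiserver02=10.1.1.3 apiserver03=10.1.1.4"
for m in $masters; do
  printf '        server %s %s:6443 check port 6443 inter 5000 fall 5\n' \
    "${m%%=*}" "${m#*=}"
done
```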
    

    Start the services on master-01 and master-02:

    systemctl enable keepalived && systemctl start keepalived 
    systemctl enable haproxy && systemctl start haproxy 
    

    Deploy Kubernetes

    Configure the yum repository:

    cat <<EOF  > /etc/yum.repos.d/kubernetes.repo 
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    1. Install the required packages

    yum install -y kubelet-1.16.0 kubeadm-1.16.0 kubectl-1.16.0 ipvsadm ipset docker-ce-18.06.1.ce
    

    2. Enable and start the services

    systemctl enable docker && systemctl start docker
    systemctl enable kubelet
    

    3. Create the kubeadm init configuration file

    
    [root@master-01 k8s]# cat  <<EOF > kubeadm-config.yaml 
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.16.0
    controlPlaneEndpoint: "10.1.1.16:8443"
    imageRepository: "registry.aliyuncs.com/google_containers"
    networking:
      podSubnet: "10.244.0.0/16"
    apiServer:
      certSANs:
      - "k8s.mytest.com"
    EOF
    

    Note: podSubnet must match the flannel network applied later, otherwise the flannel pods will fail to come up.

    4. Pre-pull the images

    [root@master-01 k8s]# kubeadm config images pull --config kubeadm-config.yaml
    

    5. Initialize master-01

    [root@master-01 k8s]# kubeadm init --config kubeadm-config.yaml
    

    6. Set up kubeconfig as instructed by the init output

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    7. Join the other masters (command taken from the previous step's output)
    First copy the certificates from master-01 to the other master nodes.

    [root@master-01 k8s]# cat base/cp-cert.sh 
    #!/bin/bash
    ## Copy the certificates to the other master nodes
    USER=root
    CONTROL_PLANE_IPS="master-02 master-03"
    for host in $CONTROL_PLANE_IPS; do
        ssh -p43999 "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
        scp -P43999 /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
        scp -P43999 /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
        scp -P43999 /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
        scp -P43999 /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    done


    Once the certificates are in place, join each remaining master:

    kubeadm join 10.1.1.16:8443 --token sigq9b.m6lwwk40n1piqfbh \
        --discovery-token-ca-cert-hash sha256:9c3602a14cf4717202acf6ae004629188551a35603007c8bd1fc3dfc7f19061b \
        --control-plane --certificate-key b0be6c57cb55983dabf8df1a94d4d32eeb2c7efd1966ac5b37ae515f8a0a99c9
    

    8. Join the worker nodes (also taken from the init output)

    kubeadm join 10.1.1.16:8443 --token sigq9b.m6lwwk40n1piqfbh \
        --discovery-token-ca-cert-hash sha256:9c3602a14cf4717202acf6ae004629188551a35603007c8bd1fc3dfc7f19061b
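If the join command from the init output is lost, the token can be regenerated with `kubeadm token create --print-join-command`, and the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA certificate at any time. A sketch of the hash computation; the function name is illustrative:

```shell
# derive the sha256 public-key hash that kubeadm expects
# from a CA certificate file
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print "sha256:" $NF}'
}
# on a master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```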
    

    9. Create the flannel network
    Apply the yaml either straight from the project repository, or from a downloaded copy:

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    
    [root@master-01 k8s]# kubectl apply -f kube-flannel.yml
    

    10. Check component status

    [root@master-01 k8s]# kubectl get pod -n kube-system
    NAME                                READY   STATUS    RESTARTS   AGE
    coredns-58cc8c89f4-24s7h            1/1     Running   0          6h28m
    coredns-58cc8c89f4-d47h4            1/1     Running   0          6h28m
    etcd-master-01                      1/1     Running   0          6h27m
    etcd-master-02                      1/1     Running   2          6h19m
    etcd-master-03                      1/1     Running   0          6h18m
    kube-apiserver-master-01            1/1     Running   1          6h27m
    kube-apiserver-master-02            1/1     Running   4          6h19m
    kube-apiserver-master-03            1/1     Running   1          6h18m
    kube-controller-manager-master-01   1/1     Running   3          6h27m
    kube-controller-manager-master-02   1/1     Running   2          6h19m
    kube-controller-manager-master-03   1/1     Running   3          6h17m
    kube-flannel-ds-amd64-6pddd         1/1     Running   0          6h22m
    kube-flannel-ds-amd64-72dp8         1/1     Running   0          6h18m
    kube-flannel-ds-amd64-bn5th         1/1     Running   0          6h19m
    kube-flannel-ds-amd64-lt7jr         1/1     Running   0          5h58m
    kube-proxy-75vdx                    1/1     Running   0          6h19m
    kube-proxy-8lptw                    1/1     Running   0          6h18m
    kube-proxy-bz9lb                    1/1     Running   0          5h58m
    kube-proxy-ffgnk                    1/1     Running   0          6h28m
    kube-scheduler-master-01            1/1     Running   1          6h27m
    kube-scheduler-master-02            1/1     Running   3          6h19m
    kube-scheduler-master-03            1/1     Running   3          6h17m
    
    

    All pods in the Running state means the components are healthy.
    11. Check cluster status

    [root@master-01 k8s]# kubectl get node
    NAME        STATUS   ROLES    AGE     VERSION
    master-01   Ready    master   6h30m   v1.16.0
    master-02   Ready    master   6h20m   v1.16.0
    master-03   Ready    master   6h19m   v1.16.2
    node-01     Ready    node     6h19m   v1.16.2
    

    12. Verify high availability

    Create a simple deployment on master-01:

    [root@master-01 k8s]# cat test.yaml 
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deploy
      namespace: default
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
          release: canary
      template:
        metadata:
          labels:
            app: myapp
            release: canary
        spec:
          containers:
          - name: myapp
            image: ikubernetes/myapp:v2
            ports:
            - name: http
              containerPort: 80
    

    Check the status:

    [root@master-01 k8s]# kubectl get pod
    NAME                            READY   STATUS    RESTARTS   AGE
    myapp-deploy-798dc9b584-fdmlp   1/1     Running   0          102m
    myapp-deploy-798dc9b584-fq546   1/1     Running   0          102m
    

    The VIP currently sits on master-01.


    1.png

    Stop keepalived on master-01 to simulate a failure:

    2.png

    The VIP fails over to master-02:

    3.png

    On master-02, edit test.yaml and change replicas to 3: the number of running pod replicas grows from 2 to 3 and the pods run normally, so the control plane is still serving requests through the VIP.

    4.png

    This completes the HA setup. Questions and feedback are welcome.

    Source: https://www.haomeiwen.com/subject/imvfwctx.html