Deploying a Highly Available Kubernetes v1.22.3 Cluster


Author: 归海听雪 | Published 2021-11-23 15:04

    This article uses a stacked deployment in which etcd, the control-plane components, and the HA layer (keepalived + haproxy) run on the same machines; the etcd cluster could of course also be deployed separately.


    (Figure: Kubernetes high-availability topology)

    High availability here covers the master-node (control-plane) components and the etcd store. The server IPs and roles used throughout are listed in the table below.

    I. Environment preparation

    CentOS Linux release 7.7.1908 (Core) 3.10.0-1062.el7.x86_64

    kubeadm-1.22.3-0.x86_64
    kubelet-1.22.3-0.x86_64
    kubectl-1.22.3-0.x86_64
    kubernetes-cni-0.8.7-0.x86_64

    | Hostname     | IP             | VIP            |
    | k8s-master01 | 192.168.30.106 | 192.168.30.115 |
    | k8s-master02 | 192.168.30.107 |                |
    | k8s-master03 | 192.168.30.108 |                |
    | k8s-node01   | 192.168.30.109 |                |
    | k8s-node02   | 192.168.30.110 |                |

    II. Install Docker

    1. Add the Docker yum repo on every node (the package install itself is sketched right after)

    cd /etc/yum.repos.d/
    wget https://download.docker.com/linux/centos/docker-ce.repo
    
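    The original steps jump from adding the repo straight to editing daemon.json; if Docker itself has not been installed yet, something along these lines (package names as provided by the docker-ce repo added above) installs and enables it:

    # assumption: installing docker-ce from the repo configured in the previous step
    yum install -y docker-ce docker-ce-cli containerd.io
    systemctl enable docker && systemctl start docker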

    2. On every node, configure the Docker daemon and switch the cgroup driver to systemd, which is what kubelet expects.
    Create the /etc/docker/daemon.json file yourself if it does not exist.

    cat >/etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ]
    }
    EOF
    
    

    3. Restart the Docker service

    systemctl restart docker
    

    4. Configure the hosts file on every node. In a real production environment you would plan an internal DNS so that every host name resolves, and the hosts file would not be needed.

    cat > /etc/hosts <<EOF
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.30.106 k8s-master01
    192.168.30.107 k8s-master02
    192.168.30.108 k8s-master03
    192.168.30.109 k8s-node01
    192.168.30.110 k8s-node02
    EOF
    
    

    III. Prepare the system environment

    1. Disable the firewall on every node and install the required base packages

    yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git lrzsz
    systemctl stop firewalld && systemctl disable firewalld
    yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
    

    2. Disable SELinux on every node

    setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    

    3. Disable swap on every node

    swapoff -a && sed -i  '/ swap / s/^\(.*\)$/#\1/g'  /etc/fstab
    

    4. Synchronize the time on every node. If the machines have direct internet access, the commands below are all that is needed.

    If the machines cannot reach the internet, set up one machine as a time server and have every other server sync from it (a sketch follows the commands).

    yum -y install chrony
    systemctl start chronyd.service
    systemctl enable chronyd.service
    timedatectl set-timezone Asia/Shanghai
    chronyc -a makestep
    
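    For the offline case, a minimal sketch of the internal time server mentioned above (using k8s-master01 / 192.168.30.106 as the server purely as an example):

    # On the machine chosen as the time server, add to /etc/chrony.conf:
    #   allow 192.168.30.0/24     # let the LAN sync from this host
    #   local stratum 10          # keep serving time even without upstream sources
    # On every other node, replace the default "server ..." lines in /etc/chrony.conf with:
    #   server 192.168.30.106 iburst
    # Then on all machines:
    systemctl restart chronyd
    chronyc sources -v            # confirm the configured source is reachable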

    5. Tune kernel parameters on every node

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.ipv4.ip_nonlocal_bind = 1
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    net.ipv4.tcp_tw_recycle=0
    # do not use swap unless the system is about to OOM
    vm.swappiness=0
    # do not check whether enough physical memory is available before allocating
    vm.overcommit_memory=1
    # do not panic on OOM; let the OOM killer handle it
    vm.panic_on_oom=0
    fs.inotify.max_user_instances=8192
    fs.inotify.max_user_watches=1048576
    fs.file-max=52706963
    fs.nr_open=52706963
    net.ipv6.conf.all.disable_ipv6=1
    net.netfilter.nf_conntrack_max=2310720
    
    EOF
    
    sysctl -p /etc/sysctl.d/k8s.conf
    
    

    6. Configure the Kubernetes yum repo on every node

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    

    7. Load the ipvs kernel modules on every node

    cat >/etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/sh
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    #modprobe -- nf_conntrack_ipv4    # kernels 4.x and later no longer have nf_conntrack_ipv4; use nf_conntrack instead
    modprobe -- nf_conntrack
    EOF
    
    
     chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
    

    8. Configure rsyslogd and systemd-journald

    mkdir /var/log/journal
    mkdir -p  /etc/systemd/journald.conf.d/
    
    cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
    [Journal]
    # persist logs to disk
    Storage=persistent
    # compress archived logs
    Compress=yes
    SyncIntervalSec=5m
    RateLimitInterval=30s
    RateLimitBurst=1000
    # cap total disk usage at 10G
    SystemMaxUse=10G
    # cap each journal file at 200M
    SystemMaxFileSize=200M
    # keep logs for 2 weeks
    MaxRetentionSec=2week
    # do not forward logs to syslog
    ForwardToSyslog=no
    EOF
    
    systemctl restart systemd-journald
    

    9. Upgrade the system kernel

    The stock 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker and Kubernetes unstable.

    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
    # After installation, check that the kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
    

    Upgrade the kernel:

    yum --enablerepo=elrepo-kernel install -y kernel-lt
    

    Set the machine to boot from the new kernel. Note that the exact version depends on what was current when you installed it; put that string inside the quotes (see the listing command below for how to find it).

    grub2-set-default 'CentOS Linux (5.4.159-1.el7.elrepo.x86_64) 7 (Core)'
    
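    To find the exact menuentry string to pass to grub2-set-default, you can list the entries GRUB currently knows about:

    awk -F\' '/^menuentry /{print $2}' /boot/grub2/grub.cfg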

    10. Reboot the server (reboot)

    IV. Install keepalived and haproxy on all master nodes

    1. Install the packages on each master node

    yum -y install haproxy keepalived
    

    2. Edit the keepalived configuration file on k8s-master01

    The first master is the keepalived MASTER; the other two are BACKUP.

    # vim /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    
    global_defs {
       router_id LVS_DEVEL
    
    # add the two lines below
       script_user root
       enable_script_security
    }
    
    vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"         # path to the health-check script
        interval 3
        weight -2
        fall 10
        rise 2
    }
    
    vrrp_instance VI_1 {
        state MASTER            # MASTER on this node
        interface ens192         # local NIC name
        virtual_router_id 51
        priority 100             # priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.30.115      # virtual IP
        }
        track_script {
            check_haproxy       # run the check script defined above
        }
    }
    
    

    3. Edit the configuration file on k8s-master02

    Copy /etc/keepalived/keepalived.conf from k8s-master01 to k8s-master02 and k8s-master03 with scp; only two lines need to change:

    state MASTER becomes state BACKUP

    priority 100 becomes priority 99 on k8s-master02 and priority 98 on k8s-master03

    # vim /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    
    global_defs {
       router_id LVS_DEVEL
    
    # add the two lines below
       script_user root
       enable_script_security
    }
    
    vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"         # path to the health-check script
        interval 3
        weight -2
        fall 10
        rise 2
    }
    
    vrrp_instance VI_1 {
        state BACKUP            # BACKUP on this node
        interface ens192         # local NIC name
        virtual_router_id 51
        priority 99             # priority 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.30.115      # virtual IP
        }
        track_script {
            check_haproxy       # run the check script defined above
        }
    }
    
    

    4. Edit the configuration file on k8s-master03

    # vim /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    
    global_defs {
       router_id LVS_DEVEL
    
    # add the two lines below
       script_user root
       enable_script_security
    }
    
    vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"         # path to the health-check script
        interval 3
        weight -2
        fall 10
        rise 2
    }
    
    vrrp_instance VI_1 {
        state BACKUP            # BACKUP on this node
        interface ens192         # local NIC name
        virtual_router_id 51
        priority 98             # priority 98
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.30.115      # virtual IP
        }
        track_script {
            check_haproxy       # run the check script defined above
        }
    }
    
    

    5. Configure /etc/haproxy/haproxy.cfg; the file is identical on all three masters (a quick syntax check is shown after the listing).

    vim /etc/haproxy/haproxy.cfg
    #---------------------------------------------------------------------
    # Example configuration for a possible web application.  See the
    # full configuration options online.
    #
    #   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
    #
    #---------------------------------------------------------------------
    
    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    global
        # to have these messages end up in /var/log/haproxy.log you will
        # need to:
        #
        # 1) configure syslog to accept network log events.  This is done
        #    by adding the '-r' option to the SYSLOGD_OPTIONS in
        #    /etc/sysconfig/syslog
        #
        # 2) configure local2 events to go to the /var/log/haproxy.log
        #   file. A line like the following can be added to
        #   /etc/sysconfig/syslog
        #
        #    local2.*                       /var/log/haproxy.log
        #
        log         127.0.0.1 local2
    
        chroot      /var/lib/haproxy
        pidfile     /var/run/haproxy.pid
        maxconn     4000
        user        haproxy
        group       haproxy
        daemon
    
        # turn on stats unix socket
        stats socket /var/lib/haproxy/stats
    
    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
        mode                    http
        log                     global
        option                  httplog
        option                  dontlognull
        option http-server-close
        option forwardfor       except 127.0.0.0/8
        option                  redispatch
        retries                 3
        timeout http-request    10s
        timeout queue           1m
        timeout connect         10s
        timeout client          1m
        timeout server          1m
        timeout http-keep-alive 10s
        timeout check           10s
        maxconn                 3000
    
    #---------------------------------------------------------------------
    # main frontend which proxys to the backends
    #---------------------------------------------------------------------
    frontend  kubernetes-apiserver
        mode                        tcp
        bind                        *:16443
        option                      tcplog
        default_backend             kubernetes-apiserver
    
    #---------------------------------------------------------------------
    # static backend for serving up images, stylesheets and such
    #---------------------------------------------------------------------
    listen stats
        bind            *:1080
        stats auth      admin:111111
        stats refresh   5s
        stats realm     HAProxy\ Statistics
        stats uri       /admin?stats
    
    #---------------------------------------------------------------------
    # round robin balancing between the various backends
    #---------------------------------------------------------------------
    backend kubernetes-apiserver
        mode        tcp
        balance     roundrobin
        server  k8s-master01 192.168.30.106:6443 check
        server  k8s-master02 192.168.30.107:6443 check
        server  k8s-master03 192.168.30.108:6443 check
    
    
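    Before starting the service it is worth letting haproxy validate the file; a quick syntax check:

    haproxy -c -f /etc/haproxy/haproxy.cfg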

    6. Create the haproxy health-check script used by keepalived; it is identical on all three masters

    vim /etc/keepalived/check_haproxy.sh
    #!/bin/sh
    # HAPROXY down
    pid=`ps -C haproxy --no-header | wc -l`
    if [ $pid -eq 0 ]
    then
        systemctl start haproxy
        if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]
        then
            killall -9 haproxy
    
            # decide for yourself how to handle this event, e.g. send an e-mail or SMS alert
            echo "HAPROXY down" >>/tmp/haproxy_check.log
            sleep 10
        fi
    
    fi
    
    

    7. Make the check script executable

    chmod 755 /etc/keepalived/check_haproxy.sh
    

    8. Start and enable haproxy and keepalived

    systemctl start keepalived && systemctl enable keepalived
    systemctl start haproxy && systemctl enable haproxy
    

    9. Check the VIP

    Since k8s-master01 is configured as the keepalived MASTER, the VIP is only visible on that machine. (A quick failover test is sketched after the output.)

     ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:0c:29:aa:ad:f2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.30.106/24 brd 192.168.30.255 scope global ens192
           valid_lft forever preferred_lft forever
        inet 192.168.30.115/32 scope global ens192  #VIP
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:feaa:adf2/64 scope link
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:66:67:ee:0b brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    
    
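    To convince yourself that failover works, a rough test under the keepalived configuration above is to stop keepalived on k8s-master01 and watch the VIP show up on k8s-master02:

    # on k8s-master01: simulate a failure of the MASTER
    systemctl stop keepalived
    # a few seconds later, on k8s-master02 the VIP should appear:
    ip addr show ens192 | grep 192.168.30.115
    # bring k8s-master01 back afterwards (it reclaims the VIP because its priority is higher)
    systemctl start keepalived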

    V. Deploy the cluster

    The kubeadm, kubectl, and kubelet packages must match the Kubernetes version. Enable kubelet at boot but do not start it manually, or it will keep failing; once the cluster is initialized, the kubelet service is started automatically!

    1. Install the Kubernetes packages; they are needed on every node

    # install the latest versions from the repo (a version-pinned variant is shown after this block)
    yum install -y kubeadm kubectl kubelet
     systemctl enable kubelet && systemctl daemon-reload
    
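    The command above installs whatever is newest in the repo; to be sure you get exactly the 1.22.3 packages listed in the environment section, pinning the versions would look roughly like this:

    yum install -y kubeadm-1.22.3-0 kubelet-1.22.3-0 kubectl-1.22.3-0
    systemctl enable kubelet && systemctl daemon-reload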

    2. On k8s-master01, generate the default kubeadm configuration file

    kubeadm config print init-defaults > kubeadm-config.yaml
    

    3. Edit the configuration file

    vim  kubeadm-config.yaml
    --------------------------------------------------------------------
    apiVersion: kubeadm.k8s.io/v1beta2
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.30.106     # IP of this machine
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: k8s-master01        # host name of this machine
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: "192.168.30.115:16443"    # VIP and haproxy port
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
     # change the image repository to match your environment
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: 1.22.3     # Kubernetes version
    networking:
      dnsDomain: cluster.local
      podSubnet: "10.244.0.0/16"
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
    
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    #featureGates:   # the featureGates field (SupportIPVSProxyMode) is no longer supported from v1.20 on; keep these two lines commented out
    #  SupportIPVSProxyMode: true
    mode: ipvs
    
    

    4. Pull the required images

    kubeadm config images pull --config kubeadm-config.yaml
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.22.3
    [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.5
    [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.0-0
    [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.4
    
    

    5. Initialize the cluster

    # kubeadm init --config kubeadm-config.yaml
    [init] Using Kubernetes version: v1.22.0
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.30.106 192.168.30.115]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.30.106 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.30.106 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 15.041373 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: abcdef.0123456789abcdef
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
            --discovery-token-ca-cert-hash sha256:dc58e29a96a7bd4c9c9682f93089a0fef39ee18975b41e9d54512d1989e5a07d \
            --control-plane
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
            --discovery-token-ca-cert-hash sha256:dc58e29a96a7bd4c9c9682f93089a0fef39ee18975b41e9d54512d1989e5a07d
    
    

    6. Create the etcd pki directory on the other two master nodes

    mkdir -p /etc/kubernetes/pki/etcd
    

    7. Copy the certificates from the primary master to the other two master nodes (a looped version is sketched after the commands)

    
    scp /etc/kubernetes/pki/ca.* root@192.168.30.107:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.* root@192.168.30.108:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* root@192.168.30.107:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* root@192.168.30.108:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.30.107:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.30.108:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* 192.168.30.107:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/etcd/ca.* 192.168.30.108:/etc/kubernetes/pki/etcd/
    
    # this file is needed on both the master and the node machines
    scp  /etc/kubernetes/admin.conf 192.168.30.107:/etc/kubernetes/
    scp  /etc/kubernetes/admin.conf 192.168.30.108:/etc/kubernetes/
    scp  /etc/kubernetes/admin.conf 192.168.30.109:/etc/kubernetes/
    scp  /etc/kubernetes/admin.conf 192.168.30.110:/etc/kubernetes/
    
    
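    The same copy can also be written as a small loop; a sketch using the same file list and IPs as above:

    # control-plane certificates to the other two masters
    for host in 192.168.30.107 192.168.30.108; do
      scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* root@${host}:/etc/kubernetes/pki/
      scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd/
    done
    # admin.conf to every other machine (masters and nodes)
    for host in 192.168.30.107 192.168.30.108 192.168.30.109 192.168.30.110; do
      scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes/
    done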

    8. Join the other two master nodes to the cluster

    kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
            --discovery-token-ca-cert-hash sha256:47f35fcf0584b3ca586d041057753f2be84dd389004f42a3f92b6c4eb5a42eb1 \
            --control-plane
    

    9. Join the two worker nodes to the cluster (if the token has expired, see the note after the command)

    kubeadm join 192.168.30.115:16443 --token abcdef.0123456789abcdef \
            --discovery-token-ca-cert-hash sha256:47f35fcf0584b3ca586d041057753f2be84dd389004f42a3f92b6c4eb5a42eb1
    
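    The bootstrap token in these join commands is only valid for 24 hours. If it has expired by the time a node joins, a fresh worker join command can be printed on k8s-master01 with:

    kubeadm token create --print-join-command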

    10. Run the following on all master nodes; on the worker nodes it is optional

    As root:

    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
    source ~/.bash_profile
    

    As a non-root user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    11. Check the status of all nodes from k8s-master01

    # kubectl get nodes
    NAME           STATUS   ROLES                  AGE    VERSION
    k8s-master01   Ready    control-plane,master   171m   v1.22.3
    k8s-master02   Ready    control-plane,master   167m   v1.22.3
    k8s-master03   Ready    control-plane,master   167m   v1.22.3
    k8s-node01     Ready    <none>                 166m   v1.22.3
    k8s-node02     Ready    <none>                 166m   v1.22.3
    
    

    12. Install the network plugin (flannel); run this on k8s-master01

    Without a proxy the manifest may fail to download, so the full file is pasted below.

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
      - configMap
      - secret
      - emptyDir
      - hostPath
      allowedHostPaths:
      - pathPrefix: "/etc/cni/net.d"
      - pathPrefix: "/etc/kube-flannel"
      - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    rules:
    - apiGroups: ['extensions']
      resources: ['podsecuritypolicies']
      verbs: ['use']
      resourceNames: ['psp.flannel.unprivileged']
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni-plugin
            image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
            command:
            - cp
            args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
          - name: install-cni
            image: quay.io/coreos/flannel:v0.15.1
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.15.1
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
          - name: run
            hostPath:
              path: /run/flannel
          - name: cni-plugin
            hostPath:
              path: /opt/cni/bin
          - name: cni
            hostPath:
              path: /etc/cni/net.d
          - name: flannel-cfg
            configMap:
              name: kube-flannel-cfg
    
    

    13. Check the pod status again

    # kubectl get pods --all-namespaces
    NAMESPACE     NAME                                   READY   STATUS    RESTARTS       AGE
    kube-system   coredns-7f6cbbb7b8-shbl8               1/1     Running   0              177m
    kube-system   coredns-7f6cbbb7b8-w6cnn               1/1     Running   0              177m
    kube-system   etcd-k8s-master01                      1/1     Running   2              177m
    kube-system   etcd-k8s-master02                      1/1     Running   1              174m
    kube-system   etcd-k8s-master03                      1/1     Running   0              173m
    kube-system   kube-apiserver-k8s-master01            1/1     Running   2              177m
    kube-system   kube-apiserver-k8s-master02            1/1     Running   1              174m
    kube-system   kube-apiserver-k8s-master03            1/1     Running   2 (173m ago)   174m
    kube-system   kube-controller-manager-k8s-master01   1/1     Running   4 (173m ago)   177m
    kube-system   kube-controller-manager-k8s-master02   1/1     Running   1              174m
    kube-system   kube-controller-manager-k8s-master03   1/1     Running   1              174m
    kube-system   kube-flannel-ds-49bv5                  1/1     Running   0              172m
    kube-system   kube-flannel-ds-68wq2                  1/1     Running   0              172m
    kube-system   kube-flannel-ds-bc686                  1/1     Running   0              172m
    kube-system   kube-flannel-ds-cgwrl                  1/1     Running   0              172m
    kube-system   kube-flannel-ds-sxn2h                  1/1     Running   0              172m
    kube-system   kube-proxy-7dpmx                       1/1     Running   0              173m
    kube-system   kube-proxy-7n7pl                       1/1     Running   0              173m
    kube-system   kube-proxy-d9z59                       1/1     Running   0              177m
    kube-system   kube-proxy-j8fgg                       1/1     Running   0              174m
    kube-system   kube-proxy-k7qsm                       1/1     Running   0              174m
    kube-system   kube-scheduler-k8s-master01            1/1     Running   4 (173m ago)   177m
    kube-system   kube-scheduler-k8s-master02            1/1     Running   1              174m
    kube-system   kube-scheduler-k8s-master03            1/1     Running   1              174m
    
    

    Every pod must be in the Running state. If you see statuses like the ones below, check the system log (tail -f /var/log/messages) and track down the cause; typical problems are a broken ipvs setup, missing kernel parameters, and so on. Two kubectl commands for digging into a failing pod are sketched after the listing.

    kube-system   kube-flannel-ds-28jks                  0/1     Error               1          28s
    kube-system   kube-flannel-ds-4w9lz                  0/1     Error               1          28s
    kube-system   kube-flannel-ds-8rflb                  0/1     Error               1          28s
    kube-system   kube-flannel-ds-wfcgq                  0/1     Error               1          28s
    kube-system   kube-flannel-ds-zgn46                  0/1     Error               1          28s
    kube-system   kube-proxy-b8lxm                       0/1     CrashLoopBackOff    4          2m15s
    kube-system   kube-proxy-bmf9q                       0/1     CrashLoopBackOff    7          14m
    kube-system   kube-proxy-bng8p                       0/1     CrashLoopBackOff    6          7m31s
    kube-system   kube-proxy-dpkh4                       0/1     CrashLoopBackOff    6          10m
    kube-system   kube-proxy-xl45p                       0/1     CrashLoopBackOff    4          2m30s
    
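    Besides the system log, the pod-level output usually points at the cause directly; using one of the pod names from the listing above as an example:

    kubectl -n kube-system describe pod kube-proxy-b8lxm        # events for the crashing pod
    kubectl -n kube-system logs kube-proxy-b8lxm --previous     # logs of the last failed container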

    14. Download and install the etcdctl client tool; installing it on k8s-master01 is enough

    wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz
    
    tar -zxf etcd-v3.4.14-linux-amd64.tar.gz
    mv etcd-v3.4.14-linux-amd64/etcdctl /usr/local/bin
    chmod +x /usr/local/bin/etcdctl
    

    15. Verify that etcdctl works; if it prints the help output below, everything is fine

    # etcdctl
    NAME:
            etcdctl - A simple command line client for etcd3.
    
    USAGE:
            etcdctl [flags]
    
    VERSION:
            3.4.14
    
    API VERSION:
            3.4
    
    COMMANDS:
            alarm disarm            Disarms all alarms
            alarm list              Lists all alarms
            auth disable            Disables authentication
            auth enable             Enables authentication
            check datascale         Check the memory usage of holding data for different workloads on a given server endpoint.
            check perf              Check the performance of the etcd cluster
            compaction              Compacts the event history in etcd
            defrag                  Defragments the storage of the etcd members with given endpoints
            del                     Removes the specified key or range of keys [key, range_end)
            elect                   Observes and participates in leader election
            endpoint hashkv         Prints the KV history hash for each endpoint in --endpoints
            endpoint health         Checks the healthiness of endpoints specified in `--endpoints` flag
            endpoint status         Prints out the status of endpoints specified in `--endpoints` flag
            get                     Gets the key or a range of keys
            help                    Help about any command
            lease grant             Creates leases
            lease keep-alive        Keeps leases alive (renew)
    
    

    16. Inspect the etcd cluster (an environment-variable shortcut for these long commands is sketched at the end of this step)

    Check the health of the etcd cluster:

    ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379 endpoint health
    
    +---------------------+--------+-------------+-------+
    |      ENDPOINT       | HEALTH |    TOOK     | ERROR |
    +---------------------+--------+-------------+-------+
    | 192.168.30.106:2379 |   true | 35.474753ms |       |
    | 192.168.30.107:2379 |   true | 39.358382ms |       |
    | 192.168.30.108:2379 |   true | 47.269479ms |       |
    +---------------------+--------+-------------+-------+
    
    

    List the etcd cluster members:

    ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379 member list
    
    
    +------------------+---------+--------------+-----------------------------+-----------------------------+------------+
    |        ID        | STATUS  |     NAME     |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |
    +------------------+---------+--------------+-----------------------------+-----------------------------+------------+
    | 33aecdbe33accd41 | started | k8s-master01 | https://192.168.30.106:2380 | https://192.168.30.106:2379 |      false |
    | 6bdbdbf3772b7e2c | started | k8s-master02 | https://192.168.30.107:2380 | https://192.168.30.107:2379 |      false |
    | ce323eca1d06e307 | started | k8s-master03 | https://192.168.30.108:2380 | https://192.168.30.108:2379 |      false |
    +------------------+---------+--------------+-----------------------------+-----------------------------+------------+
    
    

    Check which etcd member is the leader:

    ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379 endpoint status
    
    
    +---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    |      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
    +---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    | 192.168.30.106:2379 | 33aecdbe33accd41 |   3.5.0 |  3.3 MB |      true |      false |         3 |      31405 |              31405 |        |
    | 192.168.30.107:2379 | 6bdbdbf3772b7e2c |   3.5.0 |  3.2 MB |     false |      false |         3 |      31405 |              31405 |        |
    | 192.168.30.108:2379 | ce323eca1d06e307 |   3.5.0 |  3.3 MB |     false |      false |         3 |      31405 |              31405 |        |
    +---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    
    
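    Typing the certificate flags for every query gets tedious; etcdctl 3.4 also reads them from ETCDCTL_* environment variables, so a shortcut along these lines keeps the commands short:

    export ETCDCTL_API=3
    export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
    export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt
    export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key
    export ETCDCTL_ENDPOINTS=192.168.30.106:2379,192.168.30.107:2379,192.168.30.108:2379
    etcdctl endpoint status --write-out=table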

    VI. Deploy the Kubernetes dashboard

    Note: the cluster here runs API version 1.22.3, so the dashboard must be v2.3 or newer; v2.0 no longer works.

    1. Download the recommended.yaml file; without a proxy it may fail to download

    wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
    

    Since the download can be unreliable, the full file is pasted below. Two changes were made compared to the upstream manifest: the service type is set to NodePort and a fixed node port is added; see the comments.

    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort   # NodePort mode
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30000  # exposed on node port 30000
      selector:
        k8s-app: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
        verbs: ["get", "update", "delete"]
        # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["kubernetes-dashboard-settings"]
        verbs: ["get", "update"]
        # Allow Dashboard to get metrics.
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["heapster", "dashboard-metrics-scraper"]
        verbs: ["proxy"]
      - apiGroups: [""]
        resources: ["services/proxy"]
        resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-
    metrics-scraper"]
        verbs: ["get"]
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
      # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list", "watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.4.0
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
                # Uncomment the following line to manually specify Kubernetes API server Host
                # If not specified, Dashboard will attempt to auto discover the API server and connect
                # to it. Uncomment only if the default does not work.
                # - --apiserver-host=http://my-address:port
              volumeMounts:
                - name: kubernetes-dashboard-certs
                  mountPath: /certs
                  # Create on-disk volume to store exec logs
                - mountPath: /tmp
                  name: tmp-volume
              livenessProbe:
                httpGet:
                  scheme: HTTPS
                  path: /
                  port: 8443
                initialDelaySeconds: 30
                timeoutSeconds: 30
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          volumes:
            - name: kubernetes-dashboard-certs
              secret:
                secretName: kubernetes-dashboard-certs
            - name: tmp-volume
              emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 8000
          targetPort: 8000
      selector:
        k8s-app: dashboard-metrics-scraper
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: dashboard-metrics-scraper
      template:
        metadata:
          labels:
            k8s-app: dashboard-metrics-scraper
        spec:
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.7
              ports:
                - containerPort: 8000
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 8000
                initialDelaySeconds: 30
                timeoutSeconds: 30
              volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          volumes:
            - name: tmp-volume
              emptyDir: {}
    
    

    Apply it:

    kubectl apply -f recommended.yaml
    

    2. Check the result

    
    ingress-nginx          ingress-nginx-admission-create--1-d5xsw     0/1     Completed   0                4h22m
    ingress-nginx          ingress-nginx-admission-patch--1-t975b      0/1     Completed   1                4h22m
    ingress-nginx          ingress-nginx-controller-55888bbc94-82vsl   1/1     Running     0                4h22m
    ingress-nginx          ingress-nginx-controller-55888bbc94-z2f97   1/1     Running     0                4h22m
    kube-system            coredns-7f6cbbb7b8-8n9vq                    1/1     Running     0                3h44m
    kube-system            coredns-7f6cbbb7b8-jqxc2                    1/1     Running     0                3h44m
    kube-system            etcd-k8s-master01                           1/1     Running     8 (3h49m ago)    3d23h
    kube-system            etcd-k8s-master02                           1/1     Running     2                3d23h
    kube-system            etcd-k8s-master03                           1/1     Running     1                3d23h
    kube-system            kube-apiserver-k8s-master01                 1/1     Running     8 (3h49m ago)    3d23h
    kube-system            kube-apiserver-k8s-master02                 1/1     Running     2                3d23h
    kube-system            kube-apiserver-k8s-master03                 1/1     Running     4 (3d23h ago)    3d23h
    kube-system            kube-controller-manager-k8s-master01        1/1     Running     11 (3h49m ago)   3d23h
    kube-system            kube-controller-manager-k8s-master02        1/1     Running     2                3d23h
    kube-system            kube-controller-manager-k8s-master03        1/1     Running     2                3d23h
    kube-system            kube-flannel-ds-hqhc7                       1/1     Running     1 (2d23h ago)    3d22h
    kube-system            kube-flannel-ds-k2kgk                       1/1     Running     8 (2d23h ago)    3d23h
    kube-system            kube-flannel-ds-s7sxm                       1/1     Running     6 (3h49m ago)    3d23h
    kube-system            kube-flannel-ds-t7l8t                       1/1     Running     0                3d23h
    kube-system            kube-flannel-ds-vthj9                       1/1     Running     0                3d23h
    kube-system            kube-proxy-6wk8x                            1/1     Running     1 (2d23h ago)    3d1h
    kube-system            kube-proxy-gxjrr                            1/1     Running     8 (2d23h ago)    3d1h
    kube-system            kube-proxy-q9w7m                            1/1     Running     0                3d1h
    kube-system            kube-proxy-trq2p                            1/1     Running     0                3d1h
    kube-system            kube-proxy-w8lfw                            1/1     Running     4 (3h49m ago)    3d1h
    kube-system            kube-scheduler-k8s-master01                 1/1     Running     11 (3h49m ago)   3d23h
    kube-system            kube-scheduler-k8s-master02                 1/1     Running     2                3d23h
    kube-system            kube-scheduler-k8s-master03                 1/1     Running     2                3d23h
    kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-ckhcl   1/1     Running     0                9m28s
    kubernetes-dashboard   kubernetes-dashboard-576cb95f94-jnv7f       1/1     Running     0                9m28s
    
    

    3. Check the dashboard service

    kubectl get service -n kubernetes-dashboard  -o wide
    NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE    SELECTOR
    dashboard-metrics-scraper   ClusterIP   10.96.145.101   <none>        8000/TCP        132m   k8s-app=dashboard-metrics-scraper
    kubernetes-dashboard        NodePort    10.110.14.77    <none>        443:30000/TCP   132m   k8s-app=kubernetes-dashboard
    
    

    4. Create a dashboard admin service account

    vim dashboard-admin.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: dashboard-admin
      namespace: kubernetes-dashboard
    

    Apply it:

    kubectl apply -f dashboard-admin.yaml
    
    

    5. Grant the admin account cluster-admin permissions

    vim dashboard-admin-bind-cluster-role.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: dashboard-admin-bind-cluster-role
      labels:
        k8s-app: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: dashboard-admin
      namespace: kubernetes-dashboard
    

    Apply it:

    kubectl apply -f dashboard-admin-bind-cluster-role.yaml
    

    6. Retrieve the admin token

    # kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
    Name:         dashboard-admin-token-tzp2d
    Namespace:    kubernetes-dashboard
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: 37a23381-007c-4bab-a07b-42767a56d859
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1099 bytes
    namespace:  20 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InRFRF9MWlhDLVZ2MkJjT2tXUXQ4QlRhWVowOTVTRTBkZ2tDcF9xaE5qOFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdHpwMmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzdhMjMzODEtMDA3Yy00YmFiLWEwN2ItNDI3NjdhNTZkODU5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.TIWkVlu7SrwK9GetIC9eE32sgzuta0Zy52Ta3KkPmlQaINgqZx38I3nrFJ1u_641tENNu_60T3PjCbZweiqmpPTiyazL9Lw8uSQ5sbX3hauSzC5xOA1CX4AH1KEUnBYwWhuI-1VpXeXX-nVn7PoDElNoHBdXZ2l3NNLx2KmmaFoXHiVXAiIzTvSGY4DxJ9y6g2Tyz7GFOlOfOgpKYbVZlKufqrXEiO5SoUE_WndJSlt65UydQZ_zwmhA_6zWSxTDj2jF1o76eYXjpMLT0ioM51k-OzgljnRKZU7Jy67XJzj5VdJuDUdTZ0KADhF2XAkh-Vre0tjMk0867VHq0K_Big
    
    

    7. Open a browser and go to https://192.168.30.115:30000 (the dashboard is served over HTTPS with a self-signed certificate)

    Select the Token option and paste in the token retrieved above.


    This completes the highly available Kubernetes deployment. As a test, k8s-master01 was shut down and the cluster stayed available; only the dashboard page flickered briefly.
