Ubuntu 18.04: Building a Kubernetes Cluster from Binaries

Author: 阿当运维 | Published 2021-09-26 16:17

    Plan:

    Server       Role                  Notes
    k8s-master   Master + Node + etcd
    k8s-node1    Node + etcd
    k8s-node2    Node + etcd
    k8s-center   kubectl, cfssl        machine used to deploy the k8s cluster

    IP allocation:
    cluster-cidr (pods): 172.30.0.0/16
    cluster-ip (svc): 10.96.0.1/16
    OS: Ubuntu 18.04


    Before installing, know which components run where:
    master: kube-apiserver, kube-controller-manager, kube-scheduler
    node: kube-proxy, kubelet

    Installation approach

    Use a center machine that can SSH to the three cluster nodes without a password. On it, prepare the service unit files, configuration files, SSL certificates, and the required binary packages, then push everything to the nodes with scp or scripts. This keeps the installation simple and repeatable.
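
    If the center machine cannot yet reach the nodes without a password, a minimal sketch (assuming root SSH login is allowed on the nodes):

    # On k8s-center: generate a key pair once, then push the public key to every node
    ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa
    for host in k8s-master k8s-node1 k8s-node2;do
        ssh-copy-id root@${host}
    done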

    Installation process

    Roughly divided into:

    1. Initialize the node machines
    2. Install Docker on the node machines
    3. Plan the directories k8s will use
    4. Generate all required certificates
    5. Deploy etcd
    6. Create the kubeconfig files
    7. Deploy the master
    8. Deploy the nodes
    9. Deploy the network

    Preliminaries: directory layout (on the k8s-center machine):

    Under /root/:

    ├── conf            # configuration files
    │   ├── etcd
    │   └── kubernetes
    ├── k8s_setup       # config files, helper scripts, and packages used during installation
    │   └── bin_file    # deployment scripts
    │       ├── 1-ssl_install
    │       ├── 2-etcd_install
    │       ├── 3-kubeconfig_install
    │       ├── 4-master_install
    │       ├── 5-node_install
    │       ├── 6-cni_install
    │       └── soft    # software packages
    ├── kubeconfig
    └── pki             # certificates
        ├── etcd
        ├── kubernetes
        └── pki_json
    

    I. Initialization

    All initialization steps run on the node machines.
    (To batch them in Xshell, use Tools -> Send input to all sessions.)

    1. Set the hostname and hosts entries
    # set each node's hostname first, e.g.: hostnamectl set-hostname k8s-master
    vim /etc/hosts
    10.0.3.202  k8s-master
    10.0.3.203  k8s-node1
    10.0.3.204  k8s-node2
    
    2. Install required tools
    apt update
    apt install ntpdate git vim curl wget jq psmisc net-tools telnet lvm2
    apt install ipvsadm ipset sysstat conntrack
    
    3. Basic configuration and system tuning
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    echo 'Asia/Shanghai' > /etc/timezone
    ntpdate ntp.myhuaweicloud.com
    
    # Add to crontab (delete any existing entry first)
    sed -i "/ntp.myhuaweicloud.com/d" /etc/crontab
    echo "*/5 * * * * root /usr/sbin/ntpdate ntp.myhuaweicloud.com" >> /etc/crontab
    systemctl restart cron
    
    # Disable the firewall (stock Ubuntu ships ufw; firewalld is also handled in case it was installed)
    systemctl stop ufw firewalld 2> /dev/null || echo ok > /dev/null
    systemctl disable ufw firewalld 2> /dev/null || echo ok > /dev/null
    
    # Disable swap
    swapoff -a && sysctl -w vm.swappiness=0
    sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
    
    # System limits (a soft limit may not exceed its hard limit, so hard nofile is 655360 as well)
    cat << EOF > /etc/security/limits.conf
    * soft nofile 655360
    * hard nofile 655360
    * soft nproc 655350
    * hard nproc 655350
    * soft memlock unlimited
    * hard memlock unlimited
    root soft nofile 655360
    root hard nofile 655360
    root soft nproc 655350
    root hard nproc 655350
    root soft memlock unlimited
    root hard memlock unlimited
    
    EOF
    
    
    cat << EOF >> /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    fs.may_detach_mounts = 1
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.inotify.max_user_watches=89100
    fs.file-max=52706963
    fs.nr_open=52706963
    net.netfilter.nf_conntrack_max=2310720
    
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_keepalive_intvl = 15
    net.ipv4.tcp_max_tw_buckets = 36000
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_max_orphans = 327680
    net.ipv4.tcp_orphan_retries = 3
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 16384
    net.ipv4.tcp_timestamps = 0
    net.core.somaxconn = 16384
    EOF
    
    cat << EOF > /etc/modules-load.d/ipvs.conf
    ip_vs
    ip_vs_lc
    ip_vs_wlc
    ip_vs_rr
    ip_vs_wrr
    ip_vs_lblc
    ip_vs_lblcr
    ip_vs_dh
    ip_vs_sh
    ip_vs_fo
    ip_vs_nq
    ip_vs_sed
    ip_vs_ftp
    nf_conntrack
    # br_netfilter is needed for the net.bridge.* sysctls set above
    br_netfilter
    ip_tables
    ip_set
    xt_set
    ipt_set
    ipt_rpfilter
    ipt_REJECT
    ipip
    
    EOF
    
    4. Reboot the system (applies the limits, sysctl settings, and kernel modules).
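
    After the reboot, a quick check (a sketch) confirms the modules and sysctl values took effect:

    lsmod | grep -e ip_vs -e nf_conntrack
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward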

    II. Install Docker

    Also run in batch on the node machines.

    # Docker version
    DOCKER_VERSION="5:19.03.15~3-0~ubuntu-$(lsb_release -cs)"
    # Remove any old Docker packages
    apt remove docker docker-engine docker-ce docker.io -y
    
    # Install Docker
    apt install -y apt-transport-https ca-certificates software-properties-common
    curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
    
    apt install -y docker-ce=$DOCKER_VERSION
    # Configure Docker
    cat << EOF > /etc/docker/daemon.json
    {
        "registry-mirrors": [
            "https://dec3s4wu.mirror.aliyuncs.com", 
            "https://registry.docker-cn.com"],
        "exec-opts": ["native.cgroupdriver=systemd"],
        "max-concurrent-downloads": 10,  
        "max-concurrent-uploads": 5, 
        "log-opts": {
            "max-size": "300m",
            "max-file": "2"},  
        "live-restore": true 
    }
    EOF
    
    # Start Docker
    systemctl restart docker
    systemctl enable docker
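
    A quick sanity check (sketch) that Docker is running with the systemd cgroup driver kubelet expects:

    docker info 2>/dev/null | grep -i -e 'server version' -e 'cgroup driver'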
    

    III. Deploy the binary packages

    Run on the center machine.

    1. Download the k8s and etcd binary packages
    mkdir -pv  /root/k8s_setup/bin_file/soft
    cd    /root/k8s_setup/bin_file/soft
    wget http://hw.files.jiankangyouyi.com/etcd-v3.4.15-linux-amd64.tar
    wget http://hw.files.jiankangyouyi.com/kubernetes-server1.20.6-linux-amd64.tar.gz
    tar -xf kubernetes-server1.20.6-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kubectl
    
    2. Create the required directories on the nodes
    for host in k8s-master k8s-node1 k8s-node2;do
     ssh root@${host} 'mkdir -p /etc/etcd/ssl/ /etc/kubernetes/pki/'
     ssh root@${host}  'ln -sf /etc/etcd/ssl /etc/kubernetes/pki/etcd'
    done
    
    3. Copy the binaries to the nodes
    cd /root/k8s_setup/bin_file/soft
    for host in  k8s-master k8s-node1 k8s-node2;do
     scp etcd-v3.4.15-linux-amd64.tar root@${host}:~/
     scp kubernetes-server1.20.6-linux-amd64.tar.gz root@${host}:~/
    done
    
    for host in k8s-master k8s-node1 k8s-node2;do
     ssh root@${host} 'tar -xf etcd-v3.4.15-linux-amd64.tar --strip-components=1 -C /usr/local/bin etcd-v3.4.15-linux-amd64/etcd{,ctl}'
     ssh root@${host} 'tar -xf kubernetes-server1.20.6-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}'
    done
    
    
    for host in k8s-master k8s-node1 k8s-node2;do
     ssh root@${host} 'kubelet --version'
     ssh root@${host} 'etcdctl version'
    done
    

    IV. Generate all certificates

    Prerequisite: download the cfssl tools
    wget http://hw.files.jiankangyouyi.com/cfssl_linux-amd64 -O /usr/local/bin/cfssl
    wget http://hw.files.jiankangyouyi.com/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
    wget http://hw.files.jiankangyouyi.com/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
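
    Worth confirming the tools run before generating anything (a quick sketch):

    cfssl version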
    
    1. Create the certificate script directory

    mkdir -pv /root/k8s_setup/bin_file/1-ssl_install

    2. Create the certificate staging directories
    mkdir -p ~/pki/{etcd,kubernetes,pki_json}
    
    3. Add and run the scripts

    vim /root/k8s_setup/bin_file/1-ssl_install/1.json_EOF.sh

    #!/bin/bash
    # JSON definitions for the certificates to be created
    DIRNAME=/root/pki/pki_json
    cat << EOF >  ${DIRNAME}/ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "876000h"
        },
        "profiles": {
          "kubernetes": {
            "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ],
            "expiry": "876000h"
          }
        }
      }
    }
    
    EOF
    
    #etcd
    cat << EOF > ${DIRNAME}/etcd-ca-csr.json
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "etcd",
          "OU": "Etcd Security"
        }
      ],
      "ca": {
        "expiry": "876000h"
      }
    }
    EOF
    
    cat << EOF > ${DIRNAME}/etcd-csr.json
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "etcd",
          "OU": "Etcd Security"
        }
      ]
    }
    EOF
    #
    ## JSON for the k8s component certificates
    cat << EOF > ${DIRNAME}/ca-csr.json
    {
      "CN": "kubernetes",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "Kubernetes",
          "OU": "Kubernetes-manual"
        }
      ],
      "ca": {
        "expiry": "876000h"
      }
    }
    EOF
    
    cat << EOF > ${DIRNAME}/apiserver-csr.json
    {
      "CN": "kube-apiserver",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "Kubernetes",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
    
    cat << EOF > ${DIRNAME}/admin-csr.json
    {
      "CN": "admin",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "system:masters",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
    
    cat << EOF > ${DIRNAME}/manager-csr.json
    {
      "CN": "system:kube-controller-manager",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "system:kube-controller-manager",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
    
    cat << EOF > ${DIRNAME}/scheduler-csr.json
    {
      "CN": "system:kube-scheduler",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "system:kube-scheduler",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
    
    cat << EOF > ${DIRNAME}/kube-proxy-csr.json
    {
      "CN": "system:kube-proxy",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "system:kube-proxy",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
    
    cat << EOF > ${DIRNAME}/front-proxy-ca-csr.json
    {
      "CN": "kubernetes",
      "key": {
         "algo": "rsa",
         "size": 2048
      }
    }
    
    EOF
    
    cat << EOF > ${DIRNAME}/front-proxy-client-csr.json
    {
      "CN": "front-proxy-client",
      "key": {
         "algo": "rsa",
         "size": 2048
      }
    }
    EOF
    

    vim /root/k8s_setup/bin_file/1-ssl_install/2.ca.sh

    #!/bin/bash
    # Generate certificates for etcd and each k8s component
    set -e
    ########### etcd certificates #########################
    cd /root/pki/pki_json
    
    etcd_hostname="127.0.0.1,k8s-master,k8s-node1,k8s-node2,10.0.3.202,10.0.3.203,10.0.3.204"
    cfssl gencert -initca  ~/pki/pki_json/etcd-ca-csr.json | cfssljson -bare ~/pki/etcd/etcd-ca
    
    cfssl gencert \
       -ca=/root/pki/etcd/etcd-ca.pem \
       -ca-key=/root/pki/etcd/etcd-ca-key.pem \
       -config=ca-config.json \
       -hostname=${etcd_hostname} \
       -profile=kubernetes \
       etcd-csr.json | cfssljson -bare ~/pki/etcd/etcd
    
    if [ `ls ~/pki/etcd/etcd* | wc -l` -gt 0 ];then
        echo "etcd certificates generated"
        sleep 2
    fi
    
    ########### k8s component certificates ########################
    #apiserver
    cd /root/pki/pki_json
    
    k8s_hostname="10.96.0.1,127.0.0.1,10.0.3.202,10.0.3.203,10.0.3.204,10.0.3.205,10.0.3.188,k8s-master,k8s-node1,k8s-node2,k8s-center,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local"
    
    cfssl gencert -initca  ~/pki/pki_json/ca-csr.json | cfssljson -bare ~/pki/kubernetes/ca
    cfssl gencert \
        -ca=/root/pki/kubernetes/ca.pem \
        -ca-key=/root/pki/kubernetes/ca-key.pem \
        -config=ca-config.json \
        -hostname=${k8s_hostname} \
        -profile=kubernetes \
        apiserver-csr.json | cfssljson -bare ~/pki/kubernetes/apiserver
    
    if [ `ls ~/pki/kubernetes/apiserver* | wc -l` -gt 0 ];then
            echo "apiserver certificates generated"
            sleep 2
    fi
    
    #admin
    cfssl gencert     -ca=/root/pki/kubernetes/ca.pem      -ca-key=/root/pki/kubernetes/ca-key.pem     -config=ca-config.json     -profile=kubernetes    admin-csr.json | cfssljson -bare ~/pki/kubernetes/admin
    
    if [ `ls ~/pki/kubernetes/admin* | wc -l` -gt 0 ];then
            echo "admin certificates generated"
            sleep 2
    fi
    
    ##controller
    cfssl gencert \
       -ca=/root/pki/kubernetes/ca.pem \
       -ca-key=/root/pki/kubernetes/ca-key.pem \
       -config=ca-config.json \
       -profile=kubernetes \
       manager-csr.json | cfssljson -bare ~/pki/kubernetes/controller-manager
    
    if [ `ls ~/pki/kubernetes/controller-manager* | wc -l` -gt 0 ];then
            echo "controller-manager certificates generated"
            sleep 2
    fi
    
    #scheduler
    cfssl gencert \
       -ca=/root/pki/kubernetes/ca.pem \
       -ca-key=/root/pki/kubernetes/ca-key.pem \
       -config=ca-config.json \
       -profile=kubernetes \
       scheduler-csr.json | cfssljson -bare ~/pki/kubernetes/scheduler
    
    if [ `ls ~/pki/kubernetes/scheduler* | wc -l` -gt 0 ];then
            echo "scheduler certificates generated"
            sleep 2
    fi
    
    #kube-proxy
    cfssl gencert \
       -ca=/root/pki/kubernetes/ca.pem \
       -ca-key=/root/pki/kubernetes/ca-key.pem \
       -config=ca-config.json \
       -profile=kubernetes \
       kube-proxy-csr.json | cfssljson -bare ~/pki/kubernetes/proxy
    
    if [ `ls ~/pki/kubernetes/proxy* | wc -l` -gt 0 ];then
            echo "kube-proxy certificates generated"
            sleep 2
    fi
    
    # Aggregation layer: front-proxy-client
    cfssl gencert  -initca ~/pki/pki_json/front-proxy-ca-csr.json | cfssljson -bare ~/pki/kubernetes/front-proxy-ca
    cfssl gencert  \
        -ca=/root/pki/kubernetes/front-proxy-ca.pem  \
        -ca-key=/root/pki/kubernetes/front-proxy-ca-key.pem \
        -config=ca-config.json \
        -profile=kubernetes \
        front-proxy-client-csr.json | cfssljson -bare ~/pki/kubernetes/front-proxy-client
    
    if [ `ls ~/pki/kubernetes/front-proxy-client* | wc -l` -gt 0 ];then
            echo "front-proxy-client certificates generated"
            sleep 2
    fi
    

    At this point, check that all the certificate files under ~/pki were generated.
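
    For example, the SANs on the apiserver certificate can be inspected with cfssl-certinfo (a sketch; jq is optional):

    cfssl-certinfo -cert ~/pki/kubernetes/apiserver.pem | jq '.sans'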

    4. Create the ServiceAccount key pair
    openssl genrsa -out ~/pki/kubernetes/sa.key 2048
    openssl rsa -in ~/pki/kubernetes/sa.key -pubout -out ~/pki/kubernetes/sa.pub
    

    V. Deploy etcd

    Run on the center machine.

    1. Create the etcd deployment directory
    mkdir -pv /root/k8s_setup/bin_file/2-etcd_install
    

    cd /root/k8s_setup/bin_file/2-etcd_install

    2. Add the scripts and run them

    vim 1.etcd_conf.sh

    #!/bin/bash
    # Generate the etcd config file and etcd.service unit for each of the 3 etcd nodes
    
    #etcd.config
    HOST_LIST="k8s-master=https://10.0.3.202:2380,k8s-node1=https://10.0.3.203:2380,k8s-node2=https://10.0.3.204:2380"
    
    for host in k8s-master k8s-node1 k8s-node2;do
        case $host in 
            k8s-master)
              IP=10.0.3.202
              ;;
            k8s-node1)
              IP=10.0.3.203
              ;;
            k8s-node2)
              IP=10.0.3.204
              ;;
        esac
    
    
    cat << EOF > ~/conf/etcd/etcd.config_$host.yml
    name: '$host'
    data-dir: /var/lib/etcd
    wal-dir: /var/lib/etcd/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://$IP:2380'
    listen-client-urls: 'https://$IP:2379,http://127.0.0.1:2379'
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://$IP:2380'
    advertise-client-urls: 'https://$IP:2379'
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: '$HOST_LIST'
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 5000
    proxy-read-timeout: 0
    client-transport-security:
      cert-file: '/etc/etcd/ssl/etcd.pem'
      key-file: '/etc/etcd/ssl/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/etc/etcd/ssl/etcd.pem'
      key-file: '/etc/etcd/ssl/etcd-key.pem'
      peer-client-cert-auth: true
      trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false
    EOF
    done
    
    
    
    #etcd.service
    cat << 'EOF' > ~/conf/etcd/etcd.service
    [Unit]
    Description=Etcd Service
    Documentation=https://coreos.com/etcd/docs/latest/
    After=network.target
    
    [Service]
    Type=notify
    ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
    Restart=on-failure
    RestartSec=10
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    Alias=etcd3.service
    EOF
    

    vim 2.deploy_etcd.sh

    #!/bin/bash
    #deploy etcd SSL files, etcd.service, and etcd config
    
    
    for host in k8s-master k8s-node1 k8s-node2;do
        #SSL FILE
        cd /root/pki/etcd
        scp ./*.pem root@$host:/etc/etcd/ssl/
    
        #etcd.service
        cd ~/conf/etcd
        scp etcd.service root@${host}:/lib/systemd/system/etcd.service
        #etcd config
        scp etcd.config_${host}.yml  root@${host}:/etc/etcd/etcd.config.yml
    done
    
    # Start etcd on all members in parallel (a member blocks until the cluster reaches quorum)
    for host in k8s-master k8s-node1 k8s-node2;do
        ssh root@${host} 'systemctl daemon-reload && systemctl enable --now etcd' &
    done
    wait
    

    At this point etcd is deployed. On k8s-master, verify that the etcd service came up properly.
    etcd listens on port 2380 for peers and 2379 for clients.

    export ETCDCTL_API=3
    EP="10.0.3.202:2379,10.0.3.203:2379,10.0.3.204:2379"
    etcdctl --endpoints="${EP}" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
    

    VI. Create the kubeconfig files

    1. Create the kubeconfig script directory and staging directory
    mkdir -pv /root/k8s_setup/bin_file/3-kubeconfig_install
    mkdir ~/kubeconfig
    
    2. Generate kubeconfigs for the k8s system components (as a script)

    vim 1.kubeconfig.sh

    for item in controller-manager scheduler proxy;do
        server=https://10.0.3.202:6443
        kubeconfig_file=~/kubeconfig/${item}.kubeconfig
        ca_file=~/pki/kubernetes/ca.pem
        cert_file=~/pki/kubernetes/${item}.pem
        cert_key_file=~/pki/kubernetes/${item}-key.pem
        kubectl config set-cluster kubernetes \
             --certificate-authority=${ca_file} \
             --embed-certs=true \
             --server=${server} \
             --kubeconfig=${kubeconfig_file}
        kubectl config set-credentials system:kube-${item} \
             --client-certificate=${cert_file} \
             --client-key=${cert_key_file} \
             --embed-certs=true \
             --kubeconfig=${kubeconfig_file}
        kubectl config set-context system:kube-${item}@kubernetes \
            --cluster=kubernetes \
            --user=system:kube-${item} \
            --kubeconfig=${kubeconfig_file}
        kubectl config use-context system:kube-${item}@kubernetes \
             --kubeconfig=${kubeconfig_file}
    
    done
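
    To confirm each file is complete (cluster, user, and current context all set), a quick sketch:

    for item in controller-manager scheduler proxy;do
        kubectl config view --kubeconfig=${HOME}/kubeconfig/${item}.kubeconfig -o jsonpath='{.current-context}'; echo
    done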
    
    Generate the admin kubeconfig (as a script)

    vim 2.admin.sh

    server=https://10.0.3.202:6443
    item=admin
    kubeconfig_file=~/kubeconfig/${item}.kubeconfig
    ca_file=~/pki/kubernetes/ca.pem
    cert_file=~/pki/kubernetes/${item}.pem
    cert_key_file=~/pki/kubernetes/${item}-key.pem
    
    kubectl config set-cluster kubernetes \
         --certificate-authority=${ca_file} \
         --embed-certs=true \
         --server=${server} \
         --kubeconfig=${kubeconfig_file}
         
    kubectl config set-credentials kube-${item} \
         --client-certificate=${cert_file} \
         --client-key=${cert_key_file} \
         --embed-certs=true \
         --kubeconfig=${kubeconfig_file}
    
    kubectl config set-context kube-${item}@kubernetes \
        --cluster=kubernetes \
        --user=kube-${item} \
        --kubeconfig=${kubeconfig_file}
    
    kubectl config use-context kube-${item}@kubernetes \
         --kubeconfig=${kubeconfig_file}
    
    cat ~/kubeconfig/${item}.kubeconfig
    

    3. Deploy the kubeconfigs and certificates to the master

    vim 3.deploy_kubeconfig.sh

    api_server=k8s-master
    for file in admin.kubeconfig controller-manager.kubeconfig proxy.kubeconfig scheduler.kubeconfig;do
    scp ~/kubeconfig/${file} root@${api_server}:/etc/kubernetes/
    done
    #the remote master may not have the pki directory yet; create it if needed
    #ssh root@k8s-master "mkdir -pv /etc/kubernetes/pki"
    scp  ~/pki/kubernetes/*.pem root@${api_server}:/etc/kubernetes/pki/
    scp  ~/pki/kubernetes/sa* root@${api_server}:/etc/kubernetes/pki/
    

    VII. Deploy the master

    Create the master deployment script directory
    mkdir -pv /root/k8s_setup/bin_file/4-master_install
    
    1. Deploy kube-apiserver

    vim 1.deploy_apiserver.sh

    api_server=k8s-master
    
    cat << 'EOF' > ~/conf/kubernetes/kube-apiserver.service 
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
          --v=2  \
          --logtostderr=true  \
          --allow-privileged=true  \
          --bind-address=0.0.0.0  \
          --secure-port=6443  \
          --insecure-port=0  \
          --advertise-address=10.0.3.202 \
          --service-cluster-ip-range=10.96.0.0/16  \
          --service-node-port-range=30000-32767  \
          --etcd-servers=https://10.0.3.202:2379,https://10.0.3.203:2379,https://10.0.3.204:2379 \
          --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
          --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
          --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
          --client-ca-file=/etc/kubernetes/pki/ca.pem  \
          --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
          --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
          --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
          --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
          --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
          --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
          --service-account-issuer=https://kubernetes.default.svc.cluster.local \
          --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
          --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
          --authorization-mode=Node,RBAC  \
          --enable-bootstrap-token-auth=true  \
          --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
          --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
          --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
          --requestheader-allowed-names=aggregator  \
          --requestheader-group-headers=X-Remote-Group  \
          --requestheader-extra-headers-prefix=X-Remote-Extra-  \
          --requestheader-username-headers=X-Remote-User
          # --token-auth-file=/etc/kubernetes/token.csv
    
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    #deploy master: apiserver
    
    scp ~/conf/kubernetes/kube-apiserver.service root@${api_server}:/lib/systemd/system/kube-apiserver.service
    
    ssh root@${api_server}  'systemctl daemon-reload && systemctl enable --now kube-apiserver'
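
    Once the unit is active, the apiserver's /healthz endpoint should answer. In v1.20 it is readable even unauthenticated (via the default system:public-info-viewer binding), so a quick sketch:

    curl -k https://10.0.3.202:6443/healthz; echo    # expected output: ok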
    
    2. Deploy kube-controller-manager

    vim 2.deploy_controller-manager.sh

    api_server=k8s-master
    
    cat<< 'EOF' > ~/conf/kubernetes/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/usr/local/bin/kube-controller-manager \
          --v=2 \
          --logtostderr=true \
          --address=127.0.0.1 \
          --root-ca-file=/etc/kubernetes/pki/ca.pem \
          --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
          --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
          --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
          --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
          --leader-elect=true \
          --use-service-account-credentials=true \
          --node-monitor-grace-period=40s \
          --node-monitor-period=5s \
          --pod-eviction-timeout=2m0s \
          --controllers=*,bootstrapsigner,tokencleaner \
          --allocate-node-cidrs=true \
          --cluster-cidr=172.30.0.0/16 \
          --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
          --node-cidr-mask-size=24
          
    Restart=always
    RestartSec=10s
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    #deploy master controller-manager
    scp ~/conf/kubernetes/kube-controller-manager.service root@${api_server}:/lib/systemd/system/kube-controller-manager.service
    ssh root@${api_server} 'systemctl daemon-reload && systemctl enable --now kube-controller-manager'
    
    3. Deploy kube-scheduler

    vim 3.deploy_scheduler.sh

    api_server=k8s-master
    
    cat << 'EOF' > ~/conf/kubernetes/kube-scheduler.service 
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/usr/local/bin/kube-scheduler \
          --v=2 \
          --logtostderr=true \
          --address=127.0.0.1 \
          --leader-elect=true \
          --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
    
    Restart=always
    RestartSec=10s
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    scp ~/conf/kubernetes/kube-scheduler.service root@${api_server}:/lib/systemd/system/kube-scheduler.service
    
    ssh root@${api_server} 'systemctl daemon-reload && systemctl enable --now kube-scheduler' 
    

    The master is now deployed. Check the component status (on k8s-master):

    root@k8s-master:/etc/kubernetes# kubectl get cs
    Warning: v1 ComponentStatus is deprecated in v1.19+
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok                  
    scheduler            Healthy   ok                  
    etcd-1               Healthy   {"health":"true"}   
    etcd-0               Healthy   {"health":"true"}   
    etcd-2               Healthy   {"health":"true"}   
    
    Problem: kubectl get cs reports "8080 was refused - did you specify the right host or port?"
    Cause: the KUBECONFIG environment variable is not set (the file deployed in step VI.3 is admin.kubeconfig).
    echo "export KUBECONFIG=/etc/kubernetes/admin.kubeconfig" >> /etc/profile
    source /etc/profile
    

    To manage the cluster with kubectl from the center machine, point the same KUBECONFIG variable at ~/kubeconfig/admin.kubeconfig instead.
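
    For example (a sketch):

    export KUBECONFIG=~/kubeconfig/admin.kubeconfig
    kubectl get nodes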

    VIII. Deploy the nodes

    Deploy kubelet

    1. bootstrap-kubelet.kubeconfig
    2. kubelet-conf.yml (kubelet parameter file)
    3. kubelet.service unit file

    Note: when deploying the nodes, generate all the node-side files on the center machine, then push them to every node in one batch.

    1. Generate bootstrap-kubelet.kubeconfig

    Note: this runs on the center machine, and bootstrap.secret.yaml must be prepared first.
    Create the node deployment directory:

    mkdir -pv /root/k8s_setup/bin_file/5-node_install
    cd   /root/k8s_setup/bin_file/5-node_install
    
    • Prepare bootstrap.secret.yaml
      Note the token-id and token-secret in bootstrap.secret.yaml; they form the bootstrap token <token-id>.<token-secret> (see the sketch after this manifest).
    cat << 'EOF'> bootstrap.secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: bootstrap-token-c8ad9c
      namespace: kube-system
    type: bootstrap.kubernetes.io/token
    stringData:
      description: "The default bootstrap token generated by 'kubelet '."
      token-id: c8ad9c
      token-secret: 2e4d610cf3e7426e
      usage-bootstrap-authentication: "true"
      usage-bootstrap-signing: "true"
      auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
     
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubelet-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node-bootstrapper
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-certificate-rotation
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:nodes
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:kube-apiserver-to-kubelet
    rules:
      - apiGroups:
          - ""
        resources:
          - nodes/proxy
          - nodes/stats
          - nodes/log
          - nodes/spec
          - nodes/metrics
        verbs:
          - "*"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:kube-apiserver
      namespace: ""
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:kube-apiserver-to-kubelet
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: kube-apiserver
    EOF
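
    The token-id and token-secret above are hard-coded; to use fresh random values instead (a sketch, assuming openssl is available), generate them and substitute into the Secret (including its name, bootstrap-token-<token-id>) and into the kubeconfig step below:

    TOKEN_ID=$(openssl rand -hex 3)       # 6 hex chars, e.g. c8ad9c
    TOKEN_SECRET=$(openssl rand -hex 8)   # 16 hex chars
    echo "bootstrap token: ${TOKEN_ID}.${TOKEN_SECRET}"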
    
    • vim 1.deploy_bootstrap-kubelet.kubeconfig.sh

    What it does:
    generates bootstrap-kubelet.kubeconfig;
    deploys bootstrap-kubelet.kubeconfig to all node machines;
    deploys the CA certificates to all nodes.

    #!/bin/bash
    #generate bootstrap-kubelet.kubeconfig
    kubectl create -f bootstrap.secret.yaml 
    kubectl config set-cluster kubernetes \
        --certificate-authority=${HOME}/pki/kubernetes/ca.pem \
        --embed-certs=true \
        --server=https://10.0.3.202:6443 \
        --kubeconfig=${HOME}/kubeconfig/bootstrap-kubelet.kubeconfig
    kubectl config set-credentials tls-bootstrap-token-user \
        --token=c8ad9c.2e4d610cf3e7426e \
        --kubeconfig=${HOME}/kubeconfig/bootstrap-kubelet.kubeconfig
    kubectl config set-context tls-bootstrap-token-user@kubernetes \
        --cluster=kubernetes \
        --user=tls-bootstrap-token-user \
        --kubeconfig=${HOME}/kubeconfig/bootstrap-kubelet.kubeconfig
    kubectl config use-context tls-bootstrap-token-user@kubernetes \
        --kubeconfig=${HOME}/kubeconfig/bootstrap-kubelet.kubeconfig
    
    #deploy bootstrap-kubelet.kubeconfig
    if [ -f ${HOME}/kubeconfig/bootstrap-kubelet.kubeconfig ];then
        echo "bootstrap-kubelet.kubeconfig generated"
        echo "deploying to all node machines..."
        sleep 2
    fi
    
    FILE="${HOME}/kubeconfig/bootstrap-kubelet.kubeconfig"
    for host in k8s-master k8s-node1 k8s-node2;do
     ssh root@${host} "mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/ /etc/kubernetes/pki/ /opt/cni/bin"
     scp $FILE root@${host}:/etc/kubernetes/
    done
    
    #deploy the CA certificates to the node machines
    CA_PATH=${HOME}/pki/kubernetes/
    
    for NODE in k8s-master k8s-node1 k8s-node2;do
         for FILE in ca.pem ca-key.pem  front-proxy-ca.pem; do
           scp ${CA_PATH}/$FILE $NODE:/etc/kubernetes/pki/${FILE}
         done
    done
    
    • vim 2.kubelet-conf_service_file.sh

    What it does:
    generates kubelet.service, 10-kubelet.conf, and kubelet-conf.yml;
    deploys these three files to all nodes;
    restarts the kubelet service on the nodes.

    #generate kubelet.service, 10-kubelet.conf, kubelet-conf.yml
    cat << 'EOF' > /root/conf/kubernetes/kubelet.service
    
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    ExecStart=/usr/local/bin/kubelet
    
    Restart=always
    StartLimitInterval=0
    RestartSec=10
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    cat << 'EOF' > /root/conf/kubernetes/10-kubelet.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
    Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
    Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=harbor.hw.jiankangyouyi.com:5000/k8s-pubulic/pause-amd64:3.2"
    Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
    ExecStart=
    ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
    EOF
    
    
    
    cat << 'EOF' > /root/conf/kubernetes/kubelet-conf.yml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    address: 0.0.0.0
    port: 10250
    readOnlyPort: 10255
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    cgroupDriver: systemd
    cgroupsPerQOS: true
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    containerLogMaxFiles: 5
    containerLogMaxSize: 10Mi
    contentType: application/vnd.kubernetes.protobuf
    cpuCFSQuota: true
    cpuManagerPolicy: none
    cpuManagerReconcilePeriod: 10s
    enableControllerAttachDetach: true
    enableDebuggingHandlers: true
    enforceNodeAllocatable:
    - pods
    eventBurst: 10
    eventRecordQPS: 5
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s
    failSwapOn: true
    fileCheckFrequency: 20s
    hairpinMode: promiscuous-bridge
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 20s
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    iptablesDropBit: 15
    iptablesMasqueradeBit: 14
    kubeAPIBurst: 10
    kubeAPIQPS: 5
    makeIPTablesUtilChains: true
    maxOpenFiles: 1000000
    maxPods: 110
    nodeStatusUpdateFrequency: 10s
    oomScoreAdj: -999
    podPidsLimit: -1
    registryBurst: 10
    registryPullQPS: 5
    resolvConf: /etc/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 2m0s
    serializeImagePulls: true
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 4h0m0s
    syncFrequency: 1m0s
    volumeStatsAggPeriod: 1m0s
    EOF
    
    
    
    #deploy kubelet.service, 10-kubelet.conf, and kubelet-conf.yml to all node machines
    cd /root/conf/kubernetes
    if [  -f "kubelet-conf.yml" -a -f "10-kubelet.conf" -a -f "kubelet.service"  ];then
        echo "kubelet.service, 10-kubelet.conf, and kubelet-conf.yml all generated"
        echo "deploying to all node machines..."
        sleep 2
        for NODE in k8s-master k8s-node1 k8s-node2;do
            scp kubelet-conf.yml  root@${NODE}:/etc/kubernetes/kubelet-conf.yml
            scp kubelet.service root@${NODE}:/lib/systemd/system/kubelet.service
            scp 10-kubelet.conf root@${NODE}:/etc/systemd/system/kubelet.service.d/10-kubelet.conf
            ssh root@${NODE} 'systemctl daemon-reload && systemctl enable --now kubelet'
        done
    fi
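
    Because of the auto-approve ClusterRoleBindings created with bootstrap.secret.yaml, each kubelet's bootstrap CSR should be approved automatically; a quick sketch to confirm:

    kubectl get csr    # each node's CSR should show Approved,Issued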
    

    Now kubectl get node shows that all nodes have joined the cluster. They report NotReady because no CNI network plugin is installed yet; that is resolved in section IX.

    NAME         STATUS     ROLES    AGE     VERSION
    k8s-master   NotReady   <none>   3h38m   v1.20.6
    k8s-node1    NotReady   <none>   3h34m   v1.20.6
    k8s-node2    NotReady   <none>   3h35m   v1.20.6
    

    Deploy kube-proxy

    vim 3.deploy_kube-proxy.sh
    What it does:
    generates kube-proxy.conf;
    generates kube-proxy.service;
    deploys both files, plus the proxy.kubeconfig generated earlier, to all node machines.

    #!/bin/bash
    #generate kube-proxy.conf
    cat << EOF > /root/conf/kubernetes/kube-proxy.conf
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
      qps: 5
    clusterCIDR: 172.30.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      masqueradeAll: true
      minSyncPeriod: 5s
      scheduler: "rr"
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    udpIdleTimeout: 250ms
    EOF
    
    
    #kube-proxy.service 
    
    cat << EOF > /root/conf/kubernetes/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube Proxy
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/usr/local/bin/kube-proxy \
      --config=/etc/kubernetes/kube-proxy.conf \
      --v=2
    
    Restart=always
    RestartSec=10s
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    
    
    for NODE in k8s-master k8s-node1 k8s-node2;do
      scp /root/conf/kubernetes/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
      scp /root/conf/kubernetes/kube-proxy.service $NODE:/lib/systemd/system/kube-proxy.service
      scp /root/kubeconfig/proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
    
      ssh root@${NODE} 'systemctl daemon-reload && systemctl enable --now kube-proxy'
    done
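
    kube-proxy runs in ipvs mode here, so each node should now have IPVS virtual servers for the service network; a quick sketch:

    ipvsadm -Ln | head    # the kubernetes service VIP 10.96.0.1:443 should be listed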
    

    IX. Deploy the CNI network: Calico

    Create the CNI deployment directory (center machine)

    mkdir -pv  /root/k8s_setup/bin_file/6-cni_install
    
    1. Create calico-etcd.yaml

    (This file has been uploaded to GitLab and can be pulled directly.)
    Omitted here.

    2. Edit calico-etcd.yaml and deploy it to the master

    vim 1.deploy_calico.sh

    #!/bin/bash
    #fill the cluster-specific values into calico-etcd.yaml, push it to the master, and apply it
    
    sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://10.0.3.202:2379,https://10.0.3.203:2379,https://10.0.3.204:2379"#g' calico-etcd.yaml
    
    #ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
    #ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
    #ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
    ETCD_CA=`cat /root/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
    ETCD_CERT=`cat /root/pki/etcd/etcd.pem | base64 | tr -d '\n'`
    ETCD_KEY=`cat /root/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
    
    POD_SUBNET="172.30.0.0/16"
    
    sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
    
    sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
    
    sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
    
    ssh root@k8s-master "mkdir -p /etc/kubernetes/cni"
    scp calico-etcd.yaml root@k8s-master:/etc/kubernetes/cni/
    ssh root@k8s-master "kubectl  apply -f /etc/kubernetes/cni/calico-etcd.yaml"
                                                                                                                                                             
    

    After it runs, kubectl get node shows the nodes are Ready:

    NAME         STATUS   ROLES    AGE     VERSION
    k8s-master   Ready    <none>   5h37m   v1.20.6
    k8s-node1    Ready    <none>   5h34m   v1.20.6
    k8s-node2    Ready    <none>   5h34m   v1.20.6
    

    Check the calico pods:

    root@k8s-center:~# kubectl get pod -A -o wide 
    NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
    kube-system   calico-kube-controllers-c75f86dd5-7hk9z   1/1     Running   0          17h   10.0.3.204   k8s-node2    <none>           <none>
    kube-system   calico-node-2g5ks                         1/1     Running   0          17h   10.0.3.203   k8s-node1    <none>           <none>
    kube-system   calico-node-nn75b                         1/1     Running   0          17h   10.0.3.204   k8s-node2    <none>           <none>
    kube-system   calico-node-tt67g                         0/1     Running   0          17h   10.0.3.202   k8s-master   <none>           <none>
    

    X. Deploy CoreDNS

    CoreDNS provides Service name resolution inside the k8s cluster.
    Prepare a coredns yaml file (also on GitLab):
    vim 6-cni_install/coredns.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health {
              lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf {
              max_concurrent 1000
            }
            cache 30
            loop
            reload
            loadbalance
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/name: "CoreDNS"
    spec:
      # replicas: not specified here:
      # 1. Default is 1.
      # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          priorityClassName: system-cluster-critical
          serviceAccountName: coredns
          tolerations:
            - key: "CriticalAddonsOnly"
              operator: "Exists"
          nodeSelector:
            kubernetes.io/os: linux
          affinity:
             podAntiAffinity:
               preferredDuringSchedulingIgnoredDuringExecution:
               - weight: 100
                 podAffinityTerm:
                   labelSelector:
                     matchExpressions:
                       - key: k8s-app
                         operator: In
                         values: ["kube-dns"]
                   topologyKey: kubernetes.io/hostname
          containers:
          - name: coredns
            image: harbor.hw.jiankangyouyi.com:5000/k8s-pubulic/coredns:1.7.0
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /ready
                port: 8181
                scheme: HTTP
          dnsPolicy: Default
          volumes:
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: 10.96.0.10
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP
    

    Send the file to k8s-master and apply it:

    scp coredns.yaml root@k8s-master:/etc/kubernetes/cni
    ssh  root@k8s-master "kubectl apply -f  /etc/kubernetes/cni/coredns.yaml"
    

    At first the coredns pod did not come up; kubectl logs showed this error:

    [FATAL] plugin/loop: Loop (127.0.0.1:48100 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 639535139534040434.6569166625322327450."
    

    Cause:
    the Corefile forwards to /etc/resolv.conf, and that file's nameserver is 127.0.0.1, which creates a resolution loop.
    Fix:
    delete the coredns pod, remove the loop line from the ConfigMap in coredns.yaml, and re-apply; the pod then comes up normally.
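
    A sketch of that fix, run against the copy already on k8s-master:

    ssh root@k8s-master "sed -i '/^\s*loop\s*$/d' /etc/kubernetes/cni/coredns.yaml \
        && kubectl apply -f /etc/kubernetes/cni/coredns.yaml \
        && kubectl -n kube-system delete pod -l k8s-app=kube-dns"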

    root@k8s-master:/etc/kubernetes/cni# kubectl get pod -A
    NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-c75f86dd5-7hk9z   1/1     Running   0          21h
    kube-system   calico-node-2g5ks                         1/1     Running   0          21h
    kube-system   calico-node-nn75b                         1/1     Running   0          21h
    kube-system   calico-node-tt67g                         0/1     Running   0          21h
    kube-system   coredns-85cf76fcdc-kngxp                  1/1     Running   0          8s
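
    With CoreDNS running, in-cluster DNS can be verified from a throwaway pod (a sketch, assuming the nodes can pull busybox:1.28 from Docker Hub; later busybox tags have a broken nslookup):

    kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default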
    

    Start an nginx pod to test:

    kubectl create deployment nginx  --image=nginx
    kubectl expose deployment nginx --port=80 --type=NodePort
    

    Check the status and test:

    root@k8s-master:/etc/kubernetes/dashboard# kubectl get svc,pod 
    NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        41h
    service/nginx        NodePort    10.96.65.36   <none>        80:43989/TCP   94m
    
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/nginx-6799fc88d8-v9g52   1/1     Running   0          95m
    
    root@k8s-master:/etc/kubernetes/dashboard# curl 10.96.65.36 -I
    HTTP/1.1 200 OK
    Server: nginx/1.21.3
    Date: Thu, 23 Sep 2021 05:32:26 GMT
    Content-Type: text/html
    Content-Length: 615
    Last-Modified: Tue, 07 Sep 2021 15:21:03 GMT
    Connection: keep-alive
    ETag: "6137835f-267"
    Accept-Ranges: bytes
    
