Binary Installation of K8S (Based on Version 1.19.16)

Author: 背着耿鬼的蒜头 | Published 2022-02-24 12:09

    Preface

    Installing a K8S cluster with kubeadm is quick, but if you need to tune the security configuration and high-availability setup of the individual K8S components and services, it is better to install K8S from binaries.
    In production the K8S Master should run on at least 3 nodes, each with no less than 4 CPU cores and 16 GB of RAM.
    Production environment: make the Master highly available and enable secure access mechanisms.

    • Run the Master's kube-apiserver, kube-controller-manager, and kube-scheduler services as multiple instances on at least three nodes
    • Enable the CA-based HTTPS security mechanism on the Master
    • Deploy etcd as a cluster of at least three nodes
    • Enable the CA-based HTTPS security mechanism for the etcd cluster
    • Enable the RBAC authorization mode on the Master

    PS:

    • The steps below fully support the K8S v1.19 components; for other versions, adjust the relevant configuration according to the official documentation
    • In this walkthrough the k8s-master and k8s-node components are deployed together on every server. In a production environment the two roles must be deployed separately, i.e. the k8s-master and k8s-node components should not run on the same server

    Server nodes:

    The node names are just labels and say nothing about whether a machine is a k8s-master or a k8s-node; add or remove test machines as needed, e.g. deploy only the k8s-master components on the master hosts and only the k8s-node components on the node hosts.
    master 192.168.100.100
    node1 192.168.100.101
    node2 192.168.100.102

    Architecture diagram

    (Figure: Kubernetes.drawio.png)

    1. Initial setup

    1.1 Basic server setup (all servers)

    #Update the yum repositories
    yum update -y
    #Install dependencies and tools
    yum install net-tools curl wget epel-release vim -y
    
    curl -o /etc/yum.repos.d/konimex-neofetch-epel-7.repo https://copr.fedorainfracloud.org/coprs/konimex/neofetch/repo/epel-7/konimex-neofetch-epel-7.repo
    
    yum install git lsof ncdu neofetch psmisc htop openssl -y
    
    yum install -y conntrack ipvsadm ipset jq sysstat iptables libseccomp -y
    
    #Set the hostname on each node and add all nodes to /etc/hosts
    hostnamectl set-hostname [hostname]
    
    cat >> /etc/hosts <<EOF
    192.168.100.100 master
    192.168.100.101 node1
    192.168.100.102 node2
    EOF
    
    #Generate a key pair and register the public key on every server so the three servers can SSH to each other without a password (convenient for copying configuration files later)
    ssh-keygen -t "rsa"
    
    #Register the public key on each server
    ssh-copy-id root@master
    ssh-copy-id root@node1
    ssh-copy-id root@node2
    
    #Test the connections and record the known_hosts entries
    ssh root@master
    ssh root@node1
    ssh root@node2
    

    1.2 Docker installation (all servers)

    #Create the Docker data directory
    mkdir -p /data/docker
    
    #Install yum-utils
    yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
    
    #Add the Docker repository
    yum-config-manager \
        --add-repo \
        http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    #Install Docker
    yum install docker-ce docker-ce-cli containerd.io -y
    
    #Create the Docker configuration directory
    mkdir -p /etc/docker
    
    #Configure Docker (data-root replaces the deprecated graph option and moves Docker's storage to /data/docker)
    cat > /etc/docker/daemon.json <<EOF
    {
      "registry-mirrors": [
        "https://registry.docker-cn.com"
      ],
      "data-root": "/data/docker"
    }
    EOF
    
    #Start Docker and enable it at boot
    systemctl start docker
    systemctl enable docker
    
    #Edit /usr/lib/systemd/system/docker.service and locate the line
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    #then add the following line right after it
    ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
    
    #Reload systemd and restart Docker
    systemctl daemon-reload
    systemctl restart docker
    

    1.3 K8S prerequisites (all servers)

    #Disable the firewall
    systemctl stop firewalld
    systemctl disable firewalld
    
    #Disable SELinux
    #Permanently
    sed -i 's/enforcing/disabled/' /etc/selinux/config
    #Temporarily
    setenforce 0
    
    #Disable swap
    #Temporarily
    swapoff -a
    #Permanently
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    
    #Pass bridged IPv4 traffic to the iptables chains
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    #Apply the configuration (either command works)
    sysctl --system    #or: sysctl -p /etc/sysctl.d/k8s.conf
    
    #Time synchronization (skip this step if the servers already keep their clocks in sync)
    yum install ntpdate -y
    ntpdate time.windows.com
    
    #On newer systems time synchronization is handled by chrony
    vim /etc/chrony.conf
    #Replace the server entries
    server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
    server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
    server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
    server ntp4.aliyun.com iburst minpoll 4 maxpoll 10
    server ntp5.aliyun.com iburst minpoll 4 maxpoll 10
    server ntp6.aliyun.com iburst minpoll 4 maxpoll 10
    server ntp7.aliyun.com iburst minpoll 4 maxpoll 10
    
    #Restart the service
    systemctl restart chronyd
    
    #Create the required directories
    mkdir -p /data/etcd/{data,ssl}
    mkdir -p /data/k8s/{ssl,conf,log}
    
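    Note: the net.bridge.bridge-nf-call-* keys only exist while the br_netfilter kernel module is loaded. If sysctl --system complains that those settings do not exist, load the module first and make it persistent across reboots; a minimal sketch:

    #Load the br_netfilter module now
    modprobe br_netfilter

    #Load it automatically at boot
    cat > /etc/modules-load.d/k8s.conf <<EOF
    br_netfilter
    EOF

    #Re-apply the sysctl settings
    sysctl --system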

    2. User creation (if you create this user, perform the subsequent steps as the k8s user)

    #Create the k8s user
    useradd -m k8s
    
    #Set the k8s user's password
    sh -c 'echo [password] | passwd k8s --stdin'
    
    #Adjust the sudo configuration
    visudo
    #Uncomment the line %wheel ALL=(ALL) NOPASSWD: ALL
    
    #Verify the change
    grep '%wheel.*NOPASSWD: ALL' /etc/sudoers
    
    #Add the k8s user to the wheel group
    gpasswd -a k8s wheel
    
    #Check the user
    id k8s
    
    #On every machine add a docker account, add the k8s account to the docker group, and configure the dockerd parameters accordingly
    useradd -m docker
    gpasswd -a k8s docker
    

    3. Create the certificates

    Note: the certificates created in this section are used throughout the whole deployment. This walkthrough generates them with openssl; to use cfssl instead, follow a cfssl guide for generating k8s certificates and substitute the resulting files at the corresponding certificate paths in the configuration files.

    3.1 Create the CA root certificate

    cd /data/k8s/ssl
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.100.100" -days 36500 -out ca.crt
    

    3.2 Create the etcd SSL certificate configuration

    cat > /data/k8s/ssl/etcd_ssl.conf <<EOF
    [ req ]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name
    
    [ req_distinguished_name ]
    
    [ v3_req ]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @alt_names
    
    [ alt_names ]
    IP.1 = 192.168.100.100
    IP.2 = 192.168.100.101
    IP.3 = 192.168.100.102
    EOF
    

    3.3 Generate the server and client certificates

    #Generate the server certificate
    openssl genrsa -out etcd_server.key 2048
    openssl req -new -key etcd_server.key -config etcd_ssl.conf -subj "/CN=etcd-server" -out etcd_server.csr
    openssl x509 -req -in etcd_server.csr -CA /data/k8s/ssl/ca.crt -CAkey /data/k8s/ssl/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.conf -out etcd_server.crt
    
    #Generate the client certificate
    openssl genrsa -out etcd_client.key 2048
    openssl req -new -key etcd_client.key -config etcd_ssl.conf -subj "/CN=etcd-client" -out etcd_client.csr
    openssl x509 -req -in etcd_client.csr -CA /data/k8s/ssl/ca.crt -CAkey /data/k8s/ssl/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.conf -out etcd_client.crt
    

    3.4 Write the configuration file and generate the certificates needed by the kube-apiserver service

    cat > /data/k8s/ssl/master_ssl.cnf <<EOF
    [ req ]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name
    
    [ req_distinguished_name ]
    
    [ v3_req ]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @alt_names
    
    [ alt_names ]
    DNS.1 = kubernetes
    DNS.2 = kubernetes.default
    DNS.3 = kubernetes.default.svc
    DNS.4 = kubernetes.default.svc.cluster.local
    DNS.5 = k8s-1
    DNS.6 = k8s-2
    DNS.7 = k8s-3
    IP.1 = 169.169.0.1
    IP.2 = 192.168.100.100
    IP.3 = 192.168.100.101
    IP.4 = 192.168.100.102
    IP.5 = 192.168.100.200
    EOF
    

    In the [alt_names] section of the configuration file you must list all of the Master services' domain names and IP addresses, including:

    • the DNS host names, e.g. k8s-1, k8s-2, k8s-3;
    • the Master Service virtual service names, e.g. kubernetes.default;
    • the IP addresses, including the address of every host running kube-apiserver and the load balancer's address, e.g. 192.168.100.100, 192.168.100.101, 192.168.100.102 and 192.168.100.200;
    • the ClusterIP address of the Master Service virtual service, e.g. 169.169.0.1

    3.5 Generate the kube-apiserver certificate

    cd /data/k8s/ssl
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.100.100" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
    
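    To confirm that every host name and IP address from [alt_names] actually made it into the certificate, the SAN list can be printed (an optional check on the files generated above):

    openssl x509 -noout -text -in /data/k8s/ssl/apiserver.crt | grep -A1 "Subject Alternative Name"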

    3.6 Generate the certificates needed by the k8s client components (one shared certificate is enough)

    • The k8s clients include kube-controller-manager, kube-scheduler, kubelet, and kube-proxy; they can all share a single certificate
    • CN can be set to admin and is used as the user name of the client connecting to kube-apiserver
    cd /data/k8s/ssl
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=admin" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out client.crt
    

    Distribute the certificates

    scp /data/k8s/ssl/*  root@node1:/data/k8s/ssl
    scp /data/k8s/ssl/*  root@node2:/data/k8s/ssl
    

    Verify a certificate

    openssl x509 -noout -text -in <certificate>.crt
    

    4. Deploy etcd (k8s-master)

    4.1 Download etcd

    wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
    tar -zxvf etcd-v3.5.2-linux-amd64.tar.gz
    cd etcd-v3.5.2-linux-amd64
    mv etcd* /usr/bin
    cd ..
    rm -rf etcd-v3.5.2-linux-amd64*
    

    4.2 Write the etcd configuration files

    Parameter reference

    • ETCD_NAME: the etcd node name; it must be different on each node
    • ETCD_DATA_DIR: the etcd data directory
    • ETCD_LISTEN_CLIENT_URLS, ETCD_ADVERTISE_CLIENT_URLS: the listen and advertise URLs for client traffic
    • ETCD_LISTEN_PEER_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS: the listen and advertise URLs for the other cluster members (peer traffic)
    • ETCD_INITIAL_CLUSTER_TOKEN: the cluster token (name)
    • ETCD_INITIAL_CLUSTER: the endpoint list of all cluster members, in the form etcd-node-name=https://IP:2380
    • ETCD_INITIAL_CLUSTER_STATE: the initial cluster state; new for a new cluster, existing when joining an existing one
    • ETCD_CERT_FILE: full path of the etcd server certificate (.crt)
    • ETCD_KEY_FILE: full path of the etcd server private key (.key)
    • ETCD_TRUSTED_CA_FILE: full path of the CA root certificate (.crt)
    • ETCD_CLIENT_CERT_AUTH: whether to enable client certificate authentication
    • ETCD_PEER_CERT_FILE: full path of the certificate (.crt) used for peer authentication between cluster members
    • ETCD_PEER_KEY_FILE: full path of the private key (.key) used for peer authentication between cluster members
    • ETCD_PEER_TRUSTED_CA_FILE: full path of the CA root certificate (.crt)
    cd /data/etcd
    
    #master
    cat > /data/etcd/etcd.conf <<EOF
    # etcd node 1
    ETCD_NAME=etcd1
    ETCD_DATA_DIR=/data/etcd/data
    
    ETCD_CERT_FILE=/data/k8s/ssl/etcd_server.crt
    ETCD_KEY_FILE=/data/k8s/ssl/etcd_server.key
    ETCD_TRUSTED_CA_FILE=/data/k8s/ssl/ca.crt
    ETCD_CLIENT_CERT_AUTH=true
    ETCD_LISTEN_CLIENT_URLS=https://192.168.100.100:2379
    ETCD_ADVERTISE_CLIENT_URLS=https://192.168.100.100:2379
    
    ETCD_PEER_CERT_FILE=/data/k8s/ssl/etcd_server.crt
    ETCD_PEER_KEY_FILE=/data/k8s/ssl/etcd_server.key
    ETCD_PEER_TRUSTED_CA_FILE=/data/k8s/ssl/ca.crt
    ETCD_LISTEN_PEER_URLS=https://192.168.100.100:2380
    ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.100.100:2380
    
    ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
    ETCD_INITIAL_CLUSTER="etcd1=https://192.168.100.100:2380,etcd2=https://192.168.100.101:2380,etcd3=https://192.168.100.102:2380"
    ETCD_INITIAL_CLUSTER_STATE=new
    
    EOF
    
    #node1
    cat > /data/etcd/etcd.conf <<EOF
    # etcd node 2
    ETCD_NAME=etcd2
    ETCD_DATA_DIR=/data/etcd/data
    
    ETCD_CERT_FILE=/data/k8s/ssl/etcd_server.crt
    ETCD_KEY_FILE=/data/k8s/ssl/etcd_server.key
    ETCD_TRUSTED_CA_FILE=/data/k8s/ssl/ca.crt
    ETCD_CLIENT_CERT_AUTH=true
    ETCD_LISTEN_CLIENT_URLS=https://192.168.100.101:2379
    ETCD_ADVERTISE_CLIENT_URLS=https://192.168.100.101:2379
    
    ETCD_PEER_CERT_FILE=/data/k8s/ssl/etcd_server.crt
    ETCD_PEER_KEY_FILE=/data/k8s/ssl/etcd_server.key
    ETCD_PEER_TRUSTED_CA_FILE=/data/k8s/ssl/ca.crt
    ETCD_LISTEN_PEER_URLS=https://192.168.100.101:2380
    ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.100.101:2380
    
    ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
    ETCD_INITIAL_CLUSTER="etcd1=https://192.168.100.100:2380,etcd2=https://192.168.100.101:2380,etcd3=https://192.168.100.102:2380"
    ETCD_INITIAL_CLUSTER_STATE=new
    
    EOF
    
    #node2
    cat > /data/etcd/etcd.conf <<EOF
    # etcd node 3
    ETCD_NAME=etcd3
    ETCD_DATA_DIR=/data/etcd/data
    
    ETCD_CERT_FILE=/data/k8s/ssl/etcd_server.crt
    ETCD_KEY_FILE=/data/k8s/ssl/etcd_server.key
    ETCD_TRUSTED_CA_FILE=/data/k8s/ssl/ca.crt
    ETCD_CLIENT_CERT_AUTH=true
    ETCD_LISTEN_CLIENT_URLS=https://192.168.100.102:2379
    ETCD_ADVERTISE_CLIENT_URLS=https://192.168.100.102:2379
    
    ETCD_PEER_CERT_FILE=/data/k8s/ssl/etcd_server.crt
    ETCD_PEER_KEY_FILE=/data/k8s/ssl/etcd_server.key
    ETCD_PEER_TRUSTED_CA_FILE=/data/k8s/ssl/ca.crt
    ETCD_LISTEN_PEER_URLS=https://192.168.100.102:2380
    ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.100.102:2380
    
    ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
    ETCD_INITIAL_CLUSTER="etcd1=https://192.168.100.100:2380,etcd2=https://192.168.100.101:2380,etcd3=https://192.168.100.102:2380"
    ETCD_INITIAL_CLUSTER_STATE=new
    
    EOF
    

    4.3 Create the etcd systemd service file

    cat > /usr/lib/systemd/system/etcd.service <<EOF
    [Unit]
    Description=etcd key-value store
    Documentation=https://github.com/etcd-io/etcd
    After=network.target
    
    [Service]
    EnvironmentFile=/data/etcd/etcd.conf
    ExecStart=/usr/bin/etcd
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    EOF
    

    4.4 Start and enable the etcd service, then check the cluster health

    systemctl start etcd
    systemctl status etcd
    systemctl enable etcd
    
    etcdctl --cacert=/data/k8s/ssl/ca.crt --cert=/data/k8s/ssl/etcd_client.crt --key=/data/k8s/ssl/etcd_client.key --endpoints=https://192.168.100.100:2379,https://192.168.100.101:2379,https://192.168.100.102:2379 endpoint health
    
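    Besides endpoint health, the member list and a status table are a quick way to confirm that all three members joined and that a leader was elected (an optional check using the same client certificate):

    etcdctl --cacert=/data/k8s/ssl/ca.crt --cert=/data/k8s/ssl/etcd_client.crt --key=/data/k8s/ssl/etcd_client.key --endpoints=https://192.168.100.100:2379,https://192.168.100.101:2379,https://192.168.100.102:2379 member list
    etcdctl --cacert=/data/k8s/ssl/ca.crt --cert=/data/k8s/ssl/etcd_client.crt --key=/data/k8s/ssl/etcd_client.key --endpoints=https://192.168.100.100:2379,https://192.168.100.101:2379,https://192.168.100.102:2379 endpoint status -w table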

    5. Download the Kubernetes files

    5.1 Download the Kubernetes Server binaries

    mkdir -p /data/packages && cd /data/packages
    wget https://dl.k8s.io/v1.19.16/kubernetes-server-linux-amd64.tar.gz
    

    5.2 Download the Kubernetes Node binaries

    cd /data/packages
    wget https://dl.k8s.io/v1.19.16/kubernetes-node-linux-amd64.tar.gz
    
    • The Kubernetes Server binaries package contains all of the service binaries needed on the server side
    • The Kubernetes Node binaries package contains all of the service binaries needed on a Node
    • On a Kubernetes Master node the services to deploy are etcd, kube-apiserver, kube-controller-manager, and kube-scheduler
    • On a Kubernetes Node the services to deploy are Docker, kubelet, and kube-proxy

    5.3 Extract the packages and move the executables into /usr/bin

    mkdir -p /data/packages/kubernetes-server
    mkdir -p /data/packages/kubernetes-node
    mv kubernetes-server-linux-amd64.tar.gz /data/packages/kubernetes-server
    mv kubernetes-node-linux-amd64.tar.gz /data/packages/kubernetes-node
    cd /data/packages/kubernetes-server && tar -zxvf kubernetes-server-linux-amd64.tar.gz
    cd /data/packages/kubernetes-node && tar -zxvf kubernetes-node-linux-amd64.tar.gz
    cd /data/packages/kubernetes-server/kubernetes/server/bin && mv apiextensions-apiserver kube-apiserver kube-controller-manager kubectl kubeadm kube-proxy kube-scheduler mounter kube-aggregator kubelet /usr/bin
    cd /data/packages/kubernetes-node/kubernetes/node/bin && mv ./* /usr/bin
    
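    A quick sanity check that the binaries are on the PATH and report the expected version:

    kube-apiserver --version
    kubelet --version
    kubectl version --client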

    File                                 Description
    kube-apiserver                       kube-apiserver main binary
    kube-apiserver.docker_tag            tag of the kube-apiserver Docker image
    kube-apiserver.tar                   kube-apiserver Docker image file
    kube-controller-manager              kube-controller-manager main binary
    kube-controller-manager.docker_tag   tag of the kube-controller-manager Docker image
    kube-controller-manager.tar          kube-controller-manager Docker image file
    kube-scheduler                       kube-scheduler main binary
    kube-scheduler.docker_tag            tag of the kube-scheduler Docker image
    kube-scheduler.tar                   kube-scheduler Docker image file
    kubelet                              kubelet main binary
    kube-proxy                           kube-proxy main binary
    kube-proxy.docker_tag                tag of the kube-proxy Docker image
    kube-proxy.tar                       kube-proxy Docker image file
    kubectl                              command-line client tool
    kubeadm                              command-line tool for installing Kubernetes clusters
    apiextensions-apiserver              extension API server that implements custom resource objects
    kube-aggregator                      aggregated API server binary

    6. Deploy kube-apiserver (k8s-master)

    6.1 Create the kube-apiserver systemd service file

    vim  /usr/lib/systemd/system/kube-apiserver.service
    
    
    #Service file content
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=/data/k8s/conf/apiserver
    ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    

    6.2 Create the environment configuration file

    cat > /data/k8s/conf/apiserver <<EOF
    KUBE_API_ARGS="--insecure-port=0 \
    --secure-port=6443 \
    --tls-cert-file=/data/k8s/ssl/apiserver.crt \
    --tls-private-key-file=/data/k8s/ssl/apiserver.key \
    --client-ca-file=/data/k8s/ssl/ca.crt \
    --apiserver-count=3 \
    --endpoint-reconciler-type=master-count \
    --etcd-servers=https://192.168.100.100:2379,https://192.168.100.101:2379,https://192.168.100.102:2379 \
    --etcd-cafile=/data/k8s/ssl/ca.crt \
    --etcd-certfile=/data/k8s/ssl/etcd_client.crt \
    --etcd-keyfile=/data/k8s/ssl/etcd_client.key \
    --service-cluster-ip-range=169.169.0.0/16 \
    --service-node-port-range=30000-32767 \
    --allow-privileged=true \
    --logtostderr=false \
    --log-dir=/data/k8s/log/ \
    --v=0"
    EOF
    

    Configuration reference:

    • --secure-port: the HTTPS port, default 6443
    • --insecure-port: the HTTP port, default 8080; setting it to 0 disables HTTP access
    • --tls-cert-file: full path of the server certificate file
    • --tls-private-key-file: full path of the server private key file
    • --client-ca-file: full path of the CA root certificate
    • --apiserver-count: the number of API Server instances; --endpoint-reconciler-type=master-count must be set together with it
    • --etcd-servers: the list of etcd URLs to connect to; note that the port is 2379
    • --etcd-cafile: full path of the CA root certificate used for etcd
    • --etcd-certfile: full path of the etcd client certificate
    • --etcd-keyfile: full path of the etcd client private key
    • --service-cluster-ip-range: the Service virtual IP range in CIDR notation; it must not overlap with the physical machines' IP range
    • --service-node-port-range: the range of host ports that Services may use, default 30000-32767
    • --allow-privileged: whether containers are allowed to run in privileged mode
    • --logtostderr: whether to log to stderr, default true; under systemd the logs then go to journald, set it to false to write log files instead
    • --log-dir: the log output directory
    • --v: the log verbosity level

    Distribute the configuration files

    scp /data/k8s/conf/apiserver  root@node1:/data/k8s/conf
    scp /usr/lib/systemd/system/kube-apiserver.service root@node1:/usr/lib/systemd/
    scp /data/k8s/conf/apiserver  root@node2:/data/k8s/conf
    scp /usr/lib/systemd/system/kube-apiserver.service root@node2:/usr/lib/systemd/
    

    6.3 Start the service

    systemctl start kube-apiserver
    systemctl enable kube-apiserver
    systemctl status kube-apiserver
    
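    To verify that kube-apiserver answers on the secure port, the /healthz endpoint can be queried with the client certificate from section 3.6 (a quick check; the expected response is "ok"):

    curl --cacert /data/k8s/ssl/ca.crt --cert /data/k8s/ssl/client.crt --key /data/k8s/ssl/client.key https://192.168.100.100:6443/healthz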

    7. Deploy the k8s-master client components (k8s-master)

    7.1 Configuration file

    • Create a kubeconfig file as the configuration for connecting to kube-apiserver; all client components (kube-controller-manager, kube-scheduler, kubelet, kube-proxy) share it
    cat > /data/k8s/conf/kubeconfig <<EOF
    apiVersion: v1
    kind: Config
    clusters:
    - name: default
      cluster:
        server: https://192.168.100.200:9443
        certificate-authority: /data/k8s/ssl/ca.crt
    users:
    - name: admin
      user:
        client-certificate: /data/k8s/ssl/client.crt
        client-key: /data/k8s/ssl/client.key
    contexts:
    - context:
        cluster: default
        user: admin
      name: default
    current-context: default
    EOF
    

    Configuration reference:

    • server: the URL, set to the VIP address used by the load balancer (HAProxy) and the port HAProxy listens on
    • certificate-authority: full path of the CA root certificate
    • client-certificate: full path of the client certificate
    • client-key: full path of the client private key
    • user.name under users and user under contexts: the user name for connecting to the API Server; keep it identical to the "/CN" name in the client certificate
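
    The same kubeconfig can also be generated with kubectl instead of writing the YAML by hand; an equivalent sketch (it references the certificate files by path rather than embedding them):

    kubectl config set-cluster default --server=https://192.168.100.200:9443 --certificate-authority=/data/k8s/ssl/ca.crt --kubeconfig=/data/k8s/conf/kubeconfig
    kubectl config set-credentials admin --client-certificate=/data/k8s/ssl/client.crt --client-key=/data/k8s/ssl/client.key --kubeconfig=/data/k8s/conf/kubeconfig
    kubectl config set-context default --cluster=default --user=admin --kubeconfig=/data/k8s/conf/kubeconfig
    kubectl config use-context default --kubeconfig=/data/k8s/conf/kubeconfig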

    7.2 Deploy kube-controller-manager

    7.2.1 Create the kube-controller-manager systemd service file

    vim  /usr/lib/systemd/system/kube-controller-manager.service
    
    
    #Service file content
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=/data/k8s/conf/controller-manager
    ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    

    7.2.2 Create the environment configuration file

    cat > /data/k8s/conf/controller-manager <<EOF
    KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --leader-elect=true \
    --service-cluster-ip-range=169.169.0.0/16 \
    --service-account-private-key-file=/data/k8s/ssl/apiserver.key \
    --root-ca-file=/data/k8s/ssl/ca.crt \
    --log-dir=/data/k8s/log/ \
    --logtostderr=false \
    --v=0"
    EOF
    

    Configuration reference

    • --kubeconfig: the configuration for connecting to the API Server
    • --leader-elect: enable leader election
    • --service-account-private-key-file: full path of the private key used to sign ServiceAccount tokens
    • --root-ca-file: full path of the CA certificate
    • --service-cluster-ip-range: the Service virtual IP range in CIDR notation; it must match the value set on the API Server

    Distribute the configuration files

    scp /data/k8s/conf/kubeconfig root@node1:/data/k8s/conf
    scp /data/k8s/conf/controller-manager root@node1:/data/k8s/conf
    scp /usr/lib/systemd/system/kube-controller-manager.service root@node1:/usr/lib/systemd/system/
    scp /data/k8s/conf/kubeconfig root@node2:/data/k8s/conf
    scp /data/k8s/conf/controller-manager root@node2:/data/k8s/conf
    scp /usr/lib/systemd/system/kube-controller-manager.service root@node2:/usr/lib/systemd/system/
    

    7.2.3 Start the service

    systemctl start kube-controller-manager
    systemctl enable kube-controller-manager
    systemctl status kube-controller-manager
    
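    With --leader-elect=true only one of the three kube-controller-manager instances is active at a time. The current leader can be inspected through the leader-election record in kube-system (which resource holds it depends on the --leader-elect-resource-lock setting, so check both; an optional sketch):

    kubectl --kubeconfig=/data/k8s/conf/kubeconfig -n kube-system get endpoints kube-controller-manager -o yaml
    kubectl --kubeconfig=/data/k8s/conf/kubeconfig -n kube-system get lease kube-controller-manager -o yaml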

    7.3 Deploy kube-scheduler

    7.3.1 Create the kube-scheduler systemd service file

    vim  /usr/lib/systemd/system/kube-scheduler.service
    
    
    #Service file content
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=/data/k8s/conf/scheduler
    ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    

    7.3.2 Create the environment configuration file

    cat > /data/k8s/conf/scheduler <<EOF
    KUBE_SCHEDULER_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --leader-elect=true \
    --log-dir=/data/k8s/log/ \
    --logtostderr=false \
    --v=0"
    EOF
    

    Distribute the configuration files

    scp /data/k8s/conf/scheduler root@node1:/data/k8s/conf
    scp /usr/lib/systemd/system/kube-scheduler.service  root@node1:/usr/lib/systemd/system/
    scp /data/k8s/conf/scheduler root@node2:/data/k8s/conf
    scp /usr/lib/systemd/system/kube-scheduler.service  root@node2:/usr/lib/systemd/system/
    

    7.3.3 Start the service

    systemctl start kube-scheduler
    systemctl enable kube-scheduler
    systemctl status kube-scheduler
    
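    At this point the control-plane components can be checked in one go; kubectl get componentstatuses is deprecated but still works on v1.19 and should report the scheduler, the controller-manager, and the etcd members as Healthy:

    kubectl --kubeconfig=/data/k8s/conf/kubeconfig get componentstatuses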

    8. Deploy the highly available load balancer (k8s-master)

    Deploy HAProxy and keepalived in front of the three kube-apiserver instances and use the VIP (virtual IP address) 192.168.100.200 as the single entry point to the Master for clients.
    Deploy HAProxy and keepalived with at least two instances each to avoid a single point of failure.
    Below they are deployed on host master (192.168.100.100) and host node1 (192.168.100.101).
    HAProxy forwards client requests to the three backend kube-apiserver instances, and keepalived keeps the VIP highly available.

    8.1 Architecture diagram

    8.2 Deploy HAProxy

    Prepare the HAProxy configuration file haproxy.cfg

    mkdir -p /data/haproxy/conf
    cd /data/haproxy/conf
    
    cat > /data/haproxy/conf/haproxy.cfg <<EOF
    global
      log           127.0.0.1 local2
      maxconn       4096
      user          haproxy
      group         haproxy
      daemon
      stats socket  /var/lib/haproxy/stats
    
    defaults
      mode          http
      log           global
      option        httplog
      option        dontlognull
      option        http-server-close
      option        redispatch
      retries       3
      timeout       http-request    10s
      timeout       queue           1m
      timeout       connect         10s
      timeout       client          1m
      timeout       server          1m
      timeout       http-keep-alive 10s
      timeout       check           10s
      maxconn       3000
    
    frontend kube-apiserver
      mode                  tcp
      bind                  *:9443
      option                tcplog
      default_backend       kube-apiserver
    
    listen stats
      mode          http
      bind          *:8888
      stats auth    admin:password
      stats refresh 5s
      stats realm   HAProxy\ Statistics
      stats uri     /stats
      log           127.0.0.1 local3 err
    
    backend kube-apiserver
      mode          tcp
      balance       roundrobin
      server        k8s-master1 192.168.100.100:6443 check
      server        k8s-master2 192.168.100.101:6443 check
      server        k8s-master3 192.168.100.102:6443 check
    EOF
    

    Parameter notes:

    • frontend: the protocol and port HAProxy listens on; TCP on port 9443
    • backend: the addresses of the three backend kube-apiserver instances in IP:Port form; mode sets the protocol and balance sets the load-balancing policy, here roundrobin (round robin)
    • listen stats: the status-monitoring service; bind sets the listening port, e.g. 8888; stats auth sets the access credentials (replace password in the configuration above with a password of your own); stats uri sets the URL path

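    Before distributing the file, the syntax can be validated with HAProxy's check mode (a quick test that reuses the same image as the deployment below):

    docker run --rm -v /data/haproxy/conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:lts haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
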
    Distribute the configuration file

    scp /data/haproxy/conf/haproxy.cfg root@node1:/data/haproxy/conf/
    

    On master (192.168.100.100) and node1 (192.168.100.101), run HAProxy with Docker and mount the configuration file into the container under /usr/local/etc/haproxy

    docker run -d --name k8s-haproxy \
        --net=host \
      --restart=always \
      -v /data/haproxy/conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg \
      haproxy:lts
    

    Browse to 192.168.100.100:8888/stats and 192.168.100.101:8888/stats to view the statistics page

    8.3 Deploy keepalived

    keepalived keeps the VIP address highly available; deploy it on master and node1 as well

    #Create the configuration and script directories
    mkdir -p /data/keepalived/{conf,bin}
    

    Master node configuration file

    cat > /data/keepalived/conf/keepalived.conf <<EOF
    ! Configuration File for keepalived
    
    global_defs {
        router_id LVS_1
    }
    
    vrrp_script checkhaproxy {
        script "/usr/bin/check-haproxy.sh"
      interval 2
      weight -30
    }
    
    vrrp_instance VI_1 {
        state MASTER
      interface ens33
      virtual_router_id 51
      priority 100
      advert_int 1
      
      virtual_ipaddress {
        192.168.100.200/24 dev ens33
      }
      
      authentication {
        auth_type PASS
        auth_pass password
      }
      
      track_script {
        checkhaproxy
      }
    }
    EOF
    

    Backup node configuration file

    cat > /data/keepalived/conf/keepalived.conf <<EOF
    ! Configuration File for keepalived
    
    global_defs {
        router_id LVS_1
    }
    
    vrrp_script checkhaproxy {
        script "/usr/bin/check-haproxy.sh"
      interval 2
      weight -30
    }
    
    vrrp_instance VI_1 {
        state BACKUP
      interface ens33
      virtual_router_id 51
      priority 90
      advert_int 1
      
      virtual_ipaddress {
        192.168.100.200/24 dev ens33
      }
      
      authentication {
        auth_type PASS
        auth_pass password
      }
      
      track_script {
        checkhaproxy
      }
    }
    EOF
    

    Parameter reference

    • vrrp_instance VI_1: the name of the keepalived VRRP virtual router instance
    • state: set to MASTER on this node; set the other keepalived instances to BACKUP
    • interface: the network interface on which the VIP will be configured
    • virtual_router_id: the virtual router ID
    • priority: the priority
    • virtual_ipaddress: the VIP address
    • authentication: the authentication settings of the VRRP instance
    • track_script: the health-check script

    Differences between the master and backup configuration

    • state in vrrp_instance should be set to BACKUP on every node except the MASTER node
    • the vrrp_instance name VI_1 and the virtual_router_id must match the MASTER configuration so that a new master is elected automatically when the current one goes down
    • priority on a BACKUP node should be lower than on the MASTER (e.g. 90 versus 100)

    Health-check script
    keepalived needs to monitor HAProxy continuously and move the VIP to another node when HAProxy stops working on the current one.
    Monitoring is done with a health-check script that keepalived runs periodically.
    The script is as follows:

    cat > /data/keepalived/bin/check-haproxy.sh <<'EOF'
    #!/bin/bash

    #Check whether the HAProxy port (9443) is listening
    count=`netstat -apn | grep 9443 | wc -l`

    #Exit 0 if the port was found, otherwise exit 1
    if [ $count -gt 0 ];then
        exit 0
    else
        exit 1
    fi
    EOF

    chmod +x /data/keepalived/bin/check-haproxy.sh
    

    Distribute the script

    scp /data/keepalived/bin/check-haproxy.sh root@node1:/data/keepalived/bin/
    
    

    On master (192.168.100.100) and node1 (192.168.100.101), run keepalived with Docker, mounting the configuration file into the container under /container/service/keepalived/assets and the script under /usr/bin

    docker run -d --name k8s-keepalived \
        --restart=always \
      --net=host \
      --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW \
      -v /data/keepalived/bin/check-haproxy.sh:/usr/bin/check-haproxy.sh \
      -v /data/keepalived/conf/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
      osixia/keepalived:2.0.20 --copy-service
    

    Check that the keepalived VIP address is present on the ens33 interface

    ip addr
    
    $ ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:53:c6:3f brd ff:ff:ff:ff:ff:ff
        inet 192.168.100.100/24 brd 192.168.100.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet 192.168.100.200/24 scope global secondary ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::bfa4:a3de:addb:9013/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    

    Check that requests to the VIP are forwarded to kube-apiserver

    curl -vk https://192.168.100.200:9443
    
    * About to connect() to 192.168.100.200 port 9443 (#0)
    *   Trying 192.168.100.200...
    * Connected to 192.168.100.200 (192.168.100.200) port 9443 (#0)
    * Initializing NSS with certpath: sql:/etc/pki/nssdb
    * skipping SSL peer certificate verification
    * NSS: client certificate not found (nickname not specified)
    * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    * Server certificate:
    *       subject: CN=192.168.100.100
    *       start date: Feb 21 02:21:49 2022 GMT
    *       expire date: Jan 28 02:21:49 2122 GMT
    *       common name: 192.168.100.100
    *       issuer: CN=192.168.100.100
    > GET / HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: 192.168.100.200:9443
    > Accept: */*
    >
    < HTTP/1.1 401 Unauthorized
    < Cache-Control: no-cache, private
    < Content-Type: application/json
    < Date: Tue, 22 Feb 2022 08:44:31 GMT
    < Content-Length: 165
    <
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {
    
      },
      "status": "Failure",
      "message": "Unauthorized",
      "reason": "Unauthorized",
      "code": 401
    * Connection #0 to host 192.168.100.200 left intact
    }#                                           
    

    9. Deploy kubelet (k8s-node)

    Note: the master side is now fully configured. Next, docker, kubelet, and kube-proxy need to be deployed on the node side, and after the nodes join the k8s cluster the CNI network plugin, the DNS add-on, and so on still need to be installed.
    Docker was already installed on master, node1, and node2 earlier; the remaining components are deployed next, starting with kubelet in this section.

    9.1 Create the kubelet systemd service file

    vim  /usr/lib/systemd/system/kubelet.service
    
    
    #Service file content
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    
    [Service]
    EnvironmentFile=/data/k8s/conf/kubelet
    ExecStart=/usr/bin/kubelet $KUBELET_ARGS
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    

    9.2 Create the environment configuration files (one per node)

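    #master (192.168.100.100)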
    cat > /data/k8s/conf/kubelet <<EOF
    KUBELET_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --config=/data/k8s/conf/kubelet.config \
    --hostname-override=192.168.100.100 \
    --network-plugin=cni \
    --logtostderr=false \
    --log-dir=/data/k8s/log/ \
    --v=0"
    EOF
    
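    #node1 (192.168.100.101)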
    cat > /data/k8s/conf/kubelet <<EOF
    KUBELET_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --config=/data/k8s/conf/kubelet.config \
    --hostname-override=192.168.100.101 \
    --network-plugin=cni \
    --logtostderr=false \
    --log-dir=/data/k8s/log/ \
    --v=0"
    EOF
    
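    #node2 (192.168.100.102)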
    cat > /data/k8s/conf/kubelet <<EOF
    KUBELET_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --config=/data/k8s/conf/kubelet.config \
    --hostname-override=192.168.100.102 \
    --network-plugin=cni \
    --logtostderr=false \
    --log-dir=/data/k8s/log/ \
    --v=0"
    EOF
    

    Parameter notes

    • --kubeconfig: the configuration for connecting to the API Server; the same file used by kube-controller-manager earlier
    • --config: the kubelet configuration file, holding parameters that can be shared by several nodes; see the k8s documentation for details
    • --hostname-override: the name of this node in the cluster; it defaults to the host name, but the host IP address is used here to avoid confusion; in production a domain name is recommended
    • --network-plugin: the network plugin type; the CNI plugin is recommended

    9.3 Create the kubelet configuration file

    cat > /data/k8s/conf/kubelet.config <<EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 0.0.0.0
    port: 10250
    cgroupDriver: cgroupfs
    clusterDNS: ["169.169.0.100"]
    clusterDomain: cluster.local
    authentication:
      anonymous:
        enabled: true
    EOF
    

    Parameter notes

    • address: the IP address the service listens on
    • port: the port the service listens on, default 10250
    • cgroupDriver: the cgroup driver; the default is cgroupfs, systemd is the other option (it must match the cgroup driver Docker uses)
    • clusterDNS: the IP address of the cluster DNS service
    • clusterDomain: the DNS domain suffix for services
    • authentication: whether anonymous access is allowed or webhook authentication is used

    9.4 Start the service

    systemctl start kubelet
    systemctl enable kubelet
    systemctl status kubelet
    

    10. Deploy the kube-proxy service (k8s-node)

    10.1 Create the kube-proxy systemd service file

    vim  /usr/lib/systemd/system/kube-proxy.service
    
    
    #Service file content
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=/data/k8s/conf/kubeproxy
    ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    

    10.2 Create the environment configuration files (one per node)

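    #master (192.168.100.100)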
    cat > /data/k8s/conf/kubeproxy <<EOF
    KUBE_PROXY_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --hostname-override=192.168.100.100 \
    --proxy-mode=iptables \
    --logtostderr=false \
    --log-dir=/data/k8s/log/ \
    --v=0"
    EOF
    
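    #node1 (192.168.100.101)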
    cat > /data/k8s/conf/kubeproxy <<EOF
    KUBE_PROXY_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --hostname-override=192.168.100.101 \
    --proxy-mode=iptables \
    --logtostderr=false \
    --log-dir=/data/k8s/log/ \
    --v=0"
    EOF
    
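    #node2 (192.168.100.102)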
    cat > /data/k8s/conf/kubeproxy <<EOF
    KUBE_PROXY_ARGS="--kubeconfig=/data/k8s/conf/kubeconfig \
    --hostname-override=192.168.100.102 \
    --proxy-mode=iptables \
    --logtostderr=false \
    --log-dir=/data/k8s/log/ \
    --v=0"
    EOF
    

    Parameter notes

    • --kubeconfig: the configuration for connecting to the API Server; the same file used by kube-controller-manager and kubelet earlier
    • --hostname-override: the name of this node in the cluster; it must match the name kubelet registered with (here the host IP address); in production a domain name is recommended
    • --proxy-mode: the proxy mode; options include iptables, ipvs, and kernelspace (Windows); see the sketch below for enabling ipvs

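    If the ipvs proxy mode is preferred over iptables, the IPVS kernel modules have to be loaded on every node before kube-proxy starts (a sketch; the ipvsadm and ipset packages were already installed in section 1.1):

    #Load the IPVS modules at boot (use nf_conntrack instead of nf_conntrack_ipv4 on kernels >= 4.19)
    cat > /etc/modules-load.d/ipvs.conf <<EOF
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    nf_conntrack_ipv4
    EOF

    #Load them now
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done

    #Then set --proxy-mode=ipvs in /data/k8s/conf/kubeproxy and restart kube-proxy
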
    10.3 Start the service

    systemctl start kube-proxy
    systemctl enable kube-proxy
    systemctl status kube-proxy
    

    11. Check that all component services are healthy

    systemctl status etcd
    systemctl status kube-apiserver
    systemctl status kube-controller-manager
    systemctl status kube-scheduler
    systemctl status kubelet
    systemctl status kube-proxy
    

    12. Verify the node information with kubectl on the Master

    kubectl --kubeconfig=/data/k8s/conf/kubeconfig get nodes
    

    At this point every node shows NotReady; the nodes become usable once the network plugin is deployed.
    To make kubectl easier to use:

    echo "alias kubectl='kubectl --kubeconfig=/data/k8s/conf/kubeconfig'" >> /etc/profile
    source /etc/profile
    
    # ZSH
    # echo "alias kubectl='kubectl --kubeconfig=/data/k8s/conf/kubeconfig'" >> ~/.zshrc
    # source ~/.zshrc
    

    13. Deploy the CNI network plugin

    13.1 Pull the pause image (not needed if the servers can reach the k8s.gcr.io registry directly)

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 k8s.gcr.io/pause:3.2
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2
    
    #Deploy directly from the remote manifest
    kubectl apply -f "https://docs.projectcalico.org/manifests/calico.yaml"

    #Or download the manifest first so its parameters can be adjusted
    cd /data/k8s/conf
    wget https://docs.projectcalico.org/manifests/calico.yaml
    kubectl apply -f calico.yaml
    
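    Note: the stock calico.yaml ships with a commented-out CALICO_IPV4POOL_CIDR of 192.168.0.0/16, which overlaps with the 192.168.100.0/24 host network used in this example. If you download the manifest, consider uncommenting that variable and pointing it at a non-overlapping range before applying it (a sketch, assuming the stock manifest layout):

    vim /data/k8s/conf/calico.yaml
    #In the calico-node container environment, uncomment and adjust:
    #        - name: CALICO_IPV4POOL_CIDR
    #          value: "172.16.0.0/16"
    #then apply the manifest with: kubectl apply -f calico.yaml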

    13.2 Check the nodes again

    kubectl get nodes

    #Check the status of the calico pods
    kubectl get pods --all-namespaces
    
