Chapter 2: Deploying a Kubernetes Cluster


    I. The three official deployment options

    minikube

    Minikube is a tool that runs a single-node Kubernetes locally in minutes; it is only intended for trying out Kubernetes or for day-to-day development.
    Deployment guide: https://kubernetes.io/docs/setup/minikube/

    kubeadm

    Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
    Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

    Binary packages

    Recommended: download the release binaries from the official site and deploy each component by hand to assemble the Kubernetes cluster.
    Download: https://github.com/kubernetes/kubernetes/releases/

    II. Kubernetes platform planning

    1. Component versions

    Software   | Version
    Linux OS   | CentOS 7.6 x86_64
    Kubernetes | 1.12
    Docker     | 18.xx-ce
    Etcd       | 3.x
    Flannel    | 0.10

    2. Roles, node IPs, and components

    Role                   | IP                             | Components                                                     | Recommended spec
    master01               | 10.40.6.201                    | kube-apiserver, kube-controller-manager, kube-scheduler, etcd | 2 cores / 4 GB+
    master02               | 10.40.6.209                    | kube-apiserver, kube-controller-manager, kube-scheduler       | 2 cores / 4 GB+
    node01                 | 10.40.6.210                    | kubelet, kube-proxy, docker, flannel, etcd                    | 2 cores / 4 GB+
    node02                 | 10.40.6.213                    | kubelet, kube-proxy, docker, flannel, etcd                    | 2 cores / 4 GB+
    Load Balancer (Master) | 10.40.6.166, 10.40.6.175 (VIP) | Nginx L4                                                      | 2 cores / 4 GB+
    Load Balancer (Backup) | 10.40.6.167                    | Nginx L4                                                      | 2 cores / 4 GB+
    Registry               | 10.40.6.214                    | Harbor                                                        | 2 cores / 4 GB+

    3. Cluster architecture

    [Figure: single-master cluster architecture]
    [Figure: multi-master cluster architecture]

    III. Self-signed SSL certificates for Kubernetes

    Before deploying, it is recommended to disable selinux and firewalld:
    set SELINUX=disabled in /etc/selinux/config, and run setenforce 0 for the change to take effect immediately;
    stop firewalld: systemctl stop firewalld;
    set the appropriate hostname on each host. (A sketch of these commands follows.)
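
    A minimal sketch of these preparation steps on each host (assumptions: the hostname below is only an example, and disabling firewalld at boot is an extra step beyond what is listed above):

    # setenforce 0
    # sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    # systemctl stop firewalld
    # systemctl disable firewalld
    # hostnamectl set-hostname k8s-master01    ## example hostname, adjust per node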

    Component      | Certificates used
    etcd           | ca.pem, server.pem, server-key.pem
    flannel        | ca.pem, server.pem, server-key.pem
    kube-apiserver | ca.pem, server.pem, server-key.pem
    kubelet        | ca.pem, ca-key.pem
    kube-proxy     | ca.pem, kube-proxy.pem, kube-proxy-key.pem
    kubectl        | ca.pem, admin.pem, admin-key.pem

    IV. Deploying the etcd cluster

    • Binary package download:
    https://github.com/etcd-io/etcd/releases

    Role         | IP          | Components
    k8s-master01 | 10.40.6.201 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
    k8s-node1    | 10.40.6.210 | kubelet, kube-proxy, docker, flannel, etcd
    k8s-node2    | 10.40.6.213 | kubelet, kube-proxy, docker, flannel, etcd

    1. Install the cfssl tools

    cfssl is used to generate the self-signed certificates; download the tools first:

    # wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    # wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    # wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    # chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
    # mv cfssl_linux-amd64 /usr/local/bin/cfssl
    # mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    # mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
    

    2. Generate the etcd certificates

    Create the following three files in /usr/local/src/k8s/etcd-cert (cd into that directory first):

    # cat ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    
    # cat ca-csr.json
    {
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    
    # cat server-csr.json
    {
        "CN": "etcd",
        "hosts": [
        "10.40.6.201",
        "10.40.6.210",
        "10.40.6.213"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }
    

    Generate the certificates:

    # cd /usr/local/src/k8s/etcd-cert
    # cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    # ls *pem
    ca-key.pem  ca.pem  server-key.pem  server.pem
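
    To double-check what was issued, the cfssl-certinfo tool installed earlier can print a certificate's subject, hosts, and validity, for example:

    # cfssl-certinfo -cert server.pem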
    

    3. Deploy etcd

    Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12 (the commands below use the v3.3.10 tarball)
    The following steps are the same on all three planned etcd nodes; the only differences are the IPs and the node name in the etcd configuration file.
    Unpack the binary package:

    # mkdir /opt/etcd/{bin,cfg,ssl} -p
    # tar xvf etcd-v3.3.10-linux-amd64.tar.gz
    # mv etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
    

    Create the etcd configuration file:

    # cat /opt/etcd/cfg/etcd   
    #[Member]
    ETCD_NAME="etcd01"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://10.40.6.201:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.40.6.201:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.40.6.201:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.40.6.201:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://10.40.6.201:2380,etcd02=https://10.40.6.210:2380,etcd03=https://10.40.6.213:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    

    ETCD_NAME                        node name
    ETCD_DATA_DIR                    data directory
    ETCD_LISTEN_PEER_URLS            cluster (peer) listen address
    ETCD_LISTEN_CLIENT_URLS          client listen address
    ETCD_INITIAL_ADVERTISE_PEER_URLS peer address advertised to the cluster
    ETCD_ADVERTISE_CLIENT_URLS       client address advertised to clients
    ETCD_INITIAL_CLUSTER             addresses of the cluster members
    ETCD_INITIAL_CLUSTER_TOKEN       cluster token
    ETCD_INITIAL_CLUSTER_STATE       state when joining the cluster: new for a new cluster, existing to join an existing one

    Manage etcd with systemd:

    # cat /usr/lib/systemd/system/etcd.service 
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/etcd/cfg/etcd
    ExecStart=/opt/etcd/bin/etcd \
    --name=${ETCD_NAME} \
    --data-dir=${ETCD_DATA_DIR} \
    --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
    --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=new \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --peer-cert-file=/opt/etcd/ssl/server.pem \
    --peer-key-file=/opt/etcd/ssl/server-key.pem \
    --trusted-ca-file=/opt/etcd/ssl/ca.pem \
    --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Copy the certificates to the location referenced by the configuration:

    # cp /usr/local/src/k8s/etcd-cert/{ca,server-key,server}.pem /opt/etcd/ssl/
    # ls /opt/etcd/ssl/
    ca.pem  server-key.pem  server.pem
    

    Start etcd and enable it at boot:

    # systemctl start etcd
    Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.
    The log shows the other two nodes have not joined the cluster yet; the first node blocks until they do.
    # systemctl enable etcd
    

    Deploying the other two nodes:
    copy the relevant directories and the unit file from the 10.40.6.201 node to the other two nodes, then adjust the etcd configuration file on each of them (see the sketch after the scp commands):
    /opt/etcd/cfg/etcd

    # scp -r /opt/etcd 10.40.6.210:/opt/
    # scp -r /opt/etcd 10.40.6.213:/opt/
    # scp /usr/lib/systemd/system/etcd.service 10.40.6.210:/usr/lib/systemd/system/
    # scp /usr/lib/systemd/system/etcd.service 10.40.6.213:/usr/lib/systemd/system/
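
    For example, on the 10.40.6.210 node (etcd02) only the node name and the local IPs change; a minimal sketch of the lines to edit in /opt/etcd/cfg/etcd before starting the service (10.40.6.213 uses ETCD_NAME="etcd03" and its own IPs):

    # vim /opt/etcd/cfg/etcd
    ETCD_NAME="etcd02"
    ETCD_LISTEN_PEER_URLS="https://10.40.6.210:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.40.6.210:2379"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.40.6.210:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.40.6.210:2379"
    # systemctl daemon-reload && systemctl start etcd && systemctl enable etcd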
    

    Once all nodes are deployed, check the etcd cluster health:

    # /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379" cluster-health
    
    member 11e9f13e775913c8 is healthy: got healthy result from https://10.40.6.213:2379
    member 188c1664ca149fb2 is healthy: got healthy result from https://10.40.6.210:2379
    member 1e3d872c12b243a1 is healthy: got healthy result from https://10.40.6.201:2379
    cluster is healthy
    

    If you see output like the above, the cluster was deployed successfully. If there is a problem, start with the logs: /var/log/messages or journalctl -u etcd

    V. Install Docker on the Node machines

    # yum install -y yum-utils device-mapper-persistent-data lvm2
    # yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    # yum install docker-ce -y
    # curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
    # systemctl start docker
    # systemctl enable docker
    

    VI. Deploying the Flannel container network

    1. The Kubernetes network model (CNI)

    Container Network Interface (CNI): the container networking interface, driven mainly by Google and CoreOS.

    2. Basic requirements of the Kubernetes network model

    ① One IP per Pod
    ② Each Pod gets its own IP, and all containers in the Pod share that network (the same IP)
    ③ All containers can communicate with all other containers
    ④ All nodes can communicate with all containers

    3. The most common network plugins

    flannel: a tunnel (overlay) solution; packets are encapsulated and decapsulated, which costs some performance. Recommended for clusters of roughly 100 nodes or fewer.
    Calico: a routing solution; traffic is forwarded via routing tables without encapsulation, so performance is better. Recommended for clusters of more than roughly 100 nodes.

    4. Deploying the Kubernetes network with Flannel

    Overlay Network: a virtual network layered on top of the underlying physical network; hosts in the overlay are connected by virtual links.
    VXLAN: encapsulates the original packet in UDP, wraps it with the underlay's IP/MAC as the outer header, transmits it over the physical network, and the tunnel endpoint at the destination decapsulates it and delivers the data to the target address.
    Flannel: one implementation of an overlay network; it also wraps the original packet inside another packet for routing and forwarding. It currently supports UDP, VXLAN, Host-GW, AWS VPC, GCE routing, and other backends.

    If all nodes are on the same LAN, Host-GW is recommended; it has almost no performance overhead (a Host-GW variant of the etcd key written in step 2) below is sketched next).
    If nodes span subnets, VXLAN is recommended; it makes few demands on the underlay network and works in any routed network as long as the nodes can reach each other.
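
    If you do want Host-GW on a single LAN, the etcd key written in step 2) below can simply use a host-gw backend instead of vxlan; a sketch (the rest of this chapter assumes vxlan):

    # /opt/etcd/bin/etcdctl \
    --ca-file=/opt/etcd/ssl/ca.pem \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379" \
    set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "host-gw"}}'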

    1). How the Flannel network works

    [Figure: overlay network]
    [Figure: how Flannel works]

    2). Write the subnet allocation into etcd

    Write the subnet range into etcd for flanneld to use:

    # /opt/etcd/bin/etcdctl \
    --ca-file=/opt/etcd/ssl/ca.pem \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379"  \
    set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
    
    { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
    

    This assigns the large subnet 172.17.0.0/16 to the /coreos.com/network/config key, with backend type vxlan.
    The value of the /coreos.com/network/config key can be read back with get:

    # /opt/etcd/bin/etcdctl \
    --ca-file=/opt/etcd/ssl/ca.pem \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379"  \
    get /coreos.com/network/config
    

    3). Download the binary package

    https://github.com/coreos/flannel/releases

    # wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
    # tar xvf flannel-v0.11.0-linux-amd64.tar.gz
    # mkdir /opt/kubernetes/bin -p
    # mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
    

    4). Deploy and configure Flannel

    # mkdir /opt/kubernetes/cfg -p
    # cat /opt/kubernetes/cfg/flanneld
    FLANNEL_OPTIONS="--etcd-endpoints=https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    

    5). Manage Flannel with systemd

    After flannel starts, the subnet it was assigned is written into /run/flannel/subnet.env:

    # cat /usr/lib/systemd/system/flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/kubernetes/cfg/flanneld
    ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
    ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    

    6). Configure Docker to use the subnet generated by Flannel

    Docker reads the flannel subnet file /run/flannel/subnet.env when it starts.
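
    After flanneld and mk-docker-opts.sh have run, the file looks roughly like this (values taken from this example cluster; the exact variable names depend on the mk-docker-opts.sh version, so treat this as an illustration):

    # cat /run/flannel/subnet.env
    DOCKER_OPT_BIP="--bip=172.17.31.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=false"
    DOCKER_OPT_MTU="--mtu=1450"
    DOCKER_NETWORK_OPTIONS=" --bip=172.17.31.1/24 --ip-masq=false --mtu=1450"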

    # cat /usr/lib/systemd/system/docker.service 
    
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target firewalld.service
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=/run/flannel/subnet.env
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
    ExecReload=/bin/kill -s HUP $MAINPID
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    TimeoutStartSec=0
    Delegate=yes
    KillMode=process
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s
    
    [Install]
    WantedBy=multi-user.target
    

    7). Restart flannel and docker

    # systemctl daemon-reload
    # systemctl start flanneld
    # systemctl enable flanneld
    # systemctl restart docker
    

    8). Verify that it took effect

    # ps -ef |grep docker
    root     17311     1  0 16:07 ?        00:00:00 /usr/bin/dockerd --bip=172.17.31.1/24 --ip-masq=false --mtu=1450
    #  ip addr
        .......
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:94:ca:12:8a brd ff:ff:ff:ff:ff:ff
        inet 172.17.31.1/24 brd 172.17.31.255 scope global docker0
           valid_lft forever preferred_lft forever
    4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
        link/ether 16:c8:15:51:0c:30 brd ff:ff:ff:ff:ff:ff
        inet 172.17.31.0/32 scope global flannel.1
           valid_lft forever preferred_lft forever
        inet6 fe80::14c8:15ff:fe51:c30/64 scope link 
           valid_lft forever preferred_lft forever
    

    Make sure docker0 and flannel.1 are in the same subnet.
    To test cross-node connectivity, ping the docker0 IP of another node from the current node:

    # ping -c 2 172.17.59.1
    PING 172.17.59.1 (172.17.59.1) 56(84) bytes of data.
    64 bytes from 172.17.59.1: icmp_seq=1 ttl=64 time=0.355 ms
    64 bytes from 172.17.59.1: icmp_seq=2 ttl=64 time=0.293 ms
    

    If the ping succeeds, Flannel is working. If not, check the logs first: journalctl -u flanneld

    You can also start a container on each node, ping one container's IP from the other node, and ping between the two containers to verify container-to-container connectivity:

    # docker run -it busybox
    

    9). Inspect the Flannel network information in etcd

    List the subnet parent directory:

    # /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379" ls /coreos.com/network/
    /coreos.com/network/config         
    /coreos.com/network/subnets      
    
    # /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379" ls /coreos.com/network/subnets
    /coreos.com/network/subnets/172.17.31.0-24
    /coreos.com/network/subnets/172.17.59.0-24
    
    # /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379" get /coreos.com/network/subnets/172.17.31.0-24
    {"PublicIP":"10.40.6.210","BackendType":"vxlan","BackendData":{"VtepMAC":"16:c8:15:51:0c:30"}}
    

    /coreos.com/network/config: the key that stores the allocated network.
    /coreos.com/network/subnets: the parent directory of the per-node subnet keys; each key name identifies the subnet that node hands out to its containers.

    VII. Deploying the Master components

    Before deploying the Kubernetes components, make sure etcd, flannel, and docker are all working correctly; fix any problems first.

    Three components:
    kube-apiserver
    kube-controller-manager
    kube-scheduler
    Steps: configuration file -> systemd unit -> start

    1. Generate certificates

    Create the CA certificate:

    # cat ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    
    # cat ca-csr.json
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    
    # cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    

    Generate the apiserver certificate:

    The IPs below are mainly the master and load-balancer addresses; the first IP is the first address of the Service cluster IP range.
    # cat server-csr.json
    {
        "CN": "kubernetes",
        "hosts": [
          "10.0.0.1",  
          "127.0.0.1",
          "10.40.6.201",
          "10.40.6.209",
          "10.40.6.166",
          "10.40.6.175",
          "10.40.6.167",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    
    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    

    Generate the kube-proxy certificate:

    # cat kube-proxy-csr.json
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    
    # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    

    The following certificate files are produced:

    # ls *pem
    ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
    

    Copy the certificates to the certificate directory /opt/kubernetes/ssl/:

    # cp ca.pem server.pem server-key.pem ca-key.pem /opt/kubernetes/ssl/
    

    2. Deploy the apiserver

    Binary package download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
    Downloading kubernetes-server-linux-amd64.tar.gz is enough; it contains all the components needed.

    # mkdir /opt/kubernetes/{bin,cfg,ssl} -p
    # tar xvf kubernetes-server-linux-amd64.tar.gz
    # cd kubernetes/server/bin
    # cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
    

    Create the token file used to authenticate kubelet bootstrap requests (a certificate is issued when a node requests to join the cluster):

    # head -c 16 /dev/urandom |od -An -t x |tr -d ' '    ## generate a token
    5b2ecab909e3ae8f0dc611ba255777c2
    
    # cat /opt/kubernetes/cfg/token.csv
    5b2ecab909e3ae8f0dc611ba255777c2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    

    Column 1: a random string; generate your own (a one-liner that produces the whole file is sketched below)
    Column 2: user name
    Column 3: UID
    Column 4: user group, a Kubernetes user role
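
    A minimal one-liner that generates the token and writes the file in this format (the resulting token value will differ from the example above):

    # BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    # echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/cfg/token.csv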

    Create the apiserver configuration file:

    # cat /opt/kubernetes/cfg/kube-apiserver 
    
    KUBE_APISERVER_OPTS="--logtostderr=false \
    --log-dir=/opt/kubernetes/logs/kube-apiserver \
    --v=4 \
    --etcd-servers=https://10.40.6.201:2379,https://10.40.6.210:2379,https://10.40.6.213:2379 \
    --bind-address=10.40.6.201 \
    --secure-port=6443 \
    --advertise-address=10.40.6.201 \
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 \
    --service-node-port-range=30000-50000 \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
    --authorization-mode=RBAC,Node \
    --enable-bootstrap-token-auth \
    --token-auth-file=/opt/kubernetes/cfg/token.csv \
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  \
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --etcd-cafile=/opt/etcd/ssl/ca.pem \
    --etcd-certfile=/opt/etcd/ssl/server.pem \
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    

    Point it at the certificates generated earlier and make sure it can reach etcd.

    Parameter notes:

    --logtostderr enable logging; with true, logs go to the journal (/var/log/messages) and no log path is needed; with false, logs go to the custom log files
    --v log verbosity; the larger the value, the more detailed the logs
    --etcd-servers etcd cluster endpoints
    --bind-address listen address
    --secure-port HTTPS port
    --advertise-address address advertised to the rest of the cluster
    --allow-privileged allow privileged containers
    --service-cluster-ip-range virtual IP range used for Service cluster IPs
    --service-node-port-range port range allocated to NodePort Services
    --enable-admission-plugins admission control plugins
    --authorization-mode authorization modes; enables RBAC authorization and Node self-management
    --enable-bootstrap-token-auth enable TLS bootstrapping, used to validate requests from kubelets and issue them certificates, e.g. when a node joins the cluster
    --token-auth-file the token file

    Create the log directory:

    # mkdir /opt/kubernetes/logs/kube-apiserver -p
    

    Manage the apiserver with systemd:

    # cat /usr/lib/systemd/system/kube-apiserver.service 
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
    ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    

    Start it:

    # systemctl daemon-reload
    # systemctl enable kube-apiserver
    # systemctl restart kube-apiserver
    

    Check the process: ps aux |grep kube

    3. Deploy the scheduler

    Create the scheduler configuration file:

    # cat /opt/kubernetes/cfg/kube-scheduler 
    
    KUBE_SCHEDULER_OPTS="--logtostderr=false \
    --log-dir=/opt/kubernetes/logs/kube-scheduler \
    --v=4 \
    --master=127.0.0.1:8080 \
    --leader-elect"
    

    Parameter notes:
    --master connect to the local apiserver
    --leader-elect when multiple instances of this component run, elect a leader automatically (HA)

    Create the log directory:

    # mkdir /opt/kubernetes/logs/kube-scheduler -p
    

    Manage the scheduler with systemd:

    # cat /usr/lib/systemd/system/kube-scheduler.service 
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
    ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    

    Start it:

    # systemctl daemon-reload
    # systemctl enable kube-scheduler
    # systemctl restart kube-scheduler
    

    4. Deploy the controller-manager

    Create the controller-manager configuration file:

    # cat /opt/kubernetes/cfg/kube-controller-manager 
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
    --log-dir=/opt/kubernetes/logs/kube-controller-manager \
    --v=4 \
    --master=127.0.0.1:8080 \
    --leader-elect=true \
    --address=127.0.0.1 \
    --service-cluster-ip-range=10.0.0.0/24 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
    --root-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --experimental-cluster-signing-duration=87600h0m0s"
    

    Parameter notes:
    --master the kube-apiserver address
    --leader-elect leader election among multiple instances
    --address listen address; controller-manager does not serve external clients
    --cluster-name cluster name
    --cluster-signing-cert-file / --cluster-signing-key-file the CA certificate and key used to sign certificates issued to kubelets
    --root-ca-file the root CA distributed to service accounts
    --service-account-private-key-file the key used to sign service-account tokens
    --experimental-cluster-signing-duration=87600h0m0s validity of the certificates issued to kubelets (the default is one year)

    Create the log directory:

    # mkdir /opt/kubernetes/logs/kube-controller-manager -p
    

    Manage the controller-manager with systemd:

    # cat /usr/lib/systemd/system/kube-controller-manager.service 
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
    ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    

    Start it:

    # systemctl daemon-reload
    # systemctl enable kube-controller-manager
    # systemctl restart kube-controller-manager
    

    All components are now running; use kubectl to check the cluster component status:

    # cp /usr/local/src/k8s/kubernetes/server/bin/kubectl /usr/bin/
    # kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok                  
    scheduler            Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}   
    etcd-1               Healthy   {"health":"true"}   
    etcd-2               Healthy   {"health":"true"}   
    

    Output like the above means all components are healthy.

    To list resources and their short names: kubectl api-resources

    VIII. Deploying the Node components

    With TLS authentication enabled on the master apiserver, a node's kubelet must present a valid certificate signed by the cluster CA before it can talk to the apiserver. Signing certificates by hand becomes tedious once there are many nodes, which is what the TLS Bootstrapping mechanism solves: the kubelet uses a low-privilege user to request a certificate from the apiserver automatically, and the apiserver signs the kubelet's certificate dynamically.
    The authentication workflow looks roughly like this:


    [Figure: TLS bootstrapping workflow]

    The token file used on the master:

    # cat /opt/kubernetes/cfg/token.csv
    5b2ecab909e3ae8f0dc611ba255777c2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    
    Column 1: token ID
    Column 2: user name kubelet-bootstrap
    Column 3: UID
    Column 4: user group, a Kubernetes user role
    

    1. Bind the bootstrap user to a role

    Create a clusterrolebinding named kubelet-bootstrap that binds the kubelet-bootstrap user to the system:node-bootstrapper cluster role:

    # kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap
    

    2. Create the kubeconfig files

    Script that generates the bootstrap.kubeconfig and kube-proxy.kubeconfig files:

    # cat kubeconfig.sh
    
    APISERVER=$1
    SSL_DIR=$2
    
    #The token must match the one in /opt/kubernetes/cfg/token.csv on the master
    BOOTSTRAP_TOKEN='5b2ecab909e3ae8f0dc611ba255777c2'  
    
    # Create the kubelet bootstrapping kubeconfig
    export KUBE_APISERVER="https://$APISERVER:6443"
    
    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=$SSL_DIR/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    
    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    
    # Use the default context
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
    #----------------------
    
    # Create the kube-proxy kubeconfig, which stores the credentials for connecting to the apiserver
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=$SSL_DIR/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-credentials kube-proxy \
      --client-certificate=$SSL_DIR/kube-proxy.pem \
      --client-key=$SSL_DIR/kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    

    Run the script with two arguments, the apiserver IP and the directory containing the kubernetes certificates, to generate the bootstrap.kubeconfig and kube-proxy.kubeconfig files:

    # bash kubeconfig.sh 10.40.6.201 /usr/local/src/k8s/kube-apiserver
    # cat bootstrap.kubeconfig 
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVVVRhRFdja1dBT2xLd2s3S0ZMNjFTb0xkUmpJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU1TURVeU9URTFNVGt3TUZvWERUSTBNRFV5TnpFMU1Ua3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBd3BFUW5hZCtkbkFmamVhbzNMVUIKRWdvWWN4ZFNUTjdkS0FrV2NxNkY3ZVZWVUR1RmZkWFc2VWdCR3RqeUpoTEhKREF1a01wc2gzSUw2cW95U0lraQoyTGdmTVFTOFhEQmhRUXY5OFhRVnVvMG44dVhzQ08yZjdpS2hpM3NUb0VIWTJGVmNYM1BUOTgvN1A4cTBpZzArCm84RjBrNXVaTzJjT1hIWDF0c0NLL3FrMWp1S3J3Wk04enpDRUszbGZJNmtROUltT01NZG93MHE0bzZEdStPWVAKaUE2MFVHRnhpd0VlTWs1b2JBN2liZ1BSai81ci9BSVdZbmUvV0Y3ODM5bW1kY2ZUVXRJdzlJbHd4eEN3MEV1dQo0NXkwcVAwL3RJNUZlSFA4VWJzRWdCOFJuWTArTG03TXk1Z2lDYSt4cTNQUlh0UVozRDBwYm8xYXFuakh6NnFqCkpRSURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVeDU2TkxhajVwNXliTEJ4M1k5VmR6cldmSjF3d0h3WURWUjBqQkJnd0ZvQVV4NTZOTGFqNQpwNXliTEJ4M1k5VmR6cldmSjF3d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNQXBoYW14a2tRRUQvYk80NlVmCkdCWVBtQjZMZy9WcWx0MlBxaFFWblEwRmMySldtRzFlY2l0Q1JLWVlYU2RPRkZjNXorcWlCZjVDQWFRVm5vWnUKNTB3QTBmajBqb3BSbnRCWDR6YzRXdmM0WEZWYjVKRXZFcjRqOEpkTTI1QXhHN0hsazA4RzRRbEZ3SzNuRVo4dwptMGtjaEJpb2tFZElmWEZvekttWThxUTNmY0o4MEVONmJBYXJHQVNoK2VQTXVxMXhqNjhwUVJBSnArcFNyMVNHCnFIamxhbnpRT1hSeitMZFBhSXQzQjEzMDFsSyt1ZnlZbHVGcGJ1c24ycmlXOHlmMXhyeEhRb0VyTTllZTMvR0sKUnZVMnc0WGpwVEdXbGNwMU1uNVA4MGNDVlQwa05VS1BBWVVzaFF6RlNITlhYU2lsZEpQNG80RnhIQ3JzMGxoRApqL2c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        server: https://10.40.6.201:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: kubelet-bootstrap
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: kubelet-bootstrap
      user:
        token: 5b2ecab909e3ae8f0dc611ba255777c2
    
    
    # cat kube-proxy.kubeconfig
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVVVRhRFdja1dBT2xLd2s3S0ZMNjFTb0xkUmpJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU1TURVeU9URTFNVGt3TUZvWERUSTBNRFV5TnpFMU1Ua3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBd3BFUW5hZCtkbkFmamVhbzNMVUIKRWdvWWN4ZFNUTjdkS0FrV2NxNkY3ZVZWVUR1RmZkWFc2VWdCR3RqeUpoTEhKREF1a01wc2gzSUw2cW95U0lraQoyTGdmTVFTOFhEQmhRUXY5OFhRVnVvMG44dVhzQ08yZjdpS2hpM3NUb0VIWTJGVmNYM1BUOTgvN1A4cTBpZzArCm84RjBrNXVaTzJjT1hIWDF0c0NLL3FrMWp1S3J3Wk04enpDRUszbGZJNmtROUltT01NZG93MHE0bzZEdStPWVAKaUE2MFVHRnhpd0VlTWs1b2JBN2liZ1BSai81ci9BSVdZbmUvV0Y3ODM5bW1kY2ZUVXRJdzlJbHd4eEN3MEV1dQo0NXkwcVAwL3RJNUZlSFA4VWJzRWdCOFJuWTArTG03TXk1Z2lDYSt4cTNQUlh0UVozRDBwYm8xYXFuakh6NnFqCkpRSURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVeDU2TkxhajVwNXliTEJ4M1k5VmR6cldmSjF3d0h3WURWUjBqQkJnd0ZvQVV4NTZOTGFqNQpwNXliTEJ4M1k5VmR6cldmSjF3d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNQXBoYW14a2tRRUQvYk80NlVmCkdCWVBtQjZMZy9WcWx0MlBxaFFWblEwRmMySldtRzFlY2l0Q1JLWVlYU2RPRkZjNXorcWlCZjVDQWFRVm5vWnUKNTB3QTBmajBqb3BSbnRCWDR6YzRXdmM0WEZWYjVKRXZFcjRqOEpkTTI1QXhHN0hsazA4RzRRbEZ3SzNuRVo4dwptMGtjaEJpb2tFZElmWEZvekttWThxUTNmY0o4MEVONmJBYXJHQVNoK2VQTXVxMXhqNjhwUVJBSnArcFNyMVNHCnFIamxhbnpRT1hSeitMZFBhSXQzQjEzMDFsSyt1ZnlZbHVGcGJ1c24ycmlXOHlmMXhyeEhRb0VyTTllZTMvR0sKUnZVMnc0WGpwVEdXbGNwMU1uNVA4MGNDVlQwa05VS1BBWVVzaFF6RlNITlhYU2lsZEpQNG80RnhIQ3JzMGxoRApqL2c9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        server: https://10.40.6.201:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: kube-proxy
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: kube-proxy
      user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQzakNDQXNhZ0F3SUJBZ0lVVHNyZk1EeGZOVmlMb2o0RjN5NEkwQjRLT1ZFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU1TURVek1EQXhOVGt3TUZvWERUSTVNRFV5TnpBeE5Ua3dNRm93YkRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbGFVcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUm93R0FZRFZRUURFeEZ6ZVhOMFpXMDZhM1ZpClpTMXdjbTk0ZVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2RjS2hWWHRQb1AKRDJHMXRQQnFRamU4bFdRTmo1U1NhZjlSQ0xESld3bDJNSm1JVjkyclpEQVE5S2dRNnFkTlZQODFKYVB1Zy8xaQpONmtSdEl0VElYSFVDQ2RSRzdjUWZNZDEyV081cE9LLy82dnpaSTBsWXJIdEk4QUFHNU5UcU9FYmJYT3NyNkxaCjRodWh1T3h2elZLUWQxeUxBVHRZK1RCTU9xdEFweUp0dzRLdGxZNDVMTXZzTzdJUWZsWCtuQ0h2V1JFeFBjSVMKb0RxYlA0UHAzdFZQNDVZd0xYUVRWRFl1MitRU0VPUUwrMHJTbDFEcGdNbHk1KzRKSVR5VmRnLzJKM0tTaWpPRApKUFUzLzdMaGJmMEJXMDdvenowSkdRMzBFWUhEeFBRQUlEZlNqd0N4U0hwaEhubUVGN0FtRlY0clJPNHhmOVQvCmh3L21Pd1o0QXJrQ0F3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0cKQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU2M1YmRqaGRxNApLVHJxRFk5a1NTNDNET0V2Y3pBZkJnTlZIU01FR0RBV2dCVEhubzB0cVBtbm5Kc3NISGRqMVYzT3RaOG5YREFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQUxCNXorNlUxbUVXdEZqeUVsemtPa2hyZzRUK0VqbG9BdkRFVFRZdEsKekxwY3dJU2ZseFg5Mk5FL3NBazhtVjZjdUhaNWNJQ0gyWjhSWmwvL3VJVWppT2pDYlpjcUxaYUl1cUlhUDE0NApaUnp2VmhMYkhyZXFQSit1bHdwZXExaEJydUgwK0pZeEhqN3VDbGxKWldRUFhXejNzR29od29MVWFSRXd2UlNVCnZ0OXk1dGRWNzJBOTdnSERQSGJDdFcySlhyR3NUSll3UndjcHNSbkg1QjVjelFNTkFrei9pUXdzUmN0ViszcXcKNFRZWkZWbTN0OTEwUTh0WDVtOXRGVTdOME1ydlZQL1FQSS9Idlp4ZEI2Q1R3cTVodnVaMVY2ck1mTWowVkZ1bwo5OE14ZFFBVUQxeVBqeldXMllQTEJ4enZFY09sQmdNVERmME03WVNaWDBsZG5RPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcDF3cUZWZTArZzhQWWJXMDhHcENON3lWWkEyUGxKSnAvMUVJc01sYkNYWXdtWWhYCjNhdGtNQkQwcUJEcXAwMVUvelVsbys2RC9XSTNxUkcwaTFNaGNkUUlKMUVidHhCOHgzWFpZN21rNHIvL3EvTmsKalNWaXNlMGp3QUFiazFPbzRSdHRjNnl2b3RuaUc2RzQ3Ry9OVXBCM1hJc0JPMWo1TUV3NnEwQ25JbTNEZ3EyVgpqamtzeSt3N3NoQitWZjZjSWU5WkVURTl3aEtnT3BzL2crbmUxVS9qbGpBdGRCTlVOaTdiNUJJUTVBdjdTdEtYClVPbUF5WExuN2draFBKVjJEL1luY3BLS000TWs5VGYvc3VGdC9RRmJUdWpQUFFrWkRmUVJnY1BFOUFBZ045S1AKQUxGSWVtRWVlWVFYc0NZVlhpdEU3akYvMVArSEQrWTdCbmdDdVFJREFRQUJBb0lCQUZsUjNmL3dCRjJrNWYrdQorN2VIN25sU3c2UlhmSGE5d2FhSytBbHFIWlVxSi92NUFYUUVBZitKUFJucGxXTGU2ZXNlMFV6eGdpNGNXanA0CmdaUU9OUDVNUEdISGJ2US83MmlBcEJvT1BVcnJUNmZVeWFodStJS2ZYb0lkVEpwUGZ3Vk5IeGdxWkw2VWJKRjAKdVg0dW1UVmtkdC9FTEU4aFNEVVhxZ1EyQ0QxZDNIRk5hbUtRMTFrNlBrSC9OZDRQN09aS0hGM244bnR1eWdNawpxald5Y3VSODgreFJFNC9OdzdKOXk3VXR4dEVPd1lVL05vS3M4T1RBMW1ZV2h1Z2o0dSt5bEZxU3JWcXVmblNtCkpVNHlKQStvWEdpY3ZJaE1ZUmxpdjNEYkRTSGg5VHFYVXV0eEhoWVpLbS93RmlTR1JaL0VYT3creUZHcmkwYWQKZmplbUFsRUNnWUVBMXAzQU8vTjYzazMxYUh2WndQYzdpbGN4MHpETlJQbzlibXZFa3BwNnZ0WWpCOWpVeDhaQQpnYmNmWVJua29KVVdlUnNVVUphODBxKzlLK2tJeVVmd1kwdTd6RTF6Y0hqUWJ6Zzl0TDFJQlFmZTlwaDRJRHROCm1NeHE1RnVIV0VNRlFIUU41OFdQT3ZRQWtVUFpYNDZFRTBKM3c2QXIyODhtZVIrSlE4bjVueDBDZ1lFQXg2R3MKU3ZsR2NVSGEvS3BWelk3Wmx6UHVKMDV1U2ZReS9FcU9QRnNzNGdNR3RVNGROUzBkWm5GUnBJOWRjZGJlMUdxZwptV0FTdXM4QmdDZytTdDA5WjdOSnJLMjVrQytzU0dIdVd3OFM0aFV1ZSt2MDBrNTNhblloeWNCcW4yOWl0WkxNCnBGZDNCKzU1akxuUExySTlrVFFjUjd4U3JXanhMYmVlc1VXa0UwMENnWUIvNURYQUJCSCtFNXJnanAxdXZtVysKeE1NdVJQQ3Q0Q2xuZWRVRVFBWlJYcTQxYU9NendWS0RlaXE2NUlFM3FHQmgvdDhXUHgxNnQ3c1ZSYU0wdnlmagpKQ2hmVVBBdjMrN2x1REFkV29abWFSQlhCdmplekRncmkvVk82N1ExeG9xRXBDUDlMOTl3bENNYWJjSkZqVm5yCldEcWlXdnFIM0dQaTNnWWdYV1hoaVFLQmdFeFIyOHVoOXpOUGFRZ1ZtczRHWWR0emlBWFE3MHNvcCtGYUkzeWgKb3N3Wk9oUlFjOHdqbmt6TzM5YVkxTEd6NHVhMGlRZDUrazhlMnNVREhhV0RaWGxpeXJUUWlkTzgxaEdxRnZVTApFejRKdVFhNVU1U2ZXUG9EaGJGYTlhaFViaGxhc1EvWFBITjAwVlZpcC9tRFBSUnBKcktxSmJXVUhEaE5MY2M2CkI1czFBb0dCQUxCYkMySFh2dENTSXRkdHVlZ2k1ZlA1SUtCYVUxbnB0NWkvZ01aYlpNQkJHQllUZmVuNTI0U1MKaVdMWnNKalpGdmFqS3Q4RktZSHVhU29ETWhQZVJ3cWlaaTlkZkp6WFlBOWZENEk5RytUc2dFTU9tR3JpZ3NnTwpuMExVMTYyRUhLM1ZvRDVTL0cxUFl2TG50S1ZNT29CWmdJZzdzOW9RZEl0R1ovbmdpNTlQCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    
    

    Copy both files to /opt/kubernetes/cfg on the Node machines:

    # scp bootstrap.kubeconfig kube-proxy.kubeconfig 10.40.6.210:/opt/kubernetes/cfg/
    # scp bootstrap.kubeconfig kube-proxy.kubeconfig 10.40.6.213:/opt/kubernetes/cfg/
    

    3. Deploy the kubelet

    Copy kubelet and kube-proxy from the server binary package downloaded earlier on the master to /opt/kubernetes/bin on the node machines:

    # scp /usr/local/src/k8s/kubernetes/server/bin/{kube-proxy,kubelet} 10.40.6.210:/opt/kubernetes/bin/
    # scp /usr/local/src/k8s/kubernetes/server/bin/{kube-proxy,kubelet} 10.40.6.213:/opt/kubernetes/bin/
    

    Create the kubelet configuration file:

    # cat /opt/kubernetes/cfg/kubelet
    
    KUBELET_OPTS="--logtostderr=false \
    --log-dir=/opt/kubernetes/logs/kubelet \
    --v=4 \
    --hostname-override=10.40.6.210 \
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
    --config=/opt/kubernetes/cfg/kubelet.config \
    --cert-dir=/opt/kubernetes/ssl \
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
    

    Parameter notes:
    --hostname-override the name this node registers with in the cluster
    --kubeconfig path of the kubeconfig file; it is generated automatically after bootstrapping
    --bootstrap-kubeconfig the bootstrap.kubeconfig file generated above
    --cert-dir where the issued certificates are stored
    --pod-infra-container-image the image used for the Pod infrastructure (pause) container

    The /opt/kubernetes/cfg/kubelet.config file holds the kubelet's own settings:

    # cat /opt/kubernetes/cfg/kubelet.config
    
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 10.40.6.210
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS: ["10.0.0.2"]
    clusterDomain: cluster.local.
    failSwapOn: false
    authentication:
      anonymous:
        enabled: true 
    

    Manage the kubelet with systemd:

    # cat /usr/lib/systemd/system/kubelet.service 
    
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    

    Create the required directories:

    # mkdir  /opt/kubernetes/{ssl,logs/kubelet} -p
    

    Start it:

    # systemctl daemon-reload
    # systemctl enable kubelet
    # systemctl restart kubelet
    

    Approve the node's join request on the master:
    After the kubelet starts, the node has not yet joined the cluster; it has to be approved manually.
    On the master, list the nodes requesting a signed certificate:

    # kubectl get csr
    NAME                                                   AGE    REQUESTOR           CONDITION
    node-csr-N3b6ze5SPhItvld_iaByflG6tZn3mhUpyjxwOwTLdU4   4m4s   kubelet-bootstrap   Pending
    
    # kubectl certificate approve node-csr-N3b6ze5SPhItvld_iaByflG6tZn3mhUpyjxwOwTLdU4
    certificatesigningrequest.certificates.k8s.io/node-csr-N3b6ze5SPhItvld_iaByflG6tZn3mhUpyjxwOwTLdU4 approved
    
    # kubectl get node
    NAME          STATUS   ROLES    AGE   VERSION
    10.40.6.210   Ready    <none>   23s   v1.12.1
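
    Once the request is approved, the kubelet on the node fetches its signed certificate into --cert-dir and generates kubelet.kubeconfig automatically; a quick check on the node (exact file names vary slightly between versions):

    # ls /opt/kubernetes/ssl/          ## the kubelet client certificate issued during bootstrap should appear here
    # ls /opt/kubernetes/cfg/kubelet.kubeconfig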
    

    Deploy the second node the same way; only the IPs in its configuration files change (a sketch follows):
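
    For node02 (10.40.6.213), for example, this amounts to replacing node01's IP before starting the kubelet; a sketch (kube-proxy gets the same treatment later):

    # sed -i 's/10.40.6.210/10.40.6.213/g' /opt/kubernetes/cfg/kubelet /opt/kubernetes/cfg/kubelet.config
    # systemctl restart kubelet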

    # kubectl get node
    NAME          STATUS     ROLES    AGE     VERSION
    10.40.6.210   Ready      <none>   6m33s   v1.12.1
    10.40.6.213   NotReady   <none>   9s      v1.12.1
    

    4. Deploy kube-proxy

    Create the kube-proxy configuration file:

    # cat /opt/kubernetes/cfg/kube-proxy
    
    KUBE_PROXY_OPTS="--logtostderr=false \
    --log-dir=/opt/kubernetes/logs/kube-proxy \
    --v=4 \
    --hostname-override=10.40.6.210 \
    --cluster-cidr=10.0.0.0/24 \
    --proxy-mode=ipvs \
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
    

    Parameter notes:
    --cluster-cidr the cluster CIDR, the IP range used for Service load balancing
    --proxy-mode the proxy mode (ipvs here; see the note below)
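
    Note that --proxy-mode=ipvs only takes effect if the ip_vs kernel modules are loaded (otherwise kube-proxy falls back to iptables mode); a sketch of loading and checking them, with ipvsadm installed purely for inspection:

    # modprobe ip_vs
    # modprobe ip_vs_rr
    # modprobe ip_vs_wrr
    # modprobe ip_vs_sh
    # modprobe nf_conntrack_ipv4
    # lsmod | grep ip_vs
    # yum install ipvsadm -y
    # ipvsadm -Ln          ## after kube-proxy starts, the Service virtual servers appear here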

    Manage kube-proxy with systemd:

    # cat /usr/lib/systemd/system/kube-proxy.service 
    
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
    ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    

    Create the log directory:

    # mkdir /opt/kubernetes/logs/kube-proxy -p
    

    Start it:

    # systemctl daemon-reload
    # systemctl enable kube-proxy
    # systemctl restart kube-proxy
    

    Node2 is deployed the same way; just change the relevant IPs in its configuration files.

    IX. Deploying a test workload

    Create an Nginx deployment to verify that the cluster works:

    # kubectl run nginx --image=nginx --replicas=3
    
    # kubectl get deployment   ### view the nginx deployment just created
    NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx   3         3         3            3           22s
    
    # kubectl get pod -o wide  ## the pods that were created
    NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE
    nginx-dbddb74b8-5smx7   1/1     Running   0          2m44s   172.17.59.2   10.40.6.213   <none>
    nginx-dbddb74b8-hcjbw   1/1     Running   0          2m44s   172.17.31.2   10.40.6.210   <none>
    nginx-dbddb74b8-jtwt5   1/1     Running   0          2m44s   172.17.59.3   10.40.6.213   <none>
    
    ### list all running resources
    # kubectl get all
    NAME                        READY   STATUS    RESTARTS   AGE
    pod/nginx-dbddb74b8-5smx7   1/1     Running   0          4m56s
    pod/nginx-dbddb74b8-hcjbw   1/1     Running   0          4m56s
    pod/nginx-dbddb74b8-jtwt5   1/1     Running   0          4m56s
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   6h8m
    
    NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx   3         3         3            3           4m56s
    
    NAME                              DESIRED   CURRENT   READY   AGE
    replicaset.apps/nginx-dbddb74b8   3         3         3       4m56s
    
    Management hierarchy: deployment -----> replicaset -----> pod
    

    Create a service named nginx that exposes port 88 and targets port 80 on the pods:

    # kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
    # kubectl get svc    ### view the Service
    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        6h20m
    nginx        NodePort    10.0.0.153   <none>        88:40370/TCP   23s
    
    ## port 88 is the cluster-internal (ClusterIP) port; 40370 is the NodePort exposed on every node
    

    Access the Nginx deployed in the cluster:
    ① From inside the cluster (on a node): http://10.0.0.153:88, e.g. curl http://10.0.0.153:88 -I
    ② From outside the cluster: http://NODE_IP:40370, e.g. curl http://10.40.6.213:40370 -I

    Permission problem when viewing the nginx pod logs:

    # kubectl logs nginx-dbddb74b8-5smx7
    Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-5smx7)
    
    This means the anonymous user system:anonymous has no permission to read the logs;
    bind this user to a cluster role so that it gains that role's permissions.
    A cluster role binding ties a user to a role, and the role defines which permissions are granted.
    
    # kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
    
    # kubectl logs nginx-dbddb74b8-hcjbw -f
    172.17.59.0 - - [30/May/2019:09:24:40 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
    10.0.0.153 - - [30/May/2019:09:26:37 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
    

    X. Deploying the Web UI (Dashboard)

    The UI YAML manifests live in the project repository:
    https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
    dashboard-configmap.yaml   UI configuration
    dashboard-controller.yaml  the controller
    dashboard-rbac.yaml        creates the user and grants permissions
    dashboard-secret.yaml      sensitive data
    dashboard-service.yaml     exposes the UI so it can be accessed

    All of these files are also included in the kubernetes-server-linux-amd64.tar.gz package downloaded earlier:

    # cd /usr/local/src/k8s/kubernetes && tar xvf kubernetes-src.tar.gz
    # cd cluster    ## this directory matches the layout of the GitHub repository; addons/ holds the add-ons
    # cd addons/dashboard/    ## this is where the yaml manifests listed above live
    # ll
    total 32
    -rw-rw-r-- 1 root root  264 Oct  6  2018 dashboard-configmap.yaml
    -rw-rw-r-- 1 root root 1821 Oct  6  2018 dashboard-controller.yaml
    -rw-rw-r-- 1 root root 1353 Oct  6  2018 dashboard-rbac.yaml
    -rw-rw-r-- 1 root root  551 Oct  6  2018 dashboard-secret.yaml
    -rw-rw-r-- 1 root root  322 Oct  6  2018 dashboard-service.yaml
    -rw-rw-r-- 1 root root  242 Oct  6  2018 MAINTAINERS.md
    -rw-rw-r-- 1 root root  125 Oct  6  2018 OWNERS
    -rw-rw-r-- 1 root root  400 Oct  6  2018 README.md
    

    Create the corresponding pods:
    change the image address in dashboard-controller.yaml to
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
    (open https://promotion.aliyun.com in a browser and search for the kubernetes-dashboard-amd64 image to find a recent tag)

    # kubectl create -f dashboard-configmap.yaml
    # kubectl create -f dashboard-rbac.yaml
    # kubectl create -f dashboard-secret.yaml
    # kubectl create -f dashboard-controller.yaml
    

    Check that it started:

    # kubectl get pod -n kube-system     ## in the kube-system namespace
    NAME                                    READY   STATUS    RESTARTS   AGE
    kubernetes-dashboard-774f47666c-97c86   1/1     Running   0          93s
    # kubectl logs kubernetes-dashboard-774f47666c-97c86 -n kube-system
    

    Add type: NodePort to dashboard-service.yaml and create the service:

    # cat dashboard-service.yaml
    
    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      type: NodePort
      selector:
        k8s-app: kubernetes-dashboard
      ports:
      - port: 443
        targetPort: 8443
    
    # kubectl create -f  dashboard-service.yaml
    # kubectl get svc -n kube-system
    NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
    kubernetes-dashboard   NodePort   10.0.0.198   <none>        443:30899/TCP   48s
    

    Open https://10.40.6.210:30899 in a browser.
    The first login offers two authentication methods; here we choose token authentication:


    [Figure: choosing the authentication method]

    To log in with a token you first need a user identity that the token represents:
    create a ServiceAccount named dashboard-admin, bind it to the cluster-admin cluster role, and then log in with the token generated for dashboard-admin. The dashboard pod also uses RBAC when talking to the apiserver: it accesses the apiserver as the dashboard-admin ServiceAccount.

    The YAML that creates the user and the role binding:

    # cat k8s-admin.yaml
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dashboard-admin
      namespace: kube-system
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: dashboard-admin
    subjects:
      - kind: ServiceAccount
        name: dashboard-admin
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    

    Create it and look up the token secret:

    # kubectl create -f k8s-admin.yaml
    # kubectl get secret -n kube-system
    NAME                               TYPE                                  DATA   AGE
    dashboard-admin-token-tbszw        kubernetes.io/service-account-token   3      43s
    

    Read the token value:

    # kubectl describe secret dashboard-admin-token-tbszw -n kube-system
    Name:         dashboard-admin-token-tbszw
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: b962b8b9-82da-11e9-8a6c-005056b66bc1
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1359 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdGJzenciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjk2MmI4YjktODJkYS0xMWU5LThhNmMtMDA1MDU2YjY2YmMxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.dNRBcXx-KdLr4tb2sqlKXchGdYPUxo2KoNnCH1ENae051P_7dE50SsJdN70eUR7pACo8LGbmPSVjnhIqYGTv4oS80bVBl1pZdYs1JS9Mc3jAG64npKLq_HfyMjsQSYW2c1Ial6WYRHIsqeegnVOy8vY22-gqSnPUYf1Sn5qYyJVRCy6yGMJ4P1Su1yBqRQO29rC-tgunEg28Rx339ADPoqsbKRCP3Q1Zwbkux1JBnXiGoZGKZjP_06lY3xAnmMzkI3wa4S5KQRIe68s6WH5RL-SWqkL5GiHWoz14CpkweiQ_4LUxH8zi_jQNH8Jsz3zd5eSYs2Pks5BKdj3-Drh17w
    

    Then use this token value to authenticate and log in to the UI.


    [Figure: dashboard web UI]

    XI. Multi-master cluster: deploying the second master (master02)

    [Figure: multi-master cluster architecture]
    Master high availability is mainly about the apiserver. The scheduler and controller-manager are already highly available on their own: as their configuration files show, the --leader-elect flag makes multiple instances elect a leader automatically. The apiserver serves HTTP(S), so a mature HTTP HA scheme such as nginx+keepalived or haproxy+keepalived can be used, with the load balancer owning a VIP.
    Copy the component files from the master01 node to master02 and adjust the IPs in each component's configuration file:
    # scp -r /opt/kubernetes 10.40.6.209:/opt/
    # scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service 10.40.6.209:/usr/lib/systemd/system/
    # scp -r /opt/etcd/ssl/ 10.40.6.209:/opt/etcd/ssl/
    

    Start the services on master02:

    # systemctl start kube-apiserver
    # systemctl start kube-scheduler
    # systemctl start kube-controller-manager
    

    On master02, check the cluster status and nodes:

    # /opt/kubernetes/bin/kube
    kube-apiserver           kube-controller-manager  kubectl                  kube-scheduler           
    [root@k8s logs]# /opt/kubernetes/bin/kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}   
    etcd-2               Healthy   {"health":"true"}   
    etcd-1               Healthy   {"health":"true"}   
    
    # /opt/kubernetes/bin/kubectl get node
    NAME          STATUS   ROLES    AGE   VERSION
    10.40.6.210   Ready    <none>   22h   v1.12.1
    10.40.6.213   Ready    <none>   22h   v1.12.1
    

    Note that node01 and node02 are not pointed at master02, yet master02 still sees them: the cluster state, configuration, and service information all live in etcd, so any apiserver that can reach etcd can read the cluster information.

    XII. Multi-master cluster (Nginx + Keepalived)

    Here nginx acts as a layer-4 load balancer; install nginx on both load-balancer machines:

    # yum install yum-utils
    # cat /etc/yum.repos.d/nginx.repo
    [nginx-stable]    ### stable repo, used by default
    name=nginx stable repo
    baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
    gpgcheck=1
    enabled=1
    gpgkey=https://nginx.org/keys/nginx_signing.key
    
    [nginx-mainline]   ### mainline repo
    name=nginx mainline repo
    baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
    gpgcheck=1
    enabled=0
    gpgkey=https://nginx.org/keys/nginx_signing.key
    
    # yum install nginx
    

    Configuration file: /etc/nginx/nginx.conf
    The stream block sits at the same level as the http block; add the following to the configuration:

    stream {
        log_format main "$remote_addr $upstream_addr $time_local $status";
        access_log /var/log/nginx/k8s_apiserver-accese.log main;
    
        upstream k8s-apiserver {
            server 10.40.6.201:6443;
            server 10.40.6.209:6443;
        }
        server {
            listen 6443;
            proxy_pass k8s-apiserver;
        }
    }
    

    Startup error: bind() to 0.0.0.0:6443 failed (13: Permission denied)
    Cause: SELinux does not allow nginx to bind port 6443, because it is not in the http_port_t port list.
    Fix:

    # semanage port -l | grep http_port_t
    http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000
    pegasus_http_port_t            tcp      5988
    # semanage port -a -t http_port_t  -p tcp 6443
    # systemctl start nginx
    

    On both node machines, change the apiserver address in the three kubeconfig files to the nginx IP (one way to do it is sketched below):
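
    Assuming the files currently point at https://10.40.6.201:6443, one way to switch them on each node:

    # cd /opt/kubernetes/cfg
    # sed -i 's#https://10.40.6.201:6443#https://10.40.6.166:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig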

    # grep 6443 ./*
    ./bootstrap.kubeconfig:    server: https://10.40.6.166:6443
    ./kubelet.kubeconfig:    server: https://10.40.6.166:6443
    ./kube-proxy.kubeconfig:    server: https://10.40.6.166:6443
    

    Restart kubelet and kube-proxy:

    # systemctl restart kubelet
    # systemctl restart kube-proxy
    

    Check the cluster status from either master:

    # kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}   
    etcd-1               Healthy   {"health":"true"}   
    etcd-2               Healthy   {"health":"true"}   
    [root@k8s ~]# kubectl get node
    NAME          STATUS   ROLES    AGE   VERSION
    10.40.6.210   Ready    <none>   23h   v1.12.1
    10.40.6.213   Ready    <none>   23h   v1.12.1
    

    You can confirm that requests are flowing by checking the nginx access log.
    Next, install keepalived on both nginx machines:

    # yum install keepalived -y
    

    Configuration file:

    # cat /etc/keepalived/keepalived.conf 
    
    ! Configuration File for keepalived 
     
    global_defs { 
       router_id NGINX_MASTER 
    } 
    
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }
    
    vrrp_instance VI_1 { 
        state MASTER 
        interface eth0
        virtual_router_id 51 # VRRP router ID for this instance; must be unique per instance
        priority 90    # priority; the backup server is set to 90
        advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
        authentication { 
            auth_type PASS      
            auth_pass jNikdfK8
        }  
        virtual_ipaddress { 
            10.40.6.175/23
        } 
        track_script {
            check_nginx
        } 
    }
    

    Nginx health-check script:

    # cat /etc/keepalived/check_nginx.sh 
    
    #!/bin/bash
    # Count nginx processes, excluding the grep itself and this script
    count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

    # If nginx is no longer running, stop keepalived so the VIP fails over to the backup
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived.service
    fi
    

    After verifying with Nginx+Keepalived that the VIP fails over correctly, change the apiserver address on both node machines to the VIP 10.40.6.175:

    #  grep 6443 /opt/kubernetes/cfg/*
    /opt/kubernetes/cfg/bootstrap.kubeconfig:    server: https://10.40.6.175:6443
    /opt/kubernetes/cfg/kubelet.kubeconfig:    server: https://10.40.6.175:6443
    /opt/kubernetes/cfg/kube-proxy.kubeconfig:    server: https://10.40.6.175:6443
    
    # systemctl restart kubelet
    # systemctl restart kube-proxy
    

    Then restart the kubelet on one of the nodes and watch the nginx access log; the requests should show up there (an illustrative check is sketched below).
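
    With the log_format defined in the stream block above ($remote_addr $upstream_addr $time_local $status), each forwarded request appears as one line in the access log; the values below are purely illustrative:

    # tail -f /var/log/nginx/k8s_apiserver-accese.log
    10.40.6.210 10.40.6.201:6443 30/May/2019:18:00:00 +0800 200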
