Setting up an etcd3 cluster for Kubernetes

Author: 耳机在哪里 | Published 2018-12-12 15:25

I. Prepare the environment on each node

    ip            hostname        role
    10.39.14.204  k8s-etcd-host0  etcd-node
    10.39.14.205  k8s-etcd-host1  etcd-node
    10.39.14.206  k8s-etcd-host2  etcd-node
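It helps if every node can resolve the others by hostname before anything is installed. A minimal sketch, assuming the IPs and hostnames in the table above and root access on each node:

    # run on each node, substituting that node's own hostname
    hostnamectl set-hostname k8s-etcd-host0

    # make all three nodes resolvable by name
    cat << EOF >> /etc/hosts
    10.39.14.204 k8s-etcd-host0
    10.39.14.205 k8s-etcd-host1
    10.39.14.206 k8s-etcd-host2
    EOF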

1. Install docker, kubelet, and kubeadm

Check docker:

    $ docker --version
    Docker version 17.05.0-ce, build e1bfc47
    $ systemctl enable docker && systemctl restart docker && systemctl status docker
    docker is active
    

Download kubelet and kubeadm to /usr/bin.

Link: https://pan.baidu.com/s/1K5rA3Di_uVgE96pK3tst4Q (extraction code: hm6r)

    chmod +x /usr/bin/kubelet
    chmod +x /usr/bin/kubeadm
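A quick sanity check that the binaries are in place and executable (a minimal sketch; the exact version output will differ):

    kubelet --version
    kubeadm version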
    

2. Import the etcd and pause images

    tar -xvf k8s-images.tar
    docker load < k8s-images/etcd.tar
    docker load < k8s-images/pause.tar
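To confirm the images were loaded (image names and tags depend on the tarballs, so this is only a sketch):

    docker images | grep -E 'etcd|pause'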
    

3. Configure kubelet as the service manager for etcd

The kubelet unit file provided by kubeadm must be overridden by creating new drop-in files with higher priority.

    $ cat << EOF > /etc/systemd/system/kubelet.service
    [Unit]
    Description=kubelet: The Kubernetes Node Agent
    Documentation=https://kubernetes.io/docs/
    [Service]
    ExecStart=/usr/bin/kubelet
    Restart=always
    StartLimitInterval=0
    RestartSec=10
    [Install]
    WantedBy=multi-user.target
    EOF
    
    $ mkdir -p /etc/systemd/system/kubelet.service.d/
    
    # quote the delimiter so the $KUBELET_* variables below are written literally, not expanded
    $ cat << 'EOF' > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/sysconfig/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    EOF
    
    $ cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
    Restart=always
    EOF
    

Restart kubelet:

    systemctl daemon-reload
    systemctl enable kubelet && systemctl restart kubelet && systemctl status kubelet
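To confirm that the drop-ins are picked up, systemd can print the merged unit; 20-etcd-service-manager.conf should be listed last (an optional check):

    systemctl cat kubelet
    systemctl is-active kubelet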
    

II. host0: Generate and distribute certificates

Generate all certificates on host0, and distribute only the necessary files to the other nodes.

1. Create configuration files for kubeadm

Use the following script to generate one kubeadm configuration file for each host that will run an etcd member:

    export HOST0=10.39.14.204
    export HOST1=10.39.14.205
    export HOST2=10.39.14.206
    rm -rf /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
    mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
    
    ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
    NAMES=("infra0" "infra1" "infra2")
    
    for i in "${!ETCDHOSTS[@]}"; do
    HOST=${ETCDHOSTS[$i]}
    NAME=${NAMES[$i]}
    cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
    apiVersion: "kubeadm.k8s.io/v1beta1"
    kind: ClusterConfiguration
    etcd:
        local:
            serverCertSANs:
            - "${HOST}"
            peerCertSANs:
            - "${HOST}"
            extraArgs:
                initial-cluster: infra0=https://${ETCDHOSTS[0]}:2380,infra1=https://${ETCDHOSTS[1]}:2380,infra2=https://${ETCDHOSTS[2]}:2380
                initial-cluster-state: new
                name: ${NAME}
                listen-peer-urls: https://${HOST}:2380
                listen-client-urls: https://${HOST}:2379
                advertise-client-urls: https://${HOST}:2379
                initial-advertise-peer-urls: https://${HOST}:2380
    kubernetesVersion: 1.13.0
    EOF
    done
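The loop should leave one kubeadmcfg.yaml per host under /tmp; a quick way to eyeball the per-host differences (optional):

    for h in ${HOST0} ${HOST1} ${HOST2}; do
        echo "== /tmp/${h}/kubeadmcfg.yaml =="
        grep -E 'name:|advertise-client-urls:' /tmp/${h}/kubeadmcfg.yaml
    done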
    

2. Generate the certificate authority

    rm -rf /etc/kubernetes/pki
    kubeadm init phase certs etcd-ca
    

This creates two files:
    /etc/kubernetes/pki/etcd/ca.crt
    /etc/kubernetes/pki/etcd/ca.key
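The CA can be inspected before signing anything with it (optional; assumes openssl is available):

    openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject -dates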

3. Create certificates for each member

    # host2
    kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
    cp -R /etc/kubernetes/pki /tmp/${HOST2}/
    find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
    # host1
    kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
    cp -R /etc/kubernetes/pki /tmp/${HOST1}/
    find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
    # host0
    kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
    # No need to move the certs because they are for HOST0
    
    # clean up certs that should not be copied off this host
    find /tmp/${HOST2} -name ca.key -type f -delete
    find /tmp/${HOST1} -name ca.key -type f -delete
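Before copying anything off host0, the SANs on the generated server certificates can be spot-checked (optional; assumes openssl is available):

    openssl x509 -in /tmp/${HOST1}/pki/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'
    openssl x509 -in /tmp/${HOST2}/pki/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'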
    

4. Copy the certificates and kubeadm configs to each node

The certificates have now been generated and must be moved to their respective hosts.
(A jump host that can ssh into all of the nodes can be used as a relay; a sketch of staging the files on it follows the commands below.)

    [jump-host]$ scp -r ~/host1 root@<host1_ip>:~
    [jump-host]$ ssh root@<host1_ip>
    chown -R root:root ~/host1/pki/
    rm -rf /etc/kubernetes/
    mkdir /etc/kubernetes
    mv ~/host1/pki/ /etc/kubernetes/
    mv -f ~/host1/kubeadmcfg.yaml ~
    cd ~
    rm -rf host1/
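The commands above assume the host1 bundle is already sitting in ~/host1 on the jump host. A minimal sketch of staging it there from host0 first (assuming the jump host can ssh to host0 as root; paths follow the /tmp layout created earlier):

    # on the jump host: pull each node's bundle from host0
    scp -r root@10.39.14.204:/tmp/10.39.14.205 ~/host1
    scp -r root@10.39.14.204:/tmp/10.39.14.206 ~/host2

Repeat the scp/ssh/mv sequence above for host2, using ~/host2 in place of ~/host1.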
    

5. Make sure all the expected files exist

The complete list of required files on $HOST0 is:

    /tmp/${HOST0}
    └── kubeadmcfg.yaml
    
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── ca.key
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key
    
    $HOST1:
    
    $HOME
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key
    
    $HOST2:
    
    $HOME
    └── kubeadmcfg.yaml
    ---
    /etc/kubernetes/pki
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    └── etcd
        ├── ca.crt
        ├── healthcheck-client.crt
        ├── healthcheck-client.key
        ├── peer.crt
        ├── peer.key
        ├── server.crt
        └── server.key
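A scripted spot-check that the certificate files are all present on the current node (a sketch; host0 additionally keeps etcd/ca.key):

    for f in apiserver-etcd-client.crt apiserver-etcd-client.key \
             etcd/ca.crt etcd/healthcheck-client.crt etcd/healthcheck-client.key \
             etcd/peer.crt etcd/peer.key etcd/server.crt etcd/server.key; do
        test -f /etc/kubernetes/pki/$f && echo "ok       $f" || echo "MISSING  $f"
    done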
    

III. Create the static pod manifests

Now that the certificates and configs are in place, it is time to create the manifests. Run the kubeadm command on each host to generate a static manifest for etcd.

    root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
    root@HOST1 $ kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
    root@HOST2 $ kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
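Each command writes a static pod manifest that the kubelet configured in step I.3 picks up automatically. A quick check that the manifest exists and the etcd container has started (optional):

    ls /etc/kubernetes/manifests/etcd.yaml
    docker ps --filter name=etcd --format '{{.Names}}\t{{.Status}}'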
    

Check the cluster health:

    $ docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.2.24 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.39.14.204:2379 cluster-health
    member a04d01ef922eadf5 is healthy: got healthy result from https://10.39.14.205:2379
    member d0c00871a44c4535 is healthy: got healthy result from https://10.39.14.204:2379
    member e54116cb3d93012b is healthy: got healthy result from https://10.39.14.206:2379
    cluster is healthy
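The check above uses the v2 etcdctl API that this image defaults to. An equivalent check through the v3 API looks roughly like this (a sketch; flag names differ between the two APIs):

    docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes \
      -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.2.24 etcdctl \
      --cert /etc/kubernetes/pki/etcd/peer.crt \
      --key /etc/kubernetes/pki/etcd/peer.key \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --endpoints https://10.39.14.204:2379 endpoint health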
    
