Installing a Kubernetes HA Cluster with kubeadm

Author: 微凉哇 | Published 2019-12-15 20:26


  • Architecture diagram
    (figure: ha.png)
  • Node list

    hostname  IP address       components
    node1     172.16.145.160   docker kubelet kubeadm kubectl keepalived haproxy control-plane
    node2     172.16.145.161   docker kubelet kubeadm kubectl keepalived haproxy control-plane
    node3     172.16.145.162   docker kubelet kubeadm kubectl keepalived haproxy control-plane
    VIP       172.16.145.200   (virtual IP held by keepalived)

172.16.145.200 is the virtual IP (VIP).

Add host entries

cat >> /etc/hosts <<EOF
172.16.145.160 node1
172.16.145.161 node2 
172.16.145.162 node3
EOF

Disable the firewall on all nodes

systemctl stop firewalld && systemctl disable firewalld

Disable swap on all nodes

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sysctl -p
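
To confirm swap is fully off, a quick sanity check (not part of the original write-up):

free -m | grep -i swap      # the Swap line should read 0 0 0
swapon -s                   # should list no active swap devices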

Set kernel parameters on all nodes

modprobe br_netfilter
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf
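
Note that modprobe only loads br_netfilter for the current boot; a minimal sketch to have it loaded again after a reboot (assuming a systemd-based system that reads /etc/modules-load.d):

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF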

Install the IPVS management tools on all nodes

yum install ipvsadm ipset -y

Load the IPVS kernel modules on all nodes

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
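
On kernels 4.19 and newer, nf_conntrack_ipv4 was merged into nf_conntrack, so the script above would fail there. A hedged fallback, only needed if you run a kernel newer than the stock CentOS 7 one:

modprobe nf_conntrack_ipv4 2>/dev/null || modprobe nf_conntrack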

Change the Docker cgroup driver to systemd

According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure. Therefore, change the Docker cgroup driver to systemd on every node (if /etc/docker/daemon.json already contains other settings, merge this key in by hand instead of overwriting the file).

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Restart Docker

systemctl restart docker
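
As a quick check (not in the original post), verify that Docker now reports systemd as its cgroup driver:

docker info 2>/dev/null | grep -i 'cgroup driver'    # expected: Cgroup Driver: systemd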

Install kubeadm and kubelet on all nodes

Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all && yum makecache 

Install dependencies

yum install -y epel-release conntrack jq sysstat curl iptables libseccomp yum-utils device-mapper-persistent-data lvm2

Install kubelet, kubeadm, and kubectl

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
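
This installs whatever version is newest in the repository. If every node must run the same release, the version can be pinned explicitly; the 1.16.3 below is only an illustrative version string, substitute the release you actually want:

yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3 --disableexcludes=kubernetes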

Install command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Modify the kubelet configuration

sed -i "s;KUBELET_EXTRA_ARGS=;KUBELET_EXTRA_ARGS=\"--fail-swap-on=false\";g" /etc/sysconfig/kubelet

Enable and start kubelet

systemctl enable --now kubelet

Pull the Kubernetes images

for i in `kubeadm config images list 2>/dev/null |sed 's/k8s.gcr.io\///g'`; do
    docker pull gcr.azk8s.cn/google-containers/${i}
    docker tag gcr.azk8s.cn/google-containers/${i} k8s.gcr.io/${i}
    docker rmi gcr.azk8s.cn/google-containers/${i}
done
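
A quick way to confirm the images were pulled and re-tagged (verification only):

docker images | grep 'k8s.gcr.io'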

Configure time synchronization

yum -y install ntpdate
ntpdate ntp1.aliyun.com
echo "*/5 * * * * root /usr/sbin/ntpdate ntp1.aliyun.com" >> /etc/crontab

Set the timezone

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Configure load balancing

We use the keepalived + HAProxy approach.

Install HAProxy on all nodes

yum -y install haproxy

Edit the configuration file

vim /etc/haproxy/haproxy.cfg

Append the following configuration:

frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:8443
    option               tcplog
    default_backend      kubernetes-apiserver
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      node1 172.16.145.160:6443 check
    server      node2 172.16.145.161:6443 check
    server      node3 172.16.145.162:6443 check
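
Before starting the service you can validate the configuration syntax (an optional sanity check):

haproxy -c -f /etc/haproxy/haproxy.cfg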

Enable and start HAProxy

systemctl enable haproxy --now

Install keepalived

yum install -y keepalived

Configuration on node1

cat > /etc/keepalived/keepalived.conf <<-'EOF'
! Configuration File for keepalived

global_defs {
   router_id k8s-master01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass weiliang
    }
    virtual_ipaddress {
        172.16.145.200/24
    }
}
EOF

Configuration on node2

cat > /etc/keepalived/keepalived.conf <<-'EOF'
! Configuration File for keepalived

global_defs {
   router_id k8s-master02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 140
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass weiliang
    }
    virtual_ipaddress {
        172.16.145.200/24
    }
}
EOF

Configuration on node3

cat > /etc/keepalived/keepalived.conf <<-'EOF'
! Configuration File for keepalived

global_defs {
   router_id k8s-master03
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 130
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass weiliang
    }
    virtual_ipaddress {
        172.16.145.200/24
    }
}
EOF

Enable and start keepalived

systemctl enable keepalived --now
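
Exactly one node should now hold the virtual IP. To see which one (a verification step, assuming the ens33 interface used above):

ip addr show ens33 | grep 172.16.145.200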

Create the cluster

Initialize node1

kubeadm init --control-plane-endpoint "172.16.145.200:8443" --upload-certs --ignore-preflight-errors=Swap
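
kubeadm init prints the join commands used in the steps below. If they are lost, or the bootstrap token (valid for 24 hours by default) or the uploaded certificate key (which expires after about two hours) runs out, they can be regenerated on node1:

kubeadm token create --print-join-command        # prints a fresh worker join command
kubeadm init phase upload-certs --upload-certs   # prints a fresh certificate key for control-plane joins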

Set up kubectl access on the control-plane node(s)

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the pod network (Calico)

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
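
You can watch the kube-system pods and wait for Calico and CoreDNS to reach Running before joining the other nodes (an optional check):

kubectl get pods -n kube-system -o wide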

Join the other master nodes (node2 and node3) as control planes

kubeadm join 172.16.145.200:8443 --token awtvoq.ljszotcg6j99uy66 \
    --discovery-token-ca-cert-hash sha256:7e902395862e37d768dc4df48300013ad5571902a52302b2443856fa565fd657 \
    --control-plane --certificate-key dcd50768c16c5f124b86248820eca802f44ed1e9e4f546661e0f4d81750ee7fa

Join worker nodes to the cluster

kubeadm join 172.16.145.200:8443 --token awtvoq.ljszotcg6j99uy66 \
    --discovery-token-ca-cert-hash sha256:7e902395862e37d768dc4df48300013ad5571902a52302b2443856fa565fd657

After node2 and node3 have joined as control-plane nodes, set up kubectl access on them as well:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config  
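
All three control-plane nodes should now show up in the node list; a quick verification from any of them:

kubectl get nodes -o wide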

Adding a new worker node to the cluster

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Disable swap

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sysctl -p

Set kernel parameters

modprobe br_netfilter
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all && yum makecache 

Install dependencies

yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp yum-utils device-mapper-persistent-data lvm2

Install kubelet and kubeadm

yum install -y kubelet kubeadm --disableexcludes=kubernetes

Install command completion (the kubectl completion lines only take effect if kubectl is also installed on this node)

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Pull the Kubernetes images

for i in `kubeadm config images list 2>/dev/null |sed 's/k8s.gcr.io\///g'`; do
    docker pull gcr.azk8s.cn/google-containers/${i}
    docker tag gcr.azk8s.cn/google-containers/${i} k8s.gcr.io/${i}
    docker rmi gcr.azk8s.cn/google-containers/${i}
done

Configure time synchronization

yum -y install ntpdate
ntpdate ntp1.aliyun.com
echo "*/5 * * * * root /usr/sbin/ntpdate ntp1.aliyun.com" >> /etc/crontab

Set the timezone

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Modify the kubelet configuration

sed -i "s;KUBELET_EXTRA_ARGS=;KUBELET_EXTRA_ARGS=\"--fail-swap-on=false\";g" /etc/sysconfig/kubelet

Enable and start kubelet

systemctl enable --now kubelet

Join the cluster

kubeadm join 172.16.145.200:8443 --token awtvoq.ljszotcg6j99uy66 \
        --discovery-token-ca-cert-hash sha256:7e902395862e37d768dc4df48300013ad5571902a52302b2443856fa565fd657

Set the hostname (better done before joining, since the kubelet registers the node under the hostname in effect at join time)

hostnamectl --static set-hostname work1

Check the node status

kubectl get node
(screenshot: work-node.png, kubectl get node output)
