A highly available Kubernetes cluster needs:
- Highly available API server components
- A highly available etcd cluster, which requires at least 3 servers (etcd needs a majority quorum of floor(n/2)+1 members, so a 3-member cluster tolerates the loss of one member)
Therefore, a highly available Kubernetes cluster needs at least 3 master nodes and 2 worker nodes.
Pre-installation checklist:
- Minimum hardware per server: 2 CPU cores, 2 GB RAM
- The hostname, MAC address, and product_uuid of every server must be unique

```bash
# Check the product_uuid
cat /sys/class/dmi/id/product_uuid
```
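The hostname and interface MAC addresses can be checked the same way:

```bash
# Check the hostname
hostname
# List network interfaces with their MAC addresses
ip link
```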
- Set SELinux to permissive mode

```bash
# Switch SELinux to permissive mode now and persist it across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
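To verify the change took effect, `getenforce` should now report `Permissive`:

```bash
getenforce
# Permissive
```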
- Enable iptables/IPVS traffic forwarding

```bash
# Load br_netfilter so the bridge sysctls below exist
modprobe br_netfilter

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# Check that the br_netfilter module is loaded
lsmod | grep br_netfilter
```
- Disable swap. See "Disabling swap on CentOS"; a minimal sketch follows.
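A minimal sketch of disabling swap, assuming the swap entry lives in /etc/fstab:

```bash
# Turn off swap for the running system
swapoff -a
# Comment out the swap entry so it stays off after reboot
sed -i '/ swap / s/^/#/' /etc/fstab
```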
- The following ports must not be in use by other programs:

Protocol | Port Range | Purpose | Used By
---|---|---|---
TCP | 6443 | Kubernetes API server | All
TCP | 2379-2380 | etcd server client API | kube-apiserver, etcd
TCP | 10250 | Kubelet API | Self, Control plane
TCP | 10251 | kube-scheduler | Self
TCP | 10252 | kube-controller-manager | Self
TCP | 30000-32767 | NodePort Services | All
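One way to confirm a port is free, checking 6443 as an example:

```bash
# No output means nothing is listening on port 6443
ss -tlnp | grep ':6443'
```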
Install IPVS
```bash
# Load the modules required by IPVS on every boot
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

# Check that the IPVS modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Install the IPVS management tools
yum install -y ipset
yum install -y ipvsadm
touch /etc/sysconfig/ipvsadm
systemctl start ipvsadm
systemctl enable ipvsadm
```
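As a quick sanity check, `ipvsadm -Ln` should run without error; before any cluster components are running, the virtual server table is expected to be empty:

```bash
# List current IPVS virtual servers (empty on a fresh node)
ipvsadm -Ln
```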
Install Docker
Every master and worker node needs Docker installed before kubeadm is run; a minimal sketch follows.
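A minimal install sketch for CentOS 7, assuming the upstream docker-ce repository (substitute a local mirror if downloads are slow):

```bash
# Add the docker-ce repository and install Docker
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker
systemctl start docker
```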
Install Keepalived + HAProxy
To make the Kubernetes cluster highly available, Keepalived + HAProxy is placed in front of the API servers: HAProxy load-balances requests across the masters, and Keepalived provides the virtual IP (VIP) through which the API server is reached.
See "Installing Keepalived on CentOS"
See "Installing HAProxy on CentOS"
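Minimal configuration sketches, assuming the addresses used in the rest of this guide: the VIP 192.168.1.244, HAProxy listening on 26443, and master01 at 192.168.1.108. The second master's address and the NIC name are placeholders to adjust.

```
# /etc/haproxy/haproxy.cfg (fragment)
frontend k8s-apiserver
    bind *:26443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master01 192.168.1.108:6443 check
    # one "server" line per additional master, e.g.
    # server master02 <master02-ip>:6443 check
```

```
# /etc/keepalived/keepalived.conf (fragment)
vrrp_instance VI_1 {
    state MASTER                # BACKUP on the other masters
    interface eth0              # placeholder: your NIC name
    virtual_router_id 51
    priority 100                # lower value on BACKUP nodes
    virtual_ipaddress {
        192.168.1.244           # the API server VIP
    }
}
```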
Install kubeadm, kubelet, and kubectl
- Add a China-mirror Kubernetes yum repo (the upstream Google repo is kept but disabled)
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes-aliyun]
name=Kubernetes-aliyun
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=0
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```
- Install kubeadm, kubelet, and kubectl

```bash
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
systemctl restart kubelet
# kubelet will keep restarting until kubeadm init/join has run; that is expected
```
Initialize the Kubernetes cluster
- Prepare the kubeadm-init.yaml file
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
controlPlaneEndpoint: 192.168.1.244:26443  # VIP address of the API server
networking:
  dnsDomain: cluster.local      # cluster domain suffix
  serviceSubnet: 10.243.0.0/16  # service IP range
  podSubnet: 10.244.0.0/16      # pod IP range
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # use IPVS for traffic forwarding; the default is iptables
```
- Pull the required images on every node
```bash
# Pull the images from the Aliyun mirror
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.1

# Re-tag the images with the names kubeadm expects
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# Remove the mirror tags
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
docker rmi registry.aliyuncs.com/google_containers/pause:3.1
docker rmi registry.aliyuncs.com/google_containers/etcd:3.3.10
docker rmi registry.aliyuncs.com/google_containers/coredns:1.3.1
```
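The tag-and-remove steps can also be written as a loop; an equivalent sketch:

```bash
for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker tag "registry.aliyuncs.com/google_containers/$img" "k8s.gcr.io/$img"
  docker rmi "registry.aliyuncs.com/google_containers/$img"
done
```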
- Initialize the first master node
```
kubeadm init --config kubeadm-init.yaml --upload-certs

# On success, the console prints:
Your Kubernetes control-plane has initialized successfully!

# join command for additional master nodes:
kubeadm join 192.168.1.244:26443 --token 32y87c.ux6hmk9tkjqntoz3 \
    --discovery-token-ca-cert-hash sha256:bc91781ae11a22b773ff918da00d78685f5548a4891714c276b6a28aac084ce0 \
    --control-plane --certificate-key c1e22a571bb9a671222b3175f5969694670d152318a890c7c8b07231cb2cf4cb
......
# join command for worker nodes:
kubeadm join 192.168.1.244:26443 --token 32y87c.ux6hmk9tkjqntoz3 \
    --discovery-token-ca-cert-hash sha256:bc91781ae11a22b773ff918da00d78685f5548a4891714c276b6a28aac084ce0
```
- Copy the kubectl config file into the user's home directory

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- Next, install the flannel network add-on (podSubnet was set to 10.244.0.0/16 above, which matches flannel's default pod network)

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
- Check that all pods start correctly

```
kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-5c98db65d4-59hc6           1/1     Running   0          122m   10.244.0.6      master01   <none>           <none>
kube-system   coredns-5c98db65d4-n7nf6           1/1     Running   0          122m   10.244.0.7      master01   <none>           <none>
kube-system   etcd-master01                      1/1     Running   0          121m   192.168.1.108   master01   <none>           <none>
kube-system   kube-apiserver-master01            1/1     Running   0          122m   192.168.1.108   master01   <none>           <none>
kube-system   kube-controller-manager-master01   1/1     Running   0          122m   192.168.1.108   master01   <none>           <none>
kube-system   kube-flannel-ds-amd64-tzf78        1/1     Running   0          121m   192.168.1.108   master01   <none>           <none>
kube-system   kube-proxy-5b6xr                   1/1     Running   0          122m   192.168.1.108   master01   <none>           <none>
kube-system   kube-scheduler-master01            1/1     Running   0          121m   192.168.1.108   master01   <none>           <none>
```
- Add the other master nodes

```bash
# On each additional master, join the cluster with the token generated on the first master
kubeadm join 192.168.1.244:26443 --token 32y87c.ux6hmk9tkjqntoz3 \
    --discovery-token-ca-cert-hash sha256:bc91781ae11a22b773ff918da00d78685f5548a4891714c276b6a28aac084ce0 \
    --control-plane --certificate-key c1e22a571bb9a671222b3175f5969694670d152318a890c7c8b07231cb2cf4cb

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- Add the worker nodes

```bash
# On each worker node, join the cluster with the token generated on the first master
kubeadm join 192.168.1.244:26443 --token 32y87c.ux6hmk9tkjqntoz3 \
    --discovery-token-ca-cert-hash sha256:bc91781ae11a22b773ff918da00d78685f5548a4891714c276b6a28aac084ce0
```
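Once all nodes have joined, their status can be checked from any master; every node should eventually report `Ready`:

```bash
kubectl get nodes
```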
Reset a node
- Remove the node from the cluster
```bash
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
```
- On the node itself, remove the Kubernetes components and reset iptables/IPVS

```bash
kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear
rm -rf ~/.kube/
```
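`kubeadm reset` leaves CNI configuration in place; if the node will rejoin a cluster that uses a different network add-on, removing the leftover flannel config is a sensible extra step:

```bash
# Remove leftover CNI configuration files
rm -rf /etc/cni/net.d
```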