Kubeadm provides two tools, kubeadm init and kubeadm join, as the best practice for quickly creating a Kubernetes cluster.
The K8S cluster in this article is built on CentOS; for the preparatory work, see:
[K8S Series 1] Building a Linux cluster with VirtualBox and Vagrant
[K8S Series 2] Installing Docker on CentOS
1 CentOS system configuration
# 01 Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# 02 Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# 03 Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# 04 Reset iptables and set the FORWARD chain policy to ACCEPT
iptables -F && iptables -X && iptables \
-F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# 05 Set kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# 06 Reload all sysctl configuration files
sysctl --system
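Before running the swap step above on a real host, the sed expression can be sanity-checked on a scratch copy; a minimal sketch (the fstab entries below are made-up samples, not taken from a real machine):

```shell
# Write a throwaway fstab with one swap entry (sample content, not a real fstab)
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /       xfs  defaults 0 0
/dev/mapper/centos-swap swap    swap defaults 0 0
EOF

# The same expression as in step 03: comment out every line mentioning swap
sed -i '/swap/s/^\(.*\)$/#\1/g' /tmp/fstab.demo

# Only the swap line should now start with '#'
cat /tmp/fstab.demo
```

Once the output looks right, the same sed can be applied to the real /etc/fstab.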
2 Install kubeadm, kubelet and kubectl
# 01 Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# 02 Install kubeadm, kubelet and kubectl
yum install -y kubeadm kubelet kubectl
# 03 Make docker and k8s use the same cgroup driver
# 3.1 Edit /etc/docker/daemon.json (create it if it does not exist); if it already exists, add the line below. Keep the file valid JSON
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
# 3.2 Restart docker (this step is required)
systemctl restart docker
# 3.3 It is fine if sed finds nothing to replace here
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# 3.4 Enable and start kubelet (this step is required)
systemctl enable kubelet && systemctl start kubelet
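A malformed /etc/docker/daemon.json will keep docker from starting, so it is worth validating the JSON before the restart in step 3.2. A sketch, demonstrated on a scratch copy (point it at the real /etc/docker/daemon.json on an actual host):

```shell
# Scratch copy of the daemon.json content from step 3.1
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on a syntax error, so this catches typos
# (a missing comma, a stray quote) before docker trips over them
if python3 -m json.tool /tmp/daemon.json > /dev/null; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: syntax error" >&2
fi
```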
3 Initialize the cluster
3.1 Because of the GFW, some images cannot be pulled from the default registry; either point kubeadm at a domestic mirror, or pull the images from a domestic mirror in advance.
coredns needs special handling here: pull it from Docker Hub and retag it under the registry name kubeadm expects.
docker pull coredns/coredns:1.8.6
docker tag coredns/coredns:1.8.6 registry.aliyuncs.com/google_containers/coredns:v1.8.6
3.2 Initialize the master node
- Run the following command to initialize the master node. 192.168.0.51 is the master's IP address, and --image-repository points at the Alibaba mirror:
kubeadm init --kubernetes-version=1.23.5 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--apiserver-advertise-address=192.168.0.51 \
--pod-network-cidr=10.244.0.0/16
The command prints output like the following:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.51:6443 --token 439uik.r1m3zwpgalub0563 \
--discovery-token-ca-cert-hash sha256:1fea5bbde95cb0d5bf00002019d225845e80ac3c657e1cae46ed9d32e691001e
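As an aside, the same init flags can also be expressed as a kubeadm configuration file and passed via `kubeadm init --config kubeadm-config.yaml`. A sketch assuming the v1beta3 kubeadm API (available in this Kubernetes version):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.5
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.51
```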
- Follow the prompt and run on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Run the following on the master; for now only the master node is listed:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 12m v1.23.5
- Run the following; the coredns pods are stuck in Pending:
kubectl get pods -n kube-system
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-65c54cc984-8cdgr 0/1 Pending 0 20s
kube-system coredns-65c54cc984-fvzp9 0/1 Pending 0 20s
kube-system etcd-master 1/1 Running 0 34s
kube-system kube-apiserver-master 1/1 Running 0 34s
kube-system kube-controller-manager-master 1/1 Running 0 34s
kube-system kube-proxy-7fkw2 1/1 Running 0 20s
kube-system kube-scheduler-master 1/1 Running 0 33s
They are Pending because no network plugin is installed yet; recall the earlier prompt "You should now deploy a pod network to the cluster". Here we choose Calico.
3.3 Install Calico
curl -O https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
Make sure the calico.yaml manifest matches your Kubernetes version; otherwise you will see errors like the following and the pods will then fail in various ways:
unable to recognize "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
After installation, run kubectl get pods -n kube-system again; this time all pods are Running:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-56fcbf9d6b-4dr86 1/1 Running 0 3m
kube-system calico-node-9sgm6 1/1 Running 0 3m
kube-system coredns-6d8c4cb4d-dfq9g 1/1 Running 0 24m
kube-system coredns-6d8c4cb4d-lhffl 1/1 Running 0 24m
kube-system etcd-master 1/1 Running 4 24m
kube-system kube-apiserver-master 1/1 Running 4 24m
kube-system kube-controller-manager-master 1/1 Running 4 24m
kube-system kube-proxy-rmwzt 1/1 Running 0 24m
kube-system kube-scheduler-master 1/1 Running 4 24m
3.4 Join the worker nodes
Run the following on worker nodes w1 and w2:
kubeadm join 192.168.0.51:6443 --token 439uik.r1m3zwpgalub0563 \
--discovery-token-ca-cert-hash sha256:1fea5bbde95cb0d5bf00002019d225845e80ac3c657e1cae46ed9d32e691001e
Back on the master, all three nodes now show up:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 12m v1.23.5
w1 Ready <none> 5m37s v1.23.5
w2 Ready <none> 5m37s v1.23.5
4 A simple example
Use K8S to create Nginx pods
# 01 Create a working directory
mkdir pod_nginx_rs
# 02 Change into the working directory
cd pod_nginx_rs
# 03 Create the yaml file
cat > pod_nginx_rs.yaml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
# 04 Create the pods from the file
kubectl apply -f pod_nginx_rs.yaml
# 05 Inspect the pods
kubectl get pods
kubectl get pods -o wide
kubectl describe pod nginx
# 06 Scale the pods up through the ReplicaSet
kubectl scale rs nginx --replicas=4
kubectl get pods -o wide
# 07 Delete the pods
kubectl delete -f pod_nginx_rs.yaml
5 Some commonly used commands
# Show details of a specific resource or group of resources
kubectl describe pods calico-node-mcznh -n kube-system
# Print the logs for a container in a pod
kubectl logs calico-node-mcznh -n kube-system
# Remove node mynode from the cluster
kubectl drain mynode
kubectl delete node mynode
# Run on mynode itself
kubeadm reset
# If you have lost the kubeadm join command printed by kubeadm init, regenerate it
kubeadm token generate
kubeadm token create <generated-token> --print-join-command --ttl=0
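The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate with the openssl pipeline documented by Kubernetes. The sketch below runs it against a throwaway self-signed certificate so the pipeline is testable anywhere; on a real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA cert purely for demonstration
# (on a real master, skip this and use /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null

# Extract the public key, DER-encode it, and take its SHA-256 digest;
# this is exactly how kubeadm derives the discovery hash
HASH=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${HASH}"
```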