step 1 Set up the virtual machine.
Assign at least 4 processors to the virtual machine.
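You can verify the CPU count from inside the VM (kubeadm's preflight check requires at least 2 CPUs on the control-plane node):
# should print 4 or more
nproc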
step 2 Set a static IP
Edit the file /etc/netplan/01-network-manager-all.yaml
Content:
# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s3:                            # interface name, get from ifconfig
      dhcp4: no                        # disable DHCP since we set a static address
      addresses: [192.168.1.190/24]    # change
      gateway4: 192.168.1.1            # change
      # nameservers:
      #   addresses: [114.114.114.114] # change
Apply the change
sudo netplan apply
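To confirm the address took effect, you can inspect the interface (assuming it is named enp0s3, as above):
# the static address 192.168.1.190/24 should appear under enp0s3
ip addr show enp0s3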
step 3 Prepare openssh-server
# Install
sudo apt install openssh-server
# start ssh server
sudo service ssh start
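You can check that the server came up:
# should report active (running)
sudo service ssh status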
step 4 Shut down the firewall
# show ufw status
sudo ufw status
# disable ufw
sudo ufw disable
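After disabling, sudo ufw status should report:
Status: inactive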
step 5 Disable swap
Edit the file /etc/fstab and comment out the line that includes swap.
Restart the OS and run the free command.
The swap should be zero, like below:
free
Swap: 0 0 0
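If you prefer not to reboot, swap can also be turned off immediately (the fstab edit still keeps it off after the next boot):
# disable all swap devices right away
sudo swapoff -a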
step 6 Install docker
containerd is also fine; we use docker here.
sudo apt install docker.io
Change the control group driver to systemd
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://uy35zvn6.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Reload
sudo systemctl daemon-reload
sudo systemctl restart docker
Check docker.
docker version
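You can also confirm that the cgroup driver change took effect:
# should print: Cgroup Driver: systemd
sudo docker info | grep -i 'cgroup driver'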
step 7 Let iptables see bridged traffic.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
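Note that /etc/modules-load.d/ only takes effect at boot. To load the module immediately and verify the settings:
# load br_netfilter now without rebooting
sudo modprobe br_netfilter
# confirm the module is loaded
lsmod | grep br_netfilter
# both values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables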
step 8 Set up apt sources
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
# Add GPG
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Add k8s apt source
sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF
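You can sanity-check that the new source works:
sudo apt-get update
# list a few of the kubeadm versions the mirror offers
apt-cache madison kubeadm | head -n 5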
step 9 Install kubeadm, kubelet, kubectl
sudo apt-get update
sudo apt-get install -y kubelet=1.22.2-00 kubeadm=1.22.2-00 kubectl=1.22.2-00
sudo apt-mark hold kubelet kubeadm kubectl
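A quick check that the pinned versions are in place:
# both should report v1.22.2
kubeadm version
kubectl version --client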
Initialize the k8s cluster. This will take a few minutes, since the control-plane images are pulled first.
# apiserver-advertise-address should be the IP of the VM.
sudo kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.2 \
--pod-network-cidr=192.168.0.0/16 \
--apiserver-advertise-address=192.168.1.190
Output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.190:6443 --token <token> \
--discovery-token-ca-cert-hash <hash>
Just follow the output to set kubectl config.
If we run kubectl get node, we'll find that the node is not ready, because we haven't installed a network plugin yet.
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s1 NotReady control-plane,master 6m32s v1.22.2
Running journalctl -xeu kubelet shows the following error:
"Unable to update cni config" err="no networks found in /etc/cni/net.d"
step 10 Install Calico
Remove the taint of the master node, otherwise Calico pods cannot be scheduled to the master node.
kubectl taint nodes --all node-role.kubernetes.io/master-
Calico Quick Start
Apply the Calico yaml files.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/custom-resources.yaml
Installing Calico may take a few minutes.
We can check the Calico pod status by running:
kubectl get pod -n calico-system
Output:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-78687bb75f-5gmjw 1/1 Running 0 8m29s
calico-node-kqwns 1/1 Running 0 8m29s
calico-typha-859b477db7-vtzbs 1/1 Running 0 8m29s
csi-node-driver-k5qdf 2/2 Running 0 5m20s
If all Calico pods are running, check the node status again.
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s1 Ready control-plane,master 30m v1.22.2
Node is ready now.
Check the component statuses (cs).
kubectl get cs
Output:
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
The scheduler is unhealthy. We need to delete the --port=0 flag from the files kube-controller-manager.yaml and kube-scheduler.yaml in the directory /etc/kubernetes/manifests/, then restart kubelet by running systemctl restart kubelet.service. We'll see that all components are healthy now.
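A quick way to make that edit (a sketch, assuming the default manifest paths):
# remove the --port=0 line from both static pod manifests
sudo sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
sudo sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
# restart kubelet so the static pods are recreated
sudo systemctl restart kubelet.service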
Output:
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
step 11 Add other nodes to the cluster.
Repeat steps 1 to 10 on the new machine, but don't run kubeadm init; run the kubeadm join command from the kubeadm init output instead. It will take a few minutes for a calico pod to be created on the new node.
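If the join command was lost or the token has expired (tokens are valid for 24 hours by default), a fresh one can be printed on the control-plane node:
# prints a ready-to-use kubeadm join command
sudo kubeadm token create --print-join-command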
kubectl get pod -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-78687bb75f-5gmjw 1/1 Running 0 78m
calico-node-87bl4 1/1 Running 0 8m12s
calico-node-kqwns 1/1 Running 0 78m
calico-typha-859b477db7-vtzbs 1/1 Running 0 78m
csi-node-driver-k5qdf 2/2 Running 0 75m
csi-node-driver-tjr26 2/2 Running 0 4m10s
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s1 Ready control-plane,master 100m v1.22.2
k8s2 Ready <none> 8m33s v1.22.2