Set Up K8s Cluster by Kubeadm

Author: 海胆阶段 | Published 2019-02-17 01:15

This article walks through setting up a Kubernetes cluster with kubeadm manually. I will keep updating it as new versions are released or configurations change, and later I will revisit the installation and automate it with Ansible.

Prerequisite

I have a three-node bare-metal cluster called myk8s running CentOS 7.5. The /etc/hosts file on each node looks like the snippet below; these are the internal IPs used inside the cluster. The master node myk8s1 also has a public IP, which is used to expose services to the outside world.

172.16.158.44    myk8s1.fyre.ibm.com myk8s1
172.16.171.110   myk8s2.fyre.ibm.com myk8s2
172.16.171.227   myk8s3.fyre.ibm.com myk8s3

Make sure every node has an IP address assigned:

ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.158.44  netmask 255.255.0.0  broadcast 172.16.255.255
        ether 00:16:3e:01:9e:2c  txqueuelen 1000  (Ethernet)
        RX packets 1617615790  bytes 203050237209 (189.1 GiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 436  bytes 50037 (48.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 9.30.97.218  netmask 255.255.254.0  broadcast 9.30.97.255
        ether 00:20:09:1e:61:da  txqueuelen 1000  (Ethernet)
        RX packets 13350021  bytes 1424223654 (1.3 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 246436  bytes 45433438 (43.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We can see this node has two physical network interfaces, eth0 and eth1; eth1 is the public-facing one.

Configure and Install Components

=== perform the following steps on every node in the cluster ===

Install utilities

yum update -y
yum install -y vim
yum install -y git

Disable firewall

Check the firewall status and disable it if it is active:

systemctl status firewalld
systemctl disable firewalld
systemctl stop firewalld
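
To confirm afterwards that the firewall is really off, the following should report inactive:

systemctl is-active firewalld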

Install kubeadm, kubelet and kubectl

Add the Kubernetes yum repository, then install the packages:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

Setting SELinux to permissive mode with setenforce 0 and the sed command above effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks, for example. You have to do this until SELinux support is improved in the kubelet.

This installs the latest version, which currently is:

Installed:
  kubeadm.x86_64 0:1.13.3-0               kubectl.x86_64 0:1.13.3-0               kubelet.x86_64 0:1.13.3-0
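
To double-check what actually got installed on a node, commands like these should work (the exact output format can vary between releases):

kubeadm version -o short
kubelet --version
kubectl version --client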

Next, check your /etc/sysctl.conf file. For example, this is mine:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.ip_forward = 0
...

Ensure that these three options exist and are set to 1, because some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

If these three items are not set, edit net.ipv4.ip_forward = 1 and append net.bridge.bridge-nf-call-ip6tables = 1 and net.bridge.bridge-nf-call-iptables = 1 to the sysctl.conf file, or use a drop-in file as sketched below.
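
A minimal sketch that puts all three settings in a sysctl drop-in file instead of editing /etc/sysctl.conf by hand (the file name k8s.conf is just a convention; any name under /etc/sysctl.d/ works):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF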

Then make sure the net.bridge.bridge-nf-call settings can take effect by checking whether the br_netfilter module is loaded:

lsmod | grep br_netfilter

If it is not, load it explicitly:

modprobe br_netfilter
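
Note that modprobe only loads the module until the next reboot; if you also want it loaded automatically at boot, one option is a modules-load.d entry (the file name here is just an example, any .conf name works):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf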

Next, run this command to reload the settings:

sysctl --system

Then you can check the final settings:

sysctl -a | grep -E "net.bridge|net.ipv4.ip_forward"

Install docker

CRI installation in Kubernetes

Uninstall old versions

Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies.

yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

Official Docker installation guides

Install Docker CE

Currently Docker version 18.06.2 is recommended, but versions 1.11, 1.12, 1.13 and 17.03 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.

## Set up the repository
yum install yum-utils device-mapper-persistent-data lvm2

## Add docker repository.
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

## Install docker ce.
yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload
systemctl restart docker
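
If you also want Docker to start automatically after a reboot, enable the unit as well:

systemctl enable docker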

Check the result:

[root@myk8s1 ~] docker version
Client:
 Version:           18.06.2-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        6d37f41
 Built:             Sun Feb 10 03:46:03 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.2-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       6d37f41
  Built:            Sun Feb 10 03:48:29 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Disable swap

Why do we need to disable swap?
The idea of Kubernetes is to pack instances as close to 100% utilization as possible, and all deployments should be pinned with CPU/memory limits. So if the scheduler sends a pod to a machine, that pod should never need to use swap; swapping would only slow things down. Disabling it is mainly for performance.

In the /etc/fstab file, comment out the swap entry, which looks like this:

/dev/mapper/centos-swap swap                    swap    defaults        0 0
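
If you prefer doing this non-interactively, a small sketch (GNU sed assumed) that comments out any uncommented fstab line whose mount type is swap:

sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab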

Turn swap off for the running system and check:

swapoff -a
[root@myk8s3 ~] free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        189M        5.7G        136M        1.8G        7.0G
Swap:            0B          0B          0B

=== the worker nodes are done at this point; continue with the following steps on the master node only ===

Initialize kubernetes cluster

I will use Calico as the container network solution. On the master node, run:

kubeadm init --pod-network-cidr=192.168.0.0/16

You should see output like this:

...
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 9.30.97.218:6443 --token jjkiw2.n478eree0wrr3bmc --discovery-token-ca-cert-hash sha256:79659fb0b3fb0044f382ab5a5e317d4f775e821a61d0df4a401a4cbd8d8c5a7f


Keep the last command for joining the worker nodes later:

kubeadm join 9.30.97.218:6443 --token jjkiw2.n478eree0wrr3bmc --discovery-token-ca-cert-hash sha256:79659fb0b3fb0044f382ab5a5e317d4f775e821a61d0df4a401a4cbd8d8c5a7f

Then run the following commands on the master node:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
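
Alternatively, since I am running these as root, exporting KUBECONFIG achieves the same thing for the current shell session:

export KUBECONFIG=/etc/kubernetes/admin.conf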

Now if you run kubectl version, you will get something like this:

[root@myk8s1 ~] kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Let's check which Docker images were pulled to create the cluster on the master node:

[root@myk8s1 ~] docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-apiserver            v1.13.3             fe242e556a99        2 weeks ago         181MB
k8s.gcr.io/kube-controller-manager   v1.13.3             0482f6400933        2 weeks ago         146MB
k8s.gcr.io/kube-proxy                v1.13.3             98db19758ad4        2 weeks ago         80.3MB
k8s.gcr.io/kube-scheduler            v1.13.3             3a6f709e97a0        2 weeks ago         79.6MB
k8s.gcr.io/coredns                   1.2.6               f59dcacceff4        3 months ago        40MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        4 months ago        220MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        14 months ago       742kB

Launch cluster network

[root@myk8s1 ~] kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-5dfh9                      0/1     Pending   0          9m30s
kube-system   coredns-86c58d9df4-d9bfm                      0/1     Pending   0          9m30s
kube-system   etcd-myk8s1.fyre.ibm.com                      1/1     Running   0          8m52s
kube-system   kube-apiserver-myk8s1.fyre.ibm.com            1/1     Running   0          8m37s
kube-system   kube-controller-manager-myk8s1.fyre.ibm.com   1/1     Running   0          8m34s
kube-system   kube-proxy-wxjx8                              1/1     Running   0          9m31s
kube-system   kube-scheduler-myk8s1.fyre.ibm.com            1/1     Running   0          8m46s

You can see that some pods are not ready, for example coredns-86c58d9df4-5dfh9 and coredns-86c58d9df4-d9bfm, and the master node is NotReady as well:

[root@myk8s1 ~] kubectl get nodes
NAME                  STATUS     ROLES    AGE   VERSION
myk8s1.fyre.ibm.com   NotReady   master   11m   v1.13.3

It's time to set up the network. First figure out which Calico version you need: checking the Kubernetes release notes, we see it currently ships with Calico v3.3.1:

*   Calico was updated to v3.3.1 ([#70932](https://github.com/kubernetes/kubernetes/pull/70932))

You can also refer to this link to install:

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

after applying rbac-kdd.yaml and calico.yaml, now you can see

[root@myk8s1 ~] kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   calico-node-4vm2c                             2/2     Running   0          45s
kube-system   coredns-86c58d9df4-5dfh9                      1/1     Running   0          37m
kube-system   coredns-86c58d9df4-d9bfm                      1/1     Running   0          37m
kube-system   etcd-myk8s1.fyre.ibm.com                      1/1     Running   0          36m
kube-system   kube-apiserver-myk8s1.fyre.ibm.com            1/1     Running   0          36m
kube-system   kube-controller-manager-myk8s1.fyre.ibm.com   1/1     Running   0          36m
kube-system   kube-proxy-wxjx8                              1/1     Running   0          37m
kube-system   kube-scheduler-myk8s1.fyre.ibm.com            1/1     Running   0          36m
[root@myk8s1 ~] kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
myk8s1.fyre.ibm.com   Ready    master   38m   v1.13.3

Note: I encountered a problem where, after joining the worker nodes, the calico-node pods became not ready:

[root@myk8s1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   calico-node-4vm2c                             1/2     Running   0          11m
kube-system   calico-node-zsbjj                             1/2     Running   0          96s
kube-system   coredns-86c58d9df4-5dfh9                      1/1     Running   0          48m
...

The reason is that my master node has multiple network interfaces, so I need to specify which one Calico should use in order to be consistent across all nodes.

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Delete the previous Calico deployment, then edit the downloaded calico.yaml and apply it again with the interface pinned, as shown below.
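
A minimal sketch of the delete/re-apply step, assuming the same calico.yaml manifest URL that was applied above:

kubectl delete -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
## edit the downloaded calico.yaml as described below, then re-apply the local copy
kubectl apply -f calico.yaml

The edit adds IP_AUTODETECTION_METHOD next to the existing IPIP setting: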

# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth0"
[root@myk8s1 ~] kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   calico-node-dpcsp                             2/2     Running   0          6m15s
kube-system   calico-node-gc5hs                             2/2     Running   0          6m15s
kube-system   coredns-86c58d9df4-5dfh9                      1/1     Running   0          81m
...

Join worker nodes

Joining is pretty easy; just run this command on all worker nodes:

kubeadm join 9.30.97.218:6443 --token jjkiw2.n478eree0wrr3bmc --discovery-token-ca-cert-hash sha256:79659fb0b3fb0044f382ab5a5e317d4f775e821a61d0df4a401a4cbd8d8c5a7f

Check the node status:

[root@myk8s1 ~] kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
myk8s1.fyre.ibm.com   Ready    master   83m   v1.13.3
myk8s2.fyre.ibm.com   Ready    <none>   36m   v1.13.3
myk8s3.fyre.ibm.com   Ready    <none>   33s   v1.13.3

By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node:

kubeadm token create

If you don’t have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the master node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
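
Alternatively, kubeadm can print a complete join command, including a fresh token and the CA cert hash, in one go:

kubeadm token create --print-join-command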

Now I have a fresh Kubernetes cluster with one master and two worker nodes.
