Kubernetes Cluster Setup Notes


Author: N__G | Published 2020-07-23 19:54

Keywords

  • CentOS 7
  • Docker
  • Kubernetes
  • kubeadm
  • kubectl
  • kubelet
  • ingress
  • master
  • worker

VM Configuration: IPs and Hostnames

Role IP Hostname CPU RAM Disk
Master 192.168.2.120 k8s-master 2 Core 4 G 16 G
Ingress 192.168.2.130 k8s-ingress 2 Core 4 G 16 G
Worker 192.168.2.131 k8s-node01 2 Core 8 G 48 G
Worker 192.168.2.132 k8s-node02 2 Core 8 G 48 G
Worker 192.168.2.133 k8s-node03 2 Core 8 G 48 G
Worker 192.168.2.134 k8s-node04 2 Core 8 G 48 G
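The chrony client configuration later in these notes refers to k8s-master by hostname. Assuming no internal DNS is available, a minimal sketch of /etc/hosts entries matching the table above (append on every VM):

```shell
# Hostname entries matching the VM table above; append on every VM.
# Assumes no internal DNS is available.
cat >> /etc/hosts <<'EOF'
192.168.2.120 k8s-master
192.168.2.130 k8s-ingress
192.168.2.131 k8s-node01
192.168.2.132 k8s-node02
192.168.2.133 k8s-node03
192.168.2.134 k8s-node04
EOF
```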

Prepare the VMs

Install the Operating System

All VMs run the same OS version:

[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)

Disable SELinux and firewalld

Run the following commands on all VMs:

setenforce 0
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl stop firewalld.service && systemctl disable firewalld.service

Set the Time Zone and Locale

Run the following commands on all VMs:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'LANG="en_US.UTF-8"' | sudo tee -a /etc/profile; source /etc/profile

Configure Time Synchronization

The cluster VMs use chronyd for time synchronization: k8s-master acts as the time server, and the other nodes synchronize against it as clients.

Install chrony on all VMs:

yum install -y chrony

Server Configuration

Configuration file: /etc/chrony.conf

server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.2.0/24
local stratum 8
logdir /var/log/chrony

Client Configuration

Configuration file: /etc/chrony.conf

server k8s-master iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
local stratum 9
logdir /var/log/chrony
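After writing the server and client configs, chronyd must be enabled and restarted on every node for the changes to take effect; a short sketch:

```shell
# Enable chronyd at boot and restart it to pick up the new config.
systemctl enable chronyd
systemctl restart chronyd
# Optionally force an immediate clock step if the offset is large.
chronyc makestep
```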

Verify Time Synchronization

Server

[root@k8s-master ~]# chronyc sources -v
210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- ntp8.flashdance.cx            2  10   377   234  +1839us[+1839us] +/-  194ms
^* 119.28.183.184                2  10   377  109m  +3763us[+3713us] +/-   43ms
^+ tock.ntp.infomaniak.ch        1  10   377   201    +25ms[  +25ms] +/-  110ms
^- undefined.hostname.local>     2  10   377   119    -14ms[  -14ms] +/-  109ms

Client

[root@k8s-node01 ~]# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* k8s-master                    3   9   377   430  -3520ns[-4825ns] +/-   45ms

Performance Tuning

cat >> /etc/sysctl.conf<<EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
EOF
sysctl -p

Load Kernel Modules

Script: add_mod.sh

#!/bin/sh
mods=(
br_netfilter
ip6_udp_tunnel
ip_set
ip_set_hash_ip
ip_set_hash_net
iptable_filter
iptable_nat
iptable_mangle
iptable_raw
nf_conntrack_netlink
nf_conntrack
nf_conntrack_ipv4
nf_defrag_ipv4
nf_nat
nf_nat_ipv4
nf_nat_masquerade_ipv4
nfnetlink
udp_tunnel
veth
vxlan
x_tables
xt_addrtype
xt_conntrack
xt_comment
xt_mark
xt_multiport
xt_nat
xt_recent
xt_set
xt_statistic
xt_tcpudp
)
for mod in "${mods[@]}"; do
    modprobe "$mod"
    lsmod | grep "$mod"
done

Make the script executable and run it:

chmod a+x add_mod.sh
./add_mod.sh
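The modprobe calls above do not survive a reboot. One way to persist the list is a systemd-modules-load config (a sketch; the file name k8s.conf is an assumption):

```shell
# Write the same module list to /etc/modules-load.d so that
# systemd-modules-load reloads it on every boot.
mkdir -p /etc/modules-load.d
cat > /etc/modules-load.d/k8s.conf <<'EOF'
br_netfilter
ip6_udp_tunnel
ip_set
ip_set_hash_ip
ip_set_hash_net
iptable_filter
iptable_nat
iptable_mangle
iptable_raw
nf_conntrack_netlink
nf_conntrack
nf_conntrack_ipv4
nf_defrag_ipv4
nf_nat
nf_nat_ipv4
nf_nat_masquerade_ipv4
nfnetlink
udp_tunnel
veth
vxlan
x_tables
xt_addrtype
xt_conntrack
xt_comment
xt_mark
xt_multiport
xt_nat
xt_recent
xt_set
xt_statistic
xt_tcpudp
EOF
```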

Install and Configure Docker

Reference: https://kubernetes.io/docs/setup/production-environment/container-runtimes/

Add the user dockerli and grant sudo access

sudo adduser dockerli
sudo passwd dockerli
echo 'dockerli ALL=(ALL) ALL' | sudo tee -a /etc/sudoers

Install required system tools

sudo yum update -y
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion 

Add the Docker package repository

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Pin the Docker version

export docker_version=19.03.9

Install docker-ce

sudo yum makecache all
yum list docker-ce.x86_64 --showduplicates | sort -r | grep ${docker_version} | awk '{print $2}' | awk 'NR==1'
sudo yum -y install --setopt=obsoletes=0 docker-ce-${docker_version} docker-ce-cli-${docker_version} containerd.io
sudo usermod -aG docker dockerli
sudo systemctl enable docker

Configure Docker

Create the directory /etc/docker:

mkdir -p /etc/docker

Configuration file: /etc/docker/daemon.json

{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
        "https://7bezldxe.mirror.aliyuncs.com/"
    ], 
    "max-concurrent-downloads": 3, 
    "max-concurrent-uploads": 5, 
    "storage-driver": "overlay2", 
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ], 
    "log-driver": "json-file", 
    "log-opts": {
        "max-size": "100m", 
        "max-file": "3"
    }
}

Start Docker

sudo systemctl start docker
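After starting, it is worth confirming that Docker picked up the systemd cgroup driver from daemon.json, since kubelet expects the two to match; a quick check:

```shell
# Should print "systemd" if daemon.json was applied correctly.
docker info --format '{{ .CgroupDriver }}'
```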

Check the Docker version

[dockerli@master01 ~]$ docker --version
Docker version 19.03.11, build 42e35e61f3
[dockerli@master01 ~]$ sudo systemctl start docker
[dockerli@master01 ~]$ docker version
Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun  1 09:13:48 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d988398e7
  Built:            Fri May 15 00:24:05 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Disable Swap

Turn off all swap immediately:

swapoff -a

Comment out the swap line in /etc/fstab so that swap is not remounted on the next boot:

vi /etc/fstab
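Instead of editing by hand, the swap line can also be commented out with a one-liner (a sketch; back up /etc/fstab first):

```shell
# Back up /etc/fstab, then prefix every line that mounts swap with '#'.
# The substitution is idempotent: already-commented lines are unchanged.
cp /etc/fstab /etc/fstab.bak
sed -ri '/\sswap\s/ s/^#?/#/' /etc/fstab
```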

Reboot and verify:

reboot
free -h

The Swap line should show all zeros:

              total        used        free      shared  buff/cache   available
Mem:           3.7G        203M        3.1G        8.5M        384M        3.3G
Swap:            0B          0B          0B

Install kubeadm, kubelet, and kubectl

Configure the package repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install

Install kubelet, kubectl, and kubeadm on the master:

yum makecache fast 
yum install -y kubelet kubectl kubeadm

Install kubelet and kubeadm on the workers:

yum makecache fast 
yum install -y kubelet kubeadm
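On every node (master and workers), kubelet should be enabled so that kubeadm can bring it up; a short sketch:

```shell
# kubelet will crash-loop until kubeadm init/join provides its config;
# enabling it at boot is all that is needed at this stage.
systemctl enable kubelet
```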

Create the Kubernetes Cluster

Initialize the Master

Initialization

Run the following command on k8s-master:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.2.120

On success, you will see output like:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.120:6443 --token 5vmimy.bqdxbt0osdloxsoz \
    --discovery-token-ca-cert-hash sha256:19893841df818b60fcd8f004cbbe5241fde644b813d7d706c024f254250a7d1e

Configure kubectl

Run the following commands on k8s-master:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the Network Plugin

Reference: https://docs.projectcalico.org/getting-started/kubernetes/flannel/flannel

Run the following commands on k8s-master:

curl https://docs.projectcalico.org/manifests/canal.yaml -O
kubectl apply -f canal.yaml

Join the Ingress and Worker Nodes

Run the following command on k8s-ingress, k8s-node01, k8s-node02, k8s-node03, and k8s-node04 in turn:

kubeadm join 192.168.2.120:6443 --token 5vmimy.bqdxbt0osdloxsoz \
    --discovery-token-ca-cert-hash sha256:19893841df818b60fcd8f004cbbe5241fde644b813d7d706c024f254250a7d1e
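The bootstrap token in the join command above expires after 24 hours by default. If it has expired, a fresh join command can be generated on k8s-master:

```shell
# Create a new bootstrap token and print the matching join command.
kubeadm token create --print-join-command
```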

Configure Node Labels and Taints

Label and taint the ingress node

Run the following commands on k8s-master:

kubectl taint nodes k8s-ingress node-role.kubernetes.io/ingress='':NoSchedule
kubectl label nodes k8s-ingress node-role.kubernetes.io/ingress=''

Label the worker nodes

Run the following commands on k8s-master:

kubectl label nodes k8s-node01 node-role.kubernetes.io/worker=''
kubectl label nodes k8s-node02 node-role.kubernetes.io/worker=''
kubectl label nodes k8s-node03 node-role.kubernetes.io/worker=''
kubectl label nodes k8s-node04 node-role.kubernetes.io/worker=''

The result should look like this:

[root@k8s-master ~]# kubectl get nodes -o wide
NAME          STATUS   ROLES     AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-ingress   Ready    ingress   40d   v1.18.6   192.168.2.130   <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.9
k8s-master    Ready    master    40d   v1.18.6   192.168.2.120   <none>        CentOS Linux 7 (Core)   3.10.0-1127.10.1.el7.x86_64   docker://19.3.9
k8s-node01    Ready    worker    40d   v1.18.6   192.168.2.131   <none>        CentOS Linux 7 (Core)   3.10.0-1127.10.1.el7.x86_64   docker://19.3.9
k8s-node02    Ready    worker    40d   v1.18.6   192.168.2.132   <none>        CentOS Linux 7 (Core)   3.10.0-1127.10.1.el7.x86_64   docker://19.3.9
k8s-node03    Ready    worker    40d   v1.18.6   192.168.2.133   <none>        CentOS Linux 7 (Core)   3.10.0-1127.10.1.el7.x86_64   docker://19.3.9
k8s-node04    Ready    worker    40d   v1.18.6   192.168.2.134   <none>        CentOS Linux 7 (Core)   3.10.0-1127.10.1.el7.x86_64   docker://19.3.9

Source: https://www.haomeiwen.com/subject/iuxklktx.html