Deploying a Kubernetes v1.11.0 + Calico v3.1.3 Cluster

By Firetheworld | Published 2018-09-23 00:27

Contents

I. Preparation before deploying Kubernetes

1. System and software versions

2. System environment requirements before deployment

3. Kubernetes packages and the Kubernetes/Calico Docker images for the master and worker nodes

II. Deploying Kubernetes

1. Enable kubelet on boot and initialize the master node to create the cluster

2. Set up the master's environment variables and create the Calico network

3. Join the worker nodes to the cluster


I. Preparation before deploying Kubernetes

1. System and software versions

IP Address      Role     Hostname                Components
10.18.223.243   Master   k8s-node10-18-223-243   etcd, apiserver, controller-manager, scheduler, calico-node, kubelet, kubeadm, kubectl, kube-proxy, docker
10.18.223.244   Node     k8s-node10-18-223-244   kubelet, kubeadm, kubectl, kube-proxy, calico-node, docker
10.18.223.245   Node     k8s-node10-18-223-245   kubelet, kubeadm, kubectl, kube-proxy, calico-node, docker

Note: all three servers must be able to reach one another over the network.

  • CentOS 7.4 (1708)
  • Docker v1.13.1
  • Kubernetes v1.11.0
  • Calico v3.1.3

2. System environment requirements before deployment

  • Disable SELinux now and keep it disabled across reboots:
setenforce 0 && sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g"   /etc/selinux/config 
  • Stop the firewall and disable it on boot:
systemctl stop firewalld && systemctl disable firewalld.service
  • Turn off swap and keep it disabled across reboots:
sed  -i '/swap/s/^/# /' /etc/fstab && mount -a && swapoff -a
  • Set each hostname (following a naming convention) and add all of them to /etc/hosts:
hostnamectl set-hostname k8s-node10-18-223-243
hostnamectl set-hostname k8s-node10-18-223-244
hostnamectl set-hostname k8s-node10-18-223-245

Add to /etc/hosts:

10.18.223.243 k8s-node10-18-223-243
10.18.223.244 k8s-node10-18-223-244
10.18.223.245 k8s-node10-18-223-245
  • Synchronize the clocks: add an NTP server and start the service:
vi /etc/ntp.conf
server 10.17.87.8 prefer    # add the NTP server

systemctl start ntpd && systemctl enable ntpd && ntpq -p
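
To confirm these settings actually took effect on each server, a quick spot check helps (a minimal sketch; all of these are standard CentOS 7 commands, not part of the original run):

getenforce                      # should print Permissive (Disabled after a reboot)
systemctl is-enabled firewalld  # should print disabled
swapon -s                       # should print nothing, i.e. no active swap
ntpq -p                         # the NTP server added above should be listed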

3. Kubernetes packages and the Kubernetes/Calico Docker images

  • On the master node, load the following Docker images and install the following RPMs; the versions must match exactly.
[root@k8s-node10-18-223-243 kubelet.service.d]# docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager-amd64   v1.11.0             55b70b420785        2 months ago        155 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.0             0e4a34a3b0e6        2 months ago        56.8 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.0             1d3d7afd77d1        2 months ago        97.8 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.0             214c48e87f58        2 months ago        187 MB
quay.io/calico/node                        v3.1.3              7eca10056c8e        3 months ago        248 MB
quay.io/calico/typha                       v0.7.4              c8f53c1b7957        3 months ago        56.9 MB
quay.io/calico/cni                         v3.1.3              9f355e076ea7        3 months ago        68.8 MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        4 months ago        45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        5 months ago        219 MB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        9 months ago        742 kB
k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        9 months ago        742 kB
-rw-r--r-- 1 root root  4383318 Jun 29 00:08 cri-tools-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root  7906382 Jun 29 00:08 kubeadm-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root  7859238 Jun 29 00:08 kubectl-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root 19012178 Jun 29 00:08 kubelet-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root  9008838 Mar  5  2018 kubernetes-cni-0.6.0-0.x86_64.rpm
-rw-r--r-- 1 root root   296632 Aug 11  2017 socat-1.7.3.2-2.el7.x86_64.rpm
  • On the worker nodes, load the following images and install the following RPMs.
[root@k8s-node10-18-223-244 kubelet.service.d]# docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy-amd64   v1.11.0             1d3d7afd77d1        2 months ago        97.8 MB
quay.io/calico/node           v3.1.3              7eca10056c8e        3 months ago        248 MB
quay.io/calico/typha          v0.7.4              c8f53c1b7957        3 months ago        56.9 MB
quay.io/calico/cni            v3.1.3              9f355e076ea7        3 months ago        68.8 MB
k8s.gcr.io/pause-amd64        3.1                 da86e6ba6ca1        9 months ago        742 kB
k8s.gcr.io/pause              3.1                 da86e6ba6ca1        9 months ago        742 kB
-rw-r--r-- 1 root root  4383318 Jun 29 00:08 cri-tools-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root  7906382 Jun 29 00:08 kubeadm-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root  7859238 Jun 29 00:08 kubectl-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root 19012178 Jun 29 00:08 kubelet-1.11.0-0.x86_64.rpm
-rw-r--r-- 1 root root  9008838 Mar  5  2018 kubernetes-cni-0.6.0-0.x86_64.rpm
-rw-r--r-- 1 root root   296632 Aug 11  2017 socat-1.7.3.2-2.el7.x86_64.rpm
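
The listings above show what must be present, not how it gets there. On hosts without internet access, the usual route is a docker load per saved image archive plus a local yum install; the archive name below is hypothetical (use whatever tar files were produced with docker save):

docker load -i k8s-v1.11.0-images.tar   # repeat for each saved image archive
yum install -y ./*.rpm                  # installs all RPMs in the current directory, letting yum resolve ordering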

II. Deploying Kubernetes

1. Enable kubelet on boot and initialize the master node to create the cluster

 systemctl enable kubelet && systemctl start kubelet
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.11.0 --token-ttl=0 --apiserver-advertise-address=10.18.223.243

What the init flags mean:
--pod-network-cidr=192.168.0.0/16 sets the pod network to 192.168.0.0/16, which matches Calico's default IP pool.
--kubernetes-version=v1.11.0 pins the Kubernetes version; it must match the tags of the imported Docker images.
--apiserver-advertise-address sets the master node's IP address.
--token-ttl=0 sets the bootstrap token's lifetime; 0 means the token never expires (the default is 24h).

If initialization fails partway through, run kubeadm reset to undo it, fix the problem, and then re-run the init.
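
A minimal reset-and-retry sequence might look like this (the iptables flush is an assumption: kubeadm reset in this version does not clean up all iptables rules, and flushing them is only safe if nothing else on the host depends on them):

kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # optional cleanup, see caveat above
# then re-run kubeadm init with the same flags as before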

A successful master initialization looks like this:

[root@k8s-node10-18-223-243 images]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.11.0 --token-ttl=0 --apiserver-advertise-address=10.18.223.243
I0922 23:47:01.452709    9096 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0922 23:47:01.502414    9096 kernel_validator.go:81] Validating kernel version
I0922 23:47:01.502932    9096 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-node10-18-223-243 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.18.223.243]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-node10-18-223-243 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-node10-18-223-243 localhost] and IPs [10.18.223.243 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 46.503993 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-node10-18-223-243 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-node10-18-223-243 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node10-18-223-243" as an annotation
[bootstraptoken] using token: kukzil.xxr5xpslxccjg7jt
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.18.223.243:6443 --token kukzil.xxr5xpslxccjg7jt --discovery-token-ca-cert-hash sha256:1ec23b3024bcf66bbcc01aac24ae452cd6d59b1da62120e2b56dbb3deb01c6f4

  • Once the master has initialized successfully, record the join command the workers will need; keep it somewhere safe:
kubeadm join 10.18.223.243:6443 --token kukzil.xxr5xpslxccjg7jt --discovery-token-ca-cert-hash sha256:1ec23b3024bcf66bbcc01aac24ae452cd6d59b1da62120e2b56dbb3deb01c6f4
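
If the join command is ever lost, it can be reconstructed on the master: kubeadm token list shows the existing tokens (with --token-ttl=0 the token never expires), and the CA hash can be recomputed from the cluster CA certificate using the standard recipe from the kubeadm documentation:

kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'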

2. Set up the master's environment variables and create the Calico network

Without the environment variables set, kubectl fails like this:

[root@k8s-node10-18-223-243 images]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
  • Set the environment variables:
[root@k8s-node10-18-223-243 images]# mkdir -p $HOME/.kube
[root@k8s-node10-18-223-243 images]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
[root@k8s-node10-18-223-243 images]#  sudo chown $(id -u):$(id -g) $HOME/.kube/config
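
Alternatively, for a root shell, a one-off environment variable works without copying the file (standard kubeadm practice rather than anything specific to this article):

export KUBECONFIG=/etc/kubernetes/admin.conf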

  • Until the network is created, kubectl get pod -n kube-system shows the CoreDNS pods stuck in Pending:
NAME                                            READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-hz5gh                        0/1       Pending   0          6m
coredns-78fcdf6894-q676c                        0/1       Pending   0          6m
etcd-k8s-node10-18-223-243                      1/1       Running   0          5m
kube-apiserver-k8s-node10-18-223-243            1/1       Running   0          5m
kube-controller-manager-k8s-node10-18-223-243   1/1       Running   0          5m
kube-proxy-9w6t6                                1/1       Running   0          6m
kube-scheduler-k8s-node10-18-223-243            1/1       Running   0          5m
[root@k8s-node10-18-223-243 images]# 
  • Create the network with kubectl apply -f rbac-kdd.yaml followed by kubectl apply -f calico.yaml; the output is:
[root@k8s-node10-18-223-243 images]# kubectl apply -f rbac-kdd.yaml 
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created

[root@k8s-node10-18-223-243 images]# kubectl  apply -f calico.yaml 
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
daemonset.extensions/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
serviceaccount/calico-node created
  • The network was created successfully:
[root@k8s-node10-18-223-243 images]# kubectl get pod -n kube-system
NAME                                            READY     STATUS    RESTARTS   AGE
calico-node-255t2                               2/2       Running   0          44s
coredns-78fcdf6894-hz5gh                        1/1       Running   0          8m
coredns-78fcdf6894-q676c                        1/1       Running   0          8m
etcd-k8s-node10-18-223-243                      1/1       Running   0          7m
kube-apiserver-k8s-node10-18-223-243            1/1       Running   0          7m
kube-controller-manager-k8s-node10-18-223-243   1/1       Running   0          7m
kube-proxy-9w6t6                                1/1       Running   0          8m
kube-scheduler-k8s-node10-18-223-243            1/1       Running   0          7m
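
As each worker joins in the next step, it should get its own calico-node pod. The DaemonSet counters make that easy to watch (a quick check, not part of the original run):

kubectl get ds calico-node -n kube-system   # DESIRED/READY should equal the number of nodes
kubectl get pods -n kube-system -o wide     # shows which node each pod landed on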

3. Join the worker nodes to the cluster

  • Check the cluster nodes:
[root@k8s-node10-18-223-243 images]# kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
k8s-node10-18-223-243   Ready     master    10m       v1.11.0
  • Join node2 and node3 to the cluster:
[root@k8s-node10-18-223-244 ~]# kubeadm join 10.18.223.243:6443 --token kukzil.xxr5xpslxccjg7jt --discovery-token-ca-cert-hash sha256:1ec23b3024bcf66bbcc01aac24ae452cd6d59b1da62120e2b56dbb3deb01c6f4
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0923 00:00:14.205462    6502 kernel_validator.go:81] Validating kernel version
I0923 00:00:14.206088    6502 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.18.223.243:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.18.223.243:6443"
[discovery] Requesting info from "https://10.18.223.243:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.18.223.243:6443"
[discovery] Successfully established connection with API Server "10.18.223.243:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node10-18-223-244" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
  • Verify that all nodes have been added to the cluster:
[root@k8s-node10-18-223-243 images]# kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
k8s-node10-18-223-243   Ready     master    14m       v1.11.0
k8s-node10-18-223-244   Ready     <none>    2m        v1.11.0
k8s-node10-18-223-245   Ready     <none>    2m        v1.11.0
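
As a final smoke test, a throwaway deployment confirms that scheduling and the Calico pod network work end to end (a sketch; in v1.11 kubectl run still creates a Deployment):

kubectl run nginx --image=nginx --replicas=2   # creates a 2-replica Deployment
kubectl get pods -o wide                       # both pods should be Running on the workers with 192.168.x.x IPs
kubectl delete deployment nginx                # clean up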

With that, the Kubernetes v1.11.0 + Calico v3.1.3 cluster is up and running.
