(3) Deploying k8s v1.12.2: Initializing the Master Node

By 枝头残月野狼嚎嗷嗷呜 | Published 2018-11-02 11:18

Reference (official docs):
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Under normal circumstances, initializing the master node is very simple; you only need to run the command below.

$ kubeadm init

However, because of the firewall, some docker images required during initialization cannot be downloaded, so we need to do some extra preparation first.

With the downloads blocked, running the command above produces output like the following, with many image pulls failing.


[Screenshot: image pulls failing during initialization]

How do you download these blocked images? A quick search turns up many articles covering this, so I won't repeat them here; this post explains it well:
https://blog.csdn.net/yjf147369/article/details/80290881

Let me just briefly explain the idea:
1. Use Baidu Cloud's image registry or Docker Hub to set up a private image repository.
2. Set up a code hosting repository (e.g. GitHub, or Alibaba Cloud code hosting at https://code.aliyun.com/), add the Dockerfiles you need, and link the repo to the private image registry so the registry can build the images for you. Since the registry's build servers are overseas, they are not blocked.
3. Use docker to pull the images from the private registry to the local machine, then use `docker tag` to rename them to exactly the image names that kubeadm's error messages ask for.
4. Run kubeadm again; since the images now exist locally, initialization uses them directly instead of pulling from the network.
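Steps 3 and 4 above can be sketched as a small shell loop. Note that `registry.example.com/k8s` below is a placeholder for your own private registry path, and this assumes the mirror keeps the original image names and tags:

```shell
#!/bin/sh
# Placeholder: replace with your own private registry path
MIRROR="registry.example.com/k8s"

IMAGES="
k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
"

for img in $IMAGES; do
  # Strip the k8s.gcr.io/ prefix, e.g. -> kube-apiserver:v1.12.2
  name=${img#k8s.gcr.io/}
  docker pull "$MIRROR/$name"          # pull from the reachable mirror
  docker tag  "$MIRROR/$name" "$img"   # retag to the name kubeadm expects
done
```

After the loop, `docker images` should list all seven images under their `k8s.gcr.io/...` names.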

Which images are needed, and their exact names, can be read from the kubeadm init error messages. Here is the list I extracted from mine, for reference:

k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

With the images in place, kubeadm init completes and prints output like the following:

[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

Following the prompts above, three more steps are needed.
1. Save the last command of the output
You'll need it shortly: when setting up the worker nodes, this command is what joins them to the cluster.

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
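In case the join command gets lost: the bootstrap token expires after 24 hours by default, but on the master you can mint a fresh join command at any time, and the CA-cert hash can be recomputed by hand. This sketch follows the standard kubeadm/openssl recipe:

```shell
# Print a brand-new, complete join command (creates a fresh bootstrap token):
kubeadm token create --print-join-command

# Recompute the --discovery-token-ca-cert-hash value from the cluster CA:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | sha256sum | cut -d' ' -f1
```

Both commands must run on the master, since they need the cluster CA and API access.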

2. Run the following commands to finish setting up your kubeconfig:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
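As a side note, if you operate as root, the kubeadm docs also allow pointing kubectl at the admin kubeconfig directly via an environment variable, instead of copying it:

```shell
# Alternative for the root user: no copy needed, just point kubectl
# at the generated admin kubeconfig.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes   # needs a live cluster to actually respond
```

The copy-to-`$HOME/.kube/config` approach above is still preferable for regular users, since it survives reboots and new shells without extra setup.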

3. Install a network plugin
K8s offers many network plugins to choose from; the official docs linked above give installation instructions for each.
Here I use Weave Net:

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
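The `$(...)` subshell in that URL deserves a note: it base64-encodes the full `kubectl version` output (client and server versions) so that the Weave server can return a manifest matching your cluster version. The same pipeline, demonstrated on a fixed string:

```shell
# Identical pipeline to the one embedded in the Weave URL,
# shown on a fixed string instead of kubectl output:
printf 'v1.12.2' | base64 | tr -d '\n'   # → djEuMTIuMg==
```

The `tr -d '\n'` strips the newlines base64 inserts, so the result can be embedded as a single URL query parameter.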

After the network plugin is installed, confirm that the services started successfully.

$ kubectl get pod -n kube-system
NAME                                              READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-dch4w                          1/1     Running   1          90m
coredns-576cbf47c7-whnzs                          1/1     Running   0          90m
etcd-izm5e9951st9peq42t8fkxz                      1/1     Running   0          89m
kube-apiserver-izm5e9951st9peq42t8fkxz            1/1     Running   0          89m
kube-controller-manager-izm5e9951st9peq42t8fkxz   1/1     Running   0          89m
kube-proxy-s5khx                                  1/1     Running   0          90m
kube-scheduler-izm5e9951st9peq42t8fkxz            1/1     Running   0          89m
weave-net-29mjv                                   2/2     Running   0          75m

If everything is in Running status, the cluster is up (it may take a little while right after installing the network plugin).
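Rather than eyeballing the STATUS column, a small awk filter can count the pods that are not yet Running; here it runs over a captured sample of the same output:

```shell
# Count kube-system pods whose STATUS column is not "Running".
# On a live cluster you would feed it real data:
#   kubectl get pod -n kube-system --no-headers | awk '$3 != "Running" {n++} END {print n+0}'
# Demonstrated here on a captured sample of that output:
sample='coredns-576cbf47c7-dch4w    1/1   Running   1   90m
kube-proxy-s5khx            1/1   Running   0   90m
weave-net-29mjv             2/2   Running   0   75m'

printf '%s\n' "$sample" | awk '$3 != "Running" {n++} END {print n+0}'   # → 0
```

A result of 0 means every pod has reached Running; anything else means keep waiting (or start investigating with `kubectl describe pod`).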
There is also this command, which shows the status of every node in the cluster:

$ kubectl get nodes
NAME                      STATUS   ROLES    AGE   VERSION
izm5e9951st9peq42t8fkxz   Ready    master   92m   v1.12.2

Since the cluster currently has only the master node, there is just one row.
If STATUS is Ready, the node is good to go.

Removing the master node's isolation (optional)

By default, k8s will not schedule your application workloads onto the master node. If this is a test environment with only a few machines, you can remove this restriction.

$ kubectl taint nodes --all node-role.kubernetes.io/master-

You will see output similar to this:

node/izm5e9951st9peq42t8fkxz untainted
error: taint "node-role.kubernetes.io/master:" not found

This means the isolation has been removed (the "not found" error line is expected here and can be ignored).
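To double-check that the taint is really gone, you can inspect the node's taint list. The kubectl line needs a live cluster, so the grep is demonstrated on a sample line:

```shell
# On a live cluster:
#   kubectl describe nodes | grep Taints
# An untainted, schedulable node prints a line like this sample:
printf 'Taints:             <none>\n' | grep -c '<none>'   # → 1
```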

Good. The next post will cover how to initialize the worker nodes.

Original: https://www.haomeiwen.com/subject/dauqxqtx.html