Installing Kubernetes with kubeadm

Author: 迷茫_小青年 | Published 2019-11-28 18:26

I previously wrote about installing and deploying Kubernetes by hand; that approach is tedious to set up, especially the certificate work.

Today, let's try deploying Kubernetes with kubeadm instead.

Basic preparation is skipped here: setting hostnames, configuring /etc/hosts, raising file-descriptor limits, disabling SELinux and firewalld, and so on.
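
For completeness, a minimal sketch of that prep work on CentOS 7 (the hostname and IP are the example values used later in this article):

# Hostname and hosts entry (example values)
hostnamectl set-hostname kubeadm-mater01
echo "192.168.1.200 kubeadm-mater01" >> /etc/hosts

# Disable SELinux and firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld

# Let iptables see bridged traffic (needed by kube-proxy and flannel)
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system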

1. Configure the Aliyun mirror yum repo for Kubernetes

cat  > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet, and kubectl

yum install -y kubelet kubeadm kubectl

systemctl enable kubelet.service
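
Note that a bare yum install pulls whatever version is latest in the repo; since this walkthrough initializes v1.13.1, you may want to pin the package versions to match (a sketch, assuming the mirror still carries them):

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1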

3. Disable swap

swapoff -a

Alternatively, you can skip this step and pass --ignore-preflight-errors=swap to kubeadm init; otherwise initialization will fail its preflight checks.
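
Also note that swapoff -a only lasts until the next reboot; to keep swap off permanently, comment out the swap entries in /etc/fstab as well (a sketch):

# Comment out any fstab line mentioning swap so it stays off after reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab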

4. Upgrade Docker
CentOS 7 ships Docker 1.13.1 by default; we upgrade to 18.06.1.

yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-18.06.1.ce-3.el7

In daemon.json below, exec-opts switches the Docker cgroup driver to systemd, data-root changes Docker's data directory (the default is /var/lib/docker), and registry-mirrors configures registry mirrors in place of the default hub.docker.com. (In my experience these mirrors are not very reliable, so I removed this setting in production.)

cat > /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data/docker",
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn/", "https://registry.docker-cn.com"]
}
EOF

systemctl start docker
systemctl enable docker
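
After starting Docker, it is worth confirming the cgroup driver change took effect, since the kubelet expects it to match:

docker info | grep -i 'cgroup driver'
# Expected: Cgroup Driver: systemd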

5. Install Kubernetes (kubeadm init)
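
If downloads are slow, you can pre-pull the control-plane images first; the preflight output below mentions this option too. The flags here mirror the init flags used in this article:

kubeadm config images pull \
  --kubernetes-version=v1.13.1 \
  --image-repository registry.aliyuncs.com/google_containers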

kubeadm init \
--kubernetes-version=v1.13.1 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.1.200 \
--image-repository registry.aliyuncs.com/google_containers

[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-mater01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-mater01 localhost] and IPs [192.168.1.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-mater01 localhost] and IPs [192.168.1.200 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.010280 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-mater01" as an annotation
[mark-control-plane] Marking the node kubeadm-mater01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-mater01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yof5pj.1j961t39wuxuahvs
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.200:6443 --token peonuo.e4r06pm721rqjt6j --discovery-token-ca-cert-hash sha256:22800e2c3aaf2596b4544afd0ad4013048ddf6623af2537c4215fc31ff7f4c0d

6. Configure kubectl access to the cluster

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
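
Alternatively, if you are operating as root, you can point kubectl directly at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf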

7. Verify

#kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}

If you installed version 1.16.x, kubectl get cs may instead show:
#kubectl get cs
NAME                 AGE
scheduler            <unknown>
controller-manager   <unknown>
etcd-0               <unknown>

This appears to be a display bug; verify with the YAML output instead:
#kubectl get cs -o yaml

#kubectl get pods --all-namespaces

8. Install the flannel network add-on

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl create -f kube-flannel.yml 

If you changed --pod-network-cidr during initialization above, make the corresponding change in kube-flannel.yml, or pod networking will not work. The relevant block:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
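
For example, if you had initialized with --pod-network-cidr=10.10.0.0/16 (a hypothetical value), you could patch the manifest before applying it:

sed -i 's#10.244.0.0/16#10.10.0.0/16#' kube-flannel.yml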

If a node has multiple network interfaces, you need to pass the --iface argument in kube-flannel.yml to name the NIC on the cluster's internal network; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
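
If you are unsure which NIC sits on the internal network (eth1 above is just this article's example), the routing table will tell you:

# Show which interface the kernel uses to reach the master's internal IP
ip route get 192.168.1.200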

Check that all the pods reach the Running state:

kubectl get pods --all-namespaces

Joining a node to the cluster

# Install kubeadm and kubelet
cat  > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubelet kubeadm
# Enable but don't start kubelet here; kubeadm join will configure and start it
systemctl enable kubelet

# Install Docker
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl start docker.service
systemctl enable docker.service

# Join the node to the cluster
kubeadm join 192.168.1.200:6443 --token peonuo.e4r06pm721rqjt6j --discovery-token-ca-cert-hash sha256:22800e2c3aaf2596b4544afd0ad4013048ddf6623af2537c4215fc31ff7f4c0d
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.1.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.200:6443"
[discovery] Requesting info from "https://192.168.1.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.200:6443"
[discovery] Successfully established connection with API Server "192.168.1.200:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
kubeadm-mater01   Ready    master   55m     v1.13.1
kubeadm-node01    Ready    <none>   3m22s   v1.13.1
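
The worker shows ROLES <none>, which is purely cosmetic; if you prefer, you can label it yourself:

kubectl label node kubeadm-node01 node-role.kubernetes.io/worker=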

Handling an expired token

kubeadm join 192.168.1.200:6443 --token c39ox1.v2c9gxumnw6eelv1 --discovery-token-ca-cert-hash sha256:066e3219e9286bc233fcc78dcf46152ee0c4c6faac4da6d74d4f6877be0c7773
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.1.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.200:6443"
[discovery] Requesting info from "https://192.168.1.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.200:6443"
[discovery] Successfully established connection with API Server "192.168.1.200:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized

# The output above shows the join failed with an Unauthorized error;
# a new token must be generated on the master before joining again.

# List the tokens on the master: this one has already expired (tokens are valid for 24 hours by default)
[root@kubeadm-mater01 k8s]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
c39ox1.v2c9gxumnw6eelv1   <invalid>   2019-01-23T05:07:30-05:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# Generate a new token
[root@kubeadm-mater01 k8s]# kubeadm token create
oefzfi.gpvudofdekjkqh5z

[root@kubeadm-mater01 k8s]# kubeadm token list  
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
c39ox1.v2c9gxumnw6eelv1   <invalid>   2019-01-23T05:07:30-05:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
oefzfi.gpvudofdekjkqh5z   23h         2019-01-25T09:29:45+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
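
Incidentally, kubeadm can create a fresh token and print the complete join command in one step, which also saves recomputing the CA cert hash:

kubeadm token create --print-join-command

# If you do need the discovery hash separately, it can be derived from the cluster CA:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'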

# Join the cluster with the newly generated token
[root@kubeadm-node01 ~]# kubeadm join 192.168.1.200:6443 --token oefzfi.gpvudofdekjkqh5z  --discovery-token-ca-cert-hash sha256:066e3219e9286bc233fcc78dcf46152ee0c4c6faac4da6d74d4f6877be0c7773
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.1.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.200:6443"
[discovery] Requesting info from "https://192.168.1.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.200:6443"
[discovery] Successfully established connection with API Server "192.168.1.200:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

# Check the nodes
[root@kubeadm-mater01 k8s]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
kubeadm-mater01   Ready    master   39h     v1.13.2
kubeadm-node01    Ready    <none>   4m56s   v1.13.2

Ingress installation and configuration

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

kubectl create -f mandatory.yaml

cat > ingress-service.yaml <<-'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
EOF

kubectl create -f ingress-service.yaml

The ingress controller is now exposed on ports 30080 and 30443 on every node.
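
To actually route traffic through the controller, you still need an Ingress resource. A minimal hypothetical example (the host and backend service names are placeholders; the API version matches the 1.13-era cluster used here):

cat > demo-ingress.yaml <<-'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service
          servicePort: 80
EOF
kubectl create -f demo-ingress.yaml

# Test from outside the cluster; the Host header must match the rule
curl -H 'Host: demo.example.com' http://192.168.1.200:30080/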

Much of this article draws on https://www.jianshu.com/p/dde562325f30
