Deploying Kubernetes 1.18.2 with kubeadm (1.21


Author: 任总 | Published 2021-06-25 17:51

    1. Operations on all nodes

    Each node host needs at least 2 GB of RAM and a dual-core CPU.

    • Disable the firewall and synchronize time on every node.
    If you have a DNS server, add A records for the hosts; otherwise configure /etc/hosts on every node:
    ~]# vim /etc/hosts
    
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    199.232.28.133  raw.githubusercontent.com                          #resolves the host serving the flannel manifest
    172.16.16.160   k8s-master-160.kvm.io k8s-master-160
    172.16.16.161   k8s-master-161.kvm.io k8s-master-161
    172.16.16.162   k8s-master-162.kvm.io k8s-master-162
    172.16.16.163   k8s-nodes-163.kvm.io k8s-nodes-163
    172.16.16.164   k8s-nodes-164.kvm.io k8s-nodes-164
    
    #To simplify distribution, set up passwordless SSH between all nodes
    ~]# ssh-keygen -t rsa 
    #distribute the public key
    ~]# ssh-copy-id -i .ssh/id_rsa.pub root@(local host address and every other host address)
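The hosts block above can be generated from a single node list so every machine gets an identical copy; a minimal sketch (the IPs, short names, and kvm.io domain are this cluster's; the `gen_hosts` helper itself is hypothetical):

```shell
#!/bin/bash
# Emit /etc/hosts entries for the cluster from one "IP shortname" list,
# so the same block can be appended on every node (e.g. via scp/ssh).
gen_hosts() {
    domain="kvm.io"
    while read -r ip name; do
        [ -n "$ip" ] || continue            # skip blank lines
        printf '%s %s.%s %s\n' "$ip" "$name" "$domain" "$name"
    done <<'EOF'
172.16.16.160 k8s-master-160
172.16.16.161 k8s-master-161
172.16.16.162 k8s-master-162
172.16.16.163 k8s-nodes-163
172.16.16.164 k8s-nodes-164
EOF
}
HOSTS_BLOCK=$(gen_hosts)
echo "$HOSTS_BLOCK"
```

Append the output to /etc/hosts on each node; the raw.githubusercontent.com entry for flannel still needs to be added separately.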
    

    Write a script to set up the Docker and Kubernetes yum repositories

    kubeadm on GitHub: https://github.com/kubernetes/kubeadm
          Release downloads: https://github.com/kubernetes/kubernetes/releases/
    
    • Alibaba Cloud provides a domestic (China) Kubernetes yum mirror for installing kubelet, kubectl, and kubeadm
    ~]# vim k8s-repo.sh
    
    #!/bin/bash
    
    #disable swap
    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    
    #install some required system utilities
    yum install -y yum-utils device-mapper-persistent-data lvm2 &> /dev/null
    
    #add the Docker CE repository
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    #add the Kubernetes repository
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    #refresh the yum cache
    yum makecache fast
    
    ~]# . k8s-repo.sh    #run the script
    
    

    Copy the script to the other nodes and run it on every node

    ~]# scp k8s-repo.sh <node IP>:/root
    #verify the versions are recent and consistent across nodes
    ~]# yum info kubelet kubeadm kubectl
    
    Name        : kubeadm
    Arch        : x86_64
    Version     : 1.18.2
    Release     : 0
    Size        : 8.8 M
    Repo        : kubernetes
    Summary     : Command-line utility for administering a Kubernetes cluster.
    URL         : https://kubernetes.io
    License     : ASL 2.0
    Description : Command-line utility for administering a Kubernetes cluster.
    
    Name        : kubectl
    Arch        : x86_64
    Version     : 1.18.2
    Release     : 0
    Size        : 9.5 M
    Repo        : kubernetes
    Summary     : Command-line utility for interacting with a Kubernetes cluster.
    URL         : https://kubernetes.io
    License     : ASL 2.0
    Description : Command-line utility for interacting with a Kubernetes cluster.
    
    Name        : kubelet
    Arch        : x86_64
    Version     : 1.18.2
    Release     : 0
    Size        : 21 M
    Repo        : kubernetes
    Summary     : Container cluster management
    URL         : https://kubernetes.io
    License     : ASL 2.0
    Description : The node agent of Kubernetes, the container cluster manager.
    
    
    ....
    
    

    2. Operations on the first master (control-plane) node

    Install kubelet, kubeadm, kubectl, and Docker

    ~]# yum install docker-ce kubeadm kubectl kubelet  -y
    ......
     kubernetes-cni.x86_64 0:0.6.0-0  #kubernetes-cni is pulled in automatically as a dependency   
    
    ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables  #confirm bridged traffic is passed to ip(6)tables (should be 1)
    1
    ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables 
    1
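If either value reads 0, bridged pod traffic bypasses iptables and kube-proxy rules never fire. A minimal sketch of the standard fix follows; it writes to a mktemp path so it can run unprivileged, while on a real node the file would be /etc/sysctl.d/k8s.conf, applied with `sysctl --system` after `modprobe br_netfilter`:

```shell
#!/bin/bash
# Write the sysctl settings Kubernetes expects for bridged traffic.
# CONF is a temp file for illustration; on a real node use
# /etc/sysctl.d/k8s.conf and apply with: sysctl --system
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
cat "$CONF"
```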
    
    

    Start Docker first

    ~]#  systemctl start docker &&  systemctl enable kubelet  docker
    
    

    Initialize with kubeadm init

    Commonly used options:
    • --kubernetes-version= #the Kubernetes version to install

    • --apiserver-bind-port= #the port the API server binds to (default 6443)

    • --pod-network-cidr=10.244.0.0/16 #the Pod network CIDR; 10.244.0.0/16 is flannel's default. To use a different range, fetch https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml , save it as kube-flannel.yml, and edit its Network field to match

    • --service-cidr= #the Service network CIDR

    • --apiserver-advertise-address= #the API server advertise address; this can be set to the master's private interface IP

    • --service-dns-domain= #the cluster DNS domain suffix

    • --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers #pull the control-plane images from the Alibaba Cloud registry

    • --ignore-preflight-errors=Swap #skip the swap preflight check

    • --token-ttl 0 #TTL of the cluster join token (default 24h); 0 means it never expires (insecure)

    • --control-plane-endpoint [first master's hostname] # Note: required for a highly available control plane; a stable IP address or DNS name shared by all control-plane nodes

     ~]# kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=172.20.0.0/16 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --control-plane-endpoint k8s-master-160.kvm.io --ignore-preflight-errors=Swap
    ..........
    Your Kubernetes control-plane has initialized successfully!   #initialization succeeded
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube   #recommended: copy the config into a user's home directory and set its owner/group
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join k8s-master-160.kvm.io:6443 --token f2ldwr.ki1xx88oc5g5krh3 \
        --discovery-token-ca-cert-hash sha256:4ba0077ffce327fd988593a768c5951a89aa18995824f781698ef59a15daa0ca \
        --control-plane  
    #control-plane (master) nodes use this command to join the cluster for high availability; the certificate key must be appended as well
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join k8s-master-160.kvm.io:6443 --token f2ldwr.ki1xx88oc5g5krh3 \
        --discovery-token-ca-cert-hash sha256:4ba0077ffce327fd988593a768c5951a89aa18995824f781698ef59a15daa0ca 
    #worker (node) machines join the cluster with this command. The token is a shared bootstrap token and the hash verifies the CA certificate; both are generated dynamically, so save them. The token expires after 24 hours, after which a new one must be generated.
    .........
    ~]# mkdir -p $HOME/.kube
    ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    ~]# chown $(id -u):$(id -g) $HOME/.kube/config
    
    ~]# ss -tnl
    State       Recv-Q Send-Q Local Address:Port               Peer Address:Port              
    LISTEN      0      100    127.0.0.1:25                    *:*                  
    LISTEN      0      128    127.0.0.1:37694                 *:*                  
    LISTEN      0      128    127.0.0.1:10248                 *:*                  
    LISTEN      0      128    127.0.0.1:10249                 *:*                  
    LISTEN      0      128    172.16.16.160:2379                  *:*                  
    LISTEN      0      128    127.0.0.1:2379                  *:*                  
    LISTEN      0      128    172.16.16.160:2380                  *:*                  
    LISTEN      0      128    127.0.0.1:2381                  *:*                  
    LISTEN      0      128    127.0.0.1:10257                 *:*                  
    LISTEN      0      128    127.0.0.1:10259                 *:*                  
    LISTEN      0      128       *:22                    *:*                  
    LISTEN      0      100       [::1]:25                 [::]:*                  
    LISTEN      0      128    [::]:10250              [::]:*                  
    LISTEN      0      128    [::]:10251              [::]:*                  
    LISTEN      0      128    [::]:6443  #confirm the API server is listening   [::]:*                  
    LISTEN      0      128    [::]:10252              [::]:*                  
    LISTEN      0      128    [::]:10256              [::]:*                  
    LISTEN      0      128    [::]:22                 [::]:*     
    #check component health
    ~]# kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok                  
    scheduler            Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}   
    
    

    * Problems encountered during kubeadm init

    * 1. kubelet errors during initialization
    #error message:
    .....
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
    #cause: Docker's default cgroup driver is cgroupfs, while Kubernetes recommends systemd instead.
    

    Fix:

    #create /etc/docker/daemon.json with the following content:
    cat <<EOF > /etc/docker/daemon.json
    {"exec-opts": ["native.cgroupdriver=systemd"]}
    EOF
    #restart docker
    sudo systemctl restart docker
    #restart kubelet and check its status
    sudo systemctl restart kubelet
    sudo systemctl status kubelet
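The daemon.json step is easy to get wrong: a syntax error in the file stops Docker from starting at all. A small sketch that writes the setting and validates the file as JSON before any restart (`DJ` points at a mktemp path so the sketch runs unprivileged; on a real host it would be /etc/docker/daemon.json):

```shell
#!/bin/bash
# Write the cgroup-driver setting and sanity-check the JSON before
# restarting Docker; a malformed daemon.json prevents Docker from starting.
DJ="$(mktemp)"            # stand-in for /etc/docker/daemon.json
cat > "$DJ" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$DJ" > /dev/null && echo "daemon.json is valid JSON"
```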
    
    * 2. Image pull errors during initialization
    ~]#  kubeadm init --kubernetes-version=v1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=172.20.0.0/16 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --control-plane-endpoint k8s-master-160.kvm.io --ignore-preflight-errors=Swap
    [init] Using Kubernetes version: v1.21.2
    .....
    Error:
    [ERROR ImagePull]: failed to pull image registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0: output: Error response from daemon: manifest for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 not found: manifest unknown: manifest unknown
    Cause: during a Kubernetes v1.21.x install, the coredns image cannot be pulled under the name kubeadm requests (coredns:v1.8.0), so the installation cannot proceed.
    

    Fix:

    #pull the image (pinning the version is safer than relying on :latest)
    ~]# docker pull coredns/coredns:1.8.0
    #retag it to the name kubeadm expects
    ~]# docker tag coredns/coredns:1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 
    #remove the redundant tag
    ~]# docker rmi coredns/coredns:1.8.0 
    #list images
    ~]# docker images
    REPOSITORY                                                                    TAG        IMAGE ID       CREATED         SIZE
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.21.2    106ff58d4308   8 days ago      126MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.21.2    a6ebd1c1ad98   8 days ago      131MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.21.2    ae24db9aa2cc   8 days ago      120MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.21.2    f917b8c8f55b   8 days ago      50.6MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.8.0     8d147537fb7d   3 weeks ago     47.6MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.4.1      0f8457a4c2ec   5 months ago    683kB
    registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   10 months ago   253MB
    
    * 3. After the control plane initializes, the scheduler and controller-manager health ports do not come up
    ~]# kubectl get cs 
    Warning: v1 ComponentStatus is deprecated in v1.19+
    NAME                 STATUS      MESSAGE                                                                                       ERROR
    scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
    controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
    etcd-0               Healthy     {"health":"true"} 
    
    

    Fix:

    ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
    .....
    spec:
      containers:
      - command:
        - kube-scheduler
        - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
        - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
        - --bind-address=127.0.0.1
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        - --leader-elect=true
    #    - --port=0  #comment out this line
    .....
    ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml 
    .....
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --allocate-node-cidrs=true
        - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
        - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
        - --bind-address=127.0.0.1
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --cluster-cidr=10.244.0.0/16
        - --cluster-name=kubernetes
        - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
        - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
        - --controllers=*,bootstrapsigner,tokencleaner
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
        - --leader-elect=true
    #    - --port=0 #comment out this line
    Restart the service:
    ~]# systemctl restart kubelet.service
    
     ~]# kubectl get cs
    Warning: v1 ComponentStatus is deprecated in v1.19+
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok                  
    scheduler            Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}   
    
    
    

    Deploy the flannel network plugin: https://github.com/coreos/flannel

    ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    ~]# docker image ls
    #the flannel image is downloaded automatically within a few minutes
    .............
    quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        10 months ago       52.6MB
    .............
    ~]# kubectl get pods -n kube-system   #list the pods in the kube-system namespace
    NAME                                            READY   STATUS    RESTARTS   AGE
    coredns-6955765f44-sg5c7                        1/1     Running   0          105m
    coredns-6955765f44-tgkcj                        1/1     Running   0          105m
    etcd-k8s-master-160.kvm.io                      1/1     Running   0          105m
    kube-apiserver-k8s-master-160.kvm.io            1/1     Running   0          105m
    kube-controller-manager-k8s-master-160.kvm.io   1/1     Running   0          105m
    kube-flannel-ds-amd64-h88jw                     1/1     Running   0          6m8s
    kube-proxy-vxqn5                                1/1     Running   0          105m
    kube-scheduler-k8s-master-160.kvm.io            1/1     Running   0          105m
    
    ~]# kubectl get node
    NAME                    STATUS   ROLES    AGE    VERSION
    k8s-master-160.kvm.io   Ready    master   103m   v1.18.2   #Ready means the first master is fully deployed
    
    
    

    Obtaining the certificate key for joining the control plane

    #upload the required certificates to the cluster so other control-plane (master) nodes can download them when joining
    ~]# kubeadm init phase upload-certs --upload-certs
    W0507 11:45:19.755885   31833 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:             #the key required when joining the control plane
    a21087e33294749ddd92fc6089abe7cb3dcd9a49524f9a4c841b44429001e817
    

    Regenerating the join token after it expires

    #generate a new token
    ~]# kubeadm token create
    
    #recompute the discovery-token-ca-cert-hash
    ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
       openssl dgst -sha256 -hex | sed 's/^.* //'
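The hash pipeline above can be exercised without a cluster by pointing it at any certificate; a sketch using a throwaway self-signed CA (on a real master, substitute /etc/kubernetes/pki/ca.crt):

```shell
#!/bin/bash
# Compute a discovery-token-ca-cert-hash style digest from a throwaway
# self-signed certificate, so the pipeline can be tested without a cluster.
set -e
CRT="$(mktemp)"; KEY="$(mktemp)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$KEY" -out "$CRT" \
    -days 1 -subj "/CN=demo-ca" 2>/dev/null
HASH=$(openssl x509 -pubkey -in "$CRT" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```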
    
    
    

    3. Operations on the other master (control-plane) nodes and joining the cluster

    #run the repo setup script
    ~]# ls
     k8s-repo.sh
    ~]# . k8s-repo.sh 
    
    #install docker-ce, kubeadm, kubectl, kubelet
    ~]# yum install  docker-ce kubeadm kubectl kubelet  -y
    
    #confirm bridged traffic is passed to ip(6)tables
    ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables  
    1
    ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables 
    1
    #start docker and enable the services
     ~]# systemctl start docker &&  systemctl enable kubelet  docker
    
     
    #join the cluster as an additional control-plane node
    ~]# kubeadm join k8s-master-160.kvm.io:6443 --token f2ldwr.ki1xx88oc5g5krh3     --discovery-token-ca-cert-hash sha256:4ba0077ffce327fd988593a768c5951a89aa18995824f781698ef59a15daa0ca     --control-plane --certificate-key a21087e33294749ddd92fc6089abe7cb3dcd9a49524f9a4c841b44429001e817
    
    
    
    

    4. Operations on the worker (node) machines and joining the cluster

    #run the repo setup script
    ~]# ls
     k8s-repo.sh
    ~]# . k8s-repo.sh 
    
    #install docker-ce, kubelet, kubeadm
    ~]# yum install docker-ce kubelet kubeadm -y
    
    #confirm bridged traffic is passed to ip(6)tables
    ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables  
    1
    ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables 
    1
    #start docker and enable the services
     ~]# systemctl start docker &&  systemctl enable kubelet  docker
    
    
    #join the cluster as a worker node
    ~]# kubeadm join k8s-master-160.kvm.io:6443 --token f2ldwr.ki1xx88oc5g5krh3 \
        --discovery-token-ca-cert-hash sha256:4ba0077ffce327fd988593a768c5951a89aa18995824f781698ef59a15daa0ca \
        --ignore-preflight-errors=Swap
    
    #the docker images are downloaded and started automatically within a few minutes
    ~]# docker image ls
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.18.2             0d40868643c6        2 weeks ago         117MB
    quay.io/coreos/flannel                                           v0.12.0-amd64       4e9f801d2217        7 weeks ago         52.8MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.2                 80d28bedfe5d        2 months ago        683kB
    
    
    

    Back on the first master node, inspect the cluster

     ~]# kubectl get nodes
    NAME                    STATUS   ROLES    AGE     VERSION
    k8s-master-160.kvm.io   Ready    master   31m     v1.18.2                   #control-plane nodes joined
    k8s-master-161.kvm.io   Ready    master   8m46s   v1.18.2
    k8s-master-162.kvm.io   Ready    master   5m40s   v1.18.2
    k8s-nodes-163.kvm.io    Ready    <none>   4m1s    v1.18.2                #worker nodes joined
    k8s-nodes-164.kvm.io    Ready    <none>   3m50s   v1.18.2
    ~]# kubectl get pods -n kube-system 
    NAME                                            READY   STATUS    RESTARTS   AGE
    coredns-546565776c-btl5c                        1/1     Running   0          133m
    coredns-546565776c-h2jb5                        1/1     Running   0          133m
    etcd-k8s-master-160.kvm.io                      1/1     Running   0          133m
    etcd-k8s-master-161.kvm.io                      1/1     Running   0          111m
    etcd-k8s-master-162.kvm.io                      1/1     Running   0          108m
    kube-apiserver-k8s-master-160.kvm.io            1/1     Running   0          133m
    kube-apiserver-k8s-master-161.kvm.io            1/1     Running   0          111m
    kube-apiserver-k8s-master-162.kvm.io            1/1     Running   0          108m
    kube-controller-manager-k8s-master-160.kvm.io   1/1     Running   1          133m
    kube-controller-manager-k8s-master-161.kvm.io   1/1     Running   0          111m
    kube-controller-manager-k8s-master-162.kvm.io   1/1     Running   0          108m
    kube-flannel-ds-amd64-ptxwn                     1/1     Running   0          106m
    kube-flannel-ds-amd64-pvm4c                     1/1     Running   0          118m
    kube-flannel-ds-amd64-qqbng                     1/1     Running   0          108m
    kube-flannel-ds-amd64-shj2s                     1/1     Running   0          111m
    kube-flannel-ds-amd64-zlhmd                     1/1     Running   0          106m
    kube-proxy-bjwtp                                1/1     Running   0          108m
    kube-proxy-d7hqh                                1/1     Running   0          111m
    kube-proxy-gz7vr                                1/1     Running   0          106m
    kube-proxy-pxdb9                                1/1     Running   0          106m
    kube-proxy-vs96m                                1/1     Running   0          133m
    kube-scheduler-k8s-master-160.kvm.io            1/1     Running   1          133m
    kube-scheduler-k8s-master-161.kvm.io            1/1     Running   0          111m
    kube-scheduler-k8s-master-162.kvm.io            1/1     Running   0          108m
    
    
    

    5. Resetting and removing node hosts

    Removing a node with kubeadm.

    First, on a master:
    kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
    kubectl delete node <node name>
    
    Then, on the node itself, run:
    kubeadm reset
    
    To rejoin, run kubeadm init or kubeadm join again.
    
    Before re-running kubeadm init, first run rm -rf $HOME/.kube   #remove the old .kube directory
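The removal steps can be wrapped in one helper; a dry-run sketch that only prints the command sequence in the right order (the `remove_node` name and the ssh hop are illustrative, and in reality kubeadm reset runs on the node being removed):

```shell
#!/bin/bash
# Print the node-removal sequence in order: drain and delete run on a
# master, reset runs on the node itself. Dry run: commands are echoed.
remove_node() {
    node="$1"
    echo "kubectl drain $node --delete-local-data --force --ignore-daemonsets"
    echo "kubectl delete node $node"
    echo "ssh $node kubeadm reset"
}
remove_node k8s-nodes-164.kvm.io
```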
    
    

    6. Pulling the Kubernetes images

    Option 1 (recommended): use Alibaba Cloud's Kubernetes image mirror by passing --image-repository at init time;

    Option 2: pull the Google images locally. Because images such as pause live in Google's registry, use a script that pulls them from the Alibaba Cloud mirror and automatically retags them afterwards;

    #list the required images
    ~]# kubeadm config images list
    .......
    k8s.gcr.io/kube-apiserver:v1.17.0
    k8s.gcr.io/kube-controller-manager:v1.17.0
    k8s.gcr.io/kube-scheduler:v1.17.0
    k8s.gcr.io/kube-proxy:v1.17.0
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.4.3-0
    k8s.gcr.io/coredns:1.6.5
    
    #create the image pull script
    ~]# vim dockerimages_pull.sh
    #!/bin/bash
    # the images below are the required list with the "k8s.gcr.io/" prefix stripped; use the versions reported above
    images=(
      kube-apiserver:v1.17.0
      kube-controller-manager:v1.17.0
      kube-scheduler:v1.17.0
      kube-proxy:v1.17.0
      pause:3.1
      etcd:3.4.3-0
      coredns:1.6.5
    )
    
    for imageName in ${images[@]} ; do
        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done
    
    #run the pull script
    ~]# bash dockerimages_pull.sh 
    
    ~]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-proxy                v1.17.0             7d54289267dc        5 days ago          116MB
    k8s.gcr.io/kube-apiserver            v1.17.0             0cae8d5cc64c        5 days ago          171MB
    k8s.gcr.io/kube-controller-manager   v1.17.0             5eb3b7486872        5 days ago          161MB
    k8s.gcr.io/kube-scheduler            v1.17.0             78c190f736b1        5 days ago          94.4MB
    k8s.gcr.io/coredns                   1.6.5               70f311871ae1        5 weeks ago         41.6MB
    k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        6 weeks ago         288MB
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        24 months ago       742kB
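After running the pull script, it is worth checking that nothing from `kubeadm config images list` is still missing. A sketch of that comparison with both lists hard-coded so it runs anywhere (on a real host, fill REQUIRED from `kubeadm config images list` and PRESENT from the `docker images` output instead):

```shell
#!/bin/sh
# Report required images that are absent from the local image list.
# Both lists are hard-coded for illustration; coredns is deliberately
# left out of PRESENT so the sketch reports it as missing.
REQUIRED="k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5"
PRESENT="k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/etcd:3.4.3-0"
MISSING=""
for img in $REQUIRED; do
    echo "$PRESENT" | grep -qxF "$img" || MISSING="$MISSING $img"
done
echo "missing:$MISSING"
```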
    

    Option 3: if you have a proxy server that can reach the Kubernetes registries, configure Docker to use it; then you do not need to specify an image repository at init time

    ~]# vim /usr/lib/systemd/system/docker.service 
    
    ......
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    Environment="HTTPS_PROXY=http://<proxy host>:<port>" 
    Environment="NO_PROXY=127.0.0.0/8,172.16.0.0/16"
      #add the proxy address; NO_PROXY exempts local registry access from the proxy
    ~]# systemctl daemon-reload
    

    7. Installing and using kubernetes-dashboard

    #install kubernetes-dashboard
    ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
    namespace/kubernetes-dashboard created
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created
    
    ~]# kubectl get svc -n kubernetes-dashboard     #check that it is running
    NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    dashboard-metrics-scraper   ClusterIP   172.20.244.60   <none>        8000/TCP   23m
    kubernetes-dashboard        ClusterIP   172.20.250.78   <none>        443/TCP    23m
    
    ~]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kubernetes-dashboard    #change the service type to NodePort
    service/kubernetes-dashboard patched
    
     ~]# kubectl get svc -n kubernetes-dashboard
    NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
    dashboard-metrics-scraper   ClusterIP   172.20.244.60   <none>        8000/TCP        41m
    kubernetes-dashboard        NodePort    172.20.250.78   <none>        443:31691/TCP   41m   #NodePort 31691 has been allocated
    
    You can now access the dashboard in a browser at https://<node IP>:31691 
    

    Create a login user

    Reference:
    https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

    ~]# cat > dashboard-adminuser.yaml << EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard  
    EOF
    
    
    Create the login user:
    
    ~]# kubectl apply -f dashboard-adminuser.yaml
    This creates a ServiceAccount named admin-user in the kubernetes-dashboard namespace and binds the cluster-admin ClusterRole to it, giving the admin-user account administrator privileges. kubeadm already creates the cluster-admin role when it builds the cluster, so we only need the binding.
    
    Retrieve the admin-user account's token:
    
    ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
    
    Paste the token into the Token field on the login page to sign in to the dashboard.
    
    
    [Screenshots: accessing the dashboard in Chrome; creating resources from the web UI (three methods available); managing resources; managing pods]

    kubernetes-dashboard arguments

    Reference:
    https://github.com/kubernetes/dashboard/blob/master/docs/common/dashboard-arguments.md
    The dashboard login session times out after 900 seconds by default; add a --token-ttl argument to the dashboard container to customize it (0 means the token never expires).

    Download the dashboard manifest locally:
    ~]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
    Edit it:
    ~]# vim recommended.yaml
    ....
      spec:
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.0.0
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
            - --token-ttl=28800                          #added: login token valid for 8 hours
    ....
    Apply the updated configuration:
    ~]# kubectl apply -f recommended.yaml
    
    

    References:
    https://zhuanlan.zhihu.com/p/53439586
    https://www.cnblogs.com/tianshifu/p/8127831.html
    https://blog.csdn.net/weixin_44723434/article/details/94583457 (troubleshooting node-join problems)
    https://blog.csdn.net/networken/article/details/85607593 (dashboard installation and usage)

    Source: https://www.haomeiwen.com/subject/ohtmeqtx.html