Deploying kubernetes-dashboard on k8s

Author: 运维大湿兄 | Published 2019-04-23 12:51

    For the deployment environment, see the companion article on building a k8s cluster on CentOS 7 (2019 edition).

    Installing the Dashboard

    # Install the dashboard; inside China, substitute a mirrored copy of the yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    
    # Change the service type to NodePort
    kubectl patch svc -n kube-system kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
    # The image address inside the yaml can be changed to:
    # registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
    # Check the service (the dashboard is exposed on 443:32383/TCP)
    kubectl get svc -n kube-system 
    # --- output ---
    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
    kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   7h40m
    kubernetes-dashboard   NodePort    10.111.77.210   <none>        443:32383/TCP            3h42m
    # --- output ---
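If you need the NodePort in a script rather than by eye, it can be cut out of the PORT(S) column. A minimal sketch (variable names `svc_output` and `node_port` are illustrative; the here-doc stands in for a live `kubectl get svc -n kube-system` and assumes its usual column layout):

```shell
# Parse a saved `kubectl get svc` listing and extract the dashboard NodePort.
svc_output=$(cat <<'EOF'
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   7h40m
kubernetes-dashboard   NodePort    10.111.77.210   <none>        443:32383/TCP            3h42m
EOF
)

# Take the PORT(S) field (5th column) of the kubernetes-dashboard row and
# keep the digits between ':' and '/', i.e. the NodePort.
node_port=$(echo "$svc_output" | awk '/^kubernetes-dashboard/ {print $5}' \
            | sed 's/.*:\([0-9]*\)\/.*/\1/')
echo "$node_port"   # 32383
```

On a live cluster, `kubectl get svc -n kube-system kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'` yields the same value without any text parsing.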
    
    # Find which node the dashboard pod runs on (here: node1, 192.168.20.4)
    kubectl get pods -A -o wide
    # --- output ---
    NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
    kube-system   coredns-fb8b8dccf-rn8kd                 1/1     Running   0          7h43m   10.244.0.2     master   <none>           <none>
    kube-system   coredns-fb8b8dccf-slwr4                 1/1     Running   0          7h43m   10.244.0.3     master   <none>           <none>
    kube-system   etcd-master                             1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
    kube-system   kube-apiserver-master                   1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
    kube-system   kube-controller-manager-master          1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
    kube-system   kube-flannel-ds-amd64-l8c7c             1/1     Running   0          7h3m    192.168.20.5   master   <none>           <none>
    kube-system   kube-flannel-ds-amd64-lcmxw             1/1     Running   1          6h50m   192.168.20.4   node1    <none>           <none>
    kube-system   kube-flannel-ds-amd64-pqnln             1/1     Running   1          6h5m    192.168.20.3   node2    <none>           <none>
    kube-system   kube-proxy-4kcqb                        1/1     Running   0          7h43m   192.168.20.5   master   <none>           <none>
    kube-system   kube-proxy-jcqjd                        1/1     Running   0          6h5m    192.168.20.3   node2    <none>           <none>
    kube-system   kube-proxy-vm9sj                        1/1     Running   0          6h50m   192.168.20.4   node1    <none>           <none>
    kube-system   kube-scheduler-master                   1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
    kube-system   kubernetes-dashboard-5f7b999d65-2ltmv   1/1     Running   0          3h45m   10.244.1.2     node1    <none>           <none>
    # --- output ---
    # If the pod never reaches Running, use the following to debug
    journalctl -f -u kubelet  # follow the current kubelet process logs
    ### If the logs show an image pull failure and you cannot reach k8s.gcr.io,
    ### pre-pull the image instead. Run these on the node hosting kubernetes-dashboard:
    docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
    docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
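The same pull/tag/rmi sequence works for any k8s.gcr.io image that the mirrorgooglecontainers namespace on Docker Hub mirrors. A sketch that derives the mirror reference from the original name (`mirror_name` and `prepull` are hypothetical helper names; requires bash for the `${var/pat/rep}` expansion):

```shell
# Map a k8s.gcr.io image reference to its mirrorgooglecontainers equivalent.
mirror_name() {
    # k8s.gcr.io/<image>:<tag> -> mirrorgooglecontainers/<image>:<tag>
    echo "${1/k8s.gcr.io\//mirrorgooglecontainers/}"
}

# Pre-pull via the mirror, retag under the name kubelet expects, drop the
# temporary mirror tag. Needs a Docker daemon, so it is not invoked here.
prepull() {
    local target=$1 mirror
    mirror=$(mirror_name "$target")
    docker pull "$mirror"
    docker tag "$mirror" "$target"
    docker rmi "$mirror"
}

mirror_name k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
# On the dashboard node you would then run:
# prepull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
```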
    

    From the output above we know the dashboard's node IP and port, so open https://192.168.20.4:32383 in Firefox. (HTTPS is mandatory, so the browser warns about the untrusted certificate; Firefox lets you add an exception, while Chrome refuses.)

    Troubleshooting

    kubectl get pods -A -o wide
    ## result:
    kube-system   kubernetes-dashboard-5f7b999d65-rdwqt             0/1     CrashLoopBackOff 
    The pod is stuck in CrashLoopBackOff, so inspect its logs:
    kubectl logs kubernetes-dashboard-5f7b999d65-rdwqt --namespace=kube-system 
    ---- output ----
    2019/04/23 03:04:59 Starting overwatch
    2019/04/23 03:04:59 Using in-cluster config to connect to apiserver
    2019/04/23 03:04:59 Using service account token for csrf signing
    2019/04/23 03:05:29 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
    Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
    ---- output ----
    After some googling, a suggestion from another user:
    1. I started fooling around and I saw (using the cmd: kubectl get pods -a -o wide --all-namespaces) that the kubernetes-dashboard was actually being set up on a slave node, and not on master (not sure if that's how it should be done)
    2. I started removing all the slave nodes one by one and eventually, the dashboard ended up getting deployed on the master node itself (it happened automatically, all hail kubernetes!)
    3. As soon as the dashboard was on the master node, the 'authentication to the API Server' problem got resolved since the api-server 'service' was also running on the master node.
    In short: get kubernetes-dashboard scheduled onto the master node.
    
    What I did:
    # on the master node:
    kubectl drain node_name
    --- output ---
    node/iz23sfrk7n5z cordoned
    error: unable to drain node "iz23sfrk7n5z", aborting command...
    
    There are pending nodes to be drained:
     iz23sfrk7n5z
    cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-flannel-ds-amd64-z4ggk, kube-system/kube-proxy-8d5kh
    cannot delete Pods with local storage (use --delete-local-data to override): kube-system/kubernetes-dashboard-5f7b999d65-rdwqt
    --- output ---
    # run the drain again with the suggested flags
    [root@iZbp1izzt8ihoenra1iscfZ ~]# kubectl drain  iz23sfrk7n5z --ignore-daemonsets  --delete-local-data
    --- output ---
    node/iz23sfrk7n5z already cordoned
    WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-z4ggk, kube-system/kube-proxy-8d5kh
    evicting pod "kubernetes-dashboard-5f7b999d65-rdwqt"
    pod/kubernetes-dashboard-5f7b999d65-rdwqt evicted
    node/iz23sfrk7n5z evicted
    --- output ---
    This evicts kubernetes-dashboard from its current node. My cluster has only two nodes, master and node, so the pod restarts on the master. With more worker nodes you may have to drain several before it lands there. Since drain also cordons the node, run kubectl uncordon node_name afterwards so each drained node can accept pods again.
    With that, the error is resolved!
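After the eviction you can confirm where the dashboard landed without scanning the whole table. A sketch that pulls the NODE column out of `kubectl get pods -A -o wide` output (the here-doc is a hypothetical post-drain listing, assumed to follow the standard `-o wide` column layout):

```shell
# Locate the dashboard pod's node in `kubectl get pods -A -o wide` output.
pods=$(cat <<'EOF'
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
kube-system   kubernetes-dashboard-5f7b999d65-2ltmv   1/1     Running   0          1m    10.244.0.5   master   <none>           <none>
EOF
)
# With `-o wide`, NODE is the 8th whitespace-separated column.
node=$(echo "$pods" | awk '/kubernetes-dashboard/ {print $8}')
echo "$node"   # master
```

Live, `kubectl get pods -A -o wide | awk '/kubernetes-dashboard/ {print $8}'` does the same in one line.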
    
    [screenshot: k8s登录.png — the dashboard login page]

    Select "Token" to sign in

    # Create an admin service account for the dashboard
    kubectl create serviceaccount dashboard-admin -n kube-system
    
    # Bind the account to the cluster-admin role
    kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    
    # Retrieve the token (kubectl describe matches the secret by name prefix)
    kubectl describe secret -n kube-system dashboard-admin-token
    # --- output ---
    Name:         dashboard-admin-token-pb78x
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: 166aeb8d-604e-11e9-80d6-080027d8332b
    
    Type:  kubernetes.io/service-account-token
    
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbHBzc2oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOGUxNzM3YjUtNjE3OC0xMWU5LWJlMTktMDAwYzI5M2YxNDg2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.KHTf4_3DJu0liKeoOIoCssmIRXSHM_A4w9XVJKQ44jqEfPSbpwohqKnHxOspWAWsjwRrc3kSQyC9KEDCfTYl91ZY_PzUSqPG8XY58ab1p9q1xUxdDYu3qCyaSHWTQ2dATl1G5nNZQLfrarwWIPurm0BLBLsR1crIQj1P8VGafJJXz-TCQZgiw1OHqB8w89IBUhGrn8vuaIdspNLNZmrl-icjFS4eAevBREwlxqxX0-3-mzTFE8xqCHyfJ7pKpK-Jv1jSpuHjb0CfDPvNBuAGp5jQG44Ya6wq1BcqQO4RiQ07hjfIrnwmfWyZWmBn9YLvBVByupLv872kUUSSxjxxbg
    # ------
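For reference, the two `kubectl create` commands above correspond to the following declarative manifest (same account and binding names), which could be applied with `kubectl apply -f` instead; a sketch, not taken from the original article:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
```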
    Log in to the dashboard with the generated token.
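The token itself is a JWT: three base64url segments separated by dots. You can decode the middle (payload) segment to check which service account a token carries before pasting it into the login form. A sketch, where `jwt_payload` is a hypothetical helper and the demo token is assembled in place rather than taken from a cluster:

```shell
# Decode the payload (middle segment) of a JWT. base64url uses '-' and '_'
# and omits '=' padding, so both must be undone before `base64 -d` accepts it.
jwt_payload() {
    local seg=${1#*.}     # drop the header segment
    seg=${seg%%.*}        # drop the signature segment
    seg=$(printf '%s' "$seg" | tr '_-' '/+')                 # base64url -> base64
    while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done # restore padding
    printf '%s' "$seg" | base64 -d
}

# Demo on a token assembled in place (real service-account tokens have the
# same header.payload.signature shape; the signature is irrelevant here):
demo="$(printf '%s' '{"alg":"none"}' | base64 -w0).$(printf '%s' '{"sub":"dashboard-admin"}' | base64 -w0).sig"
jwt_payload "$demo"    # {"sub":"dashboard-admin"}
```

Run against a real dashboard-admin token, the decoded payload includes the `kubernetes.io/serviceaccount/service-account.name` claim.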
    
    [screenshot: k8s登录后情况.png — the dashboard after login]

    That completes the setup!

Original link: https://www.haomeiwen.com/subject/qbcvgqtx.html