K8S: Deploying Kubernetes on CentOS 8.4 with kubeadm

Author: vicezz | Published 2022-04-07 13:54

    CentOS 8 has been available for some time, and Kubernetes 1.22.1 has been released, so today we deploy Kubernetes on CentOS 8 with kubeadm.

    The test environment runs on virtual machines under ESXi.

    OS: CentOS 8.4

    Kubernetes: 1.22.1

    IP plan:

    hostname        role     IP
    node-master01   master   10.255.10.140
    node-worker01   worker   10.255.10.141
    node-worker02   worker   10.255.10.142

    1. System preparation (the steps below are identical on the master and worker nodes)

    Check the OS version:


    [root@localhost]# cat /etc/centos-release 

    CentOS Linux release 8.4.2105

    Configure the network:


    [root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens160
    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=static
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=ens160
    UUID=039303a5-c70d-4973-8c91-97eaa071c23d
    DEVICE=ens160
    ONBOOT=yes
    IPADDR=10.255.10.140
    NETMASK=255.255.255.0
    GATEWAY=10.255.10.2
    DNS1=119.29.29.29
    DNS2=223.5.5.5

    Add the Aliyun yum repos (this first removes the stock repo files):

    [root@localhost ~]# rm -rfv /etc/yum.repos.d/* 

    [root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
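    After swapping the repos, refresh the yum metadata cache:

    [root@localhost ~]# yum clean all
    [root@localhost ~]# yum makecache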

    Configure the hostname and hosts file:

    [root@master01 ~]# cat /etc/hosts 

    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 

    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 

    10.255.10.140 node-master01 

    10.255.10.141 node-worker01 

    10.255.10.142 node-worker02 

    [root@master01 ~]# hostnamectl set-hostname node-master01

    Disable swap and comment out the swap entry in /etc/fstab:

    [root@master01 ~]# swapoff -a 

    [root@master01 ~]# cat /etc/fstab 

    # /etc/fstab 

    # Created by anaconda on Tue Mar 31 22:44:34 2020

    # Accessible filesystems, by reference, are maintained under '/dev/disk/'. 

    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info. 

    # After editing this file, run 'systemctl daemon-reload' to update systemd 

    # units generated from this file. 

    /dev/mapper/cl-root / xfs defaults 0 0

    UUID=5fecb240-379b-4331-ba04-f41338e81a6e /boot ext4 defaults 1 2

    /dev/mapper/cl-home /home xfs defaults 0 0

    #/dev/mapper/cl-swap swap swap defaults 0 0
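    The same edit can be scripted; a small sed one-liner that comments out any fstab line whose fields include swap (idempotent, safe to re-run):

    [root@master01 ~]# sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab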

    Configure kernel parameters so bridged IPv4 traffic is passed to iptables chains:

    [root@master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf 

    net.bridge.bridge-nf-call-ip6tables = 1

    net.bridge.bridge-nf-call-iptables = 1

    EOF 

    [root@master01 ~]# sysctl --system
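    If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet; load it, make it persistent across reboots, and re-run sysctl:

    [root@master01 ~]# modprobe br_netfilter
    [root@master01 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
    [root@master01 ~]# sysctl --system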

    2. Install common packages

    [root@master01 ~]# yum install vim bash-completion net-tools gcc -y

    3. Install docker-ce from the Aliyun mirror

    [root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 

    [root@master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 

    [root@master01 ~]# yum -y install docker-ce containerd.io

    If installing docker-ce fails with the following error:

    [root@master01 ~]# yum -y install docker-ce 

    CentOS-8 - Base - mirrors.aliyun.com 

    14 kB/s | 3.8 kB 00:00

    CentOS-8 - Extras - mirrors.aliyun.com 

    6.4 kB/s | 1.5 kB 00:00

    CentOS-8 - AppStream - mirrors.aliyun.com 

    16 kB/s | 4.3 kB 00:00

    Docker CE Stable - x86_64 

    40 kB/s | 22 kB 00:00

    Error: 

        Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed 

    - cannot install the best candidate for the job 

    - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded 

    - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded 

    - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded 

    - package containerd.io-1.2.2-3.el7.x86_64 is excluded 

    - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded 

    - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded 

    - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded 

    (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

    Workaround: install a newer containerd.io package manually:

    [root@master01 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/7.9/x86_64/stable/Packages/containerd.io-1.4.9-3.1.el7.x86_64.rpm 

    [root@master01 ~]# yum install containerd.io-1.4.9-3.1.el7.x86_64.rpm

    After that, installing docker-ce succeeds.

    Add the Aliyun Docker registry mirror and set the systemd cgroup driver:

    [root@master01 ~]# sudo mkdir -p /etc/docker 

    [root@master01 ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
    {
        "registry-mirrors": ["https://jsfcoj0r.mirror.aliyuncs.com"],
        "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF

    [root@master01 ~]# sudo systemctl daemon-reload 

    [root@master01 ~]# sudo systemctl restart docker
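    Also enable docker so the runtime comes back automatically after a reboot:

    [root@master01 ~]# sudo systemctl enable docker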

    Run docker info | grep Cgroup and check the output: Cgroup Driver: systemd means it worked.

    4. Install kubectl, kubelet and kubeadm

    Add the Aliyun Kubernetes repo:

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo 

    [kubernetes] 

    name=Kubernetes 

    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ 

    enabled=1

    gpgcheck=1

    repo_gpgcheck=1

    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 

    EOF

    Install:

    [root@master01 ~]# yum makecache 

    [root@master01 ~]# yum list kubeadm --showduplicates | sort -r   # list installable versions

    [root@master01 ~]# yum install -y kubelet-1.22.1-0 kubeadm-1.22.1-0 kubectl-1.22.1-0

    [root@master01 ~]# systemctl enable kubelet && systemctl start kubelet
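    At this point kubelet restarts in a loop because it has no cluster configuration yet; that is expected until kubeadm init runs. You can confirm it is in the activating (auto-restart) state:

    [root@master01 ~]# systemctl status kubelet --no-pager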

    5. Initialize the k8s cluster (master node)

    The k8s container images are hosted outside mainland China, so there are two workarounds: (1) use a proxy, or (2) switch to a domestic mirror.

    [root@master01 ~]# kubeadm init \
    --apiserver-advertise-address=10.255.10.140 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.22.1 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.50.0.0/16

    The Pod network CIDR is 10.50.0.0/16, and the API server address is the master's own IP.

    This step is critical: kubeadm pulls the required images from k8s.gcr.io by default, which is not reachable from mainland China, so --image-repository points it at the Aliyun mirror instead.
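    The images can also be pre-pulled before running init, with the same repository override:

    [root@master01 ~]# kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.22.1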

    After successful initialization, the cluster returns output like the following:

    [init] Using Kubernetes version: v1.22.1

    [preflight] Running pre-flight checks

    [preflight] Pulling images required for setting up a Kubernetes cluster

    [preflight] This might take a minute or two, depending on the speed of your internet connection

    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

    [certs] Using certificateDir folder "/etc/kubernetes/pki"

    [certs] Generating "ca" certificate and key

    [certs] Generating "apiserver" certificate and key

    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.1.0.1 10.255.10.140]

    [certs] Generating "apiserver-kubelet-client" certificate and key

    [certs] Generating "front-proxy-ca" certificate and key

    [certs] Generating "front-proxy-client" certificate and key

    [certs] Generating "etcd/ca" certificate and key

    [certs] Generating "etcd/server" certificate and key

    [certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [10.255.10.140 127.0.0.1 ::1]

    [certs] Generating "etcd/peer" certificate and key

    [certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [10.255.10.140 127.0.0.1 ::1]

    [certs] Generating "etcd/healthcheck-client" certificate and key

    [certs] Generating "apiserver-etcd-client" certificate and key

    [certs] Generating "sa" key and public key

    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"

    [kubeconfig] Writing "admin.conf" kubeconfig file

    [kubeconfig] Writing "kubelet.conf" kubeconfig file

    [kubeconfig] Writing "controller-manager.conf" kubeconfig file

    [kubeconfig] Writing "scheduler.conf" kubeconfig file

    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

    [kubelet-start] Starting the kubelet

    [control-plane] Using manifest folder "/etc/kubernetes/manifests"

    [control-plane] Creating static Pod manifest for "kube-apiserver"

    [control-plane] Creating static Pod manifest for "kube-controller-manager"

    [control-plane] Creating static Pod manifest for "kube-scheduler"

    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

    [apiclient] All control plane components are healthy after 6.003919 seconds

    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

    [kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster

    [upload-certs] Skipping phase. Please see --upload-certs

    [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

    [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

    [bootstrap-token] Using token: lccux7.6pkxkidaxj2l6uzq

    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

    [addons] Applied essential addon: CoreDNS

    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube

      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

      export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.

    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.255.10.140:6443 --token lccux7.6pkxkidaxj2l6uzq \

        --discovery-token-ca-cert-hash sha256:ebdf5006d7a0033e5ae77587da03855236b47d4afd98c89ec88fcce59d14e086

    Save this last part of the output; it is the command to run on each of the other nodes to join them to the Kubernetes cluster.
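    The token expires after 24 hours; if the join command is lost, it can be regenerated on the master at any time:

    [root@master01 ~]# kubeadm token create --print-join-command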

    Set up kubectl as the output instructs:

    [root@master01 ~]# mkdir -p $HOME/.kube
    [root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Run the following to enable kubectl tab completion:

    [root@master01 ~]# source <(kubectl completion bash)
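    To make completion persist across shells, append it to ~/.bashrc:

    [root@master01 ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc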

    Check the node and pods:

    [root@master01 ~]# kubectl get node

    NAME            STATUS   ROLES                  AGE   VERSION

    node-master01   Ready    control-plane,master   10m   v1.22.1

    [root@master01 ~]# kubectl get pod --all-namespaces

    NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE

    kube-system   coredns-7f6cbbb7b8-h2zd4                    1/1     Running   0               10m
    kube-system   coredns-7f6cbbb7b8-w7rt9                    1/1     Running   0               10m
    kube-system   etcd-node-master01                          1/1     Running   4 (6m33s ago)   10m
    kube-system   kube-apiserver-node-master01                1/1     Running   4 (6m23s ago)   10m
    kube-system   kube-controller-manager-node-master01       1/1     Running   4 (6m33s ago)   10m
    kube-system   kube-proxy-qfvq4                            1/1     Running   0               2m14s
    kube-system   kube-proxy-svkhj                            1/1     Running   0               54s
    kube-system   kube-proxy-zvh7p                            1/1     Running   1 (6m33s ago)   10m
    kube-system   kube-scheduler-node-master01                1/1     Running   3 (6m28s ago)   10m

    6. Install the calico network (calico is used here; a comparison of the Flannel and Calico workflows will be posted later)

    [root@master01 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

    configmap/calico-config created

    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

    clusterrole.rbac.authorization.k8s.io/calico-node created

    clusterrolebinding.rbac.authorization.k8s.io/calico-node created

    daemonset.apps/calico-node created

    serviceaccount/calico-node created

    deployment.apps/calico-kube-controllers created

    serviceaccount/calico-kube-controllers created

    poddisruptionbudget.policy/calico-kube-controllers created

    Check the pods and node again:

    [root@master01 ~]# kubectl get node

    NAME            STATUS   ROLES                  AGE   VERSION

    node-master01   Ready    control-plane,master   10m   v1.22.1

    [root@master01 ~]# kubectl get pod --all-namespaces

    NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE

    kube-system   calico-kube-controllers-75f8f6cc59-dpznm    1/1     Running   0               5m45s
    kube-system   calico-node-7bbpc                           1/1     Running   0               2m14s
    kube-system   calico-node-tjx6c                           1/1     Running   0               5m45s
    kube-system   calico-node-xzgtg                           1/1     Running   0               54s
    kube-system   coredns-7f6cbbb7b8-h2zd4                    1/1     Running   0               10m
    kube-system   coredns-7f6cbbb7b8-w7rt9                    1/1     Running   0               10m
    kube-system   etcd-node-master01                          1/1     Running   4 (6m33s ago)   10m
    kube-system   kube-apiserver-node-master01                1/1     Running   4 (6m23s ago)   10m
    kube-system   kube-controller-manager-node-master01       1/1     Running   4 (6m33s ago)   10m
    kube-system   kube-proxy-qfvq4                            1/1     Running   0               2m14s
    kube-system   kube-proxy-svkhj                            1/1     Running   0               54s
    kube-system   kube-proxy-zvh7p                            1/1     Running   1 (6m33s ago)   10m
    kube-system   kube-scheduler-node-master01                1/1     Running   3 (6m28s ago)   10m

    The cluster is now in a healthy state.

    7. Widen the kube-apiserver NodePort range

    By default, k8s only exposes NodePort services on ports 30000-32767, but this range can be changed via configuration.

    Edit the manifest with vim /etc/kubernetes/manifests/kube-apiserver.yaml and add - --service-node-port-range=1-65535 to the command list under spec.containers:

    [root@master01 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml

    apiVersion: v1

    kind: Pod

    metadata:

      annotations:

        kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.255.10.140:6443

      creationTimestamp: null

      labels:

        component: kube-apiserver

        tier: control-plane

      name: kube-apiserver

      namespace: kube-system

    spec:

      containers:

      - command:

        - kube-apiserver

        - --advertise-address=10.255.10.140

        - --allow-privileged=true

        - --authorization-mode=Node,RBAC

        - --client-ca-file=/etc/kubernetes/pki/ca.crt

        - --enable-admission-plugins=NodeRestriction

        - --enable-bootstrap-token-auth=true

        - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt

        - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt

        - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key

        - --etcd-servers=https://127.0.0.1:2379

        - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt

        - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key

        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname

        - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt

        - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key

        - --requestheader-allowed-names=front-proxy-client

        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt

        - --requestheader-extra-headers-prefix=X-Remote-Extra-

        - --requestheader-group-headers=X-Remote-Group

        - --requestheader-username-headers=X-Remote-User

        - --secure-port=6443

        - --service-account-issuer=https://kubernetes.default.svc.cluster.local

        - --service-account-key-file=/etc/kubernetes/pki/sa.pub

        - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key

        - --service-cluster-ip-range=10.1.0.0/16

        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

        - --service-node-port-range=1-65535

        image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.1

        imagePullPolicy: IfNotPresent

        livenessProbe:

          failureThreshold: 8

          httpGet:

            host: 10.255.10.140

            path: /livez

            port: 6443

            scheme: HTTPS

          initialDelaySeconds: 10

          periodSeconds: 10

          timeoutSeconds: 15

    ......................................................

    Then restart docker and kubelet on every machine:

    [root@master01 ~]# systemctl restart docker.service && systemctl restart kubelet.service

    [root@node-worker01 ~]# systemctl restart docker.service && systemctl restart kubelet.service

    [root@node-worker02 ~]# systemctl restart docker.service && systemctl restart kubelet.service

    Check the pod status:

    [root@master01 ~]# kubectl get pod --all-namespaces

    NAMESPACE              NAME                                        READY   STATUS    RESTARTS      AGE

    kube-system            calico-kube-controllers-75f8f6cc59-dpznm    1/1     Running   6 (52s ago)   3h27m
    kube-system            calico-node-7bbpc                           1/1     Running   2 (35s ago)   3h23m
    kube-system            calico-node-tjx6c                           1/1     Running   2 (52s ago)   3h27m
    kube-system            calico-node-xzgtg                           1/1     Running   2 (29s ago)   3h22m
    kube-system            coredns-7f6cbbb7b8-h2zd4                    1/1     Running   2 (47s ago)   3h31m
    kube-system            coredns-7f6cbbb7b8-w7rt9                    1/1     Running   2 (47s ago)   3h31m
    kube-system            etcd-node-master01                          1/1     Running   6 (52s ago)   3h31m
    kube-system            kube-apiserver-node-master01                1/1     Running   1 (36s ago)   29s
    kube-system            kube-controller-manager-node-master01       1/1     Running   7 (52s ago)   3h31m
    kube-system            kube-proxy-qfvq4                            1/1     Running   2 (35s ago)   3h23m
    kube-system            kube-proxy-svkhj                            1/1     Running   2 (29s ago)   3h22m
    kube-system            kube-proxy-zvh7p                            1/1     Running   3 (52s ago)   3h31m
    kube-system            kube-scheduler-node-master01                1/1     Running   6 (52s ago)   3h31m
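    To confirm the wider range is active, a quick smoke test with a hypothetical nginx-test service (before the change, --node-port=80 would be rejected as outside the valid range):

    [root@master01 ~]# kubectl create deployment nginx-test --image=nginx
    [root@master01 ~]# kubectl create service nodeport nginx-test --tcp=80:80 --node-port=80
    [root@master01 ~]# kubectl delete deployment,service nginx-test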

    8. Install kubernetes-dashboard

    The official dashboard manifest does not expose the service via NodePort, so download the yaml locally and add a nodePort to the Service:

    [root@master01 ~]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml

    [root@master01 ~]# vim recommended.yaml

    kind: Service
    apiVersion: v1
    metadata:

      labels:

        k8s-app: kubernetes-dashboard

      name: kubernetes-dashboard

      namespace: kubernetes-dashboard

    spec:

      type: NodePort

      ports:

        - port: 443

          targetPort: 8443

          nodePort: 30000

      selector:

        k8s-app: kubernetes-dashboard

    [root@master01 ~]# kubectl create -f recommended.yaml

    namespace/kubernetes-dashboard created

    serviceaccount/kubernetes-dashboard created

    service/kubernetes-dashboard created

    secret/kubernetes-dashboard-certs created

    secret/kubernetes-dashboard-csrf created

    secret/kubernetes-dashboard-key-holder created

    configmap/kubernetes-dashboard-settings created

    role.rbac.authorization.k8s.io/kubernetes-dashboard created

    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

    deployment.apps/kubernetes-dashboard created

    service/dashboard-metrics-scraper created

    deployment.apps/dashboard-metrics-scraper created

    Check the pod and service:

    [root@master01 ~]# kubectl get pod --all-namespaces | grep kubernetes-dashboard

    kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-fzrsr   1/1     Running   0     54s
    kubernetes-dashboard   kubernetes-dashboard-576cb95f94-dl8dr       1/1     Running   0     54s

    [root@master01 ~]# kubectl get svc -n kubernetes-dashboard

    NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE

    dashboard-metrics-scraper   ClusterIP   10.1.128.40    <none>        8000/TCP        84s
    kubernetes-dashboard        NodePort    10.1.131.243   <none>        443:30000/TCP   85s
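    A quick reachability check from the shell (-k skips verification of the self-signed certificate):

    [root@master01 ~]# curl -k https://10.255.10.140:30000/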

    Open the page in a browser; Firefox is recommended (Chrome and other browsers refuse to open the page because of the untrusted certificate).

    Log in with a token; run the commands below to obtain one:

    [root@master01 ~]# kubectl create serviceaccount dashboard -n kubernetes-dashboard

    [root@master01 ~]# kubectl create rolebinding def-ns-admin --clusterrole=admin --serviceaccount=default:def-ns-admin

    [root@master01 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard

    [root@master01 ~]# kubectl describe sa dashboard -n kubernetes-dashboard

    Name:                dashboard

    Namespace:           kubernetes-dashboard

    Labels:              <none>

    Annotations:         <none>

    Image pull secrets:  <none>

    Mountable secrets:   dashboard-token-2bwzt

    Tokens:              dashboard-token-2bwzt

    Events:              <none>

    [root@master01 ~]# kubectl describe secret dashboard-token-2bwzt -n kubernetes-dashboard

    Name:         dashboard-token-2bwzt

    Namespace:    kubernetes-dashboard

    Labels:       <none>

    Annotations:  kubernetes.io/service-account.name: dashboard

                  kubernetes.io/service-account.uid: 3c0a442f-06ee-4b81-b056-11f9b12ca0f5

    Type:  kubernetes.io/service-account-token

    Data

    ====

    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InlwSUFzN1AwbEM4NmgyTERZd19NeVZkMWJaMXh2dHNmQUR5aHg3aE5BYW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tMmJ3enQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2MwYTQ0MmYtMDZlZS00YjgxLWIwNTYtMTFmOWIxMmNhMGY1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZCJ9.eABZfsBDw-jow3lBX2DGt4O0kebiWKc4S23HIngLmyW4A_kkKNAkfDEzvozo3DFTQ0WRIJy3KFhDvPBcpttqRjxj2_DCFFrOTFJkgpYRd9E_8aaB_cSeCbANPp0bVVaZhor50fRgRQBiXe_8Bq_rYpjvlJD3TB87H8M1OdSCZ5kZWmt6aKsu-g8N_dFIf8rxyyHn-TB9aOQq6_v7Vv50UN7LfJj3HY3Bm1Kb66kS8vmp4X2QDwlyOn4zw11EkrHNcZ1Er0wRtsau3IFx_bydJXJ9GvDFa-vUs85cshyoRtPIcnaWAeOCvRbcX1X8UwOm6_QT0PaQDTORBAhiBcDNKg

    ca.crt: 1099 bytes

    namespace: 20 bytes
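    The same token can be pulled out in one line; a convenience sketch that assumes the auto-generated service-account secret still exists (true on 1.22):

    [root@master01 ~]# kubectl -n kubernetes-dashboard get secret \
    $(kubectl -n kubernetes-dashboard get sa dashboard -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d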

    The login page then appears.

    Following the steps above, k8s 1.22.1 installs successfully on CentOS 8.
