k8s

Author: Maple_JW | Published 2021-04-08 17:51

    Introduction to the k8s components

    • etcd stores the state of the entire cluster
    • apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery
    • controller manager maintains the cluster state, e.g. failure detection, auto-scaling, and rolling updates
    • scheduler handles resource scheduling, placing Pods onto suitable machines according to the configured scheduling policy
    • kubelet maintains the container lifecycle and also manages volumes (CVI) and networking (CNI)
    • kube-proxy provides in-cluster service discovery and load balancing for Services
    • Ingress Controller provides the external entry point for services
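
    As a quick orientation (a sketch, assuming a kubeadm-built cluster like the one installed below), most of these components are visible as pods in the kube-system namespace, while kubelet runs as a host service:

    # The control-plane pieces (apiserver, scheduler, controller-manager, etcd)
    # run as static pods on the master; kube-proxy runs as a DaemonSet.
    kubectl get pods -n kube-system -o wide

    # kubelet itself is a systemd service on every node.
    systemctl status kubelet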

    k8s concepts

    Namespace

    A Namespace is an abstraction over a group of resources and objects; for example, it can be used to divide the objects in a system into different project groups or user groups. Common resources such as pods, services, replication controllers, and deployments belong to some namespace (default by default), while node, persistentVolumes, and the like do not belong to any namespace.

    Namespaces are commonly used to isolate different users; for example, the services that ship with Kubernetes generally run in the kube-system namespace.

    (1) Create directly from the command line
    $ kubectl create namespace new-namespace
    
    (2) Create from a file
    $ cat my-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: new-namespace
    
    $ kubectl create -f ./my-namespace.yaml
    -----------------------------------------------------------
    Delete a namespace:
    kubectl delete namespaces new-namespace
    1. Deleting a namespace automatically deletes all resources that belong to it.
    2. The default and kube-system namespaces cannot be deleted.
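
    Day to day, most kubectl commands take a -n/--namespace flag, and you can change the default namespace for the current context. Two quick examples (using the new-namespace created above):

    # List pods in a specific namespace
    kubectl get pods -n kube-system

    # Make new-namespace the default for the current kubectl context
    kubectl config set-context --current --namespace=new-namespace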
    

    Installing the k8s packages with yum

    yum install -y kubelet-1.19.0 kubectl-1.19.0 kubeadm-1.19.0
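
    After installing, enable kubelet so it starts on boot (a standard follow-up step for kubeadm installs, not shown in the original):

    systemctl enable --now kubelet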
    
    1. Initialize the cluster on the master node
    kubeadm init --kubernetes-version=1.19.0  --apiserver-advertise-address=10.0.3.166 --image-repository registry.aliyuncs.com/google_containers --service-cidr=192.168.0.1/16 --pod-network-cidr=10.10.0.0/16 --ignore-preflight-errors=Swap,NumCPU
    
    • During initialization you may hit an error saying the cgroup driver must be systemd. To fix it:
    Check the current cgroup driver with docker info | grep Cgroup
    Then edit /etc/docker/daemon.json and add the following:
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    

    Then run systemctl restart docker

    • During initialization you may also be told that swap has not been disabled:
      running with swap on is not supported. Please disable swap. To fix it:
    swapoff -a && sed -i '/swap/d' /etc/fstab
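
    To confirm swap is actually off (a quick check, not part of the original output):

    # The Swap row should read all zeros after swapoff -a
    free -m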
    

    On success, initialization prints something like:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      1. mkdir -p $HOME/.kube
      2. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      3. sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    4. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.0.3.166:6443 --token 1laxen.38bzx8hzul2ikbfk \
        --discovery-token-ca-cert-hash sha256:2df1e08577b6f8671bb19a7eaa2bdb9142040d370dae282d94b3001cf61619ab
    
    1. Run statements 1-3 from the success output above; statement 4 sets up the k8s network.
      After steps 1-3 are done, check how the master's components are running:
    2. Run kubectl get cs
    NAME                 STATUS      MESSAGE                                                                                     ERROR
    controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
    scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
    etcd-0               Healthy     {"health":"true"}
    controller-manager and scheduler report connection errors; check whether local ports 10252 and 10251 are listening.
    Edit kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/
    and remove the --port=0 flag from each config,
    then restart: systemctl restart kubelet
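
    One way to make that edit non-interactively (a sketch; the paths are the standard kubeadm static-pod manifest locations):

    # Drop the --port=0 line from both manifests, then restart kubelet
    sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
    sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
    systemctl restart kubelet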
    

    After the restart, run kubectl get cs again; all three components should now report Healthy.



    Here we use flannel to set up the k8s network.

    The flannel installation steps are as follows

    1. Fetch kube-flannel.yml
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    2. Apply it: kubectl apply -f kube-flannel.yml (but first see the CIDR note below)
    3. Check the pods: kubectl get pod -n kube-system
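
    One thing worth checking before step 2: the net-conf.json in kube-flannel.yml defaults to "Network": "10.244.0.0/16", while kubeadm init above used --pod-network-cidr=10.10.0.0/16. Edit the ConfigMap in kube-flannel.yml so the two match (a sketch of the relevant fragment):

    net-conf.json: |
      {
        "Network": "10.10.0.0/16",
        "Backend": {
          "Type": "vxlan"
        }
      }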



    Joining nodes

    1. Run on node1 and node2:
    kubeadm join 10.0.3.166:6443 --token 1laxen.38bzx8hzul2ikbfk    --discovery-token-ca-cert-hash sha256:2df1e08577b6f8671bb19a7eaa2bdb9142040d370dae282d94b3001cf61619ab
    

    If the master's token has expired

    # Get a new token
    [root@master k8s]# kubeadm token create 
    bjjq4p.4c8ntpy20aoqptmi
    # Get the discovery-token-ca-cert-hash
    [root@master k8s]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    2df1e08577b6f8671bb19a7eaa2bdb9142040d370dae282d94b3001cf61619ab
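
    Alternatively, kubeadm can print a complete, ready-to-paste join command in one step:

    kubeadm token create --print-join-command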
    
    2. After joining, list the nodes
    kubectl get nodes
    [root@master k8s]# kubectl get nodes
    NAME     STATUS     ROLES    AGE     VERSION
    cdh2     NotReady   <none>   38s     v1.19.0
    cdh3     Ready      <none>   20m     v1.19.0
    master   Ready      master   7d21h   v1.19.0
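
    A node normally shows NotReady for a short while until its CNI (flannel) pod is running on it; if it stays that way, inspect it (cdh2 is the NotReady node above):

    # Conditions, taints, and recent events for the node
    kubectl describe node cdh2

    # Confirm the flannel and kube-proxy pods are running on that node
    kubectl get pods -n kube-system -o wide | grep cdh2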
    

    ingress-controller deployment

    1. wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
    2. Modify part of the configuration in mandatory.yaml
      Part of the config is shown below; add hostNetwork: true to expose the ports
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/part-of: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/part-of: ingress-nginx
          annotations:
            prometheus.io/port: "10254"
            prometheus.io/scrape: "true"
        spec:
          # wait up to five minutes for the drain of connections
          hostNetwork: true # add this config to expose ports 80/443 on the host
          terminationGracePeriodSeconds: 300
          serviceAccountName: nginx-ingress-serviceaccount
          nodeSelector:
            kubernetes.io/os: linux
          containers:
            - name: nginx-ingress-controller
              image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
              args:
                - /nginx-ingress-controller
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
                - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
                - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
                - --publish-service=$(POD_NAMESPACE)/ingress-nginx
                - --annotations-prefix=nginx.ingress.kubernetes.io
              securityContext:
                allowPrivilegeEscalation: true
                capabilities:
                  drop:
                    - ALL
                  add:
                    - NET_BIND_SERVICE
                # www-data -> 101
                runAsUser: 101
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              ports:
                - name: http
                  containerPort: 80
                  protocol: TCP
                - name: https
                  containerPort: 443
                  protocol: TCP
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 10
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 10
              lifecycle:
                preStop:
                  exec:
                    command:
                      - /wait-shutdown
    
    3. Run kubectl apply -f mandatory.yaml
    4. Check the deployment:
    [root@master k8s]# kubectl get pod -o wide -n ingress-nginx
    NAME                                        READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
    nginx-ingress-controller-7d4544b644-4d974   1/1     Running   0          6m52s   10.0.3.164   cdh2   <none>           <none>
    
    5. On the node it was scheduled to, check the port mappings
    [root@cdh2 nginx]# netstat -tnlp | grep nginx
    tcp        0      0 127.0.0.1:10246         0.0.0.0:*               LISTEN      115141/nginx: maste 
    tcp        0      0 127.0.0.1:10247         0.0.0.0:*               LISTEN      115141/nginx: maste 
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      115141/nginx: maste 
    tcp        0      0 0.0.0.0:8181            0.0.0.0:*               LISTEN      115141/nginx: maste 
    tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      115141/nginx: maste 
    tcp        0      0 127.0.0.1:10245         0.0.0.0:*               LISTEN      115108/nginx-ingres 
    tcp6       0      0 :::10254                :::*                    LISTEN      115108/nginx-ingres 
    tcp6       0      0 :::80                   :::*                    LISTEN      115141/nginx: maste 
    tcp6       0      0 :::8181                 :::*                    LISTEN      115141/nginx: maste 
    tcp6       0      0 :::443                  :::*                    LISTEN      115141/nginx: maste
    

    Because hostNetwork is configured, nginx is listening on ports 80/443/8181 directly on the node host; 8181 is a default backend that nginx-controller configures out of the box. As long as the node has a public IP, you can point domain names at it to expose services externally. For nginx high availability, deploy it on several nodes and put an LVS+keepalived pair in front as a load balancer. Another benefit of hostNetwork: LVS in DR mode does not support port mapping, so if you instead used NodePort with non-standard ports, management would become painful.
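
    To actually route traffic through the controller you create an Ingress object. A minimal sketch (the Service my-service and host demo.example.com are hypothetical; the v1beta1 API matches controller 0.30.0 on k8s 1.19):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo-ingress
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
        - host: demo.example.com    # hypothetical domain
          http:
            paths:
              - path: /
                backend:
                  serviceName: my-service   # hypothetical backend Service
                  servicePort: 80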

    1. You can use a DaemonSet to deploy the ingress-controller onto specific nodes; this needs a few config changes. First label the nodes that should run nginx-ingress; here we deploy onto the "node-1" node for testing.
    Give node-1 the label isIngress="true":
    kubectl label node node-1 isIngress="true"
    Then set the DaemonSet's nodeSelector to isIngress: "true"
    spec:
      nodeSelector:
        isIngress: "true"  # the value must be a quoted string, not a bare boolean
    
    2. Modify part of mandatory.yaml: find the kind: Deployment section and change it as follows
    apiVersion: apps/v1
    kind: DaemonSet # change Deployment to DaemonSet
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      
    #  replicas: 1 # remove replicas; a DaemonSet runs one pod per matching node
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/part-of: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/part-of: ingress-nginx
          annotations:
            prometheus.io/port: "10254"
            prometheus.io/scrape: "true"
        spec:
          # wait up to five minutes for the drain of connections
          hostNetwork: true # add this config to expose ports 80/443 on the host
          terminationGracePeriodSeconds: 300
          serviceAccountName: nginx-ingress-serviceaccount
          # select nodes with the matching label
          nodeSelector:
            isIngress: "true"  # changed from kubernetes.io/os: linux; the value must be a quoted string
          containers:
            - name: nginx-ingress-controller
              image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
              args:
                - /nginx-ingress-controller
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
                - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
                - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
                - --publish-service=$(POD_NAMESPACE)/ingress-nginx
                - --annotations-prefix=nginx.ingress.kubernetes.io
              securityContext:
                allowPrivilegeEscalation: true
                capabilities:
                  drop:
                    - ALL
                  add:
                    - NET_BIND_SERVICE
                # www-data -> 101
                runAsUser: 101
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              ports:
                - name: http
                  containerPort: 80
                  protocol: TCP
                - name: https
                  containerPort: 443
                  protocol: TCP
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 10
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 10
              lifecycle:
                preStop:
                  exec:
                    command:
                      - /wait-shutdown
    

    metrics-server deployment

    metrics-server lets k8s monitor pod CPU and other usage metrics, which is what horizontal autoscaling is driven by.

    1. Create metric-rbac.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      namespace: kube-system
      name: metrics-server
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:aggregated-metrics-reader
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-view: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rules:
    - apiGroups: ["metrics.k8s.io"]
      resources: ["pods","nodes"]
      verbs: ["get","list","watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:metrics-server
      labels:
        k8s-app: metrics-server
    rules:
    - apiGroups: [""]
      resources: ["pods","nodes","nodes/stats","namespaces","configmaps"]
      verbs: ["get","list","watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: metrics-server:system:auth-delegator
      labels:
        k8s-app: metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: metrics-server-auth-reader
      namespace: kube-system
      labels:
        k8s-app: metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:metrics-server
      labels:
        k8s-app: metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    
    2. Create metric-api.yaml
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      versionPriority: 100
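
    Once all three manifests have been applied (step 4 below), this APIService should register and report Available; a quick check (not from the original article):

    # AVAILABLE should show True once metrics-server is up
    kubectl get apiservice v1beta1.metrics.k8s.io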
    
    3. Create metric-deploy.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        k8s-app: metrics-server
    
    ---
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: metrics-server
      namespace: kube-system
      labels:
        k8s-app: metrics-server
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          name: metrics-server
          labels:
            k8s-app: metrics-server
        spec:
          hostNetwork: true
          serviceAccountName: metrics-server
          containers:
          - name: metrics-server
            image: bitnami/metrics-server:0.4.1
            imagePullPolicy: IfNotPresent
            args:
              - --cert-dir=/tmp
              - --secure-port=4443
              - --kubelet-insecure-tls
              - --kubelet-use-node-status-port
              - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /readyz
                port: https
                scheme: HTTPS
              periodSeconds: 10
            ports:
            - name: https
              containerPort: 4443
              protocol: TCP
            securityContext:
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            resources:
              limits:
                memory: 1Gi
                cpu: 1000m
              requests:
                memory: 1Gi
                cpu: 1000m
            volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
            - name: localtime
              readOnly: true
              mountPath: /etc/localtime
          volumes:
          - name: tmp-dir
            emptyDir: {}
          - name: localtime
            hostPath:
              type: File
              path: /etc/localtime
          nodeSelector:
            kubernetes.io/os: linux
    
    4. Apply them in turn: kubectl apply -f metric-rbac.yaml, kubectl apply -f metric-api.yaml, kubectl apply -f metric-deploy.yaml
    5. Run kubectl top po; output like the following means it works
    [root@master metrics-server]# kubectl top po
    NAME                                CPU(cores)   MEMORY(bytes)   
    nginx-deployment-77c6777f7b-qvpx2   0m           1Mi             
    nginx-node                          0m           3Mi
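
    With metrics flowing, the Horizontal Pod Autoscaler can scale workloads on CPU. A minimal sketch (it assumes the nginx-deployment Deployment seen above has a CPU request set; the thresholds are illustrative):

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: nginx-deployment-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: nginx-deployment
      minReplicas: 1
      maxReplicas: 5
      targetCPUUtilizationPercentage: 80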
    

    dashboard installation

    1. wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.5/aio/deploy/recommended.yaml
      Note the version number here; pick the one that matches your cluster.
    2. kubectl apply -f recommended.yaml

    3. Check whether it installed successfully: kubectl get pods --all-namespaces

    [root@master k8s]# kubectl get pods --all-namespaces
    NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
    kube-system            coredns-6d56c8448f-tq4sb                     1/1     Running            0          7d22h
    kube-system            coredns-6d56c8448f-x4prb                     1/1     Running            0          7d22h
    kube-system            etcd-master                                  1/1     Running            0          7d22h
    kube-system            kube-apiserver-master                        1/1     Running            0          7d22h
    kube-system            kube-controller-manager-master               1/1     Running            8          7d20h
    kube-system            kube-flannel-ds-f2m7q                        1/1     Running            0          118m
    kube-system            kube-flannel-ds-pxhz6                        1/1     Running            0          98m
    kube-system            kube-flannel-ds-r95k6                        1/1     Running            0          6d5h
    kube-system            kube-proxy-k6jpw                             1/1     Running            0          7d22h
    kube-system            kube-proxy-nxdbf                             1/1     Running            0          118m
    kube-system            kube-proxy-v2vfg                             1/1     Running            0          98m
    kube-system            kube-scheduler-master                        1/1     Running            8          7d20h
    kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc-rvj7x   1/1     Running            0          12m
    kubernetes-dashboard   kubernetes-dashboard-6f65cb5c64-r27w6        0/1     CrashLoopBackOff   7          12m
    The dashboard pod has failed, so check its logs:
    kubectl logs kubernetes-dashboard-6f65cb5c64-r27w6 --tail=100 -n kubernetes-dashboard
    It reports the following error:
    2021/03/31 08:41:18 Starting overwatch
    2021/03/31 08:41:18 Using namespace: kubernetes-dashboard
    2021/03/31 08:41:18 Using in-cluster config to connect to apiserver
    2021/03/31 08:41:18 Using secret token for csrf signing
    2021/03/31 08:41:18 Initializing csrf token from kubernetes-dashboard-csrf secret
    panic: Get "https://192.168.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 192.168.0.1:443: connect: connection timed out
    
    goroutine 1 [running]:
    github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003b7840)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x413
    github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
    github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000468100)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0xc6
    github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000468100)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x47
    github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
    main.main()
        /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x21c
    The pod cannot reach the api-server, so deploy the dashboard on the master node by modifying the config:
    spec:
          nodeName: master # add this to pin the pod to the master
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.6
    
    spec:
          nodeName: master # add this to pin the pod to the master
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.0.5
              imagePullPolicy: Always
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 32500 # the port exposed on the host
      type: NodePort # change to NodePort so the dashboard is reachable from a browser; otherwise it cannot be accessed
      selector:
        k8s-app: kubernetes-dashboard
    
    After modifying the config, reapply the edited recommended.yaml and delete the old dashboard pods so they are recreated:
    kubectl -n kubernetes-dashboard delete $(sudo kubectl -n kubernetes-dashboard get pod -o name | grep dashboard)
    You can also check the services with the following command:
     kubectl get svc --all-namespaces
    [root@master k8s]# kubectl get svc --all-namespaces
    NAMESPACE              NAME                        TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                  AGE
    default                kubernetes                  ClusterIP   192.168.0.1       <none>        443/TCP                  7d23h
    kube-system            kube-dns                    ClusterIP   192.168.0.10      <none>        53/UDP,53/TCP,9153/TCP   7d23h
    kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   192.168.99.230    <none>        8000/TCP                 42m
    kubernetes-dashboard   kubernetes-dashboard        NodePort    192.168.253.207   <none>        443:32500/TCP            42m
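
    The dashboard is now reachable at https://<node-ip>:32500. Logging in still requires a token; one common approach for a test cluster is to create an admin ServiceAccount (a sketch, not from the original article; admin-user is a hypothetical name, and cluster-admin rights are only appropriate for test setups):

    kubectl create serviceaccount admin-user -n kubernetes-dashboard
    kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin \
      --serviceaccount=kubernetes-dashboard:admin-user
    # Print the login token from the auto-created secret (k8s 1.19 behavior)
    kubectl -n kubernetes-dashboard describe secret \
      $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')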
    
    1. To pin the deployment with a DaemonSet to specific nodes, a few config changes are needed: first label the node that should run nginx-ingress; here assume it is deployed on the "cdh2" node.
    kubectl label node cdh2 isIngress="true"
    This forces nginx-ingress onto a chosen node, which makes it easy to put an LVS load balancer in front of nginx-ingress later.
    
