Kubernetes | Deploying the metrics-server Component

Author: 奶茶不要奶不要茶 | Published 2022-06-07 20:25
    kube-apiserver startup flag requirements
    --requestheader-allowed-names=aggregator
    --requestheader-extra-headers-prefix=X-Remote-Extra-
    --requestheader-group-headers=X-Remote-Group
    --requestheader-username-headers=X-Remote-User
    --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
    --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
    --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem
    --enable-aggregator-routing=true
    

    --requestheader-client-ca-file and --client-ca-file must use different CA certificates.

    If the kube-apiserver host does not run the kube-proxy component, the --enable-aggregator-routing=true flag must be added.
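
    Before changing anything, you can check whether the running kube-apiserver already carries these flags. A minimal sketch, assuming kube-apiserver runs as a local process or static pod on the node you are logged in to:

    # List the aggregation-layer related flags of the running kube-apiserver
    ps -ef | grep [k]ube-apiserver | tr ' ' '\n' | grep -E 'requestheader|proxy-client|enable-aggregator-routing'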

    References:

    PKI certificates and requirements | Kubernetes

    Configure the Aggregation Layer | Kubernetes

    Generate the APIServer aggregation layer certificates
    cat > front-proxy-ca-csr.json <<EOF
    {
      "CN": "kubernetes",
      "key": {
        "algo": "ecdsa",
        "size": 256
      },
      "ca": {
        "expiry": "87600h"
      }
    }
    EOF
    
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
            "expiry": "87600h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          },
          "etcd": {
            "expiry": "87600h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat > front-proxy-client-csr.json <<EOF
    {
      "CN": "aggregator",
      "key": {
        "algo": "ecdsa",
        "size": 256
      }
    }
    EOF
    
    cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
    
    cfssl gencert \
    -ca=front-proxy-ca.pem \
    -ca-key=front-proxy-ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare front-proxy-client
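
    The generated files then need to be placed where the kube-apiserver flags above expect them. A minimal sketch, assuming the /etc/kubernetes/ssl/ directory referenced by those flags (adjust to your own layout, and restart kube-apiserver after adding the flags):

    # Copy the front-proxy CA and client certificate/key to the path used in the kube-apiserver flags
    cp front-proxy-ca.pem front-proxy-client.pem front-proxy-client-key.pem /etc/kubernetes/ssl/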
    
    Deploy the metrics-server component
    #wget -O metrics-server.yaml https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
    cat > metrics-server.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
      name: system:aggregated-metrics-reader
    rules:
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes/metrics
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        k8s-app: metrics-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            - --metric-resolution=15s
            image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
            name: metrics-server
            ports:
            - containerPort: 4443
              name: https
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /readyz
                port: https
                scheme: HTTPS
              initialDelaySeconds: 20
              periodSeconds: 10
            resources:
              requests:
                cpu: 100m
                memory: 200Mi
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          volumes:
          - emptyDir: {}
            name: tmp-dir
    ---
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100
    EOF
    
    kubectl apply -f ./metrics-server.yaml
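
    Once the manifest is applied, verify that the aggregated API is serving; the pod name suffix will differ in your cluster:

    # The Deployment should become Ready and the APIService should report Available=True
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
    # Resource metrics should now be queryable (it can take a minute after startup)
    kubectl top nodes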
    
    Related errors
    6月 07 15:22:12 k8s-master-01 kubelet[659]: I0607 15:22:12.994633     659 prober.go:121] "Probe failed" probeType="Readiness" pod="kube-system/metrics-server-58cb87bb55-b5gfd" podUID=990f5b5c-ab3e-4ca4-a605-599adba926ec containerName="metrics-server" probeResult=failure output="HTTP probe failed with statuscode: 500"
    6月 07 15:22:17 k8s-master-01 kubelet[659]: I0607 15:22:17.953698     659 prober.go:121] "Probe failed" probeType="Readiness" pod="kube-system/metrics-server-58cb87bb55-b5gfd" podUID=990f5b5c-ab3e-4ca4-a605-599adba926ec containerName="metrics-server" probeResult=failure output="HTTP probe failed with statuscode: 500"
    6月 07 15:22:24 k8s-master-01 kubelet[659]: I0607 15:22:24.125028     659 log.go:184] http: TLS handshake error from 172.30.107.65:57164: remote error: tls: bad certificate
    

    The cause: the certificate served by the kubelet on port 10250 is self-signed, and metrics-server does not trust it.

    1. Add serverTLSBootstrap: true to the kubelet configuration file, restart the service, and then approve the certificates with kubectl (recommended; see the sketch below).

    2. Add the --kubelet-insecure-tls argument to metrics-server.yaml.
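
    A minimal sketch of option 1, assuming the kubelet configuration file lives at /var/lib/kubelet/config.yaml (the path may differ in your installation):

    # Add the following line to the kubelet configuration file, then restart the kubelet:
    #   serverTLSBootstrap: true
    systemctl restart kubelet

    # Each kubelet now submits a serving-certificate CSR; list and approve them
    kubectl get csr
    kubectl certificate approve <csr-name>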

    Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "kubernetes" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope
    

    The CN of the front-proxy-client.pem certificate is not in the --requestheader-allowed-names list.

    1. Re-issue the certificate with the correct CN (recommended).

    2. Add the --requestheader-allowed-names=kubernetes flag.

    5900 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.14.232.221:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.14.232.221:443/apis/metrics.k8s.io/v1beta1: 401
    

    The CN of the front-proxy-client.pem certificate is not in the --requestheader-allowed-names list.

    1. Re-issue the certificate with the correct CN (recommended).

    2. Add the --requestheader-allowed-names=<CN> flag.

    View certificate information
    openssl x509 -noout -text -in front-proxy-client.pem
    
    # View the certificate's CN
    [root@k8s-master-1 ssl-csr]# openssl x509 -noout -text -in front-proxy-client.pem | grep Subject:
            Subject: CN=aggregator
    [root@k8s-master-1 ssl-csr]#
    
