Deploying Kubernetes (k8s) with kubeadm

Author: TomGui | Published 2020-04-19 23:02

    1. Install Docker

    1.1 Install Docker on CentOS

    https://www.runoob.com/docker/centos-docker-install.html

    1.2 Configure a Docker registry mirror

    https://www.runoob.com/docker/docker-mirror-acceleration.html

    1.3 Change the Docker cgroup driver to systemd

    Command: sudo vi /etc/docker/daemon.json

    Configuration:

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    
    # If a registry mirror is also configured, the file looks like this
    {
      "registry-mirrors": ["https://sv2jerob.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    

    1.4 Restart Docker

    sudo systemctl restart docker

    Run sudo docker info to confirm that the Cgroup Driver is now systemd; before the change it was cgroupfs.
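
    To check just that one field (a small convenience I am adding, not part of the original):

    # Show only the cgroup driver reported by docker info
    sudo docker info | grep -i 'cgroup driver'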

    2. Install kubeadm

    2.1 Install from the Aliyun mirror

    # Configure the yum repo
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    # Install
    yum install -y kubeadm
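
    The kubeadm package pulls in kubelet and kubectl as dependencies. It is usually also worth enabling kubelet so that it starts on boot; this step is my addition and is not in the original:

    # Enable kubelet now and at boot; it will crash-loop until kubeadm init/join gives it a config
    sudo systemctl enable --now kubelet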
    

    3. Initialize the k8s master

    sudo kubeadm init

    Without a proxy to reach k8s.gcr.io (i.e. from behind the GFW), the command above fails with:

    [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver:v1.18.1]: exit status 1
        [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager:v1.18.1]: exit status 1
        [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler:v1.18.1]: exit status 1
        [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy:v1.18.1]: exit status 1
        [ERROR ImagePull]: failed to pull image [k8s.gcr.io/pause:3.2]: exit status 1
        [ERROR ImagePull]: failed to pull image [k8s.gcr.io/etcd:3.4.3-0]: exit status 1
        [ERROR ImagePull]: failed to pull image [k8s.gcr.io/coredns:1.6.7]: exit status 1
    

    If you cannot use a proxy, these images can be pulled manually through a mirror script:

    curl -s https://www.zhangguanzhang.com/pull | bash -s k8s.gcr.io/kube-apiserver:v1.18.1
    curl -s https://www.zhangguanzhang.com/pull | bash -s k8s.gcr.io/kube-controller-manager:v1.18.1
    curl -s https://www.zhangguanzhang.com/pull | bash -s k8s.gcr.io/kube-scheduler:v1.18.1
    curl -s https://www.zhangguanzhang.com/pull | bash -s k8s.gcr.io/kube-proxy:v1.18.1
    curl -s https://www.zhangguanzhang.com/pull | bash -s k8s.gcr.io/pause:3.2
    curl -s https://www.zhangguanzhang.com/pull | bash -s k8s.gcr.io/etcd:3.4.3-0
    curl -s https://www.zhangguanzhang.com/pull | bash -s k8s.gcr.io/coredns:1.6.7
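
    As an alternative to pulling each image by hand (my suggestion, not part of the original), kubeadm can fetch everything from the Aliyun mirror of k8s.gcr.io in one go:

    # Pre-pull all control-plane images from the Aliyun mirror
    sudo kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers

    # Or pass the mirror directly to init so it pulls and references the mirrored images
    sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers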
    

    4. Join worker nodes

    On each worker node, repeat steps 1 and 2 to install Docker and kubeadm, then run the join command printed by kubeadm init:

    sudo kubeadm join 10.0.0.6:6443 --token sh3723.182lsru545rta8qc --discovery-token-ca-cert-hash sha256:d079daadd145a5f5badfcfc8cfed0cb1fa47f734f7e05edb985243aeda202b75
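
    The token and CA hash above are specific to that cluster, and tokens expire after 24 hours by default. If the token has lapsed, a fresh join command can be printed on the master; this is standard kubeadm behaviour rather than a step from the original:

    # Run on the master to print a new "kubeadm join ..." command
    sudo kubeadm token create --print-join-command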

    5. One-time kubectl configuration for the new cluster

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    This copies the admin kubeconfig generated during kubeadm init into the current user's .kube directory; kubectl reads its credentials from there by default when talking to the cluster.

    Without this step, you would have to point kubectl at the file every time via the KUBECONFIG environment variable.
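
    For reference, the environment-variable alternative mentioned above looks like this (admin.conf is the path kubeadm writes by default):

    # Point kubectl at the admin kubeconfig for the current shell only
    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get nodes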

    6. Deploy a pod network add-on

    Using Weave Net as an example:

    export kubever=$(kubectl version | base64 | tr -d '\n')
    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
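
    Once the add-on pods are up, the nodes should move from NotReady to Ready. A quick check (the name=weave-net label is what the standard Weave manifest uses, so treat it as an assumption about your version):

    # Weave runs as a DaemonSet in kube-system
    kubectl get pods -n kube-system -l name=weave-net
    kubectl get nodes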
    

    7. Deploy the Dashboard UI

    7.1 Install

    To get around the GFW once more, the manifest below uses the Docker Hub image kubernetesui/dashboard:v2.0.0-rc7. Create the file:
    sudo vi dashboard.yaml

    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30001
      selector:
        k8s-app: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
        verbs: ["get", "update", "delete"]
        # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["kubernetes-dashboard-settings"]
        verbs: ["get", "update"]
        # Allow Dashboard to get metrics.
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["heapster", "dashboard-metrics-scraper"]
        verbs: ["proxy"]
      - apiGroups: [""]
        resources: ["services/proxy"]
        resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
        verbs: ["get"]
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
      # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list", "watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.0.0-rc7
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
                # Uncomment the following line to manually specify Kubernetes API server Host
                # If not specified, Dashboard will attempt to auto discover the API server and connect
                # to it. Uncomment only if the default does not work.
                # - --apiserver-host=http://my-address:port
              volumeMounts:
                - name: kubernetes-dashboard-certs
                  mountPath: /certs
                  # Create on-disk volume to store exec logs
                - mountPath: /tmp
                  name: tmp-volume
              livenessProbe:
                httpGet:
                  scheme: HTTPS
                  path: /
                  port: 8443
                initialDelaySeconds: 30
                timeoutSeconds: 30
          volumes:
            - name: kubernetes-dashboard-certs
              secret:
                secretName: kubernetes-dashboard-certs
            - name: tmp-volume
              emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 8000
          targetPort: 8000
      selector:
        k8s-app: dashboard-metrics-scraper
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: dashboard-metrics-scraper
      template:
        metadata:
          labels:
            k8s-app: dashboard-metrics-scraper
        spec:
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.4
              ports:
                - containerPort: 8000
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 8000
                initialDelaySeconds: 30
                timeoutSeconds: 30
              volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
          serviceAccountName: kubernetes-dashboard
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          volumes:
            - name: tmp-volume
              emptyDir: {}
    

    Deploy the manifest:

    kubectl apply -f dashboard.yaml

    Check that the pods are running:

    kubectl get pods -n kubernetes-dashboard

    Once they are Running, you can check the Dashboard logs (the pod name suffixes below are specific to this deployment; substitute your own):

    kubectl logs  -f  kubernetes-dashboard-cdbc9547c-7sb2n -n kubernetes-dashboard
    
    kubectl logs  -f  dashboard-metrics-scraper-fb986f88d-nd9d6  -n kubernetes-dashboard
    

    7.2 Allow external access

    kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'

    7.3 Access from a browser

    Open https://IP:30001 in a browser (the NodePort set in the Service above) and sign in with a token.

    7.4 Get a token

    The Dashboard supports both kubeconfig and token authentication. To keep the setup simple, grant the Dashboard's default user admin rights with the file dashboard-admin.yaml:

    sudo vi dashboard-admin.yaml

    File contents:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dashboard-admin
      namespace: kube-system
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: dashboard-admin
    subjects:
      - kind: ServiceAccount
        name: dashboard-admin
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    

    Apply it with kubectl:

    kubectl apply -f dashboard-admin.yaml

    Get the token. The secret name (dashboard-admin-token-nhrx9 here) comes from the first command below; substitute the name from your own cluster:

    kubectl get secret -n kube-system |grep dashboard-admin
    kubectl describe secret dashboard-admin-token-nhrx9 -n kube-system
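
    If you prefer not to copy the secret name by hand, the two commands can be combined. This one-liner is my addition and assumes the ServiceAccount still gets an auto-created token secret (true on Kubernetes 1.18):

    # Print only the token value of the dashboard-admin secret
    kubectl -n kube-system describe secret \
      $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}') \
      | grep '^token' | awk '{print $2}'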
    

    8. Install a storage plugin (Rook Ceph)

    cd /usr/local/src
    yum -y install git
    git clone https://github.com/rook/rook.git
    cd /usr/local/src/rook/cluster/examples/kubernetes/ceph
    kubectl apply -f common.yaml
    kubectl apply -f operator.yaml
    kubectl apply -f cluster.yaml 
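
    The Ceph cluster takes a few minutes to come up. The pods land in the rook-ceph namespace used by these example manifests, so they can be watched with:

    # Watch the Rook operator and Ceph daemons start
    kubectl get pods -n rook-ceph -w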
    

    Configure system-level parameters (apply these on every node):

    # Temporarily disable SELinux (setenforce 0 below)
    # Permanently disable it by editing /etc/sysconfig/selinux (the sed below)
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
    setenforce 0
    
    # Temporarily turn off swap
    # Permanently disable it by commenting out the swap line in /etc/fstab
    swapoff -a
    
    
    
    # Allow forwarding
    # Since Docker 1.13 the default firewall rules set the FORWARD chain
    # of the iptables filter table to DROP, which breaks pod-to-pod
    # traffic across nodes in a Kubernetes cluster
    
    iptables -P FORWARD ACCEPT
    
    # Configure bridge/forwarding sysctls; things may fail without them
    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    vm.swappiness=0
    EOF
    sysctl --system
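
    On some kernels the net.bridge.* sysctls above only exist once the br_netfilter module is loaded. This step is my addition, based on the standard kubeadm prerequisites:

    # Load the bridge netfilter module now and on every boot
    modprobe br_netfilter
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
    sysctl --system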
    
