KubeSphere in Production, Part 2: Localizing kubekey

Author: 微凉哇 | Published 2022-02-11 09:40

    We made localized modifications on top of kubekey 1.0.1. Why was that necessary?

    Out of the box, kubekey 1.0.1 deploys KubeSphere in four stages:

    1. Check that dependencies are installed
    2. Deploy the Kubernetes cluster
    3. Deploy the storage plugin (OpenEBS LocalPV by default)
    4. Deploy KubeSphere

    The corresponding task list in the source:

    // pkg/install/install.go (kubekey v1.0.1)
    createTasks := []manager.Task{
            {Task: preinstall.Precheck, ErrMsg: "Failed to precheck"},
            {Task: preinstall.DownloadBinaries, ErrMsg: "Failed to download kube binaries"},
            {Task: preinstall.InitOS, ErrMsg: "Failed to init OS"},
            {Task: docker.InstallerDocker, ErrMsg: "Failed to install docker"},
            {Task: preinstall.PrePullImages, ErrMsg: "Failed to pre-pull images"},
            {Task: etcd.GenerateEtcdCerts, ErrMsg: "Failed to generate etcd certs"},
            {Task: etcd.SyncEtcdCertsToMaster, ErrMsg: "Failed to sync etcd certs"},
            {Task: etcd.GenerateEtcdService, ErrMsg: "Failed to create etcd service"},
            {Task: etcd.SetupEtcdCluster, ErrMsg: "Failed to start etcd cluster"},
            {Task: etcd.RefreshEtcdConfig, ErrMsg: "Failed to refresh etcd configuration"},
            {Task: etcd.BackupEtcd, ErrMsg: "Failed to backup etcd data"},
            {Task: kubernetes.GetClusterStatus, ErrMsg: "Failed to get cluster status"},
            {Task: kubernetes.InstallKubeBinaries, ErrMsg: "Failed to install kube binaries"},
            {Task: kubernetes.InitKubernetesCluster, ErrMsg: "Failed to init kubernetes cluster"},
            {Task: network.DeployNetworkPlugin, ErrMsg: "Failed to deploy network plugin"},
            {Task: kubernetes.JoinNodesToCluster, ErrMsg: "Failed to join node"},
            {Task: addons.InstallAddons, ErrMsg: "Failed to deploy addons"},
            {Task: kubesphere.DeployLocalVolume, ErrMsg: "Failed to deploy localVolume"},
            {Task: kubesphere.DeployKubeSphere, ErrMsg: "Failed to deploy kubesphere"},
        }
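
    Each entry above is a manager.Task, which pairs a step function with the error message reported if that step fails. A minimal sketch of that model (the field names match the snippets in this post; treat the exact upstream definition as an assumption):

    // A sketch of kubekey 1.0.x's task model; not the verbatim upstream code.
    package manager

    // Manager carries the cluster spec, SSH runner, logger, and so on.
    type Manager struct{ /* ... */ }

    // Task pairs a deployment step with the message reported on failure.
    type Task struct {
        Task   func(mgr *Manager) error
        ErrMsg string
    }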
    

    1. Separating the deployment stages

    We modified pkg/install/install.go to strip out the storage and KubeSphere deployment steps, so this build of kubekey only deploys the Kubernetes cluster. We also added a few pre-install tasks for the offline scenario (a sketch of one follows the task list):

    // pkg/install/install.go after our modifications
    createTasks := []manager.Task{
            {Task: preinstall.CheckOfflineBinaries, ErrMsg: "Failed to find kube offline binaries"},
            //{Task: preinstall.DetectKernel, ErrMsg: "Failed to check kernel"},
            {Task: preinstall.InitYum, ErrMsg: "Failed to config yum"},
            {Task: preinstall.InitTime, ErrMsg: "Failed to config time"},
            {Task: preinstall.Precheck, ErrMsg: "Failed to precheck"},
            {Task: preinstall.InitOS, ErrMsg: "Failed to init OS"},
            {Task: docker.InstallerDocker, ErrMsg: "Failed to install docker"},
            {Task: docker.ConfigDocker, ErrMsg: "Failed to config docker"},
            {Task: preinstall.PrePullImages, ErrMsg: "Failed to pre-pull images"},
            {Task: etcd.GenerateEtcdCerts, ErrMsg: "Failed to generate etcd certs"},
            {Task: etcd.SyncEtcdCertsToMaster, ErrMsg: "Failed to sync etcd certs"},
            {Task: etcd.GenerateEtcdService, ErrMsg: "Failed to create etcd service"},
            {Task: etcd.SetupEtcdCluster, ErrMsg: "Failed to start etcd cluster"},
            {Task: etcd.RefreshEtcdConfig, ErrMsg: "Failed to refresh etcd configuration"},
            {Task: etcd.BackupEtcd, ErrMsg: "Failed to backup etcd data"},
            {Task: kubernetes.GetClusterStatus, ErrMsg: "Failed to get cluster status"},
            {Task: kubernetes.InstallKubeBinaries, ErrMsg: "Failed to install kube binaries"},
            {Task: kubernetes.InitKubernetesCluster, ErrMsg: "Failed to init kubernetes cluster"},
            {Task: network.DeployNetworkPlugin, ErrMsg: "Failed to deploy network plugin"},
            {Task: kubernetes.JoinNodesToCluster, ErrMsg: "Failed to join node"},
            {Task: addons.InstallAddons, ErrMsg: "Failed to deploy addons"},
            // rewrite the kubelet configuration
            //{Task: kubernetes.OverwriteKubeletConfig, ErrMsg: "Failed to overwrite config"},
            //{Task: kubesphere.DeployCephVolume, ErrMsg: "Failed to deploy cephVolume"},
            //{Task: kubesphere.DeployLocalVolume, ErrMsg: "Failed to deploy localVolume"},
            //{Task: kubesphere.DeployKubeSphere, ErrMsg: "Failed to deploy kubesphere"},
        }
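
    CheckOfflineBinaries, InitYum, and InitTime are tasks we added for offline installs. A minimal, self-contained sketch of the idea behind CheckOfflineBinaries (the directory layout and binary list here are assumptions for illustration, not our production code):

    // Fail fast if the pre-downloaded binaries are missing, instead of
    // falling back to a download that cannot succeed offline.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func checkOfflineBinaries(baseDir, version string) error {
        for _, name := range []string{"kubeadm", "kubelet", "kubectl", "helm"} {
            p := filepath.Join(baseDir, version, name)
            if _, err := os.Stat(p); err != nil {
                return fmt.Errorf("offline binary %s not found at %s: %w", name, p, err)
            }
        }
        return nil
    }

    func main() {
        if err := checkOfflineBinaries("/opt/kubekey/binaries", "v1.18.6"); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("all offline binaries present")
    }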
    
    

    2. Customizing the runtime configuration

    The runtime configuration here is /etc/docker/daemon.json. In practice we changed several settings: container log rotation, larger default ulimits, custom default address pools (to control the ranges Docker allocates for its networks), the systemd cgroup driver, plus templated data-root and registry options.

    {
      "log-opts": {
        "max-size": "500m",
        "max-file":"3"
      },
      "userland-proxy": false,
      "live-restore": true,
      "default-ulimits": {
        "nofile": {
          "Hard": 65535,
          "Name": "nofile",
          "Soft": 65535
        }
      },
      "default-address-pools": [
        {
          "base": "172.80.0.0/16",
          "size": 24
        },
        {
          "base": "172.90.0.0/16",
          "size": 24
        }
      ],
      "default-gateway": "",
      "default-gateway-v6": "",
      "default-runtime": "runc",
      "default-shm-size": "64M",
      {{- if .DataPath }}
      "data-root": "{{ .DataPath }}",
      {{- end}}
      {{- if .Mirrors }}
      "registry-mirrors": [{{ .Mirrors }}],
      {{- end}}
      {{- if .InsecureRegistries }}
      "insecure-registries": [{{ .InsecureRegistries }}],
      {{- end}}
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    

    This is implemented mainly by modifying the ConfigDocker function, which is registered as a task in ExecTasks:

    func ExecTasks(mgr *manager.Manager) error {
        createTasks := []manager.Task{
    ...
            {Task: docker.ConfigDocker, ErrMsg: "Failed to config docker"},
        }
    ...
    }
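
    ConfigDocker renders the daemon.json template shown above and writes the result to each node. A minimal sketch of the rendering step with Go's text/template (the struct fields mirror the template variables; the rest is illustrative, not kubekey's actual code):

    package main

    import (
        "os"
        "text/template"
    )

    // Fields mirror the variables used in the daemon.json template above.
    type dockerConfig struct {
        DataPath           string
        Mirrors            string // pre-quoted list, e.g. "http://harbor.wl.io" with quotes
        InsecureRegistries string // pre-quoted list, e.g. "harbor.wl.io" with quotes
    }

    const daemonTmpl = `{
      {{- if .DataPath }}
      "data-root": "{{ .DataPath }}",
      {{- end }}
      {{- if .Mirrors }}
      "registry-mirrors": [{{ .Mirrors }}],
      {{- end }}
      {{- if .InsecureRegistries }}
      "insecure-registries": [{{ .InsecureRegistries }}],
      {{- end }}
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    `

    func main() {
        t := template.Must(template.New("daemon.json").Parse(daemonTmpl))
        cfg := dockerConfig{
            DataPath:           "/data",
            Mirrors:            `"http://harbor.wl.io"`,
            InsecureRegistries: `"harbor.wl.io"`,
        }
        // The real task writes this to /etc/docker/daemon.json on every node
        // and restarts docker; here it just goes to stdout.
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }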
    
    

    3. Adding a private domain to image references

    KubeSphere uses image tags like the following (partial list):

    kubesphere/kube-apiserver:v1.20.4
    kubesphere/kube-scheduler:v1.20.4
    kubesphere/kube-proxy:v1.20.4
    kubesphere/kube-controller-manager:v1.20.4
    kubesphere/kube-apiserver:v1.19.8
    kubesphere/kube-scheduler:v1.19.8
    kubesphere/kube-proxy:v1.19.8
    kubesphere/kube-controller-manager:v1.19.8
    kubesphere/kube-apiserver:v1.19.9
    kubesphere/kube-scheduler:v1.19.9
    kubesphere/kube-proxy:v1.19.9
    kubesphere/kube-controller-manager:v1.19.9
    kubesphere/kube-apiserver:v1.18.6
    kubesphere/kube-scheduler:v1.18.6
    

    Because the production environment is offline, these images cannot be pulled from public registries there, so we prefixed each image with a custom domain and serve them from a private registry (Harbor).

    The rewritten image tags look like this (a sketch of the rewrite follows the list):

    harbor.wl.io/kubesphere/kube-apiserver:v1.20.4
    harbor.wl.io/kubesphere/kube-scheduler:v1.20.4
    harbor.wl.io/kubesphere/kube-proxy:v1.20.4
    harbor.wl.io/kubesphere/kube-controller-manager:v1.20.4
    harbor.wl.io/kubesphere/kube-apiserver:v1.19.8
    harbor.wl.io/kubesphere/kube-scheduler:v1.19.8
    harbor.wl.io/kubesphere/kube-proxy:v1.19.8
    harbor.wl.io/kubesphere/kube-controller-manager:v1.19.8
    harbor.wl.io/kubesphere/kube-apiserver:v1.19.9
    harbor.wl.io/kubesphere/kube-scheduler:v1.19.9
    harbor.wl.io/kubesphere/kube-proxy:v1.19.9
    harbor.wl.io/kubesphere/kube-controller-manager:v1.19.9
    harbor.wl.io/kubesphere/kube-apiserver:v1.18.6
    harbor.wl.io/kubesphere/kube-scheduler:v1.18.6
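
    The rewrite itself is mechanical: prepend the private registry domain to every image reference. A minimal sketch (privateRegistry corresponds to the registry.privateRegistry field in the config sample below):

    package main

    import "fmt"

    // withPrivateRegistry prefixes an image reference with the private
    // registry domain so offline nodes pull from Harbor instead of the
    // public registries.
    func withPrivateRegistry(privateRegistry, image string) string {
        if privateRegistry == "" {
            return image
        }
        return privateRegistry + "/" + image
    }

    func main() {
        // kubesphere/kube-apiserver:v1.20.4 -> harbor.wl.io/kubesphere/kube-apiserver:v1.20.4
        fmt.Println(withPrivateRegistry("harbor.wl.io", "kubesphere/kube-apiserver:v1.20.4"))
    }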
    

    During deployment, kubekey also adds a hosts entry for the private domain to /etc/hosts on every node, so harbor.wl.io resolves without relying on DNS.
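
    A minimal sketch of that step (in our build the address and domain come from the externalHarbor section of the config below; the function here is illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    // addHostEntry appends "address domain" to the given hosts file,
    // e.g. "192.168.1.114 harbor.wl.io" to /etc/hosts.
    func addHostEntry(path, address, domain string) error {
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s %s\n", address, domain)
        return err
    }

    func main() {
        if err := addHostEntry("/etc/hosts", "192.168.1.114", "harbor.wl.io"); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
    }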

    A sample configuration for our localized kubekey:

    apiVersion: kubekey.kubesphere.io/v1alpha1
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: node1, address: 192.168.1.11, internalAddress: 192.168.1.11, user: root, password: 123456}
      - {name: node2, address: 192.168.1.12, internalAddress: 192.168.1.12, user: root, password: 123456}
      - {name: node3, address: 192.168.1.13, internalAddress: 192.168.1.13, user: root, password: 123456}
      roleGroups:
        etcd:
        - node1
        - node2
        - node3
        master:
        - node1
        - node2
        - node3
        worker:
        - node1
        - node2
        - node3
      controlPlaneEndpoint:
        domain: lb.kubesphere.local
        address: "192.168.1.111"
        port: "6443"
      externalHarbor:
        domain: harbor.wl.io
        address: 192.168.1.114
        user: admin
        password: 123456
      kubernetes:
        version: v1.18.6
        imageRepo: kubesphere
        clusterName: cluster.local
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
      docker:
        DataPath: /data
      storage:
        type: local
        ceph:
          id: 90140a86-58c9-41ce-8825-c4123bc52edd
          monitors:
            - 192.168.1.69:6789
            - 192.168.1.70:6789
            - 192.168.1.71:6789
          userID: kubernetes
          userKey: AQB1dFFgVJSnBhAAtJOOKOVU78aWN2iudY8QDw==
          adminKey: AQDlcFFges49BxAA0xAYat3tzyMHRZ4LNJqqIw==
          fsName: cephfs-paas
          pools:
            rbdDelete: rbd-paas-pool
            rbdRetain: rbd-paas-pool-retain
            fs: cephfs-paas-pool
      registry:
        registryMirrors:
        - http://harbor.wl.io
        insecureRegistries:
        - harbor.wl.io
        privateRegistry: harbor.wl.io
      addons: []
    
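
    For reference, upstream kubekey 1.x consumes a file like this via ./kk create cluster -f config-sample.yaml; the externalHarbor, docker, and storage sections above are additions from our localization rather than part of the upstream v1alpha1 spec.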
