All-in-One Installation of KubeSphere on Linux


Author: TEYmL | Published 2021-05-19 17:14


    System Requirements

    • Ubuntu 16.04 or later
    • 2 CPUs
    • 4 GB of memory
    • 40 GB of disk space
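
    A quick sketch (standard Linux tools, not part of the original guide) to confirm the host meets these minimums; note that free -g rounds down, so a 4 GB machine may report 3:

```shell
# Check CPU count, memory, free disk, and OS against the requirements above.
nproc                                   # expect >= 2
free -g | awk '/^Mem:/ {print $2}'      # expect >= 3 (4 GiB rounds down)
df -h --output=avail /                  # expect >= 40G available on /
head -n 1 /etc/os-release               # distribution name
```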

    Prerequisites

    Disable Swap

    Kubernetes requires swap to be off. Run the following command to disable it for the current session:

    swapoff -a
    

    Then edit /etc/fstab and comment out the swap line so it stays disabled after a reboot:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    # / was on /dev/sda1 during installation
    UUID=df90a489-532a-4222-9535-fec53dcbd12b /               ext4    errors=remount-ro 0       1
    #/swapfile                                 none            swap    sw              0       0
    
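    The manual edit can also be scripted. Below is a minimal sketch (not from the original article) that comments out swap entries with sed; it works on a copy of /etc/fstab so the change can be reviewed first, after which the same sed command can be run against /etc/fstab itself as root:

```shell
# Work on a copy so the change can be inspected before touching /etc/fstab.
cp /etc/fstab /tmp/fstab.preview
# Prefix every uncommented line whose <type> field is "swap" with '#'.
sed -i -E '/^[^#].*[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.preview
# Show what would change; diff exits non-zero when lines differ, hence || true.
diff -u /etc/fstab /tmp/fstab.preview || true
```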

    Install Docker

    Run the following command:

    curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
    

    Output:

    root@ubuntu:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
    # Executing docker install script, commit: 7cae5f8b0decc17d6571f9f52eb840fbc13b2737
    + sh -c 'apt-get update -qq >/dev/null'
    + sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
    + sh -c 'curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null'
    Warning: apt-key output should not be parsed (stdout is not a terminal)
    + sh -c 'echo "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic stable" > /etc/apt/sources.list.d/docker.list'
    + sh -c 'apt-get update -qq >/dev/null'
    + '[' -n '' ']'
    + sh -c 'apt-get install -y -qq --no-install-recommends docker-ce >/dev/null'
    + '[' -n 1 ']'
    + sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce-rootless-extras >/dev/null'
    + sh -c 'docker version'
    Client: Docker Engine - Community
     Version:           20.10.6
     API version:       1.41
     Go version:        go1.13.15
     Git commit:        370c289
     Built:             Fri Apr  9 22:46:01 2021
     OS/Arch:           linux/amd64
     Context:           default
     Experimental:      true
    
    Server: Docker Engine - Community
     Engine:
      Version:          20.10.6
      API version:      1.41 (minimum version 1.12)
      Go version:       go1.13.15
      Git commit:       8728dd2
      Built:            Fri Apr  9 22:44:13 2021
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          1.4.4
      GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
     runc:
      Version:          1.0.0-rc93
      GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0
    
    ================================================================================
    
    To run Docker as a non-privileged user, consider setting up the
    Docker daemon in rootless mode for your user:
    
        dockerd-rootless-setuptool.sh install
    
    Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
    
    
    To run the Docker daemon as a fully privileged service, but granting non-root
    users access, refer to https://docs.docker.com/go/daemon-access/
    
    WARNING: Access to the remote API on a privileged Docker daemon is equivalent
             to root access on the host. Refer to the 'Docker daemon attack surface'
             documentation for details: https://docs.docker.com/go/attack-surface/
    
    ================================================================================
    
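    Note that the kubeadm preflight later in this install warns that Docker is using the "cgroupfs" cgroup driver while Kubernetes recommends "systemd". The install succeeds regardless, but the driver can be switched with the daemon.json setting documented by Kubernetes (a sketch; restart Docker afterwards to apply it):

```shell
# Switch Docker to the systemd cgroup driver recommended by Kubernetes.
# Note: this overwrites /etc/docker/daemon.json; merge by hand if it
# already contains other settings.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Apply (requires Docker to be installed and running):
# systemctl restart docker
```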

    Install Helper Packages

    KubeKey's preflight check requires socat and conntrack. Install them with:

    apt install socat conntrack
    
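    socat and conntrack are among the binaries KubeKey's preflight check looks for (see the check table in the installation output below). A small loop to confirm they resolve on PATH:

```shell
# Confirm the helper binaries are on PATH before starting the installer.
for bin in socat conntrack; do
  command -v "$bin" >/dev/null \
    && echo "$bin: ok" \
    || echo "$bin: missing (apt install $bin)"
done
```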

    Download the Installer (KubeKey)

    Run the following commands (KKZONE=cn routes the download through mirrors reachable from mainland China):

    export KKZONE=cn
    curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.0 sh -
    chmod +x kk
    

    Run the Installation

    Run the following command and type "yes" when prompted:

    ./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.0
    

    Output:

    +--------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
    | name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker  | nfs client | ceph client | glusterfs client | time         |
    +--------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
    | ubuntu | y    | y    | y       | y        | y     |       | y         | 20.10.6 |            |             |                  | CST 16:38:50 |
    +--------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
    
    This is a simple check of your environment.
    Before installation, you should ensure that your machines meet all requirements specified at
    https://github.com/kubesphere/kubekey#requirements-and-recommendations
    
    Continue this installation? [yes/no]: Continue this installation? [yes/no]: Continue this installation? [yes/no]: yes
    INFO[16:39:04 CST] Downloading Installation Files               
    INFO[16:39:04 CST] Downloading kubeadm ...                      
    INFO[16:39:41 CST] Downloading kubelet ...                      
    INFO[16:41:33 CST] Downloading kubectl ...                      
    INFO[16:42:12 CST] Downloading helm ...                         
    INFO[16:42:51 CST] Downloading kubecni ...                      
    INFO[16:43:26 CST] Configuring operating system ...             
    [ubuntu 10.203.1.100] MSG:
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    vm.max_map_count = 262144
    vm.swappiness = 1
    fs.inotify.max_user_instances = 524288
    no crontab for root
    INFO[16:43:29 CST] Installing docker ...                        
    INFO[16:43:31 CST] Start to download images on all nodes        
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.4
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.4
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.4
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
    [ubuntu] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
    INFO[16:44:53 CST] Generating etcd certs                        
    INFO[16:44:56 CST] Synchronizing etcd certs                     
    INFO[16:44:56 CST] Creating etcd service                        
    [ubuntu 10.203.1.100] MSG:
    etcd will be installed
    INFO[16:44:59 CST] Starting etcd cluster                        
    [ubuntu 10.203.1.100] MSG:
    Configuration file will be created
    INFO[16:45:00 CST] Refreshing etcd configuration                
    [ubuntu 10.203.1.100] MSG:
    Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
    Waiting for etcd to start
    INFO[16:45:07 CST] Backup etcd data regularly                   
    INFO[16:45:14 CST] Get cluster status                           
    [ubuntu 10.203.1.100] MSG:
    Cluster will be created.
    INFO[16:45:15 CST] Installing kube binaries                     
    Push /root/kubekey/v1.20.4/amd64/kubeadm to 10.203.1.100:/tmp/kubekey/kubeadm   Done
    Push /root/kubekey/v1.20.4/amd64/kubelet to 10.203.1.100:/tmp/kubekey/kubelet   Done
    Push /root/kubekey/v1.20.4/amd64/kubectl to 10.203.1.100:/tmp/kubekey/kubectl   Done
    Push /root/kubekey/v1.20.4/amd64/helm to 10.203.1.100:/tmp/kubekey/helm   Done
    Push /root/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.203.1.100:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
    INFO[16:45:19 CST] Initializing kubernetes cluster              
    [ubuntu 10.203.1.100] MSG:
    W0519 16:45:20.739592   13819 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
    [init] Using Kubernetes version: v1.20.4
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
            [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost ubuntu ubuntu.cluster.local] and IPs [10.233.0.1 10.203.1.100 127.0.0.1]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] External etcd mode: Skipping etcd/ca certificate authority generation
    [certs] External etcd mode: Skipping etcd/server certificate generation
    [certs] External etcd mode: Skipping etcd/peer certificate generation
    [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
    [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [apiclient] All control plane components are healthy after 79.504468 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node ubuntu as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
    [mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: odm7r2.uuer24st6ee69kpk
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join lb.kubesphere.local:6443 --token odm7r2.uuer24st6ee69kpk \
        --discovery-token-ca-cert-hash sha256:13af9dcfbe91a8945889de13802e95d706f8dac0f47f89caff6b0ae04321648f \
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join lb.kubesphere.local:6443 --token odm7r2.uuer24st6ee69kpk \
        --discovery-token-ca-cert-hash sha256:13af9dcfbe91a8945889de13802e95d706f8dac0f47f89caff6b0ae04321648f
    [ubuntu 10.203.1.100] MSG:
    node/ubuntu untainted
    [ubuntu 10.203.1.100] MSG:
    node/ubuntu labeled
    [ubuntu 10.203.1.100] MSG:
    service "kube-dns" deleted
    [ubuntu 10.203.1.100] MSG:
    service/coredns created
    [ubuntu 10.203.1.100] MSG:
    serviceaccount/nodelocaldns created
    daemonset.apps/nodelocaldns created
    [ubuntu 10.203.1.100] MSG:
    configmap/nodelocaldns created
    [ubuntu 10.203.1.100] MSG:
    I0519 16:47:13.795908   16157 version.go:254] remote version is much newer: v1.21.1; falling back to: stable-1.20
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    6661834990af1a4dbb23b41ff6745dd13b3a7c3982eb97541a714f35a3640b8a
    [ubuntu 10.203.1.100] MSG:
    secret/kubeadm-certs patched
    [ubuntu 10.203.1.100] MSG:
    secret/kubeadm-certs patched
    [ubuntu 10.203.1.100] MSG:
    secret/kubeadm-certs patched
    [ubuntu 10.203.1.100] MSG:
    kubeadm join lb.kubesphere.local:6443 --token k6yfo5.1kpdj20pibqyxs17     --discovery-token-ca-cert-hash sha256:13af9dcfbe91a8945889de13802e95d706f8dac0f47f89caff6b0ae04321648f
    [ubuntu 10.203.1.100] MSG:
    ubuntu   v1.20.4   [map[address:10.203.1.100 type:InternalIP] map[address:ubuntu type:Hostname]]
    INFO[16:47:16 CST] Joining nodes to cluster                     
    INFO[16:47:16 CST] Deploying network plugin ...                 
    [ubuntu 10.203.1.100] MSG:
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node created
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    [ubuntu 10.203.1.100] MSG:
    storageclass.storage.k8s.io/local created
    serviceaccount/openebs-maya-operator created
    Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
    clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
    Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
    clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
    deployment.apps/openebs-localpv-provisioner created
    INFO[16:47:20 CST] Deploying KubeSphere ...                     
    v3.1.0
    [ubuntu 10.203.1.100] MSG:
    namespace/kubesphere-system created
    namespace/kubesphere-monitoring-system created
    [ubuntu 10.203.1.100] MSG:
    secret/kube-etcd-client-certs created
    [ubuntu 10.203.1.100] MSG:
    namespace/kubesphere-system unchanged
    serviceaccount/ks-installer unchanged
    Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
    customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
    clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
    clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
    deployment.apps/ks-installer unchanged
    clusterconfiguration.installer.kubesphere.io/ks-installer created
    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################
    
    Console: http://10.203.1.100:30880
    Account: admin
    Password: P@88w0rd
    
    NOTES:
      1. After you log into the console, please check the
         monitoring status of service components in
         "Cluster Management". If any service is not
         ready, please wait patiently until all components 
         are up and running.
      2. Please change the default password after login.
    
    #####################################################
    https://kubesphere.io             2021-05-19 16:54:15
    #####################################################
    INFO[16:54:24 CST] Installation is complete.
    
    Please check the result using the command:
    
           kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
    

    Source: https://www.haomeiwen.com/subject/vogpjltx.html