Installing a Rancher v2.2.8 HA Cluster on CentOS 7.6

Author: 柚子net | Published 2019-08-25 15:06

    • GCP instances are used as the example hosts
    [rke@rancher-1 ~]$ cat /etc/redhat-release
    CentOS Linux release 7.6.1810 (Core)
    
    Username  Hostname   Internal IP  Public IP  SSH port
    rke       rancher-1  10.128.0.2   xxx        22
    rke       rancher-2  10.128.0.3   xxx        22
    rke       rancher-3  10.128.0.4   xxx        22
    1. Temporarily disable SELinux
    sudo setenforce 0
    
    2. Permanently disable SELinux
    sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
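
    A quick check that the change is in effect for the current boot:
    getenforce   # Permissive after setenforce 0; Disabled after a reboot with the new config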
    
    3. Install Docker
    sudo yum install -y yum-utils wget device-mapper-persistent-data lvm2
    
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    sudo yum install -y docker-ce-18.09.8 docker-ce-cli-18.09.8 containerd.io
    
    sudo usermod -aG docker ${USER}
    
    sudo systemctl start docker
    
    sudo systemctl enable docker
    
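    The usermod group change only applies to new login sessions, so log out and back in, then sanity-check the install:
    docker version
    docker run --rm hello-world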
    
    4. Download RKE and pull the Rancher images
    wget https://github.com/rancher/rke/releases/download/v0.2.8/rke_linux-amd64
    chmod +x rke_linux-amd64 && sudo cp rke_linux-amd64 /usr/local/bin/rke
    wget https://github.com/rancher/rancher/releases/download/v2.2.8/rancher-images.txt
    wget https://github.com/rancher/rancher/releases/download/v2.2.8/rancher-save-images.sh
    chmod 755 rancher-save-images.sh
    # comment out docker save so the script only pulls the images instead of archiving them
    sed -i 's/docker save/#docker save/g' rancher-save-images.sh
    ./rancher-save-images.sh
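
    With docker save commented out, the script only docker-pulls every image listed in rancher-images.txt; spot-check the result afterwards:
    docker images | grep rancher | head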
    
    5. Configure hosts and verify passwordless SSH to each host (when installing from rancher-1, this includes SSH to itself); a key-distribution sketch follows the test below.
    10.128.0.2  rancher-1
    10.128.0.3  rancher-2
    10.128.0.4  rancher-3
    
    ssh rancher-1
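
    If the keys are not distributed yet, a minimal sketch, assuming the rke user already exists on all three hosts:
    ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa   # run once as rke, no passphrase
    for h in rancher-1 rancher-2 rancher-3; do
      ssh-copy-id rke@$h                               # push the public key to each node
    done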
    
    6. Download helm

      wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
      tar zxf helm-v2.14.3-linux-amd64.tar.gz
      chmod +x ./linux-amd64/helm && sudo cp ./linux-amd64/helm /usr/local/bin/
      
    7. Download kubectl

      # note: storage.googleapis.com is blocked in mainland China, so a proxy may be needed
      # https://storage.googleapis.com/kubernetes-release/release/v1.14.5/bin/linux/amd64/kubectl
      curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
      chmod +x ./kubectl && sudo cp ./kubectl /usr/local/bin/
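
      Both binaries should now be on the PATH; a quick client-side version check:
      helm version --client
      kubectl version --client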
      
    8. Create the RKE config file (rancher-cluster.yml)

    nodes:
    - address: 10.128.0.2
      #internal_address: 104.198.227.73
      user: rke
      role: [controlplane, etcd, worker]
      ssh_key_path: /home/rke/.ssh/id_rsa
    - address: 10.128.0.3
      #internal_address: 35.232.38.212
      user: rke
      role: [controlplane, etcd, worker]
      ssh_key_path: /home/rke/.ssh/id_rsa
    - address: 10.128.0.4
      #internal_address: 35.232.158.150
      user: rke
      role: [controlplane, etcd, worker]
      ssh_key_path: /home/rke/.ssh/id_rsa
    
    services:
      etcd:
        snapshot: true
        creation: 6h
        retention: 24h
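
    Instead of writing rancher-cluster.yml by hand, rke can also generate it interactively:
    rke config --name rancher-cluster.yml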
    
    9. Point a DNS name at each node (if the nodes have public IPs, or you only plan to access Rancher from the internal network, this is enough; otherwise configure an nginx proxy as described in the official docs). For example:
    rancher.xxx.top A 104.177.227.73
    rancher.xxx.top A 35.111.38.212
    rancher.xxx.top A 35.111.158.150
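
    To check that the name round-robins across all three nodes (dig is in the bind-utils package):
    dig +short rancher.xxx.top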
    
    10. Install the cluster with RKE
    rke up --config ./rancher-cluster.yml
    
    [rke@rancher-1 ~]$ rke up --config ./rancher-cluster.yml
    INFO[0000] Initiating Kubernetes cluster
    INFO[0000] [certificates] Generating Kubernetes API server certificates
    INFO[0000] [certificates] Generating admin certificates and kubeconfig
    INFO[0000] [certificates] Generating etcd-10.128.0.2 certificate and key
    INFO[0000] [certificates] Generating etcd-10.128.0.3 certificate and key
    INFO[0000] [certificates] Generating etcd-10.128.0.4 certificate and key
    INFO[0000] Successfully Deployed state file at [./rancher-cluster.rkestate]
    INFO[0000] Building Kubernetes cluster
    INFO[0000] [dialer] Setup tunnel for host [10.128.0.2]
    INFO[0000] [dialer] Setup tunnel for host [10.128.0.4]
    INFO[0000] [dialer] Setup tunnel for host [10.128.0.3]
    INFO[0000] [network] Deploying port listener containers
    INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [10.128.0.4]
    INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [10.128.0.3]
    INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [10.128.0.2]
    INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [10.128.0.2]
    INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [10.128.0.3]
    INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [10.128.0.4]
    INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [10.128.0.4]
    INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [10.128.0.2]
    INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [10.128.0.3]
    INFO[0002] [network] Port listener containers deployed successfully
    INFO[0002] [network] Running etcd <-> etcd port checks
    INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
    INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
    INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
    INFO[0002] [network] Running control plane -> etcd port checks
    INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
    INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
    INFO[0002] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
    INFO[0002] [network] Running control plane -> worker port checks
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
    INFO[0003] [network] Running workers -> control plane port checks
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.2]
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.4]
    INFO[0003] [network] Successfully started [rke-port-checker] container on host [10.128.0.3]
    INFO[0003] [network] Checking KubeAPI port Control Plane hosts
    INFO[0003] [network] Removing port listener containers
    INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [10.128.0.4]
    INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [10.128.0.2]
    INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [10.128.0.3]
    INFO[0004] [remove/rke-cp-port-listener] Successfully removed container on host [10.128.0.4]
    INFO[0004] [remove/rke-cp-port-listener] Successfully removed container on host [10.128.0.2]
    INFO[0004] [remove/rke-cp-port-listener] Successfully removed container on host [10.128.0.3]
    INFO[0004] [remove/rke-worker-port-listener] Successfully removed container on host [10.128.0.4]
    INFO[0004] [remove/rke-worker-port-listener] Successfully removed container on host [10.128.0.2]
    INFO[0004] [remove/rke-worker-port-listener] Successfully removed container on host [10.128.0.3]
    INFO[0004] [network] Port listener containers removed successfully
    INFO[0004] [certificates] Deploying kubernetes certificates to Cluster nodes
    INFO[0009] [reconcile] Rebuilding and updating local kube config
    INFO[0009] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
    INFO[0009] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
    INFO[0009] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
    INFO[0009] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
    INFO[0009] [reconcile] Reconciling cluster state
    INFO[0009] [reconcile] This is newly generated cluster
    INFO[0009] Pre-pulling kubernetes images
    INFO[0009] Kubernetes images pulled successfully
    INFO[0009] [etcd] Building up etcd plane..
    INFO[0010] Waiting for [etcd] container to exit on host [10.128.0.2]
    INFO[0010] [etcd] Successfully updated [etcd] container on host [10.128.0.2]
    INFO[0010] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.128.0.2]
    INFO[0010] [remove/etcd-rolling-snapshots] Successfully removed container on host [10.128.0.2]
    INFO[0010] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.128.0.2]
    INFO[0016] [certificates] Successfully started [rke-bundle-cert] container on host [10.128.0.2]
    INFO[0016] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.2]
    INFO[0016] Container [rke-bundle-cert] is still running on host [10.128.0.2]
    INFO[0017] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.2]
    INFO[0017] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.128.0.2]
    INFO[0017] [etcd] Successfully started [rke-log-linker] container on host [10.128.0.2]
    INFO[0017] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
    INFO[0018] Waiting for [etcd] container to exit on host [10.128.0.3]
    INFO[0018] [etcd] Successfully updated [etcd] container on host [10.128.0.3]
    INFO[0018] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.128.0.3]
    INFO[0018] [remove/etcd-rolling-snapshots] Successfully removed container on host [10.128.0.3]
    INFO[0018] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.128.0.3]
    INFO[0024] [certificates] Successfully started [rke-bundle-cert] container on host [10.128.0.3]
    INFO[0024] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.3]
    INFO[0024] Container [rke-bundle-cert] is still running on host [10.128.0.3]
    INFO[0025] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.3]
    INFO[0025] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.128.0.3]
    INFO[0025] [etcd] Successfully started [rke-log-linker] container on host [10.128.0.3]
    INFO[0025] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
    INFO[0026] Waiting for [etcd] container to exit on host [10.128.0.4]
    INFO[0026] [etcd] Successfully updated [etcd] container on host [10.128.0.4]
    INFO[0026] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.128.0.4]
    INFO[0026] [remove/etcd-rolling-snapshots] Successfully removed container on host [10.128.0.4]
    INFO[0026] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.128.0.4]
    INFO[0032] [certificates] Successfully started [rke-bundle-cert] container on host [10.128.0.4]
    INFO[0032] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.4]
    INFO[0032] Container [rke-bundle-cert] is still running on host [10.128.0.4]
    INFO[0033] Waiting for [rke-bundle-cert] container to exit on host [10.128.0.4]
    INFO[0033] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.128.0.4]
    INFO[0033] [etcd] Successfully started [rke-log-linker] container on host [10.128.0.4]
    INFO[0033] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
    INFO[0033] [etcd] Successfully started etcd plane.. Checking etcd cluster health
    INFO[0034] [controlplane] Building up Controller Plane..
    INFO[0034] [controlplane] Successfully started [kube-apiserver] container on host [10.128.0.3]
    INFO[0034] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.128.0.3]
    INFO[0034] [controlplane] Successfully started [kube-apiserver] container on host [10.128.0.4]
    INFO[0034] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.128.0.4]
    INFO[0034] [controlplane] Successfully started [kube-apiserver] container on host [10.128.0.2]
    INFO[0034] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.128.0.2]
    INFO[0044] [healthcheck] service [kube-apiserver] on host [10.128.0.3] is healthy
    INFO[0044] [healthcheck] service [kube-apiserver] on host [10.128.0.4] is healthy
    INFO[0044] [healthcheck] service [kube-apiserver] on host [10.128.0.2] is healthy
    INFO[0045] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.4]
    INFO[0045] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.2]
    INFO[0045] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.3]
    INFO[0045] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
    INFO[0045] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
    INFO[0045] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
    INFO[0045] [controlplane] Successfully started [kube-controller-manager] container on host [10.128.0.2]
    INFO[0045] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.128.0.2]
    INFO[0045] [controlplane] Successfully started [kube-controller-manager] container on host [10.128.0.4]
    INFO[0045] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.128.0.4]
    INFO[0045] [controlplane] Successfully started [kube-controller-manager] container on host [10.128.0.3]
    INFO[0045] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.128.0.3]
    INFO[0051] [healthcheck] service [kube-controller-manager] on host [10.128.0.2] is healthy
    INFO[0051] [healthcheck] service [kube-controller-manager] on host [10.128.0.4] is healthy
    INFO[0051] [healthcheck] service [kube-controller-manager] on host [10.128.0.3] is healthy
    INFO[0051] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.2]
    INFO[0051] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.4]
    INFO[0051] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.3]
    INFO[0051] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
    INFO[0051] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
    INFO[0052] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
    INFO[0052] [controlplane] Successfully started [kube-scheduler] container on host [10.128.0.2]
    INFO[0052] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.128.0.2]
    INFO[0052] [controlplane] Successfully started [kube-scheduler] container on host [10.128.0.4]
    INFO[0052] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.128.0.4]
    INFO[0052] [controlplane] Successfully started [kube-scheduler] container on host [10.128.0.3]
    INFO[0052] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.128.0.3]
    INFO[0057] [healthcheck] service [kube-scheduler] on host [10.128.0.2] is healthy
    INFO[0057] [healthcheck] service [kube-scheduler] on host [10.128.0.4] is healthy
    INFO[0057] [healthcheck] service [kube-scheduler] on host [10.128.0.3] is healthy
    INFO[0057] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.2]
    INFO[0058] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.4]
    INFO[0058] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
    INFO[0058] [controlplane] Successfully started [rke-log-linker] container on host [10.128.0.3]
    INFO[0058] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
    INFO[0058] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
    INFO[0058] [controlplane] Successfully started Controller Plane..
    INFO[0058] [authz] Creating rke-job-deployer ServiceAccount
    INFO[0058] [authz] rke-job-deployer ServiceAccount created successfully
    INFO[0058] [authz] Creating system:node ClusterRoleBinding
    INFO[0058] [authz] system:node ClusterRoleBinding created successfully
    INFO[0058] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
    INFO[0058] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
    INFO[0058] Successfully Deployed state file at [./rancher-cluster.rkestate]
    INFO[0058] [state] Saving full cluster state to Kubernetes
    INFO[0058] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state
    INFO[0058] [worker] Building up Worker Plane..
    INFO[0058] [sidekick] Sidekick container already created on host [10.128.0.3]
    INFO[0058] [sidekick] Sidekick container already created on host [10.128.0.2]
    INFO[0058] [sidekick] Sidekick container already created on host [10.128.0.4]
    INFO[0058] [worker] Successfully started [kubelet] container on host [10.128.0.2]
    INFO[0058] [healthcheck] Start Healthcheck on service [kubelet] on host [10.128.0.2]
    INFO[0058] [worker] Successfully started [kubelet] container on host [10.128.0.4]
    INFO[0058] [healthcheck] Start Healthcheck on service [kubelet] on host [10.128.0.4]
    INFO[0058] [worker] Successfully started [kubelet] container on host [10.128.0.3]
    INFO[0058] [healthcheck] Start Healthcheck on service [kubelet] on host [10.128.0.3]
    INFO[0064] [healthcheck] service [kubelet] on host [10.128.0.2] is healthy
    INFO[0064] [healthcheck] service [kubelet] on host [10.128.0.4] is healthy
    INFO[0064] [healthcheck] service [kubelet] on host [10.128.0.3] is healthy
    INFO[0064] [worker] Successfully started [rke-log-linker] container on host [10.128.0.2]
    INFO[0064] [worker] Successfully started [rke-log-linker] container on host [10.128.0.4]
    INFO[0064] [worker] Successfully started [rke-log-linker] container on host [10.128.0.3]
    INFO[0064] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
    INFO[0064] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
    INFO[0064] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
    INFO[0064] [worker] Successfully started [kube-proxy] container on host [10.128.0.2]
    INFO[0064] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.128.0.2]
    INFO[0065] [worker] Successfully started [kube-proxy] container on host [10.128.0.3]
    INFO[0065] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.128.0.3]
    INFO[0065] [worker] Successfully started [kube-proxy] container on host [10.128.0.4]
    INFO[0065] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.128.0.4]
    INFO[0065] [healthcheck] service [kube-proxy] on host [10.128.0.3] is healthy
    INFO[0065] [healthcheck] service [kube-proxy] on host [10.128.0.4] is healthy
    INFO[0065] [worker] Successfully started [rke-log-linker] container on host [10.128.0.4]
    INFO[0065] [worker] Successfully started [rke-log-linker] container on host [10.128.0.3]
    INFO[0065] [remove/rke-log-linker] Successfully removed container on host [10.128.0.4]
    INFO[0065] [remove/rke-log-linker] Successfully removed container on host [10.128.0.3]
    INFO[0070] [healthcheck] service [kube-proxy] on host [10.128.0.2] is healthy
    INFO[0070] [worker] Successfully started [rke-log-linker] container on host [10.128.0.2]
    INFO[0071] [remove/rke-log-linker] Successfully removed container on host [10.128.0.2]
    INFO[0071] [worker] Successfully started Worker Plane..
    INFO[0071] [cleanup] Successfully started [rke-log-cleaner] container on host [10.128.0.2]
    INFO[0071] [cleanup] Successfully started [rke-log-cleaner] container on host [10.128.0.4]
    INFO[0071] [cleanup] Successfully started [rke-log-cleaner] container on host [10.128.0.3]
    INFO[0071] [remove/rke-log-cleaner] Successfully removed container on host [10.128.0.2]
    INFO[0071] [remove/rke-log-cleaner] Successfully removed container on host [10.128.0.4]
    INFO[0071] [remove/rke-log-cleaner] Successfully removed container on host [10.128.0.3]
    INFO[0071] [sync] Syncing nodes Labels and Taints
    INFO[0071] [sync] Successfully synced nodes Labels and Taints
    INFO[0071] [network] Setting up network plugin: canal
    INFO[0071] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
    INFO[0071] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
    INFO[0071] [addons] Executing deploy job rke-network-plugin
    INFO[0076] [addons] Setting up coredns
    INFO[0076] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
    INFO[0076] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
    INFO[0076] [addons] Executing deploy job rke-coredns-addon
    INFO[0081] [addons] CoreDNS deployed successfully..
    INFO[0081] [dns] DNS provider coredns deployed successfully
    INFO[0081] [addons] Setting up Metrics Server
    INFO[0081] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
    INFO[0082] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
    INFO[0082] [addons] Executing deploy job rke-metrics-addon
    INFO[0087] [addons] Metrics Server deployed successfully
    INFO[0087] [ingress] Setting up nginx ingress controller
    INFO[0087] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
    INFO[0087] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
    INFO[0087] [addons] Executing deploy job rke-ingress-controller
    INFO[0092] [ingress] ingress controller nginx deployed successfully
    INFO[0092] [addons] Setting up user addons
    INFO[0092] [addons] no user addons defined
    INFO[0092] Finished building Kubernetes cluster successfully
    
    11. Test the newly installed cluster
        export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
        kubectl get nodes
    
    NAME         STATUS   ROLES                      AGE   VERSION
    10.128.0.2   Ready    controlplane,etcd,worker   64s   v1.14.6
    10.128.0.3   Ready    controlplane,etcd,worker   64s   v1.14.6
    10.128.0.4   Ready    controlplane,etcd,worker   64s   v1.14.6
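
    Also confirm that every system pod is Running or Completed before continuing:
    kubectl get pods --all-namespaces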
    
    12. Install helm tiller
    
    kubectl -n kube-system create serviceaccount tiller
    
    kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    
    helm init --service-account tiller
    
    #Users in China: You will need to specify a specific tiller-image in order to initialize tiller. 
    
    #The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085. 
    
    #When initializing tiller, you'll need to pass in --tiller-image
    
    #tag v2.14.3 20190824
    helm init --service-account tiller \
    --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3
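
    Before installing any charts, wait for the tiller deployment to become ready:
    kubectl -n kube-system rollout status deploy/tiller-deploy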
    
    
    [rke@rancher-1 ~]$ kubectl -n kube-system get pod
    NAME                                      READY   STATUS      RESTARTS   AGE
    canal-bp9hw                               2/2     Running     0          4m22s
    canal-d84cx                               2/2     Running     0          4m22s
    canal-kftk2                               2/2     Running     0          4m22s
    coredns-autoscaler-5d5d49b8ff-4h8f5       1/1     Running     0          4m17s
    coredns-bdffbc666-mlc44                   1/1     Running     0          4m18s
    metrics-server-7f6bd4c888-cqw2k           1/1     Running     0          4m13s
    rke-coredns-addon-deploy-job-s78h6        0/1     Completed   0          4m19s
    rke-ingress-controller-deploy-job-f69k8   0/1     Completed   0          4m9s
    rke-metrics-addon-deploy-job-8ncvb        0/1     Completed   0          4m14s
    rke-network-plugin-deploy-job-sntgn       0/1     Completed   0          4m25s
    tiller-deploy-7f4d76c4b6-qdckk            1/1     Running     0          79s
    [rke@rancher-1 ~]$ helm version
    Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
    
    13. Add the Rancher chart repository
    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
    
    14. Install cert-manager
    helm install stable/cert-manager \
    --name cert-manager \
    --namespace kube-system \
    --version v0.5.2
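
    Give cert-manager a moment to come up before installing Rancher; one way to wait (assuming the stable chart names its deployment cert-manager):
    kubectl -n kube-system rollout status deploy/cert-manager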
    
    15. Install Rancher
    helm install rancher-stable/rancher --name rancher --namespace cattle-system --set hostname=rancher.xxx.top
    
    kubectl -n cattle-system get pod
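
    To block until all Rancher replicas are rolled out:
    kubectl -n cattle-system rollout status deploy/rancher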
    
    16. Installation complete; access the Rancher UI
    Visit https://rancher.xxx.top/, set an initial password for the admin account, and log in. When prompted for the server-url, confirm the address is correct and accept it, then wait a short while for the system to finish initializing.
    If the local cluster stays stuck in a waiting state with the message "Waiting for server-url setting to be set", try Global -> local -> Upgrade -> add a member role (admin/ClusterOwner) -> Save. (From the references I found, this issue also resolves itself over time; the admin user has already been added.)
    
    17. Troubleshooting
    • After startup the local cluster has a problem (v2.2.8)

      
      Exit status 1, Error from server (Forbidden)... a permissions problem
      
      # inspection shows only the rancher pods exist under cattle-system
      [rke@rancher-1 ~]$ kubectl get pod -n cattle-system
      NAME                       READY   STATUS    RESTARTS   AGE
      rancher-797f8646f6-4bcr9   1/1     Running   1          19m
      rancher-797f8646f6-fph2c   1/1     Running   2          19m
      rancher-797f8646f6-fvtnf   1/1     Running   2          19m
      
      # try restarting the rancher pods... (the UI is unreachable while this runs)
      kubectl -n cattle-system scale --replicas=0 deploy rancher
      kubectl -n cattle-system scale --replicas=3 deploy rancher
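
      If the Forbidden errors persist, the rancher pod logs usually show the underlying RBAC failure; a quick look (the chart labels its pods app=rancher):
      kubectl -n cattle-system logs -l app=rancher --tail=50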
      
