
Kubernetes Version Upgrade

Author: 济南打工人 | Published 2019-03-26 11:52

    Upgrade kubeadm

    Note: when apt upgrades kubeadm and asks whether to overwrite the 10-kubeadm.conf file, answer N.

    $ apt install kubeadm=1.11.0-00
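
    To see which package versions are available before pinning one, and to keep apt from replacing kubeadm unexpectedly later, a quick check like the following may help (a sketch, assuming a Debian/Ubuntu host with the Kubernetes apt repository already configured):

    $ apt update
    $ apt-cache madison kubeadm   # list the kubeadm versions the repository offers
    $ apt-mark hold kubeadm       # optional: pin kubeadm against unattended upgrades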
    

    Check the version

    $ kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0"......
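
    It can also be worth confirming what the node is currently running before you start; both commands below are standard, though their output format varies slightly across releases:

    $ kubelet --version
    $ kubectl version --short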
    

    Check which versions are available to upgrade to, and verify whether the current cluster can be upgraded.
    kubeadm upgrade plan checks that your cluster is in an upgradable state and fetches the versions you can upgrade to in a user-friendly way.

    $ kubeadm upgrade plan
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.10.0
    [upgrade/versions] kubeadm version: v1.11.5
    [upgrade/versions] Latest stable version: v1.12.3
    [upgrade/versions] WARNING: Couldn't fetch latest version in the v1.10 series from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.10.txt": Get https://dl.k8s.io/release/stable-1.10.txt: net/http: TLS handshake timeout
    
    External components that should be upgraded manually before you upgrade the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT   AVAILABLE
    Etcd        3.3.5     3.2.18
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     4 x v1.10.4   v1.12.3
    
    Upgrade to the latest stable version:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.10.0   v1.12.3
    Controller Manager   v1.10.0   v1.12.3
    Scheduler            v1.10.0   v1.12.3
    Kube Proxy           v1.10.0   v1.12.3
    CoreDNS              1.0.6     1.1.3
    
    You can now apply the upgrade by executing the following command:
    
        kubeadm upgrade apply v1.12.3
    
    Note: Before you can perform this upgrade, you have to update kubeadm to v1.12.3.
    
    _____________________________________________________________________
    
    

    Upgrade the cluster

    Run on master1:

    $ kubeadm upgrade apply v1.11.0
    

    kubeadm upgrade apply performs the following steps:

    • Checks that the cluster is in an upgradable state, including that:
      • the API server is reachable,
      • all nodes are in the Ready state, and
      • the control plane is healthy.
    • Enforces the version skew policy.
    • Makes sure the control plane images are available or can be pulled to the machine.
    • Upgrades the control plane components, rolling the upgrade back if any of them fails to start.
    • Applies the new kube-dns and kube-proxy manifests and enforces creation of all necessary RBAC rules.
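
    Before running the real upgrade, you can preview what would change without touching the cluster. A minimal sketch (the --dry-run flag should be available in kubeadm of this era, but confirm with kubeadm upgrade apply --help):

    $ kubeadm upgrade apply v1.11.0 --dry-run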

    At this point the upgrade may appear to hang, with output like:

    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-controller-manager-k8s-m2 hash: 799efd5d6916140baa665448a5c7ce99
    Static pod: kube-controller-manager-k8s-m2 hash: dee9d596b80547c79554ef14e49b7fa0
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-11-30-16-37-37/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-scheduler-k8s-m2 hash: 9f44c71763212b724704defdd28f5d97
    

    When this happens, wait for the timeout and then download the images; if your private registry does not have the image for that version, try a different version (one way to pre-pull the images manually is sketched below).
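
    A sketch of pre-pulling the control plane images, assuming the default k8s.gcr.io registry and the -amd64 image names used in this era (substitute your private registry and the version you are applying):

    $ docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.0
    $ docker pull k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
    $ docker pull k8s.gcr.io/kube-scheduler-amd64:v1.11.0
    $ docker pull k8s.gcr.io/kube-proxy-amd64:v1.11.0

    With the images in place, the upgrade proceeds to completion: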

    Static pod: kube-apiserver-k8s-m2 hash: fbbbd4e61695d1751f89dd8d4f7eb206
    Static pod: kube-apiserver-k8s-m2 hash: fbbbd4e61695d1751f89dd8d4f7eb206
    Static pod: kube-apiserver-k8s-m2 hash: fbbbd4e61695d1751f89dd8d4f7eb206
    Static pod: kube-apiserver-k8s-m2 hash: fbbbd4e61695d1751f89dd8d4f7eb206
    Static pod: kube-apiserver-k8s-m2 hash: 188fd88cb9c5b7fb5a364ef8961213e1
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-11-30-16-37-37/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-controller-manager-k8s-m2 hash: 799efd5d6916140baa665448a5c7ce99
    Static pod: kube-controller-manager-k8s-m2 hash: dee9d596b80547c79554ef14e49b7fa0
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-11-30-16-37-37/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    Static pod: kube-scheduler-k8s-m2 hash: 9f44c71763212b724704defdd28f5d97
    Static pod: kube-scheduler-k8s-m2 hash: ccdbecd66d9f0ad8d51e1fefd81f6526
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-m2" as an annotation
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.0". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
    
    
    • Add the RBAC permissions needed for automatic certificate rotation. In the future kubeadm will perform this step automatically. (Not verified.)
    $ kubectl create clusterrolebinding kubeadm:node-autoapprove-certificate-rotation --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
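
    To confirm the binding exists afterwards, a quick check with standard kubectl (the binding name comes from the command above):

    $ kubectl get clusterrolebinding kubeadm:node-autoapprove-certificate-rotation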
    

    Then upgrade the other two masters and the worker nodes in the same way. The common advice is that before upgrading a node you should mark it unschedulable and evict its workloads. When I upgraded a test environment without doing this, it did not disrupt the test team's work, but I still recommend following that advice:

    $ kubectl drain $HOST --ignore-daemonsets
    

    When running this command against a master node, this error is expected and can safely be ignored (static pods run on the master node):

    node "master" already cordoned
    error: pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): etcd-kubeadm, kube-apiserver-kubeadm, kube-controller-manager-kubeadm, kube-scheduler-kubeadm
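
    If you do need to evict the node anyway, the message itself points at the override; the static control plane pods are managed directly by the kubelet and keep running regardless (a sketch):

    $ kubectl drain $HOST --ignore-daemonsets --force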
    

    Upgrade kubectl and kubelet

    $ apt install kubelet=1.11.0-00
    $ apt install kubectl=1.11.0-00
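
    On systemd hosts the kubelet usually needs a restart to pick up the new binary; the package scripts may already do this, but running it explicitly does no harm (a sketch):

    $ systemctl daemon-reload
    $ systemctl restart kubelet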
    

    After the upgrade finishes, mark the node schedulable again to bring it back online:

    $ kubectl uncordon $HOST
    

    After the kubelet has been upgraded on all cluster nodes, run the following command (from anywhere, e.g. from outside the cluster) to confirm that all nodes are available again:

    $ kubectl get nodes
    

    If the STATUS column shows Ready for every node, the upgrade has completed successfully.

    Upgrading Kubernetes from v1.11 to v1.12 follows the same procedure.

    Recovering from a broken state

    If kubeadm upgrade fails for some reason and cannot roll back (for example because a node instance was shut down unexpectedly during execution), you can run kubeadm upgrade again. The command is idempotent, so it should eventually ensure that the actual state of the cluster matches the state you declared.

    You can use kubeadm upgrade apply with the target version (even x.x.x --> x.x.x, i.e. the version the cluster is already on) and the --force flag to recover from a broken state.
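
    For example (hypothetical version number; use the version the failed upgrade was targeting):

    $ kubeadm upgrade apply v1.11.0 --force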
