Installing Kubernetes with kubeadm


Author: 0fd7c75a65b1 | Published 2018-11-25 09:57 · 364 reads

    Most kubeadm guides available online require access to blocked sites to install Kubernetes, so I wrote this version, which works without a VPN.

    1. Preparation


    1.1 System environment

    • CentOS Linux release 7.5.1804
    • Docker 18.06
    • Kubernetes v1.12.2
    cat /etc/hosts
    192.168.199.143 master
    192.168.199.144 node2
    
    Host    IP               Role
    master  192.168.199.143  master
    node2   192.168.199.144  node

    Turn off the firewall; alternatively, see the official Installing kubeadm guide for the specific ports that must be open.

    systemctl stop firewalld
    systemctl disable firewalld
    

    Disable SELinux:

    setenforce 0
    
    vi /etc/selinux/config
    SELINUX=disabled
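    The same change can be made non-interactively; a one-liner equivalent to the vi edit above (it assumes the stock CentOS config file):

```shell
# Persist SELinux disabled across reboots, same effect as editing the file in vi
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```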
    

    Create the file /etc/sysctl.d/k8s.conf with the following content:

    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    vm.swappiness=0
    

    Load the br_netfilter module and apply the settings:

    modprobe br_netfilter
    sysctl -p /etc/sysctl.d/k8s.conf
    

    2. Installation


    2.1 Install Docker

    Add the Docker yum repository:

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    

    Install Docker:

    yum install -y --setopt=obsoletes=0 \
      docker-ce-18.06.1.ce-3.el7
    

    Kubernetes 1.12 has been validated against Docker versions 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06. Note that the minimum Docker version supported by Kubernetes 1.12 is 1.11.1. Here we install Docker 18.06.1 on every node.

    The available Docker versions can be listed with:
    yum list docker-ce.x86_64 --showduplicates |sort -r
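    Docker must also be running before kubeadm can pull images; with the standard systemd units this is:

```shell
# Start Docker now and have it come up on every boot
systemctl start docker
systemctl enable docker
```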

    2.2 Install kubeadm, kubelet and kubectl

    • kubeadm: the command-line tool that bootstraps the cluster.
    • kubelet: the core component that runs on every node in the cluster and performs operations such as starting pods and containers.
    • kubectl: the command-line tool for operating the cluster.

    Add a Kubernetes yum repository (this avoids needing access to blocked sites):

    cat > /etc/yum.repos.d/k8s.repo <<EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
    

    Install the packages:

    yum install -y kubelet kubeadm kubectl
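    The command above installs the latest packages. To match the versions used in this article, the packages can be pinned instead, and the kubelet service should be enabled so kubeadm can manage it (the exact version strings are assumptions; check them with yum list --showduplicates):

```shell
# Pin to the versions used in this article instead of the latest (version strings assumed)
yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2

# Enable the kubelet service; kubeadm init/join will start it later
systemctl enable kubelet
```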
    

    Since Kubernetes 1.8, the kubelet refuses to start while swap is enabled, so turn it off:

    swapoff -a
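    swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entries in /etc/fstab can be commented out as well (a sketch that assumes a standard fstab layout):

```shell
# Comment out every fstab line that mounts swap so it stays off after reboot
sed -ri '/\sswap\s/s/^/#/' /etc/fstab
```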
    

    2.3 Create a single-master cluster with kubeadm

    kubeadm v1.11+ added a kubeadm config print-default command, which makes it easy to dump kubeadm's default configuration to a file:

    kubeadm config print-default > kubeadm.conf 
    

    If you cannot access blocked sites, change the image repository in kubeadm.conf:

    sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" kubeadm.conf
    

    Pin the desired version number so that initialization does not have to fetch it from https://dl.k8s.io/release/stable-1.12.txt:

    sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.12.2/g" kubeadm.conf
    

    We can now pass the file to kubeadm's image pull command via the --config parameter:

    kubeadm config images pull --config kubeadm.conf
    

    You should see the images pulled successfully. If you later run into:

     The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
    

    it can be worked around as follows (the kubelet still looks for the pause image under the k8s.gcr.io name):

    docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    

    Next, change the advertiseAddress parameter in kubeadm.conf to the master's IP address:

    sed -i "s/advertiseAddress: .*/advertiseAddress: 192.168.199.143/g" kubeadm.conf
    

    This example uses the flannel network plugin, so the pod network CIDR (kubeadm's --pod-network-cidr) must be set to 10.244.0.0/16:

    sed -i "s/podSubnet: .*/podSubnet: \"10.244.0.0\/16\"/g" kubeadm.conf
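    After the four sed edits above, it is worth confirming that each field was actually rewritten:

```shell
# All four customized fields should print their new values
grep -E 'imageRepository|kubernetesVersion|advertiseAddress|podSubnet' kubeadm.conf
```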
    

    Next, run the master initialization:

    kubeadm init --config kubeadm.conf
    

    If you see the following, initialization succeeded:

    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.199.143:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:67ea537411822fe684d1ddb984802da62a4f22aa1c32fefe7c3404bb8f3f52e0
    

    Remember to run the following commands, or kubectl will not be usable:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Check the cluster status:

    kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health": "true"}
    

    If initialization runs into problems, clean up with:

    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/
    rm -rf .kube/
    

    2.4 Install a Pod network

    mkdir -p ~/k8s/
    cd ~/k8s
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    In kube-flannel.yml, point flannel at the correct NIC with --iface=<iface-name>; in this example the interface is ens33:

    containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.10.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            - --iface=ens33
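    If you prefer not to edit the manifest by hand, the flag can be inserted with sed (a sketch; it assumes the args list uses the eight-space indentation shown above):

```shell
# Append "- --iface=ens33" right after the --kube-subnet-mgr argument
sed -i 's/- --kube-subnet-mgr/- --kube-subnet-mgr\n        - --iface=ens33/' kube-flannel.yml
```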
    

    Inspect the master node's details:

    kubectl describe node master
    Name:               master
    Roles:              master
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/hostname=master
                        node-role.kubernetes.io/master=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 03 Nov 2018 23:15:14 +0800
    Taints:             node-role.kubernetes.io/master:NoSchedule
                        node.kubernetes.io/not-ready:NoSchedule
    Unschedulable:      false
    

    Notice that the 1.12 kubeadm puts an extra taint on the master node: node.kubernetes.io/not-ready:NoSchedule. This is easy to understand: a node that is not yet ready should not accept scheduling. But the node will not become ready until a network plugin has been deployed, so we edit kube-flannel.yml and add a toleration for the node.kubernetes.io/not-ready:NoSchedule taint:

    tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          - key: node.kubernetes.io/not-ready
            operator: Exists
            effect: NoSchedule
    

    Deploy flannel:

    kubectl apply -f kube-flannel.yml
    

    You can see the deployment is healthy:

    kubectl get ds -l app=flannel -n kube-system
    NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
    kube-flannel-ds-amd64     1         1         1       1            1           beta.kubernetes.io/arch=amd64     45s
    kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       45s
    kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     45s
    kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   45s
    kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     45s
    

    Finally, make sure every Pod is in the Running state:

    kubectl get pod --all-namespaces -o wide
    

    2.5 Add nodes to the cluster with kubeadm

    2.5.1 Add a node

    Log in to the node2 host; as before, disable swap and install docker, kubelet, kubeadm and kubectl.

    Add the node to the cluster:

     kubeadm join 192.168.199.143:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:460e8d95ca6f97ea6b110fe4faf24a8c8e1d588a3b827b8d96b789a3bd12ce89
    

    You should see:

    .....
    .....
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
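    Note that the bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired by the time you add a node, generate a fresh join command on the master:

```shell
# Prints a complete "kubeadm join ..." command with a newly created token
kubeadm token create --print-join-command
```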
    

    On the master:

    kubectl get nodes
    NAME      STATUS    ROLES     AGE       VERSION
    master    Ready     master    26m       v1.12.0
    node2     Ready     <none>    2m        v1.12.0
    

    2.5.2 Remove a node
    On the master:

    kubectl drain node2 --delete-local-data --force --ignore-daemonsets
    

    On node2:

    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/
    

    Back on the master, delete the node:

    kubectl delete node node2
    

    This completes the installation of a Kubernetes cluster with kubeadm.
    Reference: https://www.kubernetes.org.cn/4619.html
