Building a K8s Cluster from Scratch with kubeadm


Author: liuhaonan00 | Published 2021-02-01 21:22

    This is something I should have learned right after I joined IBM, but I was too busy having fun, and at work I never got past the "knows how to use it" stage. I hope the tuition I've paid over these past few years will pay off from here on.
    ---- Preface

    I. Preparation

    All of the operations in this section need to be performed on each VM separately.

    1 Prepare five virtual machines, with hostnames configured in advance

    [root@haonan1 ~]# cat /etc/hosts
    #127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    #::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.11.64.192 haonan1.fyre.ibm.com
    10.11.64.219 haonan2.fyre.ibm.com
    10.11.66.144 haonan3.fyre.ibm.com
    10.11.66.148 haonan4.fyre.ibm.com
    10.11.66.149 haonan5.fyre.ibm.com
    

    2 Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld
    

    3 Disable SELinux

    Edit /etc/selinux/config and set SELINUX=disabled, then reboot the machine.

    [root@haonan1 ~]# sestatus
    SELinux status:                 disabled
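The config edit can be scripted with sed, and running `setenforce 0` switches enforcement off immediately so you don't have to wait for the reboot. A minimal sketch of the sed, shown here against a sample copy of the file (point it at the real /etc/selinux/config on each VM):

```shell
# Sample of the relevant lines in /etc/selinux/config (use the real file on each VM)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux.conf.sample

# Flip SELINUX from enforcing to disabled in place
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' selinux.conf.sample

grep '^SELINUX=' selinux.conf.sample
```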
    

    4 Disable the swap partition

    Edit /etc/fstab and comment out the swap entry (the last line), then reboot the machine.

    [root@haonan1 ~]# cat /etc/fstab
    
    #
    # /etc/fstab
    # Created by anaconda on Mon Apr  6 15:18:09 2020
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    ## Please take a snapshot of your VM before modifying this file.  Modify this file incorrectly most likely will corrupt your system or stop your system from booting up
    
    /dev/mapper/rhel-root   /                       xfs     defaults        0 0
    UUID=4f3976c1-1696-4787-8618-f52bb1c0c86a /boot                   xfs     defaults        0 0
    #/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
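This edit can also be done with a one-line sed; a sketch against a sample fstab (run the same sed on /etc/fstab on each VM):

```shell
# Sample fstab with a swap entry (use /etc/fstab on the real VMs)
cat > fstab.sample <<'EOF'
/dev/mapper/rhel-root   /       xfs     defaults        0 0
/dev/mapper/rhel-swap   swap    swap    defaults        0 0
EOF

# Comment out every uncommented entry whose mount point is "swap"
sed -i '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' fstab.sample

cat fstab.sample
```

On a live system, `swapoff -a` afterwards disables swap immediately, without waiting for the reboot.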
    

    II. Install Docker

    Every step in this section also needs to be run on every machine.

    1 Download Docker

    wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.2.ce-3.el7.x86_64.rpm
    

    2 Install Docker

    yum install -y docker-ce-18.06.2.ce-3.el7.x86_64.rpm
    

    3 Start Docker

    systemctl start docker
    systemctl enable docker
    

    The result looks like this:

    [root@haonan1 ~]# systemctl status docker
    ● docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
       Active: active (running) since Mon 2021-02-01 03:11:29 PST; 1h 51min ago
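One thing not covered here, but recommended in kubeadm's install docs, is switching Docker's cgroup driver to systemd so it matches the kubelet's. A sketch of the /etc/docker/daemon.json this needs, written to a sample path for illustration (on the real machines, write /etc/docker/daemon.json and then `systemctl restart docker`):

```shell
# Write the daemon config to a sample path (use /etc/docker/daemon.json on the real machines)
cat > daemon.json.sample <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

cat daemon.json.sample
```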
    
    

    III. Install the K8s Components

    Every step in this section also needs to be run on every machine.

    1 Add the yum repository

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    

    2 Install the packages

    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    systemctl enable kubelet && systemctl start kubelet
    

    At this point kubelet will fail to start properly; ignore that for now and keep going.

    3 Adjust the kernel network settings

    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    sysctl --system
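If those two bridge sysctls are rejected with "No such file or directory", the br_netfilter kernel module isn't loaded. Loading it and making that persist across reboots looks roughly like this (the modules-load.d path follows the systemd convention; not something the original post needed):

```shell
modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF
sysctl --system
```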
    

    4 Initialize the nodes

    1 Configure the master node

    First find the machine's internal IP, then initialize the master node with it:
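To look up the internal IP for `--apiserver-advertise-address`, the first IPv4 address reported by `hostname -I` usually works (cross-check it against the /etc/hosts entries above):

```shell
# Print the machine's first IPv4 address
hostname -I | tr ' ' '\n' | grep -Em1 '^[0-9]+(\.[0-9]+){3}$'
```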

    kubeadm init --apiserver-advertise-address 10.11.64.192 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
    

    Once initialization succeeds, run the following commands:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Besides that, save the information below; you can also regenerate it at any time by running kubeadm token create --print-join-command on the master node, which prints the command worker nodes use to join the cluster:

    ...
    Your Kubernetes control-plane has initialized successfully!
    ...
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.11.64.192:6443 --token zco47o.c324opr97yvt4f4q \
        --discovery-token-ca-cert-hash sha256:71082a3113fe3e1df8706ba4219957acbc4837d4a9fc93f77046ba43899d8a32 
    

    2 Add the CNI network on the master node

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml
    

    The nodes will then move to the Ready state.
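A quick way to verify is to count the rows of `kubectl get nodes` whose STATUS is not Ready; a sketch using a captured sample of the output (on the master, pipe the real `kubectl get nodes --no-headers` instead):

```shell
# Sample `kubectl get nodes --no-headers` output (pipe the real command on the master)
cat > nodes.sample <<'EOF'
haonan1.fyre.ibm.com   Ready      control-plane,master   122m   v1.20.2
haonan2.fyre.ibm.com   NotReady   <none>                 117m   v1.20.2
EOF

# The STATUS column is field 2; print any node that is not Ready
awk '$2 != "Ready" {print $1, $2}' nodes.sample
```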

    3 Join the worker nodes to the cluster

    On each worker node, run the kubeadm join command shown above.

    4 Done (kc below is a shell alias for kubectl)

    [root@haonan1 ~]# kc get no
    NAME                   STATUS   ROLES                  AGE    VERSION
    haonan1.fyre.ibm.com   Ready    control-plane,master   122m   v1.20.2
    haonan2.fyre.ibm.com   Ready    <none>                 117m   v1.20.2
    haonan3.fyre.ibm.com   Ready    <none>                 113m   v1.20.2
    haonan4.fyre.ibm.com   Ready    <none>                 113m   v1.20.2
    haonan5.fyre.ibm.com   Ready    <none>                 113m   v1.20.2
    

    5 Postscript

    I ran into a few other snags along the way, some of which I never solved:

    1 During cluster initialization you may see an error complaining that IP forwarding is disabled:

    The error relates to /proc/sys/net/ipv4/ip_forward, which controls the kernel's IP forwarding; after confirming basic network connectivity, forwarding has to be switched on.

    cat /proc/sys/net/ipv4/ip_forward shows the current value: 0 means packet forwarding is forbidden, 1 means it is allowed; it needs to be changed to 1.

    You can change it with echo "1" > /proc/sys/net/ipv4/ip_forward, but the setting does not survive a restart of the network service or of the host. To make it take effect automatically, put that echo command in the /etc/rc.d/rc.local script, or add FORWARD_IPV4="YES" to /etc/sysconfig/network.
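On RHEL/CentOS 7 a tidier way to persist this than rc.local (not what I used at the time, but the same mechanism as the k8s.conf file earlier) is a sysctl.d drop-in:

```shell
cat <<EOF > /etc/sysctl.d/99-ipforward.conf
net.ipv4.ip_forward = 1
EOF
sysctl --system
```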

    2 When using the Calico CNI network, the calico-node pods never become ready

    Reference:
    https://github.com/projectcalico/calico/issues/2561

    $ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-5cbcccc885-xtxck   1/1     Running   0          40m
    kube-system   calico-node-n4tcf                          0/1     Running   0          40m
    kube-system   calico-node-sjqsr                          0/1     Running   0          36m
    kube-system   coredns-fb8b8dccf-x8thj                    1/1     Running   0          43m
    kube-system   coredns-fb8b8dccf-zvmsp                    1/1     Running   0          43m
    kube-system   etcd-k8s-master                            1/1     Running   0          42m
    kube-system   kube-apiserver-k8s-master                  1/1     Running   0          42m
    kube-system   kube-controller-manager-k8s-master         1/1     Running   0          42m
    kube-system   kube-proxy-5ck6g                           1/1     Running   0          43m
    kube-system   kube-proxy-ds549                           1/1     Running   0          36m
    kube-system   kube-scheduler-k8s-master                  1/1     Running   0          42m
    
    $ kubectl describe pod -n kube-system calico-node-n4tcf
    ...
    (skipped)
    ...
    Warning  Unhealthy  10s (x2 over 20s)  kubelet, k8s-master  (combined from similar events): Readiness probe failed: Threshold time for bird readiness check:  30s
    calico/node is not ready: BIRD is not ready: BGP not established with 10.0.0.112019-04-18 16:59:27.462 [INFO][607] readiness.go 88: Number of node(s) with BGP peering established = 0
    
    $ kubectl describe pod -n kube-system calico-node-sjqsr
    ...
    (skipped)
    ...
    Warning  Unhealthy  6s (x4 over 36s)  kubelet, k8s-worker1  (combined from similar events): Readiness probe failed: Threshold time for bird readiness check:  30s
    calico/node is not ready: BIRD is not ready: BGP not established with 192.168.56.1012019-04-18 16:59:49.812 [INFO][300] readiness.go 88: Number of node(s) with BGP peering established = 0
    

    I had no fix at the time, so I installed flannel instead.
    Someday I'd like to study how these two network plugins differ and get to the bottom of this problem~
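For the record, a workaround commonly suggested in that Calico issue thread is to pin calico-node's IP autodetection to the correct NIC via the IP_AUTODETECTION_METHOD environment variable; a sketch (the interface name eth0 is an assumption, substitute the node's real interface):

```shell
kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=eth0
```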

    A few tutorials: https://segmentfault.com/a/1190000018698263
