Foolproof k8s Cluster Setup (Kubernetes v1.27.3+)

Author: yuanzicheng | Published 2023-07-13 23:07

    1. Server preparation

    Prepare three hosts:

    IP            OS            Hostname
    172.16.1.101  Ubuntu 22.04  k8s-master
    172.16.1.102  Ubuntu 22.04  k8s-worker1
    172.16.1.103  Ubuntu 22.04  k8s-worker2
    1.1 Set /etc/hosts and the hostname on each host
    cat << EOF | sudo tee -a /etc/hosts
    172.16.1.101 k8s-master
    172.16.1.102 k8s-worker1
    172.16.1.103 k8s-worker2
    EOF
    
    # Set the master's hostname; on worker nodes substitute the matching name from above
    sudo hostnamectl hostname k8s-master
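
    A quick optional check that the names resolve and the new hostname took effect:

    getent hosts k8s-master k8s-worker1 k8s-worker2
    hostnamectl status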
    
    1.2 Time synchronization between hosts
    sudo apt install -y chrony
    sudo systemctl start chrony
    sudo systemctl enable chrony
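
    To verify that time synchronization is working, check chrony's status and its sources:

    chronyc tracking
    chronyc sources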
    
    1.3 Firewall settings on each node
    sudo ufw disable  && sudo ufw status
    
    1.4 Disable swap
    sudo swapoff -a
    sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
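
    Afterwards, swapon --show should print nothing and free should report zero swap:

    swapon --show
    free -h | grep -i swap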
    
    1.5 Forwarding IPv4 and letting iptables see bridged traffic
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    
    sudo modprobe overlay
    sudo modprobe br_netfilter
    
    # sysctl params required by setup, params persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    
    # Apply sysctl params without reboot
    sudo sysctl --system
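
    You can confirm that the modules are loaded and the sysctl values took effect:

    lsmod | grep -E 'overlay|br_netfilter'
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward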
    

    2. Install containerd

    sudo apt update
    sudo apt install -y ca-certificates curl gnupg lsb-release
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt update
    sudo apt install -y containerd.io
    
    containerd config default | sudo tee /etc/containerd/config.toml > /dev/null 2>&1
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
    sudo sed -i 's/registry.k8s.io/registry.aliyuncs.com\/google_containers/g' /etc/containerd/config.toml
    
    sudo systemctl restart containerd
    sudo systemctl enable containerd
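
    A quick check that containerd is running and that the two edits above were applied:

    systemctl is-active containerd
    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml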
    

    References:
    https://github.com/containerd/containerd/blob/main/docs/getting-started.md
    https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

    3. Install kubeadm, kubelet and kubectl

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

    sudo apt update
    sudo apt install -y apt-transport-https ca-certificates curl
    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update
    sudo apt install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
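
    Verify the installed versions; the init step below targets v1.27.3. If the mirror has already moved on to a newer release, you can pin an exact version instead (the 1.27.3-00 package naming below is an assumption about the mirror's layout; confirm it with apt-cache madison kubeadm):

    kubeadm version -o short
    kubectl version --client
    kubelet --version

    # Optional version pinning (package revision assumed, verify first):
    # sudo apt install -y kubelet=1.27.3-00 kubeadm=1.27.3-00 kubectl=1.27.3-00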
    

    If your hosts are virtual machines, this is a good point to clone the remaining hosts from the first one and then just change each clone's IP and hostname, which saves a lot of repeated work.

    4. Initialize the master node
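
    Optionally, pre-pull the control-plane images first; this makes the init itself faster and surfaces registry problems early:

    sudo kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.27.3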

    sudo kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.27.3 \
    --control-plane-endpoint=k8s-master \
    --pod-network-cidr=10.10.0.0/16
    

    On success the output looks like the following; follow its instructions to finish setting up.

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
            --discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f \
            --control-plane
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
            --discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f
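
    The token above expires after 24 hours by default. If you need to join a node later, generate a fresh join command on the master:

    sudo kubeadm token create --print-join-command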
    

    5. Join the worker nodes

    $ sudo kubeadm join k8s-master:6443 --token y24d2k.3prnyxd9ltafe01b \
            --discovery-token-ca-cert-hash sha256:f056a04a1105b98929a005322971bb2060fcfa5c29a04a39bfc9d3d6a5a6523f
    [preflight] Running pre-flight checks
            [WARNING SystemVerification]: missing optional cgroups: blkio
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    Back on the master node, use kubectl to list the cluster's nodes:

    $ kubectl get nodes
    NAME          STATUS      ROLES           AGE   VERSION
    k8s-master    NotReady    control-plane   45m   v1.27.3
    k8s-worker1   NotReady    <none>          44m   v1.27.3
    k8s-worker2   NotReady    <none>          44m   v1.27.3
    

    All nodes are NotReady because no network plugin has been installed for the cluster yet.

    6. Install the network plugin

    curl https://docs.tigera.io/archive/v3.25/manifests/calico.yaml -O
    sed -i "s#192\.168\.0\.0/16#10\.10\.0\.0/16#" calico.yaml
    kubectl apply -f calico.yaml
    

    The CALICO_IPV4POOL_CIDR in calico.yaml must match the pod-network-cidr that was passed to kubeadm init. If your pod-network-cidr happens to be Calico's default of 192.168.0.0/16, calico.yaml needs no changes.

    Also note that if the following two lines are commented out in calico.yaml, you need to uncomment them manually (a sed sketch follows after the snippet):

    - name: CALICO_IPV4POOL_CIDR
      value: "10.10.0.0/16"
    

    Once the installation finishes, the pods in the kube-system namespace look like this:

    $ kubectl get pod -n kube-system
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-6c99c8747f-wmz9s   1/1     Running   0          17m
    calico-node-65kc4                          1/1     Running   0          17m
    calico-node-kk75c                          1/1     Running   0          17m
    calico-node-qqm8b                          1/1     Running   0          17m
    coredns-7bdc4cb885-8d9bq                   1/1     Running   0          48m
    coredns-7bdc4cb885-dhxz2                   1/1     Running   0          48m
    etcd-master                                1/1     Running   3          48m
    kube-apiserver-master                      1/1     Running   3          49m
    kube-controller-manager-master             1/1     Running   3          48m
    kube-proxy-7pdx5                           1/1     Running   0          47m
    kube-proxy-g7h9c                           1/1     Running   0          47m
    kube-proxy-l2kqh                           1/1     Running   0          48m
    kube-scheduler-master                      1/1     Running   3          48m
    

    Listing the nodes again, they are all in the Ready state now:

    $ kubectl get nodes
    NAME          STATUS   ROLES           AGE   VERSION
    k8s-master    Ready    control-plane   50m   v1.27.3
    k8s-worker1   Ready    <none>          49m   v1.27.3
    k8s-worker2   Ready    <none>          48m   v1.27.3
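
    The <none> under ROLES is normal: kubeadm only labels control-plane nodes. If you want the column filled in for the workers, the label is purely cosmetic and can be added by hand:

    kubectl label node k8s-worker1 node-role.kubernetes.io/worker=
    kubectl label node k8s-worker2 node-role.kubernetes.io/worker=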
    

    One caveat: on cloud VMs the provider may only open a limited set of ports by default, in which case you will see something like this:

    $ kubectl get pod -n kube-system
    NAME                                      READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-7bdbfc669-bxn9g   1/1     Running   0          7m53s
    calico-node-5m5x8                         0/1     Running   0          7m53s
    calico-node-bpphv                         0/1     Running   0          7m53s
    calico-node-lbvq8                         0/1     Running   0          7m53s
    coredns-5bbd96d687-8xvvq                  1/1     Running   0          16m
    coredns-5bbd96d687-pjwrc                  1/1     Running   0          16m
    etcd-master.test.com                      1/1     Running   0          16m
    kube-apiserver-master.test.com            1/1     Running   0          16m
    kube-controller-manager-master.test.com   1/1     Running   0          16m
    kube-proxy-5qjvp                          1/1     Running   0          14m
    kube-proxy-87bpn                          1/1     Running   0          16m
    kube-proxy-bp6zz                          1/1     Running   0          14m
    kube-scheduler-master.test.com            1/1     Running   0          16m
    

    The fix is simply to open port 179 (BGP, which Calico uses to exchange routes between nodes) on all nodes.
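
    To confirm that the blocked port is the culprit, describe one of the unready calico-node pods; with BGP blocked, the readiness probe typically fails with a message along these lines (the pod name is taken from the sample output above):

    kubectl describe pod calico-node-5m5x8 -n kube-system
    # Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.16.1.102,172.16.1.103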


    At this point the k8s cluster is basically up and running; an Ingress Controller is left as a follow-up step.
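
    As an optional final smoke test, deploy something small and make sure it is scheduled onto the workers and reaches Running:

    kubectl create deployment nginx --image=nginx
    kubectl get pods -o wide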
