Install k8s Online on CentOS 7

Author: Pasca | Published 2021-12-12 17:32
    System and versions

    CentOS 7, Docker 20.10.9, Kubernetes 1.23.0

    Installing kubeadm
    Following the [official guide](https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/ ), we use `kubeadm` to deploy the cluster.
    
    1. Hardware requirements

      At least 2 GB of RAM and 2 CPU cores

    2. Disable firewalld and iptables

      Stop and disable the firewall on every node so it cannot block cluster traffic, for example as shown below.
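
      A minimal sketch, assuming firewalld is the active firewall on CentOS 7:

      # Stop and disable firewalld so it does not interfere with k8s networking
      systemctl stop firewalld
      systemctl disable firewalld
      # Optionally flush any leftover iptables rules
      iptables -F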

    3. Allow iptables to see bridged traffic

      cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
      br_netfilter
      EOF
      
      cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      sudo sysctl --system
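
      You can confirm the module is loaded and the sysctl values are applied:

      # The br_netfilter module should be listed
      lsmod | grep br_netfilter
      # Both values should print as 1
      sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables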
      
    4. Disable swap

      # Turn swap off temporarily (it comes back after a reboot)
      swapoff -a
      # Check that swap is off
      free -m
      # Output like the following means swap is disabled
      [root@localhost ~]# free -m
                    total        used        free      shared  buff/cache   available
      Mem:           1819         260        1268           9         290        1406
      Swap:             0           0           0
      
      vi /etc/fstab
      #
      # /etc/fstab
      # Created by anaconda on Wed Sep  1 15:01:27 2021
      #
      # Accessible filesystems, by reference, are maintained under '/dev/disk'
      # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
      #
      UUID=e95686df-e97a-409f-968e-df388b13a239 /                       xfs     defaults        0 0
      UUID=3f5292fa-ab79-4e86-b036-27be00d2d84e /boot                   xfs     defaults        0 0
      # Comment out the swap entry
      #UUID=e2fd5a6a-0422-43b4-b3d3-c57c50c89b84 swap                    swap    defaults        0 0
      # Save and quit vi
      :wq
      # Reboot the host
      # Check again after the reboot
      free -m
      # Output like the following means swap stays disabled
      [root@localhost ~]# free -m
                    total        used        free      shared  buff/cache   available
      Mem:           1819         260        1268           9         290        1406
      Swap:             0           0           0
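
      Alternatively, the swap entry can be commented out non-interactively (a sketch; double-check /etc/fstab afterwards):

      # Comment out any fstab line that mounts swap
      sed -ri '/\sswap\s/s/^/#/' /etc/fstab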
      
    5. Install kubeadm

      cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      # Note: we use the Aliyun mirror here instead of the Google repo
      #baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
      baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      #gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      exclude=kubelet kubeadm kubectl
      EOF
      
      # Set SELinux to permissive mode (effectively disabling it)
      sudo setenforce 0
      sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
      
      sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
      
      sudo systemctl enable --now kubelet
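
      To match the versions listed at the top of this article rather than pulling the latest release, the packages can be pinned (the version string below is an assumption based on this article's setup):

      sudo yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0 --disableexcludes=kubernetes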
      
    6. Configure hosts (optional; if skipped, use IP addresses in the later steps)

      $ vi /etc/hosts
      192.168.234.129     k8s-master
      192.168.234.130     k8s-node1
      192.168.234.131     k8s-node2
      # Save and quit vi
      :wq
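
      Each machine's hostname should match its /etc/hosts entry (a sketch; run the matching command on each node):

      # On the master
      hostnamectl set-hostname k8s-master
      # On node1 and node2, respectively
      hostnamectl set-hostname k8s-node1
      hostnamectl set-hostname k8s-node2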
      
    7. Initialize the k8s cluster

      1. Cluster layout

        Role     IP
        master   192.168.234.129
        node1    192.168.234.130
        node2    192.168.234.131
      2. Confirm kubelet is running on each host (optional; useful for troubleshooting when kubeadm init fails)

        $ systemctl status kubelet # Check the status; start the service if it is not running
        ● kubelet.service - kubelet: The Kubernetes Node Agent
           Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
          Drop-In: /usr/lib/systemd/system/kubelet.service.d
                   └─10-kubeadm.conf
           Active: active (running) since 六 2021-12-11 14:30:05 CST; 2s ago
             Docs: https://kubernetes.io/docs/
         Main PID: 10469 (kubelet)
            Tasks: 11
           Memory: 26.3M
           CGroup: /system.slice/kubelet.service
                   └─10469 /usr/bin/kubelet
        

        If kubelet is not running, use `journalctl -xeu kubelet` to find out why it failed to start.

        1. kubelet cgroup driver: "systemd" is different from docker cgroup driver

          This error means the kubelet cgroup driver does not match Docker's cgroup driver; change Docker's cgroup driver to match the one kubelet uses (systemd).

          Edit /etc/docker/daemon.json and add the following setting:

          {
              ...
              "exec-opts": ["native.cgroupdriver=systemd"]
              ...
          }
          
          # Reload the daemon configuration
          systemctl daemon-reload
          # Restart docker
          systemctl restart docker
          # Restart kubelet
          systemctl restart kubelet
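
          Afterwards, confirm Docker is now using the systemd cgroup driver:

          # Should print "Cgroup Driver: systemd"
          docker info | grep -i cgroup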
          
      3. Run on the master node

        # Note: --image-repository sets where the control-plane images are pulled from; --v=6 turns on verbose logging
        $ kubeadm init \
        --image-repository=registry.aliyuncs.com/google_containers \
        --apiserver-advertise-address=192.168.234.129 \
        --kubernetes-version v1.23.0 \
        --service-cidr=192.166.0.0/16 \
        --pod-network-cidr=192.167.0.0/16 \
        --v=6
        
        # --image-repository: registry mirror for faster image pulls
        # --apiserver-advertise-address: the current master host's IP
        # --kubernetes-version: k8s version; if omitted, the latest release is used
        # --service-cidr: CIDR for k8s Services; must not overlap with the host or pod networks
        # --pod-network-cidr: CIDR for pods; must not overlap with the host or service networks; used by Calico later
        # --v: log level; 5 or higher prints more detail, which helps with troubleshooting
        
        # Output like the following means initialization succeeded
        
        Your Kubernetes control-plane has initialized successfully!
        
        To start using your cluster, you need to run the following as a regular user:
        
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
        Alternatively, if you are the root user, you can run:
        
          export KUBECONFIG=/etc/kubernetes/admin.conf
        
        You should now deploy a pod network to the cluster.
        Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
          https://kubernetes.io/docs/concepts/cluster-administration/addons/
        
        Then you can join any number of worker nodes by running the following on each as root:
        
        kubeadm join 192.168.234.129:6443 --token pptc23.2kyz3xmu5ehv4j3p \
          --discovery-token-ca-cert-hash sha256:fe9b576d3a52502ef46d09010cfc14cb6dfc4fdb885873ebf09cf8be7950f5b3
        
      4. Configure kubectl

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
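
        At this point kubectl should be able to talk to the cluster:

        # The master node should be listed (it may show NotReady until the CNI is installed)
        kubectl get nodes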
        
      5. Install Calico (CNI)

        # Apply the remote Calico manifest directly; alternatively, upload the file to the server first and apply it locally
        $ curl https://docs.projectcalico.org/manifests/calico.yaml -O
        $ kubectl apply -f calico.yaml
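
        Before moving on, the Calico and CoreDNS pods should reach the Running state:

        # Watch the kube-system pods until calico-node and coredns are Running
        kubectl get pods -n kube-system -w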
        
      6. Join the worker nodes to the cluster

        $ kubeadm join 192.168.234.129:6443 --token pptc23.2kyz3xmu5ehv4j3p \
          --discovery-token-ca-cert-hash sha256:fe9b576d3a52502ef46d09010cfc14cb6dfc4fdb885873ebf09cf8be7950f5b3
        
        # Output like the following means the node joined successfully
        [preflight] Running pre-flight checks
        [preflight] Reading configuration from the cluster...
        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
        [kubelet-start] Starting the kubelet
        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
        
        This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
        
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
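
        If the token printed by kubeadm init has expired (tokens are valid for 24 hours by default), generate a new join command on the master:

        # Creates a fresh token and prints the full kubeadm join command
        kubeadm token create --print-join-command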
        
      7. Assign node roles

        $ kubectl get nodes 
        NAME         STATUS   ROLES                  AGE     VERSION
        k8s-master   Ready    control-plane,master   23m     v1.23.0
        k8s-node1    Ready    <none>                 9m43s   v1.23.0
        k8s-node2    Ready    <none>                 7m41s   v1.23.0
        # The worker nodes' ROLES column shows <none>
        $ kubectl label nodes k8s-node1 node-role.kubernetes.io/node=
        $ kubectl label nodes k8s-node2 node-role.kubernetes.io/node=
        # Check again
        $ kubectl get nodes
        NAME         STATUS   ROLES                  AGE   VERSION
        k8s-master   Ready    control-plane,master   25m   v1.23.0
        k8s-node1    Ready    node                   12m   v1.23.0
        k8s-node2    Ready    node                   10m   v1.23.0
        
    8. Verify the cluster works

      Save the following manifest on the master (for example as app-demo.yaml in an empty directory), then apply it:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: app-demo
        labels:
          app: app-demo
      spec:
        replicas: 3
        template:
          metadata:
            name: app-demo
            labels:
              app: nginx-demo
          spec:
            containers:
              - name: nginx-demo
                image: nginx:stable-alpine
                imagePullPolicy: IfNotPresent
                ports:
                  - containerPort: 80
            restartPolicy: Always
        selector:
          matchLabels:
            app: nginx-demo
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-svr
      spec:
        selector:
          app: nginx-demo
        ports:
          - protocol: TCP
            port: 10080
            targetPort: 80
            nodePort: 30080
        type: NodePort
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: ns-nginx
        labels:
          name: ns-nginx
      
      $ kubectl apply -f .
      $ kubectl get pods -o wide
      # The pods are spread across the worker nodes, and the pod IPs fall inside the --pod-network-cidr range
      NAME                        READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
      app-demo-6ffbbc8f85-96fwg   1/1     Running   0          2m44s   192.167.36.65     k8s-node1   <none>           <none>
      app-demo-6ffbbc8f85-pw4fw   1/1     Running   0          2m44s   192.167.169.130   k8s-node2   <none>           <none>
      app-demo-6ffbbc8f85-rc2pb   1/1     Running   0          2m44s   192.167.169.129   k8s-node2   <none>           <none>
      

      Open a browser to port 30080 on each of the three hosts; if the nginx welcome page appears on all of them, the cluster is working.
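
      The same check can be run from the command line (IPs follow the layout used in this article):

      # Each request should return the nginx welcome page
      curl http://192.168.234.129:30080
      curl http://192.168.234.130:30080
      curl http://192.168.234.131:30080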
