k8s single-master cluster deployment

Author: 萌褚 | Published 2022-06-20 11:24


    1. Server requirements:

    • Recommended minimum hardware: 2 CPU cores, 2 GB RAM, 20 GB disk
    • The servers should ideally have internet access, since container images need to be pulled from public registries; if they cannot reach the internet, download the required images in advance and import them on each node (a sketch follows this list)
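
    A minimal sketch of pre-pulling the required images on an internet-connected machine and importing them on an offline node, assuming Docker as the container engine and the same versions used later in this guide:

    # On a machine with internet access: pull the images kubeadm needs
    kubeadm config images pull --kubernetes-version v1.20.0 \
      --image-repository registry.aliyuncs.com/google_containers

    # Export them to a tarball and copy it to the offline node
    docker save $(kubeadm config images list --kubernetes-version v1.20.0 \
      --image-repository registry.aliyuncs.com/google_containers) -o k8s-images.tar

    # On the offline node: import the images
    docker load -i k8s-images.tar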

    1.1 Software environment:

    (The original shows a table image; per the steps below, the stack is CentOS 7.x, Docker CE and Kubernetes v1.20.0.)

    1.2 Server plan:

    Role     Hostname      IP
    Master   k8s-master1   192.168.137.81
    Node     k8s-node1     192.168.137.82
    Node     k8s-node2     192.168.137.83

    1.3 Architecture diagram:

    (Diagram image in the original post.)

    2. Initial operating system configuration

    # Disable the firewall
    systemctl stop firewalld
    systemctl disable firewalld
    
    # Disable SELinux
    sed -i '/^SELINUX/s/enforcing/disabled/' /etc/selinux/config  # permanent
    setenforce 0  # temporary
    
    # Disable swap
    swapoff -a  # temporary
    sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
    
    # Set the hostname according to the server plan
    hostnamectl set-hostname <hostname>
    
    # Add hosts entries on the master
    cat >> /etc/hosts << EOF
    192.168.137.81 k8s-master1
    192.168.137.82 k8s-node1
    192.168.137.83 k8s-node2
    EOF
    
    # Pass bridged IPv4 traffic to the iptables chains
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system  # apply
    
    # Time synchronization
    yum install ntpdate -y
    ntpdate time.windows.com
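
    Note: on a freshly installed CentOS host the br_netfilter kernel module may not be loaded yet, in which case the two net.bridge.* keys above do not exist and sysctl --system will complain that it cannot set them. If that happens, load the module first (the modules-load.d entry keeps it loaded across reboots):

    modprobe br_netfilter
    echo "br_netfilter" > /etc/modules-load.d/k8s.conf
    sysctl --system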
    

    3. Install Docker/kubeadm/kubelet [all nodes]

    Docker is used as the container engine here; it can be swapped for another runtime such as containerd (a rough sketch follows).
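
    For reference only, a rough sketch of installing containerd instead; this is not covered by the original steps, and kubeadm/kubelet would additionally need to be pointed at the containerd CRI socket (/run/containerd/containerd.sock). The containerd.io package comes from the same Aliyun docker-ce repository used below:

    # Install containerd from the Aliyun docker-ce repository
    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    yum -y install containerd.io

    # Generate a default config; depending on the containerd version you may also want
    # to set SystemdCgroup = true in the runc options of /etc/containerd/config.toml
    containerd config default > /etc/containerd/config.toml
    systemctl enable containerd && systemctl restart containerd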

    3.1 Install Docker

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    yum -y install docker-ce
    systemctl enable docker && systemctl start docker
    

    3.2 Configure a registry mirror (image download accelerator)

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://v27k018o.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
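
    Optionally (a common practice, not part of the original steps), the same daemon.json can also switch Docker's cgroup driver to systemd, which is the driver kubeadm recommends and otherwise warns about during init:

    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://v27k018o.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl restart docker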
    

    3.3 Add the Aliyun Kubernetes YUM repository

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    3.4 Install kubeadm, kubelet and kubectl

    Install with the version pinned explicitly:

    yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
    systemctl enable kubelet
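
    A quick sanity check that the pinned versions were installed (a suggestion; output may vary slightly by patch release):

    kubeadm version -o short          # expect v1.20.0
    kubectl version --client --short
    kubelet --version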
    

    4. Deploy the Kubernetes Master

    Run on 192.168.137.81 (the master):

    kubeadm init \
      --apiserver-advertise-address=192.168.137.81 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.20.0 \
      --service-cidr=10.96.0.0/12 \
      --pod-network-cidr=10.244.0.0/16 \
      --ignore-preflight-errors=all
    
    • --apiserver-advertise-address: the address the cluster is advertised on
    • --image-repository: the default registry k8s.gcr.io is not reachable from mainland China, so the Aliyun mirror registry is specified instead
    • --kubernetes-version: the K8s version, matching the packages installed above
    • --service-cidr: the cluster-internal virtual network (Service addresses), the unified entry point for accessing Pods
    • --pod-network-cidr: the Pod network; must match the CIDR in the CNI component's YAML deployed below

    Alternatively, bootstrap with a configuration file:

    # cat kubeadm.conf
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.20.0
    imageRepository: registry.aliyuncs.com/google_containers 
    networking:
      podSubnet: 10.244.0.0/16 
      serviceSubnet: 10.96.0.0/12 
    
    # kubeadm init --config kubeadm.conf --ignore-preflight-errors=all
    

    When initialization completes, a kubeadm join command is printed at the end; note it down, it is used below.

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.137.81:6443 --token tix7ff.pnjcwvl6awyaeh8i \
        --discovery-token-ca-cert-hash sha256:617842ea5040ce5e7f971d387b58693cbfa79261763b68c589fe8d124f1a5154
    

    Copy the kubeconfig file that kubectl uses to authenticate to the cluster into its default path:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Check the nodes:

    [root@k8s-master ~]# kubectl get nodes
    NAME          STATUS     ROLES                  AGE   VERSION
    k8s-master1   NotReady   control-plane,master   95s   v1.20.0
    

    Note: the node shows NotReady because the network plugin has not been deployed yet.

    5. Join the Kubernetes worker nodes

    Run on 192.168.137.82/83 (the nodes). To add a new node to the cluster, execute the kubeadm join command that kubeadm init printed:

    kubeadm join 192.168.137.81:6443 --token tix7ff.pnjcwvl6awyaeh8i \
        --discovery-token-ca-cert-hash sha256:617842ea5040ce5e7f971d387b58693cbfa79261763b68c589fe8d124f1a5154
    

    The default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created, which can be generated in a single command:

    kubeadm token create --print-join-command
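
    Existing tokens and their expiry times can be listed with:

    kubeadm token list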
    

    Reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/

    6. Deploy the container network (CNI)

    Calico is a pure layer-3 data center networking solution and one of the mainstream network options for Kubernetes today. Download the YAML:

    wget https://docs.projectcalico.org/manifests/calico.yaml
    

    After downloading, edit the Pod network CIDR defined in the file (CALICO_IPV4POOL_CIDR) so that it matches the --pod-network-cidr passed to kubeadm init above; the relevant snippet is shown after the commands below. Once the file has been modified, deploy:

    kubectl apply -f calico.yaml
    kubectl get pods -n kube-system
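
    For reference, the CALICO_IPV4POOL_CIDR environment variable in calico.yaml is commented out by default; once uncommented and set, it looks roughly like this (a sketch, the surrounding manifest differs between Calico versions):

    # in the calico-node DaemonSet's container env:
    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"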
    

    Once all the Calico pods are Running, the nodes will also become Ready:

    Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

    [root@k8s-master ~]# kubectl get pods -n kube-system
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-5f6cfd688c-4hfmv   1/1     Running   0          2m37s
    calico-node-8swrp                          1/1     Running   0          2m37s
    calico-node-nv96p                          1/1     Running   0          2m37s
    calico-node-s2vl6                          1/1     Running   0          2m37s
    coredns-7f89b7bc75-7nzm8                   0/1     Running   0          22m
    coredns-7f89b7bc75-9c46c                   0/1     Running   0          22m
    etcd-k8s-master1                           1/1     Running   0          22m
    kube-apiserver-k8s-master1                 1/1     Running   0          22m
    kube-controller-manager-k8s-master1        1/1     Running   0          22m
    kube-proxy-clc6k                           1/1     Running   0          18m
    kube-proxy-dvlvr                           1/1     Running   0          22m
    kube-proxy-jr6hm                           1/1     Running   0          18m
    kube-scheduler-k8s-master1                 1/1     Running   0          22m
    [root@k8s-master ~]# kubectl get nodes
    NAME          STATUS   ROLES                  AGE   VERSION
    k8s-master1   Ready    control-plane,master   23m   v1.20.0
    k8s-node1     Ready    <none>                 18m   v1.20.0
    k8s-node2     Ready    <none>                 18m   v1.20.0
    

    7. Test the Kubernetes cluster

    Create a pod in the cluster to verify that everything works:

    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --port=80 --type=NodePort
    kubectl get pod,svc
    
    [root@k8s-master ~]# kubectl get pod,svc
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/nginx-6799fc88d8-nkzk6   1/1     Running   0          6m51s
    
    NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        32m
    service/nginx        NodePort    10.108.121.252   <none>        80:30167/TCP   6m2s
    

    Access URL: http://NodeIP:Port
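
    For example, with the NodePort allocated above (30167 here; the port is assigned randomly per deployment), the service can be checked from any machine that can reach a node IP:

    curl -I http://192.168.137.82:30167   # expect HTTP/1.1 200 OK from nginx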

    8. Deploy the Dashboard

    The Dashboard is an official web UI for basic management of K8s resources.

    wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
    

    By default the Dashboard is only reachable from inside the cluster; change its Service to type NodePort to expose it externally:

    # vi recommended.yaml
    ...
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30001
      selector:
        k8s-app: kubernetes-dashboard
      type: NodePort
    ...
    
    # kubectl apply -f recommended.yaml
    # kubectl get pods -n kubernetes-dashboard
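
    To confirm that the NodePort change took effect, the Service can be checked as well:

    # kubectl get svc -n kubernetes-dashboard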
    

    Access URL: https://NodeIP:30001. Create a service account and bind it to the built-in cluster-admin role:

    # Create the user
    $ kubectl create serviceaccount dashboard-admin -n kube-system
    # Grant the user cluster-admin
    $ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    # Retrieve the user's token
    $ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
    

    Log in to the Dashboard with the token from the output.

    [root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
    Name:         dashboard-admin-token-cgmld
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: 3208e176-5115-4425-bb67-d45863bc05f7
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1066 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlBnN2lzMk8zYndPX3ZONnc0cnFnRjhsVnczOTVlNGxXSDl4c1Z0OGtmNDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tY2dtbGQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzIwOGUxNzYtNTExNS00NDI1LWJiNjctZDQ1ODYzYmMwNWY3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.DY9SLeakUzVUqUHLJjLuaBtv0EOj6l-zsCfmzKTtsiTkaX39bGpLInToKuSpbXHYAmkpCvoZH22ghmOds3LFgvQBCIt6M83rrL83aPzhDjAKtPtPkz9vJGR7K5LnfrB9AX5dVhiU_AkaVIBIFcqTxVlFpl6W1EzTc0uJDM7K8Gr2XnPvRfUMe8WaEWR7tVxMEEhPhP2waEYmcc5uFz5unI_g6lTMYRJnhZCjfqh7lS9NA_8WgmoQnQjxW4cYAsqrdCzbroTEMCslH_pCj-PZNxf7mKVXZwklYL78t8klU_AytuhdaV88iRR3HEuBMYLbfJjy6RLkyt_ORweaXb8npg
    

    Reposted from: https://blog.51cto.com/wemux/5354074
