Setting up a Kubernetes 1.18 cluster with kubeadm


Author: ljcccccccc | Published 2020-07-26 01:25

    Major updates in Kubernetes 1.18

    1. The Topology Manager graduates to Beta

    The Topology Manager is a beta feature in Kubernetes 1.18. It enables NUMA alignment of CPUs and devices (such as SR-IOV VFs), letting workloads run in an environment optimized for low latency. Before the Topology Manager was introduced, the CPU manager and device manager made their resource-allocation decisions independently of each other, which could result in poor allocations on multi-socket systems and degraded performance for latency-critical applications.
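
    A minimal sketch of how the policy is selected, assuming a kubeadm-style kubelet config at /var/lib/kubelet/config.yaml (valid policies are none, best-effort, restricted, and single-numa-node):

    # sketch: pin CPU and device allocations to a single NUMA node
    cat >> /var/lib/kubelet/config.yaml <<EOF
    topologyManagerPolicy: single-numa-node
    EOF
    systemctl restart kubelet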

    2. Server-side Apply moves to Beta 2

    Server-side Apply was promoted to Beta in 1.16, and 1.18 introduces a second Beta. This new version tracks and manages field changes for all new Kubernetes objects, so you can tell what changed a resource and when.
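
    For example (the manifest name and field-manager name here are placeholders), a client opts in per request:

    # apply through the API server's apply machinery; field ownership is
    # recorded per manager and conflicting writes are rejected
    kubectl apply --server-side --field-manager=ci-pipeline -f deployment.yaml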

    3. Extending Ingress with IngressClass, which replaces the deprecated annotation
    In Kubernetes 1.18, Ingress gains two important additions: a new pathType field and a new IngressClass resource. The pathType field specifies how paths should be matched: besides the default ImplementationSpecific type, there are the new Exact and Prefix path types.

    The IngressClass resource describes a type of Ingress within a Kubernetes cluster. Ingresses specify the class they belong to via the new ingressClassName field. Together, the new resource and field replace the deprecated kubernetes.io/ingress.class annotation.
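
    A sketch of both additions together (the names and controller string are illustrative; in 1.18 both resources live under networking.k8s.io/v1beta1):

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.k8s.io/v1beta1
    kind: IngressClass
    metadata:
      name: demo-nginx
    spec:
      controller: k8s.io/ingress-nginx    # illustrative controller string
    ---
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo
    spec:
      ingressClassName: demo-nginx        # replaces the kubernetes.io/ingress.class annotation
      rules:
      - http:
          paths:
          - path: /app
            pathType: Prefix              # Exact | Prefix | ImplementationSpecific
            backend:
              serviceName: demo-svc       # hypothetical Service
              servicePort: 80
    EOF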

    4. SIG CLI introduces the kubectl debug command

    SIG CLI had long debated the need for a debug utility. With the development of ephemeral containers, it became clear that developers needed something beyond kubectl exec. The kubectl debug command (Alpha, and feedback is welcome) lets developers debug their Pods in the cluster with ease. It creates a temporary container that runs alongside the Pod being inspected and attaches a console for interactive troubleshooting. We consider this addition invaluable.
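
    In 1.18 the command ships under the alpha subcommand and requires the EphemeralContainers feature gate; a sketch, with the pod name as a placeholder:

    # attach an interactive busybox container next to the target Pod
    kubectl alpha debug -it mypod --image=busybox --target=mypod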

    5. Windows CSI support arrives in Alpha

    Alongside the Kubernetes 1.18 release, an Alpha release of the CSI Proxy for Windows has shipped. The CSI Proxy enables unprivileged (pre-approved) containers to perform privileged storage operations on Windows, which makes it possible to support CSI drivers on Windows.

    For more new features, see the official site: 1.18 release notes

    Environment for this walkthrough:
    kubeadm requires a minimum of 2 CPU cores

    IP          Hostname   Role     System / Spec
    10.0.0.70   master1    master   CentOS 7.8, 2 CPU / 4 GB
    10.0.0.71   node1      node     CentOS 7.8, 2 CPU / 4 GB
    10.0.0.72   node2      node     CentOS 7.8, 2 CPU / 4 GB

    I. Initialize the environment

    Bulk-set the hostnames and configure passwordless SSH (the hostname loop follows the key-distribution script below)

    # add host entries
    cat >> /etc/hosts <<EOF
    10.0.0.70  master1
    10.0.0.71  node1
    10.0.0.72  node2
    EOF
    
    # distribute the SSH key from the master node
    yum install -y expect
    ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
    for i in master1 node1 node2;do
    expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
            expect {
                    \"*yes/no*\" {send \"yes\r\"; exp_continue}
                    \"*password*\" {send \"123\r\"; exp_continue}
                    \"*Password*\" {send \"123\r\";}
            } "
    done 
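
    With the keys in place, the hostnames themselves can be set in the same batch style (assuming each machine is named after its /etc/hosts entry above):

    for i in master1 node1 node2; do
        ssh root@$i "hostnamectl set-hostname $i"
    done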
    
    

    Disable SELinux, firewalld/iptables, and swap on all nodes

    systemctl stop firewalld
    systemctl disable firewalld
    iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
    iptables -P FORWARD ACCEPT
    swapoff -a
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
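
    A quick sanity check that these took effect:

    getenforce    # expect Permissive (Disabled after a reboot)
    free -m       # expect the Swap line to show 0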
    

    Configure yum repositories on all nodes

    curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    yum clean all
    yum makecache
    

    Enabling kernel IPv4 forwarding over bridges requires the br_netfilter module, so load it first:

    # on every node
    
    modprobe br_netfilter
    modprobe ip_conntrack
    
    

    Tune kernel parameters

    cat > /etc/sysctl.d/kubernetes.conf <<EOF
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    # vm.swappiness=0: use swap only when the system would otherwise OOM
    vm.swappiness=0
    # vm.overcommit_memory=1: don't check whether physical memory is sufficient
    vm.overcommit_memory=1
    # vm.panic_on_oom=0: let the OOM killer act instead of panicking
    vm.panic_on_oom=0
    fs.inotify.max_user_instances=8192
    fs.inotify.max_user_watches=1048576
    fs.file-max=52706963
    fs.nr_open=52706963
    net.ipv6.conf.all.disable_ipv6=1
    net.netfilter.nf_conntrack_max=2310720
    EOF
    
    sysctl -p /etc/sysctl.d/kubernetes.conf
    
    # distribute to all nodes
    for i in master1 node1 node2
    do
        scp /etc/sysctl.d/kubernetes.conf root@$i:/etc/sysctl.d/
        ssh root@$i sysctl -p /etc/sysctl.d/kubernetes.conf
    done
    

    bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets on a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1 set, packets forwarded by a layer-2 bridge are also filtered by iptables FORWARD rules. Common options include:

    net.bridge.bridge-nf-call-arptables: whether to filter the bridge's ARP packets in the arptables FORWARD chain
    net.bridge.bridge-nf-call-ip6tables: whether to filter IPv6 packets in the ip6tables chains
    net.bridge.bridge-nf-call-iptables: whether to filter IPv4 packets in the iptables chains
    net.bridge.bridge-nf-filter-vlan-tagged: whether to filter VLAN-tagged packets in iptables/arptables
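
    To confirm the settings took effect:

    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both should print "= 1"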
    

    Load the IPVS kernel modules on all nodes

    Why IPVS? Starting with version 1.8, kube-proxy introduced an IPVS mode. Like the iptables mode, IPVS is built on Netfilter, but it uses hash tables, so once the number of Services reaches a certain scale, the speed advantage of hash lookups shows and Service performance improves.

    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack
    EOF
    
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
    
    # the lsmod above verifies that the required kernel modules are loaded
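
    Loading the modules alone does not switch kube-proxy into IPVS mode; one common way to enable it after the cluster is up (a sketch, not part of the original steps) is to edit kube-proxy's ConfigMap and recreate its pods:

    kubectl -n kube-system edit configmap kube-proxy   # set mode: "ipvs"
    kubectl -n kube-system delete pod -l k8s-app=kube-proxy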
    

    Install ipset on all nodes

    yum install ipset -y
    

    About ipset

    iptables is the core network-isolation technology on Linux servers. The kernel evaluates the iptables rules one by one when handling network traffic, so efficiency drops as rules pile up. With ipset, the five-tuples in rules (protocol, source address, source port, destination address, destination port) can be merged into bounded sets, which greatly reduces the number of iptables rules and improves efficiency. Tests have reportedly shown the ipset approach to be up to 100x faster than plain iptables.
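
    As an illustration (the addresses are examples), a single set-based rule stands in for one iptables rule per address:

    ipset create blacklist hash:ip
    ipset add blacklist 192.0.2.10
    ipset add blacklist 192.0.2.11
    iptables -A INPUT -m set --match-set blacklist src -j DROP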
    

    To make IPVS easier to manage, also install ipvsadm.

    yum install ipvsadm -y
    

    Set the system time zone on all nodes

    timedatectl set-timezone Asia/Shanghai
     # keep the hardware clock in UTC
    timedatectl set-local-rtc 0
     # restart services that depend on the system time
    systemctl restart rsyslog 
    systemctl restart crond
    

    As a last step, it's best to run a system update (optional)

    yum update -y
    

    II. Install Docker

    # on all machines
    export VERSION=19.03
    curl -fsSL "https://get.docker.com/" | bash -s -- --mirror Aliyun
    
    
    # configure registry mirrors
    
    mkdir -p /etc/docker/
    cat>/etc/docker/daemon.json<<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": [
          "https://fz5yth0r.mirror.aliyuncs.com",
          "https://dockerhub.mirrors.nwafu.edu.cn/",
          "https://mirror.ccs.tencentyun.com",
          "https://docker.mirrors.ustc.edu.cn/",
          "https://reg-mirror.qiniu.com",
          "https://registry.docker-cn.com"
      ],
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m",
        "max-file": "3"
      }
    }
    EOF
    

    Start Docker

    [root@master1 ~]# systemctl start docker && systemctl enable docker 
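
    Since daemon.json sets the systemd cgroup driver (which the kubelet expects), it's worth verifying:

    docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd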
    
    

    III. Install kubeadm

    The default yum repository is hosted overseas, so switch to the domestic Aliyun mirror:

    cat <<EOF >/etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
    

    Install kubeadm, kubectl, and the kubelet service.

    yum install -y \
        kubeadm-1.18.3 \
        kubectl-1.18.3 \
        kubelet-1.18.3 \
        --disableexcludes=kubernetes && \
        systemctl enable kubelet
    

    Pre-pull the seven images Kubernetes needs.

    kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.18.3
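
    To check the exact version tags without pulling, kubeadm can also simply list the images:

    kubeadm config images list --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.18.3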
    
    

    Initialize the master node.

    # --pod-network-cidr: the network range for Pods
    # --service-cidr: the network range for Services
    # --image-repository: the image registry address; it must match the repository the images were pre-pulled from
    
    
    kubeadm init --kubernetes-version=v1.18.3 \
      --pod-network-cidr=10.244.0.0/16 \
      --service-cidr=10.1.0.0/16 \
      --image-repository=registry.aliyuncs.com/google_containers
    
    W0408 10:10:19.704855    9534 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.18.0
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.50]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.50 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.50 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W0408 10:10:29.102901    9534 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W0408 10:10:29.104505    9534 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 26.007875 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: 4oxcgj.1dqz97nbu4pcf84l
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.0.0.70:6443 --token 4oxcgj.1dqz97nbu4pcf84l \
        --discovery-token-ca-cert-hash sha256:2445a08ab9e210e9d3f82949ae16472d47abbc188a2b28e4d6470b02d5ddce3a
    

    Seeing "Your Kubernetes control-plane has initialized successfully!" means the init succeeded.

    After initialization completes, run the following commands as prompted. Note that the join command at the end will be used later on; record it.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
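
    Alternatively, when working as root, kubeadm's own output suggests simply pointing KUBECONFIG at the admin config:

    export KUBECONFIG=/etc/kubernetes/admin.conf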
    

    Check the Kubernetes status.

    # the node is still NotReady (no network plugin yet)
    kubectl get nodes
    NAME         STATUS     ROLES    AGE     VERSION
    k8s-master   NotReady   master   4m45s   v1.18.3
    
    

    Deploy the flannel network.

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    
    

    In kube-flannel.yml, delete the four DaemonSets kube-flannel-ds-arm64, kube-flannel-ds-arm, kube-flannel-ds-ppc64le, and kube-flannel-ds-s390x, keeping only kube-flannel-ds-amd64, since typical lab environments run on the amd64 architecture.

    # change the image address to:
    [root@master1 ~]# vim kube-flannel.yml
    ...
     image: registry.cn-hangzhou.aliyuncs.com/ljcc/flannel:v0.12.0-amd64
    ...
    
    kubectl apply -f kube-flannel.yml
    
    kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    master1   Ready    master   14m   v1.18.3
    
    
    # everything should now show a Running status
    
    kubectl get all -n kube-system
    NAME                                     READY   STATUS    RESTARTS   AGE
    pod/coredns-7ff77c879f-8flq5             1/1     Running   0          23m
    pod/coredns-7ff77c879f-xth7n             1/1     Running   0          23m
    pod/etcd-k8s-master                      1/1     Running   0          24m
    pod/kube-apiserver-k8s-master            1/1     Running   0          18m
    pod/kube-controller-manager-k8s-master   1/1     Running   1          24m
    pod/kube-flannel-ds-amd64-8nft4          1/1     Running   0          13m
    pod/kube-proxy-fk7tn                     1/1     Running   0          23m
    pod/kube-scheduler-k8s-master            1/1     Running   1          24m
    
    NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    service/kube-dns   ClusterIP   10.1.0.10    <none>        53/UDP,53/TCP,9153/TCP   24m
    
    NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    daemonset.apps/kube-flannel-ds-amd64   1         1         1       1            1           <none>                   13m
    daemonset.apps/kube-proxy              1         1         1       1            1           kubernetes.io/os=linux   24m
    
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/coredns   2/2     2            2           24m
    
    NAME                                 DESIRED   CURRENT   READY   AGE
    replicaset.apps/coredns-7ff77c879f   2         2         2       23m
    

    Join the worker nodes to the Kubernetes cluster

    kubeadm join 10.0.0.70:6443 --token 4oxcgj.1dqz97nbu4pcf84l \
        --discovery-token-ca-cert-hash sha256:2445a08ab9e210e9d3f82949ae16472d47abbc188a2b28e4d6470b02d5ddce3a  
    

    If you've forgotten the token and hash values:

    # compute the CA certificate hash
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    
    # list tokens
    kubeadm token list
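
    Or regenerate a complete join command in one step:

    kubeadm token create --print-join-command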
    
    

    Check the Kubernetes cluster status

    [root@master1 ~]# kubectl get nodes
    NAME      STATUS   ROLES    AGE   VERSION
    master1   Ready    master   9h    v1.18.3
    node1     Ready    <none>   8h    v1.18.3
    node2     Ready    <none>   8h    v1.18.3
    

    Done!
    Any shortcomings will be improved next time.

