Building a k8s (v1.13.1) high-availability cluster from scratch (3 master+


Author: 韩海林666 | Published 2018-12-21 15:54

    Our goal today:


    (figure: k8s-ha.png)

    Environment preparation

    Three minimally installed servers, with the firewall and SELinux disabled:

    hostname:  master1        master2        master3
    IP:        172.18.0.81    172.18.0.82    172.18.0.83

    Run all of the following steps on all three servers.

    # systemctl stop firewalld
    # systemctl disable firewalld
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    # setenforce 0
    # sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
    # swapoff -a
    

    Edit /etc/fstab to make sure swap stays off after a reboot:

    # cat /etc/fstab
    
    #
    # /etc/fstab
    # Created by anaconda on Fri Dec 21 05:19:53 2018
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/centos-root /                       xfs     defaults        0 0
    UUID=a5a945d9-4423-4b00-87db-42dc829b680e /boot                   xfs     defaults        0 0
    #/dev/mapper/centos-swap swap                    swap    defaults        0 0
    

    Install the base packages:

    # yum -y install epel-release vim tree ntpdate
    

    Add time synchronization:

    # crontab -l
    5 * * * * ntpdate 0.pool.ntp.org
    

    Upgrade the kernel:

    # rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    # rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
    Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
    Retrieving http://elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:elrepo-release-7.0-3.el7.elrepo  ################################# [100%]
    # yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * elrepo-kernel: hkg.mirror.rackspace.com
    elrepo-kernel                                        | 2.9 kB  00:00:00
    elrepo-kernel/primary_db                             | 1.8 MB  00:00:05
    Available Packages
    kernel-lt.x86_64                     4.4.168-1.el7.elrepo     elrepo-kernel
    kernel-lt-devel.x86_64               4.4.168-1.el7.elrepo     elrepo-kernel
    kernel-lt-doc.noarch                 4.4.168-1.el7.elrepo     elrepo-kernel
    kernel-lt-headers.x86_64             4.4.168-1.el7.elrepo     elrepo-kernel
    kernel-lt-tools.x86_64               4.4.168-1.el7.elrepo     elrepo-kernel
    kernel-lt-tools-libs.x86_64          4.4.168-1.el7.elrepo     elrepo-kernel
    kernel-lt-tools-libs-devel.x86_64    4.4.168-1.el7.elrepo     elrepo-kernel
    kernel-ml.x86_64                     4.19.11-1.el7.elrepo     elrepo-kernel
    kernel-ml-devel.x86_64               4.19.11-1.el7.elrepo     elrepo-kernel
    kernel-ml-doc.noarch                 4.19.11-1.el7.elrepo     elrepo-kernel
    kernel-ml-headers.x86_64             4.19.11-1.el7.elrepo     elrepo-kernel
    kernel-ml-tools.x86_64               4.19.11-1.el7.elrepo     elrepo-kernel
    kernel-ml-tools-libs.x86_64          4.19.11-1.el7.elrepo     elrepo-kernel
    kernel-ml-tools-libs-devel.x86_64    4.19.11-1.el7.elrepo     elrepo-kernel
    perf.x86_64                          4.19.11-1.el7.elrepo     elrepo-kernel
    python-perf.x86_64                   4.19.11-1.el7.elrepo     elrepo-kernel
    # yum --enablerepo=elrepo-kernel install kernel-ml
    

    Edit /etc/default/grub and make sure GRUB_DEFAULT=0, so the newly installed kernel (the first menu entry) is the default, then regenerate the grub config and reboot:

    # cat /etc/default/grub 
    GRUB_DEFAULT=0
    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    GRUB_DISABLE_SUBMENU=true
    GRUB_TERMINAL_OUTPUT="console"
    GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet"
    GRUB_DISABLE_RECOVERY="true"
    # grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-4.19.11-1.el7.elrepo.x86_64
    Found initrd image: /boot/initramfs-4.19.11-1.el7.elrepo.x86_64.img
    Found linux image: /boot/vmlinuz-3.10.0-862.el7.x86_64
    Found initrd image: /boot/initramfs-3.10.0-862.el7.x86_64.img
    Found linux image: /boot/vmlinuz-0-rescue-eafcd01abd94457a8dd71c8c323e46e7
    Found initrd image: /boot/initramfs-0-rescue-eafcd01abd94457a8dd71c8c323e46e7.img
    done
    # reboot
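
    After the reboot, it's worth confirming the node actually came up on the new kernel before continuing:

```shell
# Print the running kernel release; on these hosts it should now report
# 4.19.11-1.el7.elrepo.x86_64 instead of the stock 3.10.0 kernel.
uname -r
```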
    

    Install Docker 17.03.2:

    # yum remove docker \
    >                   docker-client \
    >                   docker-client-latest \
    >                   docker-common \
    >                   docker-latest \
    >                   docker-latest-logrotate \
    >                   docker-logrotate \
    >                   docker-selinux \
    >                   docker-engine-selinux \
    >                   docker-engine
    # yum install -y yum-utils device-mapper-persistent-data lvm2
    # yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    # yum install -y --setopt=obsoletes=0  docker-ce-17.03.2.ce-1.el7.centos.x86_64
    
    

    Edit /usr/lib/systemd/system/docker.service so Docker keeps its data under /data/docker:

    ExecStart=/usr/bin/dockerd --graph=/data/docker
    

    Reload systemd (the unit file was just edited), then enable and start Docker:

    # systemctl daemon-reload
    # systemctl enable docker
    # systemctl start docker
    

    Docker is installed.
    Next, install kubeadm.
    Configure the Aliyun yum repo:

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    Install kubeadm:

    # yum -y install kubelet kubeadm kubectl --disableexcludes=kubernetes
    # systemctl enable kubelet
    # cat  <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    # modprobe br_netfilter
    # sysctl --system
    # echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    # sysctl -p
    

    Pull the images (configure an Aliyun registry mirror to speed this up):

    # cat pull_mirror.sh 
    #!/bin/sh
    set -x
    
    docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1
    docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1
    docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1
    docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.1
    docker pull mirrorgooglecontainers/pause:3.1
    docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
    docker pull coredns/coredns:1.2.6
    
    docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
    docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
    docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
    docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
    docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
    docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
    docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
    
    docker rmi mirrorgooglecontainers/kube-proxy-amd64:v1.13.1
    docker rmi mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1
    docker rmi mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1
    docker rmi mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1
    docker rmi mirrorgooglecontainers/etcd-amd64:3.2.24
    docker rmi coredns/coredns:1.2.6
    docker rmi mirrorgooglecontainers/pause:3.1
    # bash pull_mirror.sh 
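
    The repetitive pull/tag/rmi lists can also be generated from a single image list. A dry-run sketch of that refactor (my own helper `print_cmds`, not part of the original script) which only echoes the commands for review; coredns/coredns:1.2.6 and pause:3.1 don't follow the `-amd64` naming, so they keep their explicit lines:

```shell
#!/bin/sh
# Emit the pull/tag/rmi commands for the -amd64 control-plane images from one
# list instead of repeating every image three times. Echo only (dry run);
# pipe the output to sh once it looks right.
set -eu

print_cmds() {
    for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
               kube-scheduler:v1.13.1 kube-proxy:v1.13.1 etcd:3.2.24; do
        name=${img%%:*}; tag=${img#*:}
        src="mirrorgooglecontainers/${name}-amd64:${tag}"
        echo "docker pull $src"
        echo "docker tag $src k8s.gcr.io/${name}:${tag}"
        echo "docker rmi $src"
    done
}

print_cmds
```

    Running `print_cmds | sh` would execute the generated commands.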
    

    Create the cluster init file kubeadm-config.yaml:

    # cat kubeadm-config.yaml 
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.1
    apiServer:
      certSANs:
      - "172.18.0.81"
    controlPlaneEndpoint: "172.18.0.81:8443"
    networking:
      podSubnet: 10.244.0.0/16
    

    podSubnet: I use flannel, so this must match the network plugin (10.244.0.0/16 is flannel's default).
    certSANs: the load balancer's IP.
    controlPlaneEndpoint: the load balancer's IP and port; for example, my nginx config:

    stream {
        server {
            listen 8443;
            proxy_pass kube_apiserver;
        }
    
        upstream kube_apiserver {
            server 172.18.0.81:6443 weight=10 max_fails=3 fail_timeout=5s;
            server 172.18.0.82:6443 weight=10 max_fails=3 fail_timeout=5s;
            server 172.18.0.83:6443 weight=10 max_fails=3 fail_timeout=5s;
        }
    }
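
    Since the three upstream entries differ only in the IP, the stream block can be generated from the master list. A small sketch (my own helper `gen_stream_conf`, not part of the original setup), so adding or removing a master only touches one variable:

```shell
#!/bin/sh
# Generate the nginx stream{} block above from a list of master IPs.
set -eu
masters="172.18.0.81 172.18.0.82 172.18.0.83"

gen_stream_conf() {
    echo "stream {"
    echo "    server {"
    echo "        listen 8443;"
    echo "        proxy_pass kube_apiserver;"
    echo "    }"
    echo "    upstream kube_apiserver {"
    for m in $masters; do
        echo "        server ${m}:6443 weight=10 max_fails=3 fail_timeout=5s;"
    done
    echo "    }"
    echo "}"
}

gen_stream_conf    # redirect into the nginx stream config file to use it
```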
    

    Initialize master1:

    # kubeadm init --config=kubeadm-config.yaml
    ...
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 172.18.0.81:8443 --token wipo2g.wl0is1y9zm7fe7je --discovery-token-ca-cert-hash sha256:15c3869d81037dba2eec8456b9ff7722848586b9df3c16afeac1ac04fe3f3026
    

    Save the join command to a file, and set up kubectl access for root:

    # echo 'kubeadm join 172.18.0.81:8443 --token wipo2g.wl0is1y9zm7fe7je --discovery-token-ca-cert-hash sha256:15c3869d81037dba2eec8456b9ff7722848586b9df3c16afeac1ac04fe3f3026' > join
    # mkdir -p $HOME/.kube
    # cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    # chown $(id -u):$(id -g) $HOME/.kube/config
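
    If the join file or token output is ever lost, the --discovery-token-ca-cert-hash can be recomputed from the CA certificate (this is the standard kubeadm recipe). Demonstrated here against a throwaway self-signed CA so it runs anywhere; on master1 you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
#!/bin/sh
# Recompute a kubeadm-style discovery hash: sha256 over the DER-encoded
# public key of the CA certificate.
set -eu
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

# Stand-in for /etc/kubernetes/pki/ca.crt (throwaway self-signed cert).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```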
    

    Check the node list; master1 is installed (it stays NotReady until a network plugin is deployed):

    # kubectl get nodes
    NAME      STATUS     ROLES    AGE     VERSION
    master1   NotReady   master   3m46s   v1.13.1
    

    Install the flannel network plugin:

    # kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
    

    Copy the certificates generated on master1 to master2 and master3:

    # USER=root
    # CONTROL_PLANE_IPS="172.18.0.82 172.18.0.83"
    # for host in ${CONTROL_PLANE_IPS}; do
         scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
         scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
         scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
         scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
         scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
         scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
         scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
         scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
         scp /etc/kubernetes/admin.conf "${USER}"@$host:
     done
    

    Then run the following on both master2 and master3:

    # mkdir -p /etc/kubernetes/pki/etcd
    # mv ca.crt /etc/kubernetes/pki/
    # mv ca.key /etc/kubernetes/pki/
    # mv sa.pub /etc/kubernetes/pki/
    # mv sa.key /etc/kubernetes/pki/
    # mv front-proxy-ca.crt /etc/kubernetes/pki/
    # mv front-proxy-ca.key /etc/kubernetes/pki/
    # mv etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
    # mv etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
    # mv admin.conf /etc/kubernetes/admin.conf
    

    On master2 and master3, run the join command saved earlier with --experimental-control-plane appended, so they join as control-plane nodes.
    On worker nodes, run the saved join command as-is.
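
    A small sketch of the control-plane join (my own wrapper around the `join` file created earlier, not from the original post); it prints the command for review instead of running it:

```shell
#!/bin/sh
# Build the control-plane join command from the saved `join` file and print
# it for review; run the printed command on the new master.
set -eu
# Stand-in join file; on a real master2/master3 this is the file created
# from master1's kubeadm init output.
[ -f join ] || echo 'kubeadm join 172.18.0.81:8443 --token wipo2g.wl0is1y9zm7fe7je --discovery-token-ca-cert-hash sha256:15c3869d81037dba2eec8456b9ff7722848586b9df3c16afeac1ac04fe3f3026' > join

cmd="$(cat join) --experimental-control-plane"
echo "$cmd"
```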
