Quickly Building a Kubernetes Cluster on CentOS 7 with YUM

Author: KaisLu | Published 2020-11-12

    1. Operating System Preparation

    This walkthrough uses two CentOS 7 hosts. The master node must have at least 2 CPU cores; 1 core is enough for the other nodes.

    Tips: a word of advice: avoid using Red Hat (RHEL) as your own test environment; YUM there is genuinely exhausting.

    2. Deployment Architecture

    Hostname   IP               Role     Services Installed
    master     192.168.199.20   Master   etcd, kube-apiserver, kube-scheduler, kube-controller-manager
    node1      192.168.199.21   Node1    kube-proxy, kubelet, docker

    3. Set Hostnames and Prepare the Environment

    (1) Perform the following on both hosts (a consolidated script sketch covering these steps appears after the list)

    • Edit /etc/hostname and set the hostname to master and node1 respectively, then edit /etc/hosts on both machines so it contains:
    192.168.199.21  node1
    192.168.199.20  master
    192.168.199.20  etcd
    192.168.199.20  registry
    
    • Disable the firewall
    [root@master yum.repos.d]# systemctl stop firewalld
    [root@master yum.repos.d]# systemctl disable firewalld
    
    • Disable swap
      When memory runs low, Linux automatically pages memory out to swap on disk, which degrades performance; for performance reasons it is best disabled.
    [root@master yum.repos.d]# swapoff -a
    
    • Disable SELinux
      Edit /etc/selinux/config and set SELINUX=disabled:
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=disabled
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    
    • Configure the yum repositories; here we use Alibaba's CentOS 7 mirror and Alibaba's Kubernetes repo
      Tips: you may want to back up the existing files under /etc/yum.repos.d/ to a bak directory first
    [root@master ~]# cd /etc/yum.repos.d/  &&  curl -O http://mirrors.aliyun.com/repo/Centos-7.repo
    [root@master yum.repos.d]# vi kubernetes.repo
    
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    
    [root@master yum.repos.d]# yum clean all
    
    [root@master yum.repos.d]# yum makecache
    
    
    • Reboot the operating system so that the hostname and SELinux changes take effect
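
    All of the list items above can be consolidated into a single script. A minimal sketch, assuming a stock CentOS 7 host; pass each machine's name (master or node1) as the argument, and note that the /etc/hosts entries still have to be added as shown earlier:

    #!/bin/bash
    # Node prep sketch for this walkthrough -- run once on each host.
    # Usage: bash prep.sh <hostname>, e.g. bash prep.sh master
    set -e
    hostnamectl set-hostname "$1"        # same effect as editing /etc/hostname
    systemctl stop firewalld
    systemctl disable firewalld
    swapoff -a                           # disable swap for the current boot
    sed -i '/ swap / s/^/#/' /etc/fstab  # optional: keep swap off across reboots
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    setenforce 0 || true                 # permissive now; fully disabled after reboot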

    4. Deploy etcd

    Kubernetes and Flannel both depend on the etcd service, so etcd must be installed first. Install it directly with yum:

    [root@master yum.repos.d]# yum -y install etcd
    [root@master yum.repos.d]# etcdctl --version
    etcdctl version: 3.3.11
    API version: 2
    [root@master yum.repos.d]# 
    
    

    The etcd installed by yum keeps its default configuration in /etc/etcd/etcd.conf. Edit that file and change the following settings:

    #[Member]
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
    
    #[Clustering]
    ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
    
    

    Option notes:
    --listen-peer-urls: the port etcd uses for peer communication between cluster members, historically 7001 by default. Since this is a single-node deployment, the parameter can be omitted; note that in v2 the default changed to 2380, with 7001 still usable.
    --listen-client-urls: the port clients use to call the etcd API, historically 4001 by default, changed to 2379 in v2. For Kubernetes here we keep port 4001 available.
    --data-dir: where etcd stores its data.
    --advertise-client-urls: the client URLs this member advertises to the cluster. If this parameter is omitted, etcd fails with the error below.


    [Figure in the original post: etcd startup error]

    Start etcd and verify its status

    [root@master  ~]# systemctl start etcd
    [root@master ~]#  etcdctl set testdir/testkey0 0
    0
    [root@master  ~]#  etcdctl get testdir/testkey0 
    0
    [root@master ~]# etcdctl -C http://etcd:4001 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
    cluster is healthy
    [root@master ~]# etcdctl -C http://etcd:2379 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
    cluster is healthy
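
    To confirm that etcd is really bound to the addresses from ETCD_LISTEN_CLIENT_URLS, the listening sockets can be inspected directly. A quick sanity check (not part of the original steps):

    [root@master ~]# ss -tlnp | grep etcd
    # Expect LISTEN entries on *:2379 and *:4001, matching
    # the ETCD_LISTEN_CLIENT_URLS setting above.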
    

    5. Deploy the Master

    5.1 Deploy Docker

    Install with yum, then edit /etc/sysconfig/docker so that Docker can pull images from the registry over plain HTTP:

    [root@master ~]# yum install docker
    
    [root@master ~]# vim /etc/sysconfig/docker
    
    # /etc/sysconfig/docker
    
    # Modify these options if you want to change the way the docker daemon runs
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
    if [ -z "${DOCKER_CERT_PATH}" ]; then
        DOCKER_CERT_PATH=/etc/docker
    fi
    # This second assignment replaces the one above, so repeat those flags here
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
    [root@master ~]#
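
    On newer Docker packages the same setting usually lives in /etc/docker/daemon.json instead of /etc/sysconfig/docker. A sketch, using the registry host and port assumed in this walkthrough (the setting takes effect when the daemon starts):

    {
        "insecure-registries": ["registry:5000"]
    }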
    

    Start the service and enable it at boot

    [root@master yum.repos.d]# systemctl start docker
    [root@master yum.repos.d]# systemctl enable docker
    

    5.2 Install Kubernetes

    Install with yum:

    [root@master ~]# yum install kubernetes
    

    5.3 Configure and Start Kubernetes

    The master runs the Kubernetes API Server, Kubernetes Controller Manager, and Kubernetes Scheduler, so the corresponding service configurations need to be modified.

    5.3.1 vi /etc/kubernetes/apiserver

    [root@master ~]# vim /etc/kubernetes/apiserver
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--port=8080"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    
    # Add your own!
    KUBE_API_ARGS=""
    
    

    5.3.2 vi /etc/kubernetes/config

    [root@master ~]# vim /etc/kubernetes/config
    
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://master:8080"
    
    

    Start the services and enable them at boot

    [root@master kubernetes]# vim start_services.sh 
    #!/bin/bash
    for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
        systemctl restart $SERVICES
        systemctl enable $SERVICES
        systemctl status $SERVICES
    done
    [root@master kubernetes]# bash start_services.sh
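
    With the services up, the control plane can be sanity-checked from the master; only the commands are shown, since the output format varies by version (a verification sketch, not part of the original steps):

    [root@master ~]# kubectl cluster-info
    [root@master ~]# kubectl get componentstatuses
    # Both commands hit the apiserver on localhost:8080; scheduler,
    # controller-manager, and etcd should all report Healthy.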
    
    

    6. Deploy Node1

    6.1 Deploy Docker

    See section 5.1.

    6.2 Deploy Kubernetes

    See section 5.2.

    6.3 Configure and Start Kubernetes

    Node1 runs the Kubelet and Kubernetes Proxy, so the corresponding service configurations need to be modified.

    6.3.1 vim /etc/kubernetes/config

    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://master:8080"
    
    

    6.3.2 vim /etc/kubernetes/kubelet

    # kubernetes kubelet (minion) config
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname-override=node1"
    
    # location of the api-server
    KUBELET_API_SERVER="--api-servers=http://master:8080"
    
    # pod infrastructure container
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    
    # Add your own!
    KUBELET_ARGS=""
    
    

    Start the services and enable them at boot

    [root@node1 kubernetes]# vim start_services.sh 
    
    #!/bin/bash
    for SERVICES in kube-proxy kubelet docker; do
        systemctl restart $SERVICES
        systemctl enable $SERVICES
        systemctl status $SERVICES
    done
    
    [root@node1 kubernetes]# bash start_services.sh
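
    Once the node services are running, check from the master that node1 has registered with the apiserver (a quick verification, not in the original steps):

    [root@master ~]# kubectl get nodes
    # node1 should be listed, and should reach STATUS Ready once
    # kubelet has connected to http://master:8080.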
    

    7. Create the Overlay Network: Flannel

    7.1 Install Flannel

    Run the following command on both the master and the node to install it:

    [root@master ~]# yum install flannel
    

    7.2 Configure Flannel

    On both the master and the node, edit /etc/sysconfig/flanneld and adjust the configuration:

    [root@master ~]# vi /etc/sysconfig/flanneld
    
    # Flanneld configuration options
    
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
    

    7.3 Configure the Flannel Key in etcd

    Flannel stores its configuration in etcd so that all Flannel instances see a consistent configuration, which means the following key must be created in etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.)

    [root@master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
    { "Network": "10.0.0.0/16" }
    

    7.4 Start the Services

    After starting Flannel, Docker and the Kubernetes services must be restarted in order.

    • On the master:
    systemctl enable flanneld.service 
    systemctl start flanneld.service 
    systemctl restart docker
    systemctl restart kube-apiserver.service
    systemctl restart kube-controller-manager.service
    systemctl restart kube-scheduler.service
    
    • On the node:
    systemctl enable flanneld.service 
    systemctl start flanneld.service 
    systemctl restart docker
    systemctl restart kubelet.service
    systemctl restart kube-proxy.service
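
    After these restarts, flanneld should have leased one /24 per host out of 10.0.0.0/16, and each host's docker0 should sit inside its flannel subnet. A verification sketch (the exact lease keys will differ per environment):

    [root@master ~]# etcdctl ls /atomic.io/network/subnets
    # one lease key per host, of the form /atomic.io/network/subnets/10.0.X.0-24
    [root@master ~]# ip addr show flannel0
    [root@master ~]# ip addr show docker0
    # docker0's address should fall inside flannel0's subnet on the same host.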
    

    Tips: problems encountered along the way

    Q1. When first attempting this deployment on Red Hat, yum kept prompting for subscription registration, and even after removing the repo files under /etc/yum.repos.d/, the official repo files were regenerated the next time yum ran.

    [Figure in the original post: yum registration prompt]

    Some digging showed that this comes from Red Hat's bundled subscription-manager plugin, the Red Hat Subscription Manager, which is what keeps demanding registration. It can be disabled through its configuration file at /etc/yum/pluginconf.d/subscription-manager.conf, as sketched below.
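
    A sketch of the fix; the file ships with enabled=1, and flipping it to 0 disables the plugin and the registration prompts:

    [root@master ~]# vi /etc/yum/pluginconf.d/subscription-manager.conf
    [main]
    enabled=0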

    Q2. etcd refused to start. Checking the logs with journalctl -xe showed the error "When listening on specific address(es)".

    The cause was an incorrect ETCD_ADVERTISE_CLIENT_URLS setting in /etc/etcd/etcd.conf. See section 4, Deploy etcd, for the correct configuration.
