k8s Cluster Deployment

Author: 蔡欣圻 | Published 2019-07-26 15:15

1. Environment Preparation

Only two machines are used for now, both running CentOS 7:
(1) k8s-master: 192.168.2.178
(2) k8s-node: 192.168.2.207

2. Node Setup

(1) Set the master's hostname:
hostnamectl --static set-hostname k8s-master
(2) Edit the /etc/hosts file
vim /etc/hosts and add the following entries:

    192.168.2.178 k8s-master
    192.168.2.178 etcd
    192.168.2.178 registry
    192.168.2.207 k8s-node
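The same entries are added on the node machine later, so the step is easy to run twice. A small idempotent helper avoids duplicated lines on re-runs; this is only a sketch, and `add_host` and the `HOSTS_FILE` override are names invented here, not part of the article:

```shell
# Sketch: append a hosts entry only if the name is not already mapped.
# HOSTS_FILE is overridable so the helper can be tried on a scratch file first.
add_host() {
    local file=${HOSTS_FILE:-/etc/hosts}
    grep -qw "$2" "$file" 2>/dev/null || echo "$1 $2" >> "$file"
}

# Usage (on both machines):
#   add_host 192.168.2.178 k8s-master
#   add_host 192.168.2.178 etcd
#   add_host 192.168.2.178 registry
#   add_host 192.168.2.207 k8s-node
```

Running the helper a second time with the same arguments leaves the file unchanged, which makes the setup script safe to repeat.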

(3) Disable the firewall

        systemctl disable firewalld.service
        systemctl stop firewalld.service

3. Install etcd

Install command: yum install etcd -y

Edit etcd's default configuration file /etc/etcd/etcd.conf as follows:

        # [member]
        ETCD_NAME=master
        ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
        #ETCD_WAL_DIR=""
        #ETCD_SNAPSHOT_COUNT="10000"
        #ETCD_HEARTBEAT_INTERVAL="100"
        #ETCD_ELECTION_TIMEOUT="1000"
        #ETCD_LISTEN_PEER_URLS="http://localhost:2380"
        ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
        #ETCD_MAX_SNAPSHOTS="5"
        #ETCD_MAX_WALS="5"
        #ETCD_CORS=""
        #
        #[cluster]
        #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
        # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
        #ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
        #ETCD_INITIAL_CLUSTER_STATE="new"
        #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
        ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
        #ETCD_DISCOVERY=""
        #ETCD_DISCOVERY_SRV=""
        #ETCD_DISCOVERY_FALLBACK="proxy"
        #ETCD_DISCOVERY_PROXY=""
        #ETCD_STRICT_RECONFIG_CHECK="false"
        #ETCD_AUTO_COMPACTION_RETENTION="0"
        #ETCD_ENABLE_V2="true"
        #
        #[proxy]
        #ETCD_PROXY="off"
        #ETCD_PROXY_FAILURE_WAIT="5000"
        #ETCD_PROXY_REFRESH_INTERVAL="30000"
        #ETCD_PROXY_DIAL_TIMEOUT="1000"
        #ETCD_PROXY_WRITE_TIMEOUT="5000"
        #ETCD_PROXY_READ_TIMEOUT="0"
        #
        #[security]
        #ETCD_CERT_FILE=""
        #ETCD_KEY_FILE=""
        #ETCD_CLIENT_CERT_AUTH="false"
        #ETCD_TRUSTED_CA_FILE=""
        #ETCD_AUTO_TLS="false"
        #ETCD_PEER_CERT_FILE=""
        #ETCD_PEER_KEY_FILE=""
        #ETCD_PEER_CLIENT_CERT_AUTH="false"
        #ETCD_PEER_TRUSTED_CA_FILE=""
        #ETCD_PEER_AUTO_TLS="false"
        #
        #[logging]
        #ETCD_DEBUG="false"
        # examples for -log-package-levels etcdserver=WARNING,security=DEBUG
        #ETCD_LOG_PACKAGE_LEVELS=""
        #
        #[profiling]
        #ETCD_ENABLE_PPROF="false"
        #ETCD_METRICS="basic"
        #
        #[auth]
        #ETCD_AUTH_TOKEN="simple"         

4. Start and Verify etcd

(1) Start the etcd service: systemctl start etcd
(2) Check the health of the etcd cluster:

    $ etcdctl -C http://etcd:2379 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
    cluster is healthy
    $ etcdctl -C http://etcd:4001 cluster-health
    member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
    cluster is healthy

5. Install flannel

(1) Install command: yum install flannel
(2) Configure flannel in /etc/sysconfig/flanneld as follows:

    # Flanneld configuration options
 
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
                 
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
                 
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
(3) Configure the flannel key in etcd:

    etcdctl mk /atomic.io/network/config '{"Network":"192.0.0.0/16"}'

Check that the key was set:

    $ etcdctl get /atomic.io/network/config
    { "Network": "192.0.0.0/16" }

To remove the key again:

    $ etcdctl rm /atomic.io/network/config
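A malformed value in this key would leave flannel unable to start, so it can help to sanity-check the JSON payload before writing it. A minimal sketch, assuming `python3` is available (`NET_CONFIG` is just a convenience variable invented here):

```shell
# Validate the flannel network config JSON before storing it in etcd;
# json.tool exits non-zero on malformed input.
NET_CONFIG='{"Network":"192.0.0.0/16"}'
if echo "$NET_CONFIG" | python3 -m json.tool > /dev/null; then
    echo "payload is valid JSON"
fi
# then: etcdctl mk /atomic.io/network/config "$NET_CONFIG"
```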

(4) Start flannel and enable it at boot

      $ systemctl start flanneld.service
      $ systemctl enable flanneld.service

6. Install Docker

(1) Install command: yum install docker -y
(2) Start the docker service: service docker start
(3) Enable docker at boot: chkconfig docker on
(4) Verify that docker installed correctly:

        $ docker version
        Client:
                 Version:         1.13.1
                 API version:     1.26
                 Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
                 Go version:      go1.10.3
                 Git commit:      b2f74b2/1.13.1
                 Built:           Wed May  1 14:55:20 2019
                 OS/Arch:         linux/amd64

        Server:
                 Version:         1.13.1
                 API version:     1.26 (minimum version 1.12)
                 Package version: docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
                 Go version:      go1.10.3
                 Git commit:      b2f74b2/1.13.1
                 Built:           Wed May  1 14:55:20 2019
                 OS/Arch:         linux/amd64
                 Experimental:    false

7. Install Kubernetes

Kubernetes needs the most configuration of all. As outlined earlier, the master must run the following components: kube-apiserver, kube-scheduler, and kube-controller-manager. (If the binaries are not yet present, install them with yum install kubernetes, the same command used on the node below.)
(1) Configure the /etc/kubernetes/apiserver file:

            ###
            # kubernetes system config
            #
            # The following values are used to configure the kube-apiserver
            #
             
            # The address on the local server to listen to.
            KUBE_API_ADDRESS="--address=0.0.0.0"
             
            # The port on the local server to listen on.
            KUBE_API_PORT="--port=8080"
             
            # Port minions listen on
            KUBELET_PORT="--kubelet-port=10250"
             
            # Comma separated list of nodes in the etcd cluster
            KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
             
            # Address range to use for services
            KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
             
            # default admission control policies
            # KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
            KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
             
            # Add your own!
            KUBE_API_ARGS=""
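On CentOS 7, the kube-apiserver systemd unit loads this file (together with /etc/kubernetes/config) as EnvironmentFile entries and expands the `KUBE_*` variables onto the command line. A sketch of how that composition works; `compose_apiserver_cmdline` and its directory argument are conveniences invented here, and the exact variable list in the stock unit file may differ by package version:

```shell
# Sketch: reproduce the command line the kube-apiserver unit will run,
# by sourcing the same EnvironmentFile settings the unit uses.
compose_apiserver_cmdline() {
    local dir=${1:-/etc/kubernetes}
    . "$dir/config"     # KUBE_LOGTOSTDERR, KUBE_LOG_LEVEL, KUBE_ALLOW_PRIV, ...
    . "$dir/apiserver"  # KUBE_API_ADDRESS, KUBE_API_PORT, KUBE_ETCD_SERVERS, ...
    echo /usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT \
        $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS
}
# On the master: compose_apiserver_cmdline
```

Printing the assembled line before starting the service is an easy way to spot a typo in any of the `--flag=value` strings.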
(2) Configure the /etc/kubernetes/config file:
                ###
                # kubernetes system config
                #
                # The following values are used to configure various aspects of all
                # kubernetes services, including
                #
                #   kube-apiserver.service
                #   kube-controller-manager.service
                #   kube-scheduler.service
                #   kubelet.service
                #   kube-proxy.service
                # logging to stderr means we get it in the systemd journal
                KUBE_LOGTOSTDERR="--logtostderr=true"
                 
                # journal message level, 0 is debug
                KUBE_LOG_LEVEL="--v=0"
                 
                # Should this cluster be allowed to run privileged docker containers
                KUBE_ALLOW_PRIV="--allow-privileged=false"
                 
                # How the controller-manager, scheduler, and proxy find the apiserver
                KUBE_MASTER="--master=http://k8s-master:8080"   
(3) Start the k8s components

        $ systemctl start kube-apiserver.service
        $ systemctl start kube-controller-manager.service
        $ systemctl start kube-scheduler.service
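The three starts can be wrapped in one loop that also enables the services at boot (as the article does for flanneld). This is a sketch only; `start_master_components` and the `SYSTEMCTL` dry-run hook are names made up here:

```shell
# Sketch: start and enable the master components in one pass.
# Set SYSTEMCTL=echo to dry-run the loop without a running systemd.
start_master_components() {
    local ctl=${SYSTEMCTL:-systemctl}
    local svc
    for svc in kube-apiserver kube-controller-manager kube-scheduler; do
        "$ctl" start "$svc.service"
        "$ctl" enable "$svc.service"
    done
}
# On the master: start_master_components
```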
(4) Check that they are running

    $ ps -ef | grep kube
    kube      11651      1  1 Jul25 ?       00:13:25 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://etcd:2379 --insecure-bind-address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=192.168.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
    kube      11670      1  4 Jul25 ?       00:54:23 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://k8s-master:8080
    kube      11689      1  0 Jul25 ?       00:04:52 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://k8s-master:8080
    root      27939  10894  0 14:27 pts/0    00:00:00 grep --color=auto kube

This confirms the master components are up and running.

Setting up the k8s-node node on the second machine

1. Environment Preparation

The same two CentOS 7 machines as before:
(1) k8s-master: 192.168.2.178
(2) k8s-node: 192.168.2.207

2. Node Setup

(1) Set the node's hostname:
hostnamectl --static set-hostname k8s-node
(2) Edit the /etc/hosts file
vim /etc/hosts and add the following entries:

    192.168.2.178 k8s-master
    192.168.2.178 etcd
    192.168.2.178 registry
    192.168.2.207 k8s-node

(3) Disable the firewall

        systemctl disable firewalld.service
        systemctl stop firewalld.service

3. Install flannel

(1) Install command: yum install flannel
(2) Configure flannel in /etc/sysconfig/flanneld:

    # Flanneld configuration options
 
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
     
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
                 
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
(3) Start flannel and enable it at boot

     $ systemctl start flanneld.service
     $ systemctl enable flanneld.service

4. Install Docker

(1) Install command: yum install docker -y
(2) Start the docker service: service docker start
(3) Enable docker at boot: chkconfig docker on
(4) Verify with docker version; the output should match what is shown for the master in section 6.

5. Install Kubernetes

(1) Install command: yum install kubernetes
(2) Unlike the master, the node runs the following Kubernetes components: kubelet and kube-proxy.
(3) Configure /etc/kubernetes/config; its contents are identical to the master's /etc/kubernetes/config shown in section 7.
(4) Configure the /etc/kubernetes/kubelet file:
                    ###
                    # kubernetes kubelet (minion) config
                     
                    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
                    KUBELET_ADDRESS="--address=0.0.0.0"
                     
                    # The port for the info server to serve on
                    # KUBELET_PORT="--port=10250"
                     
                    # You may leave this blank to use the actual hostname
                    KUBELET_HOSTNAME="--hostname-override=k8s-node"
                     
                    # location of the api-server
                    KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
                     
                    # pod infrastructure container
                    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
                     
                    # Add your own!
                    KUBELET_ARGS=""
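With so many commented-out lines in these files, a quick filter makes it easy to confirm which settings are actually in effect. A throwaway sketch (`show_effective` is a name invented here):

```shell
# Print only the effective (non-comment, non-blank) settings of a config file.
show_effective() { grep -Ev '^[[:space:]]*(#|$)' "$1"; }
# e.g. show_effective /etc/kubernetes/kubelet
```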
(5) Start the kube services

          $ systemctl start kubelet.service
          $ systemctl start kube-proxy.service
          $ ps -ef | grep kube    # verify the services are running
          root      19728      1  1 Jul25 ?       00:21:04 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://k8s-master:8080 --address=0.0.0.0 --hostname-override=k8s-node --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
          root      19814      1  1 Jul25 ?       00:14:42 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://k8s-master:8080
          root      66824  19065  0 14:38 pts/0    00:00:00 grep --color=auto kube
(6) Verify the cluster status

          # On the master node (k8s-master):
          $ kubectl get endpoints        # endpoint information
          NAME         ENDPOINTS            AGE
          kubernetes   192.168.2.178:6443   22h
          $ kubectl cluster-info         # cluster information
          Kubernetes master is running at http://localhost:8080
          To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
          $ kubectl get nodes            # node status in the cluster
          NAME       STATUS    AGE
          k8s-node   Ready     21h
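For scripted health checks, the `kubectl get nodes` output can be parsed directly. A small sketch; `all_ready` is a function name made up here:

```shell
# Succeed only if every node in `kubectl get nodes --no-headers` is Ready.
all_ready() { awk '$2 != "Ready" { bad = 1 } END { exit bad }'; }
# On the master:
#   kubectl get nodes --no-headers | all_ready && echo "all nodes Ready"
```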
        

That covers the full process of setting up a k8s cluster; it is provided for reference only.

Original link: https://www.haomeiwen.com/subject/ryfdrctx.html