Manually Deploying a Kubernetes Cluster


By 任总 | Published 2019-02-22 23:52

    I. Deployment Key Points

    Test environment
    • A single Master node and a single etcd instance are sufficient;
    • The number of Node hosts depends on demand;
    • A storage system such as NFS or GlusterFS.
    Production environment
    • A highly available etcd cluster built from 3, 5, or 7 nodes;
    • Highly available Masters:
    • kube-apiserver is stateless, so multiple instances can run in parallel;
    • Redundancy across instances can be achieved by floating a VIP with keepalived;
    • Alternatively, put HAProxy or Nginx in front of the instances as a reverse proxy, and use keepalived to make the proxy itself redundant (a minimal sketch follows this list);
    • kube-scheduler and kube-controller-manager can each have only one active instance, but may have several standbys;
    • Both ship with leader election built in, enabled by default;
    • Multiple Node hosts; the more there are, the stronger the redundancy;
      Available storage options include Ceph, GlusterFS, iSCSI, FC SAN, and various cloud storage services.
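
    A minimal haproxy.cfg sketch of such a reverse proxy fronting three kube-apiserver instances (illustrative only; the backend addresses are this lab's masters, and keepalived would float a VIP between two such proxies):

    frontend kube-apiserver
        bind *:6443
        mode tcp
        default_backend apiservers

    backend apiservers
        mode tcp
        balance roundrobin
        option tcp-check
        server master01 192.168.1.64:6443 check
        server master02 192.168.1.65:6443 check
        server master03 192.168.1.66:6443 check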

    Deployment method 1

    • The Kubernetes components are deployed manually as system daemons; the drawback is that upgrades and updates are cumbersome.


      (Figure: daemon-based deployment)

    Deployment method 2

    • Deployment with kubeadm: the components run as Pods on Kubernetes itself; since version 1.13 this is considered production-ready, and upgrades and updates are relatively easy.


      (Figure: kubeadm-based deployment)

    II. Cluster Authentication

    1. SSL communication within the etcd cluster

    etcd members and their clients communicate over a REST protocol; to secure this traffic, SSL-authenticated communication is used. etcd listens on two ports: 2380 for peer-to-peer communication between etcd members, and 2379 for clients such as kube-apiserver.

    • etcd members are peers; they communicate with each other using peer-type certificates (Peer Cert) issued by the etcd CA;
    • kube-apiserver communicates with the etcd cluster using a client certificate, also issued by the etcd CA.


      (Figure: etcd-ca)

    2. SSL communication with kube-apiserver

    kube-apiserver presents a server certificate to the other components, and each component presents a client certificate to kube-apiserver; all of these certificates are issued by the kubernetes CA.


    (Figure: kubernetes-ca)

    3. SSL communication for the front proxy (aggregation layer)

    • When the resources built into kube-apiserver are not enough, users can add custom extension API servers (extension-apiserver); clients, however, only talk to the default kube-apiserver and cannot reach a custom extension-apiserver directly;
    • To reach a custom extension-apiserver, requests go through the kube-aggregator, which can proxy for both kube-apiserver and the extension-apiserver;
    • When the kube-aggregator talks to a custom extension-apiserver, it uses certificates issued by front-proxy-ca (the relevant kube-apiserver flags are sketched after the figure below).


      (Figure: front-proxy-ca)
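
    A hedged sketch of the kube-apiserver flags that wire up this front proxy / aggregation layer; the paths mirror the pki layout generated later in this article, and the allowed name assumes the client certificate's CN is front-proxy-client:

        --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
        --requestheader-allowed-names=front-proxy-client
        --requestheader-group-headers=X-Remote-Group
        --requestheader-username-headers=X-Remote-User
        --requestheader-extra-headers-prefix=X-Remote-Extra-
        --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
        --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key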

    4. SSL communication for the Node components

    The kubelet on each node automatically generates a private subordinate CA and uses it to issue the serving certificate for the node's own kubelet API endpoint.


    (Figure: node SSL communication)

    III. Deployment

    If a cluster already exists on these machines, reset it first with kubeadm reset.

    (Figure: lab environment deployment diagram)
    Three combined master and etcd nodes:

    192.168.1.64
    192.168.1.65
    192.168.1.66

    Two worker (node) hosts:

    192.168.1.67
    192.168.1.68

    • On all cluster nodes, disable SELinux, disable the firewall, and synchronize the time.

    Master & etcd node configuration

    1. Configure host name resolution on all cluster nodes

      [root@master01 ~]# vim /etc/hosts
    
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.1.64 master01.hehe.com etcd01.hehe.com  mykube-api.hehe.com  master01 etcd01  mykube-api
    192.168.1.65 master02.hehe.com etcd02.hehe.com master02 etcd02
    192.168.1.66 master03.hehe.com etcd03.hehe.com master03 etcd03
    192.168.1.67 k8s-node01.hehe.com k8s-node01
    192.168.1.68 k8s-node02.hehe.com k8s-node02
    
    
    # Distribute the hosts file to the other nodes
    [root@master01 ~]# scp /etc/hosts master02:/etc/
    
    [root@master01 ~]# scp /etc/hosts master03:/etc/
    
    [root@master01 ~]# scp /etc/hosts k8s-node01:/etc/
    
    [root@master01 ~]# scp /etc/hosts k8s-node02:/etc/
    

    2. Configure the etcd cluster

    # Install etcd (and git) on all three master nodes
    [root@master01 ~]# yum install etcd git -y
    
    # Clone the certificate-generation helper
    [root@master01 ~]# git clone https://github.com/iKubernetes/k8s-certs-generator.git
    [root@master01 ~]# cd k8s-certs-generator/
    
    # Run the shell script to generate the etcd certificates
    [root@master01 k8s-certs-generator]# bash gencerts.sh  etcd
    Enter Domain Name [ilinux.io]: hehe.com    # enter the domain name
    [root@master01 k8s-certs-generator]# tree etcd
    etcd
    ├── patches
    │   └── etcd-client-cert.patch
    └── pki
       ├── apiserver-etcd-client.crt
       ├── apiserver-etcd-client.key
       ├── ca.crt
       ├── ca.key
       ├── client.crt
       ├── client.key
       ├── peer.crt
       ├── peer.key
       ├── server.crt
       └── server.key
    
    # Run the script again to generate the Kubernetes certificates
    [root@master01 k8s-certs-generator]# bash gencerts.sh k8s
    Enter Domain Name [ilinux.io]: hehe.com   # enter the domain name
    Enter Kubernetes Cluster Name [kubernetes]: mykube   # enter the cluster name
    Enter the IP Address in default namespace of the Kubernetes API Server[10.96.0.1]:   # press Enter to accept the default
    Enter Master servers name[master01 master02 master03]: master01 master02 master03   # enter the names of the master nodes
    # Inspect the generated certificate tree
    [root@master01 k8s-certs-generator]# tree kubernetes/
    kubernetes/
    ├── CA
    │   ├── ca.crt
    │   └── ca.key
    ├── front-proxy
    │   ├── front-proxy-ca.crt
    │   ├── front-proxy-ca.key
    │   ├── front-proxy-client.crt
    │   └── front-proxy-client.key
    ├── ingress
    │   ├── ingress-server.crt
    │   ├── ingress-server.key
    │   └── patches
    │       └── ingress-tls.patch
    ├── kubelet
    │   ├── auth
    │   │   ├── bootstrap.conf
    │   │   └── kube-proxy.conf
    │   └── pki
    │       ├── ca.crt
    │       ├── kube-proxy.crt
    │       └── kube-proxy.key
    ├── master01
    │   ├── auth
    │   │   ├── admin.conf
    │   │   ├── controller-manager.conf
    │   │   └── scheduler.conf
    │   ├── pki
    │   │   ├── apiserver.crt
    │   │   ├── apiserver-etcd-client.crt
    │   │   ├── apiserver-etcd-client.key
    │   │   ├── apiserver.key
    │   │   ├── apiserver-kubelet-client.crt
    │   │   ├── apiserver-kubelet-client.key
    │   │   ├── ca.crt
    │   │   ├── ca.key
    │   │   ├── front-proxy-ca.crt
    │   │   ├── front-proxy-ca.key
    │   │   ├── front-proxy-client.crt
    │   │   ├── front-proxy-client.key
    │   │   ├── kube-controller-manager.crt
    │   │   ├── kube-controller-manager.key
    │   │   ├── kube-scheduler.crt
    │   │   ├── kube-scheduler.key
    │   │   ├── sa.key
    │   │   └── sa.pub
    │   └── token.csv
    ├── master02
    │   ├── auth
    │   │   ├── admin.conf
    │   │   ├── controller-manager.conf
    │   │   └── scheduler.conf
    │   ├── pki
    │   │   ├── apiserver.crt
    │   │   ├── apiserver-etcd-client.crt
    │   │   ├── apiserver-etcd-client.key
    │   │   ├── apiserver.key
    │   │   ├── apiserver-kubelet-client.crt
    │   │   ├── apiserver-kubelet-client.key
    │   │   ├── ca.crt
    │   │   ├── ca.key
    │   │   ├── front-proxy-ca.crt
    │   │   ├── front-proxy-ca.key
    │   │   ├── front-proxy-client.crt
    │   │   ├── front-proxy-client.key
    │   │   ├── kube-controller-manager.crt
    │   │   ├── kube-controller-manager.key
    │   │   ├── kube-scheduler.crt
    │   │   ├── kube-scheduler.key
    │   │   ├── sa.key
    │   │   └── sa.pub
    │   └── token.csv
    └── master03
       ├── auth
       │   ├── admin.conf
       │   ├── controller-manager.conf
       │   └── scheduler.conf
       ├── pki
       │   ├── apiserver.crt
       │   ├── apiserver-etcd-client.crt
       │   ├── apiserver-etcd-client.key
       │   ├── apiserver.key
       │   ├── apiserver-kubelet-client.crt
       │   ├── apiserver-kubelet-client.key
       │   ├── ca.crt
       │   ├── ca.key
       │   ├── front-proxy-ca.crt
       │   ├── front-proxy-ca.key
       │   ├── front-proxy-client.crt
       │   ├── front-proxy-client.key
       │   ├── kube-controller-manager.crt
       │   ├── kube-controller-manager.key
       │   ├── kube-scheduler.crt
       │   ├── kube-scheduler.key
       │   ├── sa.key
       │   └── sa.pub
       └── token.csv
    
    16 directories, 80 files
    # Copy the etcd PKI directory to each etcd node
    [root@master01 ~]# cp -rp k8s-certs-generator/etcd/pki /etc/etcd/
    [root@master01 ~]# scp -rp k8s-certs-generator/etcd/pki master02:/etc/etcd/
    [root@master01 ~]# scp -rp k8s-certs-generator/etcd/pki  master03:/etc/etcd/
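
    As an optional sanity check (not part of the original steps), the generated certificates can be inspected with openssl, for example:

    # Confirm the SANs on the etcd server certificate and that the peer/client certs verify against the CA
    [root@master01 ~]# openssl x509 -in /etc/etcd/pki/server.crt -noout -text | grep -A1 "Subject Alternative Name"
    [root@master01 ~]# openssl verify -CAfile /etc/etcd/pki/ca.crt /etc/etcd/pki/peer.crt /etc/etcd/pki/client.crt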
    

    3. Download the etcd configuration template

    # Clone the configuration templates
    [root@master01 ~]# git clone https://github.com/iKubernetes/k8s-bin-inst.git
    [root@master01 ~]# vim k8s-bin-inst/etcd/etcd.conf 
    ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.1.64:2380"  # this node's own address; change on the other nodes
    ETCD_LISTEN_CLIENT_URLS="https://192.168.1.64:2379"    # change on the other nodes
    ETCD_NAME="master01.hehe.com"             # change on the other nodes
    ETCD_SNAPSHOT_COUNT="100000"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://master01.hehe.com:2380"     # change on the other nodes
    ETCD_ADVERTISE_CLIENT_URLS="https://master01.hehe.com:2379"
    ETCD_INITIAL_CLUSTER="master01.hehe.com=https://master01.hehe.com:2380,master02.hehe.com=https://master02.hehe.com:2380,master03.hehe.com=https://master03.hehe.com:2380"  # cluster members
    ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
    ETCD_KEY_FILE="/etc/etcd/pki/server.key"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
    ETCD_AUTO_TLS="false"
    ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
    ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
    ETCD_PEER_AUTO_TLS="false"   # do not auto-generate peer certificates
    
    # Copy the configuration file to each etcd node
    [root@master01 ~]# cp k8s-bin-inst/etcd/etcd.conf /etc/etcd/
    [root@master01 ~]# scp k8s-bin-inst/etcd/etcd.conf master02:/etc/etcd/
    # then edit etcd.conf on master02 as noted above
    [root@master01 ~]# scp k8s-bin-inst/etcd/etcd.conf master03:/etc/etcd/
    # then edit etcd.conf on master03 as noted above
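
    Since the per-node edits are mechanical, they can also be scripted; a rough sketch for master02 (the field list follows the comments in the template above, and adjusting ETCD_ADVERTISE_CLIENT_URLS here is an assumption that each member should advertise its own name; repeat with master03 values on the third node):

    [root@master02 ~]# sed -i \
        -e 's|192.168.1.64|192.168.1.65|g' \
        -e 's|ETCD_NAME="master01.hehe.com"|ETCD_NAME="master02.hehe.com"|' \
        -e 's|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS="https://master02.hehe.com:2380"|' \
        -e 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="https://master02.hehe.com:2379"|' \
        /etc/etcd/etcd.conf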
    

    4. Start the etcd service on all three nodes

    [root@master01 ~]# systemctl start etcd   # start etcd on all three cluster members
    [root@master01 ~]# ss -tnl
    State       Recv-Q Send-Q    Local Address:Port                   Peer Address:Port              
    LISTEN      0      128        192.168.1.64:2379                              *:*                  
    LISTEN      0      128        192.168.1.64:2380                              *:*   
    

    5. Check cluster health using the client certificate

    [root@master01 ~]# etcdctl --key-file=/etc/etcd/pki/client.key  --cert-file=/etc/etcd/pki/client.crt --ca-file=/etc/etcd/pki/ca.crt --endpoints="https://master01.hehe.com:2379" cluster-health
    member 8023c12a8fbbe412 is healthy: got healthy result from https://master03.hehe.com:2379
    member 9f3c9261bfce01a1 is healthy: got healthy result from https://master02.hehe.com:2379
    member d593c5f5c648bc69 is healthy: got healthy result from https://master01.hehe.com:2379
    cluster is healthy
    
    

    6. Download Kubernetes 1.13 (resolve any download/network issues yourself)

    # Download
    [root@master01 ~]# wget http://www.ik8s.io/kubernetes/v1.13.0/kubernetes-server-linux-amd64.tar.gz
    
    # Extract to the target path
    [root@master01 ~]# tar xf kubernetes-server-linux-amd64.tar.gz  -C /usr/local/
    
    # Distribute to the other master nodes
    [root@master01 ~]# scp -r  /usr/local/kubernetes   master02:/usr/local/
    [root@master01 ~]# scp -r  /usr/local/kubernetes   master03:/usr/local/
    
    # The master configuration templates look like this
    [root@master01 ~]# tree k8s-bin-inst/master/
    k8s-bin-inst/master/
    ├── etc
    │   └── kubernetes
    │       ├── apiserver
    │       ├── config
    │       ├── controller-manager
    │       └── scheduler
    └── unit-files
        ├── kube-apiserver.service
        ├── kube-controller-manager.service
        └── kube-scheduler.service
    
    3 directories, 7 files
    
    # Edit the apiserver configuration
    [root@master01 ~]# vim k8s-bin-inst/master/etc/kubernetes/apiserver 
    ..........
         KUBE_API_ADDRESS="--advertise-address=0.0.0.0"
         # The port on the local server to listen on.
         KUBE_API_PORT="--secure-port=6443 --insecure-port=0"
         # Comma separated list of nodes in the etcd cluster
         KUBE_ETCD_SERVERS="--etcd-servers=https://master01.hehe.com:2379,https://master02.hehe.com:2379,https://master03.hehe.com:2379"
        # Address range to use for services
        KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"
        # default admission control policies
        KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NodeRestriction"
    ...........
    
    # Create the configuration directory on all three nodes
    [root@master01 ~]# mkdir /etc/kubernetes
    [root@master02~]# mkdir /etc/kubernetes
    [root@master03~]# mkdir /etc/kubernetes
    
    # Distribute the configuration files
    [root@master01 ~]# cp k8s-bin-inst/master/etc/kubernetes/* /etc/kubernetes/
    [root@master01 ~]# scp k8s-bin-inst/master/etc/kubernetes/* master02:/etc/kubernetes/
    [root@master01 ~]# scp k8s-bin-inst/master/etc/kubernetes/* master03:/etc/kubernetes/
    
    # Distribute the certificate files
    [root@master01 ~]# cp -rp k8s-certs-generator/kubernetes/master01/*  /etc/kubernetes/
    [root@master01 ~]# scp -rp k8s-certs-generator/kubernetes/master02/*  master02:/etc/kubernetes/
    [root@master01 ~]# scp -rp k8s-certs-generator/kubernetes/master03/*  master03:/etc/kubernetes/
    
    # Distribute the systemd unit files so the components can be managed by systemd
    [root@master01 ~]# cp k8s-bin-inst/master/unit-files/kube-*  /usr/lib/systemd/system/
    [root@master01 ~]# scp k8s-bin-inst/master/unit-files/kube-* master02:/usr/lib/systemd/system/
    [root@master01 ~]# scp k8s-bin-inst/master/unit-files/kube-* master03:/usr/lib/systemd/system/
    

    7. Reload systemd on all three nodes

    [root@master01 ~]# systemctl daemon-reload
    [root@master02 ~]# systemctl daemon-reload
    [root@master03 ~]# systemctl daemon-reload
    
    

    8. Set the PATH environment variable on all three nodes via a profile script

    [root@master01 ~]# vim /etc/profile.d/k8s.sh
    export PATH=$PATH:/usr/local/kubernetes/server/bin
    
    # Distribute the script
    [root@master01 ~]# scp /etc/profile.d/k8s.sh master02:/etc/profile.d/
    [root@master01 ~]# scp /etc/profile.d/k8s.sh master03:/etc/profile.d/
    
    # Source the script
    [root@master01 ~]# . /etc/profile.d/k8s.sh
    [root@master02 ~]# . /etc/profile.d/k8s.sh
    [root@master03 ~]# . /etc/profile.d/k8s.sh
    
    
    

    9. Create the kube user on all three nodes

    [root@master01 ~]# useradd -r kube
    [root@master01 ~]# mkdir /var/run/kubernetes
    [root@master01 ~]# chown kube.kube /var/run/kubernetes
    
    

    10. Start kube-apiserver on all three nodes

    [root@master01 ~]# systemctl start kube-apiserver
    [root@master02~]# systemctl start kube-apiserver
    [root@master03 ~]# systemctl start kube-apiserver
    

    11. Set up kubectl on all three nodes

    # Create the kubeconfig directory
    [root@master01 ~]# mkdir .kube 
    [root@master01 ~]# cp /etc/kubernetes/auth/admin.conf  .kube/config
    [root@master01 ~]# . /etc/profile.d/k8s.sh   # load the PATH script
    # Test
    [root@master01 ~]# kubectl get pods
    No resources found.
    

    12. On master01, create a ClusterRoleBinding granting the bootstrap user the permissions it needs

    [root@master01 ~]# cat /etc/kubernetes/token.csv 
    d5c74f.5d1a642f1a6e5edb,"system:bootstrapper",10001,"system:bootstrappers"
    
    # Bind the bootstrapper user from token.csv to system:node-bootstrapper so it can bootstrap the node kubelets
    [root@master01 ~]# kubectl create clusterrolebinding system:bootstrapper --user=system:bootstrapper --clusterrole=system:node-bootstrapper
    clusterrolebinding.rbac.authorization.k8s.io/system:bootstrapper created
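
    Each line of token.csv has the form token,user,uid,"groups". The generator already embedded this token in kubelet/auth/bootstrap.conf, but for reference, an equivalent bootstrap kubeconfig could be assembled by hand roughly as follows (a sketch using this lab's values; the API server address assumes the mykube-api.hehe.com entry from /etc/hosts):

    [root@master01 ~]# kubectl config set-cluster mykube \
        --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true \
        --server=https://mykube-api.hehe.com:6443 --kubeconfig=bootstrap.conf
    [root@master01 ~]# kubectl config set-credentials system:bootstrapper \
        --token=d5c74f.5d1a642f1a6e5edb --kubeconfig=bootstrap.conf
    [root@master01 ~]# kubectl config set-context default --cluster=mykube \
        --user=system:bootstrapper --kubeconfig=bootstrap.conf
    [root@master01 ~]# kubectl config use-context default --kubeconfig=bootstrap.conf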
    

    13. Start kube-controller-manager on all three nodes

    # Review the configuration file
    [root@master01 ~]# cat /etc/kubernetes/controller-manager 
    ###
    # The following values are used to configure the kubernetes controller-manager
    
    # defaults from config and apiserver should be adequate
    
    # Add your own!
    KUBE_CONTROLLER_MANAGER_ARGS="--bind-address=127.0.0.1 \     # bind address
        --allocate-node-cidrs=true \
        --authentication-kubeconfig=/etc/kubernetes/auth/controller-manager.conf \
        --authorization-kubeconfig=/etc/kubernetes/auth/controller-manager.conf \
        --client-ca-file=/etc/kubernetes/pki/ca.crt \
        --cluster-cidr=10.244.0.0/16 \    # the pod CIDR used by flannel
        --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
        --cluster-signing-key-file=/etc/kubernetes/pki/ca.key \
        --controllers=*,bootstrapsigner,tokencleaner \
        --kubeconfig=/etc/kubernetes/auth/controller-manager.conf \
        --leader-elect=true \
        --node-cidr-mask-size=24 \
        --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
        --root-ca-file=/etc/kubernetes/pki/ca.crt \
        --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
        --use-service-account-credentials=true"
    
    # Start
    [root@master01 ~]# systemctl start kube-controller-manager
    [root@master02 ~]# systemctl start kube-controller-manager
    [root@master03~]# systemctl start kube-controller-manager
    

    14. Start kube-scheduler on all three nodes

    # Review the kube-scheduler configuration file
    [root@master01 ~]# cat /etc/kubernetes/scheduler 
    ###
    # kubernetes scheduler config
    
    # default config should be adequate
    
    # Add your own!
    KUBE_SCHEDULER_ARGS="--address=127.0.0.1 \
        --kubeconfig=/etc/kubernetes/auth/scheduler.conf \
        --leader-elect=true"
    
    # Start
    [root@master01 ~]# systemctl start kube-scheduler
    [root@master02 ~]# systemctl start kube-scheduler
    [root@master03 ~]# systemctl start kube-scheduler
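
    At this point the control plane should be complete; an optional quick check from any master (each entry for the scheduler, the controller-manager, and the etcd members should report Healthy):

    [root@master01 ~]# kubectl get componentstatuses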
    

    Node configuration

    1. On both node hosts, synchronize the time and install Docker

    # Install docker
    [root@node-67 ~]# yum install docker -y
    # Edit the docker unit file
    [root@k8s-node01 ~]# vim /usr/lib/systemd/system/docker.service 
    [Unit]
    Description=Docker Application Container Engine
    Documentation=http://docs.docker.com
    After=network.target
    Wants=docker-storage-setup.service
    Requires=docker-cleanup.timer
    
    [Service]
    Type=notify
    NotifyAccess=main
    EnvironmentFile=-/run/containers/registries.conf
    EnvironmentFile=-/etc/sysconfig/docker
    EnvironmentFile=-/etc/sysconfig/docker-storage
    EnvironmentFile=-/etc/sysconfig/docker-network
    Environment=GOTRACEBACK=crash
    Environment=DOCKER_HTTP_HOST_COMPAT=1
    Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
    ExecStart=/usr/bin/dockerd-current \
              --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
              --default-runtime=docker-runc \
    # change this to cgroupfs to stay consistent with the kubelet configuration, otherwise kubelet fails to start (this annotation is not part of the actual file)
              --exec-opt native.cgroupdriver=cgroupfs \ 
           ..............
    
    # Sync the docker unit file to node02
    [root@k8s-node01 ~]# scp /usr/lib/systemd/system/docker.service  k8s-node02:/usr/lib/systemd/system/
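
    Depending on the Docker packaging, the same cgroup-driver setting can often be applied through /etc/docker/daemon.json instead of editing the unit file; this is an alternative sketch, not what the original steps do, and it should not be combined with a conflicting flag already passed on the daemon command line:

    # Alternative (assumption): set the cgroup driver via daemon.json
    [root@k8s-node01 ~]# cat /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    [root@k8s-node01 ~]# systemctl daemon-reload && systemctl restart docker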
    

    2. Start Docker on node01 and node02

    [root@k8s-node01 ~]# systemctl daemon-reload
    [root@k8s-node01 ~]# systemctl start docker
    
    # Check docker info
    [root@k8s-node01 ~]# docker info
    

    3. Download the node binaries (kubelet etc.)

    # Download
    [root@k8s-node01 ~]# wget http://www.ik8s.io/kubernetes/v1.13.0/kubernetes-node-linux-amd64.tar.gz
    
    # Extract to the target directory
    [root@k8s-node01 ~]# tar -xf kubernetes-node-linux-amd64.tar.gz -C /usr/local/
    
    # Copy to the other node
    [root@k8s-node01 ~]# scp -rp  /usr/local/kubernetes  k8s-node02:/usr/local/
    
    

    4. The pause image is hosted on Google's registry, so create a script that pulls it from the Aliyun mirror and re-tags it after download

    
    # Create the image-pull script
    [root@k8s-node01 ~]# vim dockerimages_pull.sh
    #!/bin/bash
    images=(  
      # the images below are listed without the "k8s.gcr.io/" prefix; adjust the versions as needed
      #  kube-apiserver:v1.13.0
       # kube-controller-manager:v1.13.0
       # kube-scheduler:v1.13.0
       # kube-proxy:v1.13.0
        pause:3.1
       # etcd:3.2.24
       # coredns:1.2.2
    )
    
    for imageName in ${images[@]} ; do
        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done
    
    # Run the image-pull script
    [root@k8s-node01 ~]# bash dockerimages_pull.sh 
    
    [root@k8s-node01 ~]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    ..............................
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        13 months ago       742 kB
    
    
    

    5. Create the configuration directories and copy the configuration files

    [root@k8s-node01 ~]# mkdir /etc/kubernetes/
    [root@k8s-node02 ~]# mkdir /etc/kubernetes/
    
    # From master01: copy the kubelet certificate/auth files
    [root@master01 ~]# scp -rp  k8s-certs-generator/kubernetes/kubelet/* k8s-node01:/etc/kubernetes/
    [root@master01 ~]# scp -rp  k8s-certs-generator/kubernetes/kubelet/* k8s-node02:/etc/kubernetes/
    
    # From master01: copy the node configuration templates
    [root@master01 ~]# scp -rp  k8s-bin-inst/nodes/etc/kubernetes/* k8s-node01:/etc/kubernetes/
    [root@master01 ~]# scp -rp  k8s-bin-inst/nodes/etc/kubernetes/* k8s-node02:/etc/kubernetes/
    
    # Copy the systemd unit files
    [root@master01 ~]# scp -rp  k8s-bin-inst/nodes/unit-files/*  k8s-node01:/usr/lib/systemd/system/
    [root@master01 ~]# scp -rp  k8s-bin-inst/nodes/unit-files/*  k8s-node02:/usr/lib/systemd/system/
    
    # Copy the kubelet/kube-proxy config files under /var/lib
    [root@master01 ~]# scp -rp  k8s-bin-inst/nodes/var/lib/kube*  k8s-node01:/var/lib/
    [root@master01 ~]# scp -rp  k8s-bin-inst/nodes/var/lib/kube*  k8s-node02:/var/lib/
    
    

    6. Edit the kubelet configuration on both nodes

    [root@k8s-node01 ~]# vim /etc/kubernetes/kubelet 
    
    ###
    # kubernetes kubelet config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"  # listen address
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # Add your own!
    KUBELET_ARGS="--network-plugin=cni \     # use CNI as the network plugin
        --config=/var/lib/kubelet/config.yaml \    # path of the kubelet config file
        --kubeconfig=/etc/kubernetes/auth/kubelet.conf \   # kubeconfig used to talk to the apiserver
        --bootstrap-kubeconfig=/etc/kubernetes/auth/bootstrap.conf"  # bootstrap kubeconfig path
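
    For reference, /var/lib/kubelet/config.yaml (copied from the k8s-bin-inst templates in step 5) is a KubeletConfiguration object; a minimal illustrative excerpt of the fields most relevant here (the real file in the repo may differ, so treat these values as assumptions):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs           # must match docker's native.cgroupdriver setting above
    clusterDNS:
    - 10.96.0.10                     # the CoreDNS service IP deployed later
    clusterDomain: cluster.local
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt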
    

    7. Download the CNI network plugins from https://github.com/containernetworking/plugins/releases

    [root@k8s-node01 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.4/cni-plugins-amd64-v0.7.4.tgz
    
    # Create the target directory and extract
    [root@k8s-node01 ~]# mkdir -p /opt/cni/bin
    [root@k8s-node01 ~]# tar -xf cni-plugins-amd64-v0.7.4.tgz -C /opt/cni/bin/
    
    [root@k8s-node01 ~]# ls /opt/cni/bin/
    bridge  flannel      host-local  loopback  portmap  sample  vlan
    dhcp    host-device  ipvlan      macvlan   ptp      tuning
    
    # Copy to the other node
    [root@k8s-node01 ~]# scp -rp /opt/cni/bin/*  k8s-node02:/opt/cni/bin
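
    The kubelet reports NotReady until a CNI network configuration shows up under /etc/cni/net.d; with the flannel deployment applied later, that file is written automatically by the flannel pod and typically looks roughly like this (illustrative, not copied from the manifest):

    [root@k8s-node01 ~]# cat /etc/cni/net.d/10-flannel.conflist
    {
      "name": "cbr0",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }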
    
    

    8. Start kubelet on both nodes

    [root@k8s-node01 ~]# systemctl start kubelet
    [root@k8s-node02 ~]# systemctl start kubelet
    
    

    9. The nodes authenticate to the masters with certificates, so sign the node CSRs on master01

    # List pending certificate signing requests
    [root@master01 ~]# kubectl get csr
    NAME                                                   AGE   REQUESTOR             CONDITION
    node-csr-7MMtcGlQbb8KLOrhY-M9CX8Q8QhUuLs0M_sivCFtwEI   12m   system:bootstrapper   Pending
    node-csr-OzEqtx1uTXyZ7gLOzkeh5Yr2hf8BQR_-e6iSGecyN1c   15m   system:bootstrapper   Pending
    
    # Approve the certificates
    [root@master01 ~]# kubectl certificate approve node-csr-7MMtcGlQbb8KLOrhY-M9CX8Q8QhUuLs0M_sivCFtwEI
    [root@master01 ~]# kubectl certificate approve node-csr-OzEqtx1uTXyZ7gLOzkeh5Yr2hf8BQR_-e6iSGecyN1c
    
    # Check that the certificates were issued
    [root@master01 ~]# kubectl get csr
    NAME                                                   AGE   REQUESTOR             CONDITION
    node-csr-7MMtcGlQbb8KLOrhY-M9CX8Q8QhUuLs0M_sivCFtwEI   19m   system:bootstrapper   Approved,Issued
    node-csr-OzEqtx1uTXyZ7gLOzkeh5Yr2hf8BQR_-e6iSGecyN1c   22m   system:bootstrapper   Approved,Issued
    
    # Check node status
    [root@master01 ~]# kubectl get nodes
    NAME         STATUS     ROLES    AGE     VERSION
    k8s-node01   NotReady   <none>   6m39s   v1.13.0
    k8s-node02   NotReady   <none>   25s     v1.13.0
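
    If there are many pending CSRs, they can also be approved in one go (an optional shortcut, not used in the original steps):

    [root@master01 ~]# kubectl get csr -o name | xargs kubectl certificate approve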
    
    

    10. Load the ipvs kernel modules on both nodes

    Enable the ipvs kernel modules: create the module-loading script /etc/sysconfig/modules/ipvs.modules so that the modules are loaded automatically at boot.

    [root@k8s-node01 ~]# vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
    for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*");do
        /sbin/modinfo -F filename $i &> /dev/null
        if [ $? -eq 0 ]; then
           /sbin/modprobe $i
        fi
    done
    [root@k8s-node01 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules 
    
    # Run the script
    [root@k8s-node01 ~]# bash /etc/sysconfig/modules/ipvs.modules 
    # Copy the script to the other node
    [root@k8s-node01 ~]# scp  /etc/sysconfig/modules/ipvs.modules k8s-node02:/etc/sysconfig/modules/ipvs.modules 
    
    [root@k8s-node01 ~]# lsmod | grep ip_vs
    ip_vs_wrr              12697  0 
    ip_vs_wlc              12519  0 
    ip_vs_sh               12688  0 
    ip_vs_sed              12519  0 
    ip_vs_rr               12600  0 
    ip_vs_pe_sip           12697  0 
    nf_conntrack_sip       33860  1 ip_vs_pe_sip
    ip_vs_nq               12516  0 
    ip_vs_lc               12516  0 
    ip_vs_lblcr            12922  0 
    ip_vs_lblc             12819  0 
    ip_vs_ftp              13079  0 
    ip_vs_dh               12688  0 
    ip_vs                 141092  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
    nf_nat                 26787  3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
    nf_conntrack          133387  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_sip,nf_conntrack_ipv4
    libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
    

    11. Start kube-proxy on both nodes

    [root@k8s-node01 ~]# systemctl start kube-proxy
    [root@k8s-node02 ~]# systemctl start kube-proxy
    

    12. Check the ipvs state and rules on the nodes

    # Install the ipvs management tool
    [root@k8s-node01 ~]# yum install ipvsadm -y
    
    # List the ipvs rules
    [root@k8s-node01 ~]# ipvsadm -Ln
    

    13. Deploy the flannel network as pods

    # Apply the flannel network manifest
    [root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
    
    [root@master01 ~]#  kubectl get pods -n kube-system
    NAME                          READY   STATUS             RESTARTS   AGE
    kube-flannel-ds-amd64-fswxh   1/1     Running            1          8m40s
    kube-flannel-ds-amd64-lhjnq   0/1     CrashLoopBackOff   6          8m40s
    
    # Both nodes should now be Ready
    [root@master01 ~]# kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    k8s-node01   Ready    <none>   30m   v1.13.0
    k8s-node02   Ready    <none>   23m   v1.13.0
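
    If a flannel pod stays in CrashLoopBackOff as in the listing above, inspecting it from a master is the first step (the pod name is the one shown earlier):

    [root@master01 ~]# kubectl -n kube-system logs kube-flannel-ds-amd64-lhjnq
    [root@master01 ~]# kubectl -n kube-system describe pod kube-flannel-ds-amd64-lhjnq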
    
    

    14. Deploy the DNS add-on (CoreDNS)

    On a master node (master01 here):

    # Create a working directory
    [root@master01 ~]# mkdir coredns && cd coredns
    
    # Download the CoreDNS manifest template
    [root@master01 coredns]# wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
    # Download the deploy script
    [root@master01 coredns]# wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
    
    # Run the script, passing in the cluster DNS IP and the service CIDR
    [root@master01 coredns]# bash deploy.sh -i 10.96.0.10 -r "10.96.0.0/12" -s -t coredns.yaml.sed | kubectl apply -f -
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    configmap/coredns created
    deployment.apps/coredns created
    service/kube-dns created
    
    # Check the ipvs rules from a node
    [root@k8s-node01 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.96.0.1:443 rr
      -> 192.168.1.65:6443            Masq    1      0          0         
    TCP  10.96.0.10:53 rr
    TCP  10.96.0.10:9153 rr
    UDP  10.96.0.10:53 rr
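
    As a final optional check, in-cluster DNS resolution can be verified with a short-lived busybox pod (busybox:1.28 is used because nslookup misbehaves in newer busybox images):

    [root@master01 ~]# kubectl run dns-test --image=busybox:1.28 --restart=Never -it --rm \
        -- nslookup kubernetes.default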
    
    

    References:
    https://zhuanlan.zhihu.com/p/46341911
    https://www.jianshu.com/p/c92e46e193aa
