
Hands-on: Installing Kubernetes from Binaries

Author: 凤非飞 | Published 2019-04-15 00:38

    I installed Kubernetes entirely by myself, consulting countless documents along the way. At the start I didn't even know Linux commands or what a script was, so I took plenty of wrong turns and suffered quite a bit, haha. Enough of that; the content below draws on many references, the most important of which are listed here:
    https://blog.csdn.net/qq_25611295/article/details/81912295
    https://blog.csdn.net/doegoo/article/details/80062132
    https://jingyan.baidu.com/article/6b97984dd30cb51ca3b0bf41.html
    https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/03.%E9%83%A8%E7%BD%B2kubectl%E5%91%BD%E4%BB%A4%E8%A1%8C%E5%B7%A5%E5%85%B7.md



    1. Preparation
    Two virtual machines, each with a static IP.

    Hostname   IP               Services installed
    K8S-M1     192.168.88.111   kube-apiserver kube-controller-manager kube-scheduler flannel etcd
    K8S-N1     192.168.88.114   kubelet kube-proxy docker flannel etcd

    1.1 Passwordless SSH from the K8S-M1 node to the other nodes (I have only one node besides 192.168.88.111)
    Run on the K8S-M1 node (192.168.88.111):

    # ssh-keygen
    

    Press Enter three times.

    # ssh-copy-id 192.168.88.114
    

    Enter the password when prompted.
    Then test logging in from this server to the other one:

    # ssh 192.168.88.114
    

    You should now log in without being asked for a password. In my view this is the simplest way to set up passwordless login.

    1.2 Disable the firewall
    Run on all machines:

    systemctl disable --now firewalld NetworkManager
    

    1.3 Disable swap

    swapoff -a && sysctl -w vm.swappiness=0
    sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
    

    1.4 Disable SELinux

    setenforce 0
    sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config
    


    2. Install Docker
    (on all machines)
    2.1 Remove old Docker versions

    sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-selinux \
                      docker-engine-selinux \
                      docker-engine
    

    2.2 Install prerequisite packages

    sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
    

    2.3 Configure the Docker yum repository

    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    

    2.4 Install Docker

    yum install docker-ce
    If it succeeds, the output ends like this:
    Installed:
      docker-ce.x86_64 0:18.03.0.ce-1.el7.centos
    
    Dependency Installed:
      audit-libs-python.x86_64 0:2.7.6-3.el7 checkpolicy.x86_64 0:2.5-4.el7   container-selinux.noarch 2:2.42-1.gitad8f0f7.el7 libcgroup.x86_64 0
      libtool-ltdl.x86_64 0:2.4.2-22.el7_3   pigz.x86_64 0:2.3.3-1.el7.centos policycoreutils-python.x86_64 0:2.5-17.1.el7     python-IPy.noarch
    
    Complete!
    

    2.5 Configure an Aliyun Docker registry mirror
    Set registry-mirrors to your own Aliyun mirror accelerator address.

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://{自已的编码}.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
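    # Optional sanity check (my addition, not from the original post): after the restart,
    # docker info should list the configured mirror under "Registry Mirrors".
    sudo docker info | grep -A 1 "Registry Mirrors"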
    

    2.6 Verify the Docker installation

    sudo docker run hello-world
    Output like the following means the installation succeeded:
    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    9bb5a5d4561a: Pull complete
    Digest: sha256:f5233545e43561214ca4891fd1157e1c3c563316ed8e237750d59bde73361e77
    Status: Downloaded newer image for hello-world:latest
    
    Hello from Docker!
    This message shows that your installation appears to be working correctly.
    
    To generate this message, Docker took the following steps:
     1. The Docker client contacted the Docker daemon.
     2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
        (amd64)
     3. The Docker daemon created a new container from that image which runs the
        executable that produces the output you are currently reading.
     4. The Docker daemon streamed that output to the Docker client, which sent it
        to your terminal.
    
    To try something more ambitious, you can run an Ubuntu container with:
     $ docker run -it ubuntu bash
    
    Share images, automate workflows, and more with a free Docker ID:
     https://hub.docker.com/
    
    For more examples and ideas, visit:
    

    OK, Docker is installed.
    3. Self-signed TLS certificates
    Where each certificate is used: (diagram from the original post omitted)

    Work on the master, i.e. 192.168.88.111.
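
    The steps below copy files into /opt/ssl and /opt/kubernetes/{bin,cfg,ssl}. The original post never shows creating these directories, so here is a minimal sketch, assuming that layout, to create them up front (repeat later on the node):

    mkdir -p /opt/ssl /opt/kubernetes/{bin,cfg,ssl}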
    Install the certificate generation tool cfssl:

    cd /opt/ssl
    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    chmod +x *
    mv cfssl_linux-amd64 /usr/local/bin/cfssl
    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
    

    Write a script to generate the certificates we need:

    [root@localhost ssl]# cat certificate.sh
    # CA (root certificate authority) signing configuration
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    # CSR details for the root CA certificate
    cat > ca-csr.json <<EOF
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "hangzhou",
                "ST": "hangzhou",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
    # Generate the CA certificate with cfssl
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    
    #-----------------------
    
    # Certificate used for apiserver/etcd HTTPS communication
    cat > server-csr.json <<EOF
    {
        "CN": "kubernetes",
        "hosts": [
          "127.0.0.1",
          "192.168.1.6",
          "192.168.1.7",
          "192.168.1.8",
          "10.10.10.1",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
    # Generate the server certificate
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    
    #-----------------------
    
    # Cluster administrator certificate
    cat > admin-csr.json <<EOF
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF
    
    # Generate the admin certificate
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
    
    #-----------------------
    
    # Certificate for kube-proxy
    cat > kube-proxy-csr.json <<EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    
    # Generate the kube-proxy certificate
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    

    Then run the script.
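
    For example (assuming the script was saved as certificate.sh under /opt/ssl, as shown above):

    cd /opt/ssl
    bash certificate.sh
    ls *.pem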
    We only need the generated .pem certificates; everything else can be deleted:

    ls |grep -v pem|xargs -i rm {}
    

    Then copy the certificates to the directory we defined:

    cp server* ca* /opt/kubernetes/ssl/
    

    OK, the TLS certificates are generated; the four we need are kept in /opt/kubernetes/ssl/:

    [root@K8S-M1 ssl]# ls
    ca-key.pem  ca.pem  server-key.pem  server.pem 
    


    4. Deploy the etcd storage cluster
    (Run on the master, i.e. 192.168.88.111.)
    etcd binary download page: https://github.com/coreos/etcd/releases/tag/v3.2.12
    I used version 3.2.
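
    If the archive is not on the machine yet, it can be fetched from that release page; the asset name below matches the tarball used in the next step:

    wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz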

    tar xvf etcd-v3.2.12-linux-amd64.tar.gz
    cd etcd-v3.2.12-linux-amd64
    

    Copy the executables we need to our chosen directory:

    cp etcd  /opt/kubernetes/bin/
    cp etcdctl /opt/kubernetes/bin/
    

    Edit the etcd configuration file:

    vim /opt/kubernetes/cfg/etcd
    
    #[Member]
    ETCD_NAME="etcd01"
    # data directory
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.88.111:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.88.111:2379"
    
    # cluster membership
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.88.111:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.88.111:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.88.111:2380,etcd02=https://192.168.88.114:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    

    Register etcd as a systemd service:

    vim /usr/lib/systemd/system/etcd.service
    
    
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=-/opt/kubernetes/cfg/etcd
    ExecStart=/opt/kubernetes/bin/etcd \
    --name=${ETCD_NAME} \
    --data-dir=${ETCD_DATA_DIR} \
    --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
    --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=new \
    --cert-file=/opt/kubernetes/ssl/server.pem \
    --key-file=/opt/kubernetes/ssl/server-key.pem \
    --peer-cert-file=/opt/kubernetes/ssl/server.pem \
    --peer-key-file=/opt/kubernetes/ssl/server-key.pem \
    --trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
    --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Start the etcd service:

    systemctl start etcd
    

    At this point only the master (192.168.88.111) has etcd running, so starting it will report an error. Check it with journalctl -xe; if the errors look like the ones below (connections to the other members listed in ETCD_INITIAL_CLUSTER are refused), don't worry, this is normal. Once etcd has been installed and started on the other nodes as well, all the members will connect to each other and the cluster will run.

    [root@K8S-M1 ~]# systemctl start etcd
    Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.
    [root@K8S-M1 ~]# journalctl -xe
    4月 13 23:42:44 K8S-M1 etcd[6877]: health check for peer 2409915665a4bc87 could not connect: dial tcp 192.168.88.114:2380: getsockopt: connection refused
    4月 13 23:42:44 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 192
    4月 13 23:42:44 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 193
    4月 13 23:42:44 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 193
    4月 13 23:42:44 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 193
    4月 13 23:42:45 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 193
    4月 13 23:42:45 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 194
    4月 13 23:42:45 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 194
    4月 13 23:42:45 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 194
    4月 13 23:42:46 K8S-M1 etcd[6877]: publish error: etcdserver: request timed out
    4月 13 23:42:47 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 194
    4月 13 23:42:47 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 195
    4月 13 23:42:47 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 195
    4月 13 23:42:47 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 195
    4月 13 23:42:48 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 195
    4月 13 23:42:48 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 196
    4月 13 23:42:48 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 196
    4月 13 23:42:48 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 196
    4月 13 23:42:49 K8S-M1 etcd[6877]: health check for peer 2409915665a4bc87 could not connect: dial tcp 192.168.88.114:2380: getsockopt: connection refused
    4月 13 23:42:50 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 196
    4月 13 23:42:50 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 197
    4月 13 23:42:50 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 197
    4月 13 23:42:50 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 197
    4月 13 23:42:52 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 197
    4月 13 23:42:52 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 198
    4月 13 23:42:52 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 198
    4月 13 23:42:52 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 198
    4月 13 23:42:53 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 198
    4月 13 23:42:53 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 199
    4月 13 23:42:53 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 199
    4月 13 23:42:53 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 199
    4月 13 23:42:53 K8S-M1 etcd[6877]: publish error: etcdserver: request timed out
    4月 13 23:42:54 K8S-M1 etcd[6877]: health check for peer 2409915665a4bc87 could not connect: dial tcp 192.168.88.114:2380: getsockopt: connection refused
    4月 13 23:42:54 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 199
    4月 13 23:42:54 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 200
    4月 13 23:42:54 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 200
    4月 13 23:42:54 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 200
    4月 13 23:42:55 K8S-M1 etcd[6877]: 16c24bbb1130c408 is starting a new election at term 200
    4月 13 23:42:55 K8S-M1 etcd[6877]: 16c24bbb1130c408 became candidate at term 201
    4月 13 23:42:55 K8S-M1 etcd[6877]: 16c24bbb1130c408 received MsgVoteResp from 16c24bbb1130c408 at term 201
    4月 13 23:42:55 K8S-M1 etcd[6877]: 16c24bbb1130c408 [logterm: 1, index: 2] sent MsgVote request to 2409915665a4bc87 at term 201
    
    

    Enable it at boot:

    systemctl enable etcd
    

    etcd is now configured on the master; copy the configuration to the other machine (192.168.88.114):

    rsync -avzP /opt/kubernetes root@192.168.88.114:/opt
    rsync -avzP /usr/lib/systemd/system/etcd.service root@192.168.88.114:/usr/lib/systemd/system/etcd.service
    

    Now switch to each of the other nodes (mine is only 192.168.88.114) and edit /opt/kubernetes/cfg/etcd: change ETCD_NAME, ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS and ETCD_ADVERTISE_CLIENT_URLS to that machine's own IP.

    [root@K8S-N1 ~]# cat /opt/kubernetes/cfg/etcd
    #[Member]
    ETCD_NAME="etcd02"
    # data directory
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.88.114:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.88.114:2379"
    
    # cluster membership
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.88.114:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.88.114:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.88.111:2380,etcd02=https://192.168.88.114:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    

    Start it and enable it at boot:

    systemctl start etcd
    systemctl enable etcd
    

    Now switch back to the master (192.168.88.111) and test etcd:

    cd  /opt/kubernetes/ssl
    /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.88.111:2379,https://192.168.88.114:2379" cluster-health
    If it prints the following, the etcd cluster is healthy:
    member 16c24bbb1130c408 is healthy: got healthy result from https://192.168.88.111:2379
    member 2409915665a4bc87 is healthy: got healthy result from https://192.168.88.114:2379
    cluster is healthy
    You can also run systemctl status etcd on every machine; if it shows active (running), etcd is set up correctly:
    ● etcd.service - Etcd Server
       Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
       Active: active (running) since 日 2019-04-14 09:42:04 CST; 13h ago
     Main PID: 6539 (etcd)
        Tasks: 14
       Memory: 98.7M
       CGroup: /system.slice/etcd.service
               └─6539 /opt/kubernetes/bin/etcd --name=etcd02 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.88.114:2380 --listen-client-urls=https://192.168.88.114:2379,http://127.0.0....
    

    If you see an error like: Aug 20 17:51:06 es1 etcd[2068]: request sent was ignored (cluster ID mismatch: peer[5fe38a6e135d0fde]=cdf818194e3a8c32, local=b37e2d2fbf0626a5)
    delete etcd's data directory to fix it: rm -rf /var/lib/etcd/*



    5. Deploy the Flannel network
    Work on the master first, i.e. 192.168.88.111:
    wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
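
    Unpack the archive first (assuming it was downloaded to the current directory):

    tar xvf flannel-v0.10.0-linux-amd64.tar.gz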

    Copy the extracted executables into our chosen path:

    cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
    

    Create the configuration files.
    We write them straight from the command line with heredocs (EOF):

    # Write the flanneld configuration file
    cat <<EOF >/opt/kubernetes/cfg/flanneld
    
    FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.88.111:2379,https://192.168.88.114:2379 \
    -etcd-cafile=/opt/kubernetes/ssl/ca.pem \
    -etcd-certfile=/opt/kubernetes/ssl/server.pem \
    -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
    
    EOF
    
    # Write the flanneld systemd unit
    cat <<EOF >/usr/lib/systemd/system/flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/kubernetes/cfg/flanneld
    ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
    ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    
    EOF
    
    # Write the allocated overlay network range into etcd for flanneld to use
    cd /opt/kubernetes/ssl
    
    /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.88.111:2379,https://192.168.88.114:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
    

    Test it:

    [root@K8S-M1 ssl]# /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.88.111:2379,https://192.168.88.114:2379" get /coreos.com/network/config
    It should print:
    { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
    

    Start it and enable it at boot:

    [root@K8S-M1 ssl]# systemctl enable flanneld
    Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
    [root@K8S-M1 ssl]# systemctl start flanneld
    

    Check the subnet flanneld was assigned:

    [root@K8S-M1 ssl]# cat /run/flannel/subnet.env
    DOCKER_OPT_BIP="--bip=172.17.60.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=false"
    DOCKER_OPT_MTU="--mtu=1450"
    DOCKER_NETWORK_OPTIONS=" --bip=172.17.60.1/24 --ip-masq=false --mtu=1450"
    

    Edit the Docker systemd unit so Docker picks up flannel's network options:

    cat <<EOF >/usr/lib/systemd/system/docker.service
    
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target firewalld.service
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=/run/flannel/subnet.env
    ExecStart=/usr/bin/dockerd  \$DOCKER_NETWORK_OPTIONS
    ExecReload=/bin/kill -s HUP \$MAINPID
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    TimeoutStartSec=0
    Delegate=yes
    KillMode=process
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s
    
    [Install]
    WantedBy=multi-user.target
    
    EOF
    
    systemctl daemon-reload
    systemctl restart docker
    

    Once this works, distribute the configuration to the node (192.168.88.114):

    rsync -avzP /opt/kubernetes/bin/flanneld /opt/kubernetes/bin/mk-docker-opts.sh root@192.168.88.114:/opt/kubernetes/bin/
    rsync -avzP /opt/kubernetes/cfg/flanneld root@192.168.88.114:/opt/kubernetes/cfg/
    rsync -avzP /usr/lib/systemd/system/flanneld.service root@192.168.88.114:/usr/lib/systemd/system/
    

    Next, make the other node's Docker unit file (/usr/lib/systemd/system/docker.service on 192.168.88.114) the same as the master's (192.168.88.111); the simplest way is to copy the master's over:

    rsync -avzP /usr/lib/systemd/system/docker.service root@192.168.88.114:/usr/lib/systemd/system/docker.service
    

    Then, on the node, start flanneld, enable it at boot, and restart Docker:

    systemctl daemon-reload
    systemctl enable flanneld
    systemctl start flanneld
    systemctl restart docker
    

    To test, run the command below on the master (192.168.88.111), then ping the node's docker0 gateway from the master. If the ping succeeds, flannel is working:

    [root@K8S-M1 ssl]# /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.88.111:2379,https://192.168.88.114:2379" ls /coreos.com/network/subnets
    The output looks something like this (your values will differ):
    /coreos.com/network/subnets/172.17.60.0-24
    /coreos.com/network/subnets/172.17.75.0-24
    
    [root@K8S-M1 ssl]# ping 172.17.75.0
    PING 172.17.75.0 (172.17.75.0) 56(84) bytes of data.
    64 bytes from 172.17.75.0: icmp_seq=1 ttl=64 time=0.527 ms
    
    


    6. Install the kubectl tool and create the node kubeconfig files (this part still confuses me a little; I'll look at other people's write-ups later)

    wget https://dl.k8s.io/v1.12.3/kubernetes-client-linux-amd64.tar.gz
    tar -xzvf kubernetes-client-linux-amd64.tar.gz
    Add kubernetes/client/bin/kubectl to the PATH so it can be used from anywhere; one way to do this is sketched below.
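
    A minimal sketch; placing it in /usr/local/bin is my own choice, not from the original post:

    cp kubernetes/client/bin/kubectl /usr/local/bin/
    chmod +x /usr/local/bin/kubectl
    kubectl version --client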
    

    Run on the master, i.e. 192.168.88.111:

    cd /opt/ssl
    

    Create and run a script that produces the three files we need:
    1. TLS bootstrapping token
    2. kubelet kubeconfig
    3. kube-proxy kubeconfig

    [root@localhost ssl]# cat kubeconfig.sh
    # Create the TLS bootstrapping token
    export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    cat > token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
    
    #----------------------
    
    # Create the kubelet bootstrapping kubeconfig
    export KUBE_APISERVER="https://192.168.88.111:6443"
    
    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=./ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    
    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    
    # Use the default context
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
    #----------------------
    
    # Create the kube-proxy kubeconfig
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=./ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-credentials kube-proxy \
      --client-certificate=./kube-proxy.pem \
      --client-key=./kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    

    Run the script:

    sh -x kubeconfig.sh
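
    Optionally, inspect one of the generated files to confirm the cluster address and user were written as expected (kubectl config view accepts any kubeconfig path):

    kubectl config view --kubeconfig=bootstrap.kubeconfig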
    

    We now have the three files; copy them to the target directory:

     cp -rf token.csv bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/
    


    8. Run the master components
    Server package (v1.10) download:

    wget https://dl.k8s.io/v1.10.7/kubernetes-server-linux-amd64.tar.gz
    We start on the master (192.168.88.111).

    The master needs three components: kube-apiserver, kube-controller-manager and kube-scheduler.
    We extract them and put them in the target directory:

    tar xvf kubernetes-server-linux-amd64.tar.gz
    cp kubernetes/server/bin/kube-scheduler ./
    cp kubernetes/server/bin/kube-controller-manager ./
    cp kubernetes/server/bin/kube-apiserver ./
    [root@localhost kubernetes]# ls
    apiserver.sh  controller-manager.sh  kube-apiserver  kube-controller-manager  kubectl  kube-scheduler  master.zip  scheduler.sh
    
    mv kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
    Make them executable:
    chmod +x /opt/kubernetes/bin/*
    
    
    echo "export PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
    
    

    The *.sh files here are our own helper scripts for the installation.
    1. Install kube-apiserver

    [root@localhost kubernetes]# cat apiserver.sh
    #!/bin/bash
    
    MASTER_ADDRESS=${1:-"192.168.88.111"}
    ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
    
    KUBE_APISERVER_OPTS="--logtostderr=true \\
    --v=4 \\
    --etcd-servers=${ETCD_SERVERS} \\
    --insecure-bind-address=127.0.0.1 \\
    --bind-address=${MASTER_ADDRESS} \\
    --insecure-port=8080 \\
    --secure-port=6443 \\
    --advertise-address=${MASTER_ADDRESS} \\
    --allow-privileged=true \\
    --service-cluster-ip-range=10.10.10.0/24 \\
    --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
    --authorization-mode=RBAC,Node \\
    --kubelet-https=true \\
    --enable-bootstrap-token-auth \\
    --token-auth-file=/opt/kubernetes/cfg/token.csv \\
    --service-node-port-range=30000-50000 \\
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
    --etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
    --etcd-certfile=/opt/kubernetes/ssl/server.pem \\
    --etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
    ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl restart kube-apiserver
    

    Run the script:

    [root@K8S-M1 kubernetes]chmod +x apiserver.sh
    [root@K8S-M1 kubernetes]# ./apiserver.sh 192.168.88.111 https://192.168.88.111:2379,https://192.168.88.114:2379
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    
    

    Here 192.168.88.111 is the master IP, and https://192.168.88.111:2379,https://192.168.88.114:2379 is the list of etcd endpoints.

    2. Install kube-controller-manager
    The install script:

    [root@localhost master_pkg]# cat controller-manager.sh
    #!/bin/bash
    
    MASTER_ADDRESS=${1:-"127.0.0.1"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
    
    
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
    --v=4 \\
    --master=${MASTER_ADDRESS}:8080 \\
    --leader-elect=true \\
    --address=127.0.0.1 \\
    --service-cluster-ip-range=10.10.10.0/24 \\
    --cluster-name=kubernetes \\
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
    --root-ca-file=/opt/kubernetes/ssl/ca.pem"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
    ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl restart kube-controller-manager
    

    Run the script to install and start kube-controller-manager:

    [root@K8S-M1 kubernetes]# chmod +x controller-manager.sh
    [root@K8S-M1 kubernetes]# ./controller-manager.sh 127.0.0.1
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    [root@K8S-M1 kubernetes]# ps uxa |grep controller-manager
    root      21573 23.2  2.8 136236 58364 ?        Ssl  11:26   0:02 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.10.10.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem
    root      21594  0.0  0.0 112724   988 pts/0    S+   11:26   0:00 grep --color=auto controller-manager
    

    3. Install kube-scheduler

    The install script:
    [root@localhost master_pkg]# cat scheduler.sh
    #!/bin/bash
    
    MASTER_ADDRESS=${1:-"127.0.0.1"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
    
    KUBE_SCHEDULER_OPTS="--logtostderr=true \\
    --v=4 \\
    --master=${MASTER_ADDRESS}:8080 \\
    --leader-elect"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
    ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl restart kube-scheduler
    

    Run the script to install and start it:

    [root@K8S-M1 kubernetes]# vi scheduler.sh
    [root@K8S-M1 kubernetes]# chmod +x  scheduler.sh
    [root@K8S-M1 kubernetes]# ./scheduler.sh 127.0.0.1
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    [root@K8S-M1 kubernetes]# ps aux |grep scheduler
    root      21878  2.2  0.9  45616 20232 ?        Ssl  11:29   0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
    root      21901  0.0  0.0 112724   988 pts/0    S+   11:29   0:00 grep --color=auto scheduler
    

    At this point we can check the overall state of the cluster:

    [root@K8S-M1 kubernetes]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok                   
    controller-manager   Healthy   ok                   
    etcd-1               Healthy   {"health": "true"}   
    etcd-0               Healthy   {"health": "true"}   
    

    9. Run the node components
    First we need to create a role binding on the master so the node's kubelet can bootstrap its certificate.
    Run on the master (192.168.88.111).
    Create the bootstrap user binding:

    [root@K8S-M1 kubernetes]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
    clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
    

    Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files generated on the master over to the node:

    rsync -avPz bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.88.114:/opt/kubernetes/cfg/
    

    Work on the node (192.168.88.114).

    Download the kubernetes-node-linux-amd64.tar.gz package and unpack it:
    tar xvf kubernetes-node-linux-amd64.tar.gz
    It unpacks into a kubernetes directory.
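
    If the node package still needs to be fetched, it follows the same release naming as the client and server archives; <version> below is only a placeholder, pick the release that matches your server binaries:

    wget https://dl.k8s.io/<version>/kubernetes-node-linux-amd64.tar.gz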
    
    

    Take out the files we need (kubelet, kube-proxy); they are all in the downloaded binary package. The *.sh files are our own helper scripts.

    [root@mail node_pkg]# ls
    kubelet  kubelet.sh  kube-proxy   proxy.sh
    chmod +x *.sh
    [root@K8S-N1 kubernetes]# cd kubernetes/node/bin
    [root@K8S-N1 bin]# ls
    kubeadm  kubectl  kubelet  kube-proxy
    
    mv kubelet kube-proxy /opt/kubernetes/bin/
    chmod +x /opt/kubernetes/bin/*
    
    echo "export PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
    

    1. Install kubelet
    The install script:

    [root@mail node_pkg]# cat kubelet.sh
    #!/bin/bash
    
    NODE_ADDRESS=${1:-"192.168.88.111"}
    DNS_SERVER_IP=${2:-"10.10.10.2"}
    
    cat <<EOF >/opt/kubernetes/cfg/kubelet
    
    KUBELET_OPTS="--logtostderr=true \\
    --v=4 \\
    --address=${NODE_ADDRESS} \\
    --hostname-override=${NODE_ADDRESS} \\
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
    --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
    --cert-dir=/opt/kubernetes/ssl \\
    --allow-privileged=true \\
    --cluster-dns=${DNS_SERVER_IP} \\
    --cluster-domain=cluster.local \\
    --fail-swap-on=false \\
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
    

    Run the script:

    [root@K8S-N1 bin]# chmod +x kubelet.sh
    [root@K8S-N1 bin]# sh ./kubelet.sh 192.168.88.114 10.10.10.2
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    

    2. Install kube-proxy
    The install script:

    [root@mail node_pkg]# cat proxy.sh
    #!/bin/bash
    
    NODE_ADDRESS=${1:-"192.168.88.114"}
    
    cat <<EOF >/opt/kubernetes/cfg/kube-proxy
    
    KUBE_PROXY_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=${NODE_ADDRESS} \
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
    ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy
    

    Run the script:

    [root@K8S-N1 bin]# chmod +x proxy.sh
    [root@K8S-N1 bin]# sh ./proxy.sh 192.168.88.114
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    [root@K8S-N1 bin]# ps aux |grep proxy
    root      24260  2.0  0.9  41840 20148 ?        Ssl  12:01   0:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.88.114 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
    root      24378  0.0  0.0 112724   988 pts/0    S+   12:01   0:00 grep --color=auto proxy
    
    

    If the node came up without errors, repeat the same steps on any additional node (I don't have one; if you did, say its IP were 192.168.88.115).
    The operations are identical;
    just change the IPs passed to the scripts to that machine's own IP:

    ./kubelet.sh 192.168.88.115 10.10.10.2
    ./proxy.sh 192.168.88.115
    

    That completes the cluster installation; now let's test whether it actually works.


    10. Check the cluster status

    [root@K8S-M1 kubernetes]# kubectl get csr
    NAME                                                   AGE   REQUESTOR           CONDITION
    node-csr-dENoyjtnUVT4yKbUjAzMi2iBS321jWNEhQdfolGwbIQ   96s   kubelet-bootstrap   Pending
    

    As you can see, the node's certificate request is Pending, which is expected.

    Check which nodes have joined:

    [root@K8S-M1 kubernetes]# kubectl get node
    No resources found.
    

    No nodes have joined yet, so approve the node's CSR to let it join:

    [root@K8S-M1 kubernetes]# kubectl certificate approve node-csr-dENoyjtnUVT4yKbUjAzMi2iBS321jWNEhQdfolGwbIQ
    certificatesigningrequest.certificates.k8s.io/node-csr-dENoyjtnUVT4yKbUjAzMi2iBS321jWNEhQdfolGwbIQ approved
    

    Check the nodes again. The node may show NotReady at first; wait a moment and it becomes Ready:

    [root@K8S-M1 kubernetes]# kubectl get node
    NAME             STATUS     ROLES    AGE   VERSION
    192.168.88.114   NotReady   <none>   6s    v1.12.0-rc.2
    [root@K8S-M1 kubernetes]# kubectl get node
    NAME             STATUS   ROLES    AGE   VERSION
    192.168.88.114   Ready    <none>   16s   v1.12.0-rc.2
    

    Check the cluster status (kubectl get componentstatus):

    [root@K8S-M1 ssl]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok                   
    scheduler            Healthy   ok                   
    etcd-1               Healthy   {"health": "true"}   
    etcd-0               Healthy   {"health": "true"}   
    [root@K8S-M1 ssl]# kubectl get namespace
    NAME          STATUS   AGE
    default       Active   13h
    kube-public   Active   13h
    kube-system   Active   13h
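
    As an extra check that goes beyond the original walkthrough, you can try scheduling a pod and watching it come up on the node (on this Kubernetes version, kubectl run creates a Deployment):

    kubectl run nginx --image=nginx
    kubectl get pods -o wide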
    

    Congratulations, Kubernetes is installed.


    1. Some of the packages are hard to download; I have packed them up and shared them on Baidu Cloud (link below). Download them on your host machine and transfer them in with Xftp. Also, some of the input or output values may not match yours exactly; use your own judgement.

    2. One more note about the kubernetes-server-linux-amd64.tar.gz and kubernetes-client-linux-amd64.tar.gz packages downloaded for the master (192.168.88.111) and node (192.168.88.114): you probably only need the server package. It unpacks into a kubernetes directory containing master and node folders, i.e. everything the master needs is under master and everything the node needs is under node, so the server package is complete on its own. I happened to download the two archives separately.
    Link: https://pan.baidu.com/s/1KnsuGllNQXfUoNGP58SSUA
    Extraction code: kmqy

    3. One last thing: when copying files from the master to the other nodes, a file occasionally fails to copy, which then makes a later step fail. You don't need to verify every copy; if a later step fails, search the error message and you will see which file is missing, then copy it over again. It is also possible that one of the commands above simply forgot to copy a file; I ran into this once but forget which case it was, and will check again later.


    If this article was useful to you, just give it a like, and better yet leave a comment. Haha.
