K8s Installation
Host and component layout (per the architecture document):
master-1: ETCD_NAME="etcd01", IP 192.168.174.30, components: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master-2: ETCD_NAME="etcd02", IP 192.168.174.31, components: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node-1: ETCD_NAME="etcd03", IP 192.168.174.40, components: kubelet, kube-proxy, docker, flannel, etcd
node-2: IP 192.168.174.41, components: kubelet, kube-proxy, docker, flannel
Master component roles
kube-apiserver:
kube-apiserver exposes the Kubernetes API and is the front end of the Kubernetes control plane. It is designed to scale horizontally, i.e. you scale it by deploying more instances.
kube-controller-manager:
kube-controller-manager runs the controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate loop, but to reduce complexity they are all compiled into a single binary and run in a single process.
etcd:
etcd is the backing store for Kubernetes. All cluster data is stored here, so always keep a backup plan for a Kubernetes cluster's etcd data.
kube-scheduler:
kube-scheduler watches for newly created Pods that have no node assigned and selects a node for them to run on.
Node component roles:
kubelet:
kubelet is the primary node agent. It watches the Pods assigned to its node (via the apiserver or via local configuration files) and provides the following functions:
* mounting the volumes a Pod requires
* downloading a Pod's secrets
* running the Pod's containers via Docker (or via rkt)
* periodically probing container lifecycle state
* reporting Pod status back to the rest of the system, creating a mirror Pod if necessary
* reporting node status back to the rest of the system
kube-proxy:
kube-proxy implements the Kubernetes Service abstraction by maintaining network rules on the host and performing connection forwarding.
flannel:
flannel is the network plugin; it supports UDP, VXLAN, AWS VPC, GCE routing and other data-forwarding backends.
How it works: traffic leaving a source container is forwarded by the host's docker0 virtual bridge to the flannel virtual interface (flannel0 or flannel.1, depending on the backend); the flannel interfaces on the nodes act as gateways for each other, so a Pod on node1 can reach a Pod on node2. If one of two nodes cannot reach the public Internet and the other can, the node without access will reach the Internet through the node that has access.
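Once flannel is running (section III below), this data path can be inspected directly on a node. A hedged sketch, assuming the default interface names used by the setup described here:
#ip -d link show flannel.1 #shows the VXLAN device and its VTEP parameters
#ip route | grep 172.17. #the per-node 172.17.x.0/24 routes point at flannel.1 or docker0
#bridge fdb show dev flannel.1 #forwarding entries for the other nodes' VTEPs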
Load balancer for the masters: 10.206.176.19 maps to 192.168.176.19, component: LVS
1. The files needed to generate self-signed certificates with cfssl have already been downloaded to the working directory; to download them again, open the links below in a browser or fetch them directly with wget:
https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
#chmod +x cfssl*
#mv cfssl_linux-amd64 /usr/local/bin/cfssl
#mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
#mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
2. Create the CA configuration, then generate the CA certificate and key
Certificates are needed for both etcd and kubernetes.
etcd certificate directory: mkdir -p /etc/etcd/ssl
kubernetes certificate directory: mkdir -p /etc/kubernetes/ssl
temporary directory used while creating certificates: mkdir /root/ssl
The files have a fixed format, so first generate the default file, then create ca-config.json following the format of config.json and set the expiry to 87600h.
Create the CA configuration file
#cd /root/ssl
#cfssl print-defaults config > config.json
#vim config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
#mv config.json ca-config.json
ca-config.json: several profiles can be defined, each with its own expiry, usage scenario and other parameters; a specific profile is chosen later when signing a certificate;
signing: the certificate may be used to sign other certificates; the generated ca.pem has CA=TRUE;
server auth: a client may use this CA to verify certificates presented by a server;
client auth: a server may use this CA to verify certificates presented by a client;
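To confirm which expiry and usages actually ended up in a signed certificate, it can be inspected with cfssl-certinfo. A sketch to run after the certificates below have been generated:
#cfssl-certinfo -cert ca.pem #prints the CA certificate as JSON, including not_after and the key usages
#cfssl-certinfo -cert server.pem #the same for a certificate signed with the www profile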
Create the CA certificate signing request
#cfssl print-defaults csr > csr.json
#vim csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
#mv csr.json ca-csr.json
"CN": Common Name. kube-apiserver extracts this field from a certificate and uses it as the requesting user name (User Name); browsers use it to check whether a site is legitimate;
"O": Organization. kube-apiserver extracts this field from a certificate and uses it as the group (Group) the requesting user belongs to;
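To double-check which user name and group a certificate would map to, the subject can be printed with openssl (a sketch, assuming openssl is installed; run it after ca.pem has been generated in the next step):
#openssl x509 -in ca.pem -noout -subject #when a certificate is used as a client cert, CN becomes the user name and O the group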
Generate the CA certificate and private key
#cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Create the etcd server certificate (the CSR below uses CN "etcd" and the etcd node IPs; the kubernetes certificates themselves are generated later in section IV)
Create the etcd server certificate signing request
#vim server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.174.30",
    "192.168.174.31",
    "192.168.174.40"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
Generate the etcd server certificate and private key
#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2019/03/01 10:55:26 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
The warning above can be ignored.
I. Install Etcd
The following needs to be configured on all three servers; refer to the deployment architecture in the Excel document.
The hosts involved are:
master-1: ETCD_NAME="etcd01", IP 192.168.174.30
master-2: ETCD_NAME="etcd02", IP 192.168.174.31
node-1: ETCD_NAME="etcd03", IP 192.168.174.40
Apart from the configuration file, which differs between the three hosts, all other steps are identical.
1. Binary package download location
https://github.com/etcd-io/etcd/releases/tag/v3.2.12
Download etcd-v3.2.12-linux-amd64.tar.gz
#mkdir /opt/etcd/{bin,cfg,ssl} -p
#tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
#mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create the etcd configuration file
#cd /opt/etcd/cfg/
#vim etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.174.30:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.174.30:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.174.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.174.30:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.174.30:2380,etcd02=https://192.168.174.31:2380,etcd03=https://192.168.174.40:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Configuration file notes
ETCD_NAME="etcd01" #node name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" #data directory
ETCD_LISTEN_PEER_URLS="https://192.168.174.30:2380" #peer (cluster) listen address; use this server's own IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.174.30:2379" #client listen address; use this server's own IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.174.30:2380" #peer address advertised to the cluster; use this server's own IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.174.30:2379" #client address advertised to the cluster; use this server's own IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.174.30:2380,etcd02=https://192.168.174.31:2380,etcd03=https://192.168.174.40:2380" #addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" #cluster token
ETCD_INITIAL_CLUSTER_STATE="new" #state when joining: new for a new cluster, existing when joining an existing cluster
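For reference, a sketch of how the same file would differ on master-2 (etcd02, 192.168.174.31) according to the architecture table above; only the node name and the local addresses change, while ETCD_INITIAL_CLUSTER is identical on all three hosts:
ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://192.168.174.31:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.174.31:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.174.31:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.174.31:2379"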
Manage etcd with systemd
#vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the certificates to the locations referenced in the configuration
#cp ca*pem server*pem /opt/etcd/ssl/
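The same CA and server certificates must be present on all three etcd hosts. A sketch of distributing them from master-1 with scp (assumes root SSH access and that /opt/etcd/ssl already exists on the target hosts):
#for ip in 192.168.174.31 192.168.174.40; do scp ca*pem server*pem root@$ip:/opt/etcd/ssl/; done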
Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service #prevent firewalld from starting at boot
setenforce 0
Start etcd
systemctl start etcd
systemctl enable etcd
Check the etcd cluster health
#/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.174.31:2379,https://192.168.174.30:2379,https://192.168.174.40:2379" cluster-health
The cluster is healthy if the output looks like this:
member 21cdde3b45cde04f is healthy: got healthy result from https://192.168.174.40:2379
member ca2bc200e194cb1c is healthy: got healthy result from https://192.168.174.30:2379
member dd8d169df81310cc is healthy: got healthy result from https://192.168.174.31:2379
To view the logs:
/var/log/message
or
journalctl -u etcd
II. Install Docker on the Node hosts
#yum -y install yum-utils device-mapper-persistent-data lvm2
#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#yum install docker-ce -y
#curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
#systemctl start docker
#systemctl enable docker
III. Deploy the Flannel network
Flannel stores its own subnet information in etcd, so it must be able to connect to etcd successfully in order to write the predefined subnet.
Run on master-1:
#/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
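To verify the key was written, read it back with the same etcdctl TLS options (a sketch; run it from the directory that holds the certificates):
#/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379" get /coreos.com/network/config
Once flanneld is running on the nodes, each node's allocated subnet appears under /coreos.com/network/subnets and can be listed with the ls subcommand instead of get.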
The following steps are performed on every node host.
1. Download the binary package
#wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
#tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
#mkdir -p /opt/kubernetes/bin
#mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
Copy the certificates to the locations referenced in the configuration file
#cp ca*pem server*pem /opt/etcd/ssl/ (the certificates referred to here are the ones generated on master-1)
2. Configure Flannel
#mkdir /opt/kubernetes/cfg
#vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
3. Manage Flannel with systemd
#vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
4. Configure Docker to start with the assigned subnet
#cp /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service.bak
#vim /usr/lib/systemd/system/docker.service
#cat /usr/lib/systemd/system/docker.service |egrep -v '^#|^$'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
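mk-docker-opts.sh turns the subnet flanneld was allocated into the options Docker needs and writes them to /run/flannel/subnet.env, which the unit above loads via EnvironmentFile. After flanneld has started, the file should look roughly like the following sketch (the subnet differs per node and the exact keys depend on the flannel version):
#cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.44.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.44.1/24 --ip-masq=false --mtu=1450"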
5. Restart Flannel and Docker
#systemctl daemon-reload
#systemctl start flanneld
#systemctl enable flanneld
#systemctl restart docker
6. Check that it took effect
#ps -ef |grep docker
root 3761 1 1 15:27 ? 00:00:00 /usr/bin/dockerd --bip=172.17.44.1/24 --ip-masq=false --mtu=1450
#ip addr
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:3a:13:4d:7f brd ff:ff:ff:ff:ff:ff
inet 172.17.44.1/24 brd 172.17.44.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 02:0e:32:58:7f:75 brd ff:ff:ff:ff:ff:ff
inet 172.17.44.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::e:32ff:fe58:7f75/64 scope link
valid_lft forever preferred_lft forever
docker0 and the flannel interface should be in the same subnet, and the two node hosts should be able to reach each other's container networks directly.
If the ping fails and /var/log/message contains iptables-related warnings, stop the firewall again with systemctl stop firewalld.service and then restart docker.
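A quick cross-node check (a sketch; substitute the docker0 address that ip addr shows on the other node):
#ping -c 3 172.17.96.1 #example address only; use the other node's actual docker0 IP
If the ping succeeds, containers on the two nodes should be able to reach each other.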
==========================================================================================================================================
IV. Deploy components on the Master nodes
1. Generate certificates
Create the CA certificate
mkdir /root/kubernetes-ssl
cd /root/kubernetes-ssl
# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -initca ca-csr.json |cfssljson -bare ca
Generate the apiserver certificate
# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.176.19",
    "192.168.174.30",
    "192.168.174.31",
    "192.168.174.40",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Generate the kube-proxy certificate
# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
The certificate files generated in the end are:
# ls *pem
ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
2. Deploy the apiserver component
Binary package download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md (download only kubernetes-server-linux-amd64.tar.gz)
# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/
Create the token file
# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Column 1: a random string; generate your own
Column 2: user name
Column 3: UID
Column 4: user group
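To generate your own random token instead of reusing the one above, a common sketch is:
#head -c 16 /dev/urandom | od -An -t x | tr -d ' ' #prints a 32-character hex string
Put the resulting string in the first column of token.csv; the same value is used again later as BOOTSTRAP_TOKEN when creating bootstrap.kubeconfig.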
Create the apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379 \
--bind-address=192.168.174.30 \
--secure-port=6443 \
--advertise-address=192.168.174.30 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Notes:
cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \ #log to stderr
--v=4 \ #log level
--etcd-servers=https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379 \ #etcd cluster addresses
--bind-address=192.168.174.30 \ #listen address
--secure-port=6443 \ #secure port
--advertise-address=192.168.174.30 \ #address advertised to the cluster
--allow-privileged=true \ #allow privileged containers
--service-cluster-ip-range=10.0.0.0/24 \ #virtual IP range for Services
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \ #admission control plugins
--authorization-mode=RBAC,Node \ #authorization: enable RBAC and the Node authorizer
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \ #token file for TLS bootstrapping
--service-node-port-range=30000-50000 \ #NodePort range for Services
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Manage apiserver with systemd
# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
#cd /root/kubernetes-ssl/
#cp *.pem /opt/kubernetes/ssl/
Start it:
#systemctl daemon-reload
#systemctl enable kube-apiserver
#systemctl restart kube-apiserver
Deploy the scheduler component
Create the scheduler configuration file
# vim /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
Notes:
--master: connect to the local apiserver
--leader-elect: when several instances of this component run, elect a leader automatically (HA)
Manage the scheduler component with systemd
# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start it:
#systemctl daemon-reload
#systemctl enable kube-scheduler
#systemctl restart kube-scheduler
Deploy the controller-manager component
Create the controller-manager configuration file:
# vim /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
Manage the controller-manager component with systemd
# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start it:
#systemctl daemon-reload
#systemctl enable kube-controller-manager
#systemctl restart kube-controller-manager
Check the status of the cluster components:
# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
Output like the above means the master components are working.
Perform the same deployment on both master nodes.
Deploy components on the Node hosts
Once the master apiserver has TLS authentication enabled, a node's kubelet must use a valid certificate signed by the CA to communicate with the apiserver before the node can join the cluster. When there are many nodes, signing certificates by hand becomes tedious, which is why the TLS bootstrapping mechanism exists: the kubelet automatically requests a certificate from the apiserver as a low-privileged user, and the kubelet's certificate is signed dynamically by the apiserver.
Perform the following on the Master node:
1. Bind the kubelet-bootstrap user to the system cluster role
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
2. Create the kubeconfig files
Change into the directory where the kubernetes certificates were generated
#cd /root/kubernetes-ssl
Define environment variables
#KUBE_APISERVER="https://192.168.176.19:6443" (use the load-balancer address if one is configured; otherwise use a master node address)
#BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
Create the bootstrap.kubeconfig file
Set the cluster parameters (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
Set the client authentication parameters (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
Set the context parameters (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
Set the default context (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Create the kube-proxy.kubeconfig file (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
#/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
#/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
#/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
#ls
bootstrap.kubeconfig kube-proxy.kubeconfig
Copy these two files to /opt/kubernetes/cfg on the Node hosts.
3. Deploy the kubelet component
Copy kubelet and kube-proxy from the server binary package downloaded earlier to /opt/kubernetes/bin on the Node hosts; the package was unpacked under /root/kubernetes/server/bin, use find if you cannot locate them. A sketch of the copy follows.
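A hedged sketch of copying the binaries and the two kubeconfig files from the master to node-2 (paths are the ones used above; adjust the target IP per node and assume root SSH access):
#cd /root/kubernetes/server/bin
#scp kubelet kube-proxy root@192.168.174.41:/opt/kubernetes/bin/
#cd /root/kubernetes-ssl
#scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.174.41:/opt/kubernetes/cfg/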
Create the kubelet configuration file on the node (use the node's own IP)
#cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.174.41 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter notes
--hostname-override=192.168.174.41 \ #the name this node shows in the cluster
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \ #location of the kubeconfig file; generated automatically
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \ #the bootstrap.kubeconfig file generated above
--cert-dir=/opt/kubernetes/ssl \ #where issued certificates are stored
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0" #image used to manage the Pod network
Write the kubelet.config configuration file
# cat /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.174.41
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false
Manage the kubelet component with systemd
# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Start it
#systemctl daemon-reload
#systemctl enable kubelet
#systemctl restart kubelet
Approve the Node joining the cluster on the Master
Nodes must be approved manually before they can join:
#/opt/kubernetes/bin/kubectl get csr
#/opt/kubernetes/bin/kubectl certificate approve XXXXXID
#/opt/kubernetes/bin/kubectl get node
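For reference, a sketch of approving every pending request at once instead of one ID at a time (only use it when all the pending requests are expected):
#/opt/kubernetes/bin/kubectl get csr -o name | xargs /opt/kubernetes/bin/kubectl certificate approve
After approval the node should appear as Ready in kubectl get node within a minute or so.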
4. Deploy the kube-proxy component
Create the kube-proxy configuration file:
# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.174.40 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
Manage the kube-proxy component with systemd
cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start it:
#systemctl daemon-reload
#systemctl enable kube-proxy
#systemctl restart kube-proxy
If an error like the following appears after starting:
kube-proxy: W0305 10:54:29.666610 31207 server.go:605] Failed to retrieve node info: nodes "192.168.174.40" not found
Check that:
the IP addresses in /opt/kubernetes/cfg/kubelet.config and /opt/kubernetes/cfg/kube-proxy are both this machine's own address.
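A quick way to check both files at once (a sketch):
#grep -n "address\|hostname-override" /opt/kubernetes/cfg/kubelet.config /opt/kubernetes/cfg/kube-proxy
Both values should be the IP of the node the services are running on.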
Check the cluster status
# /opt/kubernetes/bin/kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.174.40 Ready <none> 6h v1.11.8
192.168.174.41 Ready <none> 1d v1.11.8
# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
5. Run a test example
Create an nginx web deployment to verify that the cluster is working properly
# /opt/kubernetes/bin/kubectl run nginx --image=nginx --replicas=3
# /opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed
Check the Pods and the Service:
# /opt/kubernetes/bin/kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-64f497f8fd-9ccgb 1/1 Running 0 6h
nginx-64f497f8fd-bbw97 1/1 Running 0 6h
nginx-64f497f8fd-pxkh8 1/1 Running 0 6h
# /opt/kubernetes/bin/kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4d
nginx NodePort 10.0.0.16 <none> 88:39427/TCP 57s
Note the port number in the last line: 39427
Access: http://192.168.174.40:39427
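The service can also be checked from the command line (a sketch; the NodePort is the one recorded above and differs per deployment):
#curl -I http://192.168.174.40:39427 #an HTTP 200 response with the nginx Server header means the Service and Pods are working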
6. Deploy the Dashboard (Web UI)
dashboard-deployment.yaml deploys the Pod that provides the web service
dashboard-rbac.yaml grants it access to the apiserver to fetch information
dashboard-service.yaml publishes the service for external access
# cat dashboard-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kube_containers/kubernetes-dashboard-amd64:v1.8.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
# cat dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
# cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
Create the resources:
/opt/kubernetes/bin/kubectl create -f dashboard-rbac.yaml
/opt/kubernetes/bin/kubectl create -f dashboard-deployment.yaml
/opt/kubernetes/bin/kubectl create -f dashboard-service.yaml
Wait a moment, then check the resource status
# /opt/kubernetes/bin/kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/kubernetes-dashboard-d9545b947-jrmmc 1/1 Running 0 16m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes-dashboard NodePort 10.0.0.6 <none> 80:41545/TCP 16m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kubernetes-dashboard 1 1 1 1 16m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubernetes-dashboard-d9545b947 1 1 1 16m
# /opt/kubernetes/bin/kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.0.0.6 <none> 80:41545/TCP 16m
Access: http://192.168.174.40:41545