Installing a k8s Cluster
k8s Architecture
In addition to the core components, there are several recommended add-ons.
Set the IP address, hostname, and /etc/hosts resolution on every node:
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
Install etcd on the master node
yum install etcd -y
vim /etc/etcd/etcd.conf
Line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
Line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
systemctl start etcd.service
systemctl enable etcd.service
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0
etcdctl -C http://10.0.0.11:2379 cluster-health
Install kubernetes on the master node
yum install kubernetes-master.x86_64 -y
vim /etc/kubernetes/apiserver
Line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
Line 11: KUBE_API_PORT="--port=8080"
Line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
Line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
Install kubernetes on the node hosts
yum install kubernetes-node.x86_64 -y
vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
vim /etc/kubernetes/kubelet
Line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
Line 8:  KUBELET_PORT="--port=10250"
Line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"
Line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
Test the nodes in the environment above (on the master):
[root@docker01 ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 31m
10.0.0.13 Ready 29m
Configure the flannel network on all nodes
yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
On the master node:
etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
etcdctl get /atomic.io/network/config
yum install docker -y
systemctl enable flanneld.service
systemctl restart flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On the node hosts:
systemctl enable flanneld.service
systemctl restart flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
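A quick sanity check that flanneld obtained a subnet lease from etcd (a hedged check; the file path and interface name below are the flannel package defaults, run it on any of the three machines):
```shell
cat /run/flannel/subnet.env   # FLANNEL_SUBNET / FLANNEL_MTU assigned to this host
ip addr show flannel0         # the flannel tunnel interface (udp backend)
ip addr show docker0          # docker0 should now sit inside the flannel subnet
```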
Test (run on all three machines):
vim /usr/lib/systemd/system/docker.service
# Add the following line alongside the existing "ExecStart=/usr/bin/dockerd-current \" entry,
# so the FORWARD chain policy is reset to ACCEPT every time docker starts:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
systemctl daemon-reload
systemctl restart docker
wget http://192.168.12.201/docker_image/docker_busybox.tar.gz
docker load -i docker_busybox.tar.gz
docker run -it docker.io/busybox:latest
/ # ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:37:03
inet addr:172.16.55.3 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:acff:fe10:3703/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1472 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:516 (516.0 B) TX bytes:516 (516.0 B)
/ # ping 172.16.38.3
PING 172.16.38.3 (172.16.38.3): 56 data bytes
64 bytes from 172.16.38.3: seq=0 ttl=60 time=1.255 ms
^C
--- 172.16.38.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.181/1.272/1.381 ms
/ # ping 172.16.77.4
PING 172.16.77.4 (172.16.77.4): 56 data bytes
64 bytes from 172.16.77.4: seq=0 ttl=60 time=1.032 ms
^C
--- 172.16.77.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.718/1.021/1.314 ms
Configure the master as an image registry
# On the master node
vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.0.11:5000'
systemctl restart docker
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
# On the node hosts
vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=10.0.0.11:5000'
systemctl restart docker
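To confirm the private registry is reachable from the nodes, the registry v2 catalog endpoint can be queried (it returns an empty repository list until an image is pushed):
```shell
curl http://10.0.0.11:5000/v2/_catalog
```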
What is k8s, and what can it do?
k8s is a management tool for docker clusters.
Core features of k8s
Self-healing: restarts failed containers, replaces and reschedules containers when a node becomes unavailable, kills containers that do not respond to a user-defined health check, and does not advertise a container to clients until it is ready to serve (see the probe sketch below).
Elastic scaling: monitors the CPU load of the containers; if the average is above 80%, the number of containers is increased, and if the average is below 10%, it is decreased.
Service discovery and load balancing: no need to modify your application to use an unfamiliar service discovery mechanism; Kubernetes gives containers their own IP addresses and a single DNS name for a group of containers, and can load-balance across them.
Rolling updates and one-click rollback: Kubernetes rolls out changes to an application or its configuration incrementally while monitoring application health, ensuring it never kills all instances at the same time. If something goes wrong, Kubernetes rolls the change back for you, leveraging a growing ecosystem of deployment solutions.
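As an illustration of the user-defined health check mentioned under self-healing, a liveness/readiness probe can be declared in a container spec. A minimal sketch (the pod name, probe path, and timings are made up for the example):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-probe            # hypothetical pod name for this example
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      livenessProbe:           # kubelet restarts the container when this check fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:          # traffic is not sent to the pod until this check passes
        httpGet:
          path: /
          port: 80
```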
History of k8s
Installing k8s
yum install: version 1.5
Compile from source: the most difficult, can install the latest version
Binary install: tedious steps, can install the latest version; automate with shell, ansible, or saltstack
kubeadm: the easiest install, needs internet access, can install the latest version
minikube: suited for developers trying out k8s, needs internet access
k8s use cases
k8s is best suited for running microservice projects!
Commonly used k8s resources
Creating a pod resource
Main components of a k8s YAML file
```yaml
apiVersion: v1   # API version
kind: Pod        # resource type
metadata:        # metadata (name, labels, ...)
spec:            # detailed specification
```
k8s_pod.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
```
A pod resource is made up of at least two containers: the pod infrastructure (pause) container plus the business container(s).
The pod is the smallest resource unit in k8s.
Test:
[root@k8s-node-2 ~]# wget http://192.168.12.201/docker_image/docker_nginx1.13.tar.gz
docker load -i docker_nginx1.13.tar.gz
docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13
docker push 10.0.0.11:5000/nginx:1.13
docker load -i pod-infrastructure-latest.tar.gz
docker tag docker.io/tianyebj/pod-infrastructure:latest 10.0.0.11:5000/pod-infrastructure:latest
docker push 10.0.0.11:5000/pod-infrastructure:latest
vim /etc/kubernetes/kubelet
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
systemctl restart kubelet.service
[root@k8s-master ~]# mkdir -p k8s/pod
[root@k8s-master pod]# vim k8s_pod.yaml
[root@k8s-master pod]# kubectl create -f k8s_pod.yaml
[root@k8s-master kod]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 27m
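On the node where the pod was scheduled, both the business container and the pod infrastructure (pause) container should be visible (assuming the pod landed on k8s-node-1):
```shell
# Run on the node, not the master: one nginx container plus one
# pod-infrastructure container belong to the same pod.
docker ps | grep -E 'nginx|pod-infrastructure'
```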
The ReplicationController resource
rc: keeps the specified number of pods alive at all times; the rc is associated with its pods through a label selector.
Common operations on k8s resources:
kubectl create -f xxx.yaml
kubectl get pod|rc
kubectl describe pod nginx
kubectl delete pod nginx    or: kubectl delete -f xxx.yaml
kubectl edit pod nginx
Create an rc
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: 10.0.0.11:5000/nginx:1.13
          ports:
            - containerPort: 80
```
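A quick way to see the rc's self-healing in action (assuming the manifest above is saved as k8s_rc.yaml):
```shell
kubectl create -f k8s_rc.yaml
kubectl get pods -o wide                 # five myweb pods
# Delete any one of them; the rc immediately starts a replacement
# to bring the count back to replicas=5.
kubectl delete pod <one-of-the-myweb-pods>
kubectl get pods -o wide
```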
Rolling upgrade with rc
Create a new nginx-rc1.15.yaml (a possible version is sketched below):
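A plausible nginx-rc1.15.yaml, assuming a new rc name (nginx2), a new label value (myweb2) so rolling-update can tell old and new pods apart, and the nginx:1.15 image:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 5
  selector:
    app: myweb2
  template:
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
        - name: myweb
          image: 10.0.0.11:5000/nginx:1.15
          ports:
            - containerPort: 80
```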
Upgrade:
kubectl rolling-update nginx -f nginx-rc1.15.yaml --update-period=10s
Roll back:
kubectl rolling-update nginx2 -f nginx-rc.yaml --update-period=1s
Test:
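In a second terminal, the replacement can be watched while the rolling update runs (one pod is swapped roughly every 10 seconds with --update-period=10s):
```shell
kubectl get pods -w      # old myweb pods terminate as new myweb2 pods come up
kubectl get rc           # after the update, only the nginx2 rc remains
```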
The Service resource
A Service exposes pods' ports and gives them a stable access entry point.
Create a Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
      targetPort: 80
  selector:
    app: myweb2
```
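After creating the Service it can be reached on any node IP at the nodePort (assuming the selector app: myweb2 matches the labels of the pods currently running, i.e. the upgraded rc; the file name below is hypothetical):
```shell
kubectl create -f k8s_svc.yaml
kubectl get svc myweb
curl -I http://10.0.0.12:30000   # any node IP works; the NodePort is opened cluster-wide
```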
Change the nodePort range
```shell
vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
systemctl restart kube-apiserver.service
```
By default a Service uses iptables to implement load balancing; newer versions recommend lvs/IPVS (layer-4 load balancing).
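A rough way to look at the load-balancing rules kube-proxy wrote for the Service (chain names depend on the kube-proxy mode, so treat this as a sketch):
```shell
iptables-save -t nat | grep myweb   # NAT rules generated for the myweb Service
iptables-save -t nat | grep 30000   # the NodePort rule
```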
The Deployment resource
With an rc, a rolling upgrade interrupts access to the service (the pod labels change, so the Service's selector no longer matches), which is why k8s introduced the Deployment resource.
Create a Deployment
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: 10.0.0.11:5000/nginx:1.13
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
```
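Create it and check the objects it generates (assuming the manifest above is saved as k8s_deploy.yaml):
```shell
kubectl create -f k8s_deploy.yaml
kubectl get deployment nginx-deployment
kubectl get rs                    # the Deployment manages pods through a ReplicaSet
kubectl get pods -o wide
```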
Deployment upgrade and rollback
Create a Deployment from the command line:
kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
Upgrade the image version from the command line:
kubectl set image deploy nginx nginx=10.0.0.11:5000/nginx:1.15
View all historical revisions of the Deployment:
kubectl rollout history deployment nginx
Roll the Deployment back to the previous revision:
kubectl rollout undo deployment nginx
Roll the Deployment back to a specific revision:
kubectl rollout undo deployment nginx --to-revision=2
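To confirm a rollout finished and which image is now running (a hedged check against the Deployment created with kubectl run above):
```shell
kubectl rollout status deployment nginx          # waits until the rollout completes
kubectl describe deployment nginx | grep Image   # image version in the pod template
```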
k8s installation methods
1. Binary install of kubernetes: installs the latest version, but the steps are tedious!!
https://github.com/minminmsn/k8s1.13/blob/master/kubernetes/kubernetes1.13.1%2Betcd3.3.10%2Bflanneld0.10%E9%9B%86%E7%BE%A4%E9%83%A8%E7%BD%B2.md
2. kubeadm install (needs internet access):
https://www.qstack.com.cn/archives/425.html
3. minikube install (needs internet access)
4. yum install (the easiest, version 1.5)
5. Compile from Go source (expert level)