Table of Contents:
- Service overview
- The three kube-proxy traffic-scheduling modes
- Service resource definition format
- Example 1: ClusterIP demo
- Example 2: NodePort demo
- Example 3: LoadBalancer demo
- Example 4: externalIPs demo
Service overview
-
Service: essentially a load balancer for Pods, and a standard resource type. A Service provides a fixed access entry point for a dynamic set of Pods; the component that actually implements Service traffic handling in Kubernetes is kube-proxy.
-
Endpoint Controller: manages the binding between backend endpoints and the Service. Based on the label selector, it filters out the matching Pods, watches for Pods that become ready, and completes the binding between the Service and those Pods.
-
Workflow: the Service controller creates an Endpoints object with the same label selector ----> the Endpoints controller uses that selector to manage and watch the state of the backend Pods ----> the Service-to-Pod binding is completed.
A Service provides load-balancing capability, but with the following limitation:
- It offers only layer-4 load balancing, with no layer-7 features. Sometimes we need richer matching rules to route requests, and layer-4 load balancing cannot express those.
The three kube-proxy traffic-scheduling modes
- 1. Userspace mode: Pod --> Service; iptables rules intercept the traffic but do not schedule it themselves. Flow: user space --> iptables (kernel) --> kube-proxy (user space) --> iptables (kernel) --> backend Pod. Each request crosses between kernel space and user space repeatedly, so this mode is inefficient.
- 2. iptables mode: user space --> iptables (kernel, which performs the scheduling itself) --> backend Pod. This is more efficient. In this mode kube-proxy no longer schedules or forwards traffic itself; instead it watches all Service definitions on the API server and renders them into local iptables rules.
  Drawback: each Service generates a large number of rules. If one Service produces 50 rules, then with ten thousand containers the kernel's performance suffers.
- 3. IPVS mode: inherits the advantages of iptables mode while fixing its drawback of generating huge numbers of rules; the benefit is more pronounced in large clusters with many Services.
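As a quick sketch of how to check and switch the mode on a kubeadm-built cluster (an assumption; other installers store the configuration elsewhere), kube-proxy reads its mode from the kube-proxy ConfigMap in kube-system:

kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode   # empty string, "iptables", or "ipvs"
kubectl -n kube-system edit configmap kube-proxy                      # set mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy               # restart the kube-proxy Pods to pick up the change

IPVS mode additionally requires the ip_vs kernel modules to be loaded on every node.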
Service types
- ClusterIP: exposes the Service on a cluster-internal IP address. The address is visible and reachable only inside the cluster and cannot be accessed by external clients. This is the default type; it is recommended to let Kubernetes assign the address dynamically, though an explicit manual address is also supported.
- NodePort: an enhancement of ClusterIP. On top of the ClusterIP behavior, it opens the same port on every node to bring external traffic into the Service.
- LoadBalancer: an enhancement of NodePort that provides an external load balancer in front of the per-node NodePorts; requires public-cloud support.
- ExternalName: connects in-cluster clients to a service outside the cluster, with the help of the cluster's KubeDNS. The Service name resolves to a CNAME record, and DNS in turn resolves that CNAME to the IP address of the external service, enabling data exchange between internal and external services. (A minimal manifest sketch follows this list. Separately, externalIPs can be used together with ClusterIP or NodePort, taking one of the listed IPs as the entry IP; see Example 4.)
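ExternalName is not demonstrated in the examples below, so here is a minimal manifest sketch (the Service name and target host are illustrative, not from the original demo):

apiVersion: v1
kind: Service
metadata:
  name: external-web              # hypothetical name
  namespace: default
spec:
  type: ExternalName
  externalName: www.example.com   # in-cluster lookups of external-web return a CNAME to this host

Inside the cluster, resolving external-web.default.svc.cluster.local yields a CNAME to www.example.com; no ClusterIP is allocated and no proxying takes place.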
ServicePort
port: the Service port that is mapped to the port the application inside the Pod listens on. If a backend Pod has several ports and each of them should be exposed through the Service, every port must be defined separately (see the sketch below). The request is ultimately received at PodIP:ContainerPort.
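For instance, a Service exposing two backend ports has to list each one; this is only a sketch with illustrative names. Note that targetPort may be either a number or the name of a containerPort defined in the Pod spec:

ports:
- name: http
  protocol: TCP
  port: 80
  targetPort: http    # refers to a containerPort named "http" in the Pod spec
- name: https
  protocol: TCP
  port: 443
  targetPort: 8443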
Service resource spec
Service is a namespace-scoped resource and cannot span namespaces.
apiVersion: v1
kind: Service
metadata:
  name: ...
  namespace: ...
  labels:
    key1: value1
    key2: value2
spec:
  type <string>                   # Service type; defaults to ClusterIP
  selector <map[string]string>    # equality-based label selector; multiple entries are ANDed
  ports:                          # list of Service port objects
  - name <string>                 # port name
    protocol <string>             # protocol; currently only TCP, UDP and SCTP are supported; defaults to TCP
    port <integer>                # port number of the Service
    targetPort <string>           # port number or name of the backend target process; a name must be defined in the Pod spec
    nodePort <integer>            # node port number; applies only to the NodePort and LoadBalancer types
  clusterIP <string>              # cluster IP of the Service; best left to automatic allocation
  externalTrafficPolicy <string>  # external traffic policy: Local = handled by the receiving node, Cluster = scheduled cluster-wide
  loadBalancerIP <string>         # IP address used by the external load balancer; applies only to the LoadBalancer type
  externalName <string>           # external service name, used as the DNS CNAME value of the Service
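Each of these fields can be checked against the live API documentation with kubectl explain, for example:

kubectl explain service.spec
kubectl explain service.spec.ports.targetPort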
Example 1: ClusterIP demo
[root@k8s-master svc]# cat services-clusterip-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec:
  clusterIP: 10.97.72.1   # no need to specify in production; one is allocated automatically, and a manual choice may cause conflicts
  selector:               # label selector that picks the backend Pods
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80        # backend Pod port
[root@k8s-master svc]# kubectl apply -f services-clusterip-demo.yaml
service/demoapp-svc created
[root@k8s-master svc]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
demoapp-svc ClusterIP 10.97.72.1 <none> 80/TCP 11s app=demoapp
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30d <none>
my-grafana NodePort 10.96.4.185 <none> 80:30379/TCP 27d app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana
myapp NodePort 10.106.116.205 <none> 80:31532/TCP 30d app=myapp,release=stabel
[root@k8s-master svc]# curl 10.97.72.1   # requests to the Service IP reach the backend Pods
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
[root@k8s-master svc]# curl 10.97.72.1
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-vzb4f, ServerIP: 10.244.1.98!
[root@k8s-master svc]# kubectl describe svc demoapp-svc
Name: demoapp-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=demoapp
Type: ClusterIP
IP: 10.97.72.1
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.98:80,10.244.2.97:80 # backend endpoints
Session Affinity: None
Events: <none>
[root@k8s-master svc]# kubectl get pod -o wide --show-labels   # the selector matches the first two Pods below
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
demoapp-66db74fcfc-9wkgj 1/1 Running 0 39m 10.244.2.97 k8s-node2 <none> <none> app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-vzb4f 1/1 Running 0 39m 10.244.1.98 k8s-node1 <none> <none> app=demoapp,pod-template-hash=66db74fcfc,release=stable,track=daily
liveness-httpget-demo 1/1 Running 3 29m 10.244.1.99 k8s-node1 <none> <none> app=liveness
liveness-tcpsocket-demo 1/1 Running 3 29m 10.244.1.100 k8s-node1 <none> <none> <none>
my-grafana-7d788c5479-kpq9q 1/1 Running 4 27d 10.244.1.84 k8s-node1 <none> <none> app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana,pod-template-hash=7d788c5479
[root@k8s-master svc]# kubectl get ep   # the Endpoints object is what actually binds backend endpoints to the Service
NAME ENDPOINTS AGE
demoapp-svc 10.244.1.98:80,10.244.2.97:80 2m33s
kubernetes 192.168.4.170:6443 30d
my-grafana 10.244.1.84:3000 27d
myapp <none> 30d
[root@k8s-master svc]# kubectl scale deployment demoapp --replicas=4   # scale the deployment to 4 replicas
deployment.apps/demoapp scaled
[root@k8s-master svc]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
demoapp-66db74fcfc-9jzs5 1/1 Running 0 18s app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-9wkgj 1/1 Running 0 100m app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-dw9w2 1/1 Running 0 18s app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-vzb4f 1/1 Running 0 100m app=demoapp,pod-template-hash=66db74fcfc,release=stable,track=daily
liveness-httpget-demo 1/1 Running 3 90m app=liveness
liveness-tcpsocket-demo 1/1 Running 3 90m <none>
my-grafana-7d788c5479-kpq9q 1/1 Running 4 27d app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana,pod-template-hash=7d788c5479
[root@k8s-master svc]# kubectl get ep   # the new Pods have been added to the Endpoints binding in real time
NAME ENDPOINTS AGE
demoapp-svc 10.244.1.101:80,10.244.1.98:80,10.244.2.97:80 + 1 more... 63m
kubernetes 192.168.4.170:6443 30d
my-grafana 10.244.1.84:3000 27d
myapp <none> 30d
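Besides the fixed ClusterIP, the Service is also reachable inside the cluster by its DNS name, demoapp-svc.default.svc.cluster.local. A quick way to verify this from a throwaway Pod (a sketch; the busybox image tag is an assumption):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup demoapp-svc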
Example 2: NodePort demo
[root@k8s-master svc]# cat services-nodeport-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-nodeport-svc
  namespace: default
spec:
  type: NodePort
  clusterIP: 10.97.56.1   # no need to specify in production; one is allocated automatically, and a manual choice may cause conflicts
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80        # backend Pod port
    nodePort: 31399       # no need to specify in production; one is allocated automatically from the default range 30000-32767
[root@k8s-master svc]# kubectl apply -f services-nodeport-demo.yaml
service/demoapp-nodeport-svc created
[root@k8s-master svc]# kubectl get pod
NAME READY STATUS RESTARTS AGE
demoapp-66db74fcfc-9jzs5 1/1 Running 0 8m47s
demoapp-66db74fcfc-9wkgj 1/1 Running 0 109m
demoapp-66db74fcfc-dw9w2 1/1 Running 0 8m47s
demoapp-66db74fcfc-vzb4f 1/1 Running 0 109m
liveness-httpget-demo 1/1 Running 3 98m
liveness-tcpsocket-demo 1/1 Running 3 98m
my-grafana-7d788c5479-kpq9q 1/1 Running 4 27d
[root@k8s-master svc]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp-nodeport-svc NodePort 10.97.56.1 <none> 80:31399/TCP 11s   # two ports are listed; 31399 is the nodePort
demoapp-svc ClusterIP 10.97.72.1 <none> 80/TCP 72m
[root@k8s-master svc]# while true;do curl 192.168.4.171:31399;sleep 1;done   # access via nodeIP:port
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.1, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.0, ServerName: demoapp-66db74fcfc-dw9w2, ServerIP: 10.244.1.101!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.0, ServerName: demoapp-66db74fcfc-vzb4f, ServerIP: 10.244.1.98!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.1, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
- Although the requests above were all sent to node 1 (192.168.4.171), the ServerIP values show they are still round-robined to Pods on node 2 as well.
  This is where externalTrafficPolicy <string>, the external traffic policy, comes in:
  Local: traffic is handled only by the node that receives it
  Cluster: traffic is scheduled across the whole cluster (the default)
[root@k8s-master ~]# kubectl edit svc demoapp-nodeport-svc
...
spec:
clusterIP: 10.97.56.1
externalTrafficPolicy: Local   # change the default Cluster to Local
...
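The same change can also be applied non-interactively with kubectl patch:

kubectl patch svc demoapp-nodeport-svc -p '{"spec":{"externalTrafficPolicy":"Local"}}'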
[root@k8s-master svc]# kubectl scale deployment demoapp --replicas=1   # scale the deployment down to 1 replica
deployment.apps/demoapp scaled
[root@k8s-master ~]# kubectl get pod -o wide   # the only remaining Pod runs on node k8s-node2
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demoapp-66db74fcfc-9wkgj 1/1 Running 0 123m 10.244.2.97 k8s-node2 <none> <none>
liveness-httpget-demo 1/1 Running 3 112m 10.244.1.99 k8s-node1 <none> <none>
[root@k8s-master ~]# curl 192.168.4.171:31399   # via node1: hangs, because node1 has no local Pod
^C
[root@k8s-master ~]# curl 192.168.4.172:31399   # via node2: succeeds; ClientIP is now the real client address (192.168.4.170), since Local preserves the source IP
iKubernetes demoapp v1.0 !! ClientIP: 192.168.4.170, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
Example 3: LoadBalancer demo
[root@k8s-master svc]# cat services-loadbalancer-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-loadbalancer-svc
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80        # backend Pod port
  # loadBalancerIP: 1.2.3.4   # this cluster is not on an IaaS platform, so no external load balancer can be provisioned
[root@k8s-master svc]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp-loadbalancer-svc LoadBalancer 10.110.155.70 <pending> 80:31619/TCP 31s   # not on an IaaS platform, so EXTERNAL-IP stays <pending>, hung waiting for a load balancer to be provisioned; the Service remains reachable via its NodePort
demoapp-nodeport-svc NodePort 10.97.56.1 <none> 80:31399/TCP 30m
demoapp-svc ClusterIP 10.97.72.1 <none> 80/TCP 102m
[root@k8s-master svc]# while true;do curl 192.168.4.171:31399;sleep 1;done   # access via the NodePort instead
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.1, ServerName: demoapp-66db74fcfc-2jf49, ServerIP: 10.244.1.103!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.1, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
Example 4: externalIPs demo
In practice a NodePort Service still needs a load-balancing layer in front of it to provide a single entry point and high availability, and newly added nodes are not automatically registered with that load balancer either.
externalIPs: when only one or a few nodes expose an IP, a virtual IP can be used to achieve high availability.
[root@k8s-master ~]# ip addr add 192.168.100.100/16 dev eth0
[root@k8s-master ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:44:16:16 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.170/24 brd 192.168.4.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 192.168.100.100/16 scope global eth0
valid_lft forever preferred_lft forever
[root@k8s-master svc]# cat services-
services-clusterip-demo.yaml services-externalip-demo.yaml services-loadbalancer-demo.yaml services-nodeport-demo.yaml
[root@k8s-master svc]# cat services-externalip-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-externalip-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80        # backend Pod port
  externalIPs:
  - 192.168.100.100       # in practice this can be a virtual IP floated by e.g. haproxy/keepalived for high availability
[root@k8s-master svc]# kubectl apply -f services-externalip-demo.yaml
service/demoapp-externalip-svc created
[root@k8s-master svc]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp-externalip-svc ClusterIP 10.110.30.133 192.168.100.100 80/TCP 16s
demoapp-loadbalancer-svc LoadBalancer 10.110.155.70 <pending> 80:31619/TCP 3h6m
demoapp-nodeport-svc NodePort 10.97.56.1 <none> 80:31399/TCP 3h36m
demoapp-svc ClusterIP 10.97.72.1 <none> 80/TCP 4h47m
# access test
[root@k8s-master svc]# curl 192.168.100.100
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
[root@k8s-master svc]# while true;do curl 192.168.100.100;sleep 1;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-z682r, ServerIP: 10.244.2.99!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
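To clean up after the demo, remove the manually added address from the node and delete the Service:

ip addr del 192.168.100.100/16 dev eth0
kubectl delete -f services-externalip-demo.yaml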