Preface:
As the gateway component of Internet-facing systems, the traffic ingress proxy has many options: from the veteran proxies HAProxy and Nginx, to microservice API gateways such as Kong and Zuul, to the container-era Ingress specification and its implementations. These options differ widely in functionality, performance, extensibility, and applicable scenarios. As the cloud-native wave arrived, Envoy, a CNCF-graduated data-plane component, became known to a much wider audience. So, can the outstanding "graduate" Envoy become the standard traffic ingress component of the cloud-native era?
Background: The Many Options and Scenarios for Traffic Ingress
In the Internet world, almost every externally exposed system needs a network proxy. The early HAProxy and Nginx remain popular today; in the microservices era, API gateways, with richer functionality and stronger management capabilities, became a must-have ingress component; and in the container era, Kubernetes Ingress serves as the entry point of a container cluster and is the standard traffic ingress proxy for containerized microservices. The core capabilities of these three typical L7 proxies compare as follows:
- From this comparison of core capabilities:
- HAProxy and Nginx provide the basic routing features, and their performance and stability have been proven over many years. Nginx's downstream community OpenResty offers comprehensive Lua extensibility, allowing Nginx to be applied and extended much more broadly; the API gateway Kong, for example, is built on Nginx + OpenResty.
- As the foundational component for exposing microservice APIs, API gateways offer fairly rich functionality and dynamic management capabilities.
- Ingress is the standard specification for Kubernetes ingress traffic, and its concrete capabilities depend on the implementation. An Nginx-based Ingress implementation behaves much like Nginx itself, while Istio Ingress Gateway, built on Envoy plus the Istio control plane, is functionally richer (strictly speaking, Istio Ingress Gateway is more capable than typical Ingress implementations, but it does not follow the Ingress specification).
So the question arises: for traffic ingress under the cloud-native trend, can we find a technically comprehensive solution that standardizes the traffic entry point?
An Introduction to Envoy's Core Capabilities
Envoy is "an open source edge and service proxy, designed for cloud-native applications" (envoyproxy.io). It is the third project to graduate from the Cloud Native Computing Foundation (CNCF) and currently has 13k+ stars on GitHub.
Envoy's main features include the following (a configuration sketch follows the list):
- A high-performance L4/L7 proxy written in modern C++.
- Transparent proxying.
- Traffic management: routing, traffic mirroring, traffic splitting, and more.
- Resilience features: health checking, circuit breaking, rate limiting, timeouts, retries, and fault injection.
- Multi-protocol support: proxying and managing HTTP/1.1, HTTP/2, gRPC, WebSocket, and other protocols.
- Load balancing: weighted round robin, weighted least request, ring hash, Maglev, and random algorithms, plus zone-aware routing and failover.
- Dynamic configuration APIs: robust interfaces for controlling proxy behavior, enabling hot updates of Envoy's configuration.
- Designed for observability: high observability of L7 traffic, with native distributed tracing support.
- Hot restart support, enabling seamless Envoy upgrades.
- Custom extensibility: Lua, plus the multi-language WebAssembly extension sandbox.
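To make these features concrete, below is a minimal static configuration sketch showing Envoy's listener → route → cluster model (the backend hostname and ports are illustrative assumptions; in production the dynamic xDS APIs would supply these resources instead):

```yaml
# Minimal static Envoy (v3 API) bootstrap sketch: one HTTP listener,
# one route, one upstream cluster. Hostnames and ports are placeholders.
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: demo_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: demo_service              # placeholder upstream
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN          # one of the LB algorithms listed above
    load_assignment:
      cluster_name: demo_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: demo-backend.default.svc, port_value: 80 }
```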
Overall, Envoy is a "double honors student" that excels in both functionality and performance. In real-world ingress proxy scenarios, Envoy has inherent advantages and can serve as the standard technical solution for traffic ingress in the cloud-native era:
Richer functionality than HAProxy and Nginx
While HAProxy and Nginx provide the essential features a traffic proxy needs (with most advanced features typically added via extension plugins), Envoy implements a large set of advanced proxy features natively in C++, such as advanced load balancing, circuit breaking, rate limiting, fault injection, traffic mirroring, and observability. This richer feature set makes Envoy suitable for many scenarios out of the box, and the native C++ implementations hold a clearer performance edge over plugin-based extensions.
Performance on par with Nginx, far above traditional API gateways
In terms of performance, Envoy is comparable to Nginx when proxying common protocols such as HTTP, and holds a clear advantage over traditional API gateways.
Service Mesh has now entered its second generation, represented by Istio, consisting of a Data Plane (the proxy) and a Control Plane. Istio is a productized implementation of the Service Mesh concept and helps microservices achieve layered decoupling (architecture diagram omitted).
HTTPProxy Resource Specification
apiVersion: projectcontour.io/v1 # API group and version
kind: HTTPProxy # name of the CRD resource
metadata:
  name <string>
  namespace <string> # namespace-scoped resource
spec:
  virtualhost <VirtualHost> # virtual host in FQDN form, similar to host in Ingress
    fqdn <string> # FQDN-format name of the virtual host
    tls <TLS> # enables HTTPS; by default HTTP requests are redirected to HTTPS with a 301
      secretName <string> # name of the Secret storing the certificate and private key
      minimumProtocolVersion <string> # minimum supported SSL/TLS protocol version
      passthrough <boolean> # whether to enable passthrough mode; if enabled, the controller does not terminate the HTTPS session
      clientValidation <DownstreamValidation> # client certificate validation, optional
        caSecret <string> # CA certificate used to validate client certificates
  routes <[]Route> # routing rules
    conditions <[]Condition> # traffic matching conditions; supports PATH prefix and header matching
      prefix <String> # PATH prefix matching, similar to the path field in Ingress
    permitInsecure <Boolean> # whether to disable the default HTTP-to-HTTPS redirect
    services <[]Service> # backend services, translated into Envoy Cluster definitions
      name <String> # service name
      port <Integer> # service port
      protocol <string> # protocol toward the backend service; valid values are tls, h2, and h2c
      validation <UpstreamValidation> # whether to validate the server certificate
        caSecret <string>
        subjectName <string> # required Subject value in the certificate
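As a quick illustration of the TLS-related fields above, here is a hedged HTTPProxy sketch enabling HTTPS with optional client-certificate validation (the resource name and the client-ca Secret are placeholders):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-tls-sketch   # placeholder name
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io
    tls:
      secretName: ik8s-tls         # Secret holding certificate and private key
      minimumProtocolVersion: "1.2"
      clientValidation:
        caSecret: client-ca        # placeholder CA Secret for validating client certificates
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp-deploy
      port: 80
```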
HTTPProxy Advanced Routing Resource Specification
spec:
  routes <[]Route> # routing rules
    conditions <[]Condition>
      prefix <String>
      header <HeaderCondition> # request header matching
        name <String> # header name
        present <Boolean> # true means the condition is met if the header exists; false is meaningless
        contains <String> # substring the header value must contain
        notcontains <string> # substring the header value must not contain
        exact <String> # exact match on the header value
        notexact <string> # exact negative match, i.e. the value must differ from the one given
    services <[]Service> # backend services, translated into Envoy Clusters
      name <String>
      port <Integer>
      protocol <String>
      weight <Int64> # service weight, used for traffic splitting
      mirror <Boolean> # traffic mirroring
      requestHeadersPolicy <HeadersPolicy> # header policy for requests to the upstream server
        set <[]HeaderValue> # add headers or set the value of existing headers
          name <String>
          value <String>
        remove <[]String> # remove the named headers
      responseHeadersPolicy <HeadersPolicy> # header policy for responses to the downstream client
    loadBalancerPolicy <LoadBalancerPolicy> # load balancing policy to use
      strategy <String> # the concrete strategy: Random, RoundRobin, Cookie,
                        # or WeightedLeastRequest; defaults to RoundRobin
    requestHeadersPolicy <HeadersPolicy> # route-level request header policy
    responseHeadersPolicy <HeadersPolicy> # route-level response header policy
    pathRewritePolicy <PathRewritePolicy> # URL rewriting
      replacePrefix <[]ReplacePrefix>
        prefix <String> # PATH routing prefix
        replacement <string> # target path to substitute
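Tying several of these fields together, the following hedged sketch rewrites a path prefix and adjusts request headers on one route (the /api prefix and the header names are illustrative):

```yaml
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /api
    pathRewritePolicy:
      replacePrefix:
      - prefix: /api        # strip the /api prefix...
        replacement: /      # ...so the backend sees /
    requestHeadersPolicy:
      set:
      - name: X-Request-Source   # illustrative header
        value: contour
      remove:
      - X-Debug                  # illustrative header to drop
    services:
    - name: demoappv11
      port: 80
```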
HTTPProxy Service Resilience and Health Check Resource Specification
spec:
  routes <[]Route>
    timeoutPolicy <TimeoutPolicy> # timeout policy
      response <String> # how long to wait for a response from the server
      idle <String> # how long Envoy keeps the client connection idle after a timeout
    retryPolicy <RetryPolicy> # retry policy
      count <Int64> # number of retries, defaults to 1
      perTryTimeout <String> # timeout for each retry attempt
    healthCheckPolicy <HTTPHealthCheckPolicy> # active health checking
      path <String> # path (HTTP endpoint) probed by the check
      host <String> # virtual host to request during the check
      intervalSeconds <Int64> # probe interval (frequency), defaults to 5 seconds
      timeoutSeconds <Int64> # probe timeout, defaults to 2 seconds
      unhealthyThresholdCount <Int64> # consecutive failures before marking a backend unhealthy
      healthyThresholdCount <Int64> # consecutive successes before marking a backend healthy
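Example 5 later exercises timeoutPolicy and retryPolicy; for the remaining block, a hedged healthCheckPolicy sketch (assuming the backend exposes an HTTP health endpoint at /livez) could look like this:

```yaml
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    healthCheckPolicy:
      path: /livez                  # assumed health endpoint on the backend
      intervalSeconds: 5            # probe every 5 seconds (the default)
      timeoutSeconds: 2             # fail a probe after 2 seconds (the default)
      unhealthyThresholdCount: 3    # 3 consecutive failures -> unhealthy
      healthyThresholdCount: 5      # 5 consecutive successes -> healthy
    services:
    - name: demoappv12
      port: 80
```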
Envoy Deployment
$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
[root@k8s-master Ingress]# kubectl get ns
NAME STATUS AGE
default Active 14d
dev Active 13d
ingress-nginx Active 29h
kube-node-lease Active 14d
kube-public Active 14d
kube-system Active 14d
kubernetes-dashboard Active 21h
longhorn-system Active 21h
projectcontour         Active   39m   # newly created namespace
test Active 12d
[root@k8s-master Ingress]# kubectl get pod -n projectcontour
NAME READY STATUS RESTARTS AGE
contour-5449c4c94d-mqp9b 1/1 Running 3 37m
contour-5449c4c94d-xgvqm 1/1 Running 5 37m
contour-certgen-v1.18.1-82k8k 0/1 Completed 0 39m
envoy-n2bs9 2/2 Running 0 37m
envoy-q777l 2/2 Running 0 37m
envoy-slt49 1/2 Running 2 37m
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.100.120.94 <none> 8001/TCP 39m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   39m   # <pending> because this is not an IaaS platform with a load balancer; the external IP request stays pending, but NodePort access still works
[root@k8s-master Ingress]# kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
...
extensionservices extensionservice,extensionservices projectcontour.io true ExtensionService
httpproxies proxy,proxies projectcontour.io true HTTPProxy
tlscertificatedelegations tlscerts projectcontour.io true TLSCertificateDelegation
- Create the virtual host www.ik8s.io
[root@k8s-master Ingress]# cat httpproxy-demo.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-demo
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io # virtual host
    tls:
      secretName: ik8s-tls
      minimumProtocolVersion: "tlsv1.1" # minimum supported protocol version
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp-deploy # backend Service
      port: 80
    permitInsecure: true # allow plain-text HTTP (disables the default redirect to HTTPS)
[root@k8s-master Ingress]# kubectl apply -f httpproxy-demo.yaml
httpproxy.projectcontour.io/httpproxy-demo configured
- View the proxy with httpproxy (or httpproxies)
[root@k8s-master Ingress]# kubectl get httpproxy
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-demo www.ik8s.io ik8s-tls valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get httpproxies
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-demo www.ik8s.io ik8s-tls valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl describe httpproxy httpproxy-demo
...
Spec:
Routes:
Conditions:
Prefix: /
Permit Insecure: true
Services:
Name: demoapp-deploy
Port: 80
Virtualhost:
Fqdn: www.ik8s.io
Tls:
Minimum Protocol Version: tlsv1.1
Secret Name: ik8s-tls
Status:
Conditions:
Last Transition Time: 2021-09-13T08:44:00Z
Message: Valid HTTPProxy
Observed Generation: 2
Reason: Valid
Status: True
Type: Valid
Current Status: valid
Description: Valid HTTPProxy
Load Balancer:
Events: <none>
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.100.120.94 <none> 8001/TCP 39m
envoy LoadBalancer 10.97.48.41 <pending> 80:32668/TCP,443:32278/TCP 39m
- Add a hosts entry and test access
[root@bigyong ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
...
192.168.54.171 www.ik8s.io
[root@bigyong ~]# curl www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, ServerIP: 192.168.12.39!
[root@bigyong ~]# curl www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-gw6qp, ServerIP: 192.168.113.39!
- HTTPS access
[root@bigyong ~]# curl https://www.ik8s.io:32278
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
[root@bigyong ~]# curl -k https://www.ik8s.io:32278 # ignore the untrusted-certificate error; the request succeeds
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, Se
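As an alternative to editing /etc/hosts, curl's --resolve option can pin the hostname to a node IP for one-off tests (IP and NodePort taken from the service listing above):

```shell
# resolve www.ik8s.io:32278 to the node IP for this request only
curl -k --resolve www.ik8s.io:32278:192.168.54.171 https://www.ik8s.io:32278
```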
Example 1: Access Control
- Create two Deployments, each running a different image version
[root@k8s-master Ingress]# kubectl create deployment demoappv11 --image='ikubernetes/demoapp:v1.1' -n dev
deployment.apps/demoappv11 created
[root@k8s-master Ingress]# kubectl create deployment demoappv12 --image='ikubernetes/demoapp:v1.2' -n dev
deployment.apps/demoappv12 created
- Create the corresponding Services
[root@k8s-master Ingress]# kubectl create service clusterip demoappv11 --tcp=80 -n dev
service/demoappv11 created
[root@k8s-master Ingress]# kubectl create service clusterip demoappv12 --tcp=80 -n dev
service/demoappv12 created
[root@k8s-master Ingress]# kubectl get svc -n dev
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
demoappv11 ClusterIP 10.99.204.65 <none> 80/TCP 19s
demoappv12 ClusterIP 10.97.211.38 <none> 80/TCP 17s
[root@k8s-master Ingress]# kubectl describe svc demoappv11 -n dev
Name: demoappv11
Namespace: dev
Labels: app=demoappv11
Annotations: <none>
Selector: app=demoappv11
Type: ClusterIP
IP: 10.99.204.65
Port: 80 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.12.53:80
Session Affinity: None
Events: <none>
[root@k8s-master Ingress]# kubectl describe svc demoappv12 -n dev
Name: demoappv12
Namespace: dev
Labels: app=demoappv12
Annotations: <none>
Selector: app=demoappv12
Type: ClusterIP
IP: 10.97.211.38
Port: 80 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.51.79:80
Session Affinity: None
Events: <none>
- Access test
[root@k8s-master Ingress]# curl 10.99.204.65
iKubernetes demoapp v1.1 !! ClientIP: 192.168.4.170, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# curl 10.97.211.38
iKubernetes demoapp v1.2 !! ClientIP: 192.168.4.170, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
- Deploy the Envoy HTTPProxy resource
[root@k8s-master Ingress]# cat httpproxy-headers-routing.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-headers-routing
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes: # routing rules
  - conditions:
    - header:
        name: X-Canary # the X-Canary header must be present (e.g. X-Canary: true)
        present: true
    - header:
        name: User-Agent # the User-Agent header must contain "curl"
        contains: curl
    services: # requests matching both conditions route to demoappv11
    - name: demoappv11
      port: 80
  - services: # all other requests route to demoappv12
    - name: demoappv12
      port: 80
[root@k8s-master Ingress]# kubectl apply -f httpproxy-headers-routing.yaml
httpproxy.projectcontour.io/httpproxy-headers-routing unchanged
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-headers-routing www.ilinux.io valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.100.120.94 <none> 8001/TCP 114m
envoy LoadBalancer 10.97.48.41 <pending> 80:32668/TCP,443:32278/TCP 114m
- Access test
[root@bigyong ~]# cat /etc/hosts # hosts entry added
...
192.168.54.171 www.ik8s.io www.ilinux.io
[root@bigyong ~]# curl http://www.ilinux.io # routes to v1.2 by default
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
[root@bigyong ~]# curl http://www.ilinux.io
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
- Because we access with curl, the User-Agent condition is already met; adding the X-Canary: true header satisfies the other condition, so requests route to v1.1
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# kubectl delete -f httpproxy-headers-routing.yaml
httpproxy.projectcontour.io "httpproxy-headers-routing" deleted
Example 2: Traffic Splitting (Canary Release)
- Release to a small share of traffic first, then roll out fully once verified
- Deploy an Envoy HTTPProxy that splits traffic 90% / 10%
[root@k8s-master Ingress]# cat httpproxy-traffic-splitting.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-splitting
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
      weight: 90 # v1.1 gets 90% of the traffic
    - name: demoappv12
      port: 80
      weight: 10 # v1.2 gets 10% of the traffic
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-traffic-splitting www.ilinux.io valid Valid HTTPProxy
- Access test
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done # the v1.1 : v1.2 ratio is roughly 9:1
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
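Rather than eyeballing the loop output, a quick tally over a fixed number of requests makes the split measurable; with the 90/10 weights above, roughly 90 of 100 responses should come from v1.1:

```shell
# count responses per version over 100 requests
for i in $(seq 100); do curl -s http://www.ilinux.io; done \
  | grep -o 'demoapp v1\.[12]' | sort | uniq -c
```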
Example 3: Traffic Mirroring
[root@k8s-master Ingress]# cat httpproxy-traffic-mirror.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-mirror
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
    - name: demoappv12
      port: 80
      mirror: true # mirror traffic to this service; mirrored responses are discarded
[root@k8s-master Ingress]# kubectl apply -f httpproxy-traffic-mirror.yaml
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-traffic-mirror www.ilinux.io valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
demoappv11-59544d568d-5gg72 1/1 Running 0 74m
demoappv12-64c664955b-lkchk 1/1 Running 0 74m
- Access test: every live response comes from v1.1
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
- Check the v1.2 Pod's logs: it received the same mirrored traffic and served it normally
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
demoappv11-59544d568d-5gg72 1/1 Running 0 74m
demoappv12-64c664955b-lkchk 1/1 Running 0 74m
[root@k8s-master Ingress]# kubectl logs demoappv12-64c664955b-lkchk -n dev
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
192.168.4.170 - - [13/Sep/2021 09:35:01] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:24] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:29] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:12] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:25] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:07] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:16] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
[root@k8s-master Ingress]# kubectl delete -f httpproxy-traffic-mirror.yaml
httpproxy.projectcontour.io "httpproxy-traffic-mirror" deleted
Example 4: Custom Load Balancing Strategy
[root@k8s-master Ingress]# cat httpproxy-lb-strategy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-lb-strategy
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
    - name: demoappv12
      port: 80
    loadBalancerPolicy:
      strategy: Random # random load balancing strategy
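Besides Random, strategy accepts the other values listed in the routing specification (RoundRobin, WeightedLeastRequest, Cookie). As a variant sketch, Cookie provides simple session affinity: Envoy sets a session cookie so a given client keeps hitting the same backend.

```yaml
# variant: replace the loadBalancerPolicy block above with
loadBalancerPolicy:
  strategy: Cookie # session affinity via an Envoy-managed cookie
```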
Example 5: HTTPProxy Service Resilience and Health Checks
[root@k8s-master Ingress]# cat httpproxy-retry-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-retry-timeout
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - timeoutPolicy:
      response: 2s # wait at most 2s for a response; otherwise the request times out
      idle: 5s # keep the client connection idle for up to 5s afterwards
    retryPolicy:
      count: 3 # retry up to 3 times
      perTryTimeout: 500ms # timeout per retry attempt
    services:
    - name: demoappv12
      port: 80