Preface
Kong is a powerful, easy-to-use, high-performance gateway component that integrates cleanly with Kubernetes Ingress for flexible route management. If you are not yet familiar with how Kubernetes exposes services to the outside world, we recommend reading these two articles first:
https://www.jianshu.com/p/189fab1845c5/
https://www.jianshu.com/p/97dd4d59ac5a
Basic Concepts
The simplest way for Kubernetes to expose a service externally is NodePort, which maps one host port (30000 and above) to one service. Its drawbacks are that it consumes host ports and offers no flexible routing control.
Kubernetes provides the Ingress scheme to manage a unified entry point for services. It involves two components:
- ingress: responsible for managing routing rules, similar to an nginx conf file - or, if you like, the system hosts file; rules are added and updated declaratively through yaml files applied to the cluster.
- ingress controller: responsible for the externally exposed entry point; in short, the gateway implementation.
By design Kubernetes ships no concrete ingress controller implementation and leaves it to third parties. The common third-party gateway components adapt themselves to Kubernetes: by talking to the Kubernetes API they dynamically watch for Ingress rule changes in the cluster, read the rules, and render them into their own configuration format for loading. You can think of the ingress controller as an abstract class defined by Kubernetes, with each gateway component being a concrete implementation of it.
For a detailed comparison of ingress controller options, see https://www.cnblogs.com/upyun/p/12372107.html
In this article we use the Kong gateway as the implementation.
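To make the split concrete, a routing rule is nothing more than a small manifest like the sketch below (the host and service names are placeholders): the Ingress object only declares the rule, and whichever ingress controller is installed turns it into live gateway configuration.

```yaml
# Minimal illustrative Ingress rule; demo.example.com and my-service are placeholders
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
```

A full, working instance of this pattern appears in the demo section later in this article.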
1. Installing PostgreSQL
Pick a server and pull the image. We use version 9.5 (Kong supports PostgreSQL 9.4 and above).
docker pull docker.mirrors.ustc.edu.cn/library/postgres:9.5 # pull the image from a mirror
docker tag docker.mirrors.ustc.edu.cn/library/postgres:9.5 postgres:9.5 # retag so the run command below can use the short name
mkdir /data/postgresql # create the data directory
chmod 777 /data/postgresql # grant permissions on the directory
docker run -p 5432:5432 -v /data/postgresql:/var/lib/postgresql/data -e POSTGRES_PASSWORD=123456 -e TZ=PRC -d --name=postgres postgres:9.5
Parameter notes:
-p port mapping
-v persist data to the mapped host directory
-e POSTGRES_PASSWORD password (the default user is postgres)
-e TZ=PRC time zone, China
-d run in the background
--name container name
Create the kong user and database
Enter the container:
docker exec -it postgres /bin/bash
su - postgres # switch to the postgres account
psql # open the psql shell
create user kong with password '123456';
create database kong owner kong; -- create the database and set its owner
create database konga owner postgres; -- optional: the Konga UI installed later expects a konga database
\l -- \l lists the databases
Create the kong namespace, which all the components below will share
kong-namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
name: kong
kubectl apply -f kong-namespaces.yaml # create the namespace
On the master node we create a connection to the database
postgres-service.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: my-postgres
namespace: kong
subsets:
- addresses:
- ip: 192.168.0.230
ports:
- port: 5432
---
apiVersion: v1
kind: Service
metadata:
name: my-postgres
namespace: kong
spec:
type: NodePort
ports:
- port: 5432
protocol: TCP
targetPort: 5432
nodePort: 30432
kubectl apply -f postgres-service.yaml # create the connection to Kong's postgresql
The point of creating this connection is that we can then reach the database through its serviceName. We generally recommend deploying resources that Kubernetes does not strictly need - db/es/redis/mq and the like - outside the cluster, which reduces the complexity Kubernetes has to manage; for each such externally deployed resource, add a Kubernetes Endpoints/Service pair describing its address, which keeps management flexible and calls convenient.
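When the external resource is reachable through a DNS name rather than a fixed IP, a lighter-weight sketch of the same idea is an ExternalName Service, which resolves the service name via a DNS CNAME instead of maintaining an Endpoints object (db.example.com is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-postgres
  namespace: kong
spec:
  type: ExternalName
  externalName: db.example.com # placeholder: DNS name of the external database
```

With a fixed IP, as in this article, the Endpoints/Service pair above remains the right choice.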
2. Installing Kong
Label the Kong nodes
In production we usually deploy Kong on several nodes that share a VIP as the NLB scheme. By default Kubernetes schedules pods onto arbitrary nodes; to guarantee that the Kong pods always land on the specific nodes carrying the VIP, we label those nodes. Kong is then deployed only on machines carrying the label - unlabeled nodes are skipped.
kubectl get nodes --show-labels # list node labels
kubectl label nodes k8s-node1 node=kong # apply the label used later (the key/value pair is user-defined, but must match the nodeSelector in the Deployment below)
-----
kubectl label nodes k8s-node1 node=gateway --overwrite # modify/overwrite a label
kubectl label nodes k8s-node1 key- # delete a label
-----
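The label only matters once a workload selects it. In the kong-gateway.yaml Deployment further below this is done with a nodeSelector; in isolation, the relevant pod-template fragment looks like this:

```yaml
# Pod template fragment: schedule only onto nodes labeled node=kong
spec:
  nodeSelector:
    node: kong
```

If the pods stay Pending, the usual cause is that no node carries a matching label.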
Create kong-gateway.yaml
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongcredentials.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .type
description: Type of credential
name: Credential-type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .consumerRef
description: Owner of the credential
name: Consumer-Ref
type: string
group: configuration.konghq.com
names:
kind: KongCredential
plural: kongcredentials
scope: Namespaced
validation:
openAPIV3Schema:
properties:
consumerRef:
type: string
type:
type: string
required:
- consumerRef
- type
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
type: string
type: array
regex_priority:
type: integer
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
type: object
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
validation:
openAPIV3Schema:
properties:
config:
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongcredentials
- kongconsumers
- kongingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resourceNames:
- ingress-controller-leader-kong
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
data:
servers.conf: |
# Prometheus metrics server
server {
server_name kong_prometheus_exporter;
listen 0.0.0.0:9542; # can be any other port as well
access_log off;
location /metrics {
default_type text/plain;
content_by_lua_block {
local prometheus = require "kong.plugins.prometheus.exporter"
prometheus:collect()
}
}
location /nginx_status {
internal;
stub_status;
}
}
# Health check server
server {
server_name kong_health_check;
listen 0.0.0.0:9001; # can be any other port as well
access_log off;
location /health {
return 200;
}
}
kind: ConfigMap
metadata:
name: kong-server-blocks
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
#service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
#externalTrafficPolicy: Local
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
name: kong-ingress-controller
namespace: kong
spec:
type: NodePort
ports:
- name: kong-admin
port: 8001
targetPort: 8001
nodePort: 30001
protocol: TCP
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
prometheus.io/port: "9542"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
nodeSelector:
node: kong
#gateway: web
containers:
- env:
- name: KONG_DATABASE
value: postgres
- name: KONG_PG_HOST
value: my-postgres.kong
- name: KONG_PG_PASSWORD
          value: "123456" # change to your own password
- name: KONG_NGINX_WORKER_PROCESSES
value: "8"
- name: KONG_NGINX_HTTP_INCLUDE
value: /kong/servers.conf
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_ADMIN_LISTEN
value: 0.0.0.0:8001, 0.0.0.0:8444 ssl
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:80, 0.0.0.0:443 ssl http2
        image: 192.168.0.230:8083/kong/kong:1.3.0 # change to your own registry/image
securityContext:
runAsUser: 0
#capabilities:
privileged: true
# add:
# - NET_BIND_SERVICE
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9001
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 80
name: proxy
protocol: TCP
- containerPort: 443
name: proxy-ssl
protocol: TCP
- containerPort: 9542
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9001
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
#securityContext:
# runAsUser: 0
volumeMounts:
- mountPath: /kong
name: kong-server-blocks
- args:
- /kong-ingress-controller
- --kong-url=https://localhost:8444
- --admin-tls-skip-verify
- --publish-service=kong/kong-proxy
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
        image: 192.168.0.230:8083/kong/kong-ingress-controller:0.6.2 # change to your own registry/image
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
initContainers:
- command:
- /bin/sh
- -c
- while true; do kong migrations list; if [[ 0 -eq $? ]]; then exit 0; fi;
sleep 2; done;
env:
- name: KONG_PG_HOST
value: my-postgres.kong
        - name: KONG_PG_PASSWORD
          value: "123456" # must match the password of the kong database user; change to your own
        image: 192.168.0.230:8083/kong/kong:1.3.0 # change to your own registry/image
name: wait-for-migrations
serviceAccountName: kong-serviceaccount
volumes:
- configMap:
name: kong-server-blocks
name: kong-server-blocks
---
apiVersion: batch/v1
kind: Job
metadata:
name: kong-migrations
namespace: kong
spec:
template:
metadata:
name: kong-migrations
spec:
containers:
- command:
- /bin/sh
- -c
- kong migrations bootstrap
env:
- name: KONG_PG_PASSWORD
          value: "123456" # change to your own password
- name: KONG_PG_HOST
value: my-postgres.kong
- name: KONG_PG_PORT
value: "5432"
        image: 192.168.0.230:8083/kong/kong:1.3.0 # change to your own registry/image
name: kong-migrations
initContainers:
- command:
- /bin/sh
- -c
- until nc -zv $KONG_PG_HOST $KONG_PG_PORT -w1; do echo 'waiting for db';
sleep 1; done
env:
- name: KONG_PG_HOST
value: my-postgres.kong
- name: KONG_PG_PORT
value: "5432"
image: busybox:latest
name: wait-for-postgres
restartPolicy: OnFailure
Installing Konga
Konga is an open-source UI for Kong; with it you can view and manage Kong's routing rules visually in the browser. This component is optional.
Note: once Kong is integrated with Kubernetes Ingress, managing routes from Konga is not recommended; routes should be managed through Kubernetes Ingress and merely consumed by Kong.
---
***konga-deploy.yaml***
#deploy
apiVersion: apps/v1
kind: Deployment
metadata:
name: kong-konga
namespace: kong
spec:
selector:
matchLabels:
app: kong-konga
replicas: 1
template:
metadata:
labels:
app: kong-konga
spec:
#inodeSelector:
# node: worker
containers:
- name: kong-konga
image: pantsel/konga:0.14.7
imagePullPolicy: IfNotPresent
env:
- name: DB_ADAPTER
value: postgres
- name: DB_HOST
          # service-name.namespace
value: my-postgres.kong
- name: DB_PORT
value: "5432"
- name: DB_USER
value: postgres
- name: DB_DATABASE
value: konga
- name: DB_PASSWORD
          value: "123456" # change to your own password
- name: NODE_ENV
#value: production
value: development
- name: TZ
value: Asia/Shanghai
ports:
- containerPort: 1337
---
#service
apiVersion: v1
kind: Service
metadata:
name: kong-konga
namespace: kong
spec:
ports:
- port: 80
protocol: TCP
targetPort: 1337
nodePort: 31337
type: NodePort
selector:
app: kong-konga
---
So far we have created four yaml files in total; adjust the image addresses, IPs, and passwords to match your own environment.
Apply the four files in order:
kubectl apply -f kong-namespaces.yaml # create the kong namespace
kubectl apply -f postgres-service.yaml # create the connection to Kong's postgresql
kubectl apply -f kong-gateway.yaml # create the Kong gateway - the most important step
kubectl apply -f konga-deploy.yaml # create the Konga UI
3. Using the Kong Gateway
Basic test and configuration
With everything deployed, let's test:
http://192.168.0.137 - the node where the Kong gateway is installed
The browser returns:
{"message":"no Route matched with those values"}
This message comes from Kong itself: Kong is installed, but no routes are configured yet, so it does not know where to forward the request.
Next we open Konga to wire things up. On the following page we register an administrator account; the account name is up to you.
http://192.168.0.137:31337/register
After registering and logging in you are prompted to bind Kong; note that the Kong admin URL must be an internal address.
Bind Konga to kong-admin
If the binding succeeds, Konga displays Kong's version number; click ACTIVE in the list to enable the connection.
Once activated, extra management menus appear in the left-hand menu; click the ROUTES menu to inspect the routes.
Route management
Because we had already created some ingress routes earlier, they have been picked up automatically by the Kong ingress controller.
Creating a new routing rule to verify
Our domain already has an A record pointing to Kong's public server: api.xxx.cn. If you are testing locally, simulate it with the hosts file.
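To simulate the A record locally, a single hosts-file entry pointing the domain at the Kong gateway node is enough (the IP is the gateway node used throughout this article):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
192.168.0.137 api.xxx.cn
```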
Create a demo deployment using a demo image we built earlier (a .NET Core program).
The key part is the ingress configuration.
#kong-netcore-demo.yaml - test program deployment
#create namespace
apiVersion: v1
kind: Namespace
metadata:
name: mydemos
spec:
finalizers:
- kubernetes
---
#deploy
apiVersion: apps/v1
kind: Deployment
#kind: StatefulSet
metadata:
name: netcore-02-blue
namespace: mydemos
spec:
selector:
matchLabels:
app: netcore-02-blue
replicas: 1
template:
metadata:
labels:
app: netcore-02-blue
spec:
containers:
- name: netcore-02-blue
image: 192.168.0.230:8083/my/netcore-02:2.0.7
imagePullPolicy: Always
env:
- name: TZ
value: Asia/Shanghai
ports:
- containerPort: 8020
---
#service
apiVersion: v1
kind: Service
metadata:
name: netcore-02-blue
namespace: mydemos
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8020
selector:
app: netcore-02-blue
type: NodePort
sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: netcore-02-blue
namespace: mydemos
spec:
rules:
  #host: the gateway domain or IP - the key to routing
- host: api.xxx.cn
http:
paths:
      #path is also important; it does not need to match the service name exactly
- path: /netcore-02/
backend:
serviceName: netcore-02-blue
servicePort: 80
Run it:
[root@k8s-master es]# kubectl apply -f kong-netcore-demo.yaml
namespace/mydemos created
deployment.apps/netcore-02-blue created
service/netcore-02-blue created
ingress.extensions/netcore-02-blue created
[root@k8s-master es]# kubectl get pods -n mydemos
NAME READY STATUS RESTARTS AGE
netcore-02-blue-7ddc75cd5d-tfpc2 1/1 Running 0 13s
Open http://api.xxx.cn/netcore-02/default/index - note that this URL has three parts:
1. api.xxx.cn: the host from the ingress
2. netcore-02: the path from the ingress
3. default/index: the API route inside your program - here the default controller/action
If you now refresh Konga's service/routes pages, you can see the newly created service and its route immediately: kong-ingress collects and loads them from the ingress automatically, almost in real time.
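Per-route Kong behavior (timeouts, path stripping, protocols) can be tuned declaratively with the KongIngress CRD installed by kong-gateway.yaml. A sketch with illustrative values and a hypothetical name - note that KongIngress places proxy/route at the top level, not under spec:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: netcore-02-rules # hypothetical name
  namespace: mydemos
proxy:
  connect_timeout: 10000
  read_timeout: 10000
route:
  strip_path: true
  protocols:
  - http
  - https
```

It is attached to the demo ingress with an annotation such as configuration.konghq.com: netcore-02-rules; verify the exact annotation name against the documentation of your kong-ingress-controller version.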
In the same way, we can migrate domains previously forwarded through nginx over to kong-ingress.
Take forwarding Apollo's portal UI as an example
**apollo-portal-kong.yaml**
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: apollo-config-portal
namespace: apollo
spec:
rules:
  - host: config.xxx.cn # host domain
http:
paths:
      - path: / # path
backend:
          serviceName: service-apollo-portal-server # apollo's service name
servicePort: 8070
Now Apollo's portal UI is reachable at config.xxx.cn.
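Beyond routing, the KongPlugin CRD from kong-gateway.yaml lets you attach Kong plugins declaratively. As a hedged sketch, a rate-limiting plugin for the portal might look like this (the name and limits are illustrative):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-minute # hypothetical name
  namespace: apollo
plugin: rate-limiting
config:
  minute: 30
  policy: local
```

Bind it to the ingress with an annotation such as plugins.konghq.com: rl-by-minute; as with KongIngress, verify the annotation name against your controller version's documentation.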
Appendix: ingress management commands
If stale ingress data cannot be cleaned up otherwise, the following commands remove it
[root@master1 apollo]# kubectl get ingress -n kong # list ingresses
NAME HOSTS ADDRESS PORTS AGE
xxx-k8s-web k8s.xxx.cn 192.168.0.28 80 13m
xxx-konga-web konga.xxx.cn 192.168.0.137 80 18m
[root@master1 apollo]# kubectl delete ingress xxx-k8s-web -n kong # delete one
ingress.extensions "xxx-k8s-web" deleted
[root@master1 apollo]# kubectl get ingress -n kong # list again
NAME HOSTS ADDRESS PORTS AGE
xxx-konga-web konga.xxx.cn 192.168.0.137 80 18m
4. Summary
This article used Kong to implement the ingress controller function; other gateways can fill the same role.
The Kubernetes networking stack is not easy to master - read widely and think it through.
Understand the principle before relying on ingress: Kong reads the ingress configuration to fulfil its controller role, but using Kong itself to control the ingress configuration is not recommended.