Benefits:
Traffic hits the ingress-nginx instance on the node directly and is then forwarded straight to the target pod.
Pods can see the real client IP.
Deploying ingress-nginx on every node is somewhat similar to an istio service-mesh style deployment; each node shares part of the traffic load.
Limitations / drawbacks:
ingress-nginx has to be deployed on every node, which uses a little extra resource.
Ports 80 and 443 on every node are occupied by ingress-nginx, plus any TCP or UDP ports you expose later.
Security: the security impact needs to be evaluated.
Software environment:
kubernetes 1.15.3 / 1.17.0
nginx-ingress-controller 0.25.0 / 0.30.0
Deployment steps:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml -O ingress-nginx-mandatory.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml -O ingress-nginx-service-clusterip.yaml
sed -i 's/kind: Deployment/kind: DaemonSet/g' ingress-nginx-mandatory.yaml
sed -i 's/replicas:/#replicas:/g' ingress-nginx-mandatory.yaml
sed -i -e 's?quay.io?quay.azk8s.cn?g' -e 's?k8s.gcr.io?gcr.azk8s.cn/google-containers?g' ingress-nginx-mandatory.yaml
# Under spec.template.spec, next to serviceAccountName: nginx-ingress-serviceaccount and at the same
# indentation level, add hostNetwork: true and dnsPolicy: "ClusterFirstWithHostNet"
sed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\      hostNetwork: true' ingress-nginx-mandatory.yaml
sed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\      dnsPolicy: "ClusterFirstWithHostNet"' ingress-nginx-mandatory.yaml
sed -i 's/type: NodePort/type: ClusterIP/g' ingress-nginx-service-clusterip.yaml
sed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\      nodeSelector:\n        node-ingress: "true"' ingress-nginx-mandatory.yaml
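# After these sed edits, the relevant part of spec.template.spec in ingress-nginx-mandatory.yaml should
# look roughly like this (a sketch; surrounding fields depend on the manifest version):
#     spec:
#       serviceAccountName: nginx-ingress-serviceaccount
#       nodeSelector:
#         node-ingress: "true"
#       dnsPolicy: "ClusterFirstWithHostNet"
#       hostNetwork: true
#       containers:
#       - name: nginx-ingress-controller
#         # ... image, args, ports as in the original manifest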
kubectl apply -f ingress-nginx-mandatory.yaml
kubectl apply -f ingress-nginx-service-clusterip.yaml
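# A quick sanity check (not part of the original steps): the DaemonSet pods are only scheduled on nodes
# labeled node-ingress="true" (the label command is listed under the maintenance commands below).
kubectl -n ingress-nginx get ds,pods -o wide
# On a labeled node the controller should answer on port 80; an unknown host returns the default-backend 404.
# <node-ip> is a placeholder for one of your labeled nodes.
curl -I http://<node-ip>/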
## end
Edit the startup arguments of the running controller
Add --enable-ssl-passthrough after args: - /nginx-ingress-controller (argocd needs this option).
kubectl edit DaemonSet/nginx-ingress-controller -n ingress-nginx
--enable-ssl-passthrough
Add this flag, then save and exit.
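In the DaemonSet spec this ends up looking like the fragment below (only --enable-ssl-passthrough is new; the existing flags stay as they are):
        args:
          - /nginx-ingress-controller
          # ... existing flags from the manifest ...
          - --enable-ssl-passthrough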
##############
# Some maintenance commands
kubectl label node <node-name> node-ingress="true"
# change svc type
kubectl patch svc ingress-nginx -n ingress-nginx -p '{"spec": {"type": "ClusterIP"}}'
# node selector: append after serviceAccountName (same level, under spec.template.spec)
#   nodeSelector:
#     node-ingress: "true"
Load-balancing layer-4 (TCP/UDP) services
- Create the ConfigMap manifests
- Edit the ingress-nginx Service manifest and add the ports that need to be exposed
Save the manifests below as ingress-nginx-kube-dns.yaml, then apply it.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  #type: LoadBalancer
  type: ClusterIP
  externalIPs:
  - 1.2.3.4
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: dnstcp
    port: 53
    targetPort: 53
    protocol: TCP
  - name: dnsudp
    port: 53
    targetPort: 53
    protocol: UDP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kubectl apply -f ingress-nginx-kube-dns.yaml
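A quick check that port 53 is now proxied by ingress-nginx (assumes dig is available; <node-ip> is a placeholder for one of the labeled nodes):
dig @<node-ip> kubernetes.default.svc.cluster.local +short
dig @<node-ip> kubernetes.default.svc.cluster.local +tcp +short
The first query exercises the udp-services entry, the second (with +tcp) the tcp-services entry.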
High availability
1. Use the node machines as backends of an existing load balancer (nginx, haproxy); a minimal sketch follows after this list.
In this case the ingress Service type can be ClusterIP or LoadBalancer.
externalIPs can also be left unconfigured.
2. Run keepalived on the nodes selected above and configure a virtual IP (VIP).
3. Deploy keepalived outside the cluster and configure LVS plus a VIP.
For methods 2 and 3 the ingress Service type can be ClusterIP or LoadBalancer, and externalIPs must be configured (IPVS DR mode and TUN (tunnel) mode require the VIP to be bound to a local virtual interface on the real servers).
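For method 1, a minimal sketch of the external load balancer, assuming plain nginx with the stream module and 10.1.3.41 / 10.1.3.42 as the ingress nodes (addresses are examples):
# nginx.conf fragment on the external load balancer (stream context, outside the http block)
stream {
    upstream ingress_http  { server 10.1.3.41:80;  server 10.1.3.42:80;  }
    upstream ingress_https { server 10.1.3.41:443; server 10.1.3.42:443; }
    server { listen 80;  proxy_pass ingress_http; }
    server { listen 443; proxy_pass ingress_https; }
}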
Reference configuration for method 2:
Example: 10.1.3.50 is the VIP, 10.1.3.41 and 10.1.3.42 are the node machines
# /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}

vrrp_script chk_process {       # Requires keepalived-1.1.13
    script "killall -0 nginx"   # cheaper than pidof
    interval 2                  # check every 2 seconds
    weight 2                    # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0              # interface
    virtual_router_id 200
    priority 101                # 101 on master, 100 on backup
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ipvsadm
    }
    track_script {
        #chk_process
        #weight 20
    }
    virtual_ipaddress {
        10.1.3.50/24 dev eth0 label eth0:1   # VIP
    }
    unicast_src_ip 10.1.3.41    # My IP
    unicast_peer {
        10.1.3.42               # Peer IP
    }
    # notify_master "/app/basic/keepalived_notify_nginx.sh master"
}

virtual_server 10.1.3.50 80 {   # VIP
    delay_loop 6
    # lb_algo wrr
    lb_algo wlc
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 10.1.3.41 80 {  # My IP , Check Port
        weight 2
        #notify_down "/app/basic/keepalived_notify_nginx.sh down"
        TCP_CHECK {
            connect_timeout 10
            #nb_get_retry 3
            delay_before_retry 3
            connect_port 80     # Check Port
        }
    }
    real_server 10.1.3.42 80 {  # My IP , Check Port
        weight 2
        #notify_down "/app/basic/keepalived_notify_nginx.sh down"
        TCP_CHECK {
            connect_timeout 10
            #nb_get_retry 3
            delay_before_retry 3
            connect_port 80     # Check Port
        }
    }
}
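A few commands to verify the result on the current MASTER node (verification only, not part of the reference config):
systemctl status keepalived     # keepalived should be running on both nodes
ip addr show eth0               # the MASTER should hold 10.1.3.50 on eth0:1
ipvsadm -L -n                   # virtual server 10.1.3.50:80 with real servers 10.1.3.41/42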
Pick 3 node machines to run keepalived, and use the VIP as the entry IP for accessing ingress-nginx from outside the cluster.
type: LoadBalancer
externalIPs:
- 1.2.3.4
With this configuration, when kube-proxy runs in IPVS mode, the externalIPs address is added to the kube-ipvs0 interface on the nodes running the pods, so the real-server (RS) side is effectively already configured.
Then use keepalived (recommended) or ipvsadm on the required network nodes to configure the VIP and point it at the real node machines; either DR mode or tunnel mode can be used.
ip addr add 172.17.8.201/24 dev eth0:0
ipvsadm -A -t 172.17.8.201:80
ipvsadm -a -t 172.17.8.201:80 -r 172.17.8.11:80 -g
ipvsadm -a -t 172.17.8.201:80 -r 172.17.8.12:80 -g
ipvsadm -L -n
Test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
      group: golang
  template:
    metadata:
      labels:
        app: whoami
        group: golang
    spec:
      containers:
      - image: containous/whoami
        name: whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
    group: golang
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: whoami.domain.com
    http:
      paths:
      - backend:
          serviceName: whoami
          servicePort: 80
  - host: whoami
    http:
      paths:
      - backend:
          serviceName: whoami
          servicePort: 80
---
kubectl apply -f whoami.yaml
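With the Ingress in place, the whoami service should be reachable through any labeled node (an illustrative check; <node-ip> is a placeholder and whoami.domain.com does not need to exist in DNS):
curl -H "Host: whoami.domain.com" http://<node-ip>/
# or, resolving the test host explicitly:
curl --resolve whoami.domain.com:80:<node-ip> http://whoami.domain.com/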