
Pod Network Performance Testing

Author: 行者深蓝 | Published 2021-08-23 18:09

    Environment

    * Cluster 1: Uk8s   1.20.6    Nodes: 4 cores / 8 GB    iptables (ucloud vpc cni)
    * Cluster 2: Uk8s   1.20.6    Nodes: 4 cores / 8 GB    ipvs (ucloud vpc cni)
    * Cluster 3: K8S    1.20.6    Nodes: 4 cores / 8 GB    ebpf (cilium/ipvlan)
    * Cluster 4: ACK    1.20      Nodes: 4 cores / 8 GB    ebpf (Terway/ipvlan)
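
    If you need to confirm which forwarding mode kube-proxy is actually using on a cluster, one quick check is sketched below. It assumes a conventional kube-proxy setup whose configuration lives in the kube-proxy ConfigMap in kube-system and whose pods carry the k8s-app=kube-proxy label; managed offerings may expose this differently.

    # Mode declared in the kube-proxy ConfigMap (an empty value normally means the iptables default)
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E '^\s*mode:'

    # Cross-check against what the running kube-proxy logs at startup
    kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=200 | grep -i 'proxier'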
    

    Test Tools

    The sirot/netperf-latest image, which ships both the netperf and iperf tools.
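
    Before benchmarking it is worth a quick sanity check that both tools are present in the image. A throwaway pod does the job; the version flags below are assumed to be supported by the iperf3 and netperf binaries bundled in sirot/netperf-latest.

    # One-off pod from the test image; removed automatically on exit
    kubectl run netperf-check --rm -it --restart=Never \
      --image=sirot/netperf-latest -- sh -c 'iperf3 --version; netperf -V'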

    • Prepare the test environment
    # Optional helpers: a scratch pod for ad-hoc checks and a look at node labels
    # kubectl run -it --rm --restart=Never busybox --image=busybox sh
    # kubectl get nodes --show-labels
    
    # Clean up anything left over from a previous run
    kubectl delete svc --all -n network-bench
    kubectl delete deploy --all -n network-bench
    kubectl delete pods --all -n network-bench
    
    kubectl create ns network-bench
    
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: netperf-server
      namespace: network-bench
      labels:
        app: netperf-server
        role: local
    spec:
      containers:
      - image: sirot/netperf-latest
        command: ["/bin/sh","-c","netserver -p 4444 -4; iperf3 -s -i 1;"] 
        imagePullPolicy: IfNotPresent
        name: netperf
        ports:
        - name: netperf-port
          containerPort: 4444
        - name: iperf-port
          containerPort: 5201
      restartPolicy: Always
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: role
                operator: In
                values:
                - local
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: role
                operator: In
                values:
                - remote
            topologyKey: kubernetes.io/hostname
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: netperf-headless-svc
      namespace: network-bench
      labels:
        app: netperf-headless-svc
    spec:
      ports:
      - name: netperf-port
        port: 4444
        targetPort: 4444
      - name: iperf-port
        port: 5201
        targetPort: 5201
      clusterIP: None
      selector:
        app: netperf-server
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: netperf-client
      namespace: network-bench
      labels:
        app: netperf-client
        role: local
    spec:
      containers:
      - image: sirot/netperf-latest
        command:
          - sleep
          - "7200"
        imagePullPolicy: IfNotPresent
        name: netperf
      restartPolicy: Always
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: role
                operator: In
                values:
                - local
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: role
                operator: In
                values:
                - remote
            topologyKey: kubernetes.io/hostname
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: netperf-deploy
      namespace: network-bench
      labels:
        app: netperf-deploy
    spec:
      replicas: 1
      selector:
        matchLabels:
          role: remote
      template:
        metadata:
          labels:
            app: netperf-remote
            role: remote
        spec:
          containers:
          - name: netperf-remote
            image: sirot/netperf-latest
            command: ["/bin/sh","-c","netserver -p 4444 -4; iperf3 -s -i 1;"] 
            ports:
            - name: netperf-port
              containerPort: 4444
            - name: iperf-port
              containerPort: 5201
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: role
                    operator: In
                    values:
                    - local
                topologyKey: kubernetes.io/hostname
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: netperf-remote-svc
      namespace: network-bench
    spec:
      selector:
        role: remote
      type: ClusterIP
      ports:
      - name: netperf-port
        port: 4444
        targetPort: 4444
      - name: iperf-port
        port: 5201
        targetPort: 5201
    EOF
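
    Before running any tests, confirm that the affinity rules did what they are supposed to: netperf-client and netperf-server should land on the same node, while the Deployment's pod should land on a different one.

    # The NODE column should show client and server co-located, and the deploy pod elsewhere
    kubectl get pods -n network-bench -o wide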
    
    • Run the test script
    #!/bin/bash
    
    echo "# Pod Network Test Result"
      echo "## iperf_tcp_pod_to_pod: "
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'iperf3 -c netperf-headless-svc -t 10' | tail -n 5
      
      echo "## iperf_udp_pod_to_pod: "
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'iperf3 -u -c netperf-headless-svc -t 10' | tail -n 5
    
      echo "## netperf_tcp_rr_pod_to_pod: "
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'netperf -t TCP_RR -H netperf-headless-svc -p 4444 -l 10' | tail -n 5
    
      echo "## netperf_tcp_crr_pod_to_pod:"
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'netperf -t TCP_CRR -H netperf-headless-svc -p 4444 -l 10' | tail -n 5
    
    
      # Pod created by the Deployment, i.e. the one scheduled on a different node
      netperf_pod=$(kubectl get pods -n network-bench | grep deploy | awk '{print $1}')
    
      echo "## iperf_tcp_pod_to_pod_over_node:"
      kubectl exec -t -i ${netperf_pod} -n network-bench -- sh -c 'iperf3 -c netperf-headless-svc -t 10' | tail -n 5
    
      echo "## iperf_udp_pod_to_pod_over_node"
      kubectl exec -t -i ${netperf_pod} -n network-bench -- sh -c 'iperf3 -u -c netperf-headless-svc -t 10' | tail -n 5
    
      echo "## netperf_tcp_rr_pod_to_pod_over_node: "
      kubectl exec -t -i ${netperf_pod} -n network-bench -- sh -c 'netperf -t TCP_RR -H netperf-headless-svc -p 4444 -l 10' | tail -n 5
    
      echo "## netperf_tcp_crr_pod_to_pod_over_node:"
      kubectl exec -t -i ${netperf_pod} -n network-bench -- sh -c 'netperf -t TCP_CRR -H netperf-headless-svc -p 4444 -l 10' | tail -n 5
    
      echo "## iperf3_tcp_pod_to_remote_svc" 
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'iperf3 -c netperf-remote-svc -t 10'  | tail -n 6
    
      echo "## iperf3_udp_pod_to_remote_svc" 
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'iperf3 -u -c netperf-remote-svc -t 10' | tail -n 6
    
      echo "## netperf_tcp_rr_pod_to_remote_svc"
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'netperf -t TCP_RR -H netperf-remote-svc -p 4444 -l 10' | tail -n 6
    
      echo "## netperf_tcp_crr_pod_to_remote_svc"
      kubectl exec -t -i netperf-client -n network-bench -- sh -c 'netperf -t TCP_CRR -H netperf-remote-svc -p 4444 -l 10' | tail -n 6
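
    The script prints markdown-style headings, so redirecting its output to a file yields a small report per cluster. The script and output file names below are only examples.

    # Save the script above as pod-net-bench.sh, then run it against the current cluster
    chmod +x pod-net-bench.sh
    ./pod-net-bench.sh | tee "results-$(kubectl config current-context).md"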
    
    
    

    TestCase

    1. iperf reference test case
    Server: iperf3 -s -i 1
    Client (TCP): iperf3 -c iperf3_server_ip -t 60
    Client (UDP): iperf3 -u -c iperf3_server_ip -t 60
    
    2. netperf reference test case
    Server: netserver -p 4444 -D -4
    Client (TCP_RR): netperf -t TCP_RR -H netperf_server_ip -p 4444
    Client (TCP_CRR): netperf -t TCP_CRR -H netperf_server_ip -p 4444
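
    When scripting these cases it can help to extract just the headline numbers. A minimal sketch, assuming the iperf3 build supports JSON output (-J), that jq is available where the commands run, and that netperf's default TCP_RR/TCP_CRR output ends with the usual two-line footer (result line followed by the remote socket sizes):

    # iperf3: receiver-side TCP throughput in Gbit/s, via JSON output
    iperf3 -J -c netperf-headless-svc -t 10 | jq '.end.sum_received.bits_per_second / 1e9'
    
    # netperf: transactions per second, the 6th field of the result line
    # (second-to-last line of the default output)
    netperf -t TCP_RR -H netperf-headless-svc -p 4444 -l 10 | tail -n 2 | head -n 1 | awk '{print $6}'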
    

    Test Scenarios

    1. Pod-to-pod network performance on the same node
    2. Pod-to-pod network performance across nodes
    3. Pod-to-ClusterIP network performance

    Test Results

    | Case | iptables | ipvs | ebpf |
    | --- | --- | --- | --- |
    | iperf_tcp_pod_to_pod | 20.1 Gbits/sec | 19.0 Gbits/sec | 20.6 Gbits/sec |
    | iperf_udp_pod_to_pod | 1.04 Mbits/sec | 1.04 Mbits/sec | 1.04 Mbits/sec |
    | netperf_tcp_rr_pod_to_pod | 82203.75/sec | 81789.85/sec | 93794.91/sec |
    | netperf_tcp_crr_pod_to_pod | 18387.24/sec | 18762.88/sec | 11449.55/sec |
    | iperf_tcp_pod_to_pod_over_node | 2.01 Gbits/sec | 1.95 Gbits/sec | 1.95 Gbits/sec |
    | iperf_udp_pod_to_pod_over_node | 1.04 Mbits/sec | 1.04 Mbits/sec | 1.04 Mbits/sec |
    | netperf_tcp_rr_pod_to_pod_over_node | 21380.13/sec | 22005.29/sec | 18414.08/sec |
    | netperf_tcp_crr_pod_to_pod_over_node | 5005.35/sec | 5863.64/sec | 2793.67/sec |
    | iperf_tcp_pod_to_remote_svc | 2.08 Gbits/sec | 1.95 Gbits/sec | 1.95 Gbits/sec |
    | iperf_udp_pod_to_remote_svc | ---- | ---- | 1.04 Mbits/sec |
    | netperf_tcp_rr_pod_to_remote_svc | ---- | ---- | ---- |
    | netperf_tcp_crr_pod_to_remote_svc | ---- | ---- | ---- |

    Cloud Provider K8S Cluster Comparison

    | Case | Alibaba Cloud (Terway/ipvlan) | Ucloud (vpc cni/ipvs) | Self-managed K8S on Ucloud (cilium/ipvlan) |
    | --- | --- | --- | --- |
    | iperf_tcp_pod_to_pod | 34.6 Gbits/sec | 19.0 Gbits/sec | 20.6 Gbits/sec |
    | iperf_udp_pod_to_pod | 1.04 Mbits/sec | 1.04 Mbits/sec | 1.04 Mbits/sec |
    | netperf_tcp_rr_pod_to_pod | 56016.60/sec | 81789.85/sec | 93794.91/sec |
    | netperf_tcp_crr_pod_to_pod | 17168.02/sec | 18762.88/sec | 11449.55/sec |
    | iperf_tcp_pod_to_pod_over_node | 11.7 Gbits/sec | 1.95 Gbits/sec | 1.95 Gbits/sec |
    | iperf_udp_pod_to_pod_over_node | 1.04 Mbits/sec | 1.04 Mbits/sec | 1.04 Mbits/sec |
    | netperf_tcp_rr_pod_to_pod_over_node | 13334.22/sec | 22005.29/sec | 18414.08/sec |
    | netperf_tcp_crr_pod_to_pod_over_node | 3141.96/sec | 5863.64/sec | 2793.67/sec |
    | iperf_tcp_pod_to_remote_svc | 11.6 Gbits/sec | 1.95 Gbits/sec | 1.95 Gbits/sec |
    | iperf_udp_pod_to_remote_svc | 1.04 Mbits/sec | ---- | 1.04 Mbits/sec |
    | netperf_tcp_rr_pod_to_remote_svc | ---- | ---- | ---- |
    | netperf_tcp_crr_pod_to_remote_svc | ---- | ---- | ---- |

    Conclusions and Analysis

    1. Cilium CNI
    • Delivers the highest same-node pod-to-pod bandwidth and the highest transaction rate in the tcp_rr test
    • Cross-node pod throughput is lower than with the Ucloud VPC CNI
    • Cross-node pod tcp_crr reaches only about half the rate of the Ucloud VPC CNI
    2. The Ucloud VPC CNI in iptables forwarding mode achieves the highest cross-node pod bandwidth
    3. The Ucloud VPC CNI in ipvs forwarding mode achieves the highest transaction rates in the cross-node tcp_rr and tcp_crr tests
    4. Alibaba Cloud (Terway/ipvlan) delivers the highest pod network bandwidth, both same-node and cross-node, approaching the performance limit of the host node's network

    The above is only a baseline benchmark of pod networking in these Kubernetes clusters.

    References

    1. 山外笔记: Netperf network performance testing tool tutorial
    https://www.cnblogs.com/davidesun/p/12726006.html
    2. Alibaba Cloud: Network performance test methods
    https://help.aliyun.com/knowledge_detail/55757.html#HFXbx
