Some usage details of the ipvlan CNI

Author: cloudFans | Published 2021-12-10 09:03

    Option 1: do not assign an IP to eth1 (the ipvlan + whereabouts approach).
    Option 2: assign an IP to eth1, but use ipvlan sub-interfaces so that eth1 itself no longer forwards data (the terway approach).

    Advantage of option 2 over option 1:
    With option 2, pods on the same node can talk to the node directly without going through the VPC gateway.
    With option 1, all pod-to-node traffic has to go through the VPC gateway (eth1 -> gw -> eth0).
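
    For reference, a minimal CNI config in the style of option 1 might look like the sketch below. This is only an illustrative assumption, not the exact config used in this environment; the subnet and gateway values are borrowed from the pod addresses that appear later in this post.

    {
        "cniVersion": "0.3.1",
        "name": "ipvlan-whereabouts",
        "type": "ipvlan",
        "master": "eth1",
        "mode": "l2",
        "ipam": {
            "type": "whereabouts",
            "range": "172.33.0.0/16",
            "gateway": "172.33.0.1"
        }
    }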

    Details on using the master NIC that the terway ipvlan sub-interfaces depend on:

    Once the master NIC is declared as an ipvlan L2 master, that NIC can no longer send traffic outward by itself.

    Its role on the node itself is still intact:

    1. The master NIC's IP can still be pinged.
    2. It acts as a "bridge" that lets all slave sub-interfaces reach each other (a manual reproduction is sketched below).
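
    These two points can be checked by hand, outside of the CNI. The following is a hypothetical sketch: the namespace names, slave names and addresses are made up for illustration; only the master eth1 and the 172.33.0.0/16 pod range come from this setup.

    # create two ipvlan L2 slaves of eth1, each in its own test namespace
    ip netns add ns-a && ip netns add ns-b
    ip link add ipvl_a link eth1 type ipvlan mode l2
    ip link add ipvl_b link eth1 type ipvlan mode l2
    ip link set ipvl_a netns ns-a && ip link set ipvl_b netns ns-b
    ip netns exec ns-a ip addr add 172.33.192.11/16 dev ipvl_a
    ip netns exec ns-b ip addr add 172.33.192.12/16 dev ipvl_b
    ip netns exec ns-a ip link set ipvl_a up
    ip netns exec ns-b ip link set ipvl_b up
    # slave-to-slave traffic is switched inside the master, so this works
    # even though eth1 itself no longer sends traffic outward
    ip netns exec ns-a ping -c 2 172.33.192.12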

    So in the dual-NIC mode currently in use, eth1 still forwards traffic as long as it has not been initialized into ipvlan mode, while eth0 is responsible for the VIP that the load balancer depends on.

    eth0 and eth1 sit in two different subnets but belong to the same VPC and connect to the same software router. To make sure the traffic of the IPVS backend pods behind the VIP goes out via eth0, eth1 must be shut down before the ipvlan CNI sub-interfaces are initialized, so that pod traffic is not forwarded through eth1.
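
    In other words, the bootstrap order on the node matters. A rough sketch of the intended sequence is below; the config path comes from the transcript later in this post, the rest is an inferred outline rather than an exact procedure.

    # stop eth1 from forwarding pod traffic; VIP/IPVS backend traffic now leaves via eth0
    ip link set eth1 down
    # ... install the ipvlan CNI config that declares eth1 as master,
    #     e.g. /etc/cni/net.d/10-terway.conf with "master": "eth1" ...
    # presumably eth1 is brought back up for normal operation; in ipvlan L2 mode
    # it still does not forward outward on its own
    ip link set eth1 up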

    Follow-up question:

    Once ipvlan sub-interfaces have already been created on eth1, what is the impact of bringing eth1 down?

    After eth1 is brought down:

    1. ipvlan pods can no longer ping the gateway.
    2. The local "bridge" function still works; pods on the same node can still reach each other.
    3. eth1 cannot forward traffic outward (exactly as when it is up).

    The transcript below demonstrates this:
    
    
    [root@(l2)k8s-master-1 ~]# grep eth1 /etc/cni/net.d/10-terway.conf && ip link set eth1 down
        "master": "eth1"
    [root@(l2)k8s-master-1 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000
        link/ether 00:00:00:83:c5:17 brd ff:ff:ff:ff:ff:ff
        inet 172.32.16.11/16 brd 172.32.255.255 scope global dynamic noprefixroute eth0
           valid_lft 85132245sec preferred_lft 85132245sec
        inet6 fe80::200:ff:fe83:c517/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST> mtu 1400 qdisc fq_codel state DOWN group default qlen 1000
        link/ether 00:00:00:b2:8f:1b brd ff:ff:ff:ff:ff:ff
    4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
        link/ether 00:00:00:9c:1c:c7 brd ff:ff:ff:ff:ff:ff
    5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000
        link/ether 00:00:00:90:05:f0 brd ff:ff:ff:ff:ff:ff
        inet 10.5.208.208/24 brd 10.5.208.255 scope global dynamic noprefixroute eth3
           valid_lft 85132245sec preferred_lft 85132245sec
        inet6 fe80::47ae:2181:3ed0:2fdb/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:67:72:ec:1a brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    7: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
        link/ether 4e:49:bf:d2:ab:fc brd ff:ff:ff:ff:ff:ff
        inet 10.96.0.1/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
        inet 10.96.0.3/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
        inet 10.97.3.203/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
        inet 10.96.165.187/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
    9: ipvl_3@eth1: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1400 qdisc noqueue state UNKNOWN group default 
        link/ether 00:00:00:b2:8f:1b brd ff:ff:ff:ff:ff:ff
        inet 172.33.192.10/32 scope host ipvl_3
           valid_lft forever preferred_lft forever
        inet6 fe80::2b2:8f1b/64 scope link 
           valid_lft forever preferred_lft forever
    [root@(l2)k8s-master-1 ~]# 
    [root@(l2)k8s-master-1 ~]# 
    [root@(l2)k8s-master-1 ~]# kubectl  get pod -A -o wide | grep k8s-master-1
    default       busybox1056-k8s-master-1               1/1     Running   0          12d     172.33.0.218   k8s-master-1   <none>           <none>
    default       centos-sshd-cluster-41                 1/1     Running   0          6d17h   172.33.0.201   k8s-master-1   <none>           <none>
    default       centos-sshd-cluster-43                 1/1     Running   0          6d17h   172.33.0.208   k8s-master-1   <none>           <none>
    default       sshd-k8s-master-1                      1/1     Running   0          10d     172.33.0.211   k8s-master-1   <none>           <none>
    kube-system   kube-apiserver-k8s-master-1            1/1     Running   6          14d     172.32.16.11   k8s-master-1   <none>           <none>
    kube-system   kube-controller-manager-k8s-master-1   1/1     Running   11         14d     172.32.16.11   k8s-master-1   <none>           <none>
    kube-system   kube-ovn-cni-zzqpz                     1/1     Running   3          13d     172.32.16.11   k8s-master-1   <none>           <none>
    kube-system   kube-proxy-dbbgv                       1/1     Running   4          14d     172.32.16.11   k8s-master-1   <none>           <none>
    kube-system   kube-scheduler-k8s-master-1            1/1     Running   12         14d     172.32.16.11   k8s-master-1   <none>           <none>
    [root@(l2)k8s-master-1 ~]# ping 172.33.0.218
    PING 172.33.0.218 (172.33.0.218) 56(84) bytes of data.
    64 bytes from 172.33.0.218: icmp_seq=1 ttl=64 time=0.106 ms
    64 bytes from 172.33.0.218: icmp_seq=2 ttl=64 time=0.072 ms
    64 bytes from 172.33.0.218: icmp_seq=3 ttl=64 time=0.070 ms
    64 bytes from 172.33.0.218: icmp_seq=4 ttl=64 time=0.082 ms
    64 bytes from 172.33.0.218: icmp_seq=5 ttl=64 time=0.057 ms
    ^C
    --- 172.33.0.218 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4108ms
    rtt min/avg/max/mdev = 0.057/0.077/0.106/0.018 ms
    [root@(l2)k8s-master-1 ~]# 
    [root@(l2)k8s-master-1 ~]# 
    [root@(l2)k8s-master-1 ~]# 
    [root@(l2)k8s-master-1 ~]# kubectl exec -ti sshd-k8s-master-1  -- /bin/sh
    sh-4.2# ping 114.114.114.114
    PING 114.114.114.114 (114.114.114.114) 56(84) bytes of data.
    ^C
    --- 114.114.114.114 ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 1052ms
    
    sh-4.2# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         172.33.0.1      0.0.0.0         UG    0      0        0 eth0
    172.33.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
    172.33.192.10   0.0.0.0         255.255.255.255 UH    0      0        0 eth0
    sh-4.2# ping 172.33.0.1
    PING 172.33.0.1 (172.33.0.1) 56(84) bytes of data.
    ^C
    --- 172.33.0.1 ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 1008ms
    
    sh-4.2# ping 172.33.192.10
    PING 172.33.192.10 (172.33.192.10) 56(84) bytes of data.
    64 bytes from 172.33.192.10: icmp_seq=1 ttl=64 time=0.063 ms
    64 bytes from 172.33.192.10: icmp_seq=2 ttl=64 time=0.126 ms
    ^C
    --- 172.33.192.10 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1048ms
    rtt min/avg/max/mdev = 0.063/0.094/0.126/0.032 ms
    sh-4.2# ping 172.33.0.218
    PING 172.33.0.218 (172.33.0.218) 56(84) bytes of data.
    64 bytes from 172.33.0.218: icmp_seq=1 ttl=64 time=0.058 ms
    64 bytes from 172.33.0.218: icmp_seq=2 ttl=64 time=0.153 ms
    ^C
    --- 172.33.0.218 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1056ms
    rtt min/avg/max/mdev = 0.058/0.105/0.153/0.048 ms
    
    
