I have been playing with Google Kubernetes Engine clusters recently and have a question about their CNI. From the GCP documentation and other articles I understood that there is a bridge to which all the veth interfaces are connected. Basically, for every container a veth pair is created: one end of the pair lives inside the container, while the other end is attached to the bridge device. When containers on the same node communicate with each other, the packets are switched by this layer 2 bridge device. This is also how the GKE documentation describes it:
https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#pods
https://medium.com/cloudzone/gke-networking-options-explained-demonstrated-5c0253415eba
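To make that model concrete, here is a minimal sketch of the veth-pair-plus-bridge setup, using made-up names (veth-host, veth-cont, br0, and a network namespace called demo standing in for a container). It is not GKE-specific and must be run as root:

ip link add veth-host type veth peer name veth-cont   # create a veth pair
ip netns add demo                                     # namespace standing in for a container
ip link set veth-cont netns demo                      # move one end into the "container"
ip link add br0 type bridge                           # create the bridge device
ip link set veth-host master br0                      # attach the host end to the bridge
ip link set br0 up
ip link set veth-host up
brctl show br0                                        # veth-host now appears under "interfaces"

With several such pairs attached, frames between containers on the same node are switched at layer 2 and never leave the bridge.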
I created a cluster on Google and I can see that there is a bridge device, docker0, but no interfaces are associated with it:
gke-xxxxxxxxx /home/uuuuuuu # brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242fd0b0cf4       no
gke-xxxxxxxxxx /home/uuuuuuu #
Then I created a cluster using VirtualBox, and there I can see interfaces associated with the bridge device:
[root@k8s-2 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
cni0            8000.36dae477639c       no              veth7f6c1f01
                                                        vethccd0d71d
                                                        vethe63e4285
What I would like explained is why, on the Google VM, the veth interfaces are not attached to any bridge device. Is some special feature of the Linux kernel being used in this case?
When I inspect each veth interface on the Google VM, they all have the same IP address, 10.188.2.1:
gke-xxxxxxxxxxxxxxxxxxxxx /home/user.name # ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 169.254.123.1  netmask 255.255.255.0  broadcast 169.254.123.255
        ether 02:42:fd:0b:0c:f4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.10.1.19  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::4001:aff:fe0a:113  prefixlen 64  scopeid 0x20<link>
        ether 42:01:0a:0a:01:13  txqueuelen 1000  (Ethernet)
        RX packets 2192921  bytes 1682211226 (1.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1288701  bytes 468627202 (446.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 276348  bytes 153128345 (146.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 276348  bytes 153128345 (146.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth27cee774: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::10b7:98ff:fe2f:2e08  prefixlen 64  scopeid 0x20<link>
        ether 12:b7:98:2f:2e:08  txqueuelen 0  (Ethernet)
        RX packets 32  bytes 2306 (2.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 710 (710.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth6eba4cdf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::c4e3:b0ff:fe5f:63da  prefixlen 64  scopeid 0x20<link>
        ether c6:e3:b0:5f:63:da  txqueuelen 0  (Ethernet)
        RX packets 537091  bytes 138245354 (131.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 477870  bytes 122515885 (116.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth8bcf1494: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::70cb:c4ff:fe8c:a747  prefixlen 64  scopeid 0x20<link>
        ether 72:cb:c4:8c:a7:47  txqueuelen 0  (Ethernet)
        RX packets 50  bytes 3455 (3.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28  bytes 2842 (2.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethbb2135c7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::1469:daff:fea0:8b5b  prefixlen 64  scopeid 0x20<link>
        ether 16:69:da:a0:8b:5b  txqueuelen 0  (Ethernet)
        RX packets 223995  bytes 82725559 (78.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 239258  bytes 60203574 (57.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vetheee4e8e3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::ec6c:3bff:fef3:70c2  prefixlen 64  scopeid 0x20<link>
        ether ee:6c:3b:f3:70:c2  txqueuelen 0  (Ethernet)
        RX packets 311669  bytes 40562747 (38.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 304461  bytes 628195110 (599.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
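In case it is useful, here are a few commands one could run on the node to dig further, using veth27cee774 from the output above as the example interface:

ip route show                                        # check whether each pod IP shows up as a /32 host route via its veth
ip -d link show veth27cee774                         # interface details; note the absence of any "master <bridge>" attribute
cat /proc/sys/net/ipv4/conf/veth27cee774/proxy_arp   # whether this veth answers ARP on behalf of other addresses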
What is behind these veth interfaces?
Thanks in advance.
Answer 1
You can use the brctl show command to view a node's bridge and interface details if the bridge already has interfaces attached. It seems that you have not added any interfaces to the bridge yet. You can add an interface to the bridge with sudo brctl addif docker0 veth0, after which the same brctl show command will list all the relevant bridge and interface details on the node. Check this document for reference.
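As a rough sketch of that suggestion (veth0 is just a placeholder interface name, and on a managed GKE node these interfaces are created and owned by the CNI plugin, so treat this purely as an experiment):

brctl show                       # list bridges and any interfaces attached to them
sudo brctl addif docker0 veth0   # attach an existing interface to the docker0 bridge
brctl show docker0               # verify: the interface now appears in the interfaces column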