My setup:
- The motherboard has 4 PCIe slots: PCIe0/1 (x16), PCIe2/3 (x8), PCIe4/5 (x16), PCIe6 (x1)
- Two I210 Ethernet cards: one in slot PCIe6, one in slot PCIe2
- A Mellanox card in slot PCIe0 (probably irrelevant, since the Mellanox card is in a loopback setup)
I tried to ping a remote server on the network (in this case: 10.76.176.193).
+ Ping with the I210 card on PCIe2 succeeds:
ping -I enP2p1s0 10.76.176.193
PING 10.76.176.193 (10.76.176.193) from 10.76.190.205 enP2p1s0: 56(84) bytes of data.
64 bytes from 10.76.176.193: icmp_seq=1 ttl=63 time=0.535 ms
64 bytes from 10.76.176.193: icmp_seq=2 ttl=63 time=0.361 ms
64 bytes from 10.76.176.193: icmp_seq=3 ttl=63 time=0.316 ms
64 bytes from 10.76.176.193: icmp_seq=4 ttl=63 time=0.334 ms
--- 10.76.176.193 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.316/0.386/0.535/0.089 ms
+ Ping with the I210 card on PCIe6 fails:
ping -I enP6p1s0 10.76.176.193
PING 10.76.176.193 (10.76.176.193) from 10.76.190.210 enP6p1s0: 56(84) bytes of data.
--- 10.76.176.193 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 4999ms
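At this point it is worth checking the kernel's ARP and reverse-path settings: with two NICs on the same subnet, the Linux defaults (`rp_filter=1`, strict reverse-path filtering, and `arp_ignore=0`, where any interface may answer ARP for any local address) commonly produce exactly this one-works-one-doesn't pattern. A read-only diagnostic sketch, using the interface names from the outputs below:

```shell
# Read-only diagnostic: dump the ARP and reverse-path settings that
# matter when two NICs sit on the same subnet.
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.enP2p1s0.rp_filter
sysctl net.ipv4.conf.enP6p1s0.rp_filter
sysctl net.ipv4.conf.all.arp_ignore
sysctl net.ipv4.conf.all.arp_announce
sysctl net.ipv4.conf.all.arp_filter
```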
ifconfig output:
enP2p1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.76.190.205 netmask 255.255.255.0 broadcast 10.76.190.255
inet6 fe80::b2c6:fe34:cba8:f02f prefixlen 64 scopeid 0x20<link>
ether a0:36:9f:d7:71:58 txqueuelen 1000 (Ethernet)
RX packets 622406 bytes 734503804 (700.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 559815 bytes 729105591 (695.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x1080000000-10800fffff
enP6p1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.76.190.210 netmask 255.255.255.0 broadcast 10.76.190.255
inet6 fe80::f9f1:9c32:866f:dc1a prefixlen 64 scopeid 0x20<link>
ether a0:36:9f:d7:73:b0 txqueuelen 1000 (Ethernet)
RX packets 36200 bytes 3105207 (2.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1830 bytes 87880 (85.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x680000000-6800fffff
enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.50.0.1 netmask 255.255.255.0 broadcast 10.50.0.255
inet6 fe80::f652:14ff:fe0b:d230 prefixlen 64 scopeid 0x20<link>
ether f4:52:14:0b:d2:30 txqueuelen 1000 (Ethernet)
RX packets 354914 bytes 124335944 (118.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 22222959 bytes 33472688082 (31.1 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp1s0d1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 11.50.0.1 netmask 255.255.255.0 broadcast 11.50.0.255
inet6 fe80::f652:14ff:fe0b:d231 prefixlen 64 scopeid 0x20<link>
ether f4:52:14:0b:d2:31 txqueuelen 1000 (Ethernet)
RX packets 22222919 bytes 33472684938 (31.1 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 354954 bytes 124339088 (118.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 16 bytes 1360 (1.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1360 (1.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
After this, I tried ifconfig enP2p1s0 down and retried the ping with the I210 card in slot PCIe6; it succeeded:
ifconfig enP2p1s0 down
ping -I enP6p1s0 10.76.176.193
PING 10.76.176.193 (10.76.176.193) from 10.76.190.210 enP6p1s0: 56(84) bytes of data.
64 bytes from 10.76.176.193: icmp_seq=1 ttl=63 time=0.400 ms
64 bytes from 10.76.176.193: icmp_seq=2 ttl=63 time=0.380 ms
64 bytes from 10.76.176.193: icmp_seq=3 ttl=63 time=0.339 ms
64 bytes from 10.76.176.193: icmp_seq=4 ttl=63 time=0.301 ms
64 bytes from 10.76.176.193: icmp_seq=5 ttl=63 time=0.321 ms
64 bytes from 10.76.176.193: icmp_seq=6 ttl=63 time=0.360 ms
64 bytes from 10.76.176.193: icmp_seq=7 ttl=63 time=0.308 ms
--- 10.76.176.193 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6000ms
rtt min/avg/max/mdev = 0.301/0.344/0.400/0.036 ms
ip route output (PASS case):
ip route list
default via 10.76.190.1 dev enP6p1s0 proto static metric 101
10.50.0.0/24 dev enp1s0 proto kernel scope link src 10.50.0.1
10.51.0.1 dev enp1s0d1 scope link
10.76.18.11 via 10.76.190.1 dev enP6p1s0 proto dhcp metric 100
10.76.190.0/24 dev enP6p1s0 proto kernel scope link src 10.76.190.210 metric 100
11.50.0.0/24 dev enp1s0d1 proto kernel scope link src 11.50.0.1
11.51.0.1 dev enp1s0 scope link
Then I tried ifconfig enP2p1s0 up, and the ping failed again:
ifconfig enP2p1s0 up
[67249.376503] IPv6: ADDRCONF(NETDEV_UP): enP2p1s0: link is not ready
[root@dhcp-10-76-190-205 mipham]# [67252.028125] igb 0002:01:00.0 enP2p1s0: igb: enP2p1s0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[67252.038150] IPv6: ADDRCONF(NETDEV_CHANGE): enP2p1s0: link becomes ready
ping -I enP6p1s0 10.76.176.193
PING 10.76.176.193 (10.76.176.193) from 10.76.190.210 enP6p1s0: 56(84) bytes of data.
--- 10.76.176.193 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms
ip route output (FAIL case):
ip route list
default via 10.76.190.1 dev enP2p1s0 proto static metric 100
default via 10.76.190.1 dev enP6p1s0 proto static metric 101
10.50.0.0/24 dev enp1s0 proto kernel scope link src 10.50.0.1
10.51.0.1 dev enp1s0d1 scope link
10.76.18.11 via 10.76.190.1 dev enP6p1s0 proto dhcp metric 100
10.76.190.0/24 dev enP6p1s0 proto kernel scope link src 10.76.190.210 metric 100
10.76.190.0/24 dev enP2p1s0 proto kernel scope link src 10.76.190.205 metric 101
11.50.0.0/24 dev enp1s0d1 proto kernel scope link src 11.50.0.1
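With both ports up there are now two default routes and two connected routes for 10.76.190.0/24, and the kernel prefers the lower-metric entries on enP2p1s0 (metric 100). Which route actually wins for a given destination can be inspected directly; a sketch using the addresses from the outputs above:

```shell
# Ask the kernel which device and source address it would pick for
# the remote host, with and without forcing the PCIe6 card's address.
ip route get 10.76.176.193
ip route get 10.76.176.193 from 10.76.190.210
# The gateway's ARP view also matters; locally, check the neighbour entry:
ip neigh show 10.76.190.1
```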
It looks like the two ports are conflicting with each other in some way, but I don't understand why, or how to run both at the same time.
Can anyone tell me what the cause might be, and how to solve this (running both ports simultaneously)? Thanks in advance.
Answer 1
I can't answer your question directly, but perhaps you can approach it from these two angles:
- Check and make sure both cards are using IPv4 (the IPv6 messages you get on "ifconfig enP2p1s0 up" are a bit odd)
- You might also want to look into NIC bonding; I'm also trying to figure out how to use all three of my Ethernet cards. ^_^
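Besides bonding, the usual way to keep two independent ports working on the same subnet is source-based policy routing, so that traffic from each address always leaves through its own NIC. A minimal sketch, untested on this hardware; the table numbers 101/102 are arbitrary choices, and interface names and addresses are taken from the question:

```shell
# Policy-routing sketch: give each NIC its own routing table keyed
# on its source address. Run as root.
ip route add 10.76.190.0/24 dev enP2p1s0 src 10.76.190.205 table 101
ip route add default via 10.76.190.1 dev enP2p1s0 table 101
ip rule add from 10.76.190.205 table 101

ip route add 10.76.190.0/24 dev enP6p1s0 src 10.76.190.210 table 102
ip route add default via 10.76.190.1 dev enP6p1s0 table 102
ip rule add from 10.76.190.210 table 102

# Optionally also stop one NIC answering ARP for the other's address:
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

These rules are not persistent across reboots; distributions each have their own place (NetworkManager dispatcher scripts, ifcfg route files, etc.) to make them permanent.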