IP routing with two NICs on two networks

I am rewriting this question for clarity and to make it more useful to the community.

I have a server with two different NICs. The first is the onboard 10Gb Ethernet device eno2, which I have given an IP on the MGMT network. The second is an Intel card with four 10Gb ports. I bonded those ports using 802.3ad link aggregation and bridged the bond, assigning the bridge interface br0 an IP on the LAB network.

The file server and the GPU servers currently communicate over the LAB network, and throughput is excellent.

All of the network configuration lives in netplan, but I have been editing routes and rules by hand with ip to find the right configuration. Once it works, I can translate it back into netplan.
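For what it's worth, netplan can express per-interface policy rules directly via routing-policy, so a working ip configuration should translate fairly directly. A hedged sketch (file name, table number, and addresses are assumptions based on the output further down):

```yaml
# /etc/netplan/01-br0.yaml (fragment; names and addresses assumed)
network:
  version: 2
  bridges:
    br0:
      addresses: [10.10.0.71/24]
      routes:
        - to: default
          via: 10.10.0.1
          table: 102
      routing-policy:
        - from: 10.10.0.71/24
          table: 102
```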

Question

How should the routing tables and rules be configured so that both NICs can reach the internet, and reply traffic leaves via the interface it arrived on, i.e. stays on the same network?

This question is the closest thing I could find, but I still cannot get outbound traffic working.
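For context, the usual approach is source-based policy routing: a rule sends packets sourced from the br0 address to a dedicated table, which carries its own default route. A minimal sketch using the addresses from the output below (the priority value is arbitrary; this is not verified against this setup):

```shell
# Direct traffic sourced from the br0 address to table 102
ip rule add from 10.10.0.71/32 table 102 priority 100

# Table 102 needs the connected network as well as its own default route,
# otherwise the gateway 10.10.0.1 cannot be resolved during the lookup
ip route add 10.10.0.0/24 dev br0 src 10.10.0.71 table 102
ip route add default via 10.10.0.1 dev br0 table 102
```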

Routes in the default table

user@server1:~$ ip r s tab 254
default via 192.1.1.1 dev eno2 proto static
10.10.0.0/24 dev br0 proto kernel scope link src 10.10.0.71
192.1.1.0/24 dev eno2 proto kernel scope link src 192.1.1.105
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

Routes in the second table

user@server1:~$ ip r s tab 102
default via 10.10.0.1 dev br0 proto static
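Note that table 102 holds only a default route. When a lookup is confined to this table, there is no connected route through which to reach the gateway 10.10.0.1, so adding one is likely needed (a sketch, assuming table 102 is kept):

```shell
ip route add 10.10.0.0/24 dev br0 src 10.10.0.71 table 102
```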

Testing

I can ping the server from another machine on the LAN (on a third network):

❯ ping -c 1 192.1.1.105
PING 192.1.1.105 (192.1.1.105): 56 data bytes
64 bytes from 192.1.1.105: icmp_seq=0 ttl=63 time=4.716 ms

--- 192.1.1.105 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 4.716/4.716/4.716/0.000 ms
❯ ping -c 1 10.10.0.71
PING 10.10.0.71 (10.10.0.71): 56 data bytes
64 bytes from 10.10.0.71: icmp_seq=0 ttl=63 time=3.832 ms

--- 10.10.0.71 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.832/3.832/3.832/0.000 ms

From the server, I can only ping out via the eno2 device:

user@server1:~$ ping -c 1 -I eno2 1.1.1.1
PING 1.1.1.1 (1.1.1.1) from 192.1.1.105 eno2: 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=52 time=10.2 ms

--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 10.155/10.155/10.155/0.000 ms
user@server1:~$ ping -c 1 -I br0 1.1.1.1
PING 1.1.1.1 (1.1.1.1) from 10.10.0.71 br0: 56(84) bytes of data.
From 10.10.0.71 icmp_seq=1 Destination Host Unreachable

--- 1.1.1.1 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

For reference

❯ ssh [email protected]
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-52-generic x86_64)

  System information as of Tue Oct 25 02:34:22 PM UTC 2022

  System load:  0.22998046875      Processes:               601
  Usage of /:   4.1% of 467.89GB   Users logged in:         0
  Memory usage: 0%                 IPv4 address for br0:    10.10.0.71
  Swap usage:   0%                 IPv4 address for eno2:   192.1.1.105
  Temperature:  31.0 C             IPv4 address for virbr0: 192.168.122.1

0 updates can be applied immediately.
user@server1:~$ ip a s | grep \<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
2: enp129s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
3: enp129s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
4: enp129s0f2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
5: enp129s0f3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
6: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
7: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
user@server1:~$ ip a s | grep 'inet '
    inet 127.0.0.1/8 scope host lo
    inet 192.1.1.105/24 brd 192.1.1.255 scope global eno2
    inet 10.10.0.71/24 brd 10.10.0.255 scope global br0
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
