Edit: for more information about my setup, see the comments. My system has three NICs: enp1s0, interface1, and interface2.
enp1s0 is the motherboard's integrated NIC and is connected to the internet.
interface1 and interface2 are the other two NICs (each has two ports, but only the first port of each is used); they are connected to a second, LAN-only network. Both are plugged into the same switch, on the same VLAN and the same subnet.
The output of ip -d link show is:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 2c:27:d7:19:dd:97 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode none
3: interface1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 00:20:fc:32:31:36 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64
4: interface1-2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 00:20:fc:32:31:38 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64
5: interface2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 00:20:fc:32:60:12 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64
6: interface2-2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 00:20:fc:32:60:14 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64
I am exploring AB's comment on this question as a workaround for my problem. To summarize, so you don't have to read the whole question: I am trying to put two NICs (interface1 and interface2) into two separate network namespaces, so that IGMP queries and reports are handled independently on each card even though both cards are connected to the same network (a way to check this is sketched after the command block below).
My question concerns the setup of the namespaces. This is how I did it:
# Add a new namespace named interface1Namespace
sudo ip netns add interface1Namespace
# Add a new namespace named interface2Namespace
sudo ip netns add interface2Namespace
# Check that both namespaces exist
ip netns list
# Set interface1 to be in the interface1Namespace namespace
sudo ip link set interface1 netns interface1Namespace
# Set interface2 to be in the interface2Namespace namespace
sudo ip link set interface2 netns interface2Namespace
# Give interface1 an IP address.
sudo ip -n interface1Namespace addr add 25.25.40.116/24 dev interface1
# Give interface2 an IP address.
sudo ip -n interface2Namespace addr add 25.25.40.134/24 dev interface2
# Bring up loopback inside the interface1Namespace namespace
sudo ip netns exec interface1Namespace ip link set dev lo up
# Bring up loopback inside the interface2Namespace namespace
sudo ip netns exec interface2Namespace ip link set dev lo up
# Bring up interface1 inside the interface1Namespace namespace
sudo ip netns exec interface1Namespace ip link set dev interface1 up
# Bring up interface2 inside the interface2Namespace namespace
sudo ip netns exec interface2Namespace ip link set dev interface2 up
# Check that the interface1 interface is working in the interface1Namespace namespace
sudo ip netns exec interface1Namespace ifconfig
# Check that the interface2 interface is working in the interface2Namespace namespace
sudo ip netns exec interface2Namespace ifconfig
# Add default gateway for the interface1Namespace namespace
sudo ip netns exec interface1Namespace ip route add default via 25.25.40.1 dev interface1
# Add default gateway for the interface2Namespace namespace
sudo ip netns exec interface2Namespace ip route add default via 25.25.40.1 dev interface2
# Check that the interface1Namespace route table has the default gateway
sudo ip netns exec interface1Namespace ip route show
# Check that the interface2Namespace route table has the default gateway
sudo ip netns exec interface2Namespace ip route show
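Once both namespaces are up, the goal described above can be checked: each namespace should maintain its own multicast group memberships. A minimal check, assuming iproute2's ip maddr (the groups listed will depend on what is running inside each namespace):
# List multicast group memberships as seen from inside each namespace
sudo ip netns exec interface1Namespace ip maddr show dev interface1
sudo ip netns exec interface2Namespace ip maddr show dev interface2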
This is the output:
interface1Namespace
interface2Namespace
interface1 Link encap:Ethernet HWaddr 00:20:fc:32:31:36
inet addr:25.25.40.116 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:47434 errors:0 dropped:5 overruns:0 frame:0
TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4374863 (4.3 MB) TX bytes:8324 (8.3 KB)
interface2 Link encap:Ethernet HWaddr 00:20:fc:32:60:12
inet addr:25.25.40.134 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:44231 errors:0 dropped:7 overruns:0 frame:0
TX packets:172 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4067463 (4.0 MB) TX bytes:25394 (25.3 KB)
default via 25.25.40.1 dev interface1 linkdown
25.25.40.0/24 dev interface1 proto kernel scope link src 25.25.40.116 linkdown
default via 25.25.40.1 dev interface2 linkdown
25.25.40.0/24 dev interface2 proto kernel scope link src 25.25.40.134
If I run:
sudo ip netns exec interface1Namespace ip -d link show interface1
it outputs:
3: interface1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
link/ether 00:20:fc:32:31:36 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64
If I run:
sudo ip netns exec interface2Namespace ip -d link show interface2
it outputs:
5: interface2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
link/ether 00:20:fc:32:60:12 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64
If I try to ping 25.25.40.1 from either namespace, I get no answer. What am I missing?
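For reference, the ping attempts look like the following (assuming the standard ping utility; -c 3 just limits the packet count):
# Ping the gateway from inside each namespace; neither gets a reply
sudo ip netns exec interface1Namespace ping -c 3 25.25.40.1
sudo ip netns exec interface2Namespace ping -c 3 25.25.40.1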
What I find strange is that the routing table for interface2 does not report linkdown on the 25.25.40.0/24 route...
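As a related check, the NO-CARRIER flag can also be read from sysfs inside each namespace (ip netns exec remounts /sys for the target namespace, so this should reflect the namespace's own devices; 1 means link detected, 0 means no carrier):
# Read the carrier flag for each interface (the interface must be administratively up)
sudo ip netns exec interface1Namespace cat /sys/class/net/interface1/carrier
sudo ip netns exec interface2Namespace cat /sys/class/net/interface2/carrier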
Answer 1
This is explained here: https://access.redhat.com/solutions/53031
and, somewhat less clearly, here: https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
Consider a machine with two network interfaces, A and B. Suppose Linux decides to send packets to IP address X via interface B, and a packet from IP address X then arrives on interface A. Linux will drop that packet, unless you run
sysctl net.ipv4.conf.all.rp_filter=2
in a terminal or add that line to /etc/sysctl.conf.
It can then receive packets from an IP address on an interface other than the one it would use to send packets to that address!
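A short sketch of checking the current values and making the change persistent (per the kernel documentation linked above, the maximum of conf/all/rp_filter and the per-interface value is what takes effect, so setting all to 2 switches every interface to loose mode):
# Show the current reverse-path-filter settings
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
# Switch to loose mode immediately
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
# Persist the setting across reboots
echo 'net.ipv4.conf.all.rp_filter = 2' | sudo tee -a /etc/sysctl.conf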