Blocking MAC spoofing on libvirt guests with OpenFlow

I have several KVM VMs running under libvirt that are attached to an Open vSwitch bridge. I need a mechanism to prevent MAC spoofing on the guests. I tried the libvirt filters no-mac-spoofing and clean-traffic, but they only work with Linux bridges. I figured I could use OpenFlow rules to restrict traffic that does not match a specific source MAC (that is, drop any packet whose source MAC differs from the one configured in the VM's XML). This is what I have:

[root@t6 /]# ovs-ofctl show vswitch2            
OFPT_FEATURES_REPLY (xid=0x2): dpid:00000030641a7b82
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(eth5): addr:00:30:64:1a:7b:82
     config:     0
     state:      0
     current:    1GB-FD COPPER AUTO_NEG
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
     speed: 1000 Mbps now, 1000 Mbps max
 2(vnet0): addr:fe:54:00:00:00:11
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 3(vnet1): addr:fe:54:00:00:00:22
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 4(vnet2): addr:fe:54:00:00:00:33
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 LOCAL(vswitch2): addr:00:30:64:1a:7b:82
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max

I want to restrict traffic coming from vnet2, so I tried to drop packets whose source MAC does not match the MAC configured on the libvirt NIC:

[root@t6 /]# ovs-ofctl add-flow vswitch2 dl_src!=52:54:00:00:00:33,in_port=4,actions=drop
ovs-ofctl: unknown keyword dl_src!

[root@t6 /]# ovs-ofctl add-flow vswitch2 dl_src!52:54:00:00:00:33,in_port=4,actions=drop
-bash: !52: event not found

But as you can see, the ! symbol is not recognized. I looked through the OpenFlow documentation but could not find anything that helps with this... Can anyone who knows OpenFlow and Open vSwitch help me out? Is what I am trying to do even feasible with OpenFlow? Honestly, I am pretty confused right now, because the OpenFlow documentation is rather hard to follow for a first-time user... Thanks in advance for any help!

Answer 1

The simplest solution is to invert your logic:

ovs-ofctl add-flow vswitch2 in_port=4,dl_src=52:54:00:00:00:33,actions=NORMAL
ovs-ofctl add-flow vswitch2 in_port=4,actions=drop
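One caveat worth adding: both commands install flows at the default priority (32768), and OpenFlow leaves the behavior of overlapping matches at the same priority undefined. It is safer to give the allow rule an explicitly higher priority than the drop rule. A sketch, reusing the port and MAC from the question (the priority values 200 and 100 are arbitrary choices, not from the original answer):

```shell
# Allow only frames whose source MAC matches the MAC configured on the
# libvirt NIC; the higher priority (200) is matched before the drop rule.
ovs-ofctl add-flow vswitch2 priority=200,in_port=4,dl_src=52:54:00:00:00:33,actions=NORMAL
# Drop everything else arriving on the guest port.
ovs-ofctl add-flow vswitch2 priority=100,in_port=4,actions=drop
```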

I have tested this and it seems to work. I had two VMs attached to the bridge ovsbr0, with addresses 192.168.124.10 and 192.168.124.11. The first machine has this interface:

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:06:88:31 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet 192.168.124.10/24 scope global eth1
       valid_lft forever preferred_lft forever

From it I can successfully ping the second system, 192.168.124.11:

[root@fedora ~]# ping -c2 192.168.124.11
PING 192.168.124.11 (192.168.124.11) 56(84) bytes of data.
64 bytes from 192.168.124.11: icmp_seq=1 ttl=64 time=0.351 ms
64 bytes from 192.168.124.11: icmp_seq=2 ttl=64 time=0.314 ms

--- 192.168.124.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1017ms
rtt min/avg/max/mdev = 0.314/0.332/0.351/0.018 ms

If I add these two OpenFlow rules:

host# ovs-ofctl add-flow ovsbr0 in_port=4,dl_src=52:54:00:06:88:31,actions=NORMAL
host# ovs-ofctl add-flow ovsbr0 in_port=4,actions=drop

I can still ping the second system:

[root@fedora ~]# ping -c2 192.168.124.11
PING 192.168.124.11 (192.168.124.11) 56(84) bytes of data.
64 bytes from 192.168.124.11: icmp_seq=1 ttl=64 time=0.398 ms
64 bytes from 192.168.124.11: icmp_seq=2 ttl=64 time=0.188 ms

--- 192.168.124.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1050ms
rtt min/avg/max/mdev = 0.188/0.293/0.398/0.105 ms
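To confirm which rule the traffic is actually hitting, the per-flow packet counters can be inspected on the host (a sketch; the exact output format varies across OVS versions):

```shell
# The n_packets counter on each flow shows whether frames matched
# the allow rule or the drop rule.
ovs-ofctl dump-flows ovsbr0
```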

But if I change the MAC address:

[root@fedora ~]# ip link set eth1 down
[root@fedora ~]# ip link set eth1 address 52:54:00:06:12:34
[root@fedora ~]# ip link set eth1 up

I find that ping no longer works:

[root@fedora ~]# ping -c2 192.168.124.11
PING 192.168.124.11 (192.168.124.11) 56(84) bytes of data.
From 192.168.124.10 icmp_seq=1 Destination Host Unreachable
From 192.168.124.10 icmp_seq=2 Destination Host Unreachable

--- 192.168.124.11 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1052ms
pipe 2
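Since the question's bridge has three guest ports, the same pair of rules can be generated per port. The port-to-MAC mapping below is taken from the ovs-ofctl show output in the question (this generator script is my addition, not part of the original answer; it echoes the commands so they can be reviewed before being run for real):

```shell
# Port number -> expected source MAC, from the "ovs-ofctl show vswitch2"
# output in the question (vnet0 = port 2, vnet1 = port 3, vnet2 = port 4).
BRIDGE=vswitch2
for pair in 2:52:54:00:00:00:11 3:52:54:00:00:00:22 4:52:54:00:00:00:33; do
    port=${pair%%:*}   # everything before the first colon
    mac=${pair#*:}     # everything after the first colon
    # Allow frames with the expected source MAC, drop everything else
    # arriving on that port. Echo first; pipe to sh once verified.
    echo ovs-ofctl add-flow "$BRIDGE" "priority=200,in_port=$port,dl_src=$mac,actions=NORMAL"
    echo ovs-ofctl add-flow "$BRIDGE" "priority=100,in_port=$port,actions=drop"
done
```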