Cannot add a public IP to a KVM virtual machine on an Ubuntu 22.04.1 server host / VM networking problems

Background

I have a dedicated server (Ubuntu 22.04.1 LTS) with five public IPs, listed below:

  1. xxx.xxx.51.20 (primary IP)
  2. xxx.xxx.198.104
  3. xxx.xxx.198.105
  4. xxx.xxx.198.106
  5. xxx.xxx.198.107

I want to host several KVM VMs on this server and assign some of them a public IP from the host; in other words, I effectively want to create multiple virtual private servers (VPS), each with its own public IP.

If I'm not mistaken, I need to create a bridge for this. I have created the bridge br0, and all the public IPs are assigned to it.

Currently, the host's network configuration is as follows:

cat /etc/netplan/50-cloud-init.yaml

network:
    version: 2
    renderer: networkd
    ethernets:
        eno1:
            dhcp4: false
            dhcp6: false
            match:
                macaddress: xx:xx:xx:2a:19:d0
            set-name: eno1
    bridges:
        br0:
            interfaces: [eno1]
            addresses:
            - xxx.xxx.51.20/32
            - xxx.xxx.198.104/32
            - xxx.xxx.198.105/32
            - xxx.xxx.198.106/32
            - xxx.xxx.198.107/32
            routes:
            - to: default
              via: xxx.xxx.51.1
              metric: 100
              on-link: true
            mtu: 1500
            nameservers:
                addresses: [8.8.8.8]
            parameters:
                stp: true
                forward-delay: 4
            dhcp4: no
            dhcp6: no

With this configuration, all the IPs point to the host, and the host is reachable on each of them.

Here is the ip a output on the host:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether xx:xx:xx:2a:19:d0 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:2a:19:d1 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:59:36:78 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.51.20/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.104/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.105/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.106/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.107/32 scope global br0
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:3678/64 scope link 
       valid_lft forever preferred_lft forever
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:e2:3e:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether xx:xx:xx:ff:4c:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
9: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
    link/ether xx:xx:xx:6b:01:05 brd ff:ff:ff:ff:ff:ff
    inet6 xxxx::xxxx:ff:fe6b:105/64 scope link 
       valid_lft forever preferred_lft forever
10: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether xx:xx:xx:16:07:56 brd ff:ff:ff:ff:ff:ff
    inet6 xxxx::xxxx:ff:fe16:756/64 scope link 
       valid_lft forever preferred_lft forever

Then I created a KVM VM with two network interfaces (one attached to br0, the other to the NAT-based bridge).
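
For reference, one way to create such a VM is virt-install; a minimal sketch, assuming a hypothetical VM name, disk size, and install ISO path (the two --network flags produce exactly this pairing):

virt-install --name vm_name --memory 4096 --vcpus 2 \
    --disk size=20 \
    --cdrom /path/to/ubuntu-22.04-live-server-amd64.iso \
    --network bridge=br0,model=virtio \
    --network network=default,model=virtio \
    --osinfo ubuntu22.04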

From within the VM, I configured netplan as follows (cat /etc/netplan/00-installer-config.yaml):

network:
    version: 2
    ethernets:
        enp1s0:
            addresses:
            - xxx.xxx.198.104/32
            routes:
            - to: default
              via: xxx.xxx.198.1
              metric: 100
              on-link: true
            nameservers:
                addresses:
                - 1.1.1.1
                - 1.1.0.0
                - 8.8.8.8
                - 8.8.4.4
            search: []

        enp7s0:
            dhcp4: true
            dhcp6: true
            match:
                macaddress: xx:xx:xx:16:07:56

Here, the VM uses enp1s0 with the static public IP (xxx.xxx.198.104) and enp7s0 with NAT from the host (192.168.122.xxx).

Running ip a inside the VM shows that it received the correct IPs.

Problems:

  1. When I try to SSH from my laptop directly to the VM's public IP (xxx.xxx.198.104), I still seem to end up connected to the host, not the VM.
  2. From within the VM, if I disconnect the NAT network (enp7s0) and use only the network with the public IP (enp1s0), the VM appears to have no internet connectivity.

Am I missing something?

Update 1

I contacted the DC provider; they have documentation on adding public IPs to VMs here: https://docs.ovh.com/gb/en/dedicated/network-bridging/ but it only applies when Proxmox is used as the host. I need to create a bridge, attach the assigned MAC address to that bridge, and then apply the public IP from within the VM.

How can I do this on Ubuntu?

Update 2

I managed to create bridge devices with my public IPs on the host side using the following commands:

sudo ip link add name test-bridge link eth0 type macvlan
sudo ip link set dev test-bridge address MAC_ADDRESS
sudo ip link set test-bridge up
sudo ip addr add ADDITIONAL_IP/32 dev test-bridge

I repeated this four times to add all of my public IPs to the host (the loop sketch below shows the idea).
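
The same repetition, expressed as a loop; a sketch that assumes the physical NIC is eno1 and uses placeholder MAC addresses (the provider-assigned virtual MAC is expected on each device):

i=1
for ip in xxx.xxx.198.104 xxx.xxx.198.105 xxx.xxx.198.106 xxx.xxx.198.107; do
    sudo ip link add name "vmbr$i" link eno1 type macvlan
    sudo ip link set dev "vmbr$i" address "xx:xx:xx:xx:xx:0$i"   # placeholder MAC
    sudo ip link set "vmbr$i" up
    sudo ip addr add "$ip/32" dev "vmbr$i"
    i=$((i+1))
done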

My host configuration now looks like this:

cat /etc/netplan/50-cloud-init.yaml

network:
    version: 2
    ethernets:
        eno1:
            dhcp4: true
            match:
                macaddress: xx:xx:xx:2a:19:d0
            set-name: eno1

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether xx:xx:xx:2a:19:d0 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet xxx.xxx.51.20/24 metric 100 brd xx.xx.51.255 scope global dynamic eno1
       valid_lft 84658sec preferred_lft 84658sec
    inet6 xxxx::xxxx:xxxx:fe2a:19d0/64 scope link 
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:2a:19:d1 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:e2:3e:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether xx:xx:xx:e1:2b:ce brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: vmbr1@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:79:26:12 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.104/32 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:2612/64 scope link 
       valid_lft forever preferred_lft forever
7: vmbr2@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:f3:e2:85 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.105/32 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:e285/64 scope link 
       valid_lft forever preferred_lft forever
8: vmbr3@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:48:a8:c9 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.106/32 scope global vmbr3
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:a8c9/64 scope link 
       valid_lft forever preferred_lft forever
9: vmbr4@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:eb:29:a1 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.107/32 scope global vmbr4
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:29a1/64 scope link 
       valid_lft forever preferred_lft forever

I tested pinging all the IP addresses from my laptop, and it worked.

On the VM side, I edited the network as follows:

I ran sudo virsh edit vm_name and located the network interfaces:

    <interface type='network'>
      <mac address='xx:xx:xx:16:07:56'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='xx:xx:xx:79:26:12'/>
      <source bridge='vmbr1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

The problem now is that I cannot start the VM:

$ sudo virsh start lab
error: Failed to start domain 'vm_name'
error: Unable to add bridge vmbr1 port vnet8: Operation not supported

Am I missing something again?

Update 3

I found that the sudo ip link add ... commands only work temporarily; the configuration is lost after the server reboots.

Please advise on the correct configuration for both the host side and the VM side.

Thanks.

Update 4

I read the Proxmox network configuration reference (https://pve.proxmox.com/wiki/Network_Configuration) and tried to implement the routed configuration on the host.

So I installed the ifupdown package and created the configuration below:

auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
  address xxx.xxx.51.20/24
  gateway xxx.xxx.51.254 
  post-up echo 1 > /proc/sys/net/ipv4/ip_forward
  post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp


auto vmbr0
iface vmbr0 inet static
  address xxx.xxx.198.104/24
  bridge-ports none
  bridge-stp off
  bridge-fd 0


auto vmbr1
iface vmbr1 inet static
  address xxx.xxx.198.105/24
  bridge-ports none
  bridge-stp off
  bridge-fd 0

auto vmbr2
iface vmbr2 inet static
  address xxx.xxx.198.106/24
  bridge-ports none
  bridge-stp off
  bridge-fd 0

auto vmbr3
iface vmbr3 inet static
  address xxx.xxx.198.107/24
  bridge-ports none
  bridge-stp off
  bridge-fd 0

I also disabled systemd networking as suggested here (https://askubuntu.com/a/1052023), with the following commands:

sudo systemctl unmask networking
sudo systemctl enable networking
sudo systemctl restart networking
sudo journalctl -xeu networking.service
sudo systemctl stop systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online
sudo systemctl disable systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online
sudo systemctl mask systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

With this configuration, from the VM's point of view, the VM gets 2 IPs (one private and one public).

However, I ran into further problems:

  1. sudo systemctl status networking always fails. Here is the log:
$ sudo systemctl status networking.service
× networking.service - Raise network interfaces
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2022-12-13 18:51:27 UTC; 7min ago
       Docs: man:interfaces(5)
   Main PID: 1000 (code=exited, status=1/FAILURE)
        CPU: 631ms

Dec 13 18:51:25 hostname ifup[1054]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:25 hostname ifup[1066]: RTNETLINK answers: File exists
Dec 13 18:51:25 hostname ifup[1000]: ifup: failed to bring up eno1
Dec 13 18:51:26 hostname ifup[1132]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:26 hostname ifup[1215]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1298]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1381]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 18:51:27 hostname systemd[1]: networking.service: Failed with result 'exit-code'.
Dec 13 18:51:27 hostname systemd[1]: Failed to start Raise network interfaces.

journalctl

$ sudo journalctl -xeu networking.service
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 13 18:49:08 hostname systemd[1]: networking.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit networking.service has entered the 'failed' state with result 'exit-code'.
Dec 13 18:49:08 hostname systemd[1]: Failed to start Raise network interfaces.
░░ Subject: A start job for unit networking.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit networking.service has finished with a failure.
░░ 
░░ The job identifier is 3407 and the job result is failed.
-- Boot 50161a44ec43452692ce64fca20cce9d --
Dec 13 18:51:25 hostname systemd[1]: Starting Raise network interfaces...
░░ Subject: A start job for unit networking.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit networking.service has begun execution.
░░ 
░░ The job identifier is 49.
Dec 13 18:51:25 hostname ifup[1054]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:25 hostname ifup[1066]: RTNETLINK answers: File exists
Dec 13 18:51:25 hostname ifup[1000]: ifup: failed to bring up eno1
Dec 13 18:51:26 hostname ifup[1132]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:26 hostname ifup[1215]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1298]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1381]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ An ExecStart= process belonging to unit networking.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 13 18:51:27 hostname systemd[1]: networking.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit networking.service has entered the 'failed' state with result 'exit-code'.
Dec 13 18:51:27 hostname systemd[1]: Failed to start Raise network interfaces.
░░ Subject: A start job for unit networking.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit networking.service has finished with a failure.
░░ 
░░ The job identifier is 49 and the job result is failed.

The ip a output on the host:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether xx:xx:xx:2a:19:d0 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet xxx.xxx.51.20/24 brd xx.xxx.51.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:19d0/64 scope link 
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:2a:19:d1 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
4: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:b3:96:06 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.104/24 brd xxx.xxx.198.255 scope global vmbr0
       valid_lft forever preferred_lft forever
5: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:a8:1a:49 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.105/24 brd xxx.xxx.198.255 scope global vmbr1
       valid_lft forever preferred_lft forever
6: vmbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:d8:82:25 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.106/24 brd xxx.xxx.198.255 scope global vmbr2
       valid_lft forever preferred_lft forever
7: vmbr3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:4d:aa:31 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.107/24 brd xxx.xxx.198.255 scope global vmbr3
       valid_lft forever preferred_lft forever
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:e2:3e:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:8c:f2:63:f5 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
  2. Although the VM can see both interfaces (private IP and public IP), I cannot reach the VM's public IP from my laptop; it seems the host does not forward the connection to the VM.

  3. If all of the above problems are caused by the networking service failing to start, is there a fix?

  4. If netplan / systemd-networkd is the correct way to configure networking on Ubuntu 22.04, what would the correct netplan configuration for bridging look like in this case?

Answer 1

The ip link add ... command adds an interface. If you add the equivalent configuration to netplan, it becomes persistent. Example:

ip link add xxx0 type bridge
ip addr add 192.168.1.20/32 dev xxx0

And the equivalent in netplan:

network:
  version: 2
  ethernets:
    enp2s0:
      critical: true
      dhcp4: false

  bridges:
    xxx0:
      addresses:
        - 192.168.1.20/26

      interfaces:
        - enp2s0

Additional comment:

I don't know your use case, but I'm a Linux newbie with only 25 years of experience, and I would never expose a VM through a bridged interface. If you only want to expose individual services, you may find a better (and more secure) solution in KVM with a private bridge plus iptables port forwarding of the individual ports. Or you could look at Docker and get rid of the VMs entirely. (Strongly recommended.) A sketch of such a port forward follows.
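
A minimal sketch of that port-forwarding approach, assuming the VM sits on libvirt's default NAT network as 192.168.122.10 (a hypothetical address) and SSH should be reachable via port 2222 on the host's public IP:

# Rewrite the destination of inbound connections to the VM
iptables -t nat -A PREROUTING -d xxx.xxx.51.20 -p tcp --dport 2222 \
    -j DNAT --to-destination 192.168.122.10:22

# Allow the forwarded traffic through the filter table
iptables -A FORWARD -d 192.168.122.10 -p tcp --dport 22 \
    -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT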

Answer 2

After a few weeks of trial and error, I found a working solution for this situation.

To understand what the bridge should look like, we first need to know which bridge types we can use. There are generally three, as described here:

  1. Default bridge
  2. Routed bridge
  3. Masquerading / NAT-based bridge

Each of them is explained in full below:

1. Default bridge

A default bridge means the VM works as if it were connected directly to the physical network.

In my case, after double-checking the old ifupdown configuration and the ip link add commands, I found that in the OVH dedicated server environment (not the Eco line), we can use a default bridge with the pre-assigned MAC addresses.

If you use legacy ifupdown

The host network configuration is as follows:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address xxx.xxx.51.20/24
        gateway xxx.xxx.51.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

Then run:

sudo systemctl restart networking

As you can see, the host's own dedicated IP address is included in the bridge configuration.

For the VM/guest configuration, it is the same as documented here (a sketch follows).
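
A minimal sketch of what the guest-side netplan typically looks like in this setup, assuming the failover IP xxx.xxx.198.104 and OVH's convention that the gateway is the host's gateway (xxx.xxx.51.1 here) reached on-link; verify the exact gateway in the provider's documentation:

network:
    version: 2
    ethernets:
        enp1s0:
            addresses: [ xxx.xxx.198.104/32 ]
            routes:
            - to: default
              via: xxx.xxx.51.1
              on-link: true
            nameservers:
                addresses: [ 213.186.33.99 ]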

If you use netplan

The host network configuration is as follows:

$ sudo cat /etc/netplan/50-cloud-init.yaml 
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    renderer: networkd
    ethernets:
        eno1:
            dhcp4: false
            dhcp6: false

    bridges:
        brname:
            addresses: [ xxx.xxx.51.20/24 ]
            interfaces: [ eno1 ]
            routes:
            - to: default
              via: xxx.xxx.51.1
              metric: 100
              on-link: true
            mtu: 1500
            nameservers:
                addresses: [ 213.186.33.99 ]
            parameters:
                stp: true
                forward-delay: 4
            dhcp4: no
            dhcp6: no

Then run:

sudo netplan generate
sudo netplan --debug apply
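
To check that the bridge came up correctly, the standard iproute2 tools can be used:

ip a show brname     # should carry xxx.xxx.51.20/24
bridge link show     # eno1 should appear as a port of brname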

For the VM/guest configuration, it is the same as documented here (and the same as the guest sketch above).

2. Routed bridge

Most hosting providers do not support the default bridged setup. For security reasons, they disable the network as soon as they detect multiple MAC addresses on a single interface.

Some providers let you register additional MACs through their management interface. That avoids the problem but can be cumbersome to configure, because you need to register a MAC for every VM. You can avoid this by "routing" all traffic through a single interface, which ensures that all network packets use the same MAC address.

If you use legacy ifupdown

The host network configuration is as follows:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address  xxx.xxx.51.20/24
        gateway  xxx.xxx.51.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address  xxx.xxx.198.104/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

For the VM/guest configuration, it is the same as documented here (a sketch follows).
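
A minimal sketch of the guest side under this routed scheme, assuming the guest is attached to vmbr0, owns xxx.xxx.198.105, and uses the bridge address xxx.xxx.198.104 as its on-link gateway (proxy_arp on the host then relays traffic upstream); adjust the addresses to however your IPs are distributed:

network:
    version: 2
    ethernets:
        enp1s0:
            addresses: [ xxx.xxx.198.105/32 ]
            routes:
            - to: default
              via: xxx.xxx.198.104
              on-link: true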

If you use netplan

Because, as of this case, netplan does not yet support macvlan bridges, we need a workaround and have to configure systemd-networkd directly:

  • We need to create some files under /etc/systemd/network, so navigate to that directory:
$ cd /etc/systemd/network
  • Define the macvlan for the bridge (replace cat with your editor):
$ cat 00-vmvlan0.netdev 
[NetDev]
Name=vmvlan0
Kind=macvlan
# Optional MAC address, or other options
MACAddress=xx:xx:xx:xx:26:12

[MACVLAN]
Mode=bridge

$ cat 00-vmvlan0.network 
[Match]
Name=vmvlan0

[Network]
Address=xxx.xxx.198.104/32
IPForward=yes
ConfigureWithoutCarrier=yes
  • Define the bridge:
$ cat 00-vmbr0.netdev 
[NetDev]
Name=vmbr0
Kind=bridge

$ cat 00-vmbr0.network 
[Match]
Name=vmbr0

[Network]
MACVLAN=vmvlan0

  • Link the bridge, the macvlan, and the interface:
$ cat 10-netplan-eno1.link 
[Match]
MACAddress=xx:xx:xx:xx:19:d0

[Link]
Name=eno1
WakeOnLan=off

$ cat 10-netplan-eno1.network 
[Match]
MACAddress=xx:xx:xx:xx:19:d0
Name=eno1

[Network]
DHCP=ipv4
LinkLocalAddressing=ipv6
MACVLAN=vmvlan0

[DHCP]
RouteMetric=100
UseMTU=true


  • We can keep the default netplan configuration:
$ sudo cat /etc/netplan/50-cloud-init.yaml
network:
    version: 2
    ethernets:
        eno1:
            dhcp4: true
            match:
                macaddress: xx:xx:xx:2a:19:d0
            set-name: eno1

For the VM/guest configuration, it is the same as documented here. (To apply the systemd-networkd files above, see the sketch below.)
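
Once these files are in place, systemd-networkd can pick them up without a reboot; a sketch (networkctl reload is available on Ubuntu 22.04's systemd):

sudo networkctl reload    # re-read the .netdev/.network files
networkctl list           # vmvlan0 and vmbr0 should be listed as configured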

3. NAT-based bridge:

The default bridge that ships with a libvirt installation is NAT-based, so it should work out of the box.

In this case, we should make sure the host knows it has multiple public IPs attached. So:

$ cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eno1:
            dhcp4: true
            addresses:
            - xxx.xxx.51.20/32
            - xxx.xxx.198.104/32
            - xxx.xxx.198.105/32
            - xxx.xxx.198.106/32
            - xxx.xxx.198.107/32
            match:
                macaddress: xx:xx:xx:2a:19:d0
            set-name: eno1

Then, assuming the VM already has a static IP (e.g. 192.168.1.10), we just need to forward connections to that IP, as described in @dummyuser's answer:

iptables -t nat -A PREROUTING -d xxx.xxx.198.104/32 -p tcp -j DNAT --to-destination 192.168.1.10
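
For completeness, a sketch of the companion rules under the same assumptions: the forwarded traffic must also be accepted by the FORWARD chain, and outbound traffic from the VM can be source-NATed so it leaves with the same public IP:

# Allow the DNATed traffic through the filter table
iptables -A FORWARD -d 192.168.1.10 -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT

# Make the VM's outbound traffic use its public IP
iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source xxx.xxx.198.104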
