I need some help with servers, network planning and bridges.
I want to set up a KVM guest on my Ubuntu 18.04 server that is reachable from the outside, so I need a bridge.
To prepare, I reproduced the network configuration on a local VirtualBox-hosted server, where I was able to get a working setup with a bridge. However, I had to enable promiscuous mode in the VirtualBox settings.
When I port the setup to my dedicated Hetzner Ubuntu server, the server's internet connection breaks.
Can anyone give me some advice?
Below is the netplan configuration (addresses masked) that is not working; the netplan generate && netplan apply commands complete successfully.
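For reference, this is roughly how I regenerate and apply the configuration (a sketch; the file path /etc/netplan/01-netcfg.yaml is the one shown in the debug output further down, and netplan try is only a rollback-safe alternative, not part of my original workflow):

# regenerate the backend configuration from /etc/netplan/*.yaml
sudo netplan generate
# apply it to the running system
sudo netplan apply
# alternative: apply with automatic rollback unless confirmed within the timeout
sudo netplan try --timeout 120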
Default working network configuration:
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      addresses:
        - [IP4]
        - [IP6]
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: [another IP4]
      gateway6: fe80::1
      nameservers:
        addresses:
          - [another IP4]
          - [another IP4]
          - [another IP4]
          - [another IP6]
          - [another IP6]
          - [another IP6]
Now my "bridged" configuration, which does not work:
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp2s0]
      addresses:
        - [IP4]
        - [IP6]
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: [another IP4]
      gateway6: fe80::1
      nameservers:
        addresses:
          - [another IP4]
          - [another IP4]
          - [another IP4]
          - [another IP6]
          - [another IP6]
          - [another IP6]
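For context, the plan is to attach the KVM guest to br0 roughly like this (a sketch; the guest name, disk path and OS variant are placeholders, not my actual values):

# create a guest whose virtual NIC is plugged into the host bridge br0
sudo virt-install \
  --name testvm \
  --memory 2048 --vcpus 2 \
  --disk /var/lib/libvirt/images/testvm.qcow2 \
  --import --os-variant ubuntu18.04 \
  --network bridge=br0,model=virtio \
  --graphics none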
Some output after applying the configuration:
ifconfig:
br0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet [IP4] netmask 255.255.255.255 broadcast 0.0.0.0
inet6 [IP6] prefixlen 64 scopeid 0x0<global>
ether 06:54:dd:62:e6:af txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:2bff:febd:df03 prefixlen 64 scopeid 0x20<link>
ether 02:42:2b:bd:df:03 txqueuelen 0 (Ethernet)
RX packets 17 bytes 760 (760.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1088 (1.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp2s0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 44:8a:5b:d4:4f:46 txqueuelen 1000 (Ethernet)
RX packets 12143 bytes 1170508 (1.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13113 bytes 2062454 (2.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 4076 bytes 773559 (773.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4076 bytes 773559 (773.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth5932247: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::289c:28ff:fef6:93f0 prefixlen 64 scopeid 0x20<link>
ether 2a:9c:28:f6:93:f0 txqueuelen 0 (Ethernet)
RX packets 17 bytes 998 (998.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 20 bytes 1448 (1.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:da:13:11 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
netstat -i:
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
br0 1500 0 0 0 0 0 0 0 0 BMU
docker0 1500 17 0 0 0 16 0 0 0 BMRU
enp2s0 1500 12143 0 0 0 13113 0 0 0 BMU
lo 65536 4092 0 0 0 4092 0 0 0 LRU
veth5932 1500 17 0 0 0 20 0 0 0 BMRU
virbr0 1500 0 0 0 0 0 0 0 0 BMU
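To double-check that enp2s0 is really enslaved to br0, I can run checks like these (a sketch, output omitted):

# list bridge ports and their state (iproute2)
bridge link show
# show only interfaces whose master is br0
ip link show master br0
# legacy view, needs the bridge-utils package
brctl show br0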
ip r (before applying the bridge):
default via [IP4] dev enp2s0 proto static onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
ip r (with the bridge applied):
default via [IP4] dev br0 proto static onlink linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master br0 state DOWN group default qlen 1000
link/ether 44:8a:5b:d4:4f:46 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:da:13:11 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:da:13:11 brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:51:4d:d0:31 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:51ff:fe4d:d031/64 scope link
valid_lft forever preferred_lft forever
7: vethecd1ee3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 9a:e0:6b:4c:5b:ae brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::98e0:6bff:fe4c:5bae/64 scope link
valid_lft forever preferred_lft forever
14: br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 06:54:dd:62:e6:af brd ff:ff:ff:ff:ff:ff
inet [IP4]/32 scope global br0
valid_lft forever preferred_lft forever
inet6 [IP6]/64 scope global
valid_lft forever preferred_lft forever
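Since both enp2s0 and br0 report NO-CARRIER / state DOWN above, these are further checks I can run on the physical link and on systemd-networkd's view of it (a sketch):

# carrier and negotiation state of the physical NIC
sudo ethtool enp2s0 | grep -i 'link detected'
# systemd-networkd's status for the bridge and its port
networkctl status br0
networkctl status enp2s0
# concise per-interface overview
ip -br link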
netplan --debug generate:
** (generate:24019): DEBUG: 17:06:17.920: Processing input file /etc/netplan/01-netcfg.yaml..
** (generate:24019): DEBUG: 17:06:17.930: starting new processing pass
** (generate:24019): DEBUG: 17:06:17.930: We have some netdefs, pass them through a final round of validation
** (generate:24019): DEBUG: 17:06:17.930: enp2s0: setting default backend to 1
** (generate:24019): DEBUG: 17:06:17.930: Configuration is valid
** (generate:24019): DEBUG: 17:06:17.930: br0: setting default backend to 1
** (generate:24019): DEBUG: 17:06:17.930: Configuration is valid
** (generate:24019): DEBUG: 17:06:17.930: Generating output files..
** (generate:24019): DEBUG: 17:06:17.930: NetworkManager: definition enp2s0 is not for us (backend 1)
** (generate:24019): DEBUG: 17:06:17.930: NetworkManager: definition br0 is not for us (backend 1)
netplan --debug apply:
DEBUG:command generate: running ['/lib/netplan/generate']
DEBUG:netplan generated networkd configuration changed, restarting networkd
DEBUG:no netplan generated NM configuration exists
DEBUG:enp2s0 not found in {}
DEBUG:br0 not found in {}
DEBUG:Merged config:
network:
  bonds: {}
  bridges:
    br0:
      addresses:
      - [IP4]/32
      - [IP6]/64
      dhcp4: false
      dhcp6: false
      gateway6: fe80::1
      interfaces:
      - enp2s0
      nameservers:
        addresses:
        - [another IP4]
        - [another IP4]
        - [another IP4]
        - [another IP6]
        - [another IP6]
        - [another IP6]
      parameters:
        forward-delay: 4
        stp: true
      routes:
      - on-link: true
        to: 0.0.0.0/0
        via: [another IP4]
  ethernets:
    enp2s0:
      dhcp4: false
      dhcp6: false
  vlans: {}
  wifis: {}
DEBUG:Skipping non-physical interface: lo
DEBUG:Skipping composite member enp2s0
DEBUG:Skipping non-physical interface: virbr0
DEBUG:Skipping non-physical interface: virbr0-nic
DEBUG:Skipping non-physical interface: docker0
DEBUG:Skipping non-physical interface: vethecd1ee3
DEBUG:{}
DEBUG:netplan triggering .link rules for lo
DEBUG:netplan triggering .link rules for enp2s0
DEBUG:netplan triggering .link rules for virbr0
DEBUG:netplan triggering .link rules for virbr0-nic
DEBUG:netplan triggering .link rules for docker0
DEBUG:netplan triggering .link rules for vethecd1ee3
Unfortunately this answer does not solve my problem: https://stackoverflow.com/a/61910941/9601604