LXC container cannot ping 8.8.8.8 or other external IPs

I am running an LXC container on an Ubuntu 23.04 VM (via UTM on my Mac laptop), but I cannot ping 8.8.8.8 from inside the container.

I installed lxc via snap.

Note: I have searched a lot trying to fix this, but still cannot solve it.
I can ping my LXC container from the VM, and vice versa.
From inside the LXC container I can ping the container's default gateway.
I have restarted the snap service with systemctl and toggled the bridge's ipv4 NAT and firewall settings between false and true, but the problem persists.
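One low-level thing also worth verifying on the VM (an assumption on my part; it is usually enabled when LXD is installed) is that kernel IP forwarding is on, since without it traffic from lxdbr0 can never reach the uplink:

```shell
# Read the forwarding flag; it should print "net.ipv4.ip_forward = 1".
sysctl net.ipv4.ip_forward

# If it prints 0, enable it for the running kernel (requires root):
# sysctl -w net.ipv4.ip_forward=1
```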

My LXC container info:

lxc config show ubuntu --expanded
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Ubuntu focal arm64 (20240118_07:42)
  image.os: Ubuntu
  image.release: focal
  image.serial: "20240118_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 2c855bd13a6d33ff3ea6a9adcf9f6454da4314cd4629d988c98b7da91e00eb09
  volatile.cloud-init.instance-id: 57f1f7c4-33fb-40cb-9bc3-9f4a75bc881c
  volatile.eth0.host_name: veth84f5e9b3
  volatile.eth0.hwaddr: 00:16:3e:d8:ca:72
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 0c87c07d-4693-4344-bc7d-cb5d95f1747e
  volatile.uuid.generation: 0c87c07d-4693-4344-bc7d-cb5d95f1747e
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
lxc exec ubuntu -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:d8:ca:72 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.4.135.180/24 brd 10.4.135.255 scope global dynamic eth0
       valid_lft 1989sec preferred_lft 1989sec
    inet6 fd42:e368:9853:dcf7:216:3eff:fed8:ca72/64 scope global mngtmpaddr noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fed8:ca72/64 scope link 
       valid_lft forever preferred_lft forever
lxc exec ubuntu -- ip r
default via 10.4.135.1 dev eth0 proto dhcp src 10.4.135.180 metric 100 
10.4.135.0/24 dev eth0 proto kernel scope link src 10.4.135.180 
10.4.135.1 dev eth0 proto dhcp scope link src 10.4.135.180 metric 100

ip route get from inside the container:

lxc exec ubuntu -- ip route get 8.8.8.8
8.8.8.8 via 10.4.135.1 dev eth0 src 10.4.135.180 uid 0 
    cache

My VM info:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether d6:ae:f7:20:02:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.3/24 metric 100 brd 192.168.64.255 scope global dynamic enp0s1
       valid_lft 72951sec preferred_lft 72951sec
    inet6 fdbd:2af3:625b:81be:d4ae:f7ff:fe20:2a4/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 2591887sec preferred_lft 604687sec
    inet6 fe80::d4ae:f7ff:fe20:2a4/64 scope link 
       valid_lft forever preferred_lft forever
3: br-251cc27d7151: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:59:7f:0b:21 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-251cc27d7151
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:23:b4:8c:aa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: br-df1c5081bc7d: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:34:97:a8:d8 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-df1c5081bc7d
       valid_lft forever preferred_lft forever
22: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
25: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:6e:79:97 brd ff:ff:ff:ff:ff:ff
    inet 10.4.135.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:e368:9853:dcf7::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe6e:7997/64 scope link 
       valid_lft forever preferred_lft forever
27: veth84f5e9b3@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether a2:36:90:15:0e:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
ip r
default via 192.168.64.1 dev enp0s1 proto dhcp src 192.168.64.3 metric 100 
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1 linkdown 
10.4.135.0/24 dev lxdbr0 proto kernel scope link src 10.4.135.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-df1c5081bc7d proto kernel scope link src 172.18.0.1 linkdown 
172.19.0.0/16 dev br-251cc27d7151 proto kernel scope link src 172.19.0.1 linkdown 
192.168.64.0/24 dev enp0s1 proto kernel scope link src 192.168.64.3 metric 100 
192.168.64.1 dev enp0s1 proto dhcp scope link src 192.168.64.3 metric 100

My VM's routing table:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 enp0s1
10.0.3.0        0.0.0.0         255.255.255.0   U     0      0        0 lxcbr0
10.4.135.0      0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-df1c5081bc7d
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-251cc27d7151
192.168.64.0    0.0.0.0         255.255.255.0   U     100    0        0 enp0s1
_gateway        0.0.0.0         255.255.255.255 UH    100    0        0 enp0s1

Listening on lxdbr0 from the VM gives this output:

sudo tcpdump -ni lxdbr0 icmp

tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on lxdbr0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
23:01:50.099598 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 68, length 64
23:01:51.126634 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 69, length 64
23:01:52.147930 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 70, length 64
23:01:53.174693 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 71, length 64
23:01:54.195179 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 72, length 64
23:01:55.219536 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 73, length 64
23:01:56.242768 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 74, length 64
23:01:57.267456 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 75, length 64
23:01:58.295056 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 76, length 64
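The capture above shows echo requests leaving lxdbr0 with no replies coming back. A useful next step (assuming enp0s1 is the uplink, as in the ip a output above) is to capture on the uplink and see whether the requests are being source-NATed on the way out:

```shell
# If NAT is working, the requests should appear here with source
# 192.168.64.3 (the VM's address). If they still show 10.4.135.180,
# no MASQUERADE rule is rewriting them and replies cannot return.
sudo tcpdump -ni enp0s1 'icmp and host 8.8.8.8'
```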

I have already run the following command on my VM:

iptables -A FORWARD -p all -i lxdbr0 -j ACCEPT

I also tried disabling Docker.
Unfortunately, it still did not work.
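Docker is a plausible culprit here: it sets the filter table's FORWARD chain policy to DROP, which also drops traffic forwarded between lxdbr0 and enp0s1 unless ACCEPT rules exist in both directions. The sketch below shows a quick check (note the ACCEPT rule above only covers traffic coming in from lxdbr0, not the return direction):

```shell
# Show the FORWARD policy and rules. "-P FORWARD DROP" means Docker
# is dropping forwarded traffic by default.
sudo iptables -S FORWARD

# If the policy is DROP, insert ACCEPT rules for BOTH directions,
# before Docker's own rules (-I inserts at the top of the chain):
# sudo iptables -I FORWARD -i lxdbr0 -j ACCEPT
# sudo iptables -I FORWARD -o lxdbr0 -j ACCEPT
```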

iptables output:

iptables -t nat -L -n -v --line-numbers

Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1       78 11520 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1        5  1196 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
2        0     0 MASQUERADE  all  --  *      !br-df1c5081bc7d  172.18.0.0/16        0.0.0.0/0           
3        0     0 MASQUERADE  all  --  *      !br-251cc27d7151  172.19.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
2        0     0 RETURN     all  --  br-df1c5081bc7d *       0.0.0.0/0            0.0.0.0/0           
3        0     0 RETURN     all  --  br-251cc27d7151 *       0.0.0.0/0            0.0.0.0/0           

lxdbr0 network info:

lxc network show lxdbr0

config:
  ipv4.address: 10.4.135.1/24
  ipv4.firewall: "true"
  ipv4.nat: "true"
  ipv6.address: fd42:e368:9853:dcf7::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/ubuntu
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

Answer 1

Try adding the following to your ruleset. (Note that iptables does not accept -i in the POSTROUTING chain, so the rule matches on the bridge's source subnet instead:)

iptables -t nat -A POSTROUTING -s 10.4.135.0/24 ! -d 10.4.135.0/24 -j MASQUERADE

Normally LXD adds this automatically, but Docker and the other bridges on your system may have removed those NAT rules. (This is why, on my own systems with multiple VM networks, LXD bridges and so on, the NAT rules are all added to the nat table manually; but that is just how I do it as a power user. Notice in your nat table that POSTROUTING only contains Docker's MASQUERADE rules and nothing for 10.4.135.0/24.)


For those who want the command's parameters explained:

  • -t nat: operate on the NAT table instead of the default filter table.
  • -A POSTROUTING: append to the chain that processes outgoing traffic after routing, which is where source NAT (masquerading) for the LXD bridge takes place.
  • -s 10.4.135.0/24 ! -d 10.4.135.0/24: only NAT traffic that originates from the lxdbr0 subnet and is leaving that subnet.
  • -j MASQUERADE: source-NAT the traffic to the address of the outgoing interface, which should be the device with your internet uplink.
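After adding the rule, one way to confirm it is actually matching (assuming you ping 8.8.8.8 from the container again while watching) is its packet counters:

```shell
# The pkts/bytes counters on the new MASQUERADE rule should increase
# while the container is pinging an external address.
sudo iptables -t nat -L POSTROUTING -n -v --line-numbers
```

Keep in mind that rules added this way do not survive a reboot; they need to be persisted separately (for example with the iptables-persistent package on Ubuntu).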
