WireGuard tunnel is slow and intermittent

After asking this question, I set up a WireGuard VPN that forwards all traffic from my local LAN to a remote server. Connections made from the WireGuard client host itself are fast. However, clients on the LAN behind it are much slower and drop many connections. Traceroute shows that both the client host and the LAN clients go through the VPN and exit at the correct place.

On the WireGuard client host, ping times are low and throughput is decent:

curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
Retrieving speedtest.net configuration...
Testing from Spectrum (68.187.109.97)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Bertram Communications (Iron Ridge, WI) [185.33 km]: 598.9 ms
Testing download speed................................................................................
Download: 4.65 Mbit/s
Testing upload speed................................................................................................
Upload: 4.97 Mbit/s

But on the LAN clients this just hangs; they can't even download the script to run it. Some simple websites load, but anything substantial times out.

How should I start debugging this? My first thought is that my iptables rules may be misconfigured.


# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:84:56:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.104/24 brd 192.168.1.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ba27:ebff:fe84:56f5/64 scope link 
       valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:27:eb:d1:03:a0 brd ff:ff:ff:ff:ff:ff
4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:27:eb:84:56:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute eth0.2
       valid_lft forever preferred_lft forever
    inet6 fe80::ba27:ebff:fe84:56f5/64 scope link 
       valid_lft forever preferred_lft forever
5: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1120 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 192.168.99.17/24 scope global wg0
       valid_lft forever preferred_lft forever

# ip -4 route show table all
default dev wg0 table 51820 scope link 
default via 192.168.1.1 dev eth0 src 192.168.1.104 metric 202 mtu 1200 
10.0.0.0/24 dev eth0.2 proto dhcp scope link src 10.0.0.1 metric 204 mtu 1200 
192.168.1.0/24 dev eth0 proto dhcp scope link src 192.168.1.104 metric 202 mtu 1200 
192.168.99.0/24 dev wg0 proto kernel scope link src 192.168.99.17 
broadcast 10.0.0.0 dev eth0.2 table local proto kernel scope link src 10.0.0.1 
local 10.0.0.1 dev eth0.2 table local proto kernel scope host src 10.0.0.1 
broadcast 10.0.0.255 dev eth0.2 table local proto kernel scope link src 10.0.0.1 
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1 
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1 
broadcast 192.168.1.0 dev eth0 table local proto kernel scope link src 192.168.1.104 
local 192.168.1.104 dev eth0 table local proto kernel scope host src 192.168.1.104 
broadcast 192.168.1.255 dev eth0 table local proto kernel scope link src 192.168.1.104 
broadcast 192.168.99.0 dev wg0 table local proto kernel scope link src 192.168.99.17 
local 192.168.99.17 dev wg0 table local proto kernel scope host src 192.168.99.17 
broadcast 192.168.99.255 dev wg0 table local proto kernel scope link src 192.168.99.17 

# ip -4 rule show
0:  from all lookup local 
32764:  from all lookup main suppress_prefixlength 0 
32765:  not from all fwmark 0xca6c lookup 51820 
32766:  from all lookup main 
32767:  from all lookup default 

# ip -6 route show table all
::1 dev lo proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0.2 proto kernel metric 256 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fe80::ba27:ebff:fe84:56f5 dev eth0.2 table local proto kernel metric 0 pref medium
local fe80::ba27:ebff:fe84:56f5 dev eth0 table local proto kernel metric 0 pref medium
ff00::/8 dev eth0 table local metric 256 pref medium
ff00::/8 dev eth0.2 table local metric 256 pref medium

# ip -6 rule show
0:  from all lookup local 
32766:  from all lookup main 

# wg
interface: wg0
  public key: XR9UASLZXCjRZKa9MnmBxebfP6jxfBaaQOa5BJEFsX8=
  private key: (hidden)
  listening port: 48767
  fwmark: 0xca6c

peer: M37O/lE0ZWZ0uzYVGu17ZAZmdbnLyd5RuiAVvF/bqwE=
  endpoint: 68.187.109.97:51820
  allowed ips: 0.0.0.0/0
  latest handshake: 2 minutes, 20 seconds ago
  transfer: 2.42 MiB received, 8.45 MiB sent

# ip netconf
inet lo forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet eth0 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet wlan0 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet eth0.2 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet wg0 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet all forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet default forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 lo forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 eth0 forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 wlan0 forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 eth0.2 forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 all forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 default forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 

# iptables-save
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -d 192.168.99.17/32 ! -i wg0 -m addrtype ! --src-type LOCAL -m comment --comment "wg-quick(8) rule for wg0" -j DROP
COMMIT
# Completed on Thu Apr  2 19:11:02 2020
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -p udp -m comment --comment "wg-quick(8) rule for wg0" -j CONNMARK --restore-mark --nfmask 0xffffffff --ctmask 0xffffffff
-A POSTROUTING -p udp -m mark --mark 0xca6c -m comment --comment "wg-quick(8) rule for wg0" -j CONNMARK --save-mark --nfmask 0xffffffff --ctmask 0xffffffff
COMMIT
# Completed on Thu Apr  2 19:11:02 2020
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i wg0 -o eth0.2 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth0.2 -o wg0 -j ACCEPT
COMMIT
# Completed on Thu Apr  2 19:11:02 2020
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o wg0 -j MASQUERADE
COMMIT
# Completed on Thu Apr  2 19:11:02 2020

Answer 1

WireGuard's default MTU is 1420, while other devices typically use 1492 or 1500.

This means that any device which thinks it is sending a full-sized packet into WireGuard actually produces two WireGuard packets: the original is split in two, with the second fragment nearly empty.

Since packet count is a major cost factor in TCP/IP (every packet has to be sequenced and acknowledged), this slows down all communication.
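The 1420 default follows from simple header arithmetic. A quick sketch (the assumed constants are the 20/40-byte IPv4/IPv6 headers, the 8-byte UDP header, and WireGuard's 32-byte per-packet overhead, i.e. its 16-byte data-message header plus the 16-byte Poly1305 tag):

```python
# Assumed header sizes in bytes.
IP4_HDR, IP6_HDR, UDP_HDR = 20, 40, 8
WG_OVERHEAD = 32  # 16-byte data-message header + 16-byte Poly1305 tag

def inner_mtu(outer_mtu, ipv6=False):
    """Largest inner packet that fits in one encrypted UDP datagram."""
    ip_hdr = IP6_HDR if ipv6 else IP4_HDR
    return outer_mtu - ip_hdr - UDP_HDR - WG_OVERHEAD

print(inner_mtu(1500))             # 1440 when the outer path is IPv4
print(inner_mtu(1500, ipv6=True))  # 1420 -> WireGuard's default MTU
```

Any inner packet larger than that budget must be fragmented into a second, nearly empty tunnel packet, which is the slowdown described above.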

The solution is to set WireGuard's MTU to a value consistent with the rest of your network, after accounting for the tunnel overhead.
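With wg-quick, the MTU can be pinned in the interface config so it survives restarts. A sketch using the addresses and keys from the question's output; the MTU value itself is only an example (1492 minus 80 bytes of IPv6+UDP+WireGuard overhead) that you would tune to your own path:

```ini
# /etc/wireguard/wg0.conf (sketch; MTU value is an example)
[Interface]
Address = 192.168.99.17/24
PrivateKey = (hidden)
MTU = 1412   # wg-quick sets the interface MTU from this line

[Peer]
PublicKey = M37O/lE0ZWZ0uzYVGu17ZAZmdbnLyd5RuiAVvF/bqwE=
Endpoint = 68.187.109.97:51820
AllowedIPs = 0.0.0.0/0
```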

For more information, see:

Answer 2

In my case, I had to set the MTU even lower, to 1400.

The command I used was:

sudo ip link set dev wg0 mtu 1400

Also, if you want to check whether you are hitting an MTU problem and you have a dual-stack connection (IPv4 + IPv6), try connecting over IPv6 instead of IPv4: if the issue is MTU-related, it should no longer occur.

Answer 3

Make sure you allow ICMP type 3 code 4 (Fragmentation Needed), since that is what enables the PMTUD (Path MTU Discovery) mechanism.
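In iptables-save form, matching the filter table shown in the question, such rules could look like the following sketch (a default filter table with standard chains is assumed; `fragmentation-needed` is iptables' named subtype for ICMP type 3, code 4):

```
*filter
-A INPUT -p icmp --icmp-type fragmentation-needed -j ACCEPT
-A FORWARD -p icmp --icmp-type fragmentation-needed -j ACCEPT
COMMIT
```

With an ACCEPT default policy these rules are redundant, but they keep PMTUD working if you later tighten the chains to a DROP policy.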
