Why is there non-127.x.x.x traffic on my loopback device?

I've run into a problem that has been giving me quite a headache. I've found a workaround, but I don't understand why the problem happens in the first place.

I have an nginx reverse proxy that routes requests to one of three servers. All of them are Node servers; two are Express servers and one is custom.

The problem: with my firewall enabled, the very first request to either of the two Express servers in its entire uptime ends in a 504 Gateway Timeout; after that, files are served quickly and without error, even days later.

I noticed that the problem disappears when I turn the firewall off entirely, so I was able to narrow it down to the firewall. My server runs FreeBSD with PF as the firewall. Here is the relevant pf.conf:

iface = "vtnet0"
loopback = "lo0"
public_ip = "132.148.77.28/32"
localnet = "127.0.0.1"

nat on $iface from any to any -> $public_ip # added by cloud server provider
pass out quick on $iface proto { tcp udp } from port ntp keep state
block log all
pass in on $iface proto tcp from $localnet to port mail
pass in log on $iface proto tcp to $public_ip port { ssh http } keep state
pass in log on $loopback proto tcp to $localnet port { 5555 5556 5557 } keep state
pass out log all keep state

My understanding is that any traffic on the vtnet0 device gets translated to use the public IP.

I ran tcpdump with the following command: sudo tcpdump -c 10 -vvv -t -i pflog0 -e -n tcp

That captures 10 packets on pflog0 with triple verbosity, suppresses timestamps, prints the pf rule that passed or blocked each packet, keeps IP addresses and ports numeric, and captures only TCP traffic.

Here are the interesting results for the first request, followed by the results for subsequent requests:

$ sudo tcpdump -c 10 -vvv -t -i pflog0 -e -n tcp
tcpdump: WARNING: pflog0: no IPv4 address assigned
tcpdump: listening on pflog0, link-type PFLOG (OpenBSD pflog file), capture size 65535 bytes

rule 8..16777216/0(match): pass in on vtnet0: (tos 0x0, ttl 52, id 53488, offset 0, flags [DF], proto TCP (6), length 60)
    67.197.156.119.45274 > 132.148.77.28.80: Flags [S], cksum 0x76df (correct), seq 356096480, win 29200, options [mss 1460,sackOK,TS val 63916304 ecr 0,nop,wscale 7], length 0

rule 9..16777216/0(match): pass out on lo0: (tos 0x0, ttl 64, id 2406, offset 0, flags [DF], proto TCP (6), length 60)
    127.0.0.1.28850 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0x180a), seq 2069090201, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33367168 ecr 0], length 0

rule 5..16777216/0(match): pass in on lo0: (tos 0x0, ttl 64, id 2406, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->3354)!)
    127.0.0.1.28850 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0x180a), seq 2069090201, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33367168 ecr 0], length 0

rule 9..16777216/0(match): pass out on lo0: (tos 0x0, ttl 64, id 2414, offset 0, flags [DF], proto TCP (6), length 60)
    132.148.77.28.48488 > 132.148.77.28.80: Flags [S], cksum 0xa38f (incorrect -> 0x1728), seq 3973611725, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33367544 ecr 0], length 0

rule 2..16777216/0(match): **block in on lo0**: (tos 0x0, ttl 64, id 2414, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->8ded)!)
    132.148.77.28.48488 > 132.148.77.28.80: Flags [S], cksum 0xa38f (incorrect -> 0x1728), seq 3973611725, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33367544 ecr 0], length 0

rule 2..16777216/0(match): **block in on lo0**: (tos 0x0, ttl 64, id 2416, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->8deb)!)
    132.148.77.28.48488 > 132.148.77.28.80: Flags [S], cksum 0xa38f (incorrect -> 0x0b6a), seq 3973611725, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33370550 ecr 0], length 0

rule 2..16777216/0(match): **block in on lo0**: (tos 0x0, ttl 64, id 2418, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->8de9)!)
    132.148.77.28.48488 > 132.148.77.28.80: Flags [S], cksum 0xa38f (incorrect -> 0xfee6), seq 3973611725, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33373753 ecr 0], length 0

rule 2..16777216/0(match): **block in on lo0**: (tos 0x0, ttl 64, id 2420, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->8de7)!)
    132.148.77.28.48488 > 132.148.77.28.80: Flags [S], cksum 0xa38f (incorrect -> 0xf265), seq 3973611725, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33376954 ecr 0], length 0

rule 2..16777216/0(match): **block in on lo0**: (tos 0x0, ttl 64, id 2422, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->8de5)!)
    132.148.77.28.48488 > 132.148.77.28.80: Flags [S], cksum 0xa38f (incorrect -> 0xe5c8), seq 3973611725, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33380183 ecr 0], length 0

rule 2..16777216/0(match): **block in on lo0**: (tos 0x0, ttl 64, id 2424, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->8de3)!)
    132.148.77.28.48488 > 132.148.77.28.80: Flags [S], cksum 0xa38f (incorrect -> 0xd943), seq 3973611725, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 33383388 ecr 0], length 0
10 packets captured
11 packets received by filter
0 packets dropped by kernel

$ sudo tcpdump -c 10 -vvv -t -i pflog0 -e -n tcp
tcpdump: WARNING: pflog0: no IPv4 address assigned
tcpdump: listening on pflog0, link-type PFLOG (OpenBSD pflog file), capture size 65535 bytes

rule 8..16777216/0(match): pass in on vtnet0: (tos 0x0, ttl 52, id 408, offset 0, flags [DF], proto TCP (6), length 60)
    67.197.156.119.45482 > 132.148.77.28.80: Flags [S], cksum 0x2e49 (correct), seq 3314040182, win 29200, options [mss 1460,sackOK,TS val 64109806 ecr 0,nop,wscale 7], length 0

rule 9..16777216/0(match): pass out on lo0: (tos 0x0, ttl 64, id 2446, offset 0, flags [DF], proto TCP (6), length 60)
    127.0.0.1.19450 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0x603e), seq 629617255, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 34141175 ecr 0], length 0

rule 5..16777216/0(match): pass in on lo0: (tos 0x0, ttl 64, id 2446, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->332c)!)
    127.0.0.1.19450 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0x603e), seq 629617255, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 34141175 ecr 0], length 0

rule 9..16777216/0(match): pass out on lo0: (tos 0x0, ttl 64, id 2459, offset 0, flags [DF], proto TCP (6), length 60)
    127.0.0.1.54320 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0x8ec2), seq 2137792783, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 34141359 ecr 0], length 0

rule 5..16777216/0(match): pass in on lo0: (tos 0x0, ttl 64, id 2459, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->331f)!)
    127.0.0.1.54320 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0x8ec2), seq 2137792783, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 34141359 ecr 0], length 0

rule 8..16777216/0(match): pass in on vtnet0: (tos 0x0, ttl 52, id 13349, offset 0, flags [DF], proto TCP (6), length 60)
    67.197.156.119.45484 > 132.148.77.28.80: Flags [S], cksum 0x9e57 (correct), seq 2310473985, win 29200, options [mss 1460,sackOK,TS val 64109860 ecr 0,nop,wscale 7], length 0

rule 8..16777216/0(match): pass in on vtnet0: (tos 0x0, ttl 52, id 32739, offset 0, flags [DF], proto TCP (6), length 60)
    67.197.156.119.45486 > 132.148.77.28.80: Flags [S], cksum 0x5b67 (correct), seq 2464105159, win 29200, options [mss 1460,sackOK,TS val 64109860 ecr 0,nop,wscale 7], length 0

rule 8..16777216/0(match): pass in on vtnet0: (tos 0x0, ttl 52, id 55497, offset 0, flags [DF], proto TCP (6), length 60)
    67.197.156.119.45488 > 132.148.77.28.80: Flags [S], cksum 0x832f (correct), seq 3354322413, win 29200, options [mss 1460,sackOK,TS val 64109860 ecr 0,nop,wscale 7], length 0

rule 9..16777216/0(match): pass out on lo0: (tos 0x0, ttl 64, id 2474, offset 0, flags [DF], proto TCP (6), length 60)
    127.0.0.1.60189 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0xe2c0), seq 1589892785, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 34141386 ecr 0], length 0

rule 5..16777216/0(match): pass in on lo0: (tos 0x0, ttl 64, id 2474, offset 0, flags [DF], proto TCP (6), length 60, bad cksum 0 (->3310)!)
    127.0.0.1.60189 > 127.0.0.1.5556: Flags [S], cksum 0xfe30 (incorrect -> 0xe2c0), seq 1589892785, win 65535, options [mss 16344,nop,wscale 6,sackOK,TS val 34141386 ecr 0], length 0
10 packets captured
14 packets received by filter
0 packets dropped by kernel

I've added blank lines between packets for readability. You'll notice that in the first dump "rule 2" is blocking packets; this only happens on the first request after the upstream servers restart.
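
As an aside (not from the original capture, just a standard pfctl invocation): the rule numbers tcpdump prints from pflog0 refer to pf's internal numbering of the loaded ruleset, not to line numbers in pf.conf, and lists such as port { ssh http } are expanded into several rules when loaded. You can dump the numbered ruleset with:

$ sudo pfctl -vv -s rules

Each rule should be printed with an @N prefix that lines up with the rule numbers in the pflog output.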

What is going on here? You'll also notice that the blocked packets are on the loopback interface, yet they are to and from the public IP address. I didn't think they should be NATed, since they're not on the vtnet0 interface. And even so, why do subsequent requests take a different path to get traffic from nginx to the upstream servers?

The fix I found was to change the following line in my pf.conf from:

pass in log on $iface proto tcp to $public_ip port { ssh http } keep state

to:

pass in log proto tcp to $public_ip port { ssh http } keep state

I removed the on $iface part so that the firewall will pass packets to $public_ip on port 80 even on the loopback interface. I'm also slightly worried about the security implications of loosening that rule, mostly because I no longer fully understand how traffic flows between these different network interfaces.

At this point I'm not even sure whether the problem is with my firewall or with nginx, only that the problem goes away when the firewall is disabled.

Note that nginx proxy_passes requests to three different server blocks at http://127.0.0.1:[port], where the port is one of 5555, 5556, or 5557.
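
For context, the nginx side looks roughly like the sketch below. Only the proxy_pass targets (127.0.0.1:5555-5557) come from the question; the server_name value and the rest of the layout are placeholders I'm assuming:

server {
    listen 80;
    server_name app1.example.com;           # placeholder name, assumption
    location / {
        proxy_pass http://127.0.0.1:5555;   # the other two blocks point at 5556 and 5557
    }
}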

Answer 1

After a lot of research, the correct fix turned out not to be modifying the line I mentioned, but removing it entirely and adding set skip on lo0 in the options section of pf.conf (after the macros and before the packet-filtering rules). That stops pf from touching the loopback interface at all.
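
For reference, a minimal sketch of where that line sits, using the macros from the pf.conf quoted above (the set skip line is the only addition; the nat and filter rules continue below it as before):

iface = "vtnet0"
loopback = "lo0"
public_ip = "132.148.77.28/32"
localnet = "127.0.0.1"

set skip on lo0    # pf will neither filter nor log anything on the loopback

# nat and filter rules follow here, unchanged
nat on $iface from any to any -> $public_ip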

As for the non-127.x.x.x IPs showing up on the loopback device, I have a theory. As @Richard Smith suggested, I ran netstat -rn and noticed the following two lines:

Destination      Gateway Flags Netif
132.148.77.28    link#1  UHS   lo0 
132.148.77.28/32 link#1  U     vtnet0

I did some digging and found that these two entries are added by ifconfig as aliases of vtnet0, the first Ethernet interface listed in the ifconfig output. So it really does look like traffic to 132.148.77.28 can be routed over the loopback, as that route shows; it just means vtnet0 is used as the gateway to do it. Now, I'm not sure exactly what happens when it reaches vtnet0 as the gateway and gets NATed by the pf rule, and I'm not sure why tcpdump still shows it on the old interface it came from (lo0). I also still find it odd that an upstream server listening on 127.0.0.1 would choose to open a port on 132.148.77.28 to answer the proxy server on the first attempt, and then on subsequent requests just use 127.0.0.1 and let the reverse proxy "downgrade" to that for communication. My theory is that the routing algorithm "remembers" that the earlier path was bad (i.e. blocked) and therefore tries a different approach. I won't mark my answer as the accepted one, because I'd really like a trained professional to confirm whether my lightweight theory holds up.
