I just bought a new 10Gbps server with 8 CPU cores, 64GB RAM, and a 1TB NVMe drive.
OS: CentOS 7.9, kernel 3.10.0-1160.36.2.el7.x86_64 (also tried kernel-ml 5.13).
SELinux is disabled.
firewalld and irqbalance are stopped.
I tested the network with iperf3 and confirmed a throughput of about 9.5 Gbps.
Then I ran another test: 10 x 1Gbps servers downloading a static file from this server, and it easily pushed nearly the full 10Gbps to all 10 of them.
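For reference, a minimal sketch of the kind of iperf3 run described above; the target address, stream count, and duration are illustrative assumptions, not values from the original test:
# on the server under test: start an iperf3 listener
iperf3 -s
# on a test client: 8 parallel TCP streams for 30 seconds to approach line rate
iperf3 -c 203.0.113.10 -P 8 -t 30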
So we put the server into production, serving static file downloads to clients with Nginx. It delivered stable performance until it reached about 2,000 connections, then performance began to drop significantly. I found that throughput falls as connections rise, so serving more than 4,000 connections yields only 2Gbps!
The most confusing part is that the CPU is almost idle, RAM is free, and IO usage is low thanks to the NVMe and the large RAM, yet when the server has thousands of connections all services (HTTP, FTP, SSH) slow down, and even a yum update takes a long time to respond. It looks like network or packet congestion, or some limit in the kernel or the NIC.
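As a hedged sketch, this is how the drop could be watched live while connections climb; the interface and intervals are placeholders, not part of the original post:
# per-second throughput per interface (rxkB/s, txkB/s)
sar -n DEV 1
# socket summary: how many TCP connections are currently established
ss -s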
I have tried most of the tuning tricks:
ifconfig eth0 txqueuelen 20000
ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:16:3e:c2:f5:21 txqueuelen 20000 (Ethernet)
RX packets 26012067560 bytes 1665662731749 (1.5 TiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 30684216747 bytes 79033055227212 (71.8 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tc -s -d qdisc show dev eth0
qdisc mq 1: root
Sent 7733649086021 bytes 1012203012 pkt (dropped 0, overlimits 0 requeues 169567)
backlog 4107556b 2803p requeues 169567
qdisc pfifo_fast 0: parent 1:8 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 2503685906926 bytes 1714686297 pkt (dropped 0, overlimits 0 requeues 1447)
backlog 4107556b 2803p requeues 1447
qdisc pfifo_fast 0: parent 1:7 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 532876060762 bytes 366663805 pkt (dropped 0, overlimits 0 requeues 7790)
backlog 0b 0p requeues 7790
qdisc pfifo_fast 0: parent 1:6 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 563510390106 bytes 387948990 pkt (dropped 0, overlimits 0 requeues 9694)
backlog 0b 0p requeues 9694
qdisc pfifo_fast 0: parent 1:5 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 563033712946 bytes 387564038 pkt (dropped 0, overlimits 0 requeues 10259)
backlog 0b 0p requeues 10259
qdisc pfifo_fast 0: parent 1:4 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 562982455659 bytes 387451904 pkt (dropped 0, overlimits 0 requeues 10706)
backlog 0b 0p requeues 10706
qdisc pfifo_fast 0: parent 1:3 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 559557988260 bytes 385263948 pkt (dropped 0, overlimits 0 requeues 9983)
backlog 0b 0p requeues 9983
qdisc pfifo_fast 0: parent 1:2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 528903326344 bytes 364105031 pkt (dropped 0, overlimits 0 requeues 7718)
backlog 0b 0p requeues 7718
qdisc pfifo_fast 0: parent 1:1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 1919099245018 bytes 1313486295 pkt (dropped 0, overlimits 0 requeues 111970)
backlog 0b 0p requeues 111970
ethtool -k eth0
Features for eth0:
rx-checksumming: on [fixed]
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off
tx-tcp6-segmentation: off
tx-tcp-mangleid-segmentation: off
udp-fragmentation-offload: on
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
busy-poll: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-gso-partial: off [fixed]
tx-sctp-segmentation: off [fixed]
rx-gro-hw: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]
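One detail in the output above: tcp-segmentation-offload, generic-segmentation-offload, and generic-receive-offload are all off and are not marked [fixed]. A hedged sketch of how one might try re-enabling them; whether this NIC/driver actually accepts the change is an assumption:
# try turning TSO/GSO/GRO back on, then confirm the result
ethtool -K eth0 tso on gso on gro on
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'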
sysctl -p
vm.max_map_count = 1048575
net.ipv4.tcp_timestamps = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_syncookies = 0
net.ipv4.conf.all.log_martians = 1
vm.swappiness = 10
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65536
net.core.netdev_max_backlog = 250000
fs.file-max = 100000
net.ipv4.ip_local_port_range = 13000 65000
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0
net.ipv4.tcp_slow_start_after_idle = 0
net.core.rmem_max = 2147483647
net.core.rmem_default = 2147483647
net.core.wmem_max = 2147483647
net.core.wmem_default = 2147483647
net.core.optmem_max = 2147483647
net.ipv4.tcp_rmem = 4096 87380 2147483647
net.ipv4.tcp_wmem = 4096 65536 2147483647
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_keepalive_time = 60
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 5
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 5
net.netfilter.nf_conntrack_max = 655360
net.netfilter.nf_conntrack_tcp_timeout_established = 10800
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256680
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 100000
cpu time (seconds, -t) unlimited
max user processes (-u) 100000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
nginx.conf
worker_processes auto;
worker_rlimit_nofile 100000;
thread_pool default threads=256 max_queue=65536;
events {
worker_connections 65536;
worker_aio_requests 65536;
multi_accept on;
accept_mutex on;
use epoll;
}
http {
server_tokens off;
server_names_hash_max_size 4096;
server_names_hash_bucket_size 128;
tcp_nopush on;
tcp_nodelay on;
client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
keepalive_requests 1000;
send_timeout 10;
aio threads=default;
sendfile on;
sendfile_max_chunk 512k;
open_file_cache max=100000 inactive=10m;
open_file_cache_valid 10m;
open_file_cache_min_uses 10;
open_file_cache_errors on;
gzip off;
}
So the question is: how do I serve 10Gbps of static file downloads to 10k connections? Is this a Linux problem, an nginx problem, or a hardware problem?
Answer 1
Brandon already answered this. Turn irqbalance on. Run numad and let it do the tuning. Don't attempt manual tuning unless you have a specific workload that needs it. Where are the wrk test results for 2,000-10,000 requests from before deployment? This problem should never have appeared in production; it could clearly have been found by testing. Real-world use often uncovers uncommon bugs, but many or most configuration and application errors can be identified and corrected during testing. There is plenty of documentation on IRQ affinity. I doubt your use case can do better than the built-in tuning tools. Most likely, your manual tuning will make things worse.
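For context, a pre-deployment load test along the lines Answer 1 asks about might look like this; the URL, thread count, and connection counts are illustrative placeholders:
# 8 threads, 2,000 concurrent connections, 60 seconds, with latency statistics
wrk -t8 -c2000 -d60s --latency http://203.0.113.10/static/file.bin
# repeat at the upper end of the expected load
wrk -t8 -c10000 -d60s --latency http://203.0.113.10/static/file.bin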
Answer 2
Your top output indicates that your kernel is being overwhelmed by soft interrupts from all the incoming connections. Connections arrive so fast that the hardware interrupts raised by the NIC queue soft interrupts faster than the kernel can process them. That is why your CPU, RAM, and IO usage look so low: the system is constantly being interrupted by incoming connections. What you need here is a load balancer.
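A minimal sketch of how this diagnosis could be checked on the server itself, using standard tools rather than anything from the original post:
# per-CPU utilisation; a high %soft concentrated on one or two cores supports the softirq explanation
mpstat -P ALL 1
# watch the per-CPU NET_RX / NET_TX softirq counters grow
watch -d -n1 'cat /proc/softirqs'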