We set up Nginx with Lua (the OpenResty bundle) as a local cache node for a file-sharing service. Files are split into 50MB chunks (chunked this way) and stored in the cache to improve efficiency. At low traffic this works fine, but as the number of cached files and the load grow (even though the load is not that high), the cache becomes unresponsive: most of the time system CPU time (sys%) is above 80%. What could be the performance killer in this situation?
We have tried tuning several parameters (e.g. cache directory levels, RAID parameters), but haven't found a real solution yet.
P.S. The symptoms start when there are only about 10,000 files in the cache and roughly 300 connections/s to the server.
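When sys% dominates like this, a useful first step (not from the original post, just standard tooling) is to confirm which kernel code paths are burning the CPU time, rather than tuning Nginx blindly. A hedged diagnostic sketch:

```shell
# Hypothetical diagnostic session using standard tools; adjust intervals
# to taste.

# Break down CPU time per core: a high %sys with low %usr/%iowait means
# the time is spent inside the kernel, not in Nginx or on disk waits.
mpstat -P ALL 1 5

# Sample the hottest kernel symbols while the cache is under load. A lock
# function (e.g. _spin_lock_irqsave) at the top points at kernel-level
# contention rather than an Nginx configuration problem.
perf top -g
```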
Cache server specs:
1x CPU 2.5 GHz, 12 cores
128GB RAM
10x 500GB Samsung SSD, RAID0 (128KB chunks) storage
OS: CentOS 6.6 64-bit
Filesystem: ext4, 4KB blocks
Nginx configuration:
worker_processes auto;

events {
    use epoll;
    worker_connections 1024;
    multi_accept on;
}

http {
    include /usr/local/openresty/nginx/conf/mime.types;

    proxy_cache_path /mnt/cache/ levels=2:2:2 keys_zone=default:1000m loader_threshold=100 loader_files=2000
                     loader_sleep=10 inactive=1y max_size=3500000m;
    proxy_temp_path /mnt/temp2 2 2;
    client_body_temp_path /mnt/temp 2 2;
    limit_conn_zone $remote_addr$uri zone=addr:100m;

    map $request_method $disable_cache {
        HEAD    1;
        default 0;
    }

    lua_package_path "/opt/ranger/external/lua-resty-http/lib/?.lua;/opt/ranger/external/nginx_log_by_lua/?.lua;/opt/ranger/external/bitset/lib/?.lua;;";
    lua_shared_dict file_dict  50M;
    lua_shared_dict log_dict   100M;
    lua_shared_dict cache_dict 100M;
    lua_shared_dict chunk_dict 100M;

    proxy_read_timeout 20s;
    proxy_send_timeout 25s;
    reset_timedout_connection on;

    init_by_lua_file '/opt/ranger/init.lua';

    # Server that has the lua code and will be accessed by clients
    server {
        listen 80 default;
        server_name _;
        server_name_in_redirect off;
        set $ranger_cache_status $upstream_cache_status;
        lua_check_client_abort on;
        lua_code_cache on;
        resolver ----;
        server_tokens off;
        resolver_timeout 1s;

        location / {
            try_files $uri $uri/ index.html;
        }

        location ~* ^/download/ {
            lua_http10_buffering off;
            content_by_lua_file '/opt/ranger/content.lua';
            log_by_lua_file '/opt/ranger/log.lua';
            limit_conn addr 2;
        }
    }

    # Server that works as a backend to the lua code
    server {
        listen 8080;
        server_tokens off;
        resolver_timeout 1s;

        location ~* ^/download/(.*?)/(.*?)/(.*) {
            set $download_uri  $3;
            set $download_host $2;
            set $download_url  http://$download_host/$download_uri?$args;
            proxy_no_cache $disable_cache;
            proxy_cache_valid 200 1y;
            proxy_cache_valid 206 1y;
            proxy_cache_key "$scheme$proxy_host$uri$http_range";
            proxy_cache_use_stale error timeout http_502;
            proxy_cache default;
            proxy_cache_min_uses 1;
            proxy_pass $download_url;
        }
    }
}
Answer 1
Thanks to @myaut's pointer, I looked into _spin_lock_irqsave.
It turned out the problem was related to the kernel itself, not to Nginx.
As mentioned in this article, it can be solved by disabling the Red Hat Transparent Huge Pages feature, which fixed the issue:
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
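The sysfs file shows the active mode in brackets (e.g. "always madvise [never]"), so you can verify the change took effect. Below is a minimal sketch; the `thp_mode` helper name is mine, not from the original answer, and the path shown is the RHEL 6 location (upstream kernels use /sys/kernel/mm/transparent_hugepage/enabled instead):

```shell
#!/bin/sh
# Extract the active mode (the bracketed token) from a THP sysfs line,
# e.g. "always madvise [never]" -> "never".  Helper name is hypothetical.
thp_mode() {
    echo "$1" | sed 's/.*\[\(.*\)\].*/\1/'
}

# RHEL 6 path; upstream kernels use transparent_hugepage instead of
# redhat_transparent_hugepage.  Guarded so it only writes when the file
# exists and is writable (i.e. running as root on a matching kernel).
THP=/sys/kernel/mm/redhat_transparent_hugepage/enabled
if [ -w "$THP" ]; then
    echo never > "$THP"                      # disable THP at runtime
    echo "THP now: $(thp_mode "$(cat "$THP")")"
fi
```

Note that this setting does not survive a reboot, so the `echo never` line is typically also added to a boot-time script such as /etc/rc.local.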