I have a news website that is run by 4 tornado instances, with nginx in front of them as a reverse proxy.
Pages are rendered and then cached in memcached, so the response time is usually under 3 ms, as the tornado logs show:
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.43ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 3.41ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 1.96ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.48ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 4.09ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.43ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.49ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.25ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.39ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.93ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.70ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.08ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.72ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.02ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.70ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.74ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.85ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.60ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.83ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.65ms
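For context, here is a minimal sketch of the render-and-cache pattern described above. This is not the site's actual code: the handler name, cache key, placeholder page body, and the python-memcached client are my assumptions.

# Hypothetical sketch: one tornado instance that serves a page from memcached
# when possible and re-renders (and re-caches) it otherwise.
# Assumes the python-memcached package and a memcached daemon on 127.0.0.1:11211.
import sys

import memcache
import tornado.ioloop
import tornado.web

cache = memcache.Client(["127.0.0.1:11211"])

class HomeHandler(tornado.web.RequestHandler):
    def get(self):
        html = cache.get("page:/")            # cached, fully rendered page
        if html is None:
            # Placeholder for the real template-rendering step.
            html = "<html><body>rendered news front page</body></html>"
            cache.set("page:/", html, time=60)  # keep the rendered page for 60 s
        self.write(html)

if __name__ == "__main__":
    # One process per port, e.g. 8081-8084; nginx balances across them.
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 8081
    tornado.web.Application([(r"/", HomeHandler)]).listen(port)
    tornado.ioloop.IOLoop.instance().start()

Started four times (ports 8081-8084), these would be the upstreams that nginx round-robins between in the configuration shown further down.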
When I benchmark this site with ab at a concurrency level of 1000, the response time is around 0.8 seconds. Here are the benchmark results:
Document Length: 12036 bytes
Concurrency Level: 1000
Time taken for tests: 7.974 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 10000
Total transferred: 122339941 bytes
HTML transferred: 120549941 bytes
Requests per second: 1254.07 [#/sec] (mean)
Time per request: 797.407 [ms] (mean)
Time per request: 0.797 [ms] (mean, across all concurrent requests)
Transfer rate: 14982.65 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 7 20.8 0 86
Processing: 57 508 473.9 315 7014
Waiting: 57 508 473.9 315 7014
Total: 143 515 471.5 321 7014
Percentage of the requests served within a certain time (ms)
50% 321
66% 371
75% 455
80% 497
90% 1306
95% 1354
98% 1405
99% 3009
100% 7014 (longest request)
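For reference, an ab invocation consistent with the figures above (10000 requests, 1000 concurrent connections, keep-alive enabled) would look roughly like this; the exact flags and the host placeholder are my assumption, not copied from the original run:

ab -n 10000 -c 1000 -k http://<host>/    # the 1000-connection run shown above
ab -n 10000 -c 100  -k http://<host>/    # the 100-connection comparison mentioned below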
I can handle ~1200 requests/sec with 1000 concurrent connections, and when I run the same benchmark with 100 concurrent connections I can again handle about 1200 requests/sec, but the response time drops to ~80 ms.
With a real-world load of 1000 concurrent connections, users would face response times of 0.8 seconds, which I consider a poor figure.
My question is: why does the response time increase as the concurrency level increases?
Here is my nginx configuration:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;
worker_rlimit_nofile 65536;
events {
    worker_connections 65536;
    use epoll;
}

http {
    upstream frontends {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
    }

    access_log off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 65;
    proxy_read_timeout 200;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript
               application/x-javascript application/xml application/atom+xml;
    gzip_disable "msie6";

    proxy_next_upstream error;

    server {
        listen 80;
        client_max_body_size 1M;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }

        location = /favicon.ico {
            rewrite (.*) /static/favicon.ico;
        }

        location = /robots.txt {
            rewrite (.*) /static/robots.txt;
        }

        location ^~ /static/ {
            root /var/www;
            if ($query_string) {
                expires max;
            }
        }
    }
}
Answer 1
Do you get the same results when you benchmark against the following?

location /perftest/ {
    return 200;
}

Please add your nginx.conf and server {} block.
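A sketch of how that check could be wired up, assuming the server {} block already shown in the question; the /perftest/ placement, the host placeholder, and the ab flags are only an illustration:

server {
    listen 80;
    ...
    # Answered directly by nginx and never proxied to the tornado upstreams,
    # so it isolates nginx/network latency from backend latency.
    location /perftest/ {
        return 200;
    }
}

# then benchmark it the same way as the real pages:
ab -n 10000 -c 1000 -k http://<host>/perftest/

If this static location shows the same slowdown at 1000 concurrent connections, the problem sits in front of the tornado backends; if it stays fast, the backends or the proxying to them are the place to look.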