I have an Express.js backend that streams partial responses for some endpoints.
In pseudocode:
function (req, res, next) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  // request.a.file stands in for the slow upstream call: it invokes the
  // first callback once per intermediate chunk and the second callback
  // with the final chunk.
  request.a.file(function response (middleChunk) {
    res.write(middleChunk);   // flush each chunk to the client as it arrives
  },
  function final (endingChunk) {
    res.end(endingChunk);     // the last chunk closes the response
  });
}
Hitting the Express instance directly with curl -v works fine: the chunks show up progressively, ending with the endingChunk.
However, I don't expose Express directly on my host; it sits behind an nginx reverse proxy with the following configuration:
server {
    listen      80;
    server_name funnyhost;
    root        /directory;

    location ~ /api/.* {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_read_timeout 600s;
        proxy_buffering off;
    }
}
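As an alternative to a location-wide proxy_buffering off, nginx also honors an X-Accel-Buffering response header from the upstream, so buffering can be disabled per response from the Express handler itself. A minimal sketch (the handler name and body contents are illustrative):

```javascript
// Sketch: disable nginx proxy buffering for one response by sending
// the X-Accel-Buffering header from the backend.
function streamHandler(req, res) {
  res.writeHead(200, {
    'Content-Type': 'application/json',
    'X-Accel-Buffering': 'no',  // tells nginx not to buffer this response
  });
  res.write('{"status":');      // intermediate chunk
  res.end('"streaming"}');      // final chunk
}
```

This keeps buffering on for everything else under the same location, which is usually what you want for non-streaming endpoints.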
With proxy_buffering off in place, the server does respond:
~$ curl -v "domain.com/api/endpoint?testme=true"
* About to connect() to domain.com port 80 (#0)
* Trying XX.XX.XX.XX... connected
> GET /api/endpoint?testme=true HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: domain.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Transfer-Encoding: chunked
< Connection: keep-alive
< Date: Mon, 12 Aug 2013 17:43:45 GMT
< Server: nginx/1.5.3
< X-Powered-By: Express
<
But nginx waits for the whole response: it only sends data to the client once the Express code reaches res.end()!
I'm getting desperate; I've already wasted hours on this :( Hope someone can help.
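For reference, when a response still arrives all at once despite proxy_buffering off, other directives in the chain can reintroduce buffering; gzip in particular compresses the body in buffered blocks. A location sketch with the usual suspects disabled (all directive names are standard nginx; values are illustrative):

```nginx
location /api/ {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_buffering off;   # stream upstream chunks as they arrive
    proxy_cache off;       # don't hold the response for caching
    gzip off;              # gzip would buffer and recompress the body
}
```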
Additional information
I am using:
~# nginx -v
nginx version: nginx/1.5.3
Ubuntu 12.04 LTS server, node.js v0.10.15, express 3.3.5
As requested:
# nginx -V
nginx version: nginx/1.5.3
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-http_spdy_module --with-ipv6 --with-mail --with-mail_ssl_module --with-openssl=/build/buildd/nginx-1.5.3/debian/openssl-1.0.1e --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.5.3/debian/modules/nginx-cache-purge
Update
To rule out the various possibilities I used this script https://gist.github.com/mrgamer/6222708
together with the following nginx configuration file https://gist.github.com/mrgamer/6222734
Requests against localhost work smoothly; requests against my remote VPS return the response headers in a different order and behave differently: the chunks are printed all at once.
Both my local PC and the remote VPS run Ubuntu 12.04 with Chris Lea's nginx PPA (launchpad.net/~chris-lea/+archive/nginx-devel); for testing purposes I ran on both:
~# sudo aptitude purge nginx nginx-common && sudo aptitude install nginx -y
The "strange" behavior is listed below.
Localhost test: the headers are in the correct order and the response is properly progressive
~$ curl -v localhost.test
* About to connect() to localhost.test port 80 (#0)
* Trying 127.0.0.1... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost.test
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.5.3
< Date: Tue, 13 Aug 2013 16:04:19 GMT
< Transfer-Encoding: chunked
< Connection: keep-alive
<
<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><title>Chunked transfer encoding test</title></head><body><h1>Chunked transfer encoding test</h1><h5>This is a chunked response after 2 seconds. Should be displayed before 5-second chunk arrives.</h5>
* Connection #0 to host localhost.test left intact
* Closing connection #0
<h5>This is a chunked response after 5 seconds. The server should not close the stream before all chunks are sent to a client.</h5></body></html>
Remote test
~$ curl -v domain.com
* About to connect() to domain.com port 80 (#0)
* Trying XX.YY.ZZ.HH... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: domain.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Connection: keep-alive
< Date: Tue, 13 Aug 2013 16:06:22 GMT
< Server: nginx/1.5.3
<
<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><title>Chunked transfer encoding test</title></head><body><h1>Chunked transfer encoding test</h1><h5>This is a chunked response after 2 seconds. Should be displayed before 5-second chunk arrives.</h5>
* Connection #0 to host test.col3.me left intact
* Closing connection #0
<h5>This is a chunked response after 5 seconds. The server should not close the stream before all chunks are sent to a client.</h5></body></html>
Answer 1
I'm sorry to say it turned out to be a problem with my own connection :)
I was on a 3G network, and my provider (TIM Italy, ugh!) uses transparent proxies that reorder headers and buffer responses.
They also block websockets on port 80, so it's no surprise they can't offer their customers a usable experience.
Sorry, Server Fault!