I am about to deploy a brand-new node.js application and I need some help setting it up.
My current setup is as follows.
I am running Varnish on external_ip:80.
I have Nginx behind it on internal_ip:80.
So both are listening on port 80, one on the internal interface and one on the external one.
Note: the node.js app relies on WebSockets.
Now I have my new node.js application, and it will listen on port 8080.
Can I set up varnish in front of both nginx and node.js?
Varnish has to proxy the websockets to port 8080, but static files such as css, js, etc. have to go through port 80 to nginx.
Nginx does not support websockets out of the box, otherwise I would do a setup like this:
varnish -> nginx -> node.js
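Roughly, the routing I have in mind would look something like this in VCL (an untested sketch; the backend names and the internal_ip placeholder are just mine):
#Untested sketch - route WebSocket upgrades to node.js, everything else to nginx
backend nginx {
    .host = "internal_ip";
    .port = "80";
}
backend nodejs {
    .host = "127.0.0.1";
    .port = "8080";
}
sub vcl_recv {
    set req.backend = nginx;
    if (req.http.Upgrade ~ "(?i)websocket") {
        set req.backend = nodejs;
        return (pipe);
    }
}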
Answer 1
I just built a project essentially the same as what you describe, so I'll share my approach - no guarantees it is the "best", but it does work.
My server stack is:
- Varnish (v3.0.2) - all interfaces, port 80
- Nginx (v1.0.14) - localhost, port 81
- Node.js (v0.6.13) - localhost, port 1337
- OS is CentOS 6.2 (or similar)
My Node.js app uses Websockets (socket.io - v0.9.0) and Express (v2.5.8) - and is launched with forever. (There are other sites on the same server - mostly PHP, sharing the same Nginx and Varnish instances.)
The basic intent of my approach is as follows:
- A single public port/address for both websocket and "regular" data
- Cache some assets with Varnish
- Serve (uncached) static assets directly from nginx
- Pass "web page" requests to nginx, which proxies them on to Node.js
- Pass web socket requests directly (from Varnish) to Node.js (bypassing nginx)
Varnish config - /etc/varnish/default.vcl:
#Nginx - on port 81
backend default {
    .host = "127.0.0.1";
    .port = "81";
    .connect_timeout = 5s;
    .first_byte_timeout = 30s;
    .between_bytes_timeout = 60s;
    .max_connections = 800;
}
#Node.js - on port 1337
backend nodejs {
    .host = "127.0.0.1";
    .port = "1337";
    .connect_timeout = 1s;
    .first_byte_timeout = 2s;
    .between_bytes_timeout = 60s;
    .max_connections = 800;
}
sub vcl_recv {
    set req.backend = default;

    #Keeping the IP addresses correct for my logs
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }

    #Remove port, if included, to normalize host
    set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");

    #Part of the standard Varnish config
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }

    #Taken from the Varnish help on dealing with Websockets - pipe directly to Node.js
    if (req.http.Upgrade ~ "(?i)websocket") {
        set req.backend = nodejs;
        return (pipe);
    }

    ###Removed some cookie manipulation and compression settings###

    if (req.http.Host ~ "^(www\.)?example.com") {
        #Removed some redirects and host normalization
        #Requests made to this path, even if XHR polling, still benefit from piping - pass does not seem to work
        if (req.url ~ "^/socket.io/") {
            set req.backend = nodejs;
            return (pipe);
        }
    #I have a bunch of other sites which get included here, each in its own block
    } elseif (req.http.Host ~ "^(www\.)?othersite.tld") {
        #...
    }

    #Part of the standard Varnish config
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }

    #Everything else, lookup
    return (lookup);
}
sub vcl_pipe {
    #Need to copy the upgrade header for websockets to work
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    set bereq.http.Connection = "close";
    return (pipe);
}
#All other functions should be fine unmodified for basic functionality (most of mine are altered to my purposes; I find that adding a grace period, in particular, helps).
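On the grace-period point, a minimal sketch of the kind of thing I mean for Varnish 3 (the 30-minute values are purely illustrative, not taken from my actual config):
sub vcl_recv {
    #Illustrative only: accept cached objects up to 30 minutes past their TTL
    set req.grace = 30m;
}
sub vcl_fetch {
    #Illustrative only: keep objects around for 30 minutes beyond their TTL so stale content can be served
    set beresp.grace = 30m;
}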
Nginx config - /etc/nginx/*/example.com.conf:
server {
    listen *:81;
    server_name example.com www.example.com static.example.com;
    root /var/www/example.com/web;
    error_log /var/log/nginx/example.com/error.log info;
    access_log /var/log/nginx/example.com/access.log timed;

    #removed error page setup

    #home page
    location = / {
        proxy_pass http://node_js;
    }

    #everything else
    location / {
        try_files $uri $uri/ @proxy;
    }
    location @proxy {
        proxy_pass http://node_js;
    }

    #removed some standard settings I use
}
upstream node_js {
    server 127.0.0.1:1337;
    server 127.0.0.1:1337;
}
I'm not particularly fond of the repetition of the proxy_pass statement, but unfortunately I haven't yet found a cleaner alternative. One approach might be to have a location block explicitly matching the static file extensions and to keep the proxy_pass statement outside of any location block.
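A rough sketch of that idea (note that nginx only allows proxy_pass inside a location block, so a catch-all location is used here; the extension list is just illustrative and this variant is untested):
#Serve common static extensions directly from the document root
location ~* \.(css|js|png|jpe?g|gif|ico|svg|woff)$ {
    try_files $uri =404;
}
#Everything else goes to Node.js
location / {
    proxy_pass http://node_js;
}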
A few settings from /etc/nginx/nginx.conf:
#Trust X-Forwarded-For from the local Varnish instance so logs show the real client IP
set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;
#Custom log format used by the access_log above, including upstream timings
log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$http_x_forwarded_for" '
                 '$request_time $upstream_response_time $pipe';
port_in_redirect off;
In my other server blocks and settings, I also enable gzip and keepalive in the nginx config. (As an aside, I believe there is a TCP module for Nginx that can enable websockets - however, I like using the 'vanilla' versions of software (and their associated repositories), so that wasn't really an option for me.)
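For completeness, the gzip and keepalive settings I'm referring to are of this general form (the values and MIME types here are illustrative, not copied from my config):
#Illustrative gzip/keepalive settings - adjust types and timeouts to taste
gzip on;
gzip_types text/css application/javascript application/json;
keepalive_timeout 65;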
A previous iteration of this setup resulted in some unusual 'blocking' behaviour with the piping in Varnish. Essentially, once a piped socket connection was established, the next request would be delayed until the pipe timed out (up to 60s). I haven't seen the same problem recur with this setup - but if you do see similar behaviour, I'd be interested to hear about it.