Docker: nginx-proxy through an OpenVPN tunnel

I'm trying to use a DigitalOcean VPS as an OpenVPN server so that services hosted on my home network (e.g. Nextcloud) can be reached through subdomains (e.g. nextcloud.example.com).

I have the following set up:

  • [working] kylemanna/openvpn Docker container on the DigitalOcean VPS (see the startup sketch after this list)
  • [working] my home pfSense router connected to the DigitalOcean VPS as a VPN client
  • [working] Nextcloud service set up on my home network
  • [working] once connected to the VPN, I can ping between devices and reach the Nextcloud service via its internal IP
  • [not working] jwilder/nginx-proxy routing nextcloud.example.com through the Docker VPN tunnel to Nextcloud's internal IP
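
For reference, the two containers were started roughly like this (a sketch based on each image's documented defaults; the $OVPN_DATA volume name follows the kylemanna/openvpn README convention rather than my exact setup):

docker run -v $OVPN_DATA:/etc/openvpn -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy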

I tried adding a virtual_host file for nextcloud.example.com so that nginx-proxy routes those requests to port 3000 on the openvpn container, and then using iptables inside the openvpn container to forward everything arriving on port 3000 to the internal Nextcloud IP.
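
The NAT rules listed in the next section would be produced by commands along these lines, run inside the openvpn container (a sketch; 192.168.0.99 is the internal Nextcloud IP and 172.17.0.2 is the container's address on the Docker bridge):

iptables -t nat -A PREROUTING -p tcp --dport 3000 -j DNAT --to-destination 192.168.0.99:80
iptables -t nat -A PREROUTING -p udp --dport 3000 -j DNAT --to-destination 192.168.0.99:80
iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.99 --dport 80 -j SNAT --to-source 172.17.0.2:3000
iptables -t nat -A POSTROUTING -p udp -d 192.168.0.99 --dport 80 -j SNAT --to-source 172.17.0.2:3000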

I'd really appreciate any help, because honestly I'm a bit stuck.

kylemanna/openvpn - iptables forwarding configuration

user@Debianwebhost:~$ docker exec -it vpn bash
bash-4.4# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:3000 to:192.168.0.99:80
DNAT       udp  --  anywhere             anywhere             udp dpt:3000 to:192.168.0.99:80

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       tcp  --  anywhere             192.168.0.99         tcp dpt:http to:172.17.0.2:3000
SNAT       udp  --  anywhere             192.168.0.99         udp dpt:http to:172.17.0.2:3000

nginx-proxy virtual host configuration

user@Debianwebhost:/etc/nginx/vhost.d$ cat nextcloud.example.com
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
# nextcloud.example.com
upstream nextcloud.example.com {
                                ## Can be connected with "bridge" network
                        # vpn
                        server 172.17.0.2:3000;
}
server {
        server_name nextcloud.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
                proxy_pass http://nextcloud.example.com;
        }
}
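
To test whether nginx-proxy itself matches this vhost (independent of public DNS), a request can be sent straight to the VPS with the Host header set; VPS_IP below is a placeholder for the droplet's address:

curl -v -H "Host: nextcloud.example.com" http://VPS_IP/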

nginx-proxy nginx.conf

user@Debianwebhost:/etc/nginx$ cat nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

nginx-proxy default configuration file

user@Debianwebhost:/etc/nginx/conf.d$ cat default.conf
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver [hidden ips, but there are 2 of them];
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
# nextcloud.example.com
upstream nextcloud.example.com {
                                ## Can be connected with "bridge" network
                        # vpn
                        server 172.17.0.2:3000;
}
server {
        server_name nextcloud.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
                proxy_pass http://nextcloud.example.com;
        }
}

Answer 1

I found a solution. Basically, I had to add

ip route add 192.168.0.0/24 via 172.19.0.50 

on the VPS, to tell it that any request for 192.168.0.99 (my internal network) has to be routed through the Docker container running the VPN server (172.19.0.50).
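
To confirm the kernel actually uses the new route, ip route get can be queried; it should report 172.19.0.50 as the next hop for the internal address:

ip route get 192.168.0.99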

Once the request reaches the VPN server container, it knows what to do with it, because I had already specified the following in the pfSense client's CCD file (/etc/openvpn/ccd/client), so that the VPN knows any request for these IPs should go through that client:

iroute 192.168.0.0 255.255.255.0

On top of that, I also had to specify the following in the OpenVPN config (/etc/openvpn/openvpn.conf):

### Route Configurations Below
route 192.168.254.0 255.255.255.0
route 192.168.0.0 255.255.255.0

### Push Configurations Below
push "route 192.168.0.0 255.255.255.0"

And then, of course, open whatever firewall ports are needed.
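
With plain iptables on the VPS, that amounts to something like the following (a sketch assuming the default ports; adapt it to whatever firewall you actually run):

iptables -A INPUT -p udp --dport 1194 -j ACCEPT   # OpenVPN tunnel
iptables -A INPUT -p tcp --dport 80 -j ACCEPT     # HTTP to nginx-proxy
iptables -A INPUT -p tcp --dport 443 -j ACCEPT    # HTTPS, if used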
