I'm trying to set up HAProxy with one frontend VIP and two backend web servers. I want the backend to be active/passive, so that all requests go to server #1 unless server #1 is down, in which case they go to server #2. When server #1 comes back up, traffic should stay on server #2 until server #2 fails.
I followed the guide below and implemented this with a stick table. It was working, but it seems to have stopped and I don't know why. When one of my servers fails, traffic is correctly sent to the backup server, but when the failed server comes back online, traffic is sent to the newly recovered server instead of staying on the backup.
https://www.haproxy.com/blog/emulating-activepassing-application-clustering-with-haproxy/
I'm running HAProxy 1.8.17. Here is a sanitized copy of my haproxy.cfg. Any ideas?
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats mode 600 level admin
    stats timeout 2m
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000
#---------------------------------------------------------------------
# Load Balancer Stick-Table Sync
#---------------------------------------------------------------------
peers lb_peers
    peer lb1 10.255.0.4:9969
    peer lb2 10.255.0.5:9969
#---------------------------------------------------------------------
# Stats interface
#---------------------------------------------------------------------
listen stats
    bind 10.255.0.3:8080
    mode http
    log global
    maxconn 10
    timeout client 100s
    timeout server 100s
    timeout connect 100s
    timeout queue 100s
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats auth <REMOVED>
    stats uri /haproxy?stats
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend solarwinds_http_fe
    mode http
    bind 10.255.0.3:80
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    default_backend solarwinds_be
frontend solarwinds_https_fe
    mode http
    bind 10.255.0.3:443 ssl crt /etc/ssl/solarwinds/solarwinds.pem
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend solarwinds_be
#---------------------------------------------------------------------
# Active/Passive backend
#---------------------------------------------------------------------
backend solarwinds_be
    stick-table type ip size 1 nopurge peers lb_peers
    stick on dst
    redirect scheme https if !{ ssl_fc }
    option httpchk HEAD /Orion/Login.aspx HTTP/1.1\r\nHost:\ <REMOVED>
    server bru-monweb01 10.255.0.6:80 check fall 3 fastinter 5s downinter 5s rise 6
    server bru-monweb02 10.255.0.7:80 check fall 3 fastinter 5s downinter 5s rise 6 backup
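While debugging, the contents of the stick table can be inspected through the admin socket defined in the global section above, for example (assuming socat is available on the load balancer):
# dump the stick table of the backend; each entry shows the key (here the
# destination IP, because of "stick on dst") and the server_id it is stuck to
echo "show table solarwinds_be" | socat stdio /var/lib/haproxy/stats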
Answer 1
I'm not using peers, but I hit the same problem on HAProxy 1.9.7. I fixed it by changing the lines below: in its MySQL example, the blog entry does not stick on the destination IP but on an integer:
backend mybackend
    stick-table type integer size 1k nopurge
    stick on int(1)
    # the rest of the backend definition
The change is that I did not specify size 1 but used 1k instead.
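Applied to the backend from the question, that change would look roughly like this (a sketch only; the peers reference, health check and server lines are carried over unchanged from the question's config):
backend solarwinds_be
    # store a single constant key (1) and the id of the server it is stuck to
    stick-table type integer size 1k nopurge peers lb_peers
    stick on int(1)
    redirect scheme https if !{ ssl_fc }
    option httpchk HEAD /Orion/Login.aspx HTTP/1.1\r\nHost:\ <REMOVED>
    server bru-monweb01 10.255.0.6:80 check fall 3 fastinter 5s downinter 5s rise 6
    server bru-monweb02 10.255.0.7:80 check fall 3 fastinter 5s downinter 5s rise 6 backup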
Answer 2
There is a guide here:
https://www.haproxy.com/blog/introduction-to-haproxy-stick-tables/
Example configuration:
backend mysql
    mode tcp
    stick-table type integer size 1 expire 1d
    stick on int(1)
    server primary 192.168.122.60:3306 check on-marked-down shutdown-sessions
    server backup 192.168.122.61:3306 check backup on-marked-down shutdown-sessions
With this configuration we store only a single entry in the stick table, whose key is 1 and whose value is the server_id of the active server. Now, if the primary fails, the backup's server_id overwrites the value in the stick table, and all requests keep going to the backup even after the primary comes back online. When you are ready to fail back and return the cluster to normal operation, you can undo this by cycling the backup node through maintenance mode or via the Runtime API.
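For that fail-back step, a rough example over the admin socket from the question's config (assumes socat; the backend and server names are the question's, and putting the backup into maintenance means the stick-table entry gets rewritten with the primary's server_id on the next request):
# take the backup out of service so traffic re-sticks to the primary
echo "set server solarwinds_be/bru-monweb02 state maint" | socat stdio /var/lib/haproxy/stats
# once traffic is back on the primary, re-enable the backup
echo "set server solarwinds_be/bru-monweb02 state ready" | socat stdio /var/lib/haproxy/stats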