We have set up three servers running keepalived. We have started to notice random re-elections that we cannot explain, so I'm here looking for advice.
Here is our configuration:
Master:
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from keepalived@hostname
    smtp_server example.com:587
    smtp_connect_timeout 30
    router_id some_rate
}

vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight 2
}

vrrp_instance VIP_61 {
    interface bond0
    virtual_router_id 61
    state MASTER
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass PASSWORD
    }
    virtual_ipaddress {
        X.X.X.X
        X.X.X.X
        X.X.X.X
    }
    track_script {
        chk_nginx
    }
}
Backup 1:
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from keepalived@hostname
    smtp_server example.com:587
    smtp_connect_timeout 30
    router_id some_rate
}

vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight 2
}

vrrp_instance VIP_61 {
    interface bond0
    virtual_router_id 61
    state MASTER
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass PASSWORD
    }
    virtual_ipaddress {
        X.X.X.X
        X.X.X.X
        X.X.X.X
    }
    track_script {
        chk_nginx
    }
}
Backup 2:
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from keepalived@hostname
    smtp_server example.com:587
    smtp_connect_timeout 30
    router_id some_rate
}

vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight 2
}

vrrp_instance VIP_61 {
    interface bond0
    virtual_router_id 61
    state MASTER
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass PASSWORD
    }
    virtual_ipaddress {
        X.X.X.X
        X.X.X.X
        X.X.X.X
    }
    track_script {
        chk_nginx
    }
}
Every now and then I see the following happen (found by searching the logs):
Master:
Jan 6 18:30:15 lb-public01 Keepalived_vrrp[24380]: VRRP_Instance(VIP_61) Received lower prio advert, forcing new election
Jan 6 18:30:16 lb-public01 Keepalived_vrrp[24380]: VRRP_Instance(VIP_61) Received lower prio advert, forcing new election
Jan 6 18:32:37 lb-public01 Keepalived_vrrp[24380]: VRRP_Instance(VIP_61) Received lower prio advert, forcing new election
Backup 1:
Jan 6 18:30:16 lb-public02 Keepalived_vrrp[26235]: VRRP_Instance(VIP_61) Transition to MASTER STATE
Jan 6 18:30:16 lb-public02 Keepalived_vrrp[26235]: VRRP_Instance(VIP_61) Received higher prio advert
Jan 6 18:30:16 lb-public02 Keepalived_vrrp[26235]: VRRP_Instance(VIP_61) Entering BACKUP STATE
Jan 6 18:32:37 lb-public02 Keepalived_vrrp[26235]: VRRP_Instance(VIP_61) forcing a new MASTER election
Jan 6 18:32:38 lb-public02 Keepalived_vrrp[26235]: VRRP_Instance(VIP_61) Transition to MASTER STATE
Jan 6 18:32:38 lb-public02 Keepalived_vrrp[26235]: VRRP_Instance(VIP_61) Received higher prio advert
Jan 6 18:32:38 lb-public02 Keepalived_vrrp[26235]: VRRP_Instance(VIP_61) Entering BACKUP STATE
Backup 2:
Jan 6 18:32:36 lb-public03 Keepalived_vrrp[14255]: VRRP_Script(chk_nginx) succeeded
Jan 6 18:32:37 lb-public03 Keepalived_vrrp[14255]: VRRP_Instance(VIP_61) Transition to MASTER STATE
Jan 6 18:32:37 lb-public03 Keepalived_vrrp[14255]: VRRP_Instance(VIP_61) Received higher prio advert
Jan 6 18:32:37 lb-public03 Keepalived_vrrp[14255]: VRRP_Instance(VIP_61) Entering BACKUP STATE
So the MASTER receives a LOWER PRIO advert and starts a new election. Why? Based on the logs, it looks like a BACKUP briefly transitions to MASTER and then falls back to the BACKUP state. I have no idea why this is happening, so any hints are very welcome.
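One thing I'm planning to try is capturing the VRRP adverts on bond0 to see which peer is actually sending the lower-priority packets; the priority field shows up in tcpdump's verbose output. Something along these lines (untested here, interface name taken from our config):

# VRRP is IP protocol 112; -v decodes the advert header (vrid, prio, intvl)
tcpdump -i bond0 -nnv vrrp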
Also, I found the unicast patch for keepalived, but it's not clear to me whether it supports more than one unicast peer. In our case we have a cluster of three machines, so we would need more than one unicast peer.
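If it does support multiple peers, I would expect each node's vrrp_instance to gain something roughly like the following (untested on my side; the 10.0.0.x addresses are just placeholders for the real node IPs):

vrrp_instance VIP_61 {
    ...
    unicast_src_ip 10.0.0.1    # this node's own address (placeholder)
    unicast_peer {
        10.0.0.2               # the other two cluster members (placeholders)
        10.0.0.3
    }
    ...
}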
Any hints on these questions would be greatly appreciated!
Answer 1
The problem is that you are using the default state of MASTER on your backup nodes. They should be in the BACKUP state:
vrrp_instance VIP_61 {
    interface bond0
    virtual_router_id 61
    state BACKUP
    priority 98
    ...
Hopefully that solves your mystery.