drbd corosync cluster: second node keeps trying to become primary

We are facing a problem with our drbd corosync cluster.

On one node, all resources (the MySQL service, drbd) run fine as primary, but the second node keeps trying to promote itself to primary as well.


The error log on the second node looks like this:

Oct  1 16:39:39 node2 lrmd: [25272]: info: RA output: (mysql-drbd:0:promote:stderr) 0: State change failed: (-1) Multiple primaries not allowed by config 

Oct  1 16:39:39 node2 lrmd: [25272]: info: RA output: (mysql-drbd:0:promote:stderr) Command 'drbdsetup 0 primary' terminated with exit code 11 

Oct  1 16:39:39 node2 drbd[25416]: ERROR: mysql-disk: Called drbdadm -c /etc/drbd.conf primary mysql-disk

Oct  1 16:39:39 node2 drbd[25416]: ERROR: mysql-disk: Called drbdadm -c /etc/drbd.conf primary mysql-disk

Oct  1 16:39:39 node2 drbd[25416]: ERROR: mysql-disk: Exit code 11
Oct  1 16:39:39 node2 drbd[25416]: ERROR: mysql-disk: Exit code 11
Oct  1 16:39:39 node2 drbd[25416]: ERROR: mysql-disk: Command output: 
Oct  1 16:39:39 node2 drbd[25416]: ERROR: mysql-disk: Command output: 

The corosync master/slave state is not right. See the corosync status of both nodes below.

Node1

[root@node1 ~]# crm status
============
Last updated: Thu Oct  2 09:01:30 2014
Stack: openais

Current DC: node1 - partition WITHOUT quorum

Version: 1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3

2 Nodes configured, 2 expected votes
4 Resources configured.

============

Online: [ node1 ]

OFFLINE: [ node2 ]

 mysql-vip      (ocf::heartbeat:IPaddr2):       Started node1

 Master/Slave Set: mysql-drbd-ms

     Masters: [ node1 ]

     Stopped: [ mysql-drbd:1 ]

 mysql-fs       (ocf::heartbeat:Filesystem):    Started node1

 mysql-server   (ocf::heartbeat:mysql): Started node1

Node2

[root@node2 ~]# crm status
============
Last updated: Thu Oct  2 09:03:04 2014
Stack: openais

Current DC: node2 - partition WITHOUT quorum

Version: 1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3
2 Nodes configured, 2 expected votes
4 Resources configured.
============

Online: [ node2 ]

OFFLINE: [ node1 ]


 Master/Slave Set: mysql-drbd-ms

     mysql-drbd:0       (ocf::linbit:drbd):     Slave node2 (unmanaged) FAILED

     Stopped: [ mysql-drbd:1 ]


Failed actions:
    mysql-drbd:0_promote_0 (node=node2, call=7, rc=-2, status=Timed Out): unknown exec error
    mysql-drbd:0_stop_0 (node=node2, call=13, rc=6, status=complete): not configured

The DRBD status on both nodes looks fine:

Node 1 (primary):

[root@node1 ~]# service drbd status

drbd driver loaded OK; device status:

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:09

m:res         cs         ro                 ds                 p  mounted  fstype

0:mysql-disk  Connected  Primary/Secondary  UpToDate/UpToDate  C

Node 2 (secondary):

[root@node2 ~]# service drbd status

drbd driver loaded OK; device status:

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:09

m:res         cs         ro                 ds                 p  mounted  fstype

0:mysql-disk  Connected  Secondary/Primary  UpToDate/UpToDate  C

Answer 1

This happens because you have not configured fencing (stonith) for the cluster, and the cluster is now in a split-brain state.

Now you have a cluster with two DCs, and each node is trying to start the resources on its own.
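
A minimal sketch of what enabling fencing could look like with the crm shell, assuming IPMI-capable management boards; the resource names, IP addresses and credentials below are placeholders, not values from the original setup:

    # Hypothetical fencing devices (external/ipmi is only an example plugin).
    crm configure primitive fence-node1 stonith:external/ipmi \
            params hostname=node1 ipaddr=192.168.1.201 userid=admin passwd=secret \
            op monitor interval=60s
    crm configure primitive fence-node2 stonith:external/ipmi \
            params hostname=node2 ipaddr=192.168.1.202 userid=admin passwd=secret \
            op monitor interval=60s
    # Keep each fencing device away from the node it is supposed to fence.
    crm configure location l-fence-node1 fence-node1 -inf: node1
    crm configure location l-fence-node2 fence-node2 -inf: node2
    crm configure property stonith-enabled=true

With working fencing, a node that loses contact with its peer gets fenced instead of promoting its own DRBD copy to primary.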

Answer 2

It looks like the corosync instances on your two nodes cannot communicate with each other. That is why each node marks only itself as online.

I would suggest trying unicast instead of the multicast option:

  1. Stop corosync on both nodes.
  2. Update corosync to a version that supports unicast (1.4.1 or later).
  3. Change your corosync configuration and add the following:
  4. Start corosync.

        interface {
                member {
                        memberaddr: <node1 IP>
                }
                member {
                        memberaddr: <node2 IP>
                }
                ringnumber: 0
                bindnetaddr: <Network address of your nodes>
                mcastport: 5405
        }
        transport: udpu

Comment out the following line:

mcastaddr
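
Putting those pieces together, the totem section of /etc/corosync/corosync.conf would look roughly like the sketch below; the IP addresses are placeholders and the remaining values are common defaults rather than settings taken from the original configuration:

    totem {
            version: 2
            secauth: off
            threads: 0
            # Use unicast (udpu) instead of multicast.
            transport: udpu
            interface {
                    member {
                            memberaddr: 192.168.1.101
                    }
                    member {
                            memberaddr: 192.168.1.102
                    }
                    ringnumber: 0
                    bindnetaddr: 192.168.1.0
                    # mcastaddr is left commented out because udpu does not use it:
                    # mcastaddr: 226.94.1.1
                    mcastport: 5405
            }
    }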

Allow ports 5404 and 5405 through the iptables firewall and start corosync on both nodes.
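
On a RHEL/CentOS style system (an assumption about the distribution; adjust to whatever firewall tooling is actually in use), that could be done roughly like this:

    # Run on both nodes: allow corosync traffic on the default UDP ports.
    iptables -A INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT
    service iptables save
    service corosync start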

Thanks.
