First of all, I'm no Linux expert; I've been following tutorials and working things out with the help of Google. So far so good, but now I've hit a problem.
I'm running CentOS 6.5 with DRBD version 8.4.4.
I have two nodes running Pacemaker, and so far everything works: I set up DRBD, and I can manually promote one node to primary and mount the DRBD resource, so that part is fine too.
Now I've created a Pacemaker resource to control DRBD, but it fails to promote either node to master, which in turn prevents the filesystem from being mounted.
pcs status looks like this:
Cluster name: hydroC
Last updated: Wed Jun 25 14:19:49 2014
Last change: Wed Jun 25 14:02:25 2014 via crm_resource on hynode1
Stack: cman
Current DC: hynode1 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured
4 Resources configured
Online: [ hynode1 hynode2 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started hynode1
Master/Slave Set: MSdrbdDATA [drbdDATA]
Slaves: [ hynode1 hynode2 ]
ShareDATA (ocf::heartbeat:Filesystem): Stopped
Since there is no master, ShareDATA stays stopped.
I originally followed this tutorial:
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/_configure_the_cluster_for_drbd.html
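For reference, a sketch of how that tutorial creates the DRBD resource, with the names adapted to the ones used here (MSdrbdDATA, drbdDATA, r0) — not my exact commands:

```shell
# Work on an offline copy of the CIB, then push all changes in one step
pcs cluster cib drbd_cfg
pcs -f drbd_cfg resource create drbdDATA ocf:linbit:drbd \
    drbd_resource=r0 op monitor interval=60s
pcs -f drbd_cfg resource master MSdrbdDATA drbdDATA \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs cluster cib-push drbd_cfg
```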
The Pacemaker configuration looks like this:
Cluster Name: hydroC
Corosync Nodes:
Pacemaker Nodes:
hynode1 hynode2
Resources:
Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=10.0.0.100 cidr_netmask=32
Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
Master: MSdrbdDATA
 Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Resource: drbdDATA (class=ocf provider=linbit type=drbd)
Attributes: drbd_resource=r0
Operations: monitor interval=60s (drbdDATA-monitor-interval-60s)
Resource: ShareDATA (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/drbd3 directory=/share/data fstype=ext4
Operations: monitor interval=60s (ShareDATA-monitor-interval-60s)
Stonith Devices:
Fencing Levels:
Location Constraints:
Ordering Constraints:
 promote MSdrbdDATA then start ShareDATA (Mandatory) (id:order-MSdrbdDATA-ShareDATA-mandatory)
Colocation Constraints:
 ShareDATA with MSdrbdDATA (INFINITY) (with-rsc-role:Master) (id:colocation-ShareDATA-MSdrbdDATA-INFINITY)
Cluster Properties:
cluster-infrastructure: cman
dc-version: 1.1.10-14.el6_5.3-368c726
no-quorum-policy: ignore
stonith-enabled: false
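For comparison, the ordering and colocation constraints shown above correspond roughly to these pcs commands (pcs 0.9 syntax, reconstructed from the constraint IDs — not necessarily the exact commands that were run):

```shell
# ShareDATA must run on the node where the DRBD master is...
pcs constraint colocation add ShareDATA with master MSdrbdDATA INFINITY
# ...and may only start after the promotion has happened
pcs constraint order promote MSdrbdDATA then start ShareDATA
```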
Since then I've tried various things, such as setting a location constraint or using a different resource setup... I got the following from another tutorial:
Master: MSdrbdDATA
 Meta Attrs: master-max=1 master-node-max=1 clone-max=2 notify=true target-role=Master is-managed=true clone-node-max=1
Resource: drbdDATA (class=ocf provider=linbit type=drbd)
Attributes: drbd_resource=r0 drbdconf=/etc/drbd.conf
Meta Attrs: migration-threshold=2
 Operations: monitor interval=60s role=Slave timeout=30s (drbdDATA-monitor-interval-60s-role-Slave)
             monitor interval=59s role=Master timeout=30s (drbdDATA-monitor-interval-59s-role-Master)
start interval=0 timeout=240s (drbdDATA-start-interval-0)
stop interval=0 timeout=240s (drbdDATA-stop-interval-0)
But the result is the same: neither node gets promoted to master.
I would greatly appreciate any help pointing me toward a solution. Thanks in advance.
Answer 1
Make sure your DRBD device is healthy. If you check its state with # cat /proc/drbd, do you see cs:Connected, ro:Secondary/Secondary and, most importantly, ds:UpToDate/UpToDate?
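For reference, on DRBD 8.4 a healthy but not-yet-promoted resource looks roughly like this in /proc/drbd (the device number and counters will differ on your setup):

```
version: 8.4.4 (api:1/proto:86-101)
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
```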
DRBD's resource agent will not promote a device that does not have UpToDate data. If you have only created the device's metadata and have not yet forced a single node into the Primary role, you will see a disk state of ds:Inconsistent/Inconsistent. You need to run the following command to tell DRBD which node should become the SyncSource for the cluster: # drbdadm primary r0 --force
Under normal circumstances, this is the only time you will ever have to force DRBD into the Primary state; so after this, forget the --force flag ;)
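A sketch of the follow-up steps after the forced promotion, assuming the names from the question (r0, drbdDATA) — wait for the initial sync, hand the device back to Secondary, and let Pacemaker handle promotion from then on:

```shell
# Watch the initial sync until both sides report ds:UpToDate/UpToDate
watch cat /proc/drbd

# Demote the manually promoted node again
drbdadm secondary r0

# Clear any recorded failures so Pacemaker retries the promote
pcs resource cleanup drbdDATA
```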