There are two nodes in my cluster, running drbd+pacemaker+corosync. When the first node fails, the second node takes over the services and everything works fine, but when we have to fail back (node 1 comes back online), it shows some errors and the cluster stops working.
It is a CentOS 6 cluster with kernel 2.6.32-504.12.2.el6.x86_64 and the following packages:
kmod-drbd83-8.3.16-3, drbd83-utils-8.3.16-1, corosynclib-1.4.7-1, corosync-1.4.7-1, pacemaker-1.1.12-4, pacemaker-cluster-libs-1.1.12-4, pacemaker-libs-1.1.12-4, pacemaker-cli-1.1.12-4.
DRBD configuration:
resource r0
{
startup {
wfc-timeout 30;
outdated-wfc-timeout 20;
degr-wfc-timeout 30;
}
net {
cram-hmac-alg sha1;
shared-secret sync_disk;
max-buffers 512;
sndbuf-size 0;
}
syncer {
rate 100M;
verify-alg sha1;
}
on XXX2 {
device minor 1;
disk /dev/sdb;
address xx.xx.xx.xx:7789;
meta-disk internal;
}
on XXX1 {
device minor 1;
disk /dev/sdb;
address xx.xx.xx.xx:7789;
meta-disk internal;
}
}
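When failback misbehaves, it helps to confirm what DRBD itself reports before suspecting Pacemaker. As a minimal sketch (resource name r0 taken from the config above, commands from the drbd83 userland; run on both nodes):

# Overall status, connection state, roles and disk state for r0
cat /proc/drbd
drbdadm cstate r0    # expect Connected once node 1 rejoins
drbdadm role r0      # expect Primary/Secondary on the active node
drbdadm dstate r0    # expect UpToDate/UpToDate

A StandAlone or WFConnection state here would point at a split-brain or network problem rather than a Pacemaker one.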
Corosync configuration:
compatibility: whitetank
totem {
version: 2
secauth: on
interface {
member {
memberaddr: xx.xx.xx.1
}
member {
memberaddr: xx.xx.xx.2
}
ringnumber: 0
bindnetaddr: xx.xx.xx.1
mcastport: 5405
ttl: 1
}
transport: udpu
}
logging {
fileline: off
to_logfile: yes
to_syslog: yes
debug: on
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
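After node 1 comes back, it is also worth verifying that corosync sees both members again; a quick sanity check with the corosync 1.4 tools might look like:

corosync-cfgtool -s                                  # ring status, should report no faults
corosync-objctl runtime.totem.pg.mrp.srp.members     # list the current cluster members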
Pacemaker configuration:
node XXX1 \
attributes standby=off
node XXX2 \
attributes standby=off
primitive drbd_res ocf:linbit:drbd \
params drbd_resource=r0 \
op monitor interval=29s role=Master \
op monitor interval=31s role=Slave
primitive failover_ip IPaddr2 \
params ip=172.16.2.49 cidr_netmask=32 \
op monitor interval=30s nic=eth0 \
meta is-managed=true
primitive fs_res Filesystem \
params device="/dev/drbd1" directory="/data" fstype=ext4 \
meta is-managed=true
primitive res_exportfs_export1 exportfs \
params fsid=1 directory="/data/export" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
op monitor interval=40s \
op stop interval=0 timeout=120s \
op start interval=0 timeout=120s \
meta is-managed=true
primitive res_exportfs_export2 exportfs \
params fsid=2 directory="/data/teste1" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
op monitor interval=40s \
op stop interval=0 timeout=120s \
op start interval=0 timeout=120s \
meta is-managed=true
primitive res_exportfs_root exportfs \
params clientspec="*" options="rw,async,fsid=root,insecure,no_subtree_check,no_root_squash,no_all_squash" directory="/data" fsid=0 unlock_on_stop=false wait_for_leasetime_on_stop=false \
operations $id=res_exportfs_root-operations \
op monitor interval=30 start-delay=0 \
meta
group rg_export fs_res res_exportfs_export1 res_exportfs_export2 failover_ip
ms drbd_master_slave drbd_res \
meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
clone cl_exportfs_root res_exportfs_root \
meta
colocation c_nfs_on_root inf: rg_export cl_exportfs_root
colocation fs_drbd_colo inf: rg_export drbd_master_slave:Master
order fs_after_drbd Mandatory: drbd_master_slave:promote rg_export:start
order o_root_before_nfs inf: cl_exportfs_root rg_export:start
property cib-bootstrap-options: \
expected-quorum-votes=2 \
last-lrm-refresh=1427814473 \
stonith-enabled=false \
no-quorum-policy=ignore \
dc-version=1.1.11-97629de \
cluster-infrastructure="classic openais (with plugin)"
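With that configuration loaded, the live CIB can be validated and resource placement inspected at any point during failover/failback, for example:

crm_verify -L -V    # validate the running CIB and print any warnings
crm_mon -1          # one-shot cluster status, including the Failed actions section
crm status          # crmsh equivalent of the above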
Errors:
res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms
res_exportfs_export2_stop_0 on xx.xx.xx.2 'unknown error' (1): call=52, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20001ms
Are there any other logs I can check?
I checked that /dev/drbd1 is not unmounted on the second node during failback. If I restart the NFS service and reapply the export rules, everything works.
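For reference, the manual recovery described above could look roughly like this (a sketch only; the device, mount point and resource name are taken from the configuration earlier):

umount /dev/drbd1                           # release /data if the failed stop left it mounted
service nfs restart                         # CentOS 6 init script for the NFS server
exportfs -ra                                # reapply the export rules
crm resource cleanup res_exportfs_export2   # clear the failed-stop record so Pacemaker retries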
EDIT: Thanks to Dok, it works now. I just had to raise the timeouts to 120 seconds and set a start timeout as well!
Answer 1
res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms
shows that your res_exportfs_export2 resource failed to stop because of a timeout. It may simply need a longer timeout. Try configuring a stop timeout for that resource, like this:
primitive res_exportfs_export2 exportfs \
params fsid=2 directory="/data/teste1" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=true \
op monitor interval=30s \
op stop interval=0 timeout=60s
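One way to apply that change on a live cluster, assuming the crm shell that produced the configuration above:

crm configure edit res_exportfs_export2    # raise the op stop timeout inline
crm resource cleanup res_exportfs_export2  # clear the recorded stop failure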
If the timeout does not help, check the messages log and/or corosync.log around the time the error occurred (Tue Mar 31 12:53:04 2015) for clues.
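Narrowing both logs to the minute of the failure keeps the noise manageable, e.g.:

grep 'Mar 31 12:53' /var/log/messages
grep '12:53:0' /var/log/cluster/corosync.log | grep -i exportfs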