I'm running drbd83 and ocfs2 on CentOS 5 with Pacemaker, and I plan to use them together. After a while, I ran into a split-brain problem. Here is the DRBD status:
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by [email protected], 2012-05-07 11:56:36
1: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r-----
ns:0 nr:0 dw:112281991 dr:797551 al:99 bm:6401 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:60
I cannot switch my DRBD to secondary:
drbdadm secondary r0
1: State change failed: (-12) Device is held open by someone
Command 'drbdsetup 1 secondary' terminated with exit code 11
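The usual first step is to find out who is holding the device open; a quick sketch with fuser/lsof (as shown further below, in my case they only turn up a kernel thread):
# fuser -mv /dev/drbd1
# lsof /dev/drbd1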
My DRBD resource configuration:
resource r0 {
syncer {
rate 1000M;
verify-alg sha1;
}
disk {
on-io-error detach;
}
handlers {
pri-lost-after-sb "/usr/lib/drbd/notify-split-brain.sh root";
}
net {
allow-two-primaries;
after-sb-0pri discard-younger-primary;
after-sb-1pri call-pri-lost-after-sb;
after-sb-2pri call-pri-lost-after-sb;
}
startup { become-primary-on both; }
on serving_4130 {
device /dev/drbd1;
disk /dev/sdb1;
address 192.168.4.130:7789;
meta-disk internal;
}
on MT305-3182 {
device /dev/drbd1;
disk /dev/xvdb1;
address 192.168.3.182:7789;
meta-disk internal;
}
}
Status of the ocfs2 service:
service ocfs2 status
Configured OCFS2 mountpoints: /data
lsof shows there is a process related to DRBD:
lsof | grep drbd
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
drbd1_wor 7782 root cwd DIR 253,0 4096 2 /
drbd1_wor 7782 root rtd DIR 253,0 4096 2 /
drbd1_wor 7782 root txt unknown /proc/7782/exe
It is a broken symbolic link:
# ls -l /proc/7782/exe
ls: cannot read symbolic link /proc/7782/exe: No such file or directory
lrwxrwxrwx 1 root root 0 May 4 09:56 /proc/7782/exe
# ps -ef | awk '$2 == "7782" { print $0 }'
root 7782 1 0 Apr22 ? 00:00:20 [drbd1_worker]
Note that this process is shown in square brackets (quoting the ps man page):
args COMMAND command with all its arguments as a string. Modifications to the arguments may be shown. The
output in this column may contain spaces. A process marked <defunct> is partly dead, waiting to
be fully destroyed by its parent. Sometimes the process args will be unavailable; when this
happens, ps will instead print the executable name in brackets.
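In other words, drbd1_worker is a kernel thread, so there is no userspace process that could be killed. A quick sketch of how to double-check this (kernel threads have an empty /proc/<pid>/cmdline and no Vm* lines in /proc/<pid>/status):
# wc -c /proc/7782/cmdline
# grep Vm /proc/7782/status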
So, the final question is: how can we manually recover DRBD in this situation without rebooting?
Reply to @andreask:
My partition table:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
35G 6.9G 27G 21% /
/dev/xvda1 99M 20M 74M 22% /boot
tmpfs 1.0G 0 1.0G 0% /dev/shm
/dev/drbd1 100G 902M 100G 1% /data
Device names:
# dmsetup ls --tree -o inverted
(202:2)
├─VolGroup00-LogVol01 (253:1)
└─VolGroup00-LogVol00 (253:0)
Note the block device (253:0); it is the same one that appears in the lsof output:
# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID vCd152-amVZ-GaPo-H9Zs-TIS0-KI6j-ej8kYi
LV Write Access read/write
LV Status available
# open 1
LV Size 35.97 GB
Current LE 1151
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
Reply to @Doug:
# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 39.88 GB
PE Size 32.00 MB
Total PE 1276
Alloc PE / Size 1276 / 39.88 GB
Free PE / Size 0 / 0
VG UUID OTwzII-AP5H-nIbH-k2UA-H9nw-juBv-wcvmBq
Update: Fri May 17 16:08:16 ICT 2013
If the file system is still mounted... oh well, unmount it. Not lazily, but for real.
I'm sure OCFS2 has already been unmounted.
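Just to be thorough, a quick sketch of how to confirm that from the kernel's point of view (nothing should be printed if /data is really unmounted):
# grep /data /proc/mounts
# grep drbd /proc/mounts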
If NFS is involved, try:
killall -9 nfsd
killall -9 lockd
echo 0 > /proc/fs/nfsd/threads
No, NFS is not involved.
If lvm/dmsetup/kpartx/multipath/udev is involved, try:
dmsetup ls --tree -o inverted
and check for any dependency on drbd.
As the output above shows, LVM has nothing to do with DRBD:
pvdisplay -m
--- Physical volume ---
PV Name /dev/xvda2
VG Name VolGroup00
PV Size 39.90 GB / not usable 20.79 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 1276
Free PE 0
Allocated PE 1276
PV UUID 1t4hkB-p43c-ABex-stfQ-XaRt-9H4i-51gSTD
--- Physical Segments ---
Physical extent 0 to 1148:
Logical volume /dev/VolGroup00/LogVol00
Logical extents 0 to 1148
Physical extent 1149 to 1275:
Logical volume /dev/VolGroup00/LogVol01
Logical extents 0 to 126
fdisk -l
Disk /dev/xvda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/xvda1 * 1 13 104391 83 Linux
/dev/xvda2 14 5221 41833260 8e Linux LVM
Disk /dev/xvdb: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/xvdb1 1 13054 104856223+ 83 Linux
If loop/cryptoloop/etc. is involved, check whether any of them is still accessing the DRBD device.
If some virtualization technique is in use, shut down/destroy every container/VM that may have accessed that DRBD device during its lifetime.
No, that is not the case.
Sometimes it is just udev, or an equivalent, racing with you.
I have already disabled the multipath rule and even stopped udevd, but nothing changed.
Sometimes it is a Unix domain socket or something similar that is still held open (and that will not necessarily show up in lsof/fuser).
If so, how can we find that Unix socket?
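For the record, here is a sketch of what I would look at (note that a socket held by a kernel thread such as drbd1_worker would not show up here either):
# netstat -nxp | less
# find /proc/[0-9]*/fd -lname 'socket:*' 2>/dev/null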
Update: Wed May 22 22:10:41 ICT 2013
Here is the stack trace of the DRBD worker process, dumped via the magic SysRq key:
kernel: drbd1_worker S ffff81007ae21820 0 7782 1 7795 7038 (L-TLB)
kernel: ffff810055d89e00 0000000000000046 000573a8befba2d6 ffffffff8008e82f
kernel: 00078d18577c6114 0000000000000009 ffff81007ae21820 ffff81007fcae040
kernel: 00078d18577ca893 00000000000002b1 ffff81007ae21a08 000000017a590180
kernel: Call Trace:
kernel: [<ffffffff8008e82f>] enqueue_task+0x41/0x56
kernel: [<ffffffff80063002>] thread_return+0x62/0xfe
kernel: [<ffffffff80064905>] __down_interruptible+0xbf/0x112
kernel: [<ffffffff8008ee84>] default_wake_function+0x0/0xe
kernel: [<ffffffff80064713>] __down_failed_interruptible+0x35/0x3a
kernel: [<ffffffff885d461a>] :drbd:.text.lock.drbd_worker+0x2d/0x43
kernel: [<ffffffff885eca37>] :drbd:drbd_thread_setup+0x127/0x1e1
kernel: [<ffffffff800bab82>] audit_syscall_exit+0x329/0x344
kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
kernel: [<ffffffff885ec910>] :drbd:drbd_thread_setup+0x0/0x1e1
kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11
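For reference, a sketch of how such a trace can be produced (assuming the sysrq facility is enabled via kernel.sysrq):
# echo t > /proc/sysrq-trigger
# dmesg | grep -A 20 drbd1_worker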
I'm not sure whether this OCFS2 heartbeat region is what prevents DRBD from switching to secondary:
kernel: o2hb-C3E41CA2 S ffff810002536420 0 9251 31 3690 (L-TLB)
kernel: ffff810004af7d20 0000000000000046 ffff810004af7d30 ffffffff80063002
kernel: 1400000004000000 000000000000000a ffff81007ec307a0 ffffffff80319b60
kernel: 000935c260ad6764 0000000000000fcd ffff81007ec30988 0000000000027e86
kernel: Call Trace:
kernel: [<ffffffff80063002>] thread_return+0x62/0xfe
kernel: [<ffffffff8006389f>] schedule_timeout+0x8a/0xad
kernel: [<ffffffff8009a41d>] process_timeout+0x0/0x5
kernel: [<ffffffff8009a97c>] msleep_interruptible+0x21/0x42
kernel: [<ffffffff884b3b0b>] :ocfs2_nodemanager:o2hb_thread+0xd2c/0x10d6
kernel: [<ffffffff80063002>] thread_return+0x62/0xfe
kernel: [<ffffffff800a329f>] keventd_create_kthread+0x0/0xc4
kernel: [<ffffffff884b2ddf>] :ocfs2_nodemanager:o2hb_thread+0x0/0x10d6
kernel: [<ffffffff800a329f>] keventd_create_kthread+0x0/0xc4
kernel: [<ffffffff80032632>] kthread+0xfe/0x132
kernel: [<ffffffff8005dfb1>] child_rip+0xa/0x11
kernel: [<ffffffff800a329f>] keventd_create_kthread+0x0/0xc4
kernel: [<ffffffff80032534>] kthread+0x0/0x132
kernel: [<ffffffff8005dfa7>] child_rip+0x0/0x11
Answer 1
"I'm not sure whether this OCFS2 heartbeat region is what prevents DRBD from switching to secondary."
Probably, yes. Have you tried killing that region, as described in this guide?
# /etc/init.d/o2cb offline serving
Stopping O2CB cluster serving: Failed
Unable to stop cluster as heartbeat region still active
OK, first you should list the OCFS2 volumes along with their labels and UUIDs:
# mounted.ocfs2 -d
Device FS Stack UUID Label
/dev/sdb1 ocfs2 o2cb C3E41CA2BDE8477CA7FF2C796098633C data_ocfs2
/dev/drbd1 ocfs2 o2cb C3E41CA2BDE8477CA7FF2C796098633C data_ocfs2
Second, check whether there are still references to this device:
# ocfs2_hb_ctl -I -d /dev/sdb1
C3E41CA2BDE8477CA7FF2C796098633C: 1 refs
Try to kill it:
# ocfs2_hb_ctl -K -d /dev/sdb1 ocfs2
Then stop the cluster stack:
# /etc/init.d/o2cb stop
Stopping O2CB cluster serving: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
And demote the device to the secondary role again:
# drbdadm secondary r0
# drbd-overview
1:r0 StandAlone Secondary/Unknown UpToDate/DUnknown r-----
Now you can recover from the split brain as usual:
# drbdadm -- --discard-my-data connect r0
# drbd-overview
1:r0 WFConnection Secondary/Unknown UpToDate/DUnknown C r-----
On the other node (the split-brain survivor):
# drbdadm connect r0
# drbd-overview
1:r0 SyncSource Primary/Secondary UpToDate/Inconsistent C r---- /data ocfs2 100G 1.9G 99G 2%
[>....................] sync'ed: 3.2% (753892/775004)K delay_probe: 28
On the split-brain victim:
# /etc/init.d/o2cb start
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster serving: OK
# /etc/init.d/ocfs2 start
Starting Oracle Cluster File System (OCFS2) [ OK ]
Verify that the mount point is up and running:
# df -h /data/
Filesystem Size Used Avail Use% Mounted on
/dev/drbd1 100G 1.9G 99G 2% /data
Answer 2
A common reason why DRBD cannot demote a resource is an active device-mapper device on top of it... for example a volume group. You can check this with:
dmsetup ls --tree -o inverted
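If the tree does show something stacked on top of the DRBD device (for example an LVM volume group whose PV is /dev/drbd1), deactivating it usually releases the device. A sketch, with "myvg" standing in for whatever name the tree actually shows:
vgchange -an myvg
dmsetup ls --tree -o inverted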
Answer 3
For me, the cause was multipathd.
I followed the official Pacemaker documentation through the end of section 8.4, without installing OCFS2 or GFS2 (on Ubuntu 20.04), and could not demote the DRBD host from Primary to Secondary:
eric@host1:~$ sudo drbdadm status
r0 role:Primary
disk:UpToDate
peer role:Secondary
replication:Established peer-disk:UpToDate
eric@host1:~$ sudo drbdadm secondary r0
1: State change failed: (-12) Device is held open by someone
Command 'drbdsetup-84 secondary 1' terminated with exit code 11
Check who is holding it open:
eric@host1:~$ sudo fuser -m /dev/drbd1
/dev/drbd1: 558
eric@host1:~$ ps aux | grep 558
root 558 0.0 0.8 345856 18124 ? SLsl 07:04 0:00 /sbin/multipathd -d -s
Stop multipathd:
eric@host1:~$ sudo systemctl stop multipathd
Warning: Stopping multipathd.service, but it can still be activated by: multipathd.socket
eric@host1:~$ sudo systemctl stop multipathd.socket
eric@host1:~$ sudo fuser -m /dev/drbd1
Now try demoting to Secondary again:
eric@host1:~$ sudo drbdadm secondary r0
eric@host1:~$ sudo drbdadm status
r0 role:Secondary
disk:UpToDate
peer role:Secondary
replication:Established peer-disk:UpToDate
Success!
Before restarting multipathd, add your DRBD device (/dev/drbd1 in my case) to multipath's blacklist by adding this section to the bottom of /etc/multipath.conf:
blacklist {
devnode "^drbd[0-9]"
}
Restart multipathd.socket, then multipathd:
eric@host1:~$ sudo systemctl start multipathd.socket
eric@host1:~$ sudo systemctl start multipathd
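To verify that the blacklist actually works after the restart, the earlier checks can simply be repeated (a sketch): fuser should report nothing, and the resource should demote without the -12 error:
eric@host1:~$ sudo fuser -m /dev/drbd1
eric@host1:~$ sudo drbdadm status r0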