I am recovering a PV on an mdadm RAID 1 that holds a single VG with several LVs.
The underlying devices have some bad sectors (one only a few, the other quite a lot), and a stupid typo made it necessary to recover the LVM configuration by grepping the device. Luckily I found it, and the restored configuration looks identical to the original one.
The only problem is that the logical volumes no longer contain valid filesystems. Using e2sl, I found that one of the superblocks of the filesystem I am looking for sits in the wrong logical volume. Unfortunately I have no idea how to correct or work around this.
root@rescue ~/e2sl # ./ext2-superblock -d /dev/vg0/tmp | grep 131072000
Found: block 20711426 (cyl 1369, head 192, sector 50), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
root@rescue ~/e2sl # ./ext2-superblock -d /dev/vg0/home | grep 131072000
Found: block 2048 (cyl 0, head 32, sector 32), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 526336 (cyl 34, head 194, sector 34), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 1050624 (cyl 69, head 116, sector 36), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 1574912 (cyl 104, head 38, sector 38), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 2099200 (cyl 138, head 200, sector 40), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 6293504 (cyl 416, head 56, sector 56), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 6817792 (cyl 450, head 218, sector 58), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 12584960 (cyl 832, head 81, sector 17), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 20973568 (cyl 1387, head 33, sector 49), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 32507904 (cyl 2149, head 238, sector 30), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 63440896 (cyl 4195, head 198, sector 22), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 89655296 (cyl 5929, head 139, sector 59), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
^C
I have the feeling that I am only one step away from accessing my filesystems again and recovering some data that was never backed up.
LVM configuration:
root@rescue ~ # pvs
PV VG Fmt Attr PSize PFree
/dev/md1 vg0 lvm2 a-- 2.71t 767.52g
root@rescue ~ # vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 5 0 wz--n- 2.71t 767.52g
root@rescue ~ # lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
backup vg0 -wi-a--- 500.00g
container vg0 -wi-a--- 500.00g
home vg0 -wi-a--- 500.00g
root vg0 -wi-a--- 500.00g
tmp vg0 -wi-a--- 10.00g
VG configuration:
# Generated by LVM2 version 2.02.95(2) (2012-03-06): Sun Oct 13 23:56:33 2013
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgs'"
creation_host = "rescue" # Linux rescue 3.10.12 #29 SMP Mon Sep 23 13:18:39 CEST 2013 x86_64
creation_time = 1381701393 # Sun Oct 13 23:56:33 2013
vg0 {
    id = "7p0Aiw-pBpd-rn6Y-geFb-jyZe-gide-Anc9ag"
    seqno = 19
    format = "lvm2" # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192 # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0
    physical_volumes {
        pv0 {
            id = "GBIwI4-AxBa-6faf-aLfB-UZiP-iSS9-FaOrhH"
            device = "/dev/md1" # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 5824875134 # 2.71242 Terabytes
            pe_start = 384
            pe_count = 711044 # 2.71242 Terabytes
        }
    }
    logical_volumes {
        root {
            id = "1e3gvq-IJnX-Aimz-ziiY-zucE-soCO-YU2ayp"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }
        tmp {
            id = "px8JAy-JnkP-Amry-uHtf-lCUB-rfdx-Z8y11y"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 2560 # 10 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 128000
                ]
            }
        }
        home {
            id = "e0AZbd-22Ss-RLrF-TgvF-CSDN-Nw6w-Gj7dal"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 130560
                ]
            }
        }
        backup {
            id = "ZXNcbK-gYKj-LJfm-f193-Ozsi-Rm3Y-kZL37c"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "new.bountin.net"
            creation_time = 1341852222 # 2012-07-09 18:43:42 +0200
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 258560
                ]
            }
        }
        container {
            id = "X9wheh-3ADB-Fiau-j7SR-pcH9-hXne-K2NVAc"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "new.bountin.net"
            creation_time = 1341852988 # 2012-07-09 18:56:28 +0200
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 386560
                ]
            }
        }
    }
}
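For reference, and just as a cross-check of the layout above (this calculation is not part of the original metadata): extent_size and pe_start are given in 512-byte sectors, so each LV's data begins at (pe_start + start_extent * extent_size) * 512 bytes into /dev/md1. A minimal shell sketch, with lv_offset_bytes being nothing more than an illustrative helper name:

EXTENT_SIZE=8192    # sectors per extent (4 MiB)
PE_START=384        # sectors before the first physical extent

lv_offset_bytes() {                 # usage: lv_offset_bytes <start_extent>
    echo $(( (PE_START + $1 * EXTENT_SIZE) * 512 ))
}

lv_offset_bytes 0        # root -> 196608 bytes into /dev/md1
lv_offset_bytes 128000   # tmp  -> 536871108608 bytes (~500 GiB)
lv_offset_bytes 130560   # home -> 547608526848 bytes (~510 GiB)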
Answer 1
For anyone with a similar problem:
I used e2sl [1] to look for filesystem candidates directly on one of the RAID devices, and mounted the filesystem through a loop device [2], bypassing both LVM and the software RAID. I had to tweak the offset a little (the superblock sits 1 KB from the start of the partition!), but in the end I managed it.
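For illustration only, a minimal sketch of that loop-device step; it is not the exact command sequence from my recovery, and the device and byte offset are passed in as placeholders. Whatever unit e2sl reports in has to be converted to a byte offset first; the filesystem then starts 1 KiB before the superblock, because the primary ext superblock sits 1024 bytes into the filesystem:

DEV=$1         # e.g. /dev/md1, or one of the raw RAID 1 member partitions
SB_OFFSET=$2   # byte offset of the superblock candidate on $DEV
FS_OFFSET=$(( SB_OFFSET - 1024 ))   # filesystem start = superblock position - 1 KiB

LOOPDEV=$(losetup --find --show --read-only --offset "$FS_OFFSET" "$DEV")
dumpe2fs -h "$LOOPDEV"              # sanity check: can e2fsprogs read this superblock?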
From there the rescue was straightforward: mount the loop device on a mountpoint, and everything can be copied off.
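Continuing the sketch above (the mountpoint and destination path are made up), the last step could look like this:

mkdir -p /mnt/rescue
mount -o ro "$LOOPDEV" /mnt/rescue            # read-only, we only want to copy data off
rsync -aHAX /mnt/rescue/ /path/to/safe/storage/
umount /mnt/rescue
losetup -d "$LOOPDEV"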
[2] mount with the loop option (-o loop), and see losetup