I've run into a strange problem, and none of the pages I've found quite match my situation.
Basically, I can't access a small RAID 1 array made up of two 1 TB WD Red disks (sdb and sdc in the fdisk output below).
Here are the usual checks (let me know if I've missed one):
Disk management. You may need to scroll the box below to see everything, and I don't know what all the loop devices are...
$> sudo fdisk -l
Disk /dev/loop0: 140.7 MiB, 147496960 bytes, 288080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 13 MiB, 13619200 bytes, 26600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop2: 3.7 MiB, 3878912 bytes, 7576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop3: 91 MiB, 95408128 bytes, 186344 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop4: 2.3 MiB, 2355200 bytes, 4600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop5: 14.5 MiB, 15208448 bytes, 29704 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop6: 34.6 MiB, 36216832 bytes, 70736 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop7: 88.5 MiB, 92778496 bytes, 181208 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 28352AE2-4322-4627-9BE2-DFBEDBAFF1BF
Device Start End Sectors Size Type
/dev/sda1 2048 1050623 1048576 512M EFI System
/dev/sda2 1050624 468860927 467810304 223.1G Linux filesystem
GPT PMBR size mismatch (1953519879 != 1953525167) will be corrected by w(rite).
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 84416481-C343-40E7-A8EB-3680B26FEF19
Device Start End Sectors Size Type
/dev/sdb1 2048 1953519615 1953517568 931.5G Linux filesystem
GPT PMBR size mismatch (1953519879 != 1953525167) will be corrected by w(rite).
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 84416481-C343-40E7-A8EB-3680B26FEF19
Device Start End Sectors Size Type
/dev/sdc1 2048 1953519615 1953517568 931.5G Linux filesystem
Disk /dev/sdd: 119.2 GiB, 128035676160 bytes, 250069680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4A8AA6CA-61E4-43A2-B616-EAD50214A106
Device Start End Sectors Size Type
/dev/sdd1 2048 999423 997376 487M EFI System
/dev/sdd2 999424 17000447 16001024 7.6G Linux swap
GPT PMBR size mismatch (1953519879 != 1953519615) will be corrected by w(rite).
Disk /dev/md126: 931.5 GiB, 1000202043392 bytes, 1953519616 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/md126p1 1 1953519879 1953519879 931.5G ee GPT
Partition 1 does not start on physical sector boundary.
mdstat
$> cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active (auto-read-only) raid1 sdb[1] sdc[0]
976759808 blocks super external:/md127/0 [2/2] [UU]
md127 : inactive sdc[1](S) sdb[0](S)
5552 blocks super external:imsm
unused devices: <none>
Config file
$> sudo cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY metadata=imsm UUID=fe0bb25b:d021df67:4d7fe09f:a30a6e08
ARRAY /dev/md/Volume1 container=fe0bb25b:d021df67:4d7fe09f:a30a6e08 member=0 UUID=3d2e36ef:e2314e97:11933fe5:f38135b1
ARRAY /dev/md/0 metadata=1.2 UUID=7d7acef8:cde50639:d9c04370:fbf727c6 name=chugster:0
# This configuration was auto-generated on Wed, 07 Aug 2019 00:10:23 +0100 by mkconf
mdadm -E /dev/sdb
$> sudo mdadm -E /dev/sdb
/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : c1155891
Family : c1155891
Generation : 000000d2
Attributes : All supported
UUID : fe0bb25b:d021df67:4d7fe09f:a30a6e08
Checksum : 03482b05 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk00 Serial : WD-WXV1E74D9L1F
State : active
Id : 00000002
Usable Size : 1953519616 (931.51 GiB 1000.20 GB)
[Volume1]:
UUID : 3d2e36ef:e2314e97:11933fe5:f38135b1
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Sector Size : 512
Array Size : 1953519616 (931.51 GiB 1000.20 GB)
Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
Sector Offset : 0
Num Stripes : 7630936
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
RWH Policy : off
Disk01 Serial : WD-WXV1E747PDZD
State : active
Id : 00000003
Usable Size : 1953519616 (931.51 GiB 1000.20 GB)
mdadm -E /dev/sdc
$> sudo mdadm -E /dev/sdc
/dev/sdc:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : c1155891
Family : c1155891
Generation : 000000d2
Attributes : All supported
UUID : fe0bb25b:d021df67:4d7fe09f:a30a6e08
Checksum : 03482b05 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk01 Serial : WD-WXV1E747PDZD
State : active
Id : 00000003
Usable Size : 1953519616 (931.51 GiB 1000.20 GB)
[Volume1]:
UUID : 3d2e36ef:e2314e97:11933fe5:f38135b1
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Sector Size : 512
Array Size : 1953519616 (931.51 GiB 1000.20 GB)
Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
Sector Offset : 0
Num Stripes : 7630936
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
RWH Policy : off
Disk00 Serial : WD-WXV1E74D9L1F
State : active
Id : 00000002
Usable Size : 1953519616 (931.51 GiB 1000.20 GB)
mdadm --detail --scan
$> sudo mdadm --detail --scan
ARRAY /dev/md/imsm0 metadata=imsm UUID=fe0bb25b:d021df67:4d7fe09f:a30a6e08
ARRAY /dev/md/Volume1 container=/dev/md/imsm0 member=0 UUID=3d2e36ef:e2314e97:11933fe5:f38135b1
A bit of background: sdc was reported as failed due to a missing superblock, but I read something somewhere that let me "patch" sdc using sdb's UUID, so now mdadm -E /dev/sdc shows information instead of complaining about a missing superblock. I'm not sure whether what I did was correct.
If I try to assemble the RAID, it says /dev/md127 does not exist in mdadm.conf. If I try to regenerate mdadm.conf, it does not add /dev/md127.
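For reference, the assemble and config regeneration attempts were roughly along these lines (reconstructed from memory, so the exact invocations may differ slightly):
$> sudo mdadm --assemble --scan
$> sudo /usr/share/mdadm/mkconf > /tmp/mdadm.conf    # generate a fresh config to compare against /etc/mdadm/mdadm.conf
$> sudo update-initramfs -u
Neither step produces anything new that refers to /dev/md127.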
Basically, I don't know how to reassemble the array, or why it failed in the first place. The Disks utility reports no problems with either disk.
If all else fails, can I take md127 out of the picture, mount the array from a single disk (md126), delete all the partitions currently on sdc, and then add it back to the array?
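Before doing anything destructive I was planning to at least check that the data is readable from the remaining side, something like this (md126p1 being the partition fdisk shows on md126):
$> sudo mount -o ro /dev/md126p1 /mnt
$> ls /mnt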
Any help is much appreciated.
Andrew
Edit 1: It might help to know that all of this happened when I reinstalled the OS, going from 14.04 to 18.04.
Edit 2
I've just noticed that I can examine sdb1 but not sdc1:
$> sudo mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d7acef8:cde50639:d9c04370:fbf727c6
Name : chugster:0 (local to host chugster)
Creation Time : Tue Aug 6 23:38:40 2019
Raid Level : linear
Raid Devices : 2
Avail Dev Size : 1953253376 (931.38 GiB 1000.07 GB)
Used Dev Size : 0
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : clean
Device UUID : beeda35f:a7c7f529:33e2c551:4bc87bfc
Update Time : Tue Aug 6 23:38:40 2019
Bad Block Log : 512 entries available at offset 8 sectors
Checksum : f2302886 - correct
Events : 0
Rounding : 0K
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
$> sudo mdadm --examine /dev/sdc1
mdadm: cannot open /dev/sdc1: No such file or directory
I think something is wrong with /dev/sdc. Given that /dev/sdc1 doesn't exist, I'm not sure how to remove /dev/sdc from the array. I also assumed I would remove it from md127, but that doesn't feel right; maybe I should be trying to remove it from /dev/md/Volume1 instead? Another thing that worries me is that /proc/mdstat seems to say md126's superblock lives on md127, or am I reading that wrong?
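In case it helps, these are the non-destructive checks I can run to see what the kernel thinks is on each device:
$> lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb /dev/sdc
$> cat /proc/partitions
$> sudo mdadm --detail /dev/md126
$> sudo mdadm --detail /dev/md127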
Edit 3: Corrections made.
Answer 1
I really hate fakeraid. It's the kind of firmware feature users reach for because they assume hardware == better, but in practice it only complicates your storage setup and makes it more fragile. The only time fakeraid matters is when you want to dual-boot and share the same volume between multiple operating systems. Otherwise, avoid it like the plague.
What really stands out to me is that you have partitions marked as filesystems that appear to span the whole disks, yet you assigned the whole block devices to the RAID. That is how you corrupt data: at some point one of them probably got mounted, or fsck ran against it at boot and "repaired" it, and that is when your superblock got clobbered.
Partitioning the disks you assign to a RAID is fine; just make sure you mark the partitions as type FD (Linux raid autodetect) so this kind of conflict cannot happen. The filesystem then lives on the MD device.
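As a minimal sketch (sdX and sdY are placeholders for your two disks; on a GPT label the equivalent of MBR type FD is sgdisk's fd00):
$> sudo sgdisk -n 1:0:0 -t 1:fd00 /dev/sdX     # one partition spanning the disk, typed "Linux RAID"
$> sudo sgdisk -n 1:0:0 -t 1:fd00 /dev/sdY
$> sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
$> sudo mkfs.ext4 /dev/md0                     # the filesystem goes on the MD device, not on the raw partitions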
At this point, I would boot from a USB disk, bring the array online, force-remove sdc, fill the whole thing with zeros, and then add it back to the array for a full resync.
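Roughly, and assuming md126 is the volume and md127 the IMSM container exactly as your mdstat shows (verify the device names before running anything destructive):
$> sudo mdadm --manage /dev/md126 --fail /dev/sdc --remove /dev/sdc
$> sudo dd if=/dev/zero of=/dev/sdc bs=1M status=progress    # zero the entire disk; this takes a while
$> sudo mdadm --manage /dev/md127 --add /dev/sdc             # add it back to the container and let the resync run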
Or start over. You say you have a backup. Tear the array down, zero the superblocks (or just dd if=/dev/zero of=/...), and this time use plain md with no fakeraid. I would suggest creating one partition on each disk spanning all the space and marking it FD, so this cannot happen again.
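A rough teardown sketch, assuming nothing on those two disks still needs to be kept:
$> sudo mdadm --stop /dev/md126
$> sudo mdadm --stop /dev/md127
$> sudo mdadm --zero-superblock /dev/sdb1             # the stray 1.2 superblock (chugster:0)
$> sudo mdadm --zero-superblock /dev/sdb /dev/sdc     # the IMSM metadata on the bare disks
Then repartition as described above, create the new array from the partitions, regenerate mdadm.conf, and run update-initramfs -u.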
https://www.tecmint.com/create-raid1-in-linux/
Good luck.
A side note on fakeraid: https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/rst-linux-paper.pdf
"The recommended software RAID implementation in Linux* is the open-source MD RAID package. Intel has enhanced MD RAID to support RST metadata and OROM, and it is validated and supported by Intel for server platforms. OEMs increasingly want Intel to extend the validation and support of RST on mobile, desktop and workstation platforms in dual-boot Windows and Linux environments."
Which translates to: "hardware vendors are lazy and don't want to deal with the OS, so they want to pre-build systems 'with RAID' and pretend they are adding value for their customers."