I have two disks with identical partition tables (copied with sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb), yet mdadm refuses to combine them into a RAID-1 array:
$ sudo mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: /dev/sdb1 not large enough to join array
Any idea what is going on here?
Details
$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Mar 22 19:34:24 2018
Raid Level : raid1
Array Size : 976627712 (931.38 GiB 1000.07 GB)
Used Dev Size : 976627712 (931.38 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Aug 25 11:56:19 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : hostname:0 (local to host hostname)
UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Events : 459187
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8        1        1      active sync   /dev/sda1
$ sudo parted /dev/sda unit s print
Model: XXX (scsi)
Disk /dev/sda: 1953525168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start  End          Size         Type     File system  Flags
 1      2048s  1948268543s  1948266496s  primary               raid
$ sudo parted /dev/sdb unit s print
Model: XXX (scsi)
Disk /dev/sdb: 1953519616s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start  End          Size         Type     File system  Flags
 1      2048s  1948268543s  1948266496s  primary               raid
$ sudo mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Name : hostname:0 (local to host hostname)
Creation Time : Thu Mar 22 19:34:24 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1948004352 (928.88 GiB 997.38 GB)
Array Size : 976627712 (931.38 GiB 1000.07 GB)
Used Dev Size : 1953255424 (931.38 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=18446744073704300544 sectors
State : clean
Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Aug 25 12:39:03 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : f47ecd0c - correct
Events : 459193
Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
$ sudo mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Name : hostname:0 (local to host hostname)
Creation Time : Thu Mar 22 19:34:24 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1948004352 (928.88 GiB 997.38 GB)
Array Size : 976627712 (931.38 GiB 1000.07 GB)
Used Dev Size : 1953255424 (931.38 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=18446744073704300544 sectors
State : clean
Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Aug 25 10:03:24 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 4e58ad84 - correct
Events : 81346
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
Answer 1
You probably built the array originally on the whole disks, without any partitions. A partition table was likely added later, and that is what confuses things: the superblock still records a Used Dev Size (1953255424 sectors) that only fits on the raw disk, so mdadm in effect wants the whole disk back and rejects the smaller partition.
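The sector arithmetic in the mdadm -E output above bears this out. A quick check (the numbers are taken directly from that output; the whole-disk examine is a suggested extra probe, not something shown in the question):
$ echo $((1948266496 - 262144))      # partition size minus Data Offset: prints 1948004352,
                                     # which matches the Avail Dev Size reported above
$ echo $((1953255424 - 1948004352))  # Used Dev Size minus Avail Dev Size: prints 5251072
                                     # sectors, i.e. the partition is ~2.5 GiB too small
$ sudo mdadm -E /dev/sdb             # examine the whole disk (not /dev/sdb1); it may still
                                     # carry the original whole-disk superblock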
You should also look at the metadata format: it may not be possible to tell from the on-disk metadata whether it describes the whole disk or only a partition. You can rebuild the array with a newer metadata format.
You can create a new RAID array with the second drive as its only active member, spanning the whole partition minus a few MB as a safety margin in case this problem ever recurs. Then copy all the data from the old array to the new one.
Finally, wipe the original array and add the old disk to the new array, as sketched below.
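A minimal sketch of that procedure, assuming /mnt/old and /mnt/new as scratch mount points and ext4 as the filesystem (both assumptions). mdadm's --size is given in KiB; 973000000 is only an illustrative figure slightly below the partition's 974002176 usable KiB, leaving the safety margin mentioned above:
$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.2 \
      --size=973000000 /dev/sdb1 missing     # degraded array with an explicit size margin
$ sudo mkfs.ext4 /dev/md1                    # or whatever filesystem you prefer
$ sudo mount /dev/md0 /mnt/old
$ sudo mount /dev/md1 /mnt/new
$ sudo rsync -aHAX /mnt/old/ /mnt/new/       # copy everything across
$ sudo umount /mnt/old
$ sudo mdadm --stop /dev/md0                 # retire the old array
$ sudo mdadm --zero-superblock /dev/sda1     # wipe the old member's RAID metadata
$ sudo mdadm --manage /dev/md1 --add /dev/sda1   # let it resync into the new array
Creating the array with an explicit --size below the partition capacity means a slightly smaller replacement disk can still join later without hitting this error again.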
Answer 2
I ran into this problem as well. The culprit was that I had rearranged some partitions on the disk, but the kernel's idea of the partition size (too small) still reflected a partition that had since been deleted. I had to trigger a re-read of the disk's current partition table and then enlarge the partition.
$ sudo partprobe /dev/sdb # to re-read current partition table of the device
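If the partition itself also has to be enlarged before the kernel sees a big enough device, something along these lines may work (the resizepart target and the final re-add are assumptions about the desired end state; older parted versions may prompt interactively):
$ sudo parted /dev/sdb resizepart 1 100%         # grow partition 1 to the end of the disk
$ sudo partprobe /dev/sdb                        # make the kernel re-read the table
$ sudo blockdev --getsz /dev/sdb1                # confirm the new size in 512-byte sectors
$ sudo mdadm --manage /dev/md0 --add /dev/sdb1   # retry joining the array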