This Ubuntu Server 16.04 machine has the following disks:
sudo fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7ac0eeb9
Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1          2048 3886718975 3886716928  1.8T fd Linux raid autodetect
/dev/sda2    3886721022 3907028991   20307970  9.7G  5 Extended
/dev/sda5    3886721024 3907028991   20307968  9.7G fd Linux raid autodetect
Partition 2 does not start on physical sector boundary.
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xc9b50d2d
Device     Boot      Start        End    Sectors  Size Id Type
/dev/sdb1  *         2048 3886718975 3886716928  1.8T fd Linux raid autodetect
/dev/sdb2    3886721022 3907028991   20307970  9.7G  5 Extended
/dev/sdb5    3886721024 3907028991   20307968  9.7G fd Linux raid autodetect
Partition 2 does not start on physical sector boundary.
Disk /dev/md1: 9.7 GiB, 10389291008 bytes, 20291584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/md0: 1.8 TiB, 1989864849408 bytes, 3886454784 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
So I have two physical 1.8 TB drives, each with three partitions, and two RAID arrays (/dev/md0 and /dev/md1).
If I run cat /proc/mdstat, I get:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0]
1943227392 blocks super 1.2 [2/1] [U_]
bitmap: 10/15 pages [40KB], 65536KB chunk
md1 : active raid1 sda5[0]
10145792 blocks super 1.2 [2/1] [U_]
And if I look at each RAID array in detail, I see:
sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Mar 20 06:41:14 2018
Raid Level : raid1
Array Size : 1943227392 (1853.21 GiB 1989.86 GB)
Used Dev Size : 1943227392 (1853.21 GiB 1989.86 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Dec 5 19:38:00 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : impacs:0
UUID : 619c5551:3e475969:80882df7:7da3f864
Events : 166143
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed
and
sudo mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Mar 20 06:41:40 2018
Raid Level : raid1
Array Size : 10145792 (9.68 GiB 10.39 GB)
Used Dev Size : 10145792 (9.68 GiB 10.39 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Sun Dec 2 00:57:07 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : impacs:1
UUID : 1b9a0dc4:cc30cd7e:274fefd9:55266436
Events : 81
    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       2       0        0        2      removed
It looks like /dev/sdb1 is no longer part of /dev/md0 (and, by the same output, /dev/sdb5 is missing from /dev/md1). How can I safely add it back to the array?
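Before re-adding anything, it is worth checking why the member dropped out in the first place. A sketch of the usual checks (assuming the missing member is /dev/sdb1 and that the smartmontools package is installed):

```shell
# Inspect the md superblock on the dropped partition
# (event count, array UUID, last update time).
sudo mdadm --examine /dev/sdb1

# Look for I/O errors on the disk in the kernel log.
dmesg | grep -i sdb

# Check the drive's SMART health summary and attributes.
sudo smartctl -H -a /dev/sdb
```

If the event count on /dev/sdb1 is far behind the active member's, or SMART reports reallocated/pending sectors, re-adding the old disk is risky and replacement is the safer route.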
Edit: I should add that this RAID was created at install time with the Ubuntu Server installer, and I'm fairly sure I selected both 1.8 TB disks as members of the array.
Edit: In the end the failing drive was replaced, the RAID rebuilt without any problems, and everything is working again.
Answer 1
Your drive was dropped from the RAID because it failed and was throwing errors. Replace the drive. Then you can create matching partitions on the new disk and add it back into the arrays.
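A sketch of that workflow, assuming the replacement disk appears as /dev/sdb and the surviving disk is /dev/sda (device names may differ on your system; double-check before running anything destructive):

```shell
# 1. Copy the partition table from the surviving disk to the new disk.
#    sfdisk -d dumps the table; piping it back recreates it. This works
#    for the DOS/MBR label shown above; for GPT you would use sgdisk.
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb

# 2. Add the new partitions into their arrays; mdadm starts the rebuild
#    automatically.
sudo mdadm --manage /dev/md0 --add /dev/sdb1
sudo mdadm --manage /dev/md1 --add /dev/sdb5

# 3. Watch the resync until both arrays show [UU] instead of [U_].
watch cat /proc/mdstat

# 4. The original /dev/sdb1 carried the boot flag, so reinstall GRUB on
#    the new disk so the machine can still boot if /dev/sda ever fails.
sudo grub-install /dev/sdb
```

Rebuilding a 1.8 TB mirror takes hours; the array stays usable (degraded) during the resync.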