I have a server with two disks in RAID-1. Disk 1 failed, and for certain reasons I had to reboot the server; the current state after the reboot is shown below. Now I cannot remove the failed disk from the MD array, and when I put a new disk into the server it refuses to boot, even from the old disk. I need help with how to clean the removed disk out of the MD array.
root@compute1:/dev# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Aug 19 21:36:30 2021
        Raid Level : raid1
        Array Size : 409280 (399.75 MiB 419.10 MB)
     Used Dev Size : 409280 (399.75 MiB 419.10 MB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Wed Jun 22 02:58:21 2022
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

              Name : bootstrap:0
              UUID : c6e17655:3fbf8f0b:5e4d3285:69fcd05e
            Events : 128

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        3        1      active sync   /dev/sda3
root@compute1:/dev# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md5 : active raid1 sda8[1]
      790043648 blocks super 1.2 [2/1] [_U]
      bitmap: 4/6 pages [16KB], 65536KB chunk
md2 : active raid1 sda5[1]
      52396032 blocks super 1.2 [2/1] [_U]

md4 : active raid1 sda7[1]
      10477568 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda4[1]
      52396032 blocks super 1.2 [2/1] [_U]

md3 : active raid1 sda6[1]
      31440896 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sda3[1]
      409280 blocks super 1.2 [2/1] [_U]
unused devices: &lt;none&gt;
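Every array above shows [2/1] [_U]: each is running degraded on its second member (a partition of /dev/sda) with the first slot empty. If the old failed disk gets reattached, its stale RAID superblocks may still confuse the system at boot. A minimal sketch of wiping that metadata, assuming the failed disk appears as /dev/sdb (a hypothetical name, confirm with lsblk before touching anything):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # identify the failed disk first (/dev/sdb is an assumption)
mdadm --zero-superblock /dev/sdb3     # clear stale RAID metadata, one partition at a time (sdb3..sdb8)
wipefs -a /dev/sdb                    # or strip every signature from the whole disk at once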
I have tried "mdadm /dev/md1 --remove failed" and "mdadm /dev/md1 --remove detached", but neither worked.
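Those commands most likely did nothing because there is nothing left to remove: "mdadm --detail" already shows slot 0 as "removed", and "--remove failed" / "--remove detached" only act on devices still listed in the array. What remains is re-adding a replacement disk. A hedged sketch under the assumption that the new disk shows up as /dev/sdb (hypothetical; verify with lsblk) and should mirror the partition layout of /dev/sda:

sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the partition table from the healthy disk
mdadm /dev/md0 --add /dev/sdb3          # re-add one partition per array, matching the sda layout above
mdadm /dev/md1 --add /dev/sdb4
mdadm /dev/md2 --add /dev/sdb5
mdadm /dev/md3 --add /dev/sdb6
mdadm /dev/md4 --add /dev/sdb7
mdadm /dev/md5 --add /dev/sdb8
grub-install /dev/sdb                   # assumes BIOS-style boot; adapt for UEFI
cat /proc/mdstat                        # watch the resync progress

As for the server not booting with the new disk inserted, it may simply be the firmware boot order: an empty disk sitting ahead of /dev/sda in the BIOS/UEFI order would produce exactly that symptom, so it is worth checking before changing anything on the disks.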