mdadm RAID arrays missing devices

I am running a home server with three RAID1 arrays, each made up of two drives: md0, md1, and md127. After an unexpected reboot, md0 and md1 are each running on only one device, and I have not been able to add the second device back to either of them.

These are the arrays:

  • md0: /dev/sdg1, /dev/sdi1
  • md1: /dev/sdf1, /dev/sdd1
  • md127: /dev/sdh1, /dev/sdb1
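
For completeness, the kernel's live view of what is actually assembled can be checked like this (read-only, safe to run):

cat /proc/mdstat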

Here is the output of lsblk:

NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0   3.7T  0 disk  
sdb         8:16   0   1.8T  0 disk  
└─sdb1      8:17   0   1.8T  0 part  
  └─md127   9:127  0   1.8T  0 raid1 
sdc         8:32   0 465.8G  0 disk  
└─sdc1      8:33   0 465.8G  0 part  
sdd         8:48   0 931.5G  0 disk  
└─sdd1      8:49   0 931.5G  0 part  
sde         8:64   0   2.7T  0 disk  
└─sde1      8:65   0   2.7T  0 part  
sdf         8:80   0 931.5G  0 disk  
└─sdf1      8:81   0 931.5G  0 part  
  └─md1     9:1    0 931.4G  0 raid1 
sdg         8:96   0 465.8G  0 disk  
└─sdg1      8:97   0 465.8G  0 part  
  └─md0     9:0    0 465.6G  0 raid1 /
sdh         8:112  0   1.8T  0 disk  
└─sdh1      8:113  0   1.8T  0 part  
  └─md127   9:127  0   1.8T  0 raid1 
sdi         8:128  1 465.8G  0 disk  
└─sdi1      8:129  1 465.8G  0 part  
sdj         8:144  1  29.8G  0 disk  
└─sdj1      8:145  1  29.8G  0 part  [SWAP]
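
In case it is useful: sdi1 and sdd1 are the partitions that should belong to md0 and md1 respectively, but they are not attached to any md device above. Their superblocks can be dumped for comparison with a read-only examine (a sketch, nothing destructive):

mdadm --examine /dev/sdi1 /dev/sdd1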

Here is the state of the two problematic arrays, as reported by mdadm --detail:

/dev/md0:
        Version : 1.2
  Creation Time : Sat Apr 13 18:47:42 2019
     Raid Level : raid1
     Array Size : 488244864 (465.63 GiB 499.96 GB)
  Used Dev Size : 488244864 (465.63 GiB 499.96 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Sep  1 20:12:17 2019
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : GC01SRVR:0  (local to host GC01SRVR)
           UUID : 21f37e07:aea8dfec:e78b69d7:46f70c4d
         Events : 399869

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       97        1      active sync   /dev/sdg1


/dev/md1:
        Version : 1.2
  Creation Time : Sat Apr 13 18:50:33 2019
     Raid Level : raid1
     Array Size : 976628736 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Sep  1 20:02:33 2019
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : GC01SRVR:1  (local to host GC01SRVR)
           UUID : e0e49af7:7a791be6:168f01ab:6e84ba17
         Events : 899185

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       81        1      active sync   /dev/sdf1

And here is the content of mdadm.conf, which makes me suspicious, because only two of the three arrays are listed:

# definitions of existing MD arrays
ARRAY /dev/md/1  metadata=1.2 UUID=e0e49af7:7a791be6:168f01ab:6e84ba17 name=GC01SRVR:1
ARRAY /dev/md/0  metadata=1.2 UUID=21f37e07:aea8dfec:e78b69d7:46f70c4d name=GC01SRVR:0
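
My understanding (an assumption on my part) is that the md127 name usually indicates an array that was assembled without a matching mdadm.conf entry, so the missing third line may just mean md127 was never recorded. If regenerating the config is part of the fix, I believe it would look roughly like the following (the config path assumes a Debian-style layout; other distros use /etc/mdadm.conf):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u    # assuming a Debian/Ubuntu initramfs; other distros differ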

Considering that md0 is where the operating system lives, how can I add the missing drives back to their respective arrays?
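
To clarify what I am after: I assume the standard approach is something like the sketch below, with sdi1 and sdd1 as the missing members (per the lsblk output above), but I would like to be sure before running anything against the array that holds the root filesystem:

mdadm /dev/md0 --re-add /dev/sdi1    # fast resync via the internal bitmap, if accepted
mdadm /dev/md1 --re-add /dev/sdd1
mdadm /dev/md0 --add /dev/sdi1       # fallback: full resync if --re-add is refused
mdadm /dev/md1 --add /dev/sdd1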
