mdadm RAID5 inactive and drive missing

I have a raid5 set up with mdadm and had to replace a broken drive. That worked for a while, but for some reason the array became inactive a few weeks later.

I'm fairly sure the missing drive, /dev/sdb, is also the one I replaced:

# sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 3
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 3

              Name : REMOTENAME:0  (local to host REMOTENAME)
              UUID : 59f98bf3:274707c2:2d79bc60:f0217294
            Events : 212054

    Number   Major   Minor   RaidDevice

       -       8       64        -        /dev/sde
       -       8       32        -        /dev/sdc
       -       8       48        -        /dev/sdd

Next, /proc/mdstat contains:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdc[1](S) sde[4](S) sdd[2](S)
      23441683464 blocks super 1.2
       
unused devices: <none>

The drive itself seems to be fine, since smartctl reports no errors for /dev/sdb.
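
For completeness, the kind of SMART check I mean is roughly the following (a sketch from memory, not necessarily the exact invocation I ran):

# sudo smartctl -H /dev/sdb    # overall health self-assessment
# sudo smartctl -a /dev/sdb    # full SMART attributes and error log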

Finally, mdadm --examine /dev/sdb gives:

/dev/sdb:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
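
If I read that correctly, the aa55 magic with a single type ee partition is just the protective MBR of a GPT-partitioned disk, so an md superblock (if there is one at all) might live on a partition rather than on the whole device. What I was going to check next, assuming the partition shows up as /dev/sdb1, is roughly:

# sudo lsblk -o NAME,SIZE,TYPE /dev/sdb    # list the partitions on the disk
# sudo mdadm --examine /dev/sdb1           # look for an md superblock on the (assumed) partition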

Edit: Thanks everyone for the comments! I'm not actually the one who set up that RAID; I more or less had the luck of inheriting responsibility for it. I'd be happy to hear your suggestions for the next setup :)

You're right, it now shows raid0. I found a text file where I had saved the output of mdadm --detail /dev/md0, maybe that helps?

# sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul 24 13:22:48 2018
        Raid Level : raid5
        Array Size : 23441682432 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813894144 (7451.91 GiB 8001.43 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Mon Sep  4 07:36:57 2023
             State : clean, checking
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : bitmap
      Check Status : 77% complete
              Name : REMOTENAME:0  (local to host REMOTENAME)
              UUID : 59f98bf3:274707c2:2d79bc60:f0217294
            Events : 212051
    Number   Major   Minor   RaidDevice State
       5       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

Possibly related: I found some differences between /dev/sdb and the other drives in the array:

# sudo lshw -class disk
...
  *-disk
       description: ATA Disk
       product: WDC WD80EFZZ-68B
       vendor: Western Digital
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/sdb
       version: 0A81
       serial: ---
       size: 7452GiB (8001GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=a931e3f0-c80c-447b-b4f6-e3a06b9b51a7 logicalsectorsize=512 sectorsize=4096
  *-disk
       description: ATA Disk
       product: WDC WD80EFZX-68U
       vendor: Western Digital
       physical id: 0.0.0
       bus info: scsi@2:0.0.0
       logical name: /dev/sdc
       version: 0A83
       serial: ---
       size: 7452GiB (8001GB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
...

Previously, the info for the old /dev/sdb drive contained no partitioned entry:

  *-disk
       description: ATA Disk
       product: WDC WD80EFZX-68U
       vendor: Western Digital
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/sdb
       version: 0A83
       serial: ---
       size: 7452GiB (8001GB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096

Edit 2:

Since the array was inactive, I was able to reactivate it with sudo mdadm --run /dev/md0. You're right: somehow, probably after a reboot, /dev/sdb dropped out of / was removed from the array?...

# sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul 24 13:22:48 2018
        Raid Level : raid5
        Array Size : 23441682432 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813894144 (7451.91 GiB 8001.43 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Nov  2 14:10:31 2023
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : REMOTENAME:0  (local to host REMOTENAME)
              UUID : 59f98bf3:274707c2:2d79bc60:f0217294
            Events : 212077

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
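
To check the theory that /dev/sdb dropped out around a reboot, my plan is to grep the kernel log for md and sdb messages, along these lines:

# sudo journalctl -k -b -1 | grep -i -E 'md0|sdb'    # kernel messages from the previous boot
# sudo dmesg | grep -i -E 'md0|sdb'                  # same for the current boot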

My question now is: should I reformat the /dev/sdb disk before adding it back to the array?
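
What I had in mind is something along these lines; I'm just not sure whether wiping the old metadata is needed (or wise), and whether to use the whole disk or a partition, given the GPT table that now shows up on the drive. This is only a sketch, not something I've run:

# sudo mdadm --examine /dev/sdb                 # see what metadata is still on the disk
# sudo mdadm --zero-superblock /dev/sdb         # only if the old superblock really should be discarded
# sudo mdadm --manage /dev/md0 --add /dev/sdb   # add the disk back and let it resync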

And an additional question, as someone pointed out: can an existing raid5 be safely converted to raid1 or raid6?
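
For context, what I mean by converting is the mdadm --grow reshape. My (possibly wrong) understanding is that going from this 4-disk raid5 to raid6 would need a fifth disk and a backup file, roughly like this (/dev/sdf and the backup path are just placeholders):

# sudo mdadm --manage /dev/md0 --add /dev/sdf   # hypothetical extra disk for the second parity
# sudo mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-grow.backup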
