How to recover the journal superblock on an mdadm RAID5 array


Yesterday my 3-disk RAID5 array suddenly got unmounted. On inspection it appeared to be degraded, with one of the disks showing a much lower event count than the others.
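
For reference, the per-member event counts can be compared with something like this (the grep just trims the --examine output down to the interesting lines):

sudo mdadm --examine /dev/sd[bcd]1 | grep -E '/dev/|Events'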

I proceeded to run: sudo mdadm --assemble --force /dev/md0 /dev/sd[bcd]1. However, I made a mistake: at the time I ran the command, the disk /dev/sdb was actually named /dev/sde, so the command only operated on /dev/sdc1 and /dev/sdd1. The --force reset the disk with the lowest event count to the higher of the two counts (I believe the difference was around 600 events). This surprised me, since IIRC --force used not to reset event counts across such large differences.
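
In hindsight, checking how the kernel had named the disks before assembling would have avoided the mix-up, e.g. via the stable /dev/disk/by-id symlinks:

ls -l /dev/disk/by-id/ | grep -v part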

In the end, I re-synced the remaining disk (which was not the one with the lowest event count) with sudo mdadm /dev/md0 --add /dev/sdb1, as I have done a few times in the past. The sync completed as expected after a few hours, with no errors. But ever since then, I have been unable to mount the array.
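
For completeness: I followed the resync in /proc/mdstat the usual way, e.g.:

watch -n 60 cat /proc/mdstat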

Here are the details of the system:

OS:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 19.10
Release:    19.10
Codename:   eoan

RAID array:

sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Apr  4 23:46:03 2019
        Raid Level : raid5
        Array Size : 1953257472 (1862.77 GiB 2000.14 GB)
     Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Feb  6 13:16:05 2020
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

              Name : pc:0  (local to host pc)
              UUID : 96c2bb9a:8318792c:843e1e85:820cd123
            Events : 21280

    Number   Major   Minor   RaidDevice State
       5       8       17        0      active sync   /dev/sdb1
       4       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid5 sdc1[4] sdd1[3] sdb1[5]
      1953257472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

Relevant section of lsblk -f:

sdb                                                                                                      
└─sdb1                  linux_raid_member pc:0 96c2bb9a-8318-792c-843e-1e85820cd123                  
  └─md0                 ext4              RaidData 88936c12-1664-4d4d-9d33-8aabd1efe0ab                  
sdc                                                                                                      
└─sdc1                  linux_raid_member pc:0 96c2bb9a-8318-792c-843e-1e85820cd123                  
  └─md0                 ext4              RaidData 88936c12-1664-4d4d-9d33-8aabd1efe0ab                  
sdd                                                                                                      
└─sdd1                  linux_raid_member pc:0 96c2bb9a-8318-792c-843e-1e85820cd123                  
  └─md0                 ext4              RaidData 88936c12-1664-4d4d-9d33-8aabd1efe0ab   

Relevant output from the lsdrv tool:

PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
├scsi 1:0:0:0 ATA      WDC WD10EZEX-08W
│└sdb 931.51g [8:16] Empty/Unknown
│ └sdb1 931.51g [8:17] Empty/Unknown
│  └md0 1.82t [9:0] MD v1.2 raid5 (3) clean, 64k Chunk {None}
│                   Empty/Unknown
├scsi 2:0:0:0 ATA      WDC WD10EZRZ-22H
│└sdc 931.51g [8:32] Empty/Unknown
│ └sdc1 931.51g [8:33] Empty/Unknown
│  └md0 1.82t [9:0] MD v1.2 raid5 (3) clean, 64k Chunk {None}
│                   Empty/Unknown
└scsi 3:0:0:0 ATA      WDC WD10EZEX-00K
 └sdd 931.51g [8:48] Empty/Unknown
  └sdd1 931.51g [8:49] Empty/Unknown
   └md0 1.82t [9:0] MD v1.2 raid5 (3) clean, 64k Chunk {None}
                    Empty/Unknown

State of the RAID members:

sudo mdadm --examine /dev/sd[bcd]
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   1953521664 sectors at         2048 (type 83)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   1953521664 sectors at         2048 (type 83)
/dev/sdd:
   MBR Magic : aa55
Partition[0] :   1953521664 sectors at         2048 (type 83)
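
As I understand it, --examine against the bare disks only prints the MBR, since the md superblocks live on the partitions; the full per-member metadata would come from:

sudo mdadm --examine /dev/sd[bcd]1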

Error when trying to mount; I'm not sure what this error means:

sudo mount /dev/md0
mount: /media/dpm/Dades: mount(2) system call failed: Structure needs cleaning.
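
From what I could find, "Structure needs cleaning" is mount reporting EUCLEAN, i.e. the kernel detected filesystem corruption; the underlying ext4 complaint should be in the kernel log, presumably something like:

sudo dmesg | grep -i 'EXT4-fs'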

The superblock seems to be corrupted?:

sudo e2fsck -v /dev/md0
e2fsck 1.45.3 (14-Jul-2019)
Journal superblock is corrupt.
Fix<y>? cancelled!
e2fsck: The journal superblock is corrupt while checking journal for RaidData
e2fsck: Cannot proceed with file system check

RaidData: ********** WARNING: Filesystem still has errors **********

I'm not really sure what fsck would do here, or whether it's safe to run at this point; the backup-first plan I have in mind is sketched below. Any advice would be greatly appreciated, thanks!
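
The tentative plan mentioned above, before letting e2fsck fix anything: a report-only pass, then a raw copy of the array to fall back on (the backup path is just a placeholder):

# Report-only check: -n answers "no" to every prompt, -f forces a full pass
sudo e2fsck -n -f /dev/md0

# Raw backup of the whole array before any repair attempt
# (assumes ~2 TB free under /mnt/backup)
sudo dd if=/dev/md0 of=/mnt/backup/md0.img bs=64K status=progress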
