How do I recover a broken software RAID5 array (mdadm)?

Before installing kvm and qemu and rebooting, I had a working RAID5 array. After the reboot the system failed to boot because /dev/md0 could not be mounted.

Running cat /proc/mdstat gives:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdb[2](S) sda[0](S)
      1953263024 blocks super 1.2
       
unused devices: <none>

My array should contain sda, sdb and sdc, but it looks like only sda and sdb are present, and both are now flagged (S), i.e. listed as spares in an inactive array.

Examining each disk gives:

sudo mdadm --examine /dev/sda
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e25ff5c6:90186486:4f001b87:27056b4a
           Name : SAN1:0  (local to host SAN1)
  Creation Time : Sat Jul 16 17:13:01 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
  Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=2480 sectors
          State : clean
    Device UUID : 16904f75:c2ddd8b0:75025adb:0a09effa

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 20 18:59:56 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 8d2ba8a7 - correct
         Events : 4167

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

sudo mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e25ff5c6:90186486:4f001b87:27056b4a
           Name : SAN1:0  (local to host SAN1)
  Creation Time : Sat Jul 16 17:13:01 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
  Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=2480 sectors
          State : clean
    Device UUID : 02a449d0:be934563:ff4293f3:42e4ed52

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 20 18:59:56 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : aca8e53 - correct
         Events : 4167

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

sudo mdadm --examine /dev/sdc
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)
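
This means mdadm found a partition table on sdc instead of an md superblock: the type ee entry is the protective MBR of a GPT disk. A quick check, assuming the member might live on a partition rather than on the whole disk (the sdc1 name below is hypothetical):

lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdc    # does sdc carry partitions?
sudo mdadm --examine /dev/sdc1             # hypothetical: only if such a partition exists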

sudo mdadm --examine --scan shows:

ARRAY /dev/md/0  metadata=1.2 UUID=e25ff5c6:90186486:4f001b87:27056b4a name=SAN1:0

My mdadm.conf looks like this:

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#level=raid5 
#num-devices=3

# This configuration was auto-generated on Fri, 15 Jul 2022 20:17:00 +0100 by mkconf
ARRAY /dev/md0 uuid=e25ff5c6:90186486:4f001b87:27056b4a

Any ideas on how to repair the array? Ideally I'd like to do this without having to restore from backup, if possible.

I tried:

sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc --verbose

and got:

mdadm: looking for devices for /dev/md0
mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has no superblock - assembly aborted
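
Since sda and sdb still carry matching superblocks (same Array UUID and the same event count, 4167), one way forward is to assemble the array degraded from those two members alone and leave sdc out for now; a minimal sketch:

sudo mdadm --stop /dev/md0                               # clear the inactive remnant
sudo mdadm --assemble --run /dev/md0 /dev/sda /dev/sdb   # --run starts it with 2 of 3 drives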

Update 1: OK, I ran mdadm -D /dev/md1 and it came back as degraded. That's not too bad; I just need to add the third disk back.
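
With the array running degraded, re-adding the third disk and watching the rebuild would look roughly like this (assuming the array ends up as /dev/md0, as everywhere else in this post):

sudo mdadm --add /dev/md0 /dev/sdc   # kicks off a resync onto sdc
watch cat /proc/mdstat               # monitor rebuild progress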

Update 2: After a seemingly successful rebuild, I rebooted and hit the same problem again. To fix it once more, I tried:

alex@SAN1:/etc/apt$ sudo mdadm --assemble /dev/md0
alex@SAN1:/etc/apt$ sudo mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 2
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 2

              Name : SAN1:0  (local to host SAN1)
              UUID : e25ff5c6:90186486:4f001b87:27056b4a
            Events : 6058

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb
alex@SAN1:/etc/apt$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdb[2](S) sda[0](S)
      1953263024 blocks super 1.2
       
unused devices: <none>
alex@SAN1:/etc/apt$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
alex@SAN1:/etc/apt$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>
alex@SAN1:/etc/apt$ sudo mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
alex@SAN1:/etc/apt$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives (out of 3).
alex@SAN1:/etc/apt$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid5 sda[0] sdb[2]
      1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>
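
The active (auto-read-only) state is normal for a freshly assembled array that has not been written to yet; it clears on the first write, or it can be cleared explicitly:

sudo mdadm --readwrite /dev/md0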

Update 2a:

I tried the following:

sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): x

Expert command (? for help): z
About to wipe out GPT on /dev/sdc. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): y
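
An equivalent and arguably simpler way to clear every stale signature from the disk is wipefs (destructive; this assumes sdc holds no data you still need):

sudo wipefs -a /dev/sdc   # removes the GPT/MBR and any filesystem or RAID signatures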

I added it back to the array again; let's see what happens after the ~2 hour rebuild and another reboot. :(

Any idea what is wrong with disk 3?

Thanks

Answer 1

OK, I think I found the problem.

After running the following command:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
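
Note that tee -a appends, so the stale ARRAY line with the lowercase uuid= should be removed or commented out, otherwise mdadm.conf ends up with two entries for the same array; a sketch matching the old line shown below:

sudo sed -i '/^ARRAY \/dev\/md0 uuid=/ s/^/#/' /etc/mdadm/mdadm.conf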

I now have:

ARRAY /dev/md0 metadata=1.2 name=SAN1:0 UUID=e25ff5c6:90186486:4f001b87:27056b4a

instead of:

ARRAY /dev/md0 uuid=e25ff5c6:90186486:4f001b87:27056b4a
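
As the header of mdadm.conf itself notes, the initramfs keeps its own copy of this file, so refresh it after the change:

sudo update-initramfs -u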

After a reboot, everything seems fine.

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid5 sda[0] sdb[2] sdc[3]
      1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>

I hope this helps someone else in the same situation.
