MDADM RAID 10 recovery

I'm fairly sure two of the disks in my software RAID 1+0 are mostly failed. The wrong two, unfortunately. Power was lost during the scheduled Sunday resync.

sdc has some SMART errors. Before it failed, write speed had been maxed out throughout the recovery (which took five days). sdb is throwing I/O errors in dmesg.

[643133.480937] blk_update_request: I/O error, dev sdb, sector 264192 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[643133.482327] Buffer I/O error on dev md0, logical block 0, async page read
[643133.484141] sd 0:0:1:0: [sdb] tag#239 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK cmd_age=0s
[643133.484146] sd 0:0:1:0: [sdb] tag#239 CDB: Read(16) 88 00 00 00 00 00 00 04 08 80 00 00 00 08 00 00

I have a replacement disk on hand. Is there any path to data recovery?

My current thinking is to try (as one line):

mdadm /dev/md0 --re-add /dev/sdc -f /dev/sdb
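
Before acting on that, a read-only sanity check of every member is cheap: the event counters should agree on the devices that will remain in the array (in the --examine output further down they all read 742778), and the SMART verdict on the two suspect disks is worth a fresh look. A minimal check, assuming the members really are sdb, sdc, sdd and sdf and that smartmontools is installed:

# Read-only: compare event counters, roles and array state across the members
mdadm --examine /dev/sd[bcdf] | grep -E 'Events|Device Role|Array State'
# Read-only: current SMART health verdict for the two suspect disks
smartctl -H /dev/sdb
smartctl -H /dev/sdc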
mdadm - v4.1 - 2018-10-01
Linux muppet 5.10.0-0.bpo.3-amd64 #1 SMP Debian 5.10.13-1~bpo10+1 (2021-02-11) x86_64 GNU/Linux
mdadm --query --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Dec 12 18:44:46 2020
        Raid Level : raid10
        Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
     Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Oct 17 19:57:52 2021
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : muppet:0  (local to host muppet)
              UUID : 3994a888:d772756f:63befaa7:45346419
            Events : 742778

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       -       0        0        1      removed
       2       8       48        2      active sync set-A   /dev/sdd
       4       8       80        3      active sync set-B   /dev/sdf

       1       8       32        -      spare   /dev/sdc
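
A note on the layout shown above: with the near=2 layout on four devices, raid devices 0/1 and 2/3 are the mirror pairs. Device 1 is already removed and its partner, device 0 (sdb), is the disk producing read errors, which would explain why reads through md0 fail even though three devices are nominally active. The pairing can be re-checked read-only at any time:

mdadm --detail /dev/md0 | grep -E 'set-|removed'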
mdadm: No md superblock detected on /dev/sdb.
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x9
     Array UUID : 3994a888:d772756f:63befaa7:45346419
           Name : muppet:0  (local to host muppet)
  Creation Time : Sat Dec 12 18:44:46 2020
     Raid Level : raid10
   Raid Devices : 4

 Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=688 sectors
          State : clean
    Device UUID : 8038ceb4:fd44ce8d:96bcd0ab:ddc4c8a3

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Oct 17 19:57:52 2021
  Bad Block Log : 512 entries available at offset 40 sectors - bad blocks present.
       Checksum : fa5975d2 - correct
         Events : 742778

         Layout : near=2
     Chunk Size : 512K

   Device Role : spare
   Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3994a888:d772756f:63befaa7:45346419
           Name : muppet:0  (local to host muppet)
  Creation Time : Sat Dec 12 18:44:46 2020
     Raid Level : raid10
   Raid Devices : 4

 Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=688 sectors
          State : clean
    Device UUID : 679494f1:f315586f:07526923:21ede8c3

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Oct 17 19:57:52 2021
  Bad Block Log : 512 entries available at offset 40 sectors
       Checksum : b0624cdd - correct
         Events : 742778

         Layout : near=2
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3994a888:d772756f:63befaa7:45346419
           Name : muppet:0  (local to host muppet)
  Creation Time : Sat Dec 12 18:44:46 2020
     Raid Level : raid10
   Raid Devices : 4

 Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=688 sectors
          State : clean
    Device UUID : aa006689:9542308f:08c03842:dba207b7

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Oct 17 19:57:52 2021
  Bad Block Log : 512 entries available at offset 40 sectors
       Checksum : 79fa097f - correct
         Events : 742778

         Layout : near=2
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
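
One detail from the --examine output worth following up: sdc's superblock reports "bad blocks present" in its bad block log. The recorded entries can be listed read-only (supported for 1.x metadata), which helps judge how healthy sdc really is before relying on it for a rebuild:

mdadm --examine-badblocks /dev/sdc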
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] 
md0 : active raid10 sdc[1](S) sdf[4] sdd[2] sdb[0]
      7813772288 blocks super 1.2 512K chunks 2 near-copies [4/3] [U_UU]
      bitmap: 3/59 pages [12KB], 65536KB chunk

unused devices: <none>
mount /dev/md0 /mnt/md10/
mount: /mnt/md10: can't read superblock on /dev/md0.
fsck.ext4 -n /dev/md0
e2fsck 1.44.5 (15-Dec-2018)
fsck.ext4: Input/output error while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>
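
For reference on that fsck hint: the backup superblock offsets (8193, 32768, ...) depend on the filesystem's block size, and chasing them only makes sense once the array itself reads cleanly again. A non-destructive way to list where the backups should live, assuming the filesystem was created with default mke2fs options (-n is a dry run and writes nothing):

mke2fs -n /dev/md0
# Then retry the check read-only against one of the listed backups
e2fsck -n -b 32768 /dev/md0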

Answer 1

The dmesg errors turned out to be superficial - corrected by reseating the drive.

The failing drive was failed out and removed. The data was backed up and the drive was then hot-swapped. The array rebuilt in 2-3 hours without any problems.
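
For anyone following the same route, a rough sketch of that sequence in mdadm terms (the device name is illustrative; substitute the member actually being replaced):

# Mark the dying member failed and pull it from the array
mdadm /dev/md0 --fail /dev/sdX
mdadm /dev/md0 --remove /dev/sdX
# Hot-swap the physical disk, then add the replacement and watch the rebuild
mdadm /dev/md0 --add /dev/sdX
cat /proc/mdstat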
