mdadm - not enough to start the array

I am trying to recover data from a MyBookLiveDuo 3TB*2 RAID1 setup that no longer responds to http/ssh access. I pulled one drive out of the enclosure and connected it to a Linux laptop via SATA. I have run the following.

dmesg says it is /dev/sdc:

[125141.929807] ata4: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
[125141.929811] ata4: irq_stat 0x00000040, connection status changed
[125141.929813] ata4: SError: { CommWake DevExch }
[125141.929827] ata4: hard resetting link
[125147.695101] ata4: link is slow to respond, please be patient (ready=0)
[125151.729658] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[125151.909228] ata4.00: ATA-9: WDC WD30EZRX-00D8PB0, 80.00A80, max UDMA/133
[125151.909231] ata4.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[125151.909888] ata4.00: configured for UDMA/133
[125151.925597] ata4: EH complete
[125151.925652] scsi 3:0:0:0: Direct-Access     ATA      WDC WD30EZRX-00D 0A80 PQ: 0 ANSI: 5
[125151.925828] sd 3:0:0:0: [sdc] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
[125151.925830] sd 3:0:0:0: [sdc] 4096-byte physical blocks
[125151.925833] sd 3:0:0:0: Attached scsi generic sg2 type 0
[125151.925971] sd 3:0:0:0: [sdc] Write Protect is off
[125151.925973] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[125151.925995] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[125152.002603]  sdc: sdc1 sdc2 sdc3 sdc4
[125152.003134] sd 3:0:0:0: [sdc] Attached SCSI disk
[125152.220477] md: bind<sdc4>
[125152.232731] md: bind<sdc3>
[125152.233892] md/raid1:md126: active with 1 out of 2 mirrors
[125152.233912] md126: detected capacity change from 0 to 512741376
[125152.234269]  md126: unknown partition table

Trying to mount sdc4 directly does not work:

$ sudo mount /dev/sdc4 /home/xxx/wdc -t auto
mount: unknown filesystem type 'linux_raid_member'
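
As I understand it, the 'linux_raid_member' type means blkid's signature scan sees an md superblock on the partition rather than a mountable filesystem, which should be easy to confirm with:

$ sudo blkid /dev/sdc4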

Mdstat shows only sdc3 as active, while sdc4 is inactive:

$ sudo cat /proc/mdstat
Personalities : [linear] [raid1] 
md126 : active (auto-read-only) raid1 sdc3[2]
      500724 blocks super 1.0 [2/1] [_U]

md127 : inactive sdc4[0](S)
      2925750264 blocks super 1.0

unused devices: <none>
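
A possible next step would be to force the inactive array to start, though I assume it will still refuse while the second member is missing:

$ sudo mdadm --run /dev/md127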

Parted shows sdc4 with no file system:

$ sudo parted -l
Model: ATA WDC WD30EZRX-00D (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 3      15.7MB  528MB   513MB                primary  raid
 1      528MB   2576MB  2048MB  ext3         primary  raid
 2      2576MB  4624MB  2048MB  ext3         primary  raid
 4      4624MB  3001GB  2996GB               primary  raid

Gdisk 'v' (verify) finds no problems:

$ sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): FA502922-25C1-4759-AAF8-3D1DDA73F5C4
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 31597 sectors (15.4 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1         1032192         5031935   1.9 GiB     FD00  primary
   2         5031936         9031679   1.9 GiB     FD00  primary
   3           30720         1032191   489.0 MiB   FD00  primary
   4         9031680      5860532223   2.7 TiB     FD00  primary

Command (? for help): v

No problems found. 31597 free sectors (15.4 MiB) available in 2
segments, the largest of which is 30686 (15.0 MiB) in size.

Command (? for help): q

Trying to assemble with a scan shows only /dev/sdc3 coming up active:

$ sudo mdadm --stop /dev/md12[567]
mdadm: stopped /dev/md126
mdadm: stopped /dev/md127
$ sudo cat /proc/mdstat
Personalities : [linear] [raid1] 
unused devices: <none>
$ sudo mdadm --assemble --scan
mdadm: /dev/md/MyBookLiveDuo:3 assembled from 1 drive - not enough to start the array.
mdadm: /dev/md/MyBookLiveDuo:2 has been started with 1 drive (out of 2).
mdadm: /dev/md/MyBookLiveDuo:3 assembled from 1 drive - not enough to start the array.
$ sudo cat /proc/mdstat
Personalities : [linear] [raid1] 
md127 : active raid1 sdc3[2]
      500724 blocks super 1.0 [2/1] [_U]

unused devices: <none>

E2fsck reports a bad magic number in the superblock:

$ sudo e2fsck /dev/sdc3
e2fsck 1.42.9 (4-Feb-2014)
/dev/sdc3 is in use.
e2fsck: Cannot continue, aborting.


$ sudo e2fsck /dev/sdc4
e2fsck 1.42.9 (4-Feb-2014)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdc4

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

$ 
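
If it comes to hunting for backup superblocks, my understanding is that a dry run of mke2fs (the -n flag only simulates and writes nothing) lists the offsets a filesystem created with default options would use; those numbers are only meaningful if the original mkfs parameters (block size in particular) match:

$ sudo mke2fs -n /dev/sdc4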

Trying to mount md127 thinks it is NTFS???

$ sudo mount /dev/md127 /home/xxx/wdc -t auto
NTFS signature is missing.
Failed to mount '/dev/md127': Invalid argument
The device '/dev/md127' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
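
My guess is that mount found no recognizable filesystem signature and fell through to the NTFS helper; a low-level probe of the assembled device should show what, if anything, is actually there:

$ sudo blkid -p /dev/md127
$ sudo hexdump -C -n 4096 /dev/md127 | head -n 20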

Any suggestions on how to go further with mounting /dev/sdc4 and recovering the data?

Edit 1: The output of examining sdc4 is below. It reports Raid Level = linear.

$ sudo mdadm --examine /dev/sdc4
/dev/sdc4:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 374e689e:3bfd050c:ab0b0dce:2d50f5fd
           Name : MyBookLiveDuo:3
  Creation Time : Mon Sep 16 14:53:47 2013
     Raid Level : linear
   Raid Devices : 2

 Avail Dev Size : 5851500528 (2790.21 GiB 2995.97 GB)
  Used Dev Size : 0
   Super Offset : 5851500528 sectors
          State : clean
    Device UUID : 9096f74b:0a8f2b61:93347be3:6d3b6c1b

    Update Time : Mon Sep 16 14:53:47 2013
       Checksum : 77aa5963 - correct
         Events : 0

       Rounding : 0K

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)

The output of examining sdc3, which mdstat reports as active, is below. It reports Raid Level = raid1.

$ sudo mdadm --examine /dev/sdc3
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 7c040c5e:9c30ac6d:e534a129:20457e22
           Name : MyBookLiveDuo:2
  Creation Time : Wed Dec 31 19:01:40 1969
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1001448 (489.07 MiB 512.74 MB)
     Array Size : 500724 (489.07 MiB 512.74 MB)
   Super Offset : 1001456 sectors
          State : clean
    Device UUID : 1d9fe3e3:d5ac7387:d9ededba:88ca24a5

    Update Time : Sun Jul  3 11:53:31 2016
       Checksum : 31589560 - correct
         Events : 101


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)
$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md126 : active (auto-read-only) raid1 sdc3[2]
      500724 blocks super 1.0 [2/1] [_U]

md127 : inactive sdc4[0](S)
      2925750264 blocks super 1.0

unused devices: <none>

The partitions are:

$ cat /proc/partitions 
major minor  #blocks  name

   8        0 1953514584 sda
   8        1     102400 sda1
   8        2 1953411072 sda2
   8       16 1953514584 sdb
   8       17     248832 sdb1
   8       18          1 sdb2
   8       21 1953263616 sdb5
 252        0 1953261568 dm-0
 252        1 1919635456 dm-1
 252        2   33488896 dm-2
   8       32 2930266584 sdc
   8       33    1999872 sdc1
   8       34    1999872 sdc2
   8       35     500736 sdc3
   8       36 2925750272 sdc4
   9      126     500724 md126

Edit 2: (a different RAID disk from another enclosure)

Using a good disk from a different WDLiveDuo enclosure, configured the same way as RAID1 (2*3TB disks), examining sdc3 and sdc4 shows Raid Level = raid1 for both. So on the bad disk listed above, sdc3 shows the correct Raid Level = raid1, but somehow sdc4 did not end up with the correct RAID level. Is there a way to change/fix the RAID level to raid1 so that I can then assemble it and get the data off the bad disk?

$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md126 : inactive sdc4[2](S)
      2925750136 blocks super 1.0

md127 : inactive sdc3[1](S)
      500724 blocks super 1.0

unused devices: <none>
$
$ cat /proc/partitions 
major minor  #blocks  name

   8        0 1953514584 sda
   8        1     102400 sda1
   8        2 1953411072 sda2
   8       16 1953514584 sdb
   8       17     248832 sdb1
   8       18          1 sdb2
   8       21 1953263616 sdb5
 252        0 1953261568 dm-0
 252        1 1919635456 dm-1
 252        2   33488896 dm-2
   8       32 2930266584 sdc
   8       33    1999872 sdc1
   8       34    1999872 sdc2
   8       35     500736 sdc3
   8       36 2925750272 sdc4
$
$ sudo mdadm --examine /dev/sdc3
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 7c040c5e:9c30ac6d:e534a129:20457e22
           Name : MyBookLiveDuo:2
  Creation Time : Wed Dec 31 19:01:40 1969
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1001448 (489.07 MiB 512.74 MB)
     Array Size : 500724 (489.07 MiB 512.74 MB)
   Super Offset : 1001456 sectors
          State : clean
    Device UUID : e0963cfc:7ba16214:94e24c90:32988d39

    Update Time : Mon Jul  4 13:20:48 2016
       Checksum : 412bab99 - correct
         Events : 288


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)
$
$ sudo mdadm --examine /dev/sdc4
/dev/sdc4:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : ac48cc98:1d450838:dd0b0364:61b3168e
           Name : MyBookLiveDuo:3
  Creation Time : Sat Jul  2 19:57:43 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5851500272 (2790.21 GiB 2995.97 GB)
     Array Size : 2925750136 (2790.21 GiB 2995.97 GB)
   Super Offset : 5851500528 sectors
          State : clean
    Device UUID : 1233b655:2f5fd745:c1c71658:f05af045

    Update Time : Mon Jul  4 13:21:15 2016
       Checksum : b83b08dc - correct
         Events : 54432


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)
$
$ sudo mdadm --stop /dev/md12[567]
mdadm: stopped /dev/md126
mdadm: stopped /dev/md127
$
$ sudo mdadm --assemble --scan
mdadm: /dev/md/MyBookLiveDuo:3 has been started with 1 drive (out of 2).
mdadm: /dev/md/MyBookLiveDuo:2 has been started with 1 drive (out of 2).
$
$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md126 : active raid1 sdc3[1]
      500724 blocks super 1.0 [2/1] [_U]

md127 : active raid1 sdc4[2]
      2925750136 blocks super 1.0 [2/1] [_U]

unused devices: <none>
$
$ sudo parted -l /dev/sdc
Model: ATA WDC WD30EFRX-68E (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 3      15.7MB  528MB   513MB                primary  raid
 1      528MB   2576MB  2048MB  ext3         primary  raid
 2      2576MB  4624MB  2048MB  ext3         primary  raid
 4      4624MB  3001GB  2996GB  ext4         primary  raid
Error: /dev/md126: unrecognised disk label                                

Model: Linux Software RAID Array (md)
Disk /dev/md127: 2996GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  2996GB  2996GB  ext4
$
$ sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): BDE03D26-EB85-4348-A4E0-A229FC01EE93
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 31597 sectors (15.4 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1         1032192         5031935   1.9 GiB     FD00  primary
   2         5031936         9031679   1.9 GiB     FD00  primary
   3           30720         1032191   489.0 MiB   FD00  primary
   4         9031680      5860532223   2.7 TiB     FD00  primary

Command (? for help): v

No problems found. 31597 free sectors (15.4 MiB) available in 2
segments, the largest of which is 30686 (15.0 MiB) in size.

Command (? for help): q
$
$ sudo mount /dev/md127 /home/xxx/wdc -t ext4
mount: wrong fs type, bad option, bad superblock on /dev/md127,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Answer 1

The output of mdadm --examine /dev/sdc4 shows why the raid device does not come online:

 Raid Level : linear

A linear RAID device requires all of the devices in the RAID group in order to start.

https://raid.wiki.kernel.org/index.php/RAID_setup#Linear_mode
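
If the second member of that linear array is gone, one possible recovery path, a rough sketch rather than a tested procedure, is to rewrite the md superblock on sdc4 as a degraded RAID1 with the same 1.0 metadata, on the assumption (supported by the good enclosure in Edit 2) that sdc4 really holds a single RAID1 leg. Recreating a superblock is destructive to the existing md metadata, so image the disk first; the paths below are examples:

$ sudo ddrescue /dev/sdc /mnt/backup/sdc.img /mnt/backup/sdc.map   # full image before touching anything
$ sudo mdadm --stop /dev/md127                                     # stop the stale, inactive array
$ sudo mdadm --create /dev/md127 --assume-clean --metadata=1.0 \
       --level=1 --raid-devices=2 --bitmap=none /dev/sdc4 missing  # degraded RAID1; --assume-clean skips the resync
$ sudo mount -o ro /dev/md127 /home/xxx/wdc                        # read-only mount to copy the data off

With 1.0 metadata the superblock sits at the end of the partition, so --create with --assume-clean should leave the data area alone; it may also be worth passing --size= to match the Array Size reported on the good disk (2925750136) so the recreated array cannot come out smaller than the ext4 filesystem inside it. If the mount still fails, the ext4 superblock at the start of sdc4 may itself be damaged (as the earlier e2fsck run suggests) and a backup superblock or further filesystem repair would be needed.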
