Cannot re-mount an existing RAID10 on Ubuntu

I have seen similar questions, but none of them solved my problem. After a power failure, one of the four disks in my RAID10 appears to have failed. I got the array active (degraded), but I cannot mount it; I always get the same error:

 mount: you must specify the filesystem type

So, when I run

  mdadm --detail /dev/md0

 /dev/md0:
         Version : 00.90.03
   Creation Time : Tue Sep  1 11:00:40 2009
      Raid Level : raid10
      Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
   Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
    Raid Devices : 4
   Total Devices : 3
 Preferred Minor : 0
     Persistence : Superblock is persistent

     Update Time : Mon Jun 11 09:54:27 2012
           State : clean, degraded
  Active Devices : 3
 Working Devices : 3
  Failed Devices : 0
   Spare Devices : 0

          Layout : near=2, far=1
      Chunk Size : 64K

            UUID : 1a02e789:c34377a1:2e29483d:f114274d
          Events : 0.166

     Number   Major   Minor   RaidDevice State
        0       8       16        0      active sync   /dev/sdb
        1       0        0        1      removed
        2       8       48        2      active sync   /dev/sdd
        3       8       64        3      active sync   /dev/sde

In /etc/mdadm/mdadm.conf I have

 # by default, scan all partitions (/proc/partitions) for MD superblocks.
 # alternatively, specify devices to scan, using wildcards if desired.
 DEVICE partitions

 # auto-create devices with Debian standard permissions
 CREATE owner=root group=disk mode=0660 auto=yes

 # automatically tag new arrays as belonging to the local system
 HOMEHOST <system>

 # instruct the monitoring daemon where to send mail alerts
 MAILADDR root

 # definitions of existing MD arrays
 ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d
 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d

 # This file was auto-generated...
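As an aside, ARRAY definitions like the ones in this file are normally regenerated from the running arrays rather than written by hand; a common approach (run as root) is:

```shell
# Print current array definitions in mdadm.conf format:
mdadm --detail --scan
# After reviewing the output, it can be appended to the config file:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```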

So my question is: how can I mount the md0 array (md1 is mounted without any problem) while preserving the existing data? One more thing: the fdisk -l command gives the following output:

 Disk /dev/sdb: 750.1 GB, 750156374016 bytes
 255 heads, 63 sectors/track, 91201 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
 Disk identifier: 0x660a6799

    Device Boot      Start         End      Blocks   Id  System
 /dev/sdb1   *           1       88217   708603021   83  Linux
 /dev/sdb2           88218       91201    23968980    5  Extended
 /dev/sdb5           88218       91201    23968948+  82  Linux swap / Solaris

 Disk /dev/sdc: 750.1 GB, 750156374016 bytes
 255 heads, 63 sectors/track, 91201 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
 Disk identifier: 0x0008f8ae

    Device Boot      Start         End      Blocks   Id  System
 /dev/sdc1               1       88217   708603021   83  Linux
 /dev/sdc2           88218       91201    23968980    5  Extended
 /dev/sdc5           88218       91201    23968948+  82  Linux swap / Solaris

 Disk /dev/sdd: 750.1 GB, 750156374016 bytes
 255 heads, 63 sectors/track, 91201 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
 Disk identifier: 0x4be1abdb

    Device Boot      Start         End      Blocks   Id  System

 Disk /dev/sde: 750.1 GB, 750156374016 bytes
 255 heads, 63 sectors/track, 91201 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
 Disk identifier: 0xa4d5632e

    Device Boot      Start         End      Blocks   Id  System

 Disk /dev/sdf: 750.1 GB, 750156374016 bytes
 255 heads, 63 sectors/track, 91201 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
 Disk identifier: 0xdacb141c

    Device Boot      Start         End      Blocks   Id  System

 Disk /dev/sdg: 750.1 GB, 750156374016 bytes
 255 heads, 63 sectors/track, 91201 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
 Disk identifier: 0xdacb141c

    Device Boot      Start         End      Blocks   Id  System

 Disk /dev/md1: 750.1 GB, 750156251136 bytes
 2 heads, 4 sectors/track, 183143616 cylinders
 Units = cylinders of 8 * 512 = 4096 bytes
 Disk identifier: 0xdacb141c

    Device Boot      Start         End      Blocks   Id  System
 Warning: ignoring extra data in partition table 5
 Warning: ignoring extra data in partition table 5
 Warning: ignoring extra data in partition table 5
 Warning: invalid flag 0x7b6e of partition table 5 will be corrected by w(rite)

 Disk /dev/md0: 1500.3 GB, 1500312502272 bytes
 255 heads, 63 sectors/track, 182402 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
 Disk identifier: 0x660a6799

    Device Boot      Start         End      Blocks   Id  System
 /dev/md0p1   *           1       88217   708603021   83  Linux
 /dev/md0p2           88218       91201    23968980    5  Extended
 /dev/md0p5   ?      121767      155317   269488144   20  Unknown

One more thing: when I run the mdadm --examine command, I get the following:

 mdadm -v --examine --scan /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sd
 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d
    devices=/dev/sdf

 ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde

md0 has only 3 active devices. Can anyone tell me how to fix this? If possible, I would prefer not to remove the failed drive. Any advice is appreciated.
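One detail worth probing before anything destructive (a hedged suggestion, not part of the original thread): the fdisk output shows a partition table on /dev/md0 itself, so the filesystem may live on a partition such as /dev/md0p1 rather than on the bare device, which would explain the "you must specify the filesystem type" error. A read-only check might look like:

```shell
# Probe the assembled array (read-only) for filesystem signatures.
# Device names are taken from the question; adjust as needed.
blkid /dev/md0 /dev/md0p1
# 'file -s' prints whatever signature it finds at the start of the device:
file -s /dev/md0p1
# If a filesystem is reported, try mounting that partition read-only first:
# mount -o ro /dev/md0p1 /mnt
```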

Answer 1

You are dealing with a lost RAID superblock. You will need to stop the array, zero the superblocks on its member devices, and then re-create the array.

When re-creating the array, two pieces of information must be correct:

  • Device order
  • Chunk size

You can refer to this wiki, which describes the detailed steps for recovering from RAID superblock loss.
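The steps above might be sketched as follows. This is only an illustration built from the values shown in the question (0.90 metadata, near=2 layout, 64K chunk, with `missing` standing in for the failed slot 1); the device order and chunk size must be verified against your own records before running anything like this on real data, since a re-create with the wrong geometry destroys the data.

```shell
# DESTRUCTIVE: illustration only; verify device order and chunk size first.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb /dev/sdd /dev/sde   # members listed by --detail
# Re-create with the same geometry the old array reported
# (raid10, 4 devices, near=2 layout, 64K chunks, 0.90 metadata);
# --assume-clean prevents an initial resync from overwriting data:
mdadm --create /dev/md0 --metadata=0.90 --level=10 --raid-devices=4 \
      --chunk=64 --layout=n2 --assume-clean \
      /dev/sdb missing /dev/sdd /dev/sde
```

After re-creating, check the filesystem read-only (for example with `fsck -n`) before mounting read-write.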
