Added incorrectly formatted new drives to a RAID 5, then removed them, and now the RAID 5 is broken

Thanks in advance for your help.

Summary: I added 3 new disks to my RAID 5 array. Then, thinking they had not actually been added, I removed them. Now the array no longer works. What can I do to recover it?

Details:

I formatted 3 new HDDs as "fd linux raid autodetect" and added them to an existing RAID 5 setup that was originally created with ext4. When I saw that the available size had not increased, I realized I had been inconsistent with my mkfs, so I hastily removed the 3 new drives (without thinking it through!) and ran mkfs.ext4 on them. After a reboot, the array no longer mounts. md had been working fine up to that point. I regret it...
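For reference, this is roughly what I ran, reconstructed from memory (the device names /dev/sdd1, /dev/sde1, /dev/sdf1 are placeholders for the three new partitions; I'm not certain of the exact names I used):

# add the three new partitions to the existing array
sudo mdadm --add /dev/md0 /dev/sdd1 /dev/sde1 /dev/sdf1
# grow the array to span all six members
sudo mdadm --grow /dev/md0 --raid-devices=6
# then, in a hurry, for each of the three new drives:
sudo mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
sudo mkfs.ext4 /dev/sdd1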

Since I was mounting the array at /home, I had to boot from a liveUSB to get in.

I tried force-assembling and force-creating with the three original drives, figuring that since the array size never actually changed, all the data should still be on those three disks. But nothing seems to work.
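The force-assemble attempt looked roughly like this, from memory (the --create attempt is shown in full further down):

sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1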

Would changing the number of raid devices to "3" be enough to restore my array? If so, how do I do that?

Any help would be very, very much appreciated.

Here is what I have tried and what I saw:

sudo mdadm --grow /dev/md0 --raid-devices=3
mdadm: /dev/md0 is not an active md array - aborting

ubuntu@ubuntu:~$ sudo mount /dev/md0
mount: can't find /dev/md0 in /etc/fstab or /etc/mtab

ubuntu@ubuntu:~$ sudo mdadm -D /dev/md0
mdadm: md device /dev/md0 does not appear to be active.


ubuntu@ubuntu:~$ sudo mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=eba025dd:a180e5f2:7b9be516:8710a212 name=ubuntu:0



ubuntu@ubuntu:~$ cat /proc/mdstat
Personalities : 
md0 : inactive sda1[0](S) sdc1[3](S) sdb1[1](S)
      5860537344 blocks super 1.2

unused devices: <none>


ubuntu@ubuntu:~$ sudo mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : eba025dd:a180e5f2:7b9be516:8710a212
           Name : ubuntu:0  (local to host ubuntu)
  Creation Time : Tue Dec 24 21:25:20 2013
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 19535119360 (9315.07 GiB 10001.98 GB)
  Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 44b40ccc:faf41272:21a6a41a:6f2c4798

    Update Time : Wed Dec 10 22:50:15 2014
       Checksum : 36b18224 - correct
         Events : 7953

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 0
    Array State : AAA... ('A' == active, '.' == missing)

The other 2 drives list essentially the same information. (Note that the superblocks still say Raid Devices : 6, with only the first three slots active in the Array State.)

So I tried something else:

ubuntu@ubuntu:~$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm: /dev/sda1 appears to contain an ext2fs file system
    size=1953513472K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sda1 appears to be part of a raid array:
    level=raid5 devices=6 ctime=Tue Dec 24 21:25:20 2013
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=1953513472K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid5 devices=6 ctime=Tue Dec 24 21:25:20 2013
mdadm: /dev/sdc1 appears to contain an ext2fs file system
    size=1953513472K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid5 devices=6 ctime=Tue Dec 24 21:25:20 2013
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

ubuntu@ubuntu:~$ sudo mount /dev/md0 /home
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so



ubuntu@ubuntu:~$ dmesg |tail -30
[173789.684714] md: bind<sdc1>
[173789.690771] async_tx: api initialized (async)
[173789.764322] raid6: sse2x1    3216 MB/s
[173789.832280] raid6: sse2x2    4045 MB/s
[173789.900231] raid6: sse2x4    4717 MB/s
[173789.900233] raid6: using algorithm sse2x4 (4717 MB/s)
[173789.900236] raid6: using ssse3x2 recovery algorithm
[173789.908445] md: raid6 personality registered for level 6
[173789.908450] md: raid5 personality registered for level 5
[173789.908452] md: raid4 personality registered for level 4
[173789.909108] md/raid:md0: device sdb1 operational as raid disk 1
[173789.909114] md/raid:md0: device sda1 operational as raid disk 0
[173789.909798] md/raid:md0: allocated 3282kB
[173789.909948] md/raid:md0: raid level 5 active with 2 out of 3 devices, algorithm 2
[173789.909952] RAID conf printout:
[173789.909954]  --- level:5 rd:3 wd:2
[173789.909956]  disk 0, o:1, dev:sda1
[173789.909959]  disk 1, o:1, dev:sdb1
[173789.909997] md0: detected capacity change from 0 to 4000792444928
[173789.910032] RAID conf printout:
[173789.910038]  --- level:5 rd:3 wd:2
[173789.910042]  disk 0, o:1, dev:sda1
[173789.910045]  disk 1, o:1, dev:sdb1
[173789.910047]  disk 2, o:1, dev:sdc1
[173789.910176] md: recovery of RAID array md0
[173789.910183] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[173789.910187] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[173789.910204] md: using 128k window, over a total of 1953511936k.
[173790.155769]  md0: unknown partition table
[173803.332895] EXT4-fs (md0): bad geometry: block count 2441889920 exceeds size of device (976755968 blocks)
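If I'm reading that last line correctly, the ext4 superblock still describes the old 6-disk geometry: 2441889920 blocks × 4 KiB ≈ 10 TB, matching the Array Size (10001.98 GB) in the mdadm -E output above, while the freshly created 3-disk array only provides 976755968 × 4 KiB ≈ 4 TB, matching the 4000792444928-byte capacity reported in dmesg.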
