MDADM RAID 0 won't mount: one drive has a bad superblock

I'm fairly new to Linux and software RAID, but I've run into a problem I hope you can help me with.

A few days ago the system did not shut down cleanly, and since then my mdadm software RAID can no longer be mounted: I get an error saying that /dev/sdb1 (part of my RAID 0 array) has a bad superblock. I chose to skip the mount, since my system does not depend on it, so I can still log in and run some tests.

My array consists of two disks (sdb1 and sde1) set up as RAID 0. The two drives are identical.

When I run fsck /dev/sdb1 I get an error saying my superblock claims more blocks than the physical drive has:

$ fsck /dev/sdb1
fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
The filesystem size (according to the superblock) is 488380672 blocks
The physical size of the device is 244190390 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? 
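For what it's worth, the two block counts fsck reports differ by almost exactly a factor of two (a quick sanity check on the numbers from the output above):

```python
# Block counts taken from the fsck output above (4 KiB ext4 blocks)
superblock_blocks = 488380672   # size according to the ext4 superblock
device_blocks = 244190390       # physical size of /dev/sdb1 alone

# The superblock describes a filesystem almost exactly twice as large
# as this single partition -- i.e. the size of the whole two-disk RAID0
print(superblock_blocks / device_blocks)  # very close to 2.0
```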

My other drive seems fine, but I now get the following error on it, which I assume is related to sdb1 being missing:

$ fsck /dev/sde1
fsck from util-linux 2.20.1
fsck: fsck.linux_raid_member: not found
fsck: error 2 while executing fsck.linux_raid_member for /dev/sde1

Everything looks fine in fdisk, or at least the same as before.

$ fdisk -l

Disk /dev/sda: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a091a

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758   390721535   195109889    5  Extended
/dev/sda5          501760   390721535   195109888   8e  Linux LVM

Disk /dev/mapper/Server1-root: 193.6 GB, 193646821376 bytes
255 heads, 63 sectors/track, 23542 cylinders, total 378216448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/Server1-root doesn't contain a valid partition table

Disk /dev/mapper/Server1-swap_1: 2143 MB, 2143289344 bytes
255 heads, 63 sectors/track, 260 cylinders, total 4186112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/Server1-swap_1 doesn't contain a valid partition table

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
78 heads, 63 sectors/track, 397542 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000300

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  1953525167   976761560   fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
24 heads, 63 sectors/track, 1292014 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  1953525167   976761560   83  Linux

WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdd: 500.1 GB, 500107862016 bytes
81 heads, 63 sectors/track, 191411 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048   976773167   488385560   83  Linux

WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
78 heads, 63 sectors/track, 397542 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000300

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048  1953525167   976761560   fd  Linux raid autodetect

When I run dumpe2fs /dev/sdb1 I get:

dumpe2fs 1.42 (29-Nov-2011)
Filesystem volume name:   <none>
Last mounted on:          /media/raid0
Filesystem UUID:          52e0e3eb-40d7-49fa-9b35-be6513a782d2
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         not clean with errors
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122101760
Block count:              488380672
Reserved block count:     4883806
Free blocks:              166355414
Free inodes:              121871448
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      907
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              128
RAID stripe width:        256
Flex block group size:    16
Filesystem created:       Sun Apr 22 21:57:36 2012
Last mount time:          Thu May  3 00:01:08 2012
Last write time:          Tue May  8 20:33:15 2012
Mount count:              24
Maximum mount count:      35
Last checked:             Sun Apr 22 21:57:36 2012
Check interval:           15552000 (6 months)
Next check after:         Fri Oct 19 21:57:36 2012
Lifetime writes:          1809 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Default directory hash:   half_md4
Directory Hash Seed:      54b59b52-4cfc-4bea-8c5a-5fc730317f4f
Journal backup:           inode blocks

followed by a whole lot of these:

Group 0: (Blocks 0-32767) [ITABLE_ZEROED]
  Checksum 0x6848, unused inodes 0
  Primary superblock at 0, Group descriptors at 1-117
  Reserved GDT blocks at 118-1024
  Block bitmap at 1025 (+1025), Inode bitmap at 1041 (+1041)
  Inode table at 1057-1568 (+1057)
  23517 free blocks, 8182 free inodes, 1 directories

When I run cat /proc/mdstat I get:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdb[0](S)
      976761560 blocks super 1.2

md127 : inactive sde1[1](S)
      976760536 blocks super 1.2

unused devices: <none>

So all of a sudden I now have two RAID arrays...

I have already tried the suggested fix of using a backup superblock, but no luck: the error that the superblock is larger than the physical drive persists.

Can you help me out? I'm now quite afraid that I have lost my data.

EDIT: added the contents of my mdadm.conf file

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
# MAILADDR

# definitions of existing MD arrays
#ARRAY /dev/md/0 metadata=1.2 UUID=7658dc76:c33da511:d40c5dee:c5d5143d name=Server1:0

# This file was auto-generated on Fri, 27 Apr 2012 18:38:03 +0200
# by mkconf $Id$
ARRAY /dev/md/0 metadata=1.2 UUID=7658dc76:c33da511:d40c5dee:c5d5143d name=Server1:0
ARRAY /dev/md/0 metadata=1.2 UUID=3720f7a5:ae73fb52:deee0813:677105ae name=Server1:0

Answer 1

Stop running fsck on the RAID members. There should not be a filesystem on them, and you can easily destroy the RAID superblocks that way.
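As a sketch of the direction I would take (device names taken from your output; these commands need root, and a RAID0 with a damaged member is fragile, so make sure you understand each step before running it):

```shell
# Stop the two half-assembled, inactive arrays seen in /proc/mdstat
mdadm --stop /dev/md0
mdadm --stop /dev/md127

# Inspect the mdadm (RAID) superblocks on the members -- note that
# md0 currently contains the whole disk sdb, not the partition sdb1
mdadm --examine /dev/sdb /dev/sdb1 /dev/sde1

# If both members carry matching RAID superblocks, reassemble, and
# only then run fsck -- against the MD device, never against a member
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sde1
fsck /dev/md0
```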

First of all, your RAID setup really does have a problem; your partition table should look like this:

root@mark21:~# fdisk -l

Disk /dev/sda: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000ffc4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048  1240233983   620115968   fd  Linux raid autodetect

It looks like you formatted the sd devices with a filesystem after making them part of the RAID set, which is completely wrong. You should format and use the MD device instead.
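For comparison, this is roughly how a RAID0 like yours should have been built in the first place, with the filesystem on the MD device. Note that mdadm --create wipes the members, so this is not a recovery step:

```shell
# Build the RAID0 array from the two partitions
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sde1

# Put the filesystem on the array device, not on the members
mkfs.ext4 /dev/md0

# Record the array so it assembles on boot, then mount it
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
mount /dev/md0 /media/raid0
```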
