wrong fs type, bad option, bad superblock on /dev/mdXY

With expensive commercial software I could read the data, but I have not bought it because it is much too expensive for me. Since I am quite sure the disks have never been reformatted, I should be able to read the data easily.

This is what I am going to do:

  • Physically move 2 of the disks (from the 4-disk hardware RAID 10 array) into a Linux desktop PC and read the data there.
  • Find a software RAID solution that can do this. I hope mdadm can do it (see the sketch right after this list).
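
A hedged aside before the --create below: it is usually safer to first look, read-only, at whatever md metadata is already on the members (a minimal sketch, assuming the two moved disks show up as /dev/sdf2 and /dev/sdd2):

# mdadm --examine /dev/sdf2
# mdadm --examine /dev/sdd2

--examine only reads the member superblocks and prints level, layout, chunk size, data offset and the array UUID, which is normally enough to reconstruct the original assembly parameters without writing anything to the disks.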

Creating the RAID works:

# mdadm --create /dev/md/md_27tb --level=0 --raid-devices=2 /dev/sdf2 /dev/sdd2
mdadm: /dev/sdf2 appears to be part of a raid array: level=raid10 devices=4 ctime=Sun Nov 3 01:19:11 2019
mdadm: /dev/sdd2 appears to be part of a raid array: level=raid10 devices=4 ctime=Sun Nov 3 01:19:11 2019
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/md_27tb started.

Unfortunately, I could not read the data. It would not mount.

Then I read "mount: wrong fs type, bad option, bad superblock" and tried:

# mount -t ext4 /dev/md/md_27tb /mnt/md_27tb

mount: /mnt/md_27tb: wrong fs type, bad option, bad superblock on /dev/md126, missing codepage or helper program, or other error.
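
Before going after the ext4 superblock, it might be worth checking whether any filesystem signature is visible on the assembled device at all (a hedged sketch; blkid prints nothing when it recognises no signature):

# blkid /dev/md/md_27tb
# lsblk -f /dev/md/md_27tb

If neither shows a filesystem type for the md device, the problem is more likely a wrong array geometry (level, member order, data offset) than a damaged superblock.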

# fsck /dev/md/md_27tb

fsck from util-linux 2.34
e2fsck 1.45.4 (23-Sep-2019)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md126
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

# e2fsck -b 32768 /dev/md/md_27tb

e2fsck: Bad magic number in super-block 

# e2fsck -b 8193 /dev/md/md_27tb

e2fsck: Bad magic number in super-block 
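
The backup-superblock numbers 8193 and 32768 only match certain block sizes. A common way to list where the backups would sit for a device of this size is a dry run of mke2fs; the -n flag is supposed to only simulate and print, not format, but double-check it before pointing anything at the real device:

# mke2fs -n /dev/md/md_27tb

If the array geometry is wrong, though, none of the backup superblocks will be found either.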

Maybe I am assuming the wrong filesystem.

I read here:

https://forums.lenovo.com/t5/Lenovo-Iomega-Networking-Storage/File-system-error-on-IX4-300d/td-p/1407487

Maybe it is ext3?

Update, important information:

# file -sk /dev/md/md_27tb
/dev/md/md_27tb: symbolic link to ../md126

# fdisk -l /dev/md/md_27tb
Disk /dev/md/md_27tb: 5.43 TiB, 5957897682944 bytes, 11636518912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

# file -skL /dev/md/md_27tb
/dev/md/md_27tb: data
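
file reporting just "data" means it found no known signature at the start of the device. A hedged cross-check is wipefs in no-act mode, which scans for any known signature (filesystem, LVM PV, md member) without erasing anything:

# wipefs -n /dev/md/md_27tb

If this also finds nothing, the two members are probably not assembled with their original geometry.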

Further update, important information:

vgscan scans all supported LVM block devices in the system for VGs. 

# vgscan
  Reading volume groups from cache.
  Found volume group "md0_vg" using metadata type lvm2

# pvscan
  PV /dev/md127   VG md0_vg          lvm2 [19.98 GiB / 0    free]
  Total: 1 [19.98 GiB] / in use: 1 [19.98 GiB] / in no VG: 0 [0   ]


# vgdisplay
  --- Volume group ---
  VG Name               md0_vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No 3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.98 GiB
  PE Size               4.00 MiB
  Total PE              5115
  Alloc PE / Size       5115/19.98 GiB
  Free  PE / Size       0/0   
  VG UUID               fYicLg-jFJr-trfJ-3HvH-LWl4-tCci-fI
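
The 19.98 GiB md0_vg is clearly only the small system volume group of the NAS, not the 5.4 TiB data volume. Presumably a second volume group will only show up once the large array built from the *2 partitions is assembled with its original geometry; at that point a re-scan should find it (sketch):

# pvscan
# vgscan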

Update, Wednesday 27 November, 22:30

After a long automatic rebuild that took about a day, I got the message "Data protection rebuild completed" on Wednesday 27 November at 22:30. Now that I am sure the data on the two disks is consistent again, I can keep trying.

# mdadm --assemble --force /dev/md/md_27tb /dev/sdf2 /dev/sdd2
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
[seeh-pc seeh]# 
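
mdadm reporting "busy" usually means the kernel has already auto-assembled (or half-assembled) another array that is holding the members. A quick hedged check of what currently owns sdd2 and sdf2:

# cat /proc/mdstat
# lsblk /dev/sdd2 /dev/sdf2

Whatever array they belong to has to be stopped with mdadm --stop before a new --assemble can claim them.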

Update 191128-090222 (by the way: the exact time is certainly of no interest here): I wish md126 were not there.

$ lsblk
....

sdd           8:48   0   2.7T  0 disk  
├─sdd1        8:49   0    20G  0 part  
│ └─md126     9:126  0    20G  0 raid1 
│   ├─md0_vg-BFDlv
│   │       253:0    0     4G  0 lvm   
│   └─md0_vg-vol1
│           253:1    0    16G  0 lvm   
└─sdd2        8:50   0   2.7T  0 part  
sde           8:64   0 931.5G  0 disk  
├─sde1        8:65   0  60.6G  0 part  
└─sde2        8:66   0   871G  0 part  
sdf           8:80   0   2.7T  0 disk  
├─sdf1        8:81   0    20G  0 part  
│ └─md126     9:126  0    20G  0 raid1 
│   ├─md0_vg-BFDlv
│   │       253:0    0     4G  0 lvm   
│   └─md0_vg-vol1
│           253:1    0    16G  0 lvm   
└─sdf2        8:82   0   2.7T  0 part 
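
If md126 (the NAS's 20 GiB system raid1 with the md0_vg logical volumes on top) is really unwanted on this PC, it can probably be taken down by deactivating its volume group first and then stopping the array (a hedged sketch, assuming nothing from md0_vg is mounted):

# vgchange -an md0_vg
# mdadm --stop /dev/md126

Leaving it running is harmless, though; md126 only uses the sdd1/sdf1 partitions and does not block the sdd2/sdf2 members.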

Update 191128-123821: the raid1 does not look right to me:

$ cat /proc/mdstat
Personalities : [raid1] 
md126 : active (auto-read-only) raid1 sdd1[0] sdf1[2]
      20955008 blocks super 1.1 [2/2] [UU]

md127 : inactive sdd2[6](S) sdf2[7](S)
      5818261504 blocks super 1.1

Update 191128-144807:

This looks like a success.

# mdadm --assemble --force /dev/md127 /dev/sdf2 /dev/sdd2
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
[seeh-pc seeh]# mdadm --stop md127
mdadm: stopped md127
[seeh-pc seeh]# mdadm --assemble md127 --run /dev/sdf2 /dev/sdd2
mdadm: /dev/md/md127 has been started with 2 drives (out of 4).
[seeh-pc seeh]# 


# cat /proc/mdstat
Personalities : [raid1] [raid10] 
md1 : active raid10 sdd2[6] sdf2[7]
      5818260480 blocks super 1.1 512K chunks 2 near-copies [4/2] [_U_U]

md126 : active (auto-read-only) raid1 sdd1[0] sdf1[2]
      20955008 blocks super 1.1 [2/2] [UU]


sdd            8:48   0   2.7T  0 disk   
├─sdd1         8:49   0    20G  0 part   
│ └─md126      9:126  0    20G  0 raid1  
│   ├─md0_vg-BFDlv
│   │        253:0    0     4G  0 lvm    
│   └─md0_vg-vol1
│            253:1    0    16G  0 lvm    
└─sdd2         8:50   0   2.7T  0 part   
  └─md1        9:1    0   5.4T  0 raid10 
    └─3760fd40_vg-lv2111e672
             253:2    0   5.4T  0 lvm    
sde            8:64   0 931.5G  0 disk   
├─sde1         8:65   0  60.6G  0 part   
└─sde2         8:66   0   871G  0 part   
sdf            8:80   0   2.7T  0 disk   
├─sdf1         8:81   0    20G  0 part   
│ └─md126      9:126  0    20G  0 raid1  
│   ├─md0_vg-BFDlv
│   │        253:0    0     4G  0 lvm    
│   └─md0_vg-vol1
│            253:1    0    16G  0 lvm    
└─sdf2         8:82   0   2.7T  0 part   
  └─md1        9:1    0   5.4T  0 raid10 
    └─3760fd40_vg-lv2111e672
             253:2    0   5.4T  0 lvm  
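
With md1 running degraded ([4/2] [_U_U] is survivable for a 4-disk near-2 RAID 10 as long as one copy of each mirror pair is present) and lsblk showing the logical volume 3760fd40_vg-lv2111e672, the remaining steps would presumably be to activate that volume group and mount the LV read-only until the data has been verified and copied off (sketch; the exact LV path may differ):

# vgchange -ay 3760fd40_vg
# mount -o ro /dev/mapper/3760fd40_vg-lv2111e672 /mnt/md_27tb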
