Can I recover file names from a failed RAID0? (Linux Debian 5.0.8)

Linux Debian 5.0.8 is running on my Iomega NAS, which has a RAID0 array made up of four 3 TB disks, but the "third" disk has failed.

I am aware that RAID0 has no redundancy, but I would like to recover the file names from this RAID0. If I could recover some of the data as well, that would be great.

sda1, sdb1, sdc1 and sdd1 should be the four RAID devices.

What could my next steps be?

Thank you for your time.

Here is the output of the command:

fdisk -l
=============================================
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sda: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x03afffbe

Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee EFI GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x142a889c

Device Boot Start End Blocks Id System
/dev/sdb1 1 267350 2147483647+ ee EFI GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x3daebd50

Device Boot Start End Blocks Id System
/dev/sdc1 1 267350 2147483647+ ee EFI GPT

Disk /dev/md0: 21.4 GB, 21484339200 bytes
2 heads, 4 sectors/track, 5245200 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

And here is the output of the command:

mdadm --examine /dev/sd[abcd]1
=============================================
/dev/sda1:
Magic : a92b4efc
Version : 00.90.00
UUID : 7d247a6e:7b5d46c8:f52d9c89:db304b21
Creation Time : Mon Apr 23 19:55:36 2012
Raid Level : raid1
Used Dev Size : 20980800 (20.01 GiB 21.48 GB)
Array Size : 20980800 (20.01 GiB 21.48 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0

Update Time : Mon Jun 27 21:12:23 2016
      State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 1a57db60 - correct
Events : 164275

  Number   Major   Minor   RaidDevice State
this 0 8 1 0 active sync /dev/sda1

0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
2 2 0 0 2 faulty removed
3 3 8 33 3 active sync /dev/sdc1
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : 7d247a6e:7b5d46c8:f52d9c89:db304b21
Creation Time : Mon Apr 23 19:55:36 2012
Raid Level : raid1
Used Dev Size : 20980800 (20.01 GiB 21.48 GB)
Array Size : 20980800 (20.01 GiB 21.48 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0

Update Time : Mon Jun 27 21:12:23 2016
      State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 1a57db72 - correct
Events : 164275

  Number   Major   Minor   RaidDevice State
this 1 8 17 1 active sync /dev/sdb1

0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
2 2 0 0 2 faulty removed
3 3 8 33 3 active sync /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 00.90.00
UUID : 7d247a6e:7b5d46c8:f52d9c89:db304b21
Creation Time : Mon Apr 23 19:55:36 2012
Raid Level : raid1
Used Dev Size : 20980800 (20.01 GiB 21.48 GB)
Array Size : 20980800 (20.01 GiB 21.48 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0

Update Time : Mon Jun 27 21:12:23 2016
      State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 1a57db86 - correct
Events : 164275

  Number   Major   Minor   RaidDevice State
this 3 8 33 3 active sync /dev/sdc1

0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
2 2 0 0 2 faulty removed
3 3 8 33 3 active sync /dev/sdc1
=============================================

Answer 1

Losing any member of a RAID0 array makes any file extraction nearly impossible, because you lose a quarter of all the information that was stored, including the superblocks and directories.

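To make the loss concrete, here is a minimal Python sketch (assuming a hypothetical 64 KiB chunk size; the actual chunk size of the data array does not appear in the output above) of how RAID0 maps a logical offset onto its four members. Every chunk that lands on the failed third member is unrecoverable, so roughly a quarter of all blocks, whether they held file data, superblocks or directories, is simply gone:

raid0_map.py (illustrative sketch)
=============================================
# RAID0 striping: logical offsets are split into fixed-size chunks that
# rotate across the members in order.  With the third member (index 2)
# dead, every offset that maps to it is unrecoverable.

CHUNK_SIZE = 64 * 1024     # assumption: the array's real chunk size is unknown
MEMBERS = 4                # four-disk RAID0
FAILED_MEMBER = 2          # zero-based index of the failed "third" disk

def locate(offset):
    """Return (member_index, offset_within_member, recoverable) for a logical offset."""
    chunk = offset // CHUNK_SIZE
    member = chunk % MEMBERS
    stripe = chunk // MEMBERS
    member_offset = stripe * CHUNK_SIZE + offset % CHUNK_SIZE
    return member, member_offset, member != FAILED_MEMBER

if __name__ == "__main__":
    for off in range(0, 5 * CHUNK_SIZE, CHUNK_SIZE):
        member, moff, ok = locate(off)
        print(f"logical offset {off:>8}: member {member}, "
              f"member offset {moff:>8}, {'readable' if ok else 'LOST'}")
=============================================
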
If you have very important text documents, you could attempt a text extraction to retrieve the three quarters you still have; this can in fact be automated by a simple program that combines the chunks of the three surviving partitions according to the chunk size.

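Such a program could look roughly like the following sketch (again assuming a hypothetical 64 KiB chunk size and hypothetical member partition names, neither of which is confirmed by the output above). It interleaves the chunks of the three surviving members in RAID0 order and zero-fills the chunks of the failed member, producing an image that can then be searched for text fragments, for example with strings or grep; mounting it as a filesystem will almost certainly fail because the metadata is incomplete:

rebuild_partial.py (illustrative sketch)
=============================================
#!/usr/bin/env python3
# Sketch: rebuild a partial RAID0 image from 3 of 4 members, zero-filling
# the chunks that lived on the failed disk.  Chunk size and device names
# are assumptions; adjust them to the real array before use.

CHUNK = 64 * 1024          # assumed RAID0 chunk size
MEMBERS = [                # member order as in the RAID metadata; None = failed disk
    "/dev/sda2",           # hypothetical data partitions of the surviving disks
    "/dev/sdb2",
    None,                  # the failed "third" member
    "/dev/sdd2",
]

def rebuild(output_path):
    files = [open(p, "rb") if p else None for p in MEMBERS]
    try:
        with open(output_path, "wb") as out:
            while True:
                chunks = []
                got_data = False
                for f in files:
                    if f is None:
                        # Failed member: its data is lost, write zeros instead.
                        chunks.append(b"\x00" * CHUNK)
                        continue
                    data = f.read(CHUNK)
                    if data:
                        got_data = True
                    chunks.append(data.ljust(CHUNK, b"\x00"))
                if not got_data:
                    break  # all surviving members reached end of device
                out.writelines(chunks)
    finally:
        for f in files:
            if f is not None:
                f.close()

if __name__ == "__main__":
    rebuild("raid0_partial.img")
=============================================
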
Doing anything useful with that information is more common among police and intelligence agencies, because extracting it takes a great deal of effort, especially considering that the information is incomplete.
