Recover BTRFS? Bad superblock magic on /dev/md2

So my DS718+ RAID crashed after a recent power outage.

After investigating together with Synology support, we also found that my RAM was faulty (it has since been removed), and because it was not an officially supported RAM module they will no longer help me recover my RAID (thanks for that), since they suspect the corrupted file system was caused by the bad RAM rather than by the power outage.

I do have a backup, but it is quite old, so I would like to recover the data if possible. If not, I can restore from the backup instead.

Here is what I can tell you so far:

btrfs-show-super /dev/md2

superblock: bytenr=65536, device=/dev/md2
---------------------------------------------------------
ERROR: bad magic on superblock on /dev/md2 at 65536

fdisk -l

Disk /dev/ram0: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram1: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram2: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram3: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram4: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram5: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram6: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram7: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram8: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram9: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram10: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram11: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram12: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram13: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram14: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram15: 640 MiB, 671088640 bytes, 1310720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WD40EFRX-68N32N0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B0741029-0239-4B70-BDC9-115FFE436B29

Device       Start        End    Sectors  Size Type
/dev/sda1     2048    4982527    4980480  2.4G Linux RAID
/dev/sda2  4982528    9176831    4194304    2G Linux RAID
/dev/sda5  9453280 7813830239 7804376960  3.6T Linux RAID


Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WD4003FFBX-68MU3N0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7D01494F-CF7B-4DFF-8735-C9F326A16775

Device       Start        End    Sectors  Size Type
/dev/sdb1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304    2G Linux RAID
/dev/sdb5  9453280 7813830239 7804376960  3.6T Linux RAID


Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram0: 275 MiB, 288358400 bytes, 70400 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram1: 275 MiB, 288358400 bytes, 70400 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram2: 275 MiB, 288358400 bytes, 70400 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram3: 275 MiB, 288358400 bytes, 70400 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/synoboot: 120 MiB, 125829120 bytes, 245760 sectors
Disk model: DiskStation
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0B6F377D-3CFE-49FF-825F-7662245E3112

Device         Start    End Sectors Size Type
/dev/synoboot1  2048  67583   65536  32M EFI System
/dev/synoboot2 67584 239615  172032  84M Linux filesystem


Disk /dev/md2: 3.6 TiB, 3995839758336 bytes, 7804374528 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/vg1000-lv: 3.6 TiB, 3995837923328 bytes, 7804370944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/cachedev_0: 3.6 TiB, 3995837923328 bytes, 7804370944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

mdadm --examine /dev/sda5

/dev/sda5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4df9f767:00c49417:945e810a:3c7ae5cf
           Name : DiskStation:2
  Creation Time : Thu Aug 27 16:55:24 2015
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
     Array Size : 3902187264 (3721.42 GiB 3995.84 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : eeb9349e:2df293df:e3e4d149:341a8cd9

    Update Time : Tue Oct 19 21:43:44 2021
       Checksum : 1776634c - correct
         Events : 236201


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

mdadm --examine /dev/sdb5

/dev/sdb5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4df9f767:00c49417:945e810a:3c7ae5cf
           Name : DiskStation:2
  Creation Time : Thu Aug 27 16:55:24 2015
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
     Array Size : 3902187264 (3721.42 GiB 3995.84 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=384 sectors
          State : clean
    Device UUID : b0de0510:3424cbdd:380068fb:af4fa72d

    Update Time : Tue Oct 19 21:43:44 2021
       Checksum : 8d300ae5 - correct
         Events : 236201


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

By the way, the dumpe2fs command for inspecting the file system cannot be found; no idea why.

When trying mke2fs -n /dev/md2 it says:

mke2fs 1.44.1 (24-Mar-2018)
/dev/md2 contains a LVM2_member file system
Proceed anyway? (y,N) y
/dev/md2 is apparently in use by the system; will not make a filesystem here!

I have been without my data/NAS for almost two weeks now, so any help is greatly appreciated. Since I am not a Linux guy, I would appreciate straightforward advice. I understand that sda5 and sdb5 are my two disks and md2 is the RAID, but I am not sure what LVM2_member means, nor how I can fix this now, if it is possible at all.

Edit:

mdadm --assemble --scan -v

mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/dm-1
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: no recogniseable superblock on /dev/md2
mdadm: no recogniseable superblock on /dev/synoboot2
mdadm: Cannot assemble mbr metadata on /dev/synoboot1
mdadm: Cannot assemble mbr metadata on /dev/synoboot
mdadm: no recogniseable superblock on /dev/md1
mdadm: no recogniseable superblock on /dev/zram3
mdadm: no recogniseable superblock on /dev/zram2
mdadm: no recogniseable superblock on /dev/zram1
mdadm: no recogniseable superblock on /dev/zram0
mdadm: no recogniseable superblock on /dev/md0
mdadm: /dev/sdb5 is busy - skipping
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda5 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: no recogniseable superblock on /dev/ram15
mdadm: no recogniseable superblock on /dev/ram14
mdadm: no recogniseable superblock on /dev/ram13
mdadm: no recogniseable superblock on /dev/ram12
mdadm: no recogniseable superblock on /dev/ram11
mdadm: no recogniseable superblock on /dev/ram10
mdadm: no recogniseable superblock on /dev/ram9
mdadm: no recogniseable superblock on /dev/ram8
mdadm: no recogniseable superblock on /dev/ram7
mdadm: no recogniseable superblock on /dev/ram6
mdadm: no recogniseable superblock on /dev/ram5
mdadm: no recogniseable superblock on /dev/ram4
mdadm: no recogniseable superblock on /dev/ram3
mdadm: no recogniseable superblock on /dev/ram2
mdadm: no recogniseable superblock on /dev/ram1
mdadm: no recogniseable superblock on /dev/ram0
mdadm: No arrays found in config file or automatically

cat /etc/fstab

none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/mapper/cachedev_0 /volume1 btrfs auto_reclaim_space,ssd,synoacl,relatime,ro,nodev 0 0

cat /proc/mdstat

Personalities : [raid1]
md2 : active raid1 sdb5[3] sda5[2]
      3902187264 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      2490176 blocks [2/2] [UU]

unused devices: <none>

Edit 2:

pvdisplay

  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1000
  PV Size               3.63 TiB / not usable 1.75 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              952682
  Free PE               0
  Allocated PE          952682
  PV UUID               Yj7BOC-Ni0B-Q1NG-CHKy-Fmq0-620T-aHeUYR

vgdisplay

  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.63 TiB
  PE Size               4.00 MiB
  Total PE              952682
  Alloc PE / Size       952682 / 3.63 TiB
  Free  PE / Size       0 / 0
  VG UUID               grXROS-RIVn-C0Nx-cKqF-lOyB-YFHJ-HDWngh

lvdisplay


  --- Logical volume ---
  LV Path                /dev/vg1000/lv
  LV Name                lv
  VG Name                vg1000
  LV UUID                xgX5UJ-vk3r-eGX0-3bxj-339u-B3sV-y1jldv
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                3.63 TiB
  Current LE             952682
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           252:0

Support told me at the beginning:

The system is installed in RAID md0, RAID level 1, extended across the later-added disk (for failure safety, not for capacity). The swap is on md1, also RAID 1. From md2 onward the volumes are placed. There is no problem with the NAS's RAID configuration, but there is a problem with the file system on volume 1. The file system can no longer be mounted, not even read-only.
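
To visualize that layout yourself, the block device tree can be printed without touching anything; a minimal read-only sketch (lsblk may not be present on every DSM build, in which case cat /proc/mdstat plus the LVM display commands above give the same picture):

    # print the tree: sda/sdb partitions -> md0/md1/md2 -> LVM -> cachedev_0
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT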

[Screenshot: errors reported on /dev/dm-1]

Edit 3: btrfs-show-super /dev/mapper/cachedev_0

superblock: bytenr=65536, device=/dev/mapper/cachedev_0
---------------------------------------------------------
csum                    0x63aa7385 [match]
bytenr                  65536
flags                   0x1
                        ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    1e556138-5480-44ce-8c65-bfa2d7fd54cb
label                   2018.09.26-18:00:59 v23739
generation              3652464
root                    547766255616
sys_array_size          258
chunk_root_generation   3579752
root_level              1
chunk_root              3611627094016
chunk_root_level        1
log_root                547767943168
log_root_transid        0
log_root_level          0
log tree reserve bg     0
total_bytes             3995837923328
bytes_used              2851638943744
sectorsize              4096
nodesize                16384
leafsize                16384
stripesize              4096
root_dir                6
num_devices             1
compat_flags            0x8000000000000000
compat_ro_flags         0x3
                        ( FREE_SPACE_TREE |
                          FREE_SPACE_TREE_VALID )
incompat_flags          0x16b
                        ( MIXED_BACKREF |
                          DEFAULT_SUBVOL |
                          COMPRESS_LZO |
                          BIG_METADATA |
                          EXTENDED_IREF |
                          SKINNY_METADATA )
csum_type               0
csum_size               4
cache_generation        18446744073709551615
uuid_tree_generation    3652464
dev_item.uuid           521f7cdc-497c-4c69-886e-c3974042337a
dev_item.fsid           1e556138-5480-44ce-8c65-bfa2d7fd54cb [match]
dev_item.type           0
dev_item.total_bytes    3995837923328
dev_item.bytes_used     3139713368064
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          1
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0

Edit 4:

e2fsck -nvf -C 0 /dev/md2

e2fsck 1.44.1 (24-Mar-2018)
Warning!  /dev/md2 is in use.
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...

openfs: invalid superblock checksum.
e2fsck: Bad magic number in super-block while trying to open /dev/md2

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

/dev/md2 contains a LVM2_member file system

e2fsck -b 8193 /dev/md2

e2fsck 1.44.1 (24-Mar-2018)
/dev/md2 is in use.
e2fsck: Cannot continue, aborting.

How do I safely unmount it, try the commands, and remount it?

Thanks a lot!

Answer 1

Have you tried assembling the array directly? My system always loses the array on reboot. I have tried making an mdadm.conf, with no luck, so I just reassemble it with the create command after the next reboot. As long as the devices are in the same order, it assembles fine. In my case I use raid5:

    # caution: --create rewrites RAID metadata; level and device order must match
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1

Answer 2

Your RAID is fine. The metadata in the MD RAID superblocks is consistent, so I agree with Synology support that the RAID itself is intact. You may still want to check its data, but I suggest you wait with that until after the data recovery.

So do not run anything else against /dev/md2, and do not search it for anything, because the system has already found and recognized what is there.

The system sees that it is in use and blocks access to it, because it knows it is a physical volume (LVM PV) of an active volume group (VG) called vg1000. In this volume group you have one logical volume (LV), simply named lv. It is a block device and can be accessed as /dev/vg1000/lv (which is a symlink to some /dev/dm-X, where X is dynamic and can change after a reboot or after another device-mapper entity is added or removed).
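
To see which dm-X node the LV currently resolves to, the symlinks can simply be followed; a small read-only sketch (device names taken from the outputs above):

    # resolve the LV symlink to its current dm node
    readlink -f /dev/vg1000/lv

    # list all device-mapper devices with their major:minor numbers
    dmsetup ls
    ls -l /dev/mapper/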

Next, the caching layer. I do not know which caching technology it uses, but the cache volume has a name under /dev/mapper, so it uses the Linux Device Mapper framework; one such technology is LVM Cache. The LV /dev/vg1000/lv is the backing device, and there may also be a cache device. This layering can be created even if you currently have no SSD installed, to make introducing an actual cache easier later: you can put some SSDs in, tick some checkboxes in the GUI, and you immediately have a cache without reformatting and time-consuming data movement.
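
How the cache volume is layered can be read from the device-mapper tables; a minimal read-only sketch, assuming dmsetup is present on the box:

    # print every dm table; a pass-through cache volume without an SSD
    # typically shows a single 'linear' target on top of vg1000-lv
    dmsetup table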

Your file system appears to live on this cache volume, which shows up in the system as /dev/mapper/cachedev_0 (again, a symlink to some block device /dev/dm-Y, where Y is dynamic). The following line in your fstab supports this:

/dev/mapper/cachedev_0 /volume1 btrfs auto_reclaim_space,ssd,synoacl,relatime,ro,nodev 0 0

This is the device on which you should look for the BTRFS file system. Try btrfs-show-super /dev/mapper/cachedev_0; it should find a superblock.
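
Since Edit 3 already shows a valid superblock there, the next steps could stay strictly read-only; a sketch (the mount point /mnt/recover and the path /tmp/x are placeholders, and on the old kernels DSM ships the backup-root mount option is called recovery rather than the newer rescue=usebackuproot):

    # attempt a read-only mount, falling back to the backup tree roots
    mkdir -p /mnt/recover
    mount -o ro,recovery /dev/mapper/cachedev_0 /mnt/recover

    # if the mount fails, do a dry run of btrfs restore: -D only lists
    # what could be salvaged and writes nothing
    btrfs restore -D -v /dev/mapper/cachedev_0 /tmp/x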

The screenshot you posted mentions problems on /dev/dm-1. If that is the target of /dev/mapper/cachedev_0 (verify with readlink /dev/mapper/cachedev_0), then you are out of luck. You could try handing this system over to a BTRFS expert in the hope that they can extract some information, but be prepared to pay for that.
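
That check is a one-liner; a sketch (the dm-N it prints is whatever the mapper currently assigned, so compare it against the screenshot):

    # resolve the symlink; if this prints /dev/dm-1, the failing device is
    # the one that holds your file system
    readlink -f /dev/mapper/cachedev_0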
