Mounting and accessing data on a RAID1-configured HDD outside of the Synology box, on Linux


Background

My two-disk Synology DS14+ no longer powers on, and I want to access the files stored on its HDDs from my Ubuntu PC. I tried to follow the official guide in the Synology FAQ, but mounting always fails with the following error:

mount: /home/test: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-volume_2 missing codepage or helper program, or other error.

And the following errors occur when running `mdadm` and `vgchange`:

mdadm: Found some drive for an array that is already active: /dev/md/diskstation:3
mdadm: giving up.
mdadm: No arrays found in config file or automatically

Main problem / goal:

I am trying to use mdadm to assemble the array and mount the LVM volume on my computer, but I keep running into the `mount: /home/test: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-volume_2, missing codepage or helper program` error.

I searched StackExchange and found similar posts here: "How to mount, find and recover data from a HDD outside of a Synology box?" and, related to it, "Data recovery from a damaged hard drive of a NAS (SHR and Brtf format)", but neither solved my problem.

Outputs and errors:

Here is the output when running the steps from the Synology FAQ:

root@pop-os:~# apt-get install -y mdadm lvm2 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
lvm2 is already the newest version (2.03.11-2.1ubuntu4).
mdadm is already the newest version (4.2-0ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.

root@pop-os:~# mdadm -AsfR && vgchange -ay 
mdadm: Found some drive for an array that is already active: /dev/md/diskstation:3
mdadm: giving up.
mdadm: No arrays found in config file or automatically

root@pop-os:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 sdb3[0]
      2925444544 blocks super 1.2 [2/1] [U_]
unused devices: <none>
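
The `[2/1] [U_]` field above already tells part of the story: the array expects 2 devices but only 1 (sdb3) is active, i.e. md127 is assembled but degraded. A small parsing sketch of my own (the quoted line is pasted as a string, so this runs without the disks):

```shell
# "[2/1] [U_]" in /proc/mdstat: 2 devices expected, 1 active -> degraded RAID1
line='2925444544 blocks super 1.2 [2/1] [U_]'   # line quoted from mdstat above
counts=$(printf '%s\n' "$line" | grep -o '\[[0-9]*/[0-9]*\]' | tr -d '[]')
expected=$(printf '%s' "$counts" | cut -d/ -f1)
active=$(printf '%s' "$counts" | cut -d/ -f2)
printf 'expected=%s active=%s\n' "$expected" "$active"   # expected=2 active=1
[ "$active" -lt "$expected" ] && echo "array is degraded"
```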

root@pop-os:~# lvs
  LV                    VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1 -wi-a----- 12.00m                                                    
  volume_2              vg1 -wi-------  2.72t

root@pop-os:~# mount /dev/vg1/volume_2 /home/test
mount: /home/test: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-volume_2, missing codepage or helper program, or other error.

I then tried running `vgscan` and `vgchange`:

root@pop-os:~# vgscan
  Found volume group "vg1" using metadata type lvm2
root@pop-os:~# vgscan --mknodes
  Found volume group "vg1" using metadata type lvm2
root@pop-os:~# vgchange -ay
  2 logical volume(s) in volume group "vg1" now active
root@pop-os:~# mount /dev/vg1/volume_2 /home/test
mount: /home/test: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-volume_2, missing codepage or helper program, or other error.

`lvs` and `lvdisplay` show the volume names and what sits under vg1:

root@pop-os:~# lvs
  LV                    VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1 -wi-a----- 12.00m                                                    
  volume_2              vg1 -wi-a-----  2.72t                                                    

root@pop-os:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                qdZODQ-1iEc-CYkk-kcgh-oJiG-bn3C-9Uadal
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vg1/volume_2
  LV Name                volume_2
  VG Name                vg1
  LV UUID                rBUb9q-DHGA-Tfua-7YOY-EPIF-3rUN-fx2kzj
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                2.72 TiB
  Current LE             714216
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
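
As a sanity check of my own (not part of the Synology guide), the reported size of volume_2 is consistent with its extent count: vgdisplay further below reports a 4 MiB PE size, and 714216 extents at 4 MiB each is the 2.72 TiB shown here:

```shell
# volume_2: Current LE = 714216, PE Size = 4 MiB (per vgdisplay)
le=714216
mib=$((le * 4))                                        # size in MiB
tib=$(awk -v m="$mib" 'BEGIN { printf "%.2f", m / 1048576 }')
echo "${mib} MiB = ${tib} TiB"                         # 2856864 MiB = 2.72 TiB
```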
  

Output of `ls -l` and `lsblk`:

root@pop-os:~# ls -l /dev/vg1/
total 0
lrwxrwxrwx 1 root root 7 Oct 10  2023 syno_vg_reserved_area -> ../dm-0
lrwxrwxrwx 1 root root 7 Oct 10  2023 volume_2 -> ../dm-1

root@pop-os:~# ls -l /dev/md/
total 0
lrwxrwxrwx 1 root root 8 Oct 10 03:29 diskstation:3 -> ../md127
        
root@pop-os:~# lsblk -f
NAME      FSTYPE FSVER LABEL                     UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
loop0     squash 4.0                                                                          0   100% /rofs
sda                                                                                                    
├─sda1    linux_ 0.90.                           b9851fd5-4767-b72b-3017-a5a8c86610be                  
├─sda2    linux_ 0.90.                           0217a53e-367a-5599-439b-08c7bd6ed266                  [SWAP]
└─sda3    linux_ 1.2   diskstation:3             75f2a0ea-3302-b22c-96b2-8e88615d88fb                  
sdb                                                                                                    
├─sdb1    linux_ 0.90.                           0cef974a-8c64-a3f2-3017-a5a8c86610be                  
├─sdb2    linux_ 0.90.                           c42d7fa3-015d-d340-3017-a5a8c86610be                  [SWAP]
└─sdb3    linux_ 1.2   diskstation:3             75f2a0ea-3302-b22c-96b2-8e88615d88fb                  
  └─md127 LVM2_m LVM2                            ab4QcM-j5Fn-iDLH-1a3y-Ofa9-5aQc-qFCLvY                
    ├─vg1-syno_vg_reserved_area
    └─vg1-volume_2
          btrfs        2016.05.07-00:06:21 v7321 8710bcd6-baae-4824-a852-feedbc516824                  

zram0                                                                                                  [SWAP]

root@pop-os:~# lsblk -e7
NAME                            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda                               8:0    0  2.7T  0 disk  
├─sda1                            8:1    0  2.4G  0 part  
├─sda2                            8:2    0    2G  0 part  [SWAP]
└─sda3                            8:3    0  2.7T  0 part  
sdb                               8:16   0  2.7T  0 disk  
├─sdb1                            8:17   0  2.4G  0 part  
├─sdb2                            8:18   0    2G  0 part  [SWAP]
└─sdb3                            8:19   0  2.7T  0 part  
  └─md127                         9:127  0  2.7T  0 raid1 
    ├─vg1-syno_vg_reserved_area 253:0    0   12M  0 lvm   
    └─vg1-volume_2              253:1    0  2.7T  0 lvm   

zram0                           252:0    0  5.7G  0 disk  [SWAP]
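
To me the most telling line in `lsblk -f` above is that vg1-volume_2 carries a btrfs filesystem. The "missing codepage or helper program" part of the mount error typically appears when the kernel cannot handle the detected filesystem type, so one diagnostic I would run (my own sketch, not from the FAQ; the commented mount is hypothetical and untested here) is:

```shell
# Does the running kernel support btrfs at all? If not, mounting the LV fails
# with exactly the "wrong fs type ... or helper program" error seen above.
if grep -qw btrfs /proc/filesystems 2>/dev/null; then
    btrfs_ok=yes
    echo "btrfs is supported by the kernel"
else
    btrfs_ok=no
    echo "btrfs driver not loaded"
    # Hypothetical next steps (needs root, touches the real devices):
    #   modprobe btrfs
    #   mount -t btrfs -o ro /dev/vg1/volume_2 /home/test
fi
```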

Here is the output of `mdadm --examine`, and an attempt to stop the array:

root@pop-os:~# mdadm --examine /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
root@pop-os:~# mdadm --stop /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?
root@pop-os:~# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   1   2   0 wz--n- 2.72t    0 

I also tried mounting via the mapper path; `ls /dev/mapper` and `vgdisplay` show:

root@pop-os:~# mount /dev/mapper/vg1-lv /home/pop-os/syn
root@pop-os:/dev/mapper# ls -l
total 0
crw------- 1 root root 10, 236 Oct 10  2023 control
lrwxrwxrwx 1 root root       7 Oct 10  2023 vg1-syno_vg_reserved_area -> ../dm-0
lrwxrwxrwx 1 root root       7 Oct 10  2023 vg1-volume_2 -> ../dm-1

root@pop-os:/dev/mapper# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.72 TiB
  PE Size               4.00 MiB
  Total PE              714219
  Alloc PE / Size       714219 / 2.72 TiB
  Free  PE / Size       0 / 0   
  VG UUID               txidbY-halS-kzkQ-Fssb-C3R3-NNkv-YeDMlb

And `fdisk -l` for the disk layout:

root@pop-os:~# fdisk -l
Disk /dev/loop0: 2.1 GiB, 2257199104 bytes, 4408592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D81DA5E3-C00F-4E9C-815A-CFECC250EF81

Device       Start        End    Sectors  Size Type
/dev/sda1     2048    4982527    4980480  2.4G Linux RAID
/dev/sda2  4982528    9176831    4194304    2G Linux RAID
/dev/sda3  9437184 5860328351 5850891168  2.7T Linux RAID

Disk /dev/sdb: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1567D041-B970-4EEC-BC43-27B17D7BBF9A

Device       Start        End    Sectors  Size Type
/dev/sdb1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304    2G Linux RAID
/dev/sdb3  9437184 5860328351 5850891168  2.7T Linux RAID


Disk /dev/md127: 2.72 TiB, 2995655213056 bytes, 5850889088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/vg1-syno_vg_reserved_area: 12 MiB, 12582912 bytes, 24576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/vg1-volume_2: 2.72 TiB, 2995639025664 bytes, 5850857472 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
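
One more cross-check of my own: `/proc/mdstat` reports the array size in 1 KiB blocks, while fdisk reports `/dev/md127` in 512-byte sectors, and the two figures describe exactly the same size, which suggests the md layer itself is intact:

```shell
# mdstat: 2925444544 blocks (1 KiB each); fdisk: 5850889088 sectors (512 B each)
blocks=2925444544
sectors=5850889088
[ $((blocks * 2)) -eq "$sectors" ] && echo "md127 sizes agree"
```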

Screenshots:

Disk manager - RAID member

Disk manager - LVM2 PV

**I have done my best and read quite a few posts, but I just can't seem to get this to work :( Any help is greatly appreciated!**
