MDADM/ZFS problem

I've just reinstalled Kubuntu 22 (I tried the supported 20 => 22 LTS upgrade, but everything went wrong).

I'm now having problems getting my disk arrays working again. I can't remember the exact formats, but one of them appears to be a 2-disk ZFS array, and that one came back up easily.

The second pair appears to be a 2-disk mdadm array. I installed mdadm, and it created a conf file:

john@Adnoartina:/cctv/fs1$ cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=8ccafb9b:d754b713:ac709ab4:78ca2f53 name=Adnoartina:0

# This configuration was auto-generated on Wed, 16 Aug 2023 22:44:54 +0100 by mkconf

I then had to run update-initramfs.
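
For reference, the steps at this point were essentially the standard ones below; this is a sketch of the usual procedure rather than a transcript of my shell history (the ARRAY line in my case had already been generated by mkconf):

# Append the detected array definition to mdadm.conf if it is missing
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so it picks up the updated mdadm.conf,
# as the note at the top of the file instructs
sudo update-initramfs -u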

I was still getting a problem similar to the one described here (https://superuser.com/questions/566867/unable-to-reassemble-md-raid-on-drives-pulled-from-readynas-duo-v1): "mdadm: failed to add /dev/sdb3 to /dev/md/2_0: Invalid argument".

So I ran the "--update=devicesize" fix suggested in that link.

That seemed to get further, but I still can't mount it.
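
The exact invocation isn't in my scrollback any more; based on the linked answer it would have been along these lines (member devices as they appear in lsblk below, so treat this as a sketch):

# Stop the partially assembled array, then re-assemble it while refreshing
# the device size recorded in each member's superblock
sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 --update=devicesize /dev/sdd1 /dev/sde1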

john@Adnoartina:/cctv/fs1$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
loop0     7:0    0     4K  1 loop  /snap/bare/5
loop1     7:1    0  63.4M  1 loop  /snap/core20/1974
loop2     7:2    0 237.2M  1 loop  /snap/firefox/2987
loop3     7:3    0 349.7M  1 loop  /snap/gnome-3-38-2004/143
loop4     7:4    0  91.7M  1 loop  /snap/gtk-common-themes/1535
loop5     7:5    0  53.3M  1 loop  /snap/snapd/19457
sda       8:0    0   7.3T  0 disk  
├─sda1    8:1    0   7.3T  0 part  
└─sda9    8:9    0     8M  0 part  
sdb       8:16   0   7.3T  0 disk  
├─sdb1    8:17   0   7.3T  0 part  
└─sdb9    8:25   0     8M  0 part  
sdc       8:32   0  55.9G  0 disk  
├─sdc1    8:33   0   512M  0 part  /boot/efi
└─sdc2    8:34   0  55.4G  0 part  /var/snap/firefox/common/host-hunspell
                                   /
sdd       8:48   0   2.7T  0 disk  
├─sdd1    8:49   0   2.7T  0 part  
│ └─md0   9:0    0   2.7T  0 raid1 
└─sdd9    8:57   0     8M  0 part  
sde       8:64   0   2.7T  0 disk  
├─sde1    8:65   0   2.7T  0 part  
│ └─md0   9:0    0   2.7T  0 raid1 
└─sde9    8:73   0     8M  0 part  
john@Adnoartina:/cctv/fs1$ sudo mount /dev/md/0 Important/
mount: /cctv/fs1/Important: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
john@Adnoartina:/cctv/fs1$ sudo mount /dev/md0 Important/
mount: /cctv/fs1/Important: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.

fsck doesn't help:

john@Adnoartina:/cctv/fs1$ sudo fsck -n /dev/md/0
fsck from util-linux 2.37.2
e2fsck 1.46.5 (30-Dec-2021)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

More info:

john@Adnoartina:/cctv/fs1$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Jun 22 20:52:51 2020
        Raid Level : raid1
        Array Size : 2930132992 (2.73 TiB 3.00 TB)
     Used Dev Size : 2930132992 (2.73 TiB 3.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Aug 16 23:27:13 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Adnoartina:0  (local to host Adnoartina)
              UUID : 8ccafb9b:d754b713:ac709ab4:78ca2f53
            Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

I then saw a suggestion to run fdisk -l, and got the following surprising output, which says the partition type on those drives is Solaris and Apple ZFS?

Disk /dev/sdd: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: TOSHIBA DT01ACA3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F3AE916B-DB77-9844-98DA-250443CFC5A4

Device          Start        End    Sectors  Size Type
/dev/sdd1        2048 5860515839 5860513792  2.7T Solaris /usr & Apple ZFS
/dev/sdd9  5860515840 5860532223      16384    8M Solaris reserved 1


Disk /dev/sde: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: TOSHIBA DT01ACA3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 20CFD558-A582-8847-BB20-E9585044D859

Device          Start        End    Sectors  Size Type
/dev/sde1        2048 5860515839 5860513792  2.7T Solaris /usr & Apple ZFS
/dev/sde9  5860515840 5860532223      16384    8M Solaris reserved 1


Disk /dev/md0: 2.73 TiB, 3000456183808 bytes, 5860265984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
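
Given the "Solaris /usr & Apple ZFS" partition type, one way to check whether those partitions really carry ZFS labels is the standard zdb invocation below (again, not something I ran in the session above, just a sketch):

# Dump the ZFS vdev label from each member partition; if this prints a pool
# name and GUIDs, the partition belongs to a ZFS pool
sudo zdb -l /dev/sdd1
sudo zdb -l /dev/sde1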

There's also a mention of bad blocks, but I assume that could just be down to the disks starting to age.

john@Adnoartina:/cctv/fs1$ sudo mdadm -E /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8
     Array UUID : 8ccafb9b:d754b713:ac709ab4:78ca2f53
           Name : Adnoartina:0  (local to host Adnoartina)
  Creation Time : Mon Jun 22 20:52:51 2020
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5860249600 sectors (2.73 TiB 3.00 TB)
     Array Size : 2930132992 KiB (2.73 TiB 3.00 TB)
  Used Dev Size : 5860265984 sectors (2.73 TiB 3.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=18446744073709535232 sectors
          State : clean
    Device UUID : 740242b8:a390e2aa:9ccbc034:3346bd6d

    Update Time : Wed Aug 16 23:27:13 2023
  Bad Block Log : 512 entries available at offset 24 sectors - bad blocks present.
       Checksum : eddcc88c - correct
         Events : 2


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
john@Adnoartina:/cctv/fs1$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdd1[0] sde1[1]
      2930132992 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

I'm now stuck, so any suggestions would be appreciated. Is there any possibility that on my old install I got into a weird mdadm/ZFS state? I know I used these drives with mdadm a long time ago, and I wonder if I subsequently switched them to ZFS but perhaps not all of the old mdadm metadata got wiped out.

Thanks,

John

Answer 1

Fixed - I found another page with advice about migrating disks between servers when zpool can't detect them. I stopped using mdadm and used the following commands:
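
(The "stopped using mdadm" step isn't shown in the output below; it amounted to releasing the assembled md device so ZFS could open the member disks, i.e. something like:)

# Release /dev/md0 so the underlying partitions are free for zpool import
sudo mdadm --stop /dev/md0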

sudo zpool import -d /dev/
   pool: ToBackup
     id: 12065074383621521084
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        ToBackup    ONLINE
          mirror-0  ONLINE
            sdd     ONLINE
            sde     ONLINE
john@Adnoartina:~$ sudo zpool import -d /dev/ -f ToBackup

john@Adnoartina:~$ zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ToBackup  2.72T  1.05T  1.67T        -         -     0%    38%  1.00x    ONLINE  -
cctv      14.5T  11.8T  2.71T        -         -    40%    81%  1.00x    ONLINE  -
