mdadm raid 1 array won't mount after resync

I have a RAID 1 device that works fine with one drive, and also while resyncing after the second drive is added, but once I reboot I can no longer mount the filesystem. With only one drive in the array (I can remove either drive, with the same result) I get a /dev/mdXpXXX device and the filesystem mounts fine from it. When I add the second drive (with the filesystem still mounted) it resyncs without problems and I end up with an array marked clean. But after a reboot the mdXpXXX device is gone and I can no longer mount the filesystem:

root@Watchme:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov 14 12:06:40 2021
        Raid Level : raid1
        Array Size : 23439733760 (21.83 TiB 24.00 TB)
     Used Dev Size : 23439733760 (21.83 TiB 24.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov 14 12:06:40 2021
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : xxxxx:0  (local to host xxx)
              UUID : dde5b8f6:fe3a89e5:f281c9ef:c4433874
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

root@Watchme:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[1] sda1[0]
      23439733760 blocks super 1.2 [2/2] [UU]
      bitmap: 0/175 pages [0KB], 65536KB chunk

unused devices: <none>

root@Watchme:~# mdadm --examine /dev/sda1 /dev/sdb1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : dde5b8f6:fe3a89e5:f281c9ef:c4433874
           Name : Watchme.wachtveitl.xyz:0  (local to host Watchme.wachtveitl.xyz)
  Creation Time : Sun Nov 14 12:06:40 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 46879467520 sectors (21.83 TiB 24.00 TB)
     Array Size : 23439733760 KiB (21.83 TiB 24.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264080 sectors, after=0 sectors
          State : clean
    Device UUID : 55a52a97:8a7019d0:eab789f9:eb18d6f0

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Nov 14 12:06:40 2021
  Bad Block Log : 512 entries available at offset 96 sectors
       Checksum : dec6072c - correct
         Events : 0


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : dde5b8f6:fe3a89e5:f281c9ef:c4433874
           Name : Watchme.wachtveitl.xyz:0  (local to host Watchme.wachtveitl.xyz)
  Creation Time : Sun Nov 14 12:06:40 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 46879467520 sectors (21.83 TiB 24.00 TB)
     Array Size : 23439733760 KiB (21.83 TiB 24.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264080 sectors, after=0 sectors
          State : clean
    Device UUID : 06ed8c71:b26928da:5b59adf0:5550c044

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Nov 14 12:06:40 2021
  Bad Block Log : 512 entries available at offset 96 sectors
       Checksum : e4520e1 - correct
         Events : 0


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

root@Watchme:~# blkid|grep -v loop
/dev/sda1: UUID="dde5b8f6-fe3a-89e5-f281-c9efc4433874" UUID_SUB="55a52a97-8a70-19d0-eab7-89f9eb18d6f0" LABEL="Watchme.wachtveitl.xyz:0" TYPE="linux_raid_member" PARTUUID="98b9d7e4-1ffe-b34e-8657-e1171fee9eea"
/dev/sdb1: UUID="dde5b8f6-fe3a-89e5-f281-c9efc4433874" UUID_SUB="06ed8c71-b269-28da-5b59-adf05550c044" LABEL="Watchme.wachtveitl.xyz:0" TYPE="linux_raid_member" PARTUUID="966dc93b-3b1b-49b8-8bc3-cda98819cf2c"
/dev/sdc1: UUID="2371e92a-ce66-4367-af04-82d671001eac" TYPE="swap" PARTUUID="6a4e6d9f-01"
/dev/sdc5: UUID="909bf5e9-f558-4d26-ac96-ac8f7ff952c5" UUID_SUB="b291a607-c28f-4eab-9ae9-6621f9929cd2" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="6a4e6d9f-05"
/dev/sdd1: UUID="054a11be-eb31-480f-b0d9-9de0c9809d8e" UUID_SUB="e61e0ac0-d008-4f47-8517-a854f42dd9cb" BLOCK_SIZE="4096" TYPE="btrfs" PARTLABEL="Test Partition" PARTUUID="086e0cc9-2710-0000-50eb-806e6f6e6963"
/dev/sdd2: UUID="1BFA-08CE" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="Test Partition" PARTUUID="07376838-2710-0000-50eb-806e6f6e6963"
/dev/sdd3: LABEL="Win10" BLOCK_SIZE="512" UUID="47F6009C5191E06C" TYPE="ntfs" PARTLABEL="Test Partition" PARTUUID="07374128-2710-0000-50eb-806e6f6e6963"
/dev/sdd4: UUID="A583-7A72" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="Test Partition" PARTUUID="086e33d9-2710-0000-50eb-806e6f6e6963"
/dev/md0: PTTYPE="PMBR"

root@Watchme:~# mount /dev/md0 /media_lv
mount: /media_lv: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.

root@Watchme:~# mount --uuid=dde5b8f6-fe3a-89e5-f281-c9efc4433874 /media_lv
mount: /media_lv: unknown filesystem type 'linux_raid_member'.

root@Watchme:~# mount --uuid=dde5b8f6-fe3a-89e5-f281-c9efc4433874 -t ext4 /media_lv
mount: /media_lv: /dev/sda1 already mounted or mount point busy.

I have spent hours searching the web for a solution. I can remove either drive, the mdXpXXX device comes back, I can mount the filesystem, and all the data is fine. The resync is a lengthy process, apparently taking about 10 hours because of the size of the filesystem. I have tried stopping the array and creating it again with the --assume-clean option several times (obviously without mounting anything in between), and also adding the second drive as a hot spare, then adding it back into the array and letting it resync. That works fine and I can use the data without problems, until I reboot, and then I am stuck again.
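In case it helps, here is roughly what that cycle looks like. This is reconstructed from memory rather than a verbatim session log; the device names are the ones from the listings above, and md0p1 stands in for the mdXpXXX device:

# dropping either drive brings the partition device back,
# and the filesystem mounts fine again:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mount /dev/md0p1 /media_lv

# re-adding the second drive resyncs cleanly (about 10 hours):
mdadm /dev/md0 --add /dev/sdb1

# the re-create attempt, with nothing mounted in between:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sda1 /dev/sdb1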

I have no idea where to go from here; any help is greatly appreciated.

David

Answer 1

Still not sure what happened, but basically I just started from scratch and deleted everything on the disks (including the partitions), and now I get a UUID for /dev/md0 and can mount and unmount it normally. When I originally created the array there was a filesystem on the first disk, and I think that may have been what messed me up: I never created a filesystem on /dev/md0 afterwards, because the existing filesystem showed up and could be mounted via the md0p1 device.
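For anyone who wants the concrete steps, this is roughly what I ended up doing. It is reconstructed after the fact: I am assuming ext4 here because that is what my mount attempt above used, and the exact wipe/repartition commands may have differed:

# wipe every old signature (filesystem, RAID superblock) from the members.
# THIS DESTROYS ALL DATA on the disks; repartition them afterwards if needed:
wipefs --all /dev/sda1 /dev/sdb1

# recreate the mirror, then put the filesystem directly on /dev/md0
# instead of letting the old on-disk filesystem bleed through as md0p1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
mount /dev/md0 /media_lv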

I just wanted to post my findings in the hope that it helps someone in the future.
