UUID on RAID devices lost after reboot

I'm looking for help figuring out why I can't reassemble a RAID array after a reboot. I have tried:

sudo mdadm --assemble --scan

My previous /etc/mdadm/mdadm.conf is below. The array of interest is md1 at the bottom: a RAID0 of 4 NVMe drives attached to an ASUS Hyper M.2 x16 card.

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#ARRAY /dev/md/0  metadata=1.2 UUID=73abfd2e:9332c4dd:6bdf75fc:e457fac6 name=functionalgenomics.server:0

# This configuration was auto-generated on Fri, 27 Sep 2019 11:59:21 -0700 by mkconf
#ARRAY /dev/md0 metadata=1.2 name=functionalgenomics.server:0 UUID=73abfd2e:9332c4dd:6bdf75fc:e457fac6
ARRAY /dev/md0 metadata=1.2 name=functionalgenomics.server:0 UUID=73abfd2e:9332c4dd:6bdf75fc:e457fac6
ARRAY /dev/md1 metadata=1.2 name=functionalgenomics:1 UUID=86a1939b:26fac733:9006b919:36e6f30e

But after a reboot, that last array is gone. lsblk shows the system as follows:

walter@functionalgenomics:~$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME      SIZE FSTYPE            TYPE   MOUNTPOINT
loop0   161.4M squashfs          loop   /snap/gnome-3-28-1804/128
loop1    99.2M squashfs          loop   /snap/core/10859
loop2     2.5M squashfs          loop   /snap/gnome-calculator/826
loop3     2.2M squashfs          loop   /snap/gnome-system-monitor/148
loop4    55.5M squashfs          loop   /snap/core18/1988
loop5     276K squashfs          loop   /snap/gnome-characters/570
loop6    99.2M squashfs          loop   /snap/core/10908
loop7     2.5M squashfs          loop   /snap/gnome-calculator/884
loop8     548K squashfs          loop   /snap/gnome-logs/103
loop9    55.4M squashfs          loop   /snap/core18/1944
loop10    276K squashfs          loop   /snap/gnome-characters/550
loop11   64.4M squashfs          loop   /snap/gtk-common-themes/1513
loop12    2.2M squashfs          loop   /snap/gnome-system-monitor/157
loop13   64.8M squashfs          loop   /snap/gtk-common-themes/1514
loop14    956K squashfs          loop   /snap/gnome-logs/100
loop15  162.9M squashfs          loop   /snap/gnome-3-28-1804/145
loop16  217.9M squashfs          loop   /snap/gnome-3-34-1804/60
loop17    219M squashfs          loop   /snap/gnome-3-34-1804/66
sda       9.1T linux_raid_member disk   
└─md0    18.2T ext4              raid10 /data
sdb       9.1T linux_raid_member disk   
└─md0    18.2T ext4              raid10 /data
sdc       9.1T linux_raid_member disk   
└─md0    18.2T ext4              raid10 /data
sdd       9.1T linux_raid_member disk   
└─md0    18.2T ext4              raid10 /data
sde       477G                   disk   
├─sde1    512M vfat              part   /boot/efi
└─sde2  476.4G ext4              part   /
nvme0n1 931.5G                   disk   
nvme1n1 931.5G                   disk   
nvme2n1 931.5G                   disk   
nvme3n1 931.5G                   disk   

The last 4 entries are the NVMe drives that were in the RAID0.
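For reference, the md1 ARRAY line in mdadm.conf was recorded with the usual mdadm workflow, roughly like this (from memory, so the exact invocation is approximate):

```shell
# Append the ARRAY line for a running array to mdadm.conf,
# then rebuild the initramfs so the array is known at early boot
# (as the comment at the top of mdadm.conf warns).
sudo mdadm --detail --scan /dev/md1 | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```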

I tried manually reassembling with mdadm by UUID:

mdadm --assemble /dev/md1 --uuid 86a1939b:26fac733:9006b919:36e6f30e
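The explicit-member equivalent of that command, naming the drives I believe were in the array, would be something like the following (the member list is my guess, since nothing identifies them any more):

```shell
# Attempt to assemble md1 from the four NVMe drives directly,
# instead of relying on a UUID scan (member list assumed).
sudo mdadm --assemble /dev/md1 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```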

Since none of that worked, I checked the disks with blkid:

$ blkid
/dev/sde2: UUID="622cb66b-07b8-4b3e-bf66-47e328d108b5" TYPE="ext4" PARTUUID="2bbad625-4e76-44f2-984e-82f137ca5e94"
/dev/sdb: UUID="73abfd2e-9332-c4dd-6bdf-75fce457fac6" UUID_SUB="2addace1-4dcf-83a1-efca-06efbe945214" LABEL="functionalgenomics.server:0" TYPE="linux_raid_member"
/dev/sda: UUID="73abfd2e-9332-c4dd-6bdf-75fce457fac6" UUID_SUB="99d89efb-611f-31e0-e18a-15d24884c89e" LABEL="functionalgenomics.server:0" TYPE="linux_raid_member"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"
/dev/nvme0n1: PTUUID="bc67c382-2764-490a-b4d7-0a90090d21a5" PTTYPE="gpt"
/dev/nvme1n1: PTUUID="f84c4a99-5f3b-4c24-a28f-67986efd5fc9" PTTYPE="gpt"
/dev/nvme2n1: PTUUID="5051655f-2d62-484a-83e4-222ace832a02" PTTYPE="gpt"
/dev/nvme3n1: PTUUID="113b2827-7ae1-48af-8c81-d6a0b8e2315b" PTTYPE="gpt"
/dev/sdd: UUID="73abfd2e-9332-c4dd-6bdf-75fce457fac6" UUID_SUB="62827ba3-e4e5-67f7-b966-0f644790452a" LABEL="functionalgenomics.server:0" TYPE="linux_raid_member"
/dev/sde1: UUID="16F7-B5D4" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="e94f401a-a861-492a-bc75-dc67efdaa5a3"
/dev/md0: UUID="f06692a8-6db3-4a19-864b-4aaedd14ed8a" TYPE="ext4"
/dev/sdc: UUID="73abfd2e-9332-c4dd-6bdf-75fce457fac6" UUID_SUB="f808cf3c-ea1b-773e-e77c-372de7714b8c" LABEL="functionalgenomics.server:0" TYPE="linux_raid_member"
/dev/loop8: TYPE="squashfs"
/dev/loop9: TYPE="squashfs"
/dev/loop10: TYPE="squashfs"
/dev/loop11: TYPE="squashfs"
/dev/loop12: TYPE="squashfs"
/dev/loop13: TYPE="squashfs"
/dev/loop14: TYPE="squashfs"
/dev/loop15: TYPE="squashfs"
/dev/loop16: TYPE="squashfs"
/dev/loop17: TYPE="squashfs"

It looks like the devices have lost their RAID UUIDs? I have powered down, unplugged the card, reseated it, and rebooted. Still nothing.

fdisk -l gives (all loop* entries removed, since the output is long):

fdisk -l


Disk /dev/nvme0n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BC67C382-2764-490A-B4D7-0A90090D21A5


Disk /dev/nvme1n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F84C4A99-5F3B-4C24-A28F-67986EFD5FC9


Disk /dev/nvme2n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5051655F-2D62-484A-83E4-222ACE832A02


Disk /dev/nvme3n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 113B2827-7AE1-48AF-8C81-D6A0B8E2315B


Disk /dev/sda: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 477 GiB, 512110190592 bytes, 1000215216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 51A8D887-C792-4353-B2BF-1724213AAEC1

Device       Start        End   Sectors   Size Type
/dev/sde1     2048    1050623   1048576   512M EFI System
/dev/sde2  1050624 1000214527 999163904 476.4G Linux filesystem


Disk /dev/sdc: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md0: 18.2 TiB, 20001394262016 bytes, 39065223168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

Any ideas on how to recover from this (hopefully without losing data) would be greatly appreciated. Thanks!

[Edit/Update]

I tried mdadm --examine on these:

$ mdadm --examine /dev/nvme*
mdadm: cannot open /dev/nvme0: Invalid argument
/dev/nvme0n1:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)
mdadm: cannot open /dev/nvme1: Invalid argument
/dev/nvme1n1:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)
mdadm: cannot open /dev/nvme2: Invalid argument
/dev/nvme2n1:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)
mdadm: cannot open /dev/nvme3: Invalid argument
/dev/nvme3n1:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)

Also:

$ mdadm --assemble --scan --verbose

mdadm: No super block found on /dev/nvme3n1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/nvme3n1
mdadm: No super block found on /dev/nvme2n1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/nvme2n1
mdadm: No super block found on /dev/nvme0n1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/nvme0n1
mdadm: No super block found on /dev/nvme1n1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/nvme1n1
