Today I rebooted my machine and it would no longer boot, so I booted into a rescue image and ran lsblk. This is the output:
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0           7:0    0     3G  1 loop
nvme1n1       259:0    0 476.9G  0 disk
├─nvme1n1p1   259:16   0    32G  0 part
├─nvme1n1p2   259:17   0   512M  0 part
└─nvme1n1p3   259:18   0 444.4G  0 part
nvme0n1       259:4    0 476.9G  0 disk
├─nvme0n1p1   259:5    0    32G  0 part
│ └─md0         9:0    0    32G  0 raid1
├─nvme0n1p2   259:6    0   512M  0 part
│ └─md1         9:1    0   511M  0 raid1
└─nvme0n1p3   259:7    0 444.4G  0 part
nvme2n1       259:8    0 953.9G  0 disk
├─nvme2n1p1   259:9    0    32G  0 part
│ └─md0         9:0    0    32G  0 raid1
├─nvme2n1p2   259:10   0   512M  0 part
│ └─md1         9:1    0   511M  0 raid1
└─nvme2n1p3   259:11   0 444.4G  0 part
nvme3n1       259:12   0 953.9G  0 disk
├─nvme3n1p1   259:13   0    32G  0 part
│ └─md0         9:0    0    32G  0 raid1
├─nvme3n1p2   259:14   0   511M  0 part
│ └─md1         9:1    0   511M  0 raid1
└─nvme3n1p3   259:15   0 444.4G  0 part
The documentation says you can mount md2 and recover your files from it, but the data array seems to be missing entirely. What should I do? Is there any way to mount the data partitions? What happened to them?
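Presumably the documented flow is something along these lines (my reconstruction of it; in my case it fails because /dev/md2 never shows up):

# Reconstruction of the documented recovery steps (hypothetical);
# /dev/md2 does not exist in my rescue environment, so the mount fails
mdadm --assemble --scan
mount /dev/md2 /mnt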
EDIT:
cat /proc/mdstat output:
root@rescue / # cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 nvme0n1p2[0] nvme3n1p2[3] nvme2n1p2[2]
      523264 blocks super 1.2 [4/3] [U_UU]

md0 : active raid1 nvme0n1p1[0] nvme3n1p1[3] nvme2n1p1[2]
      33520640 blocks super 1.2 [4/3] [U_UU]
unused devices: <none>
Here is mdadm.conf:
root@rescue / # cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=ac0c4fe3:e447da2e:c329a42e:39201624 name=rescue:0
ARRAY /dev/md/1 metadata=1.2 UUID=adcd0f2c:276ba739:39bc8438:ed925281 name=rescue:1
ARRAY /dev/md/2 metadata=1.2 UUID=4dc7bc3a:190cd6fc:abfac538:b9bd2481 name=rescue:2
# This configuration was auto-generated on Mon, 14 Mar 2022 21:14:40 +0100 by mkconf
Also, the machine is hosted in the cloud, so all I have access to is a rescue image.
EDIT:
Here is the information on md0 and md1:
/dev/md0:
           Version : 1.2
     Creation Time : Fri May 21 14:18:17 2021
        Raid Level : raid1
        Array Size : 33520640 (31.97 GiB 34.33 GB)
     Used Dev Size : 33520640 (31.97 GiB 34.33 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Mon Mar 14 18:23:49 2022
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : rescue:0  (local to host rescue)
              UUID : ac0c4fe3:e447da2e:c329a42e:39201624
            Events : 33664

    Number   Major   Minor   RaidDevice State
       0     259        5        0      active sync   /dev/nvme0n1p1
       -       0        0        1      removed
       2     259        9        2      active sync   /dev/nvme2n1p1
       3     259       13        3      active sync   /dev/nvme3n1p1

/dev/md1:
           Version : 1.2
     Creation Time : Fri May 21 14:18:17 2021
        Raid Level : raid1
        Array Size : 523264 (511.00 MiB 535.82 MB)
     Used Dev Size : 523264 (511.00 MiB 535.82 MB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Mon Mar 14 21:18:38 2022
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : rescue:1  (local to host rescue)
              UUID : adcd0f2c:276ba739:39bc8438:ed925281
            Events : 337

    Number   Major   Minor   RaidDevice State
       0     259        6        0      active sync   /dev/nvme0n1p2
       -       0        0        1      removed
       2     259       10        2      active sync   /dev/nvme2n1p2
       3     259       14        3      active sync   /dev/nvme3n1p2
EDIT:
At this point the resync has finished and I can access the array via mount /dev/md2 /mnt.
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 nvme0n1p3[0] nvme3n1p3[3] nvme2n1p3[2]
      931788800 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [U_UU]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md1 : active raid1 nvme0n1p2[0] nvme3n1p2[3] nvme2n1p2[2]
      523264 blocks super 1.2 [4/3] [U_UU]

md0 : active raid1 nvme0n1p1[0] nvme3n1p1[3] nvme2n1p1[2]
      33520640 blocks super 1.2 [4/3] [U_UU]
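For reference, while an array is resyncing you can follow its progress live; a standard trick, nothing specific to this setup:

# Refresh the mdstat view every few seconds during a resync
watch -n 5 cat /proc/mdstat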
Answer 1
Usually the first thing to check is whether the RAID device names have changed. A simple way to check is:
cat /proc/mdstat
Most likely your device names have changed. The next step is to check whether you have an mdadm.conf, usually located at /etc/mdadm/mdadm.conf. If there is anything in it, paste the contents and we can go from there.
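As a quick cross-check (assuming the Debian-style config path above), you can compare the arrays the kernel currently sees against the ones the config defines:

# List the arrays mdadm can see right now
mdadm --detail --scan

# Compare against the persisted definitions
cat /etc/mdadm/mdadm.conf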
Update:
mdadm --examine /dev/nvme0n1p3
should show that the partition is associated with a RAID array; you can try the other member devices as well. The result you are looking for is output identifying the partition as an array member, along with its device information.
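To check every expected md2 member in one go (device names taken from the lsblk output above), a small loop like this works:

# Inspect the MD superblock on each partition that should belong to md2
for p in /dev/nvme{0,1,2,3}n1p3; do
    mdadm --examine "$p"
done

As long as those all look sane, try the following: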
mdadm --assemble --scan
This simply asks mdadm to look at your existing disks and try to reassemble the arrays. Your mdadm.conf appears to be configured correctly, so this may just work. If it doesn't,
mdadm --assemble --force /dev/md2 /dev/nvme{0,1,2,3}n1p3
should instruct mdadm to try to assemble your array from the listed devices.
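Once md2 assembles, it is worth sanity-checking it before trusting it with writes; a minimal sketch:

# Confirm the array state and member list
mdadm --detail /dev/md2

# Mount read-only first, in case the filesystem needs a check
mount -o ro /dev/md2 /mnt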
Finally, every one of your arrays is missing one member, so once this is done you should look into why; a sketch of the follow-up is below.
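Judging by the lsblk output, the absent member is nvme1n1 (its partitions do not appear in any array). If the disk turns out to be healthy, a hypothetical re-add sequence would look like this; check the kernel log for I/O errors first:

# Look for I/O errors on the absent disk before trusting it again
dmesg | grep -i nvme1

# If it looks healthy, re-add its partitions to the degraded arrays
mdadm /dev/md0 --add /dev/nvme1n1p1
mdadm /dev/md1 --add /dev/nvme1n1p2
mdadm /dev/md2 --add /dev/nvme1n1p3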