I recently set up two new 8 TB hard drives as a RAID1 array by following a guide. The machine runs Ubuntu 21.04. These are the commands I used:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
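For the mount to survive a reboot, this workflow also expects an fstab entry and a refreshed initramfs (as the comment in mdadm.conf itself warns). A minimal sketch of the fstab step, using a placeholder UUID as an assumption; in practice you would take the real value from `sudo blkid /dev/md0`:

```shell
# Placeholder UUID (assumption) - substitute the real one from `sudo blkid /dev/md0`.
uuid="c0ffee00-1234-5678-9abc-def012345678"

# Build the fstab line; mounting by UUID is more robust than /dev/md0,
# which can be renamed (e.g. to /dev/md127) across reboots.
printf 'UUID=%s /mnt/md0 ext4 defaults,nofail,discard 0 0\n' "$uuid"

# After appending the ARRAY line to /etc/mdadm/mdadm.conf, the initramfs
# must also be refreshed so the array assembles at boot:
# sudo update-initramfs -u
```

The `update-initramfs -u` step is easy to miss and is a common reason the array does not reappear after a reboot.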
After doing this, I copied my important files from the main SSD to the new RAID and then (foolishly) deleted them from the SSD. Then I rebooted the computer. After the reboot, /mnt/md0 was gone.
The fstab file contains the following:
cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/nvme0n1p2 during installation
UUID=fd6ba30f-9352-49de-914d-64c33052ce33 / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=DFDB-EE63 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
The mdadm config file contains the following:
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This configuration was auto-generated on Tue, 17 Sep 2019 16:18:19 +0100 by mkconf
ARRAY /dev/md0 metadata=1.2 name=MyWorkstation:0 UUID=c5135e4e:62dc43c7:a2187207:c13a1ed0
ARRAY /dev/md0 metadata=1.2 name=MyWorkstation:0 UUID=e609a265:401ddd89:382db2bf:65fd931c
I tried --assemble --scan:
MyWorkstation% sudo mdadm --assemble --scan
mdadm: Devices UUID-c5135e4e:62dc43c7:a2187207:c13a1ed0 and UUID-e609a265:401ddd89:382db2bf:65fd931c have the same name: /dev/md0
mdadm: Duplicate MD device names in conf file were found.
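This particular error is only about the config file: mdadm refuses to continue while two ARRAY lines claim the same device name. One way to see and strip the duplicates without touching the real file is a simple awk pass over a copy; the sample file below is a stand-in for /etc/mdadm/mdadm.conf:

```shell
# Stand-in for /etc/mdadm/mdadm.conf with two ARRAY lines claiming /dev/md0.
cat > /tmp/mdadm.conf.sample <<'EOF'
ARRAY /dev/md0 metadata=1.2 name=MyWorkstation:0 UUID=c5135e4e:62dc43c7:a2187207:c13a1ed0
ARRAY /dev/md0 metadata=1.2 name=MyWorkstation:0 UUID=e609a265:401ddd89:382db2bf:65fd931c
EOF

# Keep only the first ARRAY line per device name (field 2);
# non-ARRAY lines pass through untouched.
awk '$1 != "ARRAY" || !seen[$2]++' /tmp/mdadm.conf.sample
```

Note that the dedupe simply keeps the first line; which of the two UUIDs belongs to the live array has to be checked separately (e.g. against `sudo mdadm --detail /dev/md0`) before editing the real config.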
I tried mdadm --examine:
MyWorkstation% sudo mdadm --examine /dev/sda
/dev/sda:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
MyWorkstation% sudo mdadm --examine /dev/sdb
/dev/sdb:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
I was worried about data loss, so my goal was to treat one of the drives as missing and try to recover the data from the other. To that end, I tried:
MyWorkstation% sudo mdadm --assemble --readonly /dev/md1 /dev/sda
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: /dev/sda has no superblock - assembly aborted
Then I tried to mount one of the disks with udisksctl:
MyWorkstation% udisksctl mount -b /dev/sda
Object /org/freedesktop/UDisks2/block_devices/sda is not a mountable filesystem.
So the question is: how can I mount a single drive from a RAID1 array when it appears to have no superblock?
Thanks in advance for any suggestions!
Answer 1
The final solution was simply to re-run sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb.
This restored the RAID array and gave me back access to all the data stored on it. I had been aware of this option, but was afraid of destroying the data since I had no backup, so I first wasted a lot of time slowly retrieving the most important parts of the data with testdisk (which kept getting stuck in an infinite loop).
The underlying problem seems to be that I created the RAID on the whole disks rather than on partitions, as described here: https://askubuntu.com/questions/860643/raid-array-doesnt-reassemble-after-reboot. Someone should ask DigitalOcean to take their RAID guide off the internet: it is currently the top Google result when you search for how to create a RAID1 array, yet it is clearly wrong.
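For reference, a partition-based variant of the same setup (the approach the linked answer recommends) looks roughly like this. It is a sketch of destructive commands under the assumption of empty disks at /dev/sda and /dev/sdb, not something to paste blindly:

```shell
# Sketch only - these commands DESTROY data on /dev/sda and /dev/sdb.
# Give each disk a GPT label and one full-size partition flagged as Linux RAID.
sudo parted --script /dev/sda mklabel gpt mkpart primary 0% 100% set 1 raid on
sudo parted --script /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 raid on

# Build the mirror on the partitions, not on the whole disks,
# so the partition table is not overwritten by RAID metadata (or vice versa).
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

sudo mkfs.ext4 -F /dev/md0

# Persist the array definition and refresh the initramfs so it assembles at boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```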