The RAID array is not assembled after a reboot.
I have one SSD that the system boots from and three HDDs that belong to the array. The OS is Ubuntu 16.04.
The steps I followed are mostly based on this guide:
Making sure I am good to go.
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
The output shows the sda, sdb and sdc devices in addition to the SSD partitions. I verified that these really represent the HDDs by looking at the output of:
hwinfo --disk
Everything matched.
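For reference, one way to cross-check the mapping between devices and physical drives (the SERIAL and MODEL columns here are my own suggestion, not part of the guide) would be:
lsblk -o NAME,SIZE,SERIAL,MODEL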
Creating the array.
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
I verified that everything was OK by typing: cat /proc/mdstat
The output looked like this:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=======>.............] recovery = 37.1% (1449842680/3906887168) finish=273.8min speed=149549K/sec
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
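To keep an eye on the rebuild without retyping the command, something like this is handy (just a convenience, not a step from the guide):
watch -n 60 cat /proc/mdstat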
I waited until this process finished; afterwards cat /proc/mdstat showed:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
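For completeness, the array state can also be inspected in more detail at this point (a standard mdadm command, not something the guide asks for):
sudo mdadm --detail /dev/md0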
Creating and mounting the filesystem.
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0
df -h -x devtmpfs -x tmpfs
I put some data on it, and the output looked like this:
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 406G 191G 196G 50% /
/dev/nvme0n1p1 511M 3.6M 508M 1% /boot/efi
/dev/md0 7.3T 904G 6.0T 13% /mnt/md0
Saving the array layout.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
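In hindsight, a sanity check I could have added here (assuming the tee commands above appended what I expected) is to confirm that the new lines actually landed in the config files before regenerating the initramfs:
grep ARRAY /etc/mdadm/mdadm.conf
tail -n 1 /etc/fstab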
Rebooting and verifying that everything works.
After the reboot I tried: cat /proc/mdstat
It does not show any active raid devices.
ls /mnt/md0
is empty.
The following command does not print anything and does not work either:
mdadm --assemble --scan -v
The only thing that brings back the array with the data on it is:
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
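If it helps with diagnosing this, I can also check after a reboot whether the member superblocks are still recognised with something like (a standard mdadm diagnostic; I have not recorded its output here):
sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc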
What should have been done differently?
Does it have anything to do with the output of this?
sudo dpkg-reconfigure mdadm
The output shows:
update-initramfs: deferring update (trigger activated)
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.4.0-51-generic
Found initrd image: /boot/initrd.img-4.4.0-51-generic
Found linux image: /boot/vmlinuz-4.4.0-31-generic
Found initrd image: /boot/initrd.img-4.4.0-31-generic
Adding boot menu entry for EFI firmware configuration
done
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Processing triggers for initramfs-tools (0.122ubuntu8.5) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-51-generic
The most interesting part to me is "start and stop actions are no longer supported; falling back to defaults".
Also, the output of /usr/share/mdadm/mkconf does not print any arrays at the end:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR [email protected]
# definitions of existing MD arrays
while the output of cat /etc/mdadm/mdadm.conf is this:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# DEVICE /dev/sda /dev/sdb /dev/sdc
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR [email protected]
# definitions of existing MD arrays
# This file was auto-generated on Sun, 04 Dec 2016 18:56:42 +0100
# by mkconf $Id$
ARRAY /dev/md0 metadata=1.2 spares=1 name=hinton:0 UUID=616991f1:dc03795b:8d09b1d4:8393060a