Here is my situation. I am running Debian 10 on my machine, and I want to assemble a RAID10 array from 4 USB drives plugged into its ports. I set up the configuration with the command
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 --verbose
This set up fine. I then made sure my /etc/fstab and /etc/mdadm/mdadm.conf were written correctly, which they were, and I even made sure to update the initramfs (roughly the steps sketched after the output below). However, after rebooting, I checked whether the configuration had come up correctly and was shocked when I saw the following output:
:/# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Jun 8 18:36:15 2020
        Raid Level : raid10
        Array Size : 120762368 (115.17 GiB 123.66 GB)
     Used Dev Size : 60381184 (57.58 GiB 61.83 GB)
      Raid Devices : 4
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jun 10 16:38:00 2020
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : DietPi:0 (local to host DietPi)
              UUID : 9d59030f:f7d48652:ffdd4067:ae45a372
            Events : 95

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       -       0        0        1      removed
       2       8       33        2      active sync set-A   /dev/sdc1
       -       0        0        3      removed
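For reference, the persistence steps mentioned above were roughly the following sketch; the mount point and filesystem UUID are placeholders, not the exact entries from my system:

# record the array in mdadm.conf so it is assembled at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# example /etc/fstab entry (UUID and mount point are illustrative)
# UUID=<filesystem-uuid>  /mnt/raid  ext4  defaults,nofail  0  2
# rebuild the initramfs so the updated mdadm.conf is included
sudo update-initramfs -u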
Here is the output of dmesg | grep md:
root@DietPi:/home/dietpi# dmesg | grep md
[ 0.177100] unimac-mdio unimac-mdio.-19: DMA mask not set
[ 0.233442] unimac-mdio unimac-mdio.-19: Broadcom UniMAC MDIO bus at 0x(ptrval)
[ 0.787409] systemd[1]: System time before build time, advancing clock.
[ 0.878411] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
[ 0.878665] systemd[1]: Detected architecture arm.
[ 0.881996] systemd[1]: Set hostname to <DietPi>.
[ 1.114070] random: systemd: uninitialized urandom read (16 bytes read)
[ 1.199454] random: systemd: uninitialized urandom read (16 bytes read)
[ 1.199616] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[ 1.199700] random: systemd: uninitialized urandom read (16 bytes read)
[ 1.199797] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[ 1.200074] systemd[1]: Listening on Journal Socket (/dev/log).
[ 1.200195] systemd[1]: Listening on initctl Compatibility Named Pipe.
[ 1.200238] systemd[1]: Reached target Paths.
[ 1.202088] systemd[1]: Created slice system-getty.slice.
[ 1.202474] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[ 1.697530] systemd-journald[118]: Received request to flush runtime journal from PID 1
[ 4.541769] md/raid10:md0: active with 2 out of 4 devices
[ 4.541809] md0: detected capacity change from 0 to 123660664832
[ 4.719675] EXT4-fs (md0): warning: mounting fs with errors, running e2fsck is recommended
[ 4.767908] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[ 35.595357] md/raid10:md127: not clean -- starting background reconstruction
[ 35.595363] md/raid10:md127: active with 2 out of 4 devices
[ 35.595403] md127: detected capacity change from 0 to 123662761984
[ 305.123227] EXT4-fs (md0): error count since last fsck: 102
[ 305.123239] EXT4-fs (md0): initial error at time 1591721842: htree_dirblock_to_tree:995: inode 1311120
[ 305.123255] EXT4-fs (md0): last error at time 1591721842: ext4_empty_dir:2724: inode 1311118
[ 1539.368517] md127: detected capacity change from 123662761984 to 0
[ 1539.368531] md: md127 stopped.
It is as if half of the drives simply dropped out. My feeling is that this could be a superblock problem, but I am not sure. Am I missing something here? Did I skip a step? Or is there a command I can run to fix this so it does not happen after every reboot?
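To check whether the member superblocks are still intact, I can run something like the following; the device names are simply the ones from the original setup:

cat /proc/mdstat                     # list the arrays currently assembled and their members
sudo mdadm --examine /dev/sd[abcd]1  # print each partition's md superblock and array UUID, if present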
Answer 1
For many people running Raspbian, there appears to be a race condition at boot between the Linux kernel initializing the hardware drivers and mdadm-raid scanning for the drives to assemble. This can be worked around by adding a kernel delay parameter in the file /boot/cmdline.txt, which consists of a single line. Add "rootdelay=3" to that same line.
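A minimal sketch of that edit, assuming a typical Raspbian cmdline.txt (the existing parameters shown in the comment are illustrative, not copied from this system):

# /boot/cmdline.txt must remain a single line; append rootdelay=3 to the end of it, e.g.
# console=serial0,115200 root=PARTUUID=xxxx rootfstype=ext4 fsck.repair=yes rootwait rootdelay=3
sudo sed -i '1 s/$/ rootdelay=3/' /boot/cmdline.txt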