I have a Debian server (latest release, all security updates installed). Three years ago I created a RAID-5 across 3 disks with mdadm. It still works, but I now get an error from the logwatch tool. I only noticed it today, so I don't know how long it has been there, and there is no error in any log file either. The whole RAID-5 is in use by lvm, so I don't see how to apply the solution given in "My RAID 1 always renames itself to /dev/md127 after reboot | Debian 10". I also ran update-initramfs -u, but that does not seem to fix the problem. This is the error logwatch reports:
mdadm: cannot open /dev/md0: No such file or directory
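One thing I have not verified yet is whether the initramfs really contains the mdadm.conf it is supposed to. A check I could run (only a sketch, assuming the standard Debian initramfs-tools helpers lsinitramfs and unmkinitramfs are installed; /tmp/initrd-check is just an arbitrary scratch directory):

# list the image contents and look for the config file
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf
# or unpack the image and inspect the embedded ARRAY line
unmkinitramfs /boot/initrd.img-$(uname -r) /tmp/initrd-check
find /tmp/initrd-check -name mdadm.conf -exec grep ^ARRAY {} +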
So I started investigating, and this is what I found.
root@horus:/etc# ls -lhd /dev/md*
drwxr-xr-x 2 root root 60 19 dec 11:33 /dev/md
brw-rw---- 1 root disk 9, 127 19 dec 11:33 /dev/md127
root@horus:/etc# ls -lh /dev/md
totaal 0
lrwxrwxrwx 1 root root 8 19 dec 11:33 horus:0 -> ../md127
root@horus:/etc# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active raid5 sdb1[3] sda1[0] sdd1[1]
1953258496 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 3/8 pages [12KB], 65536KB chunk
unused devices: <none>
root@horus:/etc# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Tue Jan 26 11:57:52 2021
Raid Level : raid5
Array Size : 1953258496 (1862.77 GiB 2000.14 GB)
Used Dev Size : 976629248 (931.39 GiB 1000.07 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Feb 28 10:05:57 2024
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : horus:0 (local to host horus)
UUID : b187df52:41d7a47e:98e7fa00:cae9bf67
Events : 23348
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 49 1 active sync /dev/sdd1
3 8 17 2 active sync /dev/sdb1
root@horus:/var/log# fdisk -l /dev/md127
Disk /dev/md127: 1,82 TiB, 2000136699904 bytes, 3906516992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
root@horus:/etc# mdadm -D --scan
ARRAY /dev/md/horus:0 metadata=1.2 name=horus:0 UUID=b187df52:41d7a47e:98e7fa00:cae9bf67
root@horus:/etc# cat mdadm/mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=horus:0 UUID=b187df52:41d7a47e:98e7fa00:cae9bf67
devices=/dev/sda1,/dev/sdb1,/dev/sdc1
# This configuration was auto-generated on Tue, 26 Jan 2021 11:39:42 +0100 by mkconf
What should I do? Can I simply change md0 to md127 in /etc/mdadm/mdadm.conf? Or should I just replace that ARRAY line with the output of mdadm -D --scan?
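To make the second option concrete, the change I have in mind (only a sketch, I have not applied it) would be to replace the old ARRAY line with the one reported by mdadm -D --scan above, drop the stale devices= list, and then rerun update-initramfs -u as the comment at the top of the file instructs:

# proposed replacement for the ARRAY line in /etc/mdadm/mdadm.conf
ARRAY /dev/md/horus:0 metadata=1.2 name=horus:0 UUID=b187df52:41d7a47e:98e7fa00:cae9bf67
# then, as the comment in the file says
update-initramfs -u

(The idea being that identification by UUID alone should be robust against device names changing, but I am not sure this is the right approach.)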