Yesterday I upgraded my CentOS 8 server from a
RAID 1, 2-disk setup
to a
RAID 5, 4-disk setup using mdadm.
However, right after the final step of the upgrade:
mdadm --grow /dev/md/pv00 -n 4
and after some time of repeatedly checking
cat /proc/mdstat
for the progress of the RAID 5 reshape (at around 4-5%),
the power went out :(
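For context, a 2-disk RAID 1 to 4-disk RAID 5 conversion with mdadm typically goes along these lines (a sketch; only the final grow step is from the original post, the commands for adding the two new disks are assumed):
mdadm /dev/md/pv00 --add /dev/sdc /dev/sdd    # add the two new disks as spares (device names assumed)
mdadm --grow /dev/md/pv00 --level=5           # convert the RAID 1 to a 2-disk RAID 5
mdadm --grow /dev/md/pv00 -n 4                # grow to 4 active devices (the step shown above), which starts the reshape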
When the power came back on, CentOS 8 would no longer boot; after a long wait it drops into emergency mode.
The error given:
md: personality for level 5 is not loaded!
In this emergency mode I cannot mdadm assemble the RAID, because it fails with a wrong raid level error.
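For reference, checking and loading the RAID 4/5/6 personality by hand would look roughly like this (a sketch; whether the raid456 module is even present in the emergency initramfs is another matter):
cat /proc/mdstat            # the "Personalities" line lists the RAID levels the running kernel can handle
modprobe raid456            # load the RAID 4/5/6 personality, if the module is available
mdadm --assemble --scan -v  # then try assembling again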
After a lot of googling I ended up creating an Ubuntu 21.04 live bootable USB, hoping I could see some more information.
It turned out that in the Ubuntu environment I could indeed mdadm assemble the 4 drives again, after which the array started recovering on its own and continued the RAID 5 reshape.
A few hours later this finally finished; after mounting the various volumes from the RAID and seeing that everything was fine and complete, I rebooted.
Same problem..
Trying the same with a Fedora Workstation live USB, I could again simply mount the raid5 disks.
Then I booted into CentOS rescue mode, selected from GRUB, and the RAID 5 with 4 disks comes up just fine! Strange.
I only see 1 error, namely that /boot cannot be mounted (it seems it cannot be found; is that normal because of rescue mode?). Edit: this is not normal! Trying to mount /boot manually gives an error:
mount /dev/sda1 /boot
mount: /boot: unknown filesystem type 'ext4'.
lsmod indeed shows that ext4 is not loaded,
but I can see the volumes fine, and cat /proc/mdstat shows the normal active output.
I have no idea how to fix this; googling suggests I may need to do some repair on the boot img, but I don't know how to mount /boot in rescue mode, and emergency mode doesn't seem to give me anything I can work with.
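For reference, if ext4 support is simply not loaded in the rescue environment, loading the module and retrying the mount may be enough (a sketch; it assumes the ext4 module is actually available in that environment):
modprobe ext4          # load the ext4 driver into the running rescue kernel
mount /dev/sda1 /boot  # retry the mount that failed above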
So basically my OS will not boot in normal mode, while the RAID disks themselves seem to be working fine.
I tried manually editing the /etc/mdadm.conf file, but that also did nothing on reboot; from what I found online it is only used as assembly hints.
Any help is much appreciated!
Details
CentOS Linux release 8.4.2105
Kernel version: 4.18.0-305.7.1
From Fedora Workstation Live 34:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0] [raid1]
md127 : active raid5 sdc[2] sdd[3] sdb1[1] sda2[0]
2926740480 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/8 pages [0KB], 65536KB chunk
unused devices: <none>
Leaving out any irrelevant disks/volumes:
fdisk -l
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000NM0033-9ZM
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd393f4b7
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 2099199 2097152 1G 83 Linux
/dev/sda2 2099200 1953523711 1951424512 930.5G fd Linux raid autodetect
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000NM0033-9ZM
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf0f126b5
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1951426559 1951424512 930.5G fd Linux raid autodetect
Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000NM0033-9ZM
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000NM0033-9ZM
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md127: 2.73 TiB, 2996982251520 bytes, 5853480960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
I commented out the old ARRAY line, hoping the conf would pick up the change in level and number of devices, but no luck. This is from the root volume mounted from the RAID:
cat mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
#ARRAY /dev/md/pv00 level=raid1 num-devices=2 UUID=5b729889:1b231f26:6806a14c:71abe309
ARRAY /dev/md/pv00 level=raid5 num-devices=4 UUID=5b729889:1b231f26:6806a14c:71abe309
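For reference, the ARRAY line can also be regenerated from the running array instead of being edited by hand (a sketch; note that the initramfs keeps its own copy of mdadm.conf, so it has to be rebuilt afterwards for this to matter at boot):
mdadm --detail --scan >> /etc/mdadm.conf  # append an ARRAY line matching the assembled array
dracut -f                                 # rebuild the initramfs so it picks up the updated config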
pvdisplay
--- Physical volume ---
PV Name /dev/md127
VG Name cl_nas
PV Size <930.39 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238178
Free PE 0
Allocated PE 238178
PV UUID RpepyF-BxMl-9BZT-Sebw-otBV-HLCq-7XSGWX
vgdisplay
--- Volume group ---
VG Name cl_nas
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 930.38 GiB
PE Size 4.00 MiB
Total PE 238178
Alloc PE / Size 238178 / 930.38 GiB
Free PE / Size 0 / 0
VG UUID NrW8lm-Xi70-hP73-Auis-wLVy-beHq-85TWZM
lvdisplay
--- Logical volume ---
LV Path /dev/cl_nas/home
LV Name home
VG Name cl_nas
LV UUID VXGWg5-MnHI-1yg8-5rez-1tIa-Aa2f-s0h2K3
LV Write Access read/write
LV Creation host, time nas.localdomain, 2020-10-28 10:45:39 +0100
LV Status available
# open 1
LV Size <892.51 GiB
Current LE 228482
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 768
Block device 253:2
--- Logical volume ---
LV Path /dev/cl_nas/swap
LV Name swap
VG Name cl_nas
LV UUID 49Aech-0lGd-TbVx-2Hy7-n18b-EG75-4tOBhd
LV Write Access read/write
LV Creation host, time nas.localdomain, 2020-10-28 10:45:46 +0100
LV Status available
# open 0
LV Size <7.88 GiB
Current LE 2016
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 768
Block device 253:3
--- Logical volume ---
LV Path /dev/cl_nas/root
LV Name root
VG Name cl_nas
LV UUID V5WLBp-lvkd-aGT4-isoE-U8UH-ehqG-i37BmE
LV Write Access read/write
LV Creation host, time nas.localdomain, 2020-10-28 10:45:47 +0100
LV Status available
# open 1
LV Size 30.00 GiB
Current LE 7680
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 768
Block device 253:4
(without the flash drive /dev/sde)
mdadm --examine /dev/sd*
/dev/sda:
MBR Magic : aa55
Partition[0] : 2097152 sectors at 2048 (type 83)
Partition[1] : 1951424512 sectors at 2099200 (type fd)
mdadm: No md superblock detected on /dev/sda1.
/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5b729889:1b231f26:6806a14c:71abe309
Name : nas.localdomain:pv00
Creation Time : Wed Oct 28 10:45:31 2020
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1951160704 (930.39 GiB 998.99 GB)
Array Size : 2926740480 (2791.16 GiB 2996.98 GB)
Used Dev Size : 1951160320 (930.39 GiB 998.99 GB)
Data Offset : 263808 sectors
Super Offset : 8 sectors
Unused Space : before=263728 sectors, after=384 sectors
State : clean
Device UUID : 33ff95f7:95ae0fb4:bf45c073:0e248f5f
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 9 21:45:11 2021
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 9c0e4536 - correct
Events : 19835
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
MBR Magic : aa55
Partition[0] : 1951424512 sectors at 2048 (type fd)
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5b729889:1b231f26:6806a14c:71abe309
Name : nas.localdomain:pv00
Creation Time : Wed Oct 28 10:45:31 2020
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1951160704 (930.39 GiB 998.99 GB)
Array Size : 2926740480 (2791.16 GiB 2996.98 GB)
Used Dev Size : 1951160320 (930.39 GiB 998.99 GB)
Data Offset : 263808 sectors
Super Offset : 8 sectors
Unused Space : before=263728 sectors, after=384 sectors
State : clean
Device UUID : 5c027c56:0b6e4a5c:26a56c03:5dfb6c53
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 9 21:45:11 2021
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 26b93e8b - correct
Events : 19835
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5b729889:1b231f26:6806a14c:71abe309
Name : nas.localdomain:pv00
Creation Time : Wed Oct 28 10:45:31 2020
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953261360 (931.39 GiB 1000.07 GB)
Array Size : 2926740480 (2791.16 GiB 2996.98 GB)
Used Dev Size : 1951160320 (930.39 GiB 998.99 GB)
Data Offset : 263808 sectors
Super Offset : 8 sectors
Unused Space : before=263728 sectors, after=2101040 sectors
State : clean
Device UUID : 1e731fa9:febb6375:f9f05cec:8dd1c8e5
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 9 21:45:11 2021
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : de22cf6 - correct
Events : 19835
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5b729889:1b231f26:6806a14c:71abe309
Name : nas.localdomain:pv00
Creation Time : Wed Oct 28 10:45:31 2020
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1953261360 (931.39 GiB 1000.07 GB)
Array Size : 2926740480 (2791.16 GiB 2996.98 GB)
Used Dev Size : 1951160320 (930.39 GiB 998.99 GB)
Data Offset : 263808 sectors
Super Offset : 8 sectors
Unused Space : before=263728 sectors, after=2101040 sectors
State : clean
Device UUID : 54ceea17:4b06f020:5a287029:9f7928dd
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 9 21:45:11 2021
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 5cacb1eb - correct
Events : 19835
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
Answer 1
In the end, what did the trick was the following:
Made a bootable CentOS 8 USB so I could load the same kernel as the one on the boot volume.
Then booted it and went to Troubleshooting -> Rescue.
For me it did not automatically load/detect any drives/volumes, so I ended up in the shell.
There, using
mdadm --assemble --scan -v
I was able to bring up the RAID system, and then started mounting the filesystems as described here:
https://wiki.centos.org/TipsAndTricks/CreateNewInitrd
First root from the RAID onto /mnt/sysimage, then /boot from the ext4 disk onto /mnt/sysimage/boot, then the other mounts mentioned on that page, and finally also home onto /mnt/sysimage/home (roughly as sketched below).
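A sketch of that mount sequence, using the LV and partition names from the output above; the /dev, /proc and /sys bind mounts are the usual prerequisites for a working chroot:
mount /dev/cl_nas/root /mnt/sysimage           # root LV from the RAID
mount /dev/sda1 /mnt/sysimage/boot             # /boot from the plain ext4 partition
mount --bind /dev  /mnt/sysimage/dev
mount --bind /proc /mnt/sysimage/proc
mount --bind /sys  /mnt/sysimage/sys
mount /dev/cl_nas/home /mnt/sysimage/home      # home LV as well
chroot /mnt/sysimage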
After that I chrooted in again, as described there,
and from there backed up boot/initramfs*.img and generated a new one,
for the kernel that the bootable USB had loaded:
dracut -H -f
and specifically for the latest kernel that I normally run:
dracut -H -f /boot/initramfs-(kernel).img (kernel)
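For this system the kernel noted above is 4.18.0-305.7.1, so the concrete call would look roughly like this (the full .el8_4.x86_64 suffix is an assumption; the exact name can be checked with ls /boot or uname -r inside the chroot):
cp /boot/initramfs-4.18.0-305.7.1.el8_4.x86_64.img /boot/initramfs-4.18.0-305.7.1.el8_4.x86_64.img.bak   # back up the existing initramfs first
dracut -H -f /boot/initramfs-4.18.0-305.7.1.el8_4.x86_64.img 4.18.0-305.7.1.el8_4.x86_64                 # -H (host-only) pulls in the drivers this host needs (raid456, ext4), -f overwrites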
Got the command from: mdadm raid not mounting
Finally I rebooted and, magically, after two days of googling and frustration, it works again :)