I've just finished setting up my new Ubuntu Server 10.04 machine, which has 2x500 GB SATA disks that I intend to run in RAID1. Specifically, this is what I did during installation:
Partitioning:
Disk 1 - sda:
sda1 - 500 MB, primary
sda2 - 99 GB, primary
sda3 - extended
sda5 - 399 GB, logical
Disk 2 - sdb:
sdb1 - 500 MB, primary
sdb2 - 99 GB, primary
sdb3 - extended
sdb5 - 399 GB, logical
Arrays (the equivalent mdadm commands are sketched below):
md0 - sda1+sdb1, raid1, ext2, /boot
md1 - sda2+sdb2, raid1, ext4, /
md2 - sda5+sdb5, raid1, left unformatted and not mounted during installation.
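For reference, here is a minimal sketch of what this layout corresponds to in mdadm terms. These are not the literal commands I typed (the installer ran the equivalent steps through its own partitioner):

# Hypothetical equivalents of the arrays the installer was asked to create:
$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
$ sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
$ sudo mkfs.ext2 /dev/md0    # /boot
$ sudo mkfs.ext4 /dev/md1    # /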
Everything went smoothly, but when my new system booted, this is what I saw:
$ cat /etc/fstab
# / was on /dev/md1 during installation
UUID=cc1a0b10-dd66-4c88-9022-247bff6571a6 /     ext4 errors=remount-ro 0 1
# /boot was on /dev/md0 during installation
UUID=7e37165c-ab1c-4bd4-a62b-8b98656fe1f1 /boot ext2 defaults          0 2
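(Note that fstab refers to the filesystems by UUID rather than by md device, which is why the system still boots even though the "wrong" block devices now carry those filesystems. A quick diagnostic sketch to see which device currently backs each UUID:)

$ sudo blkid | grep -i -e cc1a0b10 -e 7e37165c    # which device holds each filesystem UUID
$ ls -l /dev/disk/by-uuid/                        # the kernel's current UUID-to-device mapping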
$ cat /proc/partitions
major minor blocks name
8 0 488386584 sda
8 1 487424 sda1
8 2 97265664 sda2
8 3 1 sda3
8 5 390631424 sda5
8 16 488386584 sdb
8 17 487424 sdb1
8 18 97265664 sdb2
8 19 1 sdb3
8 21 390631424 sdb5
9 2 390631360 md2
259 0 487424 md2p1
259 1 97265664 md2p2
259 2 1 md2p3
259 3 292876224 md2p5
9 1 97265600 md1
9 0 487360 md0
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 md2p1[0]
487360 blocks [2/1] [U_]
md1 : active raid1 md2p2[0]
97265600 blocks [2/1] [U_]
md2 : active raid1 sda[0] sdb[1]
390631360 blocks [2/2] [UU]
[============>........] resync = 63.1% (246865856/390631360) finish=25.9min speed=92459K/sec
unused devices: <none>
$ mdadm --query --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Wed Jul 7 16:07:16 2010
Raid Level : raid1
Array Size : 487360 (476.02 MiB 499.06 MB)
Used Dev Size : 487360 (476.02 MiB 499.06 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Jul 7 17:13:58 2010
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : fefff930:8e4d661c:665cfb90:2bbaf5ad
Events : 0.74
Number Major Minor RaidDevice State
0 259 0 0 active sync /dev/md2p1
1 0 0 1 removed
$ sudo mdadm --query --detail /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Wed Jul 7 16:07:23 2010
Raid Level : raid1
Array Size : 97265600 (92.76 GiB 99.60 GB)
Used Dev Size : 97265600 (92.76 GiB 99.60 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Wed Jul 7 17:38:19 2010
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 68b86560:6150f422:6a741df7:3de5f08f
Events : 0.460
Number Major Minor RaidDevice State
0 259 1 0 active sync /dev/md2p2
1 0 0 1 removed
$ sudo mdadm --query --detail /dev/md2
/dev/md2:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 390631360 (372.54 GiB 400.01 GB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 7 17:37:04 2010
State : active, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Rebuild Status : 65% complete
UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
$ sudo mdadm --query --detail /dev/md2p1
/dev/md2p1:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 487424 (476.08 MiB 499.12 MB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
$ sudo mdadm --query --detail /dev/md2p2
/dev/md2p2:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 97265664 (92.76 GiB 99.60 GB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
$ sudo mdadm --query --detail /dev/md2p3
/dev/md2p3:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 1
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
$ sudo mdadm --query --detail /dev/md2p5
/dev/md2p5:
Version : 00.90
Creation Time : Wed Jul 7 16:07:31 2010
Raid Level : raid1
Array Size : 292876224 (279.31 GiB 299.91 GB)
Used Dev Size : 390631360 (372.54 GiB 400.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 7 17:37:04 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : fc7dadbe:2230a995:814dd292:d7c4bf75
Events : 0.33
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
It seems that instead of building the RAID1 arrays:
md0 = sda1+sdb1
md1 = sda2+sdb2
something like additional "sub-arrays" got built:
md2p1 = sda1+sdb1
md2p2 = sda2+sdb2
and these "sub-arrays" were configured as members of the md0 and md1 arrays. Since each array has only two disks (partitions), mdadm assembled md2p1 and md2p2 correctly from their two partitions each, but then started the main arrays md0 and md1 degraded, because each of them now consists of only one "sub-array" member.
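One way to confirm what actually happened is to examine the superblocks directly. With version 0.90 metadata the superblock sits near the end of the device, so if the last partition ends exactly at the end of the disk, its superblock is also found when mdadm scans the whole disk. A diagnostic sketch (output will vary):

$ sudo mdadm --examine /dev/sda     # finds md2's superblock at the end of the raw disk
$ sudo mdadm --examine /dev/sda1    # finds the superblock of the intended md0 member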
Now I'm puzzled: what did I do wrong? Or maybe everything is fine and I just don't understand some part of this configuration? That doesn't seem to be the case, though - md0 and md1 are clearly marked as degraded. So how do I set this right? Do I have to reinstall the system? Better to do that now, right after installation, than later, once I've put effort into configuring and securing it. But maybe there's some nice mdadm trick that will make everything right? Please help :) Thanks!
$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fefff930:8e4d661c:665cfb90:2bbaf5ad
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=68b86560:6150f422:6a741df7:3de5f08f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=fc7dadbe:2230a995:814dd292:d7c4bf75
# This file was auto-generated on Wed, 07 Jul 2010 16:18:30 +0200
# by mkconf $Id$
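(For completeness: the ARRAY lines above were written by the installer. They can be regenerated from whatever arrays are currently assembled, which is one generic way to check that the config matches reality; not a fix for this bug, just a sketch:)

$ sudo mdadm --detail --scan    # prints ARRAY lines for the arrays as currently assembled
$ sudo update-initramfs -u      # rebuild the initramfs after any edit to mdadm.conf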
Answer 1
This seems to be a fairly serious bug:
The fix will ship with Ubuntu 10.04.2; a workaround is described in Launchpad:
https://bugs.launchpad.net/ubuntu/+source/partman-base/+bug/569900
I ran into this problem while trying to get proper software RAID running on 500.1 GB drives, and it caused me plenty of grief.
As a victim of the bug, all you have to do is leave some free space after the end of the last partition, and everything will be fine :). So don't accept the default values that partman miscalculates.
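A rough sketch of how to check whether a given layout is exposed to the bug (the exact amount of "some free space" isn't critical; the point is that the last partition must not end on the disk's final sectors, where mdadm also looks for a 0.90 superblock):

# Does the last partition end at (or very near) the end of the disk?
$ DISK=/dev/sda
$ total=$(sudo blockdev --getsz $DISK)    # disk size in 512-byte sectors
$ last_end=$(sudo parted -ms $DISK unit s print | tail -1 | cut -d: -f3 | tr -d s)
$ echo "disk ends at sector $((total-1)), last partition ends at sector $last_end"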
Answer 2
After two days of trying, tearing arrays down and recreating them, and so on, I finally gave up and did a fresh install. This time I used only 3 primary partitions, namely /boot, / and /var, and everything works fine. It's not a real solution, but it works.
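After a reinstall like this, a generic sanity check (nothing specific to this setup) is to confirm that every array actually has both members:

$ cat /proc/mdstat                        # each raid1 line should show [2/2] [UU]
$ sudo mdadm --detail /dev/md0 | grep -E 'State|Devices'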