I have two software RAID volumes on Ubuntu 12.10 with different speed and reliability characteristics. I want to mirror the contents of the faster but less reliable volume, /dev/md0, to /dev/md1.
My plan is to use mdadm to build a composite RAID1 volume out of /dev/md0 and /dev/md1 and then use that, but I am running into trouble when building the volume.
mdadm warns that the component arrays appear to be part of an existing array, and after a reboot the kernel reports the array as degraded and in need of repair.
Is it possible to build a composite array out of two existing software arrays? If not, is there some other solution besides rsync that I am overlooking? Bonus points if I can get the OS to prefer reading from the faster /dev/md0.
Commands used to create the arrays:
mdadm --create /dev/md0 --level=0 -c256 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
mdadm --create /dev/md1 --level=10 -c256 --raid-devices=4 /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
mdadm --create /dev/md2 --level=1 -c256 --raid-devices=2 /dev/md0 /dev/md1
mdadm.conf:
DEVICE /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
DEVICE /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
DEVICE /dev/md0 /dev/md1
ARRAY /dev/md0 metadata=1.2 name=dn1-c.foo.com:0 UUID=b6bf6a3f:09e60e75:a04ce9d6:8a668b84
ARRAY /dev/md1 metadata=1.2 name=dn1-c.foo.com:1 UUID=d02217aa:4188e959:2a5d07e9:28e0d724
ARRAY /dev/md2 metadata=1.2 name=dn1-c.foo.com:2 UUID=c5266085:2adcbd6d:de4a8335:87e255d6
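(A note on the md126/md127 names that appear in the logs below: when mdadm assembles an array it cannot match against the configuration available at assembly time, it falls back to high device numbers counting down from 127. On Ubuntu a frequent cause is an initramfs that still carries an older copy of mdadm.conf, so the ARRAY lines above are not visible at early boot. A hedged guess at the fix, assuming that is what is happening here:

update-initramfs -u

This rebuilds the initramfs so that the early-boot mdadm sees the same ARRAY definitions as the running system.)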
Relevant dmesg output:
[7333496.257635] md: bind<xvdd>
[7333496.262758] md: bind<xvdh>
[7333496.266326] md: bind<xvdc>
[7333496.270206] md: bind<xvdf>
[7333496.274588] md: bind<xvdb>
[7333496.279373] md: bind<xvdg>
[7333496.286136] md: bind<xvde>
[7333496.290990] bio: create slab <bio-1> at 1
[7333496.291006] md/raid0:md127: md_size is 3522926592 sectors.
[7333496.291012] md: RAID0 configuration for md127 - 1 zone
[7333496.291017] md: zone0=[xvdb/xvdc/xvdd/xvde]
[7333496.291026] zone-offset= 0KB, device-offset= 0KB, size=1761463296KB
[7333496.291033]
[7333496.291048] md127: detected capacity change from 0 to 1803738415104
[7333496.291154] md: bind<xvdi>
[7333496.294186] md127: unknown partition table
[7333496.296463] md: raid10 personality registered for level 10
[7333496.296830] md/raid10:md126: not clean -- starting background reconstruction
[7333496.296839] md/raid10:md126: active with 4 out of 4 devices
[7333496.296869] md126: detected capacity change from 0 to 2198754295808
[7333496.308765] raid6: int64x1 1748 MB/s
[7333496.314083] md126: unknown partition table
[7333496.376772] raid6: int64x2 2369 MB/s
[7333496.444782] raid6: int64x4 1634 MB/s
[7333496.512757] raid6: int64x8 1673 MB/s
[7333496.580754] raid6: sse2x1 4746 MB/s
[7333496.648749] raid6: sse2x2 5793 MB/s
[7333496.716755] raid6: sse2x4 6073 MB/s
[7333496.716766] raid6: using algorithm sse2x4 (6073 MB/s)
[7333496.717708] xor: automatically using best checksumming function: generic_sse
[7333496.736747] generic_sse: 2205.000 MB/sec
[7333496.736755] xor: using function: generic_sse (2205.000 MB/sec)
[7333496.738128] md: raid6 personality registered for level 6
[7333496.738138] md: raid5 personality registered for level 5
[7333496.738146] md: raid4 personality registered for level 4
Contents of /dev:
root@dn1-c:/home/ubuntu# ls /dev/md*
/dev/md126 /dev/md127
/dev/md:
dn1-c.foo.com:0 dn1-c.foo.com:1
root@dn1-c:/home/ubuntu#
mdadm -D output:
root@dn1-c:/home/ubuntu# mdadm -D /dev/md126
/dev/md126:
Version : 1.2
Creation Time : Wed Mar 13 16:29:17 2013
Raid Level : raid10
Array Size : 2147220992 (2047.75 GiB 2198.75 GB)
Used Dev Size : 1073610496 (1023.87 GiB 1099.38 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Mar 13 17:56:48 2013
State : clean, resyncing (PENDING)
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 256K
Name : dn1-c.foo.com:1 (local to host dn1-c.foo.com)
UUID : d02217aa:4188e959:2a5d07e9:28e0d724
Events : 5
Number Major Minor RaidDevice State
0 202 80 0 active sync /dev/xvdf
1 202 96 1 active sync /dev/xvdg
2 202 112 2 active sync /dev/xvdh
3 202 128 3 active sync /dev/xvdi
root@dn1-c:/home/ubuntu# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Wed Mar 13 16:28:23 2013
Raid Level : raid0
Array Size : 1761463296 (1679.86 GiB 1803.74 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Mar 13 16:28:23 2013
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 256K
Name : dn1-c.foo.com:0 (local to host dn1-c.foo.com)
UUID : b6bf6a3f:09e60e75:a04ce9d6:8a668b84
Events : 0
Number Major Minor RaidDevice State
0 202 16 0 active sync /dev/xvdb
1 202 32 1 active sync /dev/xvdc
2 202 48 2 active sync /dev/xvdd
3 202 64 3 active sync /dev/xvde
Answer 1
Yes, it is possible. It sounds like you pointed mdadm at the individual disks rather than at the arrays, but since you did not specify the exact commands or the mdadm -D output, it is hard to know where you went wrong.
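For reference, here is a minimal sketch of building a mirror over two existing md arrays, using the device names from the question. This is an outline under assumptions, not a verified recipe: it presumes /dev/md0 and /dev/md1 are assembled, carry only leftover RAID1 member metadata from the earlier failed attempt, and hold no data you still need (--zero-superblock overwrites the superblock region on each device). The --write-mostly flag also addresses the read-preference wish, since the md layer steers reads away from members marked write-mostly:

# Clear stale member metadata left by the earlier --create attempt
# (destructive to that metadata; double-check the devices first)
mdadm --zero-superblock /dev/md0 /dev/md1

# Rebuild the mirror; marking the slower RAID10 write-mostly makes
# reads prefer the faster RAID0 (/dev/md0)
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 --write-mostly /dev/md1

After that, regenerating the ARRAY lines (mdadm --detail --scan) and refreshing the initramfs should help the nested arrays assemble in the right order at boot.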