Somehow, the two drives in my RAID 1 setup have ended up attached to two separate devices (`md0` and `md127`):

```
~# lsblk -o NAME,SIZE,MOUNTPOINT,STATE,FSTYPE,MODEL,SERIAL,UUID,LABEL
NAME     SIZE MOUNTPOINT STATE   FSTYPE            MODEL        SERIAL UUID                                 LABEL
sda      1.8T            running linux_raid_member ST32000542AS        69c415bb-716b-8e0b-b03d-721888a9cb05 turris:0
`-md0    1.8T                    btrfs                                 47039540-3842-4b2b-be2b-b8f76e88189b
sdb      1.8T            running linux_raid_member ST32000542AS        69c415bb-716b-8e0b-b03d-721888a9cb05 turris:0
`-md127  1.8T /mnt/raid          btrfs                                 47039540-3842-4b2b-be2b-b8f76e88189b
```
Why did this happen, and how can I get them back into a single device (`md0`)?
**Edit**

`mdadm -E /dev/sda`:
```
~# mdadm -E /dev/sda
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 69c415bb:716b8e0b:b03d7218:88a9cb05
           Name : turris:0  (local to host turris)
  Creation Time : Sun Jul 23 11:52:07 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 1953383360 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906766720 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 4ed3485a:ce6205f4:ecd1f9d0:6e4fb2b5

    Update Time : Wed Oct 11 21:18:53 2017
       Checksum : 8a845e99 - correct
         Events : 623

   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing)
```
`mdadm -E /dev/sdb`:
```
~# mdadm -E /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 69c415bb:716b8e0b:b03d7218:88a9cb05
           Name : turris:0  (local to host turris)
  Creation Time : Sun Jul 23 11:52:07 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 1953383360 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906766720 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 7c8a1f7a:184b254c:1b25397c:8162faa4

    Update Time : Wed Oct 11 05:58:52 2017
       Checksum : 9d058b99 - correct
         Events : 345

   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)
```
`mdadm -D /dev/md0`:
```
~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jul 23 11:52:07 2017
     Raid Level : raid1
     Array Size : 1953383360 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383360 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Wed Oct 11 21:18:53 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : turris:0  (local to host turris)
           UUID : 69c415bb:716b8e0b:b03d7218:88a9cb05
         Events : 623

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       0        0        1      removed
```
`mdadm -D /dev/md127`:
```
~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Sun Jul 23 11:52:07 2017
     Raid Level : raid1
     Array Size : 1953383360 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383360 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Wed Oct 11 05:58:52 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : turris:0  (local to host turris)
           UUID : 69c415bb:716b8e0b:b03d7218:88a9cb05
         Events : 345

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       16        1      active sync   /dev/sdb
```
**Edit 1**

Contents of `/etc/rc.local`:
```
# Put your custom commands here that should be executed once
# the system init finished. By default this file does nothing.

# Disable NCQ (fix RAID issue)
echo 1 > /sys/block/sda/device/queue_depth
echo 1 > /sys/block/sdb/device/queue_depth
# /fix

# start RAID array
mdadm --assemble --scan

exit 0
```
Contents of `/etc/mdadm/mdadm.conf`:
```
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 name=turris:0 UUID=69c415bb:716b8e0b:b03d7218:88a9cb05
```
Contents of `/etc/aliases` (slightly redacted):
```
root: cra***@*****.com
```
**Answer 1**

```
Events : 623
```

```
Events : 345
```
At this point, your two drives have diverged completely, and there is no simple way to bring them back together. Assuming the event counts accurately reflect the relative age of the data on each drive, I would suggest wiping `/dev/sdb` and re-adding it to `/dev/md0`:
1. Make sure you have an up-to-date backup of your data.
2. Take down `/dev/md127`: `umount /mnt/raid`, then `mdadm --stop /dev/md127`.
3. Make `/dev/sdb` no longer look like a RAID member: `wipefs -a /dev/sdb` (the quick way) or `dd if=/dev/zero of=/dev/sdb` (the thorough way).
4. Add it to `/dev/md0` as a new device: `mdadm --manage /dev/md0 --add /dev/sdb`.
5. Wait for the array to rebuild.
6. While you wait, turn on failure monitoring: run `nano -w /etc/mdadm.conf`, add the line `MAILADDR [email protected]` somewhere near the end, then activate the `mdadm` monitoring service (this part is distribution-specific).
7. Activate `/dev/md0`: `mdadm --run /dev/md0` (possibly not needed), followed by `mount /dev/md0 /mnt/raid`.
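The destructive steps above can be sketched as a dry-run shell script; it only prints each command so you can review the sequence before touching anything. It assumes the device names and mount point from the question (`/dev/md127`, `/dev/md0`, `/dev/sdb`, `/mnt/raid`).

```shell
#!/bin/sh
# Dry-run sketch of the rebuild sequence. Each step is only recorded
# and printed, never executed; replace the `step` helper with direct
# execution once you have verified the device names on your system.
plan=""
step() {
    plan="$plan$* ; "
    echo "would run: $*"
}

step umount /mnt/raid                        # take down the stray array
step mdadm --stop /dev/md127
step wipefs -a /dev/sdb                      # quick: erase RAID signatures
step mdadm --manage /dev/md0 --add /dev/sdb  # re-add as a fresh member
step cat /proc/mdstat                        # watch the rebuild progress
```

Keeping the backup (step 1) and monitoring (step 6) outside the script is deliberate: both require decisions that a script cannot make for you.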
As for what caused it: my guess is that at some point `/dev/sdb` had a transient failure (Seagate drives tend to do that) and dropped out of the array until the next time you rebooted the machine. Because the event counts then differed, `mdadm` could not put both drives into a single RAID-1 array and instead decided to create two single-drive RAID-1 arrays.
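To see at a glance which member holds the newer data, you can pull the `Events` counter out of each `mdadm -E` dump and compare. A minimal sketch, with the two dumps stubbed by the values from the question (on a live system you would pipe in the real `mdadm -E /dev/sda` and `mdadm -E /dev/sdb` output instead):

```shell
#!/bin/sh
# Extract the "Events" counter from an mdadm -E superblock dump.
# The inputs here are canned one-line stubs with the values from the
# question; replace the printf calls with real mdadm -E output.
events_of() {
    sed -n 's/^ *Events : \([0-9][0-9]*\).*/\1/p'
}

sda_events=$(printf '         Events : 623\n' | events_of)
sdb_events=$(printf '         Events : 345\n' | events_of)

if [ "$sda_events" -gt "$sdb_events" ]; then
    echo "/dev/sda is newer ($sda_events > $sdb_events): keep sda, rebuild sdb"
else
    echo "/dev/sdb is newer or equal: keep sdb, rebuild sda"
fi
```

With the question's values this reports `/dev/sda` as the newer member, which is why the answer rebuilds `/dev/sdb` from it.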