I have a Debian Jessie box that lost the RAID volume containing the OS, so I booted a live USB to try to recover the attached SAS-connected external drive bay that holds my data RAID6. It sees 3 of the 4 drives, like this:
>: cat /proc/mdstat
Personalities :
md0 : inactive sda1[0](S) sdc1[2](S) sdb1[1](S)
11718349824 blocks super 1.2
unused devices: <none>
And again:
>: mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 3
Persistence : Superblock is persistent
State : inactive
Name : backup1:0
UUID : a7946015:259ae101:1fed525f:5766e9d5
Events : 381
Number Major Minor RaidDevice
- 8 1 - /dev/sda1
- 8 17 - /dev/sdb1
- 8 33 - /dev/sdc1
So it thinks this is some weird RAID0? Is there a way to tell it to treat this as a RAID6 again, without deleting the data and without marking the drives as spares? I was thinking something like:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=6 --raid-devices=4 --chunk=64 --name=backup1:0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 --assume-clean
even though /dev/sdd1 isn't showing up. It seems like I need to reassemble with some flags to avoid rewriting the data (a sketch of that approach is at the end of this question), or remove/re-add each disk to md0 by hand (but RAID6 needs several disks present at once, so how would that even work?). fdisk on /dev/sdd shows:
Disk /dev/sdd: 3.7 TiB, 3999999721472 bytes, 7812499456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 27419FEB-5830-4C44-9DC9-00828D0F115A
Device Start End Sectors Size Type
/dev/sdd1 2048 7812497407 7812495360 3.7T Linux RAID
So there is a RAID partition there, as I expected, but when I examine it, it shows:
mdadm --examine /dev/sdd1
mdadm: No md superblock detected on /dev/sdd1.
unlike the other drives:
mdadm --examine /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : a7946015:259ae101:1fed525f:5766e9d5
Name : backup1:0
Creation Time : Tue Jul 19 17:34:55 2016
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 7812233216 (3725.16 GiB 3999.86 GB)
Array Size : 7812233216 (7450.33 GiB 7999.73 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 4d1d775e:eef629d4:03f15e09:f1762443
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Jul 19 18:17:36 2016
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 8365777c - correct
Events : 381
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
I'm guessing it could repair /dev/sdd1 during a rebuild? Am I going about this the right way? Basically, I just don't want to overwrite the data.
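For reference, the non-destructive route usually tried before any --create is a forced assemble of the three members that still have superblocks, followed by re-adding the blank disk so it rebuilds from parity. A minimal sketch using the device names above (--force, --run, and --add are standard mdadm options; whether this succeeds depends on the state of the surviving superblocks):

mdadm --stop /dev/md0
mdadm --assemble --force --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1   # start degraded with 3 of 4 members
mdadm /dev/md0 --add /dev/sdd1   # rebuild the wiped disk from parity

Unlike --create, this only updates metadata on the surviving members; the data blocks are left alone.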
Answer 1
OK, I stopped /dev/md0 and then created it as RAID6, like this:
mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@debian:/home/user >: cat /proc/mdstat
Personalities :
unused devices: <none>
root@debian:/home/user >: mdadm --create /dev/md0 --level=6 --raid-devices=4 --chunk=64 --name=backup1:0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 --assume-clean
mdadm: /dev/sda1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Jul 19 17:34:55 2016
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Jul 19 17:34:55 2016
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Jul 19 17:34:55 2016
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@debian:/home/user >: cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
7812233216 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 30/30 pages [120KB], 65536KB chunk
unused devices: <none>
Here are some more details on the reassembled RAID:
mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Mar 1 22:04:57 2018
Raid Level : raid6
Array Size : 7812233216 (7450.33 GiB 7999.73 GB)
Used Dev Size : 3906116608 (3725.16 GiB 3999.86 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Mar 1 22:04:57 2018
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : backup1:0
UUID : c3ef766b:2fe9581a:5a906461:d52ee71e
Events : 0
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
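One thing worth noting in that output: --create wrote fresh superblocks, so the array UUID changed (c3ef766b… here vs. the original a7946015…), and any old mdadm.conf entry keyed to the old UUID will no longer match. On Debian the new definition can be persisted roughly like this (standard paths assumed):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append the new ARRAY line (then prune the stale one by hand)
update-initramfs -u                              # rebuild the initramfs so the array assembles at boot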
So I guess it worked! Now when I go to mount it, it says:
root@debian:/home/user >: mount /dev/md0 raid6
mount: unknown filesystem type 'LVM2_member'
So now I have to figure out how to bring the LVM layer back up, but that's another question. I hope this helps someone else with a similar problem.
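For what it's worth, 'LVM2_member' just means md0 is an LVM physical volume rather than a filesystem, so the data is likely intact one layer down. The usual sequence is to activate the volume group and then mount a logical volume inside it; a minimal sketch, with the VG/LV names as placeholders since they aren't shown above:

pvscan        # confirm /dev/md0 is detected as a physical volume
vgscan        # find the volume group on it
vgchange -ay  # activate all logical volumes in that group
lvs           # list the LVs and their VG names
mount /dev/mapper/<vg>-<lv> raid6   # substitute the names reported by lvs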