I'm running a RAID 5 across 4 drives on a Raspberry Pi. I shut the Pi down every night and start it again the next morning. Sometimes the array comes up faulty. Perhaps the Pi isn't bringing up the drives correctly, but dmesg mentions no problems:
[ 10.538758] scsi 0:0:0:0: Direct-Access ACASIS 8034 PQ: 0 ANSI: 6
[ 10.541035] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 10.541282] sd 0:0:0:0: [sda] Write Protect is off
[ 10.541290] sd 0:0:0:0: [sda] Mode Sense: 67 00 10 08
[ 10.541658] scsi 0:0:0:1: Direct-Access ACASIS 8034 PQ: 0 ANSI: 6
[ 10.541767] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 10.542490] sd 0:0:0:0: [sda] Optimal transfer size 33553920 bytes
[ 10.544213] sd 0:0:0:1: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 10.544465] sd 0:0:0:1: [sdb] Write Protect is off
[ 10.544473] sd 0:0:0:1: [sdb] Mode Sense: 67 00 10 08
[ 10.544919] sd 0:0:0:1: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 10.545643] sd 0:0:0:1: [sdb] Optimal transfer size 33553920 bytes
[ 10.603258] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 10.603350] sd 0:0:0:1: Attached scsi generic sg1 type 0
[ 10.631296] sd 0:0:0:0: [sda] Attached SCSI disk
[ 10.633209] sd 0:0:0:1: [sdb] Attached SCSI disk
[ 11.022152] usb 2-1: new SuperSpeed Gen 1 USB device number 3 using xhci_hcd
[ 11.043358] usb 2-1: New USB device found, idVendor=1058, idProduct=0a10, bcdDevice=80.34
[ 11.043370] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=5
[ 11.043376] usb 2-1: Product: Go To Final Lap
[ 11.043381] usb 2-1: SerialNumber: 1234567890123
[ 11.051424] scsi host1: uas
[ 11.052496] scsi 1:0:0:0: Direct-Access ACASIS 8034 PQ: 0 ANSI: 6
[ 11.054130] sd 1:0:0:0: Attached scsi generic sg2 type 0
[ 11.058494] sd 1:0:0:0: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 11.058746] sd 1:0:0:0: [sdc] Write Protect is off
[ 11.058754] sd 1:0:0:0: [sdc] Mode Sense: 67 00 10 08
[ 11.059094] scsi 1:0:0:1: Direct-Access ACASIS 8034 PQ: 0 ANSI: 6
[ 11.059279] sd 1:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 11.060062] sd 1:0:0:0: [sdc] Optimal transfer size 33553920 bytes
[ 11.061458] sd 1:0:0:1: Attached scsi generic sg3 type 0
[ 11.061797] sd 1:0:0:1: [sdd] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 11.062062] sd 1:0:0:1: [sdd] Write Protect is off
[ 11.062072] sd 1:0:0:1: [sdd] Mode Sense: 67 00 10 08
[ 11.062546] sd 1:0:0:1: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 11.063295] sd 1:0:0:1: [sdd] Optimal transfer size 33553920 bytes
[ 11.145514] sd 1:0:0:1: [sdd] Attached SCSI disk
[ 11.146878] sd 1:0:0:0: [sdc] Attached SCSI disk
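To see when md tried to assemble relative to the disks attaching, the md driver's own messages can be filtered out of the same log; a small sketch, assuming the array name md127 shown below:
dmesg | grep -E 'md/raid|md127|Attached SCSI disk'   # line up assembly attempts with disk arrival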
I noticed the array was inactive, and my 4th drive was in the wrong position: /dev/sdd should be [3], in my opinion, not [4]:
Personalities :
md127 : inactive sdd[4](S) sdc[2](S) sda[0](S) sdb[1](S)
7813529952 blocks super 1.2
I stopped the array and forced a re-assembly.
root@ncloud:~# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
root@ncloud:~# mdadm --assemble --force /dev/md127 /dev/sd[abcd]
mdadm: forcing event count in /dev/sdc(2) from 16244 upto 16251
mdadm: forcing event count in /dev/sdd(3) from 16244 upto 16251
mdadm: clearing FAULTY flag for device 2 in /dev/md127 for /dev/sdc
mdadm: Marking array /dev/md127 as 'clean'
mdadm: /dev/md127 has been started with 4 drives.
root@ncloud:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid5 sda[0] sdd[4] sdc[2] sdb[1]
5860147200 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/15 pages [0KB], 65536KB chunk
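The (auto-read-only) state clears on the first write to the array; it can also be dropped explicitly. A minimal sketch, assuming the same array name:
mdadm --readwrite /dev/md127   # clear the auto-read-only flag immediately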
Details of the individual drives, from mdadm --examine:
/dev/sda:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 9fd0e97e:8379c390:5b0dac21:462d643c
Name : ncloud:vo1 (local to host ncloud)
Creation Time : Tue Jan 25 11:23:04 2022
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
Used Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : acc331ad:9f38d203:2e5be32f:8f149f1b
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Oct 17 06:27:09 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 90184840 - correct
Events : 16251
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 0
Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 9fd0e97e:8379c390:5b0dac21:462d643c
Name : ncloud:vo1 (local to host ncloud)
Creation Time : Tue Jan 25 11:23:04 2022
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
Used Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : 0aa352e2:0e6e6da8:76e7f142:a6a97fb0
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Oct 17 06:27:09 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 11c37e6f - correct
Events : 16251
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 1
Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 9fd0e97e:8379c390:5b0dac21:462d643c
Name : ncloud:vo1 (local to host ncloud)
Creation Time : Tue Jan 25 11:23:04 2022
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
Used Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : 56ce4443:3dae7622:91bb141e:9da1916d
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Oct 17 06:24:41 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 84f2b563 - correct
Events : 16244
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 9fd0e97e:8379c390:5b0dac21:462d643c
Name : ncloud:vo1 (local to host ncloud)
Creation Time : Tue Jan 25 11:23:04 2022
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
Used Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : 5ccd1f95:179ed088:5fdd6f33:502f7804
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Oct 17 06:24:41 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : e96953c6 - correct
Events : 16244
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
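Putting the four superblocks side by side makes the divergence easy to spot; a one-liner sketch over the same devices:
mdadm --examine /dev/sd[abcd] | grep -E '^/dev/|Update Time|Events|Array State'
sda and sdb carry event count 16251 (state AA.A), while sdc and sdd stopped at 16244 (state AAAA) about two and a half minutes earlier.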
How do I fix this properly without losing data?
Answer 1
Drive order in a RAID doesn't matter, because every member carries an on-disk metadata block (the superblock) that records, among other things, which slot this particular drive is supposed to occupy in the array. The [4] shown in /proc/mdstat is md's internal device number, not the drive's RAID role; the role recorded in the superblock is what counts.
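A quick way to confirm that is to read the role straight out of each superblock; a minimal sketch, assuming the device names from the question:
for d in /dev/sd[abcd]; do
    echo -n "$d: "                        # label the line with the device name
    mdadm --examine "$d" | grep 'Device Role'
done
Here /dev/sdd reports "Active device 3" regardless of the order in which the kernel enumerated it.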
Most likely your array is assembled as early as possible, as soon as enough members have appeared, not necessarily all of them. For RAID 5, any n-1 of the data disks is sufficient, so it gets assembled (degraded) once three of the four disks are available. And in your particular case the USB disks show up late, because they are slow and they are attached over USB.
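If the machine has to keep cold-booting with slow USB enclosures, a stopgap (not a fix) is to tear down whatever was auto-assembled early and re-assemble once all device events have settled; a minimal sketch, assuming md127 as above:
udevadm settle              # wait for pending device events to finish
mdadm --stop /dev/md127     # tear down a partial or degraded early assembly
mdadm --assemble --scan     # assemble again, now that all members are present
This only papers over the ordering problem; the advice below still stands.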
Don't build RAID arrays over USB. Don't build RAID 5 arrays out of hard drives, especially modern consumer-grade ones. Either practice on its own is a recipe for data loss, and the two combined are doubly so. Neither is considered a reasonable professional practice, which is what this site is all about.
A Raspberry Pi is not suitable for building RAID. So this is not a problem you can "fix" or "repair"; the path was wrong from the start. Possibly, a "Compute Module" style Raspberry Pi plugged into a purpose-built carrier board that exposes PCIe to a SAS/SATA HBA, or to a PCIe switch and a bunch of NVMe drives, could do software RAID in an acceptable way. But that is not possible on a plain "Raspberry Pi".