How do I get my original RAID setup running again?

I was trying to upgrade my RAID 5 array to larger drives. I followed the guide found at:

https://rabexc.org/posts/mdadm-replace

But before I even added the larger drives, I could tell something was wrong. Two of the devices seemed to disappear, so I tried re-adding the original drives. It looks like mdadm can see all 4 of them, but I'm not sure what went wrong. All 4 of my original drives are reconnected and Ubuntu can see them. How do I get the original setup running again?

I'm running Ubuntu 20.

The 4 drives are 4 TB SATA physical disks.

I think this output is probably the most helpful:

sudo mdadm --examine /dev/sd[a-z]1
/dev/sda1:
   MBR Magic : aa55
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : fc1484e3:9dd42927:050c3334:510f959c

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:53:17 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : e373a9f0 - correct
         Events : 29259

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : A..A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : ef83a829:fb3b15d5:a8efbc06:6e1ce6ff

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:48:11 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : d606fcd - correct
         Events : 29256

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : 33674b45:e1670e28:6357bac7:54feeea5

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:45:08 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : e301c7e1 - correct
         Events : 29253

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : effdc94e:e1c09065:676782d9:11099abc

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:53:17 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : 5274d44e - correct
         Events : 29259

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : A..A ('A' == active, '.' == missing, 'R' == replacing)

Update:

It looks like my drives are out of sync?

/dev/sdb1:
         Events : 29259
   Device Role : Active device 0
/dev/sdc1:
         Events : 29256
   Device Role : Active device 1
/dev/sdd1:
         Events : 29253
   Device Role : Active device 2
/dev/sde1:
         Events : 29259
   Device Role : Active device 3
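The per-device summary above can be pulled out of the full `--examine` dump with a simple filter. In the sketch below the output is stored in a variable purely for illustration; on the live system you would pipe `sudo mdadm --examine /dev/sd[b-e]1` into the same `grep`.

```shell
# Trimmed copy of the --examine output above, held in a variable so the
# filter can be demonstrated without touching real disks.
examine_output='/dev/sdb1:
         Events : 29259
   Device Role : Active device 0
/dev/sdc1:
         Events : 29256
   Device Role : Active device 1'

# Keep only the device name, event counter, and role lines.
printf '%s\n' "$examine_output" | grep -E '^/dev/|Events|Device Role'
```

The event counter is what mdadm uses to decide which members are current; members whose counters lag behind (here sdc1 and sdd1) are treated as stale and left out of a normal assemble.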

Answer 1

I found that the command below fixed the sync issue. I understand it at a basic level, but I also don't know what I don't know about it.

One thing I learned is that I don't need to list the /dev/sdX1 drives in any particular order. Being software RAID on Linux, mdadm reads identifying metadata from the drives themselves and uses that to assemble the array; I'm only telling it which devices to consider.

mdadm --assemble --run --force --update=resync /dev/md127 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
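For reference, here is a sketch of the same recovery using the device names from the `--examine` output above (sdb1 through sde1; drive letters can shift between boots, so re-check with `--examine` before running anything). `--force` lets mdadm assemble despite the mismatched event counters, and `--update=resync` marks the array so parity is rebuilt after assembly. The array name `/dev/md127` matches the answer's command; adjust it to your own setup.

```shell
# Assumption: the array device is /dev/md127 and the members are the four
# partitions identified by --examine above; adjust to your own layout.
sudo mdadm --stop /dev/md127        # release any half-assembled array first
sudo mdadm --assemble --run --force --update=resync \
    /dev/md127 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat                    # the resync progress shows up here
sudo mdadm --detail /dev/md127     # confirm State and the active device count
```

Note that `--force` can bring back a member whose data is slightly behind, so check the filesystem (and restore from backup anything written in the window between the event counters) once the resync completes.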
