Drive missing from software RAID array after reboot

I have a problem: I built a software RAID6 array out of six hard drives. Every time the system reboots, the array comes up inactive, and after I start it again it is degraded, always missing the same drive (/dev/sde). When I re-add the drive, the array reassembles and runs fine again, but I would like to fix the underlying problem.
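
For reference, this is roughly what I run to bring the array back after each reboot (a sketch; device names as they appear on my system):

    # Stop the inactive array, reassemble it, then re-add the missing member
    sudo mdadm --stop /dev/md0
    sudo mdadm --assemble --scan
    sudo mdadm /dev/md0 --add /dev/sde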

Specs: OS: Ubuntu 20.04.1 LTS

Drives (x6)

Model Family:     Western Digital Red
Device Model:     WDC WD100EFAX-68LHPN0
Firmware Version: 83.H0A83
User Capacity:    10.000.831.348.736 bytes [10,0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Apr 20 22:45:13 2021 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

cat /proc/mdstat (after reboot):

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdc[2](S) sdf[5](S) sdd[3](S) sdb[1](S) sda[0](S)
      48831523840 blocks super 1.2
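
All members come up flagged as spares ((S)) and the array stays inactive. One way to try to start such a partially assembled array without a full stop/reassemble cycle (assuming the members themselves are healthy) is mdadm's run mode:

    # Attempt to start the inactive array with the members it currently has
    sudo mdadm --run /dev/md0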

mdadm --detail /dev/md0 (after re-adding /dev/sde):

               Version : 1.2
     Creation Time : Wed Aug 12 20:25:02 2020
        Raid Level : raid6
        Array Size : 39065219072 (37255.50 GiB 40002.78 GB)
     Used Dev Size : 9766304768 (9313.87 GiB 10000.70 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Apr 14 14:20:42 2021
             State : clean, degraded, recovering
    Active Devices : 5
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 13% complete

              Name : 
              UUID : c41002b3:537a96c4:d6a3e2a9:f6debd2b
            Events : 87437

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       6       8       64        4      spare rebuilding   /dev/sde
       5       8       80        5      active sync   /dev/sdf
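
The rebuild after re-adding takes a while on 10 TB drives; I follow its progress with something like:

    # Refresh the rebuild status once a minute
    watch -n 60 cat /proc/mdstat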

mdadm --detail /dev/md0 (after reassembly):

    /dev/md0:
           Version : 1.2
     Creation Time : Wed Aug 12 20:25:02 2020
        Raid Level : raid6
        Array Size : 39065219072 (37255.50 GiB 40002.78 GB)
     Used Dev Size : 9766304768 (9313.87 GiB 10000.70 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Mar 10 14:08:55 2021
             State : clean
    Active Devices : 6
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name :
              UUID : c41002b3:537a96c4:d6a3e2a9:f6debd2b
            Events : 72271


    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       6       8       64        4      active sync   /dev/sde
       5       8       80        5      active sync   /dev/sdf
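
Since it is always the same drive (/dev/sde) that drops out, comparing its on-disk superblock against a healthy member (event count, device role, update time) might reveal something; a sketch:

    # Compare the RAID superblocks of the flaky member and a healthy one
    sudo mdadm --examine /dev/sde
    sudo mdadm --examine /dev/sda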

/etc/fstab:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/nvme0n1p5 during installation
UUID=50ba9118-24c0-4483-bc27-6e8d2fc3b844 /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=F92D-03DC  /boot/efi       vfat    umask=0077      0       1
/swapfile                                 none            swap    sw              0       0


#Mount RAID-Array (last edit: 16/04/2021; lukas)
#/dev/md0 /mnt/md1 ext4 defaults,nofail,discard 0 0
#UUID= 29994073-b1c9-4b0f-8168-77c2475d6133
UUID=29994073-b1c9-4b0f-8168-77c2475d6133 /mnt/md1 ext4 defaults 0 0
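
The UUID in the last line is meant to match the filesystem on the assembled array; it can be verified with blkid, as the header comment suggests:

    # Confirm the filesystem UUID referenced in /etc/fstab
    sudo blkid /dev/md0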

Boot log (the RAID device seems to time out):

[ TIME ] Timed out waiting for device 3-b1c9-4b0f-8168-77c2475d6133.
[DEPEND] Dependency failed for /mnt/md1.
[DEPEND] Dependency failed for Local File Systems.
         Starting Load AppArmor profiles...
         Starting Set console font and keymap...
         Starting Create final runt…dir for shutdown pivot root...
         Starting Tell Plymouth To Write Out Runtime Data...
         Starting Create Volatile Files and Directories...
[  OK  ] Finished Create final runt…e dir for shutdown pivot root.
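
The timeout suggests the array is not assembled by the time local filesystems are mounted. One thing worth checking (a sketch of a possible cause, not a confirmed fix) is whether the array is recorded in /etc/mdadm/mdadm.conf and whether that configuration has made it into the initramfs:

    # Compare the live array definition with the boot-time configuration,
    # then rebuild the initramfs so any changes are picked up at boot
    sudo mdadm --detail --scan        # should print an ARRAY line for /dev/md0
    cat /etc/mdadm/mdadm.conf
    sudo update-initramfs -u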
