How to replace a failed disk in a Linux RAID (mdadm)

I get an input/output error when I list the drives with fdisk -l:

Failed to read extended partition table (offset=3886721022): Input/output error
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7ac0eeb9

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1             2048 3886718975 3886716928  1.8T fd Linux raid autodetect
/dev/sda2       3886721022 3907028991   20307970  9.7G  5 Extended

Partition 2 does not start on physical sector boundary.


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7ac0eeb9

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sdb1             2048 3886718975 3886716928  1.8T fd Linux raid autodetect
/dev/sdb2       3886721022 3907028991   20307970  9.7G  5 Extended
/dev/sdb5       3886721024 3907028991   20307968  9.7G fd Linux raid autodetect

Partition 2 does not start on physical sector boundary.


Disk /dev/md1: 9.7 GiB, 10389291008 bytes, 20291584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md0: 1.8 TiB, 1989864849408 bytes, 3886454784 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Then, /proc/mdstat:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[2]
      1943227392 blocks super 1.2 [2/1] [_U]
      bitmap: 15/15 pages [60KB], 65536KB chunk

md1 : active raid1 sdb5[2]
      10145792 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

Then: mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Tue Mar 20 06:41:14 2018
     Raid Level : raid1
     Array Size : 1943227392 (1853.21 GiB 1989.86 GB)
  Used Dev Size : 1943227392 (1853.21 GiB 1989.86 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Nov 23 16:21:07 2023
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : impacs:0
           UUID : 619c5551:3e475969:80882df7:7da3f864
         Events : 18817061

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       2       8       17        1      active sync   /dev/sdb1

Then: mdadm --detail /dev/md1

/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar 20 06:41:40 2018
     Raid Level : raid1
     Array Size : 10145792 (9.68 GiB 10.39 GB)
  Used Dev Size : 10145792 (9.68 GiB 10.39 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Nov 23 14:19:34 2023
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : impacs:1
           UUID : 1b9a0dc4:cc30cd7e:274fefd9:55266436
         Events : 70615

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       2       8       21        1      active sync   /dev/sdb5

I don't see /dev/sda anywhere in there. How can I tell whether it is still being used in the RAID?
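
One direct way to check (a sketch, assuming the kernel can still read the device at all) is to ask mdadm whether each partition carries an md superblock, and to cross-check which devices the running arrays are built from:

# Print the md superblock, if any; on a working array member this
# shows the array UUID and device role, on a dead disk it errors out.
mdadm --examine /dev/sda1

# Show how the block devices nest into md0/md1.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT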

Answer 1

The only clue I can see is this line:

/dev/sda1             2048 3886718975 3886716928  1.8T fd Linux raid autodetect

It looks like the partition table on /dev/sda (and probably /dev/sda1 as well) is so badly damaged that md no longer even tries to keep the disk in the arrays: neither sda1 nor sda5 is available any more. So I would say it is 99% certain that /dev/sda, one half of your RAID mirrors, has died and needs to be replaced.
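
For completeness, once a replacement disk is in the slot, the usual recovery sequence on a setup like this is roughly the following. This is a sketch, not a verified procedure: it assumes the new disk enumerates as /dev/sda again and that /dev/sdb's DOS partition layout should simply be mirrored onto it, so double-check device names before running anything.

# Copy the partition table from the surviving disk to the new one.
sfdisk -d /dev/sdb | sfdisk /dev/sda

# Add the fresh partitions to the degraded mirrors; md begins
# rebuilding onto them immediately.
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda5

# Watch the resync progress.
watch cat /proc/mdstat

Since mdadm --detail already lists the old member as removed, there is nothing to --fail or --remove first. If the machine boots from these disks, it is also worth reinstalling the boot loader on the new one (e.g. grub-install /dev/sda) once the partitions exist.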
