One of the two disk members of my RAID-5 became invisible after a sudden shutdown?

I have two disks (sda and sdc; I don't know the fate of any others, if there were more) assembled as a RAID-5. After a few sudden shutdowns, one of the RAID members became invisible and I cannot reassemble the array. I am not very familiar with Linux filesystems, RAID-5 included. As far as I understand, thanks to RAID-5 I should not lose any files, but I don't know whether my sdc disk is damaged. If it is not, can the array be reassembled without losing anything? And if it is damaged, how can I still get at my files? I would appreciate any help!
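
Before anything else, I gather I can check whether the drive itself is failing with SMART. A minimal sketch of what I plan to run (assuming the smartmontools package is available; /dev/sdc is the suspect disk):

# Assumption: Debian/Ubuntu packaging
sudo apt install smartmontools

# Quick overall health verdict for the suspect disk
sudo smartctl -H /dev/sdc

# Full attribute dump; reallocated/pending sector counts are the ones to watch
sudo smartctl -a /dev/sdc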

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

NAME     SIZE FSTYPE                        TYPE MOUNTPOINT
loop0   27,1M squashfs                      loop /snap/snapd/7264
loop1     55M squashfs                      loop /snap/core18/1705
loop2  240,8M squashfs                      loop /snap/gnome-3-34-1804/24
loop3   62,1M squashfs                      loop /snap/gtk-common-themes/1506
loop4   49,8M squashfs                      loop /snap/snap-store/433
sda      3,7T promise_fasttrack_raid_member disk 
└─sda1   3,7T linux_raid_member             part 
sdb    465,8G                               disk 
├─sdb1 186,4G ext4                          part /
├─sdb2     1K                               part 
├─sdb3 268,2G ext4                          part /home
└─sdb5  11,2G swap                          part [SWAP]

$ sudo parted -l

Model: ATA WDC WD40EZRX-00S (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  4001GB  4001GB  ext4               raid


Model: ATA Samsung SSD 840 (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size    Type      File system     Flags
 1      1049kB  200GB  200GB   primary   ext4            boot
 2      200GB   212GB  12,0GB  extended
 5      200GB   212GB  12,0GB  logical   linux-swap(v1)
 3      212GB   500GB  288GB   primary   ext4

I can only see the disk in the "Disks" application in Ubuntu, and it shows the sdc disk as "No Media".

(screenshot: GNOME Disks reporting "No Media" for /dev/sdc)
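
To check whether the kernel sees the drive at all, I assume grepping the kernel log for the device is the way (a sketch; the exact messages will vary):

# Any detection or I/O errors mentioning sdc
sudo dmesg | grep -iE 'sdc|ata[0-9].*(error|fail)'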

The sda partition layout seems to be correct.

(screenshot: GNOME Disks showing the /dev/sda partition layout)

I could not reassemble the array with:

sudo mdadm --assemble --scan
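
Since --scan depends on mdadm.conf and autodetection, I suppose an explicit, verbose assembly naming the member partitions directly would at least tell me why it refuses (a sketch):

# Name the members explicitly and ask mdadm to explain what it does
sudo mdadm --assemble --verbose /dev/md0 /dev/sda1 /dev/sdc1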

While upgrading to Ubuntu 20, I got the following errors:

grub-probe: warning: disk does not exist, so falling back to partition device /dev/sdc1.
grub-probe: error: cannot read `/dev/sdc1': Input/output error.

Update:

After a reboot the disk showed up again (I don't know why or how), but I still cannot assemble and mount the array.

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME     SIZE FSTYPE                        TYPE MOUNTPOINT
loop0     55M squashfs                      loop /snap/core18/1705
loop1   61,9M squashfs                      loop /snap/core20/1270
loop2   62,1M squashfs                      loop /snap/gtk-common-themes/1506
loop3  108,1M squashfs                      loop /snap/remmina/5130
loop4   49,8M squashfs                      loop /snap/snap-store/433
loop5  240,8M squashfs                      loop /snap/gnome-3-34-1804/24
loop6   27,1M squashfs                      loop /snap/snapd/7264
loop7  247,9M squashfs                      loop /snap/gnome-3-38-2004/87
loop8   55,5M squashfs                      loop /snap/core18/2284
loop9   43,4M squashfs                      loop /snap/snapd/14549
loop10     4K squashfs                      loop /snap/bare/5
loop11  65,2M squashfs                      loop /snap/gtk-common-themes/1519
loop12  54,2M squashfs                      loop /snap/snap-store/558
loop13   219M squashfs                      loop /snap/gnome-3-34-1804/77
sda      3,7T promise_fasttrack_raid_member disk 
└─sda1   3,7T linux_raid_member             part 
sdb    465,8G                               disk 
├─sdb1 186,4G ext4                          part /
├─sdb2     1K                               part 
├─sdb3 268,2G ext4                          part /home
└─sdb5  11,2G swap                          part [SWAP]
sdc      3,7T promise_fasttrack_raid_member disk 
└─sdc1   3,7T linux_raid_member             part 

I tried assembling with mdadm and then mounting it, but it failed:

$ sudo mdadm --assemble --scan --force
$ sudo mount /dev/md0 /home/bilgen/
mount: /home/bilgen: can't read superblock on /dev/md0.


$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdc1[2] sda1[1]
      7813771264 blocks super 1.2
       
unused devices: <none>
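
So both members are attached to md0, but the array is inactive, which I assume is why the superblock cannot be read: there is nothing to read until the array is actually started. A sketch of what I understand the next step to be (tear down the half-assembled array, then force-assemble it from the named members):

# Tear down the inactive array first
sudo mdadm --stop /dev/md0

# Force assembly from the two surviving members, named explicitly
sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdc1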


$ sudo mdadm --examine --scan
ARRAY /dev/md/0  metadata=1.2 UUID=92fd33d9:351dcee8:c809916a:b46055e5 name=zencefil:0

I examined the RAID members sda1 and sdc1. sda1's state is clean, but sdc1's state is active; I think this matters for debugging the problem. The output confirms this is a RAID-5 with three devices, so one disk must have already failed before I got the computer, since I have only ever seen two disks. I only realize this now. (I pull the key fields out side by side in the sketch after the two dumps below.)

$ sudo mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 92fd33d9:351dcee8:c809916a:b46055e5
           Name : zencefil:0
  Creation Time : Thu Mar 12 00:17:27 2015
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7813771264 (3725.90 GiB 4000.65 GB)
     Array Size : 7813770240 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : 6212bf12:f2bacb20:6a10a588:ac50cb2d

    Update Time : Mon Jan 24 20:04:23 2022
       Checksum : cfb60638 - correct
         Events : 6360009

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : .AA ('A' == active, '.' == missing, 'R' == replacing)

$ sudo mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 92fd33d9:351dcee8:c809916a:b46055e5
           Name : zencefil:0
  Creation Time : Thu Mar 12 00:17:27 2015
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7813771264 (3725.90 GiB 4000.65 GB)
     Array Size : 7813770240 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : active
    Device UUID : daef079b:176d487a:cb152bdb:3d2548f9

    Update Time : Mon Jan 24 20:04:23 2022
       Checksum : cf7e70ca - correct
         Events : 6360009

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : .AA ('A' == active, '.' == missing, 'R' == replacing)
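
To compare the two superblocks side by side, this sketch pulls out just the fields that seem to matter (State, Events, Device Role):

# Matching Events counters mean the members are still in sync
for d in /dev/sda1 /dev/sdc1; do
    echo "== $d =="
    sudo mdadm -E "$d" | grep -E 'State :|Events :|Device Role'
done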

The dmesg output may help to understand the situation:

dmesg | grep md0
[    8.147671] md/raid:md0: not clean -- starting background reconstruction
[    8.147750] md/raid:md0: device sdc1 operational as raid disk 2
[    8.147755] md/raid:md0: device sda1 operational as raid disk 1
[    8.149553] md/raid:md0: cannot start dirty degraded array.
[    8.149943] md/raid:md0: failed to run raid set.
[ 2878.821074] EXT4-fs (md0): unable to read superblock
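
From what I have read, a "dirty degraded" array is one that went down in the middle of a write while already missing a member, and the kernel refuses to auto-start it. If I understand correctly, it can be started anyway, either with mdadm --run or via an md_mod module parameter (a sketch; I have not dared to run this yet):

# Try to start the degraded array as it stands
sudo mdadm --run /dev/md0

# Or allow the md driver to start dirty degraded arrays
# (module parameter; resets on reboot), then retry the assembly
echo 1 | sudo tee /sys/module/md_mod/parameters/start_dirty_degraded
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan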

Things get even more interesting:

mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Mar 12 00:17:27 2015
        Raid Level : raid5
     Used Dev Size : 18446744073709551615
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Mon Jan 24 20:04:23 2022
             State : active, FAILED, Not Started 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : unknown

              Name : zencefil:0
              UUID : 92fd33d9:351dcee8:c809916a:b46055e5
            Events : 6360009

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed

       -       8        1        1      sync   /dev/sda1
       -       8       33        2      sync   /dev/sdc1
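
If the array ever does start, my plan (a sketch) is to mount it read-only first and copy everything off before attempting any repair:

# Mount read-only so nothing else gets written to a degraded array
sudo mount -o ro /dev/md0 /mnt

# Sanity-check that the filesystem is visible before copying data off
ls /mnt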
