mdadm RAID inactive and missing a device after reboot

I have a machine with an mdadm RAID1 array consisting of two 8 TB disks (/dev/sdc and /dev/sdd). This worked fine, and I added a bunch of data to it.

On another machine I did several dry runs of growing a 2-disk RAID1 into (eventually) a 5-disk RAID5, and it worked as expected.

TL;DR

What am I missing when growing a 2-disk RAID1 array into a 3-disk RAID5 array? After a reboot the array is inactive and a device is missing!

What I did:

  • Changed the RAID level to 5: mdadm --grow /dev/md0 -l 5
  • Added the spare HDD: mdadm /dev/md0 --add /dev/sdb
  • Grew the RAID to use the new disk: mdadm --grow /dev/md0 -n 3
  • After that, the sync started
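
For reference, the full sequence using the long option names would look roughly like this (a sketch of the steps above, not a tested recipe):

```shell
# Convert the 2-disk RAID1 to RAID5 (still on 2 devices at this point)
sudo mdadm --grow /dev/md0 --level=5

# Add the new disk; it joins the array as a spare
sudo mdadm /dev/md0 --add /dev/sdb

# Grow the array onto the new disk; this kicks off the reshape
sudo mdadm --grow /dev/md0 --raid-devices=3

# Watch the reshape progress
cat /proc/mdstat
```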

This is the output during the sync:

user@server:~$ sudo mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul 19 17:56:28 2022
        Raid Level : raid5
        Array Size : 7813894464 (7.28 TiB 8.00 TB)
     Used Dev Size : 7813894464 (7.28 TiB 8.00 TB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Aug 25 18:25:21 2022
             State : clean, reshaping
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

    Reshape Status : 24% complete
     Delta Devices : 1, (2->3)

              Name : ubuntu-server:0
              UUID : 9d1e2e6e:14dc5960:011daf54:xxxxxxxx
            Events : 77556

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       2       8       16        2      active sync   /dev/sdb

And the disks:

user@server:~$ lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
loop0                    7:0    0    62M  1 loop  /snap/core20/1593
loop1                    7:1    0    62M  1 loop  /snap/core20/1611
loop2                    7:2    0  79.9M  1 loop  /snap/lxd/22923
loop3                    7:3    0   103M  1 loop  /snap/lxd/23541
loop4                    7:4    0  44.7M  1 loop  /snap/snapd/15534
loop5                    7:5    0    47M  1 loop  /snap/snapd/16292
sda                      8:0    0 931.5G  0 disk
├─sda1                   8:1    0     1M  0 part
├─sda2                   8:2    0     2G  0 part  /boot
└─sda3                   8:3    0 929.5G  0 part
  └─dm_crypt-1         253:0    0 929.5G  0 crypt
    └─ubuntu--vg-lv--0 253:1    0 929.5G  0 lvm   /
sdb                      8:16   0   7.3T  0 disk
└─md0                    9:0    0   7.3T  0 raid5
sdc                      8:32   0   7.3T  0 disk
└─md0                    9:0    0   7.3T  0 raid5
sdd                      8:48   0   7.3T  0 disk
└─md0                    9:0    0   7.3T  0 raid5

After the sync finished, I was able to mount the array and access the data, although the array size was still 8 TB (I assume I have to grow it manually).
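
(For what it's worth: the md device itself takes on the new size once the reshape completes, so what presumably remains is growing the filesystem on top of it. Assuming ext4 sits directly on /dev/md0, a sketch would be:)

```shell
# Force the array to use the maximum available component size,
# in case it was not grown automatically
sudo mdadm --grow /dev/md0 --size=max

# Then enlarge the filesystem to fill the device (ext4 assumed here)
sudo resize2fs /dev/md0
```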

Fast forward 24 hours to now (after a reboot):

root@server:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid5
     Total Devices : 2
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 2

              Name : ubuntu-server:0
              UUID : 9d1e2e6e:14dc5960:011daf54:xxxxxxxx
            Events : 85828

    Number   Major   Minor   RaidDevice

       -       8       32        -        /dev/sdc
       -       8       48        -        /dev/sdd

root@server:~# lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
loop0                    7:0    0    62M  1 loop  /snap/core20/1593
loop1                    7:1    0    62M  1 loop  /snap/core20/1611
loop2                    7:2    0  79.9M  1 loop  /snap/lxd/22923
loop3                    7:3    0   103M  1 loop  /snap/lxd/23541
loop4                    7:4    0  44.7M  1 loop  /snap/snapd/15534
loop5                    7:5    0    47M  1 loop  /snap/snapd/16292
sda                      8:0    0 931.5G  0 disk
├─sda1                   8:1    0     1M  0 part
├─sda2                   8:2    0     2G  0 part  /boot
└─sda3                   8:3    0 929.5G  0 part
  └─dm_crypt-1         253:0    0 929.5G  0 crypt
    └─ubuntu--vg-lv--0 253:1    0 929.5G  0 lvm   /
sdb                      8:16   0   7.3T  0 disk
sdc                      8:32   0   7.3T  0 disk
└─md0                    9:0    0     0B  0 md
sdd                      8:48   0   7.3T  0 disk
└─md0                    9:0    0     0B  0 md

So it looks to me like the newly added HDD (/dev/sdb) somehow got lost! I tried adding the output of mdadm --detail --scan --verbose to my /etc/mdadm/mdadm.conf and running update-initramfs -u afterwards, but to no avail...
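
(Note: the usual way to try to bring an inactive array back up from its surviving members would be something like the sketch below; it does not explain why /dev/sdb dropped out, though.)

```shell
# Stop the partially assembled array, then re-assemble it
# from the members that are still present
sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 /dev/sdc /dev/sdd

# If mdadm refuses to start it because a device is missing,
# --run forces it to come up degraded
sudo mdadm --assemble --run /dev/md0 /dev/sdc /dev/sdd
```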

Here is some more information:

root@server:~# mdadm --examine /dev/sd[bcd]
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 9d1e2e6e:14dc5960:011daf54:403c80a6
           Name : ubuntu-server:0
  Creation Time : Tue Jul 19 17:56:28 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 15627789488 sectors (7.28 TiB 8.00 TB)
     Array Size : 15627788928 KiB (14.55 TiB 16.00 TB)
  Used Dev Size : 15627788928 sectors (7.28 TiB 8.00 TB)
    Data Offset : 263680 sectors
   Super Offset : 8 sectors
   Unused Space : before=263600 sectors, after=560 sectors
          State : clean
    Device UUID : cce44b44:6be581c6:ed09e3e8:5a2f5735

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Aug 26 19:56:25 2022
  Bad Block Log : 512 entries available at offset 40 sectors
       Checksum : 370fd1fa - correct
         Events : 85828

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 9d1e2e6e:14dc5960:011daf54:403c80a6
           Name : ubuntu-server:0
  Creation Time : Tue Jul 19 17:56:28 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 15627789488 sectors (7.28 TiB 8.00 TB)
     Array Size : 15627788928 KiB (14.55 TiB 16.00 TB)
  Used Dev Size : 15627788928 sectors (7.28 TiB 8.00 TB)
    Data Offset : 263680 sectors
   Super Offset : 8 sectors
   Unused Space : before=263600 sectors, after=560 sectors
          State : clean
    Device UUID : 5744d817:29d6e7e7:30e536d7:16d43c13

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Aug 26 19:56:25 2022
  Bad Block Log : 512 entries available at offset 40 sectors
       Checksum : f83ba242 - correct
         Events : 85828

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

root@server:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=ubuntu-server:0 UUID=9d1e2e6e:14dc5960:011daf54:xxxxxx
   devices=/dev/sdb,/dev/sdc,/dev/sdd
root@server:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy
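
(Side note: the devices= line in the mdadm.conf above pins the array to specific device names, which can change between boots; an ARRAY line identified by UUID alone is generally more robust. A sketch, reusing the values from above:)

```
# /etc/mdadm/mdadm.conf - identify the array by UUID only
ARRAY /dev/md0 metadata=1.2 name=ubuntu-server:0 UUID=9d1e2e6e:14dc5960:011daf54:xxxxxx
```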

Answer 1

For anyone with the same problem:

It appears my problem was indeed a missing partition on /dev/sdb, although I never found out what caused the partition table to become corrupted/disappear/etc.

I wasn't too happy about doing anything without a backup of the data, so I did what I suggested in the comments:

  1. Degrade the array
  2. Use the spare ("failed") disk (/dev/sdb) to save/back up the data
  3. Partition the two remaining drives /dev/sdc and /dev/sdd (and destroy the array!)
  4. Re-create the array with 2 drives
  5. Copy the data back onto it
  6. Add the third (partitioned) disk
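
Roughly, the commands for these steps looked like the sketch below. Treat it as an outline, not a copy-paste recipe: the partition names (sdb1, sdc1, sdd1) and the filesystem choice are assumptions, and several of these commands are destructive.

```shell
# 2. Partition the pulled disk, put a filesystem on it, copy the data off
sudo sgdisk --zap-all --new=1:0:0 --typecode=1:8300 /dev/sdb
sudo mkfs.ext4 /dev/sdb1
sudo mount /dev/sdb1 /mnt/backup
# ... copy data to /mnt/backup ...

# 3./4. Partition the two remaining drives (destroys the old array!)
#       and re-create a 2-disk array from the partitions
sudo sgdisk --zap-all --new=1:0:0 --typecode=1:fd00 /dev/sdc
sudo sgdisk --zap-all --new=1:0:0 --typecode=1:fd00 /dev/sdd
sudo mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdc1 /dev/sdd1

# 5. Copy the data back, then
# 6. repartition the third disk, add it, and grow the array
sudo sgdisk --zap-all --new=1:0:0 --typecode=1:fd00 /dev/sdb
sudo mdadm /dev/md0 --add /dev/sdb1
sudo mdadm --grow /dev/md0 --raid-devices=3
```

Building the array on partitions rather than raw disks (type fd00, "Linux RAID") is the point of the exercise here: it leaves a partition table that other tools recognize, instead of a bare device that something can silently overwrite.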

Thanks again for pointing me in the right direction!
