Unwanted md arrays created while building SW RAID6 pools

I ran into a problem: while creating a set of RAID6 arrays on a storage server, some unwanted, seemingly random arrays appeared for no apparent reason.

I am reusing old disks, but I ran mdadm --zero-superblock on all of them and also sgdisk -Z. After that, mdadm --examine found no arrays, and none showed up after a reboot either. The disks were previously part of a RAID50 layout.
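
For each disk the wipe was roughly along these lines (the device range is shortened here to a single six-disk set purely for illustration):

# Wipe md metadata and partition structures on every member disk
for d in /dev/sd{a..f}; do
    mdadm --zero-superblock "$d"   # remove any md superblock on the raw disk
    sgdisk -Z "$d"                 # zap GPT and MBR data structures
done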

Here is the /proc/mdstat output. You can see md125..127 and a completely random md23, which were created for some reason while the new RAID6 arrays were still being assembled.

I thought it might be leftover data from the previous SW RAID configuration, but as I said, I wiped the disks and there was no trace of any array afterwards.

Why are they there, and how can I get rid of them?

md9 : active raid6 sdbj[5] sdbi[4] sdbh[3] sdbg[2] sdbf[1] sdbe[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (387900/2930135040) finish=2139.9min speed=22817K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md125 : inactive md8[0](S)
      8790274048 blocks super 1.2

md8 : active raid6 sdbd[5] sdbc[4] sdbb[3] sdba[2] sdaz[1] sday[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (579836/2930135040) finish=2020.9min speed=24159K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md7 : active raid6 sdax[5] sdaw[4] sdav[3] sdau[2] sdat[1] sdas[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (759416/2930135040) finish=1735.8min speed=28126K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md6 : active raid6 sdar[5] sdaq[4] sdap[3] sdao[2] sdan[1] sdam[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (882816/2930135040) finish=1659.0min speed=29427K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md126 : inactive md5[1](S)
      8790274048 blocks super 1.2

md5 : active raid6 sdal[5] sdak[4] sdaj[3] sdai[2] sdah[1] sdag[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (1106488/2930135040) finish=1520.6min speed=32103K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md4 : active raid6 sdaf[5] sdae[4] sdad[3] sdac[2] sdab[1] sdaa[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (1279132/2930135040) finish=1438.5min speed=33931K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md127 : inactive md7[2](S) md3[1](S)
      17580548096 blocks super 1.2

md3 : active raid6 sdz[5] sdy[4] sdx[3] sdw[2] sdv[1] sdu[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (1488528/2930135040) finish=1361.9min speed=35839K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md23 : inactive md2[1](S)
      8790274048 blocks super 1.2

md2 : active raid6 sdr[5] sdq[4] sdp[3] sdo[2] sdn[1] sdm[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (2165400/2930135040) finish=1032.5min speed=47260K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md1 : active raid6 sdl[5] sdk[4] sdj[3] sdi[2] sdh[1] sdg[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.9% (28889600/2930135040) finish=610.7min speed=79172K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

md0 : active raid6 sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
      11720540160 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  1.5% (45517312/2930135040) finish=771.3min speed=62328K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

Just in case, here are the commands used to create the arrays:

mdadm --zero-superblock
sgdisk -Z

mdadm --create /dev/md8 -v --raid-devices=6 --bitmap=internal --level=6 /dev/sda[yz] /dev/sdb[abcd]

Apparently the system is somehow trying to pull the newly created arrays into a RAID0 from the previous configuration. But where is the data about that stored, so I can wipe it and build a completely fresh RAID60?
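
If that leftover RAID0 metadata is sitting inside the data area of the new arrays, then examining the md devices themselves (rather than the raw disks) should reveal it, for example:

# Look for a stale superblock on the md device itself, e.g. /dev/md2,
# which the phantom /dev/md23 claims as its member:
mdadm --examine /dev/md2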

root@vod0-brn:~# mdadm -D /dev/md23
/dev/md23:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 1
    Persistence : Superblock is persistent

          State : inactive

           Name : vod0-brn:23  (local to host vod0-brn)
           UUID : 2b4555e5:ed4f13ca:9a347c91:23748d47
         Events : 0

    Number   Major   Minor   RaidDevice

       -       9        2        -        /dev/md2

root@vod0-brn:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 2
    Persistence : Superblock is persistent

          State : inactive

           Name : debian:25
           UUID : f4499ca3:b5c206e8:2bd8afd1:23aaea2c
         Events : 0

    Number   Major   Minor   RaidDevice

       -       9        7        -        /dev/md7
       -       9        3        -        /dev/md3
root@vod0-brn:~# mdadm -D /dev/md126
/dev/md126:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 1
    Persistence : Superblock is persistent

          State : inactive

           Name : debian:26
           UUID : 52be5dac:b730c109:d2f36d64:a98fa836
         Events : 0

    Number   Major   Minor   RaidDevice

       -       9        5        -        /dev/md5
root@vod0-brn:~# mdadm -D /dev/md125
/dev/md125:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 1
    Persistence : Superblock is persistent

          State : inactive

           Name : debian:28
           UUID : 4ea15dcc:1ab164fc:fa2532d1:0b93d0ae
         Events : 0

    Number   Major   Minor   RaidDevice

       -       9        8        -        /dev/md8

After I ran mdadm --stop /dev/md** they no longer show up in /proc/mdstat, but they still exist in the system, which I really don't like. That is only half a solution:

root@vod0-brn:~# cat /dev/md 
md/    md0    md1    md125  md126  md127  md2    md23   md29   md3    md4    md5    md6    md7    md8    md9

mdadm --examine still finds them, even under different names; what a mess :(

ARRAY /dev/md/23  metadata=1.2 UUID=2b4555e5:ed4f13ca:9a347c91:23748d47 name=vod0-brn:23
ARRAY /dev/md/26  metadata=1.2 UUID=52be5dac:b730c109:d2f36d64:a98fa836 name=debian:26
ARRAY /dev/md/25  metadata=1.2 UUID=f4499ca3:b5c206e8:2bd8afd1:23aaea2c name=debian:25
ARRAY /dev/md/28  metadata=1.2 UUID=4ea15dcc:1ab164fc:fa2532d1:0b93d0ae name=debian:28
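
Presumably the only way to be rid of these for good is to wipe the stale RAID0 superblocks that live on the md devices themselves. A sketch of what that would look like, assuming the new RAID6 arrays hold no data yet (they are still resyncing):

# Stop the phantom RAID0 arrays if they are still running...
mdadm --stop /dev/md23
mdadm --stop /dev/md125
mdadm --stop /dev/md126
mdadm --stop /dev/md127
# ...then erase the leftover RAID0 metadata from the md member devices:
mdadm --zero-superblock /dev/md2
mdadm --zero-superblock /dev/md3
mdadm --zero-superblock /dev/md5
mdadm --zero-superblock /dev/md7
mdadm --zero-superblock /dev/md8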

Answer 1

It looks as if MD RAID devices had been created on top of other MD RAID devices, which is why, as soon as /dev/md2 was created, the system detected a RAID0 superblock on that device and created /dev/md23.

In this case, it is best to add the following line to /etc/mdadm/mdadm.conf:

DEVICE /dev/sd*

With this in place, the system will only consider /dev/sd* devices when trying to assemble existing MD RAID arrays.
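
A minimal sketch of applying this on a Debian system; the update-initramfs step is assumed to be needed so that early-boot assembly also honours the new config:

# Restrict mdadm's auto-assembly scanning to the raw disks...
echo 'DEVICE /dev/sd*' >> /etc/mdadm/mdadm.conf
# ...and rebuild the initramfs so the setting applies at boot as well:
update-initramfs -u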
