RAID is inactive, and a new md127 was somehow added as a RAID disk

After a server crash I am currently trying to get my RAID setup working again. The setup is as follows: one RAID with 2 hard disks and another with 7 hard disks. But apparently something went wrong after I tried to restart it. A new RAID named md127 showed up and was added as another disk to one of the existing RAIDs, as the mdstat output below shows. Because of the possibility of data loss, I have not tried to assemble the RAID again.

Below you will find all the information I was able to gather.

I really don't know what to do here, since I'm still quite new to RAID, so any help is greatly appreciated.

The config file is:

ARRAY /dev/md/0 level=raid1 num-devices=2 metadata=1.2 name=cimbernserver:0 UUID=8f613aa2:92288947:af7c552e:d920b7ac
   devices=/dev/sde1,/dev/sdg1
ARRAY /dev/md1 level=raid6 num-devices=7 metadata=1.2 name=cimbernserver:1 UUID=31f14508:f1f3425b:ecfbbf0a:3b4db3c3
   devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sdf1,/dev/sdh1,/dev/sdi1
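
To cross-check this config, I assume each member partition's superblock can be read without modifying anything; this is just a sketch using the device names from the config above (they may have shifted after the reboot):

for dev in /dev/sde1 /dev/sdg1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sdf1 /dev/sdh1 /dev/sdi1; do
    echo "== $dev =="
    # show only the identifying fields of the md superblock
    mdadm --examine "$dev" | grep -E 'Array UUID|Name|Raid Level|Raid Devices'
done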

cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10] 
md1 : inactive md127[0] sdd1[7]
      3906247953 blocks super 1.2

md0 : active raid1 sdg1[2] sde1[0]
      971677560 blocks super 1.2 [2/2] [UU]

md127 : active (auto-read-only) raid5 sda[7] sdf[6] sdb[1] sdi[4] sdc[2] sdh[5]
      11721077760 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/6] [UUU_UUU]
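
What confuses me most here is that md1 lists md127 itself as a member. If I understand correctly, the superblock that makes md127 appear as member 0 of md1 could be inspected read-only like this (a sketch, nothing gets assembled or written):

mdadm --examine /dev/md127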

Excerpt from dmesg

[    3.891477] md: bind<sdh>
[    3.906250] md: bind<sdc>
[    4.001739] md: bind<sde1>
[    4.003069] md: bind<sdg1>
[    4.005080] md: raid1 personality registered for level 1
[    4.005634] md/raid1:md0: active with 2 out of 2 mirrors
[    4.005664] md0: detected capacity change from 0 to 994997821440
[    4.006325] random: fast init done
[    4.008595] md: bind<sdd1>
[    4.222463]  sdi: sdi1
[    4.222841] sd 8:0:0:0: [sdi] Attached SCSI disk
[    4.238703]  sdb: sdb1
[    4.239078] sd 1:0:0:0: [sdb] Attached SCSI disk
[    4.255885]  sdf: sdf1
[    4.256263] sd 5:0:0:0: [sdf] Attached SCSI disk
[    4.270019] md: bind<sdi>
[    4.294450] md: bind<sdb>
[    4.298661] md: bind<sdf>
[    4.301504] md: bind<sda>
[    4.412010] raid6: sse2x1   gen()  3438 MB/s
[    4.480015] raid6: sse2x1   xor()  2860 MB/s
[    4.548015] raid6: sse2x2   gen()  4586 MB/s
[    4.616015] raid6: sse2x2   xor()  3222 MB/s
[    4.684010] raid6: sse2x4   gen()  4730 MB/s
[    4.752010] raid6: sse2x4   xor()  2278 MB/s
[    4.752012] raid6: using algorithm sse2x4 gen() 4730 MB/s
[    4.752012] raid6: .... xor() 2278 MB/s, rmw enabled
[    4.752014] raid6: using intx1 recovery algorithm
[    4.752254] xor: measuring software checksum speed
[    4.792010]    prefetch64-sse:  6981.000 MB/sec
[    4.832011]    generic_sse:  6948.000 MB/sec
[    4.832012] xor: using function: prefetch64-sse (6981.000 MB/sec)
[    4.832242] async_tx: api initialized (async)
[    4.834118] md: raid6 personality registered for level 6
[    4.834120] md: raid5 personality registered for level 5
[    4.834121] md: raid4 personality registered for level 4
[    4.834555] md/raid:md127: device sda operational as raid disk 0
[    4.834556] md/raid:md127: device sdf operational as raid disk 6
[    4.834557] md/raid:md127: device sdb operational as raid disk 1
[    4.834558] md/raid:md127: device sdi operational as raid disk 4
[    4.834559] md/raid:md127: device sdc operational as raid disk 2
[    4.834560] md/raid:md127: device sdh operational as raid disk 5
[    4.835380] md/raid:md127: allocated 7548kB
[    4.835440] md/raid:md127: raid level 5 active with 6 out of 7 devices, algorithm 2
[    4.835478] RAID conf printout:
[    4.835479]  --- level:5 rd:7 wd:6
[    4.835480]  disk 0, o:1, dev:sda
[    4.835481]  disk 1, o:1, dev:sdb
[    4.835483]  disk 2, o:1, dev:sdc
[    4.835484]  disk 4, o:1, dev:sdi
[    4.835485]  disk 5, o:1, dev:sdh
[    4.835486]  disk 6, o:1, dev:sdf
[    4.835538] md127: detected capacity change from 0 to 12002383626240
[    4.835875] random: crng init done
[    4.854931] md: bind<md127>
[    4.878060] md: linear personality registered for level -1
[    4.879979] md: multipath personality registered for level -4
[    4.881923] md: raid0 personality registered for level 0
[    4.890235] md: raid10 personality registered for level 10
[    4.904236] PM: Starting manual resume from disk
[    4.904241] PM: Hibernation image partition 8:69 present
[    4.904242] PM: Looking for hibernation image.
[    4.904470] PM: Image not found (code -22)
[    4.904471] PM: Hibernation image not present or could not be loaded.
[    5.173145] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[    6.710919] loop: module loaded
[    6.810842] EXT4-fs (md0): warning: checktime reached, running e2fsck is recommended
[    6.907251] EXT4-fs (md0): re-mounted. Opts: errors=remount-ro
[ 2393.587184] md/raid:md1: device md127 operational as raid disk 0
[ 2393.587191] md/raid:md1: device sdd1 operational as raid disk 3
[ 2393.589628] md/raid:md1: allocated 7548kB
[ 2393.589871] md/raid:md1: not enough operational devices (5/7 failed)
[ 2393.589920] RAID conf printout:
[ 2393.589922]  --- level:6 rd:7 wd:2
[ 2393.589926]  disk 0, o:1, dev:md127
[ 2393.589929]  disk 3, o:1, dev:sdd1
[ 2393.591356] md/raid:md1: failed to run raid set.
[ 2393.591368] md: pers->run() failed ...

sudo update-initramfs -u

update-initramfs: Generating /boot/initrd.img-4.9.0-6amd64
W: mdadm: the array /dev/md/cimbernserver:1 with UUID 390867ca:4d719fb2:8e328b86:eab84e2a
W: mdadm: is currently active, but is not listed in mdadm.conf. if 
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm.conf, and make the necessary changes. 

But I don't really understand what the config file it tells me to build is supposed to be:

cat /usr/share/mdadm/mkconf

#!/bin/sh
#
# mkconf -- outputs valid mdadm.conf contents for the local system
#
# Copyright © martin f. krafft <[email protected]>
# distributed under the terms of the Artistic Licence 2.0
#
set -eu

ME="${0##*/}"
MDADM=/sbin/mdadm
DEBIANCONFIG=/etc/default/mdadm
CONFIG=/etc/mdadm/mdadm.conf

# initialise config variables in case the environment leaks
MAILADDR= DEVICE= HOMEHOST= PROGRAM=

test -r $DEBIANCONFIG && . $DEBIANCONFIG

if [ -n "${MDADM_MAILADDR__:-}" ]; then
  # honour MAILADDR from the environment (from postinst)
  MAILADDR="$MDADM_MAILADDR__"
else
  # preserve existing MAILADDR
  MAILADDR="$(sed -ne 's/^MAILADDR //p' $CONFIG 2>/dev/null)" || :
fi

# save existing values as defaults
if [ -r "$CONFIG" ]; then
  DEVICE="$(sed -ne 's/^DEVICE //p' $CONFIG)"
  HOMEHOST="$(sed -ne 's/^HOMEHOST //p' $CONFIG)"
  PROGRAM="$(sed -ne 's/^PROGRAM //p' $CONFIG)"
fi

[ "${1:-}" = force-generate ] && rm -f $CONFIG
case "${1:-}" in
  generate|force-generate)
    [ -n "${2:-}" ] && CONFIG=$2
    # only barf if the config file specifies anything else than MAILADDR
    if egrep -qv '^(MAILADDR.*|#.*|)$' $CONFIG 2>/dev/null; then
      echo "E: $ME: $CONFIG already exists." >&2
      exit 255
    fi

    mkdir --parent ${CONFIG%/*}
    exec >$CONFIG
    ;;
esac

cat <<_eof
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE ${DEVICE:-partitions containers}

# automatically tag new arrays as belonging to the local system
HOMEHOST ${HOMEHOST:-<system>}

# instruct the monitoring daemon where to send mail alerts
MAILADDR ${MAILADDR:-root}

_eof

if [ -n "${PROGRAM:-}" ]; then
  cat <<-_eof
    # program to run when mdadm monitor detects potentially interesting events
    PROGRAM ${PROGRAM}

    _eof
fi

error=0
if [ ! -r /proc/mdstat ]; then
  echo W: $ME: MD subsystem is not loaded, thus I cannot scan for arrays. >&2
  error=1
elif [ ! -r /proc/partitions ]; then
  echo W: $ME: /proc/partitions cannot be read, thus I cannot scan for arrays. >&2
  error=2
else
  echo "# definitions of existing MD arrays"
  if ! $MDADM --examine --scan --config=partitions; then
    error=$(($? + 128))
    echo W: $ME: failed to scan for partitions. >&2
    echo "### WARNING: scan failed."
  else
    echo
  fi
fi

echo "# This configuration was auto-generated on $(date -R) by mkconf"

exit $error
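
If I read the script correctly, running it without arguments only prints a candidate config to stdout (it redirects into the config file only in the generate/force-generate case), so I suppose the comparison the warning asks for would be something like this (a sketch; /tmp/mdadm.conf.new is just a scratch file name I made up for the diff):

/usr/share/mdadm/mkconf > /tmp/mdadm.conf.new    # candidate config, nothing is installed
diff -u /etc/mdadm/mdadm.conf /tmp/mdadm.conf.new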

mdadm --detail --scan shows that, somehow, there are two arrays with the same name at different locations and with different UUIDs:

ARRAY /dev/md/cimbernserver:1 metadata=1.2 name=cimbernserver:1 UUID=390867ca:4d719fb2:8e328b86:eab84e2a
ARRAY /dev/md/0 metadata=1.2 name=cimbernserver:0 UUID=8f613aa2:92288947:af7c552e:d920b7ac
ARRAY /dev/md1 metadata=1.2 name=cimbernserver:1 UUID=31f14508:f1f3425b:ecfbbf0a:3b4db3c3
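
To see which physical devices carry which of the two UUIDs, I believe the verbose examine scan lists the member devices per array without touching anything (a sketch):

mdadm --examine --scan --verbose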

lsblk

NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0   1,8T  0 disk  
└─md127   9:127  0  10,9T  0 raid5 
sdb       8:16   0   1,8T  0 disk  
└─md127   9:127  0  10,9T  0 raid5 
sdc       8:32   0   1,8T  0 disk  
└─md127   9:127  0  10,9T  0 raid5 
sdd       8:48   0   1,8T  0 disk  
└─sdd1    8:49   0   1,8T  0 part  
sde       8:64   0 931,5G  0 disk  
├─sde1    8:65   0 926,7G  0 part  
│ └─md0   9:0    0 926,7G  0 raid1 /
├─sde2    8:66   0     1K  0 part  
└─sde5    8:69   0   4,9G  0 part  [SWAP]
sdf       8:80   0   1,8T  0 disk  
└─md127   9:127  0  10,9T  0 raid5 
sdg       8:96   0   1,8T  0 disk  
├─sdg1    8:97   0 926,7G  0 part  
│ └─md0   9:0    0 926,7G  0 raid1 /
├─sdg2    8:98   0     1K  0 part  
└─sdg5    8:101  0   4,9G  0 part  
sdh       8:112  0   1,8T  0 disk  
└─md127   9:127  0  10,9T  0 raid5 
sdi       8:128  0   1,8T  0 disk  
└─md127   9:127  0  10,9T  0 raid5 

And finally, all the details of the different arrays:

mdadm -D /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Wed Oct 20 18:43:39 2010
     Raid Level : raid1
     Array Size : 971677560 (926.66 GiB 995.00 GB)
  Used Dev Size : 971677560 (926.66 GiB 995.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jul 27 21:08:46 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : cimbernserver:0
           UUID : 8f613aa2:92288947:af7c552e:d920b7ac
         Events : 16914129

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       2       8       97        1      active sync   /dev/sdg1

mdadm -D /dev/md1

/dev/md1:
        Version : 1.2
  Creation Time : Fri Sep 26 11:43:02 2014
     Raid Level : raid6
  Used Dev Size : 1953123840 (1862.64 GiB 2000.00 GB)
   Raid Devices : 7
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Jul 26 21:14:38 2018
          State : active, FAILED, Not Started 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : cimbernserver:1
           UUID : 31f14508:f1f3425b:ecfbbf0a:3b4db3c3
         Events : 8225

    Number   Major   Minor   RaidDevice State
       0       9      127        0      active sync   /dev/md/cimbernserver:1
       -       0        0        1      removed
       -       0        0        2      removed
       7       8       49        3      active sync   /dev/sdd1
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed

mdadm -D /dev/md127

/dev/md127:
        Version : 1.2
  Creation Time : Tue Sep 23 14:24:40 2014
     Raid Level : raid5
     Array Size : 11721077760 (11178.09 GiB 12002.38 GB)
  Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
   Raid Devices : 7
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Fri Jul 27 12:05:24 2018
          State : clean, degraded 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : cimbernserver:1
           UUID : 390867ca:4d719fb2:8e328b86:eab84e2a
         Events : 37

    Number   Major   Minor   RaidDevice State
       7       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       -       0        0        3      removed
       4       8      128        4      active sync   /dev/sdi
       5       8      112        5      active sync   /dev/sdh
       6       8       80        6      active sync   /dev/sdf

/dev/md/0 and /dev/md/cimbernserver:1 are symlinks to md0 and md127:

lrwxrwxrwx  1 root root    6 Jul 27 15:48 0 -> ../md0
lrwxrwxrwx  1 root root    8 Jul 27 15:48 cimbernserver:1 -> ../md127

mdadm -D /dev/md/0

/dev/md/0:
        Version : 1.2
  Creation Time : Wed Oct 20 18:43:39 2010
     Raid Level : raid1
     Array Size : 971677560 (926.66 GiB 995.00 GB)
  Used Dev Size : 971677560 (926.66 GiB 995.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jul 27 21:12:47 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : cimbernserver:0
           UUID : 8f613aa2:92288947:af7c552e:d920b7ac
         Events : 16914129

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       2       8       97        1      active sync   /dev/sdg1

mdadm -D /dev/md/cimbernserver:1

/dev/md/cimbernserver:1:
        Version : 1.2
  Creation Time : Tue Sep 23 14:24:40 2014
     Raid Level : raid5
     Array Size : 11721077760 (11178.09 GiB 12002.38 GB)
  Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
   Raid Devices : 7
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Fri Jul 27 12:05:24 2018
          State : clean, degraded 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : cimbernserver:1
           UUID : 390867ca:4d719fb2:8e328b86:eab84e2a
         Events : 37

    Number   Major   Minor   RaidDevice State
       7       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       -       0        0        3      removed
       4       8      128        4      active sync   /dev/sdi
       5       8      112        5      active sync   /dev/sdh
       6       8       80        6      active sync   /dev/sdf
