mdadm raid doesn't mount

I have raid arrays defined in /etc/mdadm.conf like this:

ARRAY /dev/md0 devices=/dev/sdb6,/dev/sdc6
ARRAY /dev/md1 devices=/dev/sdb7,/dev/sdc7

But when I try to mount them, I get this:

# mount /dev/md0 /mnt/media/
mount: special device /dev/md0 does not exist
# mount /dev/md1 /mnt/data
mount: special device /dev/md1 does not exist

Meanwhile, /proc/mdstat says:

# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md125 : inactive dm-6[0](S)
      238340224 blocks

md126 : inactive dm-5[0](S)
      244139648 blocks

md127 : inactive dm-3[0](S)
      390628416 blocks

unused devices: <none>

So I tried this:

# mount /dev/md126 /mnt/data
mount: /dev/md126: can't read superblock
# mount /dev/md125 /mnt/media
mount: /dev/md125: can't read superblock

The fs on the partitions is ext3, and when I specify the fs with -t, I get

mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

How can I mount my raid arrays? They used to work before.

Edit 1

# mdadm --detail --scan
mdadm: cannot open /dev/md/127_0: No such file or directory
mdadm: cannot open /dev/md/0_0: No such file or directory
mdadm: cannot open /dev/md/1_0: No such file or directory

Edit 2

# dmsetup ls
isw_cabciecjfi_Raid7    (252:6)
isw_cabciecjfi_Raid6    (252:5)
isw_cabciecjfi_Raid5    (252:4)
isw_cabciecjfi_Raid3    (252:3)
isw_cabciecjfi_Raid2    (252:2)
isw_cabciecjfi_Raid1    (252:1)
isw_cabciecjfi_Raid     (252:0)
# dmsetup table
isw_cabciecjfi_Raid7: 0 476680617 linear 252:0 1464854958
isw_cabciecjfi_Raid6: 0 488279484 linear 252:0 976575411
isw_cabciecjfi_Raid5: 0 11968362 linear 252:0 1941535638
isw_cabciecjfi_Raid3: 0 781257015 linear 252:0 195318270
isw_cabciecjfi_Raid2: 0 976928715 linear 252:0 976575285
isw_cabciecjfi_Raid1: 0 195318207 linear 252:0 63
isw_cabciecjfi_Raid: 0 1953519616 mirror core 2 131072 nosync 2 8:32 0 8:16 0 1 handle_errors

Edit 3

# file -s -L /dev/mapper/*
/dev/mapper/control:              ERROR: cannot read `/dev/mapper/control' (Invalid argument)
/dev/mapper/isw_cabciecjfi_Raid:  x86 boot sector
/dev/mapper/isw_cabciecjfi_Raid1: Linux rev 1.0 ext4 filesystem data, UUID=a8d48d53-fd68-40d8-8dd5-3cecabad6e7a (needs journal recovery) (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid3: Linux rev 1.0 ext4 filesystem data, UUID=3cb24366-b9c8-4e68-ad7b-22449668f047 (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid5: Linux/i386 swap file (new style), version 1 (4K pages), size 1496044 pages, no label, UUID=f07e031f-368a-443e-a21c-77fa27adf795
/dev/mapper/isw_cabciecjfi_Raid6: Linux rev 1.0 ext3 filesystem data, UUID=0f0b401a-f238-4b20-9b2a-79cba56dd9d0 (large files)
/dev/mapper/isw_cabciecjfi_Raid7: Linux rev 1.0 ext3 filesystem data, UUID=b2d66029-eeb9-4e4a-952c-0a3bd0696159 (large files)
# 

Also, there is an additional disk in my system, /dev/mapper/isw_cabciecjfi_Raid. I tried to mount one of its partitions, but got:

# mount /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
mount: unknown filesystem type 'linux_raid_member'

I rebooted and confirmed the RAID in my BIOS.

I tried to force a mount, which seems to let me mount, but the content of the partition is inaccessible, so it still doesn't work as expected:

# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
# ls -l /mnt/media/
total 0
# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid /mnt/data
# ls -l /mnt/data
total 0

Edit 4

After executing the suggested command, all I get is:

$ sudo mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory

Edit 5

/dev/md127 is mounted now, but /dev/md0 and /dev/md1 are still inaccessible:

# mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory



root@regDesktopHome:~# mdadm --stop /dev/md12[567]
mdadm: stopped /dev/md127
root@regDesktopHome:~# mdadm --assemble --scan
mdadm: /dev/md127 has been started with 1 drive (out of 2).
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
      238340224 blocks

md0 : inactive dm-5[0](S)
      244139648 blocks

unused devices: <none>
root@regDesktopHome:~# ls -l /dev/mapper
total 0
crw------- 1 root root  10, 236 Aug 13 22:43 control
brw-rw---- 1 root disk 252,   0 Aug 13 22:43 isw_cabciecjfi_Raid
brw------- 1 root root 252,   1 Aug 13 22:43 isw_cabciecjfi_Raid1
brw------- 1 root root 252,   2 Aug 13 22:43 isw_cabciecjfi_Raid2
brw------- 1 root root 252,   3 Aug 13 22:43 isw_cabciecjfi_Raid3
brw------- 1 root root 252,   4 Aug 13 22:43 isw_cabciecjfi_Raid5
brw------- 1 root root 252,   5 Aug 13 22:43 isw_cabciecjfi_Raid6
brw------- 1 root root 252,   6 Aug 13 22:43 isw_cabciecjfi_Raid7
root@regDesktopHome:~# mdadm --examine
mdadm: No devices to examine
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
      238340224 blocks

md0 : inactive dm-5[0](S)
      244139648 blocks

unused devices: <none>
root@regDesktopHome:~# mdadm --examine /dev/dm-[356]
/dev/dm-3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 124cd4a5:2965955f:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 18:50:36 2009
     Raid Level : raid1
  Used Dev Size : 390628416 (372.53 GiB 400.00 GB)
     Array Size : 390628416 (372.53 GiB 400.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127

    Update Time : Sat May 31 18:52:12 2014
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 23fe942e - correct
         Events : 167


      Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync

   0     0       8       35        0      active sync
   1     1       8       19        1      active sync
/dev/dm-5:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 91e560f1:4e51d8eb:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 19:15:33 2009
     Raid Level : raid1
  Used Dev Size : 244139648 (232.83 GiB 250.00 GB)
     Array Size : 244139648 (232.83 GiB 250.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0

    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : bfad9d61 - correct
         Events : 75007


      Number   Major   Minor   RaidDevice State
this     0       8       38        0      active sync

   0     0       8       38        0      active sync
   1     1       8       22        1      active sync
/dev/dm-6:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0abe503f:401d8d09:cd707cc0:bc3f8165
  Creation Time : Tue Sep  8 21:19:15 2009
     Raid Level : raid1
  Used Dev Size : 238340224 (227.30 GiB 244.06 GB)
     Array Size : 238340224 (227.30 GiB 244.06 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2a7a125f - correct
         Events : 3973383


      Number   Major   Minor   RaidDevice State
this     0       8       39        0      active sync

   0     0       8       39        0      active sync
   1     1       8       23        1      active sync
root@regDesktopHome:~# 

Edit 6

I stopped them with mdadm --stop /dev/md[01] and confirmed that /proc/mdstat no longer shows them, then executed mdadm --assemble --scan and got

# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 1 drives.
mdadm: /dev/md1 has been started with 2 drives.

But if I try to mount either array, I still get:

root@regDesktopHome:~# mount /dev/md1 /mnt/data
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Meanwhile, I found that my superblocks seem to be corrupted (PS: I have confirmed with tune2fs and fdisk that I am dealing with ext3 partitions):

root@regDesktopHome:~# e2fsck /dev/md1
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 59585077 blocks
The physical size of the device is 59585056 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
root@regDesktopHome:~# e2fsck /dev/md0
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 61034935 blocks
The physical size of the device is 61034912 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
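
(For what it's worth, the mismatch is small: 23 blocks (61034935 - 61034912) on /dev/md0 and 21 blocks (59585077 - 59585056) on /dev/md1, i.e. roughly 84 to 92 KiB at the 4 KiB block size reported below.)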

But both partitions have backup superblocks available:

root@regDesktopHome:~# mke2fs -n /dev/md0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
15261696 inodes, 61034912 blocks
3051745 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1863 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

root@regDesktopHome:~# mke2fs -n /dev/md1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
14901248 inodes, 59585056 blocks
2979252 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1819 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Do you think I should try restoring the backup at 23887872 on both arrays? I suppose I could do that with e2fsck -b 23887872 /dev/md[01]; would you suggest trying that?
I don't necessarily want to try something I don't fully understand that might destroy the data on my disks... man e2fsck doesn't exactly say it is dangerous, but perhaps there is another, safer way to repair the superblocks...?
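
For what it's worth, a read-only test against a backup superblock seems possible without changing anything on disk (if I read man e2fsck correctly, -n opens the filesystem read-only and answers "no" to every prompt; the 4096-byte block size comes from the mke2fs -n output above):

e2fsck -n -B 4096 -b 32768 /dev/md0
e2fsck -n -B 4096 -b 32768 /dev/md1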


As a final update for the community,


I used resize2fs to get my superblocks back in order and to mount my drives again! (resize2fs /dev/md0 and resize2fs /dev/md1 got me back up!) Long story short, it finally works! I learned a lot about mdadm along the way! Thanks @IanMacintosh
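
For anyone finding this later, the commands that got things mounted again were roughly the following (reconstructed from the steps above rather than a verbatim transcript):

resize2fs /dev/md0    # resizes the filesystem to match the actual device size
resize2fs /dev/md1
mount /dev/md0 /mnt/media
mount /dev/md1 /mnt/data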

Answer 1

Your arrays weren't started correctly. Remove them from the running configuration with:

mdadm --stop /dev/md12[567]

Now try the automatic scan and assemble feature:

mdadm --assemble --scan
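
If you want to confirm the arrays actually came up before going further, a quick check should show them as active:

cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md1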

Assuming that works, save your configuration (assuming a Debian derivative; this will overwrite your config, so we back it up first):

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.old
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
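
Depending on the setup, the initramfs may also need to be refreshed so the new mdadm.conf is picked up during early boot:

update-initramfs -u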

You should now be able to reboot, and the arrays will be assembled and started automatically every time.

If not, please provide the output of:

mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7

It's a bit long, but it shows everything you need to know about the arrays and their member disks, their state, and so on.

Incidentally, it usually works better if you don't create multiple separate raid arrays on the same disks (i.e. /dev/sd[bc]6 and /dev/sd[bc]7). Instead, create a single array and then create partitions on that array as needed. Most of the time, LVM is a better way to partition an array.
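
For illustration, a single-array layout with LVM on top would look something like this (purely a sketch: the device, volume group, and logical volume names and sizes are placeholders, and these commands are destructive, so don't run them against disks holding data):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0                    # the whole array becomes one LVM physical volume
vgcreate vg_raid /dev/md0
lvcreate -L 200G -n media vg_raid    # carve out "partitions" as logical volumes
lvcreate -L 200G -n data vg_raid
mkfs.ext3 /dev/vg_raid/media
mkfs.ext3 /dev/vg_raid/data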

Answer 2

This will fix it permanently:

# mdadm -Es > /etc/mdadm.conf
# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
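
If you want to double-check that the rebuilt initramfs actually contains the mdadm configuration, lsinitrd (from the dracut tools) can list its contents:

lsinitrd /boot/initramfs-$(uname -r).img | grep mdadm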
