Software RAID1 on each of two hard drives on CentOS 7?

Disclaimer: I did not set up this server.

From what I can see below, I appear to have two identical drives, /dev/sda and /dev/sdb, each with its own software RAID1. Am I reading this completely wrong? The server does support hardware RAID1, but a hardware RAID1 should not show up in the commands below. The fact that I see /dev/mdX devices implies that software RAID is in use. Does each drive really have its own software RAID1?

Here is the output of lsblk -o name,fstype,maj:min,size,type,mountpoint,uuid:

NAME                   FSTYPE         MAJ:MIN   SIZE   TYPE  MOUNTPOINT UUID
sda                    isw_raid_member 8:0      465.8G disk
└─md126                               9:126     442.5G raid1            
  ├─md126p1            xfs            259:0     1G     md    /boot      f178f32e-b423-4000-b458-1a2e9c36a295
  └─md126p2            LVM2_member    259:1     441.5G md               VXBriv-sYp5-AZyT-UrI1-uzDt-Bjkw-ZI1Ka5
    ├─centos_root      xfs            253:0     350G   lvm   /          3b2a2d46-e097-44b5-98a7-24256a047bbb
    ├─centos_swap      swap           253:1     41.5G  lvm   [SWAP]     b0ca8a53-f78b-4afd-b0fc-6b86a47e59aa
    └─centos_home      xfs            253:2     50G    lvm   /home      4cb3b128-ccb7-41b5-af2c-abc0a9b54112
sdb                    isw_raid_member 8:16     465.8G disk
└─md126                               9:126     442.5G raid1            
  ├─md126p1            xfs            259:0     1G     md    /boot      f178f32e-b423-4000-b458-1a2e9c36a295
  └─md126p2            LVM2_member    259:1     441.5G md               VXBriv-sYp5-AZyT-UrI1-uzDt-Bjkw-ZI1Ka5
    ├─centos_root      xfs            253:0     350G   lvm   /          3b2a2d46-e097-44b5-98a7-24256a047bbb
    ├─centos_swap      swap           253:1     41.5G  lvm   [SWAP]     b0ca8a53-f78b-4afd-b0fc-6b86a47e59aa
    └─centos_home      xfs            253:2     50G    lvm   /home      4cb3b128-ccb7-41b5-af2c-abc0a9b54112

Here is the content of /proc/mdstat:

Personalities : [raid1] 
md126 : active raid1 sda[1] sdb[0]
      463992832 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdb[1](S) sda[0](S)
      6306 blocks super external:imsm

unused devices: <none>

Here is /etc/mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/Volume0_0 UUID=81677824:f064e89e:eec139df:de40c0e5
ARRAY /dev/md/imsm0 UUID=58987cc8:398c9863:db0f4339:3f35e11c

Output of mdadm --detail /dev/md126:

/dev/md126:
         Container : /dev/md/imsm0, member 0
        Raid Level : raid1
        Array Size : 463992832 (442.50 GiB 475.13 GB)
     Used Dev Size : 463992964 (442.50 GiB 475.13 GB)
      Raid Devices : 2
     Total Devices : 2

             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync


              UUID : 81677824:f064e89e:eec139df:de40c0e5
    Number   Major   Minor   RaidDevice State
       1       8        0        0      active sync   /dev/sda
       0       8       16        1      active sync   /dev/sdb

Output of fdisk -l:

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b2b72

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   927985663   462943232   8e  Linux LVM

Disk /dev/sdb: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b2b72

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     2099199     1048576   83  Linux
/dev/sdb2         2099200   927985663   462943232   8e  Linux LVM

Disk /dev/md126: 475.1 GB, 475128659968 bytes, 927985664 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b2b72

      Device Boot      Start         End      Blocks   Id  System
/dev/md126p1   *        2048     2099199     1048576   83  Linux
/dev/md126p2         2099200   927985663   462943232   8e  Linux LVM

Disk /dev/mapper/centos_root: 375.8 GB, 375809638400 bytes, 734003200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos_swap: 44.6 GB, 44551897088 bytes, 87015424 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos_home: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Answer 1

You do have two identical drives, but only one RAID1: it uses each of the two drives as a mirror member. The array is Intel firmware ("fake") RAID, not real hardware RAID.
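
If you want to verify this yourself, mdadm can report the firmware RAID platform it detects. A minimal check, assuming mdadm is installed and run as root (the exact output fields vary with the firmware version):

# Show the firmware RAID platform (here, Intel Matrix Storage Manager)
# that mdadm detects; a genuine hardware RAID controller would not
# expose imsm metadata to the OS at all.
mdadm --detail-platform

# Cross-check: list every array mdadm knows about, including the container.
mdadm --detail --scan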

Answer 2

This is what an array created through a "fake RAID" setup looks like when it is opened as a software MD RAID array.

Linux MD RAID supports several external RAID metadata formats, and IMSM is one of them. In this configuration, the array is created in a format that the BIOS and the motherboard chipset understand and can boot from. Once the operating system is up, it takes over management of the array.
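
You can inspect that BIOS-written metadata directly on the member disks. A minimal sketch, assuming the members are /dev/sda and /dev/sdb as in the listings above:

# Print the on-disk IMSM metadata; for this setup each disk should
# report the imsm format and list both drives in the same volume.
mdadm --examine /dev/sda
mdadm --examine /dev/sdb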

On Linux, an external daemon, mdmon, knows how to interpret and update the array metadata. It communicates with the kernel through sysfs. You are running regular Linux software RAID in the kernel, but the array metadata is managed by mdmon.
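
The daemon is normally visible as a running process, one instance per container. A quick check, hedged because the exact command line differs between distributions and boot stages:

# List running mdmon instances with their arguments; on this system it
# would typically show one process monitoring the md127 container.
pgrep -a mdmon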

This is also where the curious extra array comes from; it is part of how Linux handles these external RAID metadata formats. /dev/md127 is a container array. The "real" array is /dev/md126.
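
The container relationship is also visible through sysfs, matching the "super external:/md127/0" line in the mdstat output above. A small sketch, assuming the same device names:

# The real array's metadata_version attribute points back at its
# container; here it should print external:/md127/0.
cat /sys/block/md126/md/metadata_version

# The container itself carries the external metadata type (imsm).
cat /sys/block/md127/md/metadata_version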
