Is there a problem with my Ubuntu RAID array?

I have an Ubuntu 18.04 server with two 1 TB drives configured as a software RAID1 array during OS installation. I wanted to check the health of the array and the disks tonight, but I'm not sure whether one of my drives has a problem.

What I'm seeing is that the output of 'mdadm --detail /dev/md0' shows one of the drives as removed, and from another askubuntu question I gather that the '[_U]' in the 'cat /proc/mdstat' output may be a sign of a failed partition.

Judging from the results below, has a drive failed? If so, what is the best way to handle it? Also, how can I set things up so that I get an email when a drive fails?

sudo fdisk -l

Disk /dev/loop0: 99.2 MiB, 104030208 bytes, 203184 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 99.2 MiB, 104026112 bytes, 203176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 310AB5A9-6622-49D9-82C3-B1F2E53DD560

Device       Start        End    Sectors   Size Type
/dev/sda1     2048    2099199    2097152     1G Linux filesystem
/dev/sda2  2099200 1953521663 1951422464 930.5G Linux filesystem


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D634E279-44CC-4ED5-B380-07D02C3C3601

Device       Start        End    Sectors   Size Type
/dev/sdb1     2048       4095       2048     1M BIOS boot
/dev/sdb2     4096    2101247    2097152     1G Linux filesystem
/dev/sdb3  2101248 1953521663 1951420416 930.5G Linux filesystem


Disk /dev/md0: 930.4 GiB, 998991986688 bytes, 1951156224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

lsblk

NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0     7:0    0  99.2M  1 loop  /snap/core/10908
loop2     7:2    0  99.2M  1 loop  /snap/core/10859
sda       8:0    0 931.5G  0 disk
├─sda1    8:1    0     1G  0 part
└─sda2    8:2    0 930.5G  0 part
sdb       8:16   0 931.5G  0 disk
├─sdb1    8:17   0     1M  0 part
├─sdb2    8:18   0     1G  0 part  /boot
└─sdb3    8:19   0 930.5G  0 part
  └─md0   9:0    0 930.4G  0 raid1 /

cat /etc/mdadm/mdadm.conf

ARRAY /dev/md0 metadata=1.2 name=ubuntu-server:0 UUID=1d9d79bd:d675f751:144db975:0d24caa9
MAILADDR root
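
On the email part of my question, my understanding (I haven't verified every detail) is that MAILADDR must point at a reachable mailbox, the mdmonitor service must be running, and a local MTA such as postfix must be able to deliver mail. A sketch of the config change, with a placeholder address:

```
# /etc/mdadm/mdadm.conf — "admin@example.com" is a placeholder;
# use a mailbox you actually read.
MAILADDR admin@example.com

# Then (requires a working MTA such as postfix):
#   sudo systemctl enable --now mdmonitor
#   sudo mdadm --monitor --scan --test --oneshot   # mails a TestMessage event
```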

sudo mdadm --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Fri Aug 30 21:55:50 2019
        Raid Level : raid1
        Array Size : 975578112 (930.38 GiB 998.99 GB)
     Used Dev Size : 975578112 (930.38 GiB 998.99 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Apr  3 23:18:52 2021
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : ubuntu-server:0
              UUID : 1d9d79bd:d675f751:144db975:0d24caa9
            Events : 1907228

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       19        1      active sync   /dev/sdb3

cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb3[1]
      975578112 blocks super 1.2 [2/1] [_U]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>
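
For what it's worth, my understanding is that the underscore in the '[_U]' field is what marks the missing member. A minimal sketch of how that can be flagged in a script (the sample line is copied from the output above; on the server you would read /proc/mdstat itself):

```shell
# Flag a degraded md array by looking for an underscore inside the
# [UU]-style status field of /proc/mdstat. The sample line is inlined
# here for illustration only.
mdstat_line='975578112 blocks super 1.2 [2/1] [_U]'
if printf '%s\n' "$mdstat_line" | grep -qE '\[[U_]*_[U_]*\]'; then
    echo "DEGRADED"   # at least one member is missing
else
    echo "OK"
fi
```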

Answer 1

Thanks Terence. SMART shows both disks are fine, so I ran 'sudo mdadm --manage /dev/md0 --add /dev/sda2', which added the disk back and rebuilt the array. Everything looks normal again now.
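
For anyone following along, this is roughly the sequence I used, sketched with a dry-run guard so nothing touches the array until you set DRY_RUN=0. It assumes /dev/sda is the dropped disk and /dev/sda2 is its RAID member partition:

```shell
# Dry-run wrapper: with DRY_RUN unset or 1, commands are only printed.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run smartctl -H /dev/sda                      # confirm the disk itself is healthy
run mdadm --manage /dev/md0 --add /dev/sda2   # re-add the member; RAID1 resync starts
run sh -c 'cat /proc/mdstat'                  # watch "recovery" progress until [UU]
```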

cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda2[0] sdb3[1]
      975578112 blocks super 1.2 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

unused devices: <none>

lsblk

NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0     7:0    0  99.2M  1 loop  /snap/core/10908
loop2     7:2    0  99.2M  1 loop  /snap/core/10859
sda       8:0    0 931.5G  0 disk
├─sda1    8:1    0     1G  0 part
└─sda2    8:2    0 930.5G  0 part
  └─md0   9:0    0 930.4G  0 raid1 /
sdb       8:16   0 931.5G  0 disk
├─sdb1    8:17   0     1M  0 part
├─sdb2    8:18   0     1G  0 part  /boot
└─sdb3    8:19   0 930.5G  0 part
  └─md0   9:0    0 930.4G  0 raid1 /

sudo mdadm --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Fri Aug 30 21:55:50 2019
        Raid Level : raid1
        Array Size : 975578112 (930.38 GiB 998.99 GB)
     Used Dev Size : 975578112 (930.38 GiB 998.99 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Apr  5 07:46:14 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : ubuntu-server:0
              UUID : 1d9d79bd:d675f751:144db975:0d24caa9
            Events : 1930973

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       19        1      active sync   /dev/sdb3
