RAID1 clean, degraded

After a fire at my hosting provider, I finally regained access to my Debian 9 server. I can boot into rescue mode, but the server will not boot normally (from the HDDs).

If I run cat /proc/mdstat, I see four RAID1 entries: md2, md3, md126, and md127 (maybe that is already wrong — should there be four?).

root@rescue:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md2 : active raid1 sda2[0]
      523200 blocks [2/1] [U_]

md3 : active raid1 sda3[0]
      66558912 blocks [2/1] [U_]

md126 : active raid1 sdb2[1]
      20478912 blocks [2/1] [_U]

md127 : active raid1 sdb3[1]
      3885486016 blocks [2/1] [_U]
      bitmap: 29/29 pages [116KB], 65536KB chunk

unused devices: <none>

When I run "mdadm -D" on any of these RAID1 arrays, I get a "clean, degraded" state, for example:

root@rescue:/# mdadm -D /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Sat Sep 26 23:54:48 2020
     Raid Level : raid1
     Array Size : 66558912 (63.48 GiB 68.16 GB)
  Used Dev Size : 66558912 (63.48 GiB 68.16 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Sun Apr 25 15:56:55 2021
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : ed678e73:ac21dd35:a4d2adc2:26fd5302 (local to host rescue.ovh.net)
         Events : 0.30984

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       2       0        0        2      removed
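Since every array shows one member as "removed", my next step would be to dump the raw MD superblock of each RAID member partition and compare UUIDs and event counters (a diagnostic sketch only — --examine reads metadata and changes nothing on disk):

```shell
# Print the MD superblock of each linux_raid_member partition
# reported by blkid. Read-only: mdadm --examine never writes.
for part in /dev/sda2 /dev/sda3 /dev/sdb2 /dev/sdb3; do
    echo "=== $part ==="
    mdadm --examine "$part"
done
```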

I am pasting the output of a few more commands for additional information:

root@rescue:/# blkid
/dev/sda1: LABEL="EFI_SYSPART" UUID="8857-4B3A" TYPE="vfat" PARTLABEL="primary" PARTUUID="2e641df3-f71f-4ab5-92ca-668851a02d77"
/dev/sda2: UUID="2971523d-f498-f757-a4d2-adc226fd5302" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="599f49fb-f989-4832-b77b-e2f27a673f97"
/dev/sda3: UUID="ed678e73-ac21-dd35-a4d2-adc226fd5302" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="624ddd28-87be-40c0-b305-79bb3347ea48"
/dev/sda4: LABEL="swap-sda4" UUID="b8da1fc7-5b4a-44b4-aa20-40e8f26f0ca2" TYPE="swap" PARTLABEL="primary" PARTUUID="94d9ce3d-2e42-498d-8c7f-1feae8d72338"
/dev/sda5: UUID="bb12ede2-142e-4ccf-9827-e19eb0080d53" TYPE="ext4" PARTLABEL="logical" PARTUUID="41fb2a01-48cb-4155-8808-02b9235335a0"
/dev/sdb1: LABEL="EFI_SYSPART" UUID="DD13-67E8" TYPE="vfat" PARTLABEL="primary" PARTUUID="77c4ac1b-164b-4b0d-b437-1d3a80eb42a9"
/dev/sdb2: UUID="bb2e7d1a-8f0f-c857-a4d2-adc226fd5302" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="1260b8f0-e1bc-42b2-8776-d78476eaa6e1"
/dev/sdb3: UUID="19ed8088-4525-3065-a4d2-adc226fd5302" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="b90eb1f5-b2f7-429a-a789-b51007914f53"
/dev/sdb4: LABEL="swap-sdb4" UUID="ab630309-a5f4-416a-9625-81a8b31cca5d" TYPE="swap" PARTLABEL="primary" PARTUUID="728caa19-34fa-4a78-a2ee-4c2b3874e15a"
/dev/md127: LABEL="/var" UUID="234afc28-b25d-4fec-b002-2c81df1dff23" TYPE="ext4"
/dev/md126: LABEL="/" UUID="284a8e2b-51ba-48bd-836a-c57a16cd05bb" TYPE="ext4"
/dev/md3: LABEL="/" UUID="baa73135-6079-41cb-99d6-2ea2cf2124ce" TYPE="ext4"
/dev/md2: LABEL="/boot" UUID="9bd5e2fb-6045-456c-886d-0299de841f60" TYPE="ext4"

root@rescue:/# fdisk -l | grep "Disk "
Disk /dev/ram0: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram1: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram2: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram3: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram4: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram5: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram6: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram7: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram8: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram9: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram10: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram11: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram12: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram13: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram14: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/ram15: 50 MiB, 52428800 bytes, 102400 sectors
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk identifier: 0EFAD3CA-A5DB-4B4B-8E33-C50B0DA58C72
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk identifier: FB7E8F72-716D-411D-BC38-1D419EF28895
Disk /dev/md127: 3.6 TiB, 3978737680384 bytes, 7770972032 sectors
Disk /dev/md126: 19.5 GiB, 20970405888 bytes, 40957824 sectors
Disk /dev/md3: 63.5 GiB, 68156325888 bytes, 133117824 sectors
Disk /dev/md2: 511 MiB, 535756800 bytes, 1046400 sectors

root@rescue:/# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This configuration was auto-generated on Tue, 12 May 2015 13:56:17 +0000 by mkconf

root@rescue:/# mdadm --detail --scan --verbose
ARRAY /dev/md/127 level=raid1 num-devices=2 metadata=0.90 UUID=19ed8088:45253065:a4d2adc2:26fd5302
   devices=/dev/sdb3
ARRAY /dev/md/126 level=raid1 num-devices=2 metadata=0.90 UUID=bb2e7d1a:8f0fc857:a4d2adc2:26fd5302
   devices=/dev/sdb2
ARRAY /dev/md/3 level=raid1 num-devices=2 metadata=0.90 UUID=ed678e73:ac21dd35:a4d2adc2:26fd5302
   devices=/dev/sda3
ARRAY /dev/md/2 level=raid1 num-devices=2 metadata=0.90 UUID=2971523d:f498f757:a4d2adc2:26fd5302
   devices=/dev/sda2
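One thing that worries me in the blkid output above is that two different arrays both carry LABEL="/" (md3 on sda3 and md126 on sdb2), with different sizes and different UUIDs, as if the two disks hold two different array sets. Before touching anything, I was planning to mount both "/" candidates read-only from rescue mode to see which one holds the current installation (the mount points /mnt/a and /mnt/b are my own choice):

```shell
# Mount both "/" candidates read-only and compare their contents.
# -o ro guarantees nothing is written to either filesystem.
mkdir -p /mnt/a /mnt/b
mount -o ro /dev/md3   /mnt/a
mount -o ro /dev/md126 /mnt/b
# Compare e.g. hostname, fstab, and most recent logs to tell
# the live system apart from a stale or foreign one.
diff /mnt/a/etc/hostname /mnt/b/etc/hostname
ls -l /mnt/a/var/log /mnt/b/var/log 2>/dev/null
```

If the two halves really do belong to different installations, blindly re-adding partitions to resync the mirrors could destroy data, so I have not tried that.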

I need help with this.

Thank you.