LVM underneath an mdadm RAID disappears after reboot

The main part of my problem has already been discussed here ... but I have two specific questions that were not answered there.

Situation: I upgraded the hardware, installed two new disks and changed the setup.

This is the new setup:

PROMPT> blkid
/dev/nvme0n1p3: UUID="29cd2fd5-2cd5-455c-9ac0-7cb278e46ee3" TYPE="swap" PARTUUID="dc3d569d-363f-4f68-87ac-56e1bdb4f29d"
/dev/nvme0n1p1: UUID="D2F8-117C" TYPE="vfat" PARTUUID="cdc50870-bddc-47bf-9835-369694713e41"
/dev/nvme0n1p2: UUID="e36edd7b-d3fd-45f4-87a8-f828238aab08" TYPE="ext4" PARTUUID="25d98b84-55a1-4dd8-81c1-f2cfece5c802"
/dev/sda: UUID="2d9225e8-edc1-01fc-7d54-45fcbdcc8020" UUID_SUB="fec59176-0f3a-74d4-1947-32b508978749" LABEL="fangorn:0" TYPE="linux_raid_member"
/dev/sde: UUID="2d9225e8-edc1-01fc-7d54-45fcbdcc8020" UUID_SUB="efce71d0-3080-eecb-ce2f-8a166b2e4441" LABEL="fangorn:0" TYPE="linux_raid_member"
/dev/sdf: UUID="dddf84e3-2ec3-4156-8d18-5f1dce7be002" UUID_SUB="be5e9e9f-d627-2968-837c-9f656d2f62ba" LABEL="fangorn:1" TYPE="linux_raid_member"
/dev/sdg: UUID="dddf84e3-2ec3-4156-8d18-5f1dce7be002" UUID_SUB="1e73b358-a0c8-1c2d-34bd-8663e7906e6f" LABEL="fangorn:1" TYPE="linux_raid_member"
/dev/sdh1: UUID="6588304d-6098-4f80-836e-0e4832e2de8f" TYPE="ext4" PARTUUID="000533fc-01"
/dev/md0: UUID="7Nyd6C-oG50-b3jJ-aGkL-feIE-pAFc-5uM7vy" TYPE="LVM2_member"
/dev/md1: UUID="MtJAdS-Jdbn-2MR7-6evR-XCvL-wEm5-PUkg8p" TYPE="LVM2_member"
/dev/mapper/fg00-Data: LABEL="data" UUID="a26b7a38-d24f-4d28-ab8d-89233db95be6" TYPE="ext4"

PROMPT> mdadm -Es 
ARRAY /dev/md/1  metadata=1.2 UUID=dddf84e3:2ec34156:8d185f1d:ce7be002 name=fangorn:1
ARRAY /dev/md/0  metadata=1.2 UUID=2d9225e8:edc101fc:7d5445fc:bdcc8020 name=fangorn:0

Strange: there are no /dev/md/0 and /dev/md/1 device files, only /dev/md0 and /dev/md1.

PROMPT> mdadm --detail /dev/md[01]
/dev/md0:
           Version : 1.2
     Creation Time : Wed Oct 16 16:51:48 2019
        Raid Level : raid1
        Array Size : 3906886464 (3725.90 GiB 4000.65 GB)
     Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Oct 16 16:57:03 2019
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : fangorn:0  (local to host fangorn)
              UUID : 2d9225e8:edc101fc:7d5445fc:bdcc8020
            Events : 2

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       64        1      active sync   /dev/sde
/dev/md1:
           Version : 1.2
     Creation Time : Wed Oct 16 16:51:56 2019
        Raid Level : raid1
        Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Oct 16 16:51:56 2019
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : fangorn:1  (local to host fangorn)
              UUID : dddf84e3:2ec34156:8d185f1d:ce7be002
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       80        0      active sync   /dev/sdf
       1       8       96        1      active sync   /dev/sdg

On top of these two RAID1 arrays I built a volume group that glues them together into one partition, which I mount as /data.
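
For context, a rough sketch of how such a stack is typically built (reconstructed from the outputs above, not the exact commands that were originally used):

pvcreate /dev/md0 /dev/md1            # mark both arrays as LVM physical volumes
vgcreate fg00 /dev/md0 /dev/md1       # one volume group spanning both arrays
lvcreate -l 100%FREE -n Data fg00     # a single logical volume across all free extents
mkfs.ext4 -L data /dev/fg00/Data
mount /dev/fg00/Data /data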

The problem

Every time I reboot, the RAID is gone. There is not a single trace of the arrays left.

Yes, I did edit /etc/mdadm/mdadm.conf:

PROMPT> cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sda /dev/sde /dev/sdf /dev/sdg

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR m++++@++++++.++

# definitions of existing MD arrays

#ARRAY /dev/md1 metadata=1.2 name=fangorn:1 UUID=7068062b:a1a9265e:a7b5dc00:586d9f1b
#ARRAY /dev/md0 metadata=1.2 name=fangorn:0 UUID=9ab9aecd:cdfd3fe8:04587007:892edf3e
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=fangorn:0 UUID=7368baaf:9b08df19:d9362975:bf70eb1f devices=/dev/sda,/dev/sde
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=fangorn:1 UUID=dc218d09:18f63682:78b5ab94:6aa53459 devices=/dev/sdf,/dev/sdg

Yes, I did run update-initramfs -u.
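
For reference, the usual sequence for persisting array definitions on Debian/Ubuntu is shown below; this is the standard procedure, not a claim that it fixes the problem described here:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # or: mdadm --examine --scan
update-initramfs -u                               # so the initramfs carries the same config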

The only way to get my data back is to re-create the arrays after every reboot:

mdadm --create /dev/md0 --assume-clean --level=raid1 --verbose --raid-devices=2 /dev/sda /dev/sde
mdadm --create /dev/md1 --assume-clean --level=raid1 --verbose --raid-devices=2 /dev/sdf /dev/sdg
  • Note the --assume-clean switch.
  • The LVM is immediately re-created and mountable (see the short sketch below).
  • No data is lost.
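
A minimal sketch of that activation step, using the VG/LV names from the outputs above:

vgchange -ay fg00                     # activate the volume group on top of the re-created arrays
mount /dev/mapper/fg00-Data /data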

But: how do I get the system to re-assemble the arrays on reboot?

There is already quite a lot of data on the disks, so I would rather not repartition the underlying hardware unless there is a way to do it without losing the data.

Can I access the data without the arrays and the LVM being up and running?

Additional information

  • Added 2019-10-22

Right after a reboot (that is, when re-assembly has failed and I am in single-user mode) I get the following output from mdadm (I had also removed mdadm.conf in the meantime to see whether that would help; it didn't):

PROMPT> mdadm --assemble --scan -v 
mdadm: looking for devices for further assembly
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no recogniseable superblock on /dev/sdh1
mdadm: Cannot assemble mbr metadata on /dev/sdh
mdadm: Cannot assemble mbr metadata on /dev/sdg
mdadm: Cannot assemble mbr metadata on /dev/sdf
mdadm: Cannot assemble mbr metadata on /dev/sde
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: no recogniseable superblock on /dev/nvme0n1p3
mdadm: no recogniseable superblock on /dev/nvme0n1p2
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1p1
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
mdadm: No arrays found in config file or automatically

After that I re-created the arrays as described above and got the following output:

PROMPT> mdadm --create /dev/md0 --assume-clean --level=raid1 --verbose --raid-devices=2 /dev/sda /dev/sde
mdadm: partition table exists on /dev/sda
mdadm: partition table exists on /dev/sda but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: partition table exists on /dev/sde
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
mdadm: size set to 3906886464K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? 
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

PROMPT> mdadm --create /dev/md1 --assume-clean --level=raid1 --verbose --raid-devices=2 /dev/sdf /dev/sdg
mdadm: partition table exists on /dev/sdf
mdadm: partition table exists on /dev/sdf but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: partition table exists on /dev/sdg
mdadm: partition table exists on /dev/sdg but will be lost or
       meaningless after creating array
mdadm: size set to 1953382464K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array?
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Afterwards, another lsblk:

PROMPT>  lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME            SIZE FSTYPE            TYPE  MOUNTPOINT
sda             3,7T linux_raid_member disk  
└─md0           3,7T LVM2_member       raid1 
  └─fg00-Data   5,5T ext4              lvm   /data
sde             3,7T linux_raid_member disk  
└─md0           3,7T LVM2_member       raid1 
  └─fg00-Data   5,5T ext4              lvm   /data
sdf             1,8T linux_raid_member disk  
└─md1           1,8T LVM2_member       raid1 
  └─fg00-Data   5,5T ext4              lvm   /data
sdg             1,8T linux_raid_member disk  
└─md1           1,8T LVM2_member       raid1 
  └─fg00-Data   5,5T ext4              lvm   /data
sdh           119,2G                   disk  
└─sdh1        119,2G ext4              part  /home
sr0            1024M                   rom   
nvme0n1         477G                   disk  
├─nvme0n1p1     300M vfat              part  /boot/efi
├─nvme0n1p2   442,1G ext4              part  /
└─nvme0n1p3    34,6G swap              part  [SWAP]

Is this a hint someone can make sense of?

PROMPT> fdisk -l

The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sda: 3,65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 513C284A-5CC0-4888-8AD0-83C4291B3D78


The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sde: 3,65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 437D10E3-E679-4062-9321-E8EE1A1AA2F5


The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdf: 1,84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DL003-9VT1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E07D5DB8-7253-45DE-92C1-255B7F3E56C8


The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdg: 1,84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Hitachi HDS5C302
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E07D5DB8-7253-45DE-92C1-255B7F3E56C8

Disk /dev/md0: 3,65 TiB, 4000651739136 bytes, 7813772928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md1: 1,84 TiB, 2000263643136 bytes, 3906764928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/fg00-Data: 5,47 TiB, 6000908173312 bytes, 11720523776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

One more piece of information:

PROMPT> gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.4

Caution! After loading partitions, the CRC doesn't check out!
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!
Main header: OK
Backup header: OK
Main partition table: ERROR
Backup partition table: OK

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: damaged

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************

Answer 1

I would like to present another variant of Martin L.'s solution. It differs in that it introduces much less downtime, because the data can be migrated transparently to the new arrays while the system keeps running; during the migration you will only notice reduced disk performance.

Do as suggested in his answer, up to the point where he suggests creating the new VG.

Do not create the new VG. Instead, create new PVs on the newly created arrays and extend the existing VG with them: vgextend fg00 /dev/md-NEW.
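
A minimal sketch, assuming the new arrays come up as /dev/md2 and /dev/md3 (the /dev/md-NEW above is a placeholder for whatever devices were actually created):

pvcreate /dev/md2 /dev/md3           # new PVs on the new arrays
vgextend fg00 /dev/md2 /dev/md3      # grow the existing VG instead of creating a new one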

Then move the logical volumes from the old PVs to the new ones with pvmove /dev/md-OLD. This can be done even while the filesystems are mounted and being accessed. It will take a long time, but it will eventually finish. I would run it inside screen, and verbosely, e.g. screen pvmove -vi5 /dev/md-OLD, so that it is not interrupted if the SSH session drops and progress is shown every 5 seconds.
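
In commands, that migration step could look roughly like this, assuming /dev/md0 and /dev/md1 are the old PVs:

screen pvmove -v -i 5 /dev/md0       # move all extents off the first old PV, progress every 5 s
screen pvmove -v -i 5 /dev/md1       # then the second one, once the first has finished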

It may happen that there are not enough PEs in the new PVs to do this. That is because you are now using partitions instead of whole drives, so the usable space and the array size are slightly smaller. If so, you have to shrink one LV: unmount the FS, shrink the filesystem (with resize2fs) and then reduce the LV size. This will take much longer, but is still faster than copying a busy filesystem file by file.
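
A sketch of such a shrink, with a placeholder target size; the filesystem must be shrunk before the LV and never below its used space:

umount /data
e2fsck -f /dev/fg00/Data             # resize2fs requires a clean filesystem
resize2fs /dev/fg00/Data 5400G       # placeholder size, slightly smaller than the new PVs
lvreduce -L 5400G fg00/Data          # the LV must stay at least as large as the filesystem
mount /dev/fg00/Data /data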

When the old PVs are empty (pvmove has finished), remove them from the VG, remove their PV labels and remove the old arrays. Wipe the now unused drives, partition them and add them to the running arrays. The array re-synchronization happens in the background as well, and until it completes you will only notice reduced disk performance.
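
A sketch of that cleanup, assuming /dev/md0 and /dev/md1 are the old whole-disk arrays and /dev/md2 is one of the new ones:

vgreduce fg00 /dev/md0 /dev/md1            # drop the now-empty PVs from the VG
pvremove /dev/md0 /dev/md1                 # remove the PV labels
mdadm --stop /dev/md0
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sda /dev/sde  # wipe the old whole-disk members (example devices)
# partition the freed disks, then add the partitions to the running new arrays:
mdadm /dev/md2 --add /dev/sda1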

Now, do not forget to fix the boot setup, i.e. mdadm --examine --scan >> /etc/mdadm/mdadm.conf, update-initramfs, and so on.

Answer 2

@nh2 gave a simple, but possibly dangerous, solution in his answer to "What's the difference between creating mdadm array using partitions or the whole disks directly":

Incidentally: if this happens to you, your data is not lost. You can most likely just sgdisk --zap the devices and then re-create the RAID with e.g. mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd (mdadm will tell you that it detected data from the past and ask whether you want to continue re-using that data). I tried this multiple times and it worked, but I still recommend taking a backup before you do it.
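
As a sketch, with the same backup caveat (the device names are just the examples from that answer):

sgdisk --zap /dev/sdc
sgdisk --zap /dev/sdd
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# mdadm reports that it found traces of an earlier array and asks whether to continue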

After extensive research I finally found a solution.

Here is what I did.

First, some status information:

PROMPT> df -h
Dateisystem           Größe Benutzt Verf. Verw% Eingehängt auf
/dev/mapper/fg00-Data  5,4T    1,5T  3,8T   28% /data

Then unmount the partition:

PROMPT> umount /data

PROMPT> cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sdg[1] sdf[0]
      1953382464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md0 : active raid1 sde[1] sda[0]
      3906886464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>

Now I degrade both arrays:

PROMPT > mdadm --manage /dev/md0 --fail /dev/sde
mdadm: set /dev/sde faulty in /dev/md0

PROMPT > mdadm --manage /dev/md1 --fail /dev/sdg
mdadm: set /dev/sdg faulty in /dev/md1

PROMPT > cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sdg[1](F) sdf[0]
      1953382464 blocks super 1.2 [2/1] [U_]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md0 : active raid1 sde[1](F) sda[0]
      3906886464 blocks super 1.2 [2/1] [U_]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>

Remove the disks from the arrays:

PROMPT > mdadm --manage /dev/md0 --remove /dev/sde 
mdadm: hot removed /dev/sde from /dev/md0
PROMPT > mdadm --manage /dev/md1 --remove /dev/sdg
mdadm: hot removed /dev/sdg from /dev/md1

PROMPT > cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sdf[0]
      1953382464 blocks super 1.2 [2/1] [U_]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md0 : active raid1 sda[0]
      3906886464 blocks super 1.2 [2/1] [U_]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>

Now /dev/sde and /dev/sdg are free to be (re)partitioned.

  • So I created new partitions on /dev/sde and /dev/sdg, as suggested a few MB smaller than the available space.
  • Created new two-disk RAID1 arrays with one active disk and the second member "missing".
  • Set up a new LVM volume group using these new arrays as physical volumes.
  • Created a logical volume on it (same size as the old one, minus the few MB lost to the partitioning).
  • Copied all data from the old LV to the new one.
  • Destroyed the old RAIDs and added the old disks, now partitioned, to the new arrays (see the command sketch below).
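
In commands, the procedure could have looked roughly like the following; this is a reconstruction from the outputs below with placeholder partition sizes, not a verbatim transcript:

# partition the freed disks, leaving a little headroom (the -100M is a placeholder)
sgdisk -n 1:0:-100M -t 1:fd00 /dev/sde
sgdisk -n 1:0:-100M -t 1:fd00 /dev/sdg

# new degraded RAID1 arrays with the second member 'missing'
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 missing
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdg1 missing

# new VG and LV on top of the new arrays
pvcreate /dev/md2 /dev/md3
vgcreate fg01 /dev/md2 /dev/md3
lvcreate -l 100%FREE -n Data fg01
mkfs.ext4 /dev/fg01/Data

# copy the data from the old LV to the new one, then switch the mount
mount /dev/fg01/Data /mnt
rsync -aHAX /data/ /mnt/
umount /mnt /data
mount /dev/fg01/Data /data

# tear down the old arrays and reuse their disks as the second members
vgchange -an fg00
mdadm --stop /dev/md0
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sda /dev/sdf
sgdisk -n 1:0:-100M -t 1:fd00 /dev/sda
sgdisk -n 1:0:-100M -t 1:fd00 /dev/sdf
mdadm /dev/md2 --add /dev/sda1
mdadm /dev/md3 --add /dev/sdf1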

This is the new status:

PROMPT > lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME              SIZE FSTYPE            TYPE  MOUNTPOINT
sda               3,7T                   disk  
└─sda1            3,7T linux_raid_member part  
  └─md2           3,7T LVM2_member       raid1 
    └─fg01-Data   5,5T ext4              lvm   /data
sde               3,7T                   disk  
└─sde1            3,7T linux_raid_member part  
  └─md2           3,7T LVM2_member       raid1 
    └─fg01-Data   5,5T ext4              lvm   /data
sdf               1,8T                   disk  
└─sdf1            1,8T linux_raid_member part  
  └─md3           1,8T LVM2_member       raid1 
    └─fg01-Data   5,5T ext4              lvm   /data
sdg               1,8T                   disk  
└─sdg1            1,8T linux_raid_member part  
  └─md3           1,8T LVM2_member       raid1 
    └─fg01-Data   5,5T ext4              lvm   /data
sdh             119,2G                   disk  
└─sdh1          119,2G ext4              part  /home
sr0              1024M                   rom   
nvme0n1           477G                   disk  
├─nvme0n1p1       300M vfat              part  /boot/efi
├─nvme0n1p2     442,1G ext4              part  /
└─nvme0n1p3      34,6G swap              part  [SWAP]

PROMPT > cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md3 : active raid1 sdf1[1] sdg1[0]
      1953381376 blocks super 1.2 [2/1] [U_]
      [==>..................]  recovery = 10.0% (196493504/1953381376) finish=224.9min speed=130146K/sec
      bitmap: 0/15 pages [0KB], 65536KB chunk

md2 : active raid1 sda1[1] sde1[0]
      3906884608 blocks super 1.2 [2/1] [U_]
      [=>...................]  recovery =  6.7% (263818176/3906884608) finish=429.0min speed=141512K/sec
      bitmap: 2/30 pages [8KB], 65536KB chunk

unused devices: <none>
