How do I properly prevent automatic assembly of RAID arrays on Ubuntu 16.04?

I'm trying to stop mdadm from automatically assembling a RAID array when the disks are attached. I have a solution that works on Ubuntu 14.04, but it doesn't work on 16.04. I've found something that does work on 16.04, but it's a bit of a hack, so I'm hoping someone can tell me how to do this properly.

Background: I want to be able to boot a server and then attach the disks afterwards (specifically, the server is an instance on AWS and the disks are EBS volumes). The disks form a RAID array that was previously detached from another instance.
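For completeness, the attach step itself is just a normal EBS attach of each volume to the already-running instance -- a rough sketch with placeholder IDs, since the exact IDs don't matter here:

# Attach each detached EBS volume to the running instance (volume and
# instance IDs are placeholders); the guest sees them as /dev/xvdf and /dev/xvdg.
aws ec2 attach-volume --volume-id vol-0aaaaaaaaaaaaaaaa \
    --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdf
aws ec2 attach-volume --volume-id vol-0cccccccccccccccc \
    --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdg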

I don't want mdadm to auto-assemble the RAID array; for various reasons to do with my setup, it's much better if I assemble it manually with mdadm --assemble.
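For illustration, this is the kind of manual assembly I mean (the same command I use in the 14.04 test further down):

# Assemble the array by hand from its member devices, then mount it.
mdadm --assemble /dev/md0 /dev/xvdf /dev/xvdg
mount /dev/md0 /mnt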

On Ubuntu 14.04 this was fairly straightforward. In /etc/mdadm/mdadm.conf, I added the line:

AUTO -all

...which is what the mdadm.conf man page says is the right way to do it. I also ran update-initramfs -u to make sure the setting would be in effect at boot. On 14.04 this worked fine: when I attached the disks, the RAID array was not auto-assembled.
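Putting those two steps together, the whole 14.04-era change amounts to something like this (the shell append is just for illustration -- I actually edited the file by hand):

# Tell mdadm not to auto-assemble any arrays, then rebuild the current
# initramfs so the setting also applies at boot time.
echo 'AUTO -all' >> /etc/mdadm/mdadm.conf
update-initramfs -u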

But on 16.04, the system reassembles the array despite that setting. I've tried rebooting before attaching the disks (to make sure mdadm has picked up the config change), running update-initramfs -c -k all (in case a different kernel is used at boot or the initramfs needed recreating from scratch), and rebooting again (in case any additional services needed restarting). None of that helps: as soon as the disks are attached, they are auto-assembled.

The only thing I've found that works is this ServerFault answer -- adding a line to mdadm.conf telling it to scan /dev/null as the set of partitions to examine for MD superblocks:

DEVICE /dev/null

However, that feels like a rather nasty (if clever!) hack. It also causes a worrying error in the system log when the disks are attached:

Process '/sbin/mdadm --incremental /dev/xvdg1 --offroot' failed with exit code 1.
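That error looks like it comes from udev running mdadm --incremental against the newly attached device via a RUN rule; if you want to see which rule fires it, something like this should find it (the rules directory is my guess at the usual location):

# Find the udev rule that calls mdadm --incremental on hotplugged block devices
grep -rn 'mdadm --incremental' /lib/udev/rules.d/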

What is the right way to prevent auto-assembly?

[Update] So, I've managed to put together a minimal repro, and it's definitely a problem even on a stock Ubuntu AMI on AWS (so I think it's a general Ubuntu 16.04 issue).

Here's what I did:

  • Created a 16.04 Ubuntu instance on AWS
  • Attached two 10GB EBS volumes
  • Created a RAID array from them, and put a "canary" file on it so that when I mounted it again later I could be sure it really was the same array:

    /sbin/mdadm --create -l0 -n2 /dev/md0 /dev/xvdf /dev/xvdg
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt
    touch /mnt/tweet
    
  • Stopped the instance and detached the volumes.

  • Started another instance from the Ubuntu AMI ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170202 (ami-f0768de6)
  • Logged in to the new instance and tailed the system log to see what happened when the volumes were attached. As I hadn't changed /etc/mdadm/mdadm.conf at this stage, I expected to see the array auto-assemble.

    root@ip-10-0-0-67:~# journalctl -f
    -- Logs begin at Thu 2017-06-08 18:46:12 UTC. --
    Jun 08 18:46:27 ip-10-0-0-67 systemd-logind[1099]: New session 1 of user ubuntu.
    Jun 08 18:46:27 ip-10-0-0-67 systemd[1375]: Reached target Paths.
    ...
    Jun 08 18:46:57 ip-10-0-0-67 kernel: blkfront: xvdf: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
    Jun 08 18:46:57 ip-10-0-0-67 kernel: md: bind<xvdf>
    Jun 08 18:47:10 ip-10-0-0-67 kernel: blkfront: xvdg: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
    Jun 08 18:47:10 ip-10-0-0-67 kernel: md: bind<xvdg>
    Jun 08 18:47:10 ip-10-0-0-67 kernel: md/raid0:md127: md_size is 41910272 sectors.
    Jun 08 18:47:10 ip-10-0-0-67 kernel: md: RAID0 configuration for md127 - 1 zone
    Jun 08 18:47:10 ip-10-0-0-67 kernel: md: zone0=[xvdf/xvdg]
    Jun 08 18:47:10 ip-10-0-0-67 kernel:       zone-offset=         0KB, device-offset=         0KB, size=  20955136KB
    Jun 08 18:47:10 ip-10-0-0-67 kernel: 
    Jun 08 18:47:10 ip-10-0-0-67 kernel: md127: detected capacity change from 0 to 21458059264
    ^C
    root@ip-10-0-0-67:~# mount /dev/md127 /mnt
    root@ip-10-0-0-67:~# ls /mnt
    lost+found  tweet
    
  • So at this point I'd confirmed that the RAID array gets auto-assembled with the default configuration, which was no surprise. The next step was to add AUTO -all to /etc/mdadm/mdadm.conf, so that it looked like this:

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers
    
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
    
    # definitions of existing MD arrays
    
    AUTO -all
    
    # This file was auto-generated on Thu, 02 Feb 2017 18:23:27 +0000
    # by mkconf $Id$
    
  • Next, ran a full update-initramfs:

    root@ip-10-0-0-67:~# update-initramfs -c -k all
    update-initramfs: Generating /boot/initrd.img-4.4.0-62-generic
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    root@ip-10-0-0-67:~#
    
  • Next, stopped the instance (not terminated -- just shut down for now) and detached the EBS volumes.

  • Restarted the instance, and confirmed that the array wasn't there:

    ubuntu@ip-10-0-0-67:~$ cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    unused devices: <none>
    
  • Tailed the system log while attaching the volumes again, and the array auto-assembled:

    ubuntu@ip-10-0-0-67:~$ sudo journalctl -f
    Jun 08 18:55:25 ip-10-0-0-67 systemd[1]: apt-daily.timer: Adding 2h 21min 27.220312s random time.
    ...
    Jun 08 18:56:02 ip-10-0-0-67 kernel: blkfront: xvdf: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
    Jun 08 18:56:02 ip-10-0-0-67 kernel: md: bind<xvdf>
    Jun 08 18:56:15 ip-10-0-0-67 kernel: blkfront: xvdg: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
    Jun 08 18:56:15 ip-10-0-0-67 kernel: md: bind<xvdg>
    Jun 08 18:56:15 ip-10-0-0-67 kernel: md/raid0:md127: md_size is 41910272 sectors.
    Jun 08 18:56:15 ip-10-0-0-67 kernel: md: RAID0 configuration for md127 - 1 zone
    Jun 08 18:56:15 ip-10-0-0-67 kernel: md: zone0=[xvdf/xvdg]
    Jun 08 18:56:15 ip-10-0-0-67 kernel:       zone-offset=         0KB, device-offset=         0KB, size=  20955136KB
    Jun 08 18:56:15 ip-10-0-0-67 kernel: 
    Jun 08 18:56:15 ip-10-0-0-67 kernel: md127: detected capacity change from 0 to 21458059264
    ^C
    
  • Confirmed that the array really was the one I created originally:

    ubuntu@ip-10-0-0-67:~$ sudo mount /dev/md127 /mnt
    ubuntu@ip-10-0-0-67:~$ ls /mnt
    lost+found  tweet
    

[Another update] So, it looks like the mdadm.conf that gets generated into the initramfs does not come from /etc/mdadm/mdadm.conf! On the same machine as the last test:

ubuntu@ip-10-0-0-67:~$ mkdir initramfs
ubuntu@ip-10-0-0-67:~$ cd initramfs/
ubuntu@ip-10-0-0-67:~/initramfs$ uname -a
Linux ip-10-0-0-67 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@ip-10-0-0-67:~/initramfs$ ls /boot
abi-4.4.0-62-generic  config-4.4.0-62-generic  grub  initrd.img-4.4.0-62-generic  System.map-4.4.0-62-generic  vmlinuz-4.4.0-62-generic
ubuntu@ip-10-0-0-67:~/initramfs$ zcat /boot/initrd.img-4.4.0-62-generic | cpio -i
79595 blocks
ubuntu@ip-10-0-0-67:~/initramfs$ cat etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=599586fc:a53f9227:04bc7b65:8ad7ab99 name=ip-10-0-0-70:0

ubuntu@ip-10-0-0-67:~/initramfs$

I've also confirmed that if the machine is stopped, the disks are removed, it's restarted, and the initramfs is then regenerated, the mdadm.conf that gets put into it looks like the one above, but without the ARRAY line.
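(For what it's worth, a quicker way to peek at the embedded copy without unpacking the whole image, assuming a plain gzip-compressed initramfs like the ones above:)

# Print just the mdadm.conf that is baked into the current initramfs;
# quote the pattern so the shell doesn't try to expand it.
zcat /boot/initrd.img-$(uname -r) | cpio -i --to-stdout '*etc/mdadm/mdadm.conf'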

It really looks as if /etc/mdadm/mdadm.conf is being ignored completely! But man mdadm.conf on the same machine definitely says that's the right place for it.
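The only other thing I can think of checking -- and the hook path here is just my assumption about where Debian-derived systems put it -- is whether the mdadm initramfs hook copies /etc/mdadm/mdadm.conf in or generates its own config:

# See how the initramfs hook builds the mdadm.conf it ships
grep -n 'mdadm.conf' /usr/share/initramfs-tools/hooks/mdadm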

[Yet another update] The behaviour on 14.04 is definitely different. I spun up a 14.04 instance with the default mdadm.conf, and tailed the syslog while attaching the disks:

root@ip-10-81-154-136:~# tail -f /var/log/syslog 
...
Jun  8 19:29:47 ip-10-81-154-136 kernel: [83702921.994799] blkfront: xvdf: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: disabled;
Jun  8 19:29:47 ip-10-81-154-136 kernel: [83702921.999875]  xvdf: unknown partition table
Jun  8 19:29:47 ip-10-81-154-136 kernel: [83702922.053469] md: bind<xvdf>
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.748110] blkfront: xvdg: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: disabled;
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.754217]  xvdg: unknown partition table
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.812789] md: bind<xvdg>
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.817276] md/raid0:md127: md_size is 41910272 sectors.
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.817280] md: RAID0 configuration for md127 - 1 zone
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.817281] md: zone0=[xvdf/xvdg]
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.817286]       zone-offset=         0KB, device-offset=         0KB, size=  20955136KB
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.817287] 
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.817309] md127: detected capacity change from 0 to 21458059264
Jun  8 19:30:04 ip-10-81-154-136 kernel: [83702938.824751]  md127: unknown partition table
^C
root@ip-10-81-154-136:~# mount /dev/md127 /mnt
root@ip-10-81-154-136:~# ls /mnt
lost+found  tweet

...which means they were auto-assembled, just as you'd expect.

Then I added AUTO -all to mdadm.conf, ran update-initramfs -c -k all, stopped the machine, detached the disks, restarted, and once again tailed the syslog while attaching the disks:

root@ip-10-81-154-136:~# tail -f /var/log/syslog 
...
Jun  8 19:34:29 ip-10-81-154-136 kernel: [43448402.304449] blkfront: xvdf: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: disabled;
Jun  8 19:34:29 ip-10-81-154-136 kernel: [43448402.309578]  xvdf: unknown partition table
Jun  8 19:34:51 ip-10-81-154-136 kernel: [43448424.217476] blkfront: xvdg: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: disabled;
Jun  8 19:34:51 ip-10-81-154-136 kernel: [43448424.222645]  xvdg: unknown partition table
^C

So they were not assembled. Double-checking:

root@ip-10-81-154-136:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>

Definitely not assembled. Can they be assembled by hand?

root@ip-10-81-154-136:~# mdadm --assemble /dev/md0 /dev/xvdf /dev/xvdg
mdadm: /dev/md0 has been started with 2 drives.
root@ip-10-81-154-136:~# mount /dev/md0 /mnt
root@ip-10-81-154-136:~# ls /mnt
lost+found  tweet

...yes, they can. So what have we got in the initramfs?

root@ip-10-81-154-136:~# mkdir initramfs
root@ip-10-81-154-136:~# cd initramfs/
root@ip-10-81-154-136:~/initramfs# uname -a
Linux ip-10-81-154-136 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
root@ip-10-81-154-136:~/initramfs# ls /boot
abi-3.13.0-100-generic     config-3.13.0-53-generic       initrd.img-3.13.0-53-generic   vmlinuz-3.13.0-100-generic
abi-3.13.0-53-generic      grub                           System.map-3.13.0-100-generic  vmlinuz-3.13.0-53-generic
config-3.13.0-100-generic  initrd.img-3.13.0-100-generic  System.map-3.13.0-53-generic
root@ip-10-81-154-136:~/initramfs# zcat /boot/initrd.img-3.13.0-100-generic | cpio -i
131798 blocks
root@ip-10-81-154-136:~/initramfs# cat etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=599586fc:a53f9227:04bc7b65:8ad7ab99 name=ip-10-0-0-70:0

root@ip-10-81-154-136:~/initramfs# 

So once again, my changes to /etc/mdadm/mdadm.conf had no effect on what went into the initramfs. On the other hand, at least 14.04 seems to honour the copy in the normal filesystem!

Any ideas would be gratefully received.
