mdadm checks the RAID 5 array after every reboot

System info: Ubuntu 20.04, software RAID 5 (converted from RAID 1 by adding a third HDD). The filesystem is Ext4 on top of LUKS.

I noticed the system was slow after a reboot, so I checked the array status via /proc/mdstat, which showed the following:

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb[2] sdc[0] sdd[1]
      7813772928 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  check = 14.3% (558996536/3906886464) finish=322.9min speed=172777K/sec
      bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
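
Side note: the progress of a running check can be followed by simply polling /proc/mdstat; for example (watch is from the standard procps package):

watch -n 5 cat /proc/mdstat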

It is running a check again, but I don't know why. No cron jobs are set up. These log entries have appeared after every reboot since the system was converted to RAID 5, though I'm not sure whether it actually re-checks every time:

Jan  3 14:34:47 <sysname> kernel: [    3.473942] md/raid:md0: device sdb operational as raid disk 2
Jan  3 14:34:47 <sysname> kernel: [    3.475170] md/raid:md0: device sdc operational as raid disk 0
Jan  3 14:34:47 <sysname> kernel: [    3.476402] md/raid:md0: device sdd operational as raid disk 1
Jan  3 14:34:47 <sysname> kernel: [    3.478290] md/raid:md0: raid level 5 active with 3 out of 3 devices, algorithm 2
Jan  3 14:34:47 <sysname> kernel: [    3.520677] md0: detected capacity change from 0 to 8001303478272
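
For reference: a running check can also be interrupted on the spot through the md sysfs interface; a minimal sketch, assuming the array is md0:

cat /sys/block/md0/md/sync_action                     # shows idle, check, resync, ...
echo idle | sudo tee /sys/block/md0/md/sync_action    # aborts the current check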

mdadm --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Wed Nov 25 23:06:18 2020
        Raid Level : raid5
        Array Size : 7813772928 (7451.79 GiB 8001.30 GB)
     Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sun Jan  3 16:17:28 2021
             State : clean, checking
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 64K
Consistency Policy : bitmap
      Check Status : 16% complete
              Name : ubuntu-server:0
              UUID : <UUID>
            Events : 67928
    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       2       8       16        2      active sync   /dev/sdb

Is this normal behavior?

Any input is appreciated.

Answer 1

Update 2021/06/18:

for svc in mdcheck_start.timer mdcheck_continue.timer; do sudo systemctl stop ${svc}; sudo systemctl disable ${svc}; done

Taken from: https://a20.net/bert/2020/11/02/disable-periodic-raid-check-on-ubuntu-20-04-systemd/
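
Afterwards, the result can be verified with the usual systemctl queries (assuming the unit names above):

systemctl list-timers --all 'mdcheck*'
systemctl is-enabled mdcheck_start.timer mdcheck_continue.timer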

Start of update 04/05/2021.
My previous answer does not seem to have helped.
The check ran again, even though /etc/default/mdadm had been changed.

I found some other things to investigate:
mdcheck_start.service
mdcheck_start.timer
mdcheck_continue.service
mdcheck_continue.timer
/etc/systemd/system/mdmonitor.service.wants/mdcheck_start.timer
/etc/systemd/system/mdmonitor.service.wants/mdcheck_continue.timer
/etc/systemd/system/mdmonitor.service.wants/mdmonitor-oneshot.timer
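
These units and the symlinks pulling them in can be located with, for example:

systemctl list-unit-files 'mdcheck*'
ls -l /etc/systemd/system/mdmonitor.service.wants/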

systemctl status mdcheck_start.service
● mdcheck_start.service - MD array scrubbing
     Loaded: loaded (/lib/systemd/system/mdcheck_start.service; static; vendor preset: enabled)
     Active: inactive (dead)
TriggeredBy: ● mdcheck_start.timer
systemctl status mdcheck_start.timer
● mdcheck_start.timer - MD array scrubbing
     Loaded: loaded (/lib/systemd/system/mdcheck_start.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Sun 2021-05-02 19:40:50 CEST; 1 day 14h ago
    Trigger: Sun 2021-06-06 22:36:42 CEST; 1 months 3 days left
   Triggers: ● mdcheck_start.service

May 02 19:40:50 xxx systemd[1]: Started MD array scrubbing.

systemctl status mdcheck_continue.service
● mdcheck_continue.service - MD array scrubbing - continuation
     Loaded: loaded (/lib/systemd/system/mdcheck_continue.service; static; vendor preset: enabled)
     Active: inactive (dead)
TriggeredBy: ● mdcheck_continue.timer
  Condition: start condition failed at Tue 2021-05-04 06:38:39 CEST; 3h 26min ago
             └─ ConditionPathExistsGlob=/var/lib/mdcheck/MD_UUID_* was not met
systemctl status mdcheck_continue.timer
● mdcheck_continue.timer - MD array scrubbing - continuation
     Loaded: loaded (/lib/systemd/system/mdcheck_continue.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Sun 2021-05-02 19:40:50 CEST; 1 day 14h ago
    Trigger: Wed 2021-05-05 00:35:53 CEST; 14h left
   Triggers: ● mdcheck_continue.service

May 02 19:40:50 xxx systemd[1]: Started MD array scrubbing - continuation.
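
The ConditionPathExistsGlob line above shows the mechanism: the start job leaves a checkpoint file per array under /var/lib/mdcheck/, and the continuation job only runs while such a file exists. A sketch for inspecting and, if desired, clearing those checkpoints (assuming that path):

ls /var/lib/mdcheck/
sudo rm -f /var/lib/mdcheck/MD_UUID_*    # stops further continuation runs
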
sudo cat /etc/systemd/system/mdmonitor.service.wants/mdcheck_start.timer
#  This file is part of mdadm.
#
#  mdadm is free software; you can redistribute it and/or modify it
#  under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

[Unit]
Description=MD array scrubbing

[Timer]
OnCalendar=Sun *-*-1..7 1:00:00
RandomizedDelaySec=24h
Persistent=true

[Install]
WantedBy=mdmonitor.service
Also=mdcheck_continue.timer
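
The OnCalendar expression above fires on the first Sunday of each month at 01:00 (plus up to 24 h of randomized delay); it can be decoded with systemd-analyze:

systemd-analyze calendar 'Sun *-*-1..7 1:00:00'
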
sudo cat /etc/systemd/system/mdmonitor.service.wants/mdcheck_continue.timer 
#  This file is part of mdadm.
#
#  mdadm is free software; you can redistribute it and/or modify it
#  under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

[Unit]
Description=MD array scrubbing - continuation

[Timer]
OnCalendar=daily
RandomizedDelaySec=12h
Persistent=true

[Install]
WantedBy=mdmonitor.service
sudo cat /etc/systemd/system/mdmonitor.service.wants/mdmonitor-oneshot.timer 
#  This file is part of mdadm.
#
#  mdadm is free software; you can redistribute it and/or modify it
#  under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

[Unit]
Description=Reminder for degraded MD arrays

[Timer]
OnCalendar=daily
RandomizedDelaySec=24h
Persistent=true

[Install]
WantedBy= mdmonitor.service

End of update 04/05/2021.


Try sudo dpkg-reconfigure mdadm

Note that I'm not sure whether the tip above helps.
I had the same problem with RAID 5 on 20.04.

First, I tried manually editing /etc/default/mdadm and changing AUTOCHECK=true to AUTOCHECK=false. But that didn't help.
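
For illustration, that manual edit amounts to something like the following (the sed pattern is just a sketch):

sudo sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm
grep ^AUTOCHECK /etc/default/mdadm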

Today I ran dpkg-reconfigure mdadm. The /etc/default/mdadm file now looks the same (AUTOCHECK=false), but as part of the process, dpkg-reconfigure mdadm also makes an update-initramfs call. Hopefully that is what helps.

... update-initramfs: deferring update (trigger activated) ...
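
If you don't want to wait for the deferred trigger, the initramfs can also be regenerated immediately with the standard initramfs-tools call:

sudo update-initramfs -u -k all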

Extended log:

sudo dpkg-reconfigure mdadm
update-initramfs: deferring update (trigger activated)
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/50-curtin-settings.cfg'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655310: /usr/sbin/grub-probe
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655310: /usr/sbin/grub-probe
Found linux image: /boot/vmlinuz-5.4.0-72-generic
Found initrd image: /boot/initrd.img-5.4.0-72-generic
Found linux image: /boot/vmlinuz-5.4.0-71-generic
Found initrd image: /boot/initrd.img-5.4.0-71-generic
Found linux image: /boot/vmlinuz-5.4.0-70-generic
Found initrd image: /boot/initrd.img-5.4.0-70-generic
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655841: /usr/sbin/grub-probe
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655841: /usr/sbin/grub-probe
done
Processing triggers for initramfs-tools (0.136ubuntu6.4) ...
update-initramfs: Generating /boot/initrd.img-5.4.0-72-generic

The full /etc/default/mdadm file:

cat /etc/default/mdadm 
# mdadm Debian configuration
#
# You can run 'dpkg-reconfigure mdadm' to modify the values in this file, if
# you want. You can also change the values here and changes will be preserved.
# Do note that only the values are preserved; the rest of the file is
# rewritten.
#

# AUTOCHECK:
#   should mdadm run periodic redundancy checks over your arrays? See
#   /etc/cron.d/mdadm.
AUTOCHECK=false

# AUTOSCAN:
#   should mdadm check once a day for degraded arrays? See
#   /etc/cron.daily/mdadm.
AUTOSCAN=true

# START_DAEMON:
#   should mdadm start the MD monitoring daemon during boot?
START_DAEMON=true

# DAEMON_OPTIONS:
#   additional options to pass to the daemon.
DAEMON_OPTIONS="--syslog"

# VERBOSE:
#   if this variable is set to true, mdadm will be a little more verbose e.g.
#   when creating the initramfs.
VERBOSE=false
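
If the checks still come back despite AUTOCHECK=false, masking the timers is a stronger option than disabling them, since a masked unit cannot be started at all (again assuming the unit names shown above):

sudo systemctl mask mdcheck_start.timer mdcheck_continue.timer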
