RAID1 read-only after upgrading to Ubuntu 17.10

I'm confused. I had a perfectly working RAID1 setup on 16.10. After upgrading to 17.10, it automagically detected the array and recreated md0. All my files are fine, but when I mount md0 it says the array is read-only:

cat /proc/mdstat 
Personalities : [raid1] 
md0 : active (read-only) raid1 dm-0[0] dm-1[1]
      5860390464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>

sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Jul  9 23:54:40 2016
     Raid Level : raid1
     Array Size : 5860390464 (5588.90 GiB 6001.04 GB)
  Used Dev Size : 5860390464 (5588.90 GiB 6001.04 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Nov  4 23:16:18 2017
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : x6:0  (local to host x6)
           UUID : baaccfeb:860781dd:eda253ba:6a08916f
         Events : 11596

    Number   Major   Minor   RaidDevice State
       0     253        0        0      active sync   /dev/dm-0
       1     253        1        1      active sync   /dev/dm-1

There are no errors in /var/log/kern.log or in dmesg.

I can stop it and reassemble it, but to no effect:

sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan

I don't understand why it worked fine before but the array is now read-only, and I can't find any reason for it. This is the same array that was automagically reassembled when I upgraded from 16.04 to 16.10.

While researching this problem I found a post about an issue where /sys was mounted read-only, and mine indeed is:

ls -ld /sys
dr-xr-xr-x 13 root root 0 Nov  5 22:28 /sys

But neither of these fixes it, as /sys stays read-only:

sudo mount -o remount,rw /sys
sudo mount -o remount,rw -t sysfs sysfs /sys
ls -ld /sys
dr-xr-xr-x 13 root root 0 Nov  5 22:29 /sys

Can anyone offer some insight into what I'm missing?

Edited to include /etc/mdadm/mdadm.conf:

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=baaccfeb:860781dd:eda253ba:6a08916f name=x6:0

# This configuration was auto-generated on Sun, 05 Nov 2017 15:37:16 -0800 by mkconf

The device-mapper files, which appear to be writable:

ls -l /dev/dm-*
brw-rw---- 1 root disk 253, 0 Nov  5 16:28 /dev/dm-0
brw-rw---- 1 root disk 253, 1 Nov  5 16:28 /dev/dm-1

Ubuntu or Debian has also changed something else; I don't know what these osprober files are doing here. I thought they were only used at install time:

ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Nov  5 15:34 control
lrwxrwxrwx 1 root root       7 Nov  5 16:28 osprober-linux-sdb1 -> ../dm-0
lrwxrwxrwx 1 root root       7 Nov  5 16:28 osprober-linux-sdc1 -> ../dm-1

parted output:

sudo parted -l
Model: ATA SanDisk Ultra II (scsi)
Disk /dev/sda: 960GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  81.9GB  81.9GB  ext4
 2      81.9GB  131GB   49.2GB  linux-swap(v1)
 3      131GB   131GB   99.6MB  fat32                 boot, esp
 4      131GB   960GB   829GB   ext4


Model: ATA WDC WD60EZRZ-00R (scsi)
Disk /dev/sdb: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  6001GB  6001GB                     raid


Model: ATA WDC WD60EZRZ-00R (scsi)
Disk /dev/sdc: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  6001GB  6001GB                     raid


Error: /dev/mapper/osprober-linux-sdc1: unrecognised disk label
Model: Linux device-mapper (linear) (dm)                                  
Disk /dev/mapper/osprober-linux-sdc1: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags: 

Error: /dev/mapper/osprober-linux-sdb1: unrecognised disk label
Model: Linux device-mapper (linear) (dm)                                  
Disk /dev/mapper/osprober-linux-sdb1: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags: 

Model: Linux Software RAID Array (md)
Disk /dev/md0: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags: 

Number  Start  End     Size    File system  Flags
 1      0.00B  6001GB  6001GB  ext4

Device-mapper information:

$ sudo dmsetup table
osprober-linux-sdc1: 0 11721043087 linear 8:33 0
osprober-linux-sdb1: 0 11721043087 linear 8:17 0

$ sudo dmsetup info
Name:              osprober-linux-sdc1
State:             ACTIVE (READ-ONLY)
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1

Name:              osprober-linux-sdb1
State:             ACTIVE (READ-ONLY)
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1

strace output from trying to set the array to rw (with some context):

openat(AT_FDCWD, "/dev/md0", O_RDONLY)  = 3
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
ioctl(3, RAID_VERSION, 0x7fffb3813574)  = 0
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
ioctl(3, RAID_VERSION, 0x7fffb38134c4)  = 0
ioctl(3, RAID_VERSION, 0x7fffb38114bc)  = 0
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
readlink("/sys/dev/block/9:0", "../../devices/virtual/block/md0", 199) = 31
openat(AT_FDCWD, "/sys/block/md0/md/metadata_version", O_RDONLY) = 4
read(4, "1.2\n", 4096)                  = 4
close(4)                                = 0
openat(AT_FDCWD, "/sys/block/md0/md/level", O_RDONLY) = 4
read(4, "raid1\n", 4096)                = 6
close(4)                                = 0
ioctl(3, GET_ARRAY_INFO, 0x7fffb3813580) = 0
ioctl(3, RESTART_ARRAY_RW, 0)           = -1 EROFS (Read-only file system)
write(2, "mdadm: failed to set writable fo"..., 66mdadm: failed to set writable for /dev/md0: Read-only file system
) = 66

Answer 1

This doesn't explain why your array ended up in read-only mode, but

mdadm --readwrite /dev/md0

should set it back to normal. In your case it doesn't, for a reason that isn't entirely obvious: if the component devices are themselves read-only, the RAID array is read-only as well (which matches both the behaviour you're seeing and the code path used when you try to re-enable read-write).
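
As a quick check of that claim (a sketch; the dm-0/dm-1 names are the ones shown in the --detail output above), you can query the kernel's read-only flag on the component devices:

blockdev --getro /dev/dm-0 /dev/dm-1        # prints 1 for a read-only device, 0 for read-write
cat /sys/block/dm-0/ro /sys/block/dm-1/ro   # same flag, exposed via sysfs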

The dmsetup table information strongly hints at what is going on: osprober (I presume, given the devices' names) is finding the real RAID components, and for some reason it is creating device-mapper devices on top of them, and those are being picked up and used for the RAID array. Since the only device-mapper devices are the two osprober ones, the simplest solution is to stop the RAID device, stop the DM devices, and re-scan the RAID arrays so that the underlying component devices are used. To stop the DM devices, run

dmsetup remove_all

as root.
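
Assuming nothing else on this machine relies on device-mapper (no LVM or dm-crypt volumes), a minimal recovery sequence along those lines might look like this:

sudo mdadm --stop /dev/md0        # stop the array that was assembled on top of the dm devices
sudo dmsetup remove_all           # tear down the osprober mappings
sudo mdadm --assemble --scan      # reassemble directly from the underlying /dev/sdb1 and /dev/sdc1
sudo mdadm --readwrite /dev/md0   # only needed if the array still comes up read-only
cat /proc/mdstat                  # md0 should now show as active, without (read-only)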

Answer 2

Just to add: assuming you're sure the RAID set is fine and therefore shouldn't be mounted read-only, and there are no logs or mdadm options suggesting otherwise, I strongly recommend rebooting your system.

In my case, I used sudo mdadm --assemble --scan and mounted the result, only to find it was read-only. After a few attempts at fixing it, I decided to kill the mdadm process and reboot. When I logged back into the system after the reboot, the volume suddenly mounted as usual (read-write).
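
After the reboot, a quick way to confirm the fix (a sketch) is to check that the array is no longer read-only and that the filesystem was mounted read-write:

cat /proc/mdstat                      # md0 should show "active raid1", without (read-only)
findmnt -o TARGET,OPTIONS /dev/md0    # the mount options should start with rw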
