I set up two drives as RAID1 on Ubuntu 18.04 and verified that the system still boots with either drive removed, then re-syncs once the second drive is added back.
That was a few months ago... I now see the following output from df:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 7938140 0 7938140 0% /dev
tmpfs 1593780 1116 1592664 1% /run
/dev/md1 929492160 22455384 859751428 3% /
tmpfs 7968892 0 7968892 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 7968892 0 7968892 0% /sys/fs/cgroup
tmpfs 1593776 0 1593776 0% /run/user/1000
I expected to see two entries in the df output: one for /dev/md1 and another for /dev/md0.
Here is the output of fdisk -l:
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xee3b4e44
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 62500863 62498816 29.8G fd Linux raid autodetect
/dev/sda2 * 62500864 1953523711 1891022848 901.7G fd Linux raid autodetect
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe78f1647
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 62500863 62498816 29.8G fd Linux raid autodetect
/dev/sdb2 * 62500864 1953523711 1891022848 901.7G fd Linux raid autodetect
Disk /dev/md0: 29.8 GiB, 31981568000 bytes, 62464000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md1: 901.6 GiB, 968068431872 bytes, 1890758656 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
cat /proc/mdstat also lists the following:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[1] sdb2[0]
945379328 blocks super 1.2 [2/2] [UU]
bitmap: 4/8 pages [16KB], 65536KB chunk
md0 : active raid1 sda1[1] sdb1[2]
31232000 blocks super 1.2 [2/2] [UU]
unused devices: <none>
The output of mount is:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=7938140k,nr_inodes=1984535,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1593780k,mode=755)
/dev/md1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=2459)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1593776k,mode=700,uid=1000,gid=1000)
The contents of /etc/fstab are:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/md1 during installation
UUID=f10a9259-4dca-4e48-b01d-4a524ffd0daa / ext4 errors=remount-ro 0 1
# swap was on /dev/md0 during installation
UUID=8877598b-7735-420f-bd2c-0e7c30b0dd59 none swap sw 0 0
What am I missing? Is RAID1 not working?
Answer 1
blkid and fstab both indicate that /dev/md0 is swap space, so it will not appear in the output of mount or df.
swapon will print some details about the configured swap devices and files.
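For example (a sketch, not output captured from your machine), either of these commands should list /dev/md0 as an active swap area:
swapon --show      # tabular summary of active swap areas (util-linux)
cat /proc/swaps    # the same information straight from the kernel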
Answer 2
"I expected to see two entries in the df output. One for /dev/md1 and another for /dev/md0."
df shows mounted filesystems. Well, not all of them, but it certainly does not show unmounted ones.
/dev/md1 is in the output of mount (mounted on /). /dev/md0 is not. /dev/md0 is not mounted, so it does not appear in the output of df.
I don't know whether it should be mounted, or where you would want to mount it. If you mount it somewhere, it will appear in the output of df.
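One quick way to check is findmnt (part of util-linux): it reports where a device is mounted and prints nothing when it is not. On your system the first command below should report / and the second should print nothing:
findmnt /dev/md1   # mounted on /, so this prints a line
findmnt /dev/md0   # not mounted, so this prints nothing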
According to your comment, the output of sudo blkid /dev/md0 is
/dev/md0: UUID="8877598b-7735-420f-bd2c-0e7c30b0dd59" TYPE="swap"
and according to /etc/fstab:
UUID=8877598b-7735-420f-bd2c-0e7c30b0dd59 none swap sw 0 0
/dev/md0 is your swap device. There is no filesystem on it. In this state it should not be mounted and should not appear in the output of df.
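If you want to verify that the array behind the swap space is healthy even though df never mentions it, mdadm and free can confirm it (a sketch of the usual invocations, not commands tailored to your setup):
sudo mdadm --detail /dev/md0   # array state, member devices (sda1/sdb1), sync status
free -h                        # the Swap: line shows the space is available to the kernel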
In case this is not clear, let me state it explicitly: there are two separate RAID1 arrays. md1 is the larger one, built on sda2 and sdb2; md0 is the smaller one, built on sda1 and sdb1.
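You can see the pairing at a glance with lsblk. The tree below is a sketch assembled from your fdisk and mdstat output, not captured from your machine, but the shape should match:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
NAME        SIZE TYPE  MOUNTPOINT
sda       931.5G disk
├─sda1     29.8G part
│ └─md0    29.8G raid1 [SWAP]
└─sda2    901.7G part
  └─md1   901.6G raid1 /
sdb       931.5G disk
├─sdb1     29.8G part
│ └─md0    29.8G raid1 [SWAP]
└─sdb2    901.7G part
  └─md1   901.6G raid1 /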