I have a dedicated server running Ubuntu Server 18.04 (256 GB RAM, 2x 240 GB SSD). The current disk space allocation (output of df -h followed by fdisk -l) looks like this:
Filesystem Size Used Avail Use% Mounted on
udev 126G 0 126G 0% /dev
tmpfs 26G 1.7M 26G 1% /run
/dev/md2 219G 145G 64G 70% /
tmpfs 126G 12K 126G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/loop0 90M 90M 0 100% /snap/core/7917
/dev/loop1 8.7M 8.7M 0 100% /snap/canonical-livepatch/88
/dev/md1 487M 146M 312M 32% /boot
tmpfs 26G 0 26G 0% /run/user/1000
Disk /dev/loop0: 89.1 MiB, 93454336 bytes, 182528 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 8.5 MiB, 8941568 bytes, 17464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe5bc9ccf
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sda2 1050624 467808255 466757632 222.6G fd Linux raid autodetect
/dev/sda3 467808256 468854783 1046528 511M 82 Linux swap / Solaris
Disk /dev/sdb: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd0864b40
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sdb2 1050624 467808255 466757632 222.6G fd Linux raid autodetect
/dev/sdb3 467808256 468854783 1046528 511M 82 Linux swap / Solaris
Disk /dev/md1: 511 MiB, 535756800 bytes, 1046400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 222.6 GiB, 238979842048 bytes, 466757504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
I'm a bit confused, because the device names shown by df -h don't match the ones in fdisk -l. At the moment I can only access one SSD (/dev/md2); I don't understand what the second SSD is for, and whether (and how) I can use the disk space on it.
Answer 1
You are running a RAID1 setup with two mirrored drives (sda and sdb). fdisk -l shows the actual physical disks and their partitions, while df -h shows the RAID devices the system actually uses.
That is, /dev/md1 is made up of /dev/sda1 and /dev/sdb1, and both hold identical content. If one of the drives fails, the data is still safe on the other.
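If you want to confirm which partitions belong to which array, a quick way (just a sketch; the member list on your machine may differ) is to ask the kernel directly:
cat /proc/mdstat
# each mdX line names the RAID level and its member partitions,
# e.g. an "active raid1" line listing sda2 and sdb2 for a two-disk mirror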
To get more details about a RAID device, use mdadm (assuming the arrays were created with mdadm, i.e. Linux software RAID):
mdadm --detail /dev/md1
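lsblk gives the same picture as a tree, showing the physical partitions with the md devices nested under them and their mount points (a hedged suggestion; the available columns depend on your util-linux version):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# sda/sdb and their partitions are the parents; md1 and md2 appear
# beneath them with /boot and / as their mount points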
Answer 2
According to man df:
If no file name is given, the space available on all
currently mounted file systems is shown.
And man fdisk:
-l, --list
List the partition tables for the specified devices and then exit. If no devices are given,
those mentioned in /proc/partitions (if that file exists) are used.
So the difference is that df reports on mounted file systems, while fdisk -l (without arguments) shows the partitions listed in /proc/partitions.
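You can verify this on the machine above: /proc/partitions lists every block device the kernel knows about (sda, sdb, their partitions, the md arrays and the loop devices), while df only reports the ones carrying a mounted filesystem. A minimal check (a sketch; both paths are standard on Linux):
cat /proc/partitions   # raw kernel view: sda1..sda3, sdb1..sdb3, md1, md2, loop devices
df -h /boot            # df only shows mounted filesystems, here the one on /dev/md1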