I just want to mount my NVMe SSD at /mnt/ssd-high-NVMe. The mount command reports no error, but the filesystem does not show up in df:
$ sudo rm -rf /mnt/ssd-high-NVMe
$ sudo rm -rf /mnt/ssd-high-NVME
$ sudo mkdir /mnt/ssd-high-NVMe
$ sudo mkdir /mnt/ssd-high-NVME
$ ls -lh
drwxr-xr-x 2 root root 4.0K Jan 20 22:58 ssd-high-NVMe
drwxr-xr-x 2 root root 4.0K Jan 20 22:42 ssd-high-NVME
$ sudo mount /dev/nvme1n1p1 /mnt/ssd-high-NVMe
$ df -h
tmpfs 6.3G 2.3M 6.3G 1% /run
/dev/sdb3 110G 45G 59G 44% /
tmpfs 32G 95M 32G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sdb2 512M 7.8M 505M 2% /boot/efi
tmpfs 6.3G 180K 6.3G 1% /run/user/1000
$ sudo dmesg
[43391.301050] EXT4-fs (nvme1n1p1): mounted filesystem with ordered data mode. Opts: (null)
$ sudo e2fsck /dev/nvme1n1p1
e2fsck 1.45.6 (20-Mar-2020)
NVMe-SSD: clean, 22967/30531584 files, 34978829/122096384 blocks
$ sudo nvme smart-log /dev/nvme1n1p1
Smart Log for NVME device:nvme1n1p1 namespace-id:ffffffff
critical_warning : 0
temperature : 35 C
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 0%
endurance group critical warning summary: 0
data_units_read : 1,665,126
data_units_written : 2,815,185
host_read_commands : 53,190,654
host_write_commands : 83,501,433
controller_busy_time : 368
power_cycles : 27
power_on_hours : 25
unsafe_shutdowns : 11
media_errors : 0
num_err_log_entries : 0
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 35 C
Temperature Sensor 2 : 40 C
Thermal Management T1 Trans Count : 0
Thermal Management T2 Trans Count : 0
Thermal Management T1 Total Time : 0
Thermal Management T2 Total Time : 0
$ sudo vim /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda3 during installation
UUID=49b55adc-d909-470d-8a6b-87401c8ae63d / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda2 during installation
UUID=5624-9AA0 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
/dev/disk/by-uuid/6a4437ab-8812-484d-b799-4fd007593db4 /mnt/ssd-high-NVME auto rw,nosuid,nodev,relatime,uhelper=udisks2,x-gvfs-show 0 0
However, when I change the mount point to the other directory (i.e. "ssd-high-NVMe" to "ssd-high-NVME"), everything works:
$ sudo mount /dev/nvme1n1p1 /mnt/ssd-high-NVME
$ df -h
tmpfs 6.3G 2.3M 6.3G 1% /run
/dev/sdb3 110G 45G 59G 44% /
tmpfs 32G 95M 32G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sdb2 512M 7.8M 505M 2% /boot/efi
tmpfs 6.3G 180K 6.3G 1% /run/user/1000
/dev/nvme1n1p1 458G 126G 312G 29% /mnt/ssd-high-NVME <------ SUCCESS!
One thing might be important: I previously used /mnt/ssd-high-NVMe as the mount point for /dev/nvme1n1p1, but while it was mounted I did some bad things to the raw disk and corrupted it. After that, I completely reformatted the disk (I am sure the disk itself is healthy). I think my problem is related to this. But how do I fix it? What further information should I provide?
Thanks!
Additional information
$ sudo gdisk -l /dev/nvme1n1
GPT fdisk (gdisk) version 1.0.5
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: not present
***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory.
***************************************************************
Disk /dev/nvme1n1: 976773168 sectors, 465.8 GiB
Model: Samsung SSD 980 PRO 500GB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 54BF3843-FF55-41C5-8FD5-25BF87B4DEEA
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 976773134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2029 sectors (1014.5 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 976773119 465.8 GiB 8300 Linux filesystem
Answer 1
TLDR; a simple systemctl daemon-reload followed by mount -a should fix this.
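As a minimal sketch (both commands need root; daemon-reload regenerates the units that systemd derives from /etc/fstab):

```shell
# Drop stale generated units and rebuild them from the current /etc/fstab
sudo systemctl daemon-reload
# Mount everything listed in /etc/fstab; the NVMe partition should now attach
sudo mount -a
```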
There should be a systemd mount unit named mnt-ssd\x2dhigh\x2dNVMe.mount (the \x2d comes from the - in your path), which you can check with:
# systemctl status mnt-ssd\x2dhigh\x2dNVMe.mount
● mnt-ssd\x2dhigh\x2dNVMe.mount - /mnt/ssd-high-NVMe
Loaded: loaded (/etc/fstab; generated)
Active: inactive (dead) since Thu 2021-08-12 09:00:04 CEST; 49s ago
Where: /mnt/ssd-high-NVMe
What: /dev/disk/by-uuid/UUID_OF_OLD_DISK
Docs: man:fstab(5)
man:systemd-fstab-generator(8)
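If you are unsure how a path maps to a unit name, systemd-escape -p --suffix=mount PATH prints it. As a rough plain-shell approximation of the rule for simple ASCII paths (each - inside a path component becomes \x2d, each / separator becomes -):

```shell
#!/bin/sh
# Approximation of systemd path escaping for simple ASCII paths;
# the authoritative tool is: systemd-escape -p --suffix=mount "$path"
path=/mnt/ssd-high-NVMe
rel=${path#/}                                  # drop the leading slash
unit=$(printf '%s' "$rel" | sed -e 's/-/\\x2d/g' -e 's,/,-,g')
printf '%s.mount\n' "$unit"                    # -> mnt-ssd\x2dhigh\x2dNVMe.mount
```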
The important part here is What=, which will likely show the UUID of the old disk.
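To read just that field and compare it with the UUID the partition actually has now, something like this should work (a sketch; unit and device names are taken from the question above):

```shell
# What= as recorded in the (possibly stale) generated mount unit
systemctl show -p What 'mnt-ssd\x2dhigh\x2dNVMe.mount'
# UUID the partition reports right now
sudo blkid -s UUID -o value /dev/nvme1n1p1
# If the two values disagree, the unit was generated from an outdated fstab state
```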
I assume there was no reboot between detaching the old NVMe disk and attaching the new one, because the units should be regenerated on reboot.
The problem is that, for reasons unknown to me, systemd-mount seems to force the disk defined in the mount unit, even when an explicit mount DISK PATH is used.
In my case, I was initially able to mount the new disk; only after I (hot-)detached the old disk from the VM could I no longer mount anything else there. When I detached the old disk, it even automatically unmounted the new disk from the mount point.
I think this is a bug in the compatibility with manual (u)mount. systemd probably sees the old disk being removed (a disk that is still referenced in a systemd mount unit), marks the mount point as failed (or at least inactive), and does some cleanup, including making sure nothing is mounted on that path anymore, or something similar. Why mounting another disk there fails afterwards is unclear to me.