Is LVM eating my disk space, or is df lying?

Look at the output below:

bob ~ # df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       5,7G  4,0K  5,7G   1% /dev
tmpfs                      1,2G  1,5M  1,2G   1% /run
/dev/mapper/mint--vg-root  218G   66G  142G  32% /
none                       4,0K     0  4,0K   0% /sys/fs/cgroup
tmpfs                      5,7G  528M  5,2G  10% /tmp
none                       5,0M     0  5,0M   0% /run/lock
none                       5,7G   99M  5,6G   2% /run/shm
none                       100M   48K  100M   1% /run/user
tmpfs                      5,7G   44K  5,7G   1% /var/tmp
/dev/sda1                  236M  132M   93M  59% /boot

df reports 218G for the LVM partition, but the disk is 250G, which works out to 232G when recomputed in base-1024 units. So where did those 14G go? And even 218 - 66 = 152, not 142! Is another 10 GB unaccounted for as well?

Output from other utilities:

bob ~ # pvs
  PV         VG      Fmt  Attr PSize   PFree
  /dev/sda5  mint-vg lvm2 a--  232,64g    0 

bob ~ # pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               mint-vg
  PV Size               232,65 GiB / not usable 2,00 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              59557
  Free PE               0
  Allocated PE          59557
  PV UUID               3FA5KG-Dtp4-Kfyf-STAZ-K6Qe-ojkB-Tagr83

bob ~ # fdisk -l /dev/sda

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00097b2a

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758   488396799   243947521    5  Extended
/dev/sda5          501760   488396799   243947520   8e  Linux LVM

# sfdisk -l -uM

Disk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/track
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start   End    MiB    #blocks   Id  System
/dev/sda1   *     1    243    243     248832   83  Linux
/dev/sda2       244+ 238474  238231- 243947521    5  Extended
/dev/sda3         0      -      0          0    0  Empty
/dev/sda4         0      -      0          0    0  Empty
/dev/sda5       245  238474  238230  243947520   8e  Linux LVM

Disk /dev/mapper/mint--vg-root: 30369 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/mapper/mint--vg-root: unrecognized partition table type
No partitions found

Linux Mint 17.3

Update

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/mint-vg/root
  LV Name                root
  VG Name                mint-vg
  LV UUID                ew9fDY-oykM-Nekj-icXn-FQ1T-fiaC-0Jw2v6
  LV Write Access        read/write
  LV Creation host, time mint, 2016-02-18 14:52:15 +0200
  LV Status              available
  # open                 1
  LV Size                232,64 GiB
  Current LE             59557
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

About swap: initially it lived inside LVM. Then I removed it and extended the root LV with the space swap had been using (about 12G).
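
For reference, the usual sequence for that kind of change looks roughly like the following sketch. The swap LV name is a guess and this is not claimed to be the exact procedure that was used:

# swapoff -a
# lvremove mint-vg/swap_1
# lvextend -l +100%FREE /dev/mint-vg/root
# resize2fs /dev/mint-vg/root

The last step grows the ext4 filesystem to fill the enlarged LV; if it is skipped, df keeps reporting the old, smaller filesystem size.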

Update 2

# tune2fs -l /dev/mapper/mint--vg-root
tune2fs 1.42.9 (4-Feb-2014)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          0b5ecf9b-a763-4371-b4e7-01c36c47b5cc
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              14491648
Block count:              57952256
Reserved block count:     2897612
Free blocks:              40041861
Free inodes:              13997980
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1010
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Thu Feb 18 14:52:49 2016
Last mount time:          Sun Mar 13 16:49:48 2016
Last write time:          Sun Mar 13 16:49:48 2016
Mount count:              22
Maximum mount count:      -1
Last checked:             Thu Feb 18 14:52:49 2016
Check interval:           0 (<none>)
Lifetime writes:          774 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       6160636
Default directory hash:   half_md4
Directory Hash Seed:      51743315-0555-474b-8a5a-bbf470e3ca9f
Journal backup:           inode blocks

Update 3 (final)

Thanks to Jonas, the lost space was found: the ext4 filesystem was smaller than the logical volume and just needed to be grown with resize2fs.

# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/mint--vg-root  218G   65G  142G  32% /


# resize2fs /dev/mapper/mint--vg-root
resize2fs 1.42.9 (4-Feb-2014)
Filesystem at /dev/mapper/mint--vg-root is mounted on /; on-line resizing required
old_desc_blocks = 14, new_desc_blocks = 15
The filesystem on /dev/mapper/mint--vg-root is now 60986368 blocks long.

# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/mint--vg-root  229G   65G  153G  30% /

Here is a diff of the tune2fs output before and after running resize2fs:

# diff /tmp/tune2fs_before_resize2fs /tmp/tune2fs2_after_resize2fs
13,17c13,17
< Inode count:              14491648
< Block count:              57952256
< Reserved block count:     2897612
< Free blocks:              40041861
< Free inodes:              13997980
---
> Inode count:              15253504
> Block count:              60986368
> Reserved block count:     3018400
> Free blocks:              43028171
> Free inodes:              14759836
21c21
< Reserved GDT blocks:      1010
---
> Reserved GDT blocks:      1009
38c38
< Inode size:           256
---
> Inode size:             256
42c42
< First orphan inode:       6160636
---
> First orphan inode:       5904187

Answer 1

Let's do some research. I had noticed this discrepancy before, but never checked in detail what the loss is attributable to. Look at my setup for comparison: fdisk shows the following partition:

/dev/sda3       35657728 1000214527 964556800  460G 83 Linux

Since my filesystem sits inside a LUKS container there is some loss, but that should amount to only a few MiB. df shows:

Filesystem      Size  Used Avail Use% Mounted on
/dev/dm-1       453G  373G   58G  87% /

(The LUKS container is also why /dev/sda3 and /dev/dm-1 don't match; they really are the same device, just with encryption in between and no LVM. It also shows that LVM is not responsible for your loss: I see the same losses without it.)

Now let's ask the filesystem itself. Calling tune2fs -l, which prints a lot of interesting information about ext-family filesystems, we get:

root@altair ~ › tune2fs -l /dev/dm-1
tune2fs 1.42.12 (29-Aug-2014)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          0de04278-5eb0-44b1-9258-e4d7cd978768
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30146560
Block count:              120569088
Reserved block count:     6028454
Free blocks:              23349192
Free inodes:              28532579
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Oct 14 09:27:52 2015
Last mount time:          Sun Mar 13 12:25:50 2016
Last write time:          Sun Mar 13 12:25:48 2016
Mount count:              23
Maximum mount count:      -1
Last checked:             Wed Oct 14 09:27:52 2015
Check interval:           0 (<none>)
Lifetime writes:          1426 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       26747912
Default directory hash:   half_md4
Directory Hash Seed:      4723240b-9056-4f5f-8de2-d8536e35d183
Journal backup:           inode blocks

Glancing over it, the first thing that should catch your eye is the Reserved block count. Multiplying it by the Block size (also from the output) gives the difference between df's Used+Avail and its Size:

453GiB - (373GiB+58GiB) = 22 GiB
6028454*4096 Bytes = 24692547584 Bytes ~= 23 GiB

Close enough, especially considering that df rounds (repeat the calculation with df without -h and only 16 MiB of the difference between Used+Avail and Size remains unexplained). Who the reserved blocks are reserved for is also in the tune2fs output: root. This is a safety net so that non-root users cannot render the system completely unusable by filling the disk, and keeping a few percent of the disk unused also helps against fragmentation.
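
As a quick cross-check (a convenience one-liner, not part of the original reasoning; adjust the device name to your setup), the same figure can be pulled straight out of tune2fs:

# tune2fs -l /dev/dm-1 | awk -F: '/^Reserved block count/ {r=$2} /^Block size/ {b=$2} END {printf "%.1f GiB reserved for root\n", r*b/2^30}'
23.0 GiB reserved for root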

Now for the difference between the size df reports and the size of the partition. This is explained by the inodes: ext4 preallocates them, so that space cannot be used for file data. Multiply the Inode count by the Inode size and you get:

30146560*256 Bytes = 7717519360 Bytes ~= 7 GiB
453 GiB + 7 GiB = 460 GiB
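
The same kind of one-liner (again just a convenience, not from the original reasoning) shows the space taken by the preallocated inode tables:

# tune2fs -l /dev/dm-1 | awk -F: '/^Inode count/ {n=$2} /^Inode size/ {s=$2} END {printf "%.1f GiB of inode tables\n", n*s/2^30}'
7.2 GiB of inode tables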

Inodes are basically directory entries. Let's ask mkfs.ext4 for the details (from man mkfs.ext4):

-i bytes-per-inode

Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the block size of the filesystem, since in that case more inodes would be created than could ever be used. Be warned that it is not possible to change this ratio on a filesystem after it is created, so be careful deciding the correct value for this parameter. Note that resizing a filesystem changes the number of inodes to maintain this ratio.

There are different presets for different scenarios. On a file server holding lots of Linux distribution images, it makes sense to pass e.g. -T largefile or even -T largefile4. The meaning of those -T presets is defined in /etc/mke2fs.conf; in these examples and on my system:

largefile = {
    inode_ratio = 1048576
}
largefile4 = {
    inode_ratio = 4194304
}

So with -T largefile4 the number of inodes is far smaller than with the default (my default inode_ratio in /etc/mke2fs.conf is 16384). That means less space reserved for directory entries and more space for data. When you run out of inodes, you cannot create new files, and increasing the number of inodes in an existing filesystem does not appear to be possible. Therefore the default number of inodes is chosen rather conservatively, so that the average user does not run out of inodes prematurely.
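
For illustration only, this is how such a filesystem would be created; /dev/sdX1 is a placeholder for a spare partition, do not run this on a disk holding data:

# mkfs.ext4 -T largefile /dev/sdX1
# mkfs.ext4 -i 1048576 /dev/sdX1

Given the mke2fs.conf excerpt above, the two commands are equivalent: the first picks the preset, the second sets the bytes-per-inode ratio explicitly.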

I only just figured this out while looking at my own numbers; let me know whether it does (or doesn't) work out for you ☺.

Answer 2

One easy thing to check is the logical volume, which does not have to be as large as the physical volume. Use lvdisplay to see its size.
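
A compact way to compare the LV against the VG and its free space (standard lvs/vgs reporting options, shown here as an alternative to lvdisplay):

# lvs -o lv_name,vg_name,lv_size
# vgs -o vg_name,vg_size,vg_free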

If that doesn't show a difference, the usual explanation is space reserved for root, which df does not report as available to ordinary users.
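
If the root reservation turns out to be the culprit and you want to trim it, tune2fs can lower the reserved percentage on an existing ext filesystem. The device below is the one from the question and 1% is only an example value:

# tune2fs -m 1 /dev/mapper/mint--vg-root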
