ZFS reports wrong space usage

I have a backup server with ZFS (Ubuntu 16.04; 32 GB RAM, 4x6 TB HDD, raidz2). Recently I noticed a problem with the available space.

# zpool list -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool  21.6T  19.9T  1.76T         -    62%    91%  2.30x  ONLINE  -
  raidz2  21.6T  19.9T  1.76T         -    62%    91%
    sda5      -      -      -         -      -      -
    sdb5      -      -      -         -      -      -
    sdc5      -      -      -         -      -      -
    sdd5      -      -      -         -      -      -

It looks like almost all of the space is allocated, and I don't know what is using it. Here are the volume sizes:

# zfs list -o space
NAME                                  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
pool                                  425G  13.4T         0    140K              0      13.4T
pool/backup                           425G   742G         0    140K              0       742G
pool/backup/avol                      425G  69.0G         0    198K              0      69.0G
pool/backup/avol/old_dumps            425G  69.0G         0   69.0G              0          0
pool/backup/nnn                       425G   517G         0    163K              0       517G
pool/backup/nnn/cdvol                 425G  5.00G         0   5.00G              0          0
pool/backup/nnn/himvol                425G  98.3G         0   98.3G              0          0
pool/backup/nnn/irvol                 425G  33.8G         0    140K              0      33.8G
pool/backup/nnn/irvol/smavol          425G  33.8G         0   33.8G              0          0
pool/backup/nnn/menvol                425G   931M         0    931M              0          0
pool/backup/nnn/nevvol                425G  77.9G         0   77.9G              0          0
pool/backup/nnn/scovol                425G  27.4G         0   27.4G              0          0
pool/backup/nnn/vm                    425G   274G         0   16.5M              0       274G
pool/backup/nnn/vm/123                425G  1.47G         0   1.47G              0          0
pool/backup/nnn/vm/124                425G  9.23G         0   9.23G              0          0
pool/backup/nnn/vm/125                425G  13.5G         0   13.5G              0          0
pool/backup/nnn/vm/126                425G  10.5G         0   10.5G              0          0
pool/backup/nnn/vm/128                425G  16.9G         0   16.9G              0          0
pool/backup/nnn/vm/130                425G  8.96G         0   8.96G              0          0
pool/backup/nnn/vm/131                425G   147G         0    147G              0          0
pool/backup/nnn/vm/132                425G  11.3G         0   11.3G              0          0
pool/backup/nnn/vm/135                425G  39.7G         0   39.7G              0          0
pool/backup/nnn/vm/136                425G  16.0G         0   16.0G              0          0
pool/backup/old                       425G  50.5G         0    140K              0      50.5G
pool/backup/old/himvol                425G  50.5G         0   50.5G              0          0
pool/backup/telvol                    425G   105G         0    105G              0          0
pool/backup2                          425G  2.74T         0    140K              0      2.74T
pool/backup2/nnn                      425G  2.74T         0    140K              0      2.74T
pool/backup2/nnn/vm                   425G  2.74T         0    151K              0      2.74T
pool/backup2/nnn/vm/101               425G  28.0G         0   28.0G              0          0
pool/backup2/nnn/vm/103               425G  38.0G         0   38.0G              0          0
pool/backup2/nnn/vm/104               425G   333G         0    333G              0          0
pool/backup2/nnn/vm/105               425G   526M         0    526M              0          0
pool/backup2/nnn/vm/106               425G  17.1G         0   17.1G              0          0
pool/backup2/nnn/vm/107               425G  17.0G         0   17.0G              0          0
pool/backup2/nnn/vm/109               425G   235G         0    235G              0          0
pool/backup2/nnn/vm/110               425G   321G         0    321G              0          0
pool/backup2/nnn/vm/111               425G  1.11G         0   1.11G              0          0
pool/backup2/nnn/vm/112               425G  73.6G         0   73.6G              0          0
pool/backup2/nnn/vm/114               425G  1.27T         0   1.27T              0          0
pool/backup2/nnn/vm/116               425G  1.31G         0   1.31G              0          0
pool/backup2/nnn/vm/117               425G  19.9G         0   19.9G              0          0
pool/backup2/nnn/vm/119               425G  7.15G         0   7.15G              0          0
pool/backup2/nnn/vm/121               425G   178G         0    178G              0          0
pool/backup2/nnn/vm/122               425G   237G         0    237G              0          0
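
As far as I understand, zpool list counts raw space including raidz parity, while zfs list reports usable space, and dedup savings only show up at the pool level, so the two views are not directly comparable. They can at least be checked side by side:

# zpool get size,allocated,free,dedupratio pool
# zfs get used,referenced,compressratio pool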

Recently I turned off deduplication and copied all of the volumes (zfs send | zfs receive; zfs destroy) to get rid of the deduplicated data, but it is still there:

# zpool status -D
  pool: pool
 state: ONLINE
  scan: scrub in progress since Wed Jul 12 11:23:27 2017
    1 scanned out of 19.9T at 1/s, (scan is slow, no estimated time)
    0 repaired, 0.00% done
config:

    NAME        STATE     READ WRITE CKSUM
    pool        ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        sda5    ONLINE       0     0     0
        sdb5    ONLINE       0     0     0
        sdc5    ONLINE       0     0     0
        sdd5    ONLINE       0     0     0

errors: No known data errors

 dedup: DDT entries 41434395, size 978 on disk, 217 in core

bucket              allocated                       referenced          
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    25.3M   2.41T   1.95T   1.99T    25.3M   2.41T   1.95T   1.99T
     2    5.00M    469G    340G    347G    11.2M   1.03T    762G    779G
     4    7.37M    549G    438G    451G    36.9M   2.69T   2.14T   2.21T
     8    1.41M    124G   80.7G   83.5G    14.6M   1.26T    833G    862G
    16     281K   16.8G   10.7G   11.5G    5.72M    337G    219G    235G
    32    73.7K   4.57G   3.79G   3.96G    3.14M    198G    167G    174G
    64    40.5K   2.58G   2.32G   2.41G    3.25M    215G    195G    202G
   128    8.49K    358M    272M    298M    1.38M   60.2G   45.7G   50.0G
   256    3.22K    201M    171M    180M    1.10M   69.8G   59.7G   62.7G
   512    1.46K   56.1M   52.2M   56.9M    1.20M   41.1G   38.1G   42.1G
    1K      372   12.5M   10.4M   11.7M     501K   19.5G   16.3G   18.0G
    2K      169   7.41M   6.14M   6.78M     468K   20.3G   17.0G   18.8G
    4K       64   3.40M   2.69M   2.85M     358K   19.1G   15.0G   15.9G
    8K       14    316K    172K    238K     151K   3.37G   1.82G   2.52G
   16K       10   35.5K   31.5K   75.6K     206K    738M    667M   1.54G
   32K        4    102K   85.5K    105K     185K   4.71G   3.93G   4.79G
  256K        2      1K      1K   11.6K     704K    352M    352M   4.00G
 Total    39.5M   3.55T   2.81T   2.87T     106M   8.36T   6.42T   6.61T

Maybe that is the cause? Is there a way to check which data is deduplicated and remove it? What else could be consuming the disk space?
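
As far as I know, I can at least see which datasets still have dedup enabled and dump a summary of the dedup table (zdb -DD may take a long time on a pool this size):

# zfs get -r -o name,property,value dedup pool
# zdb -DD pool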

The zpool scrub is a bit strange. I started it 6 hours ago (CEST time zone) and the current status is:

  scan: scrub in progress since Wed Jul 12 15:48:20 2017
    1 scanned out of 20.0T at 1/s, (scan is slow, no estimated time)
    0 repaired, 0.00% done

The server is under heavy load (the load average shown by uptime went from 2 to 80) and iostat shows 100% disk utilization, but no processes are running apart from the ssh server.
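
If the scrub itself is causing the load, I assume it can be cancelled and restarted later:

# zpool scrub -s pool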

Update: today I have almost 1 TB of free space. Nothing was done on the server, so maybe ZFS just needs some time to clean up the old data?
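
If the ZFS version supports it, the space still being reclaimed asynchronously after a zfs destroy should be visible in the pool's freeing property:

# zpool get freeing pool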

SOLVED: the problem is gone. The dedup table is now empty and there is 6.75 TB of free space! It took ZFS about 6 days to clean everything up.

Answer 1

Run this Python script to detect and delete duplicate files:

http://code.activestate.com/recipes/362459-dupinator-detect-and-delete-duplicate-files/
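
Roughly the same idea can be sketched with standard tools; the following only prints groups of files with identical checksums (under a hypothetical mount point) for manual review instead of deleting anything:

# find /pool/backup -type f -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate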
