Why do the disk stats show many read operations when I measure NVMe sequential writes with fio using the mmap ioengine?

Here is my fio configuration and the report:

# cat fio-write.fio 
[global]
name=fio-seq-writes
filename=test
rw=write
bs=1M
direct=0
numjobs=1
[file1]
size=1G
ioengine=mmap
iodepth=1

# fio --version
fio-3.30
# fio fio-write.fio 
file1: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=mmap, iodepth=1
fio-3.30
Starting 1 process
Jobs: 1 (f=1): [W(1)][-.-%][w=373MiB/s][w=373 IOPS][eta 00m:00s]
file1: (groupid=0, jobs=1): err= 0: pid=421: Sun Nov 14 21:12:09 2021
  write: IOPS=330, BW=330MiB/s (346MB/s)(1024MiB/3102msec); 0 zone resets
    clat (usec): min=2118, max=11668, avg=2598.40, stdev=1333.02
     lat (usec): min=2171, max=11754, avg=2673.15, stdev=1339.15
    clat percentiles (usec):
     |  1.00th=[ 2114],  5.00th=[ 2147], 10.00th=[ 2147], 20.00th=[ 2147],
     | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2343],
     | 70.00th=[ 2409], 80.00th=[ 2474], 90.00th=[ 2606], 95.00th=[ 4621],
     | 99.00th=[ 9241], 99.50th=[10945], 99.90th=[11600], 99.95th=[11731],
     | 99.99th=[11731]
   bw (  KiB/s): min=122880, max=385024, per=99.76%, avg=337237.33, stdev=105105.84, samples=6
   iops        : min=  120, max=  376, avg=329.33, stdev=102.64, samples=6
  lat (msec)   : 4=94.14%, 10=4.98%, 20=0.88%
  cpu          : usr=28.25%, sys=61.08%, ctx=253, majf=262144, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=330MiB/s (346MB/s), 330MiB/s-330MiB/s (346MB/s-346MB/s), io=1024MiB (1074MB), run=3102-3102msec

Disk stats (read/write):
  nvme0n1: ios=1908/757, merge=0/0, ticks=1255/3876, in_queue=5130, util=84.41%

As you can see, the disk stats show 1908 reads and 757 writes (ios is the number of I/Os performed by all groups). This test case is sequential writes only (rw=write), so why does it show my NVMe issuing 1908 reads?

I also tried:

  • Sequential read (reads only)
Disk stats (read/write):
  nvme0n1: ios=2026/0, merge=0/0, ticks=630/0, in_queue=631, util=69.47%
  • Random read (255234 reads, 991 writes)
Disk stats (read/write):
  nvme0n1: ios=255234/991, merge=0/6, ticks=3936/1739, in_queue=5674, util=95.55%
  • Random read (259349 reads, 2 writes)
Disk stats (read/write):
  nvme0n1: ios=259349/2, merge=0/0, ticks=3453/0, in_queue=3454, util=93.49%

I also tried other ioengines such as libaio, io_uring, and psync (fio's default ioengine). With all of them, sequential and random reads issue only read operations, and sequential and random writes issue only write operations, as expected. Only mmap behaves oddly.

Answer 1

According to the mmap man page, the memory region returned by mmap for a file-backed mapping is initialized with the contents of the mapped file. Before the fio mmap engine can store data into the mapping, the kernel must first fault the existing file pages in from disk, and those page-fault reads are where the read requests come from.

https://man7.org/linux/man-pages/man2/mmap.2.html

The contents of a file mapping (as opposed to an anonymous mapping; see MAP_ANONYMOUS below) are initialized using length bytes starting at offset offset in the file (or other object) referred to by the file descriptor fd.
