S/W RAID6 and 4k sector format - slow resync

I am trying to find out why a Linux software RAID-6 resync is much slower than I expected.

I used six WDC WD40EFRX HDDs (which have 4k physical sectors) to create a RAID-6 array:

$ sudo mdadm -v -C /dev/md6 -l6 -e1 -n6 /dev/sd[a-f]

When the resync process started, I found it unexpectedly slow:

Personalities : [raid1] [raid6] [raid5] [raid4]
md6 : active raid6 sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
      15627548672 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [========>............]  resync = 44.6% (1743624744/3906887168) finish=1056.3min speed=34129K/sec

The CPU load is far from 100%:

$ top
  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
14499 root      20   0     0    0    0 S  24.3  0.0 173:26.29 md6_raid6
16789 root      20   0     0    0    0 D  21.6  0.0 162:51.05 md6_resync

The load on the HDDs is not particularly high either:

$ sudo iostat -dkx sd{a,b,c,d,e,f} 5
Linux 3.2.0-4-amd64 (mrs)       05/24/2014      _x86_64_        (4 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda            2700.46   950.79 3361.95   42.59 24260.48  3974.70    16.59     4.40    1.29    1.25    4.44   0.09  31.80
sdd            2779.61   963.48 3281.19   29.14 24254.07  3971.56    17.05     4.50    1.36    1.31    6.54   0.10  32.88
sde            2819.95   964.17 3240.85   29.26 24254.00  3974.75    17.26     4.63    1.42    1.37    6.43   0.10  33.73
sdc            2714.32   949.45 3346.47   43.88 24254.02  3974.64    16.65     4.41    1.30    1.26    4.38   0.09  31.96
sdb            2856.48  1913.76 3204.31   71.79 24254.02  7945.13    19.66     4.87    1.49    1.38    6.03   0.12  38.66
sdf            2988.96  1922.34 3071.82   63.27 24253.94  7944.90    20.54     5.34    1.70    1.57    8.06   0.13  41.73

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda            4000.40  1311.20 4308.20   70.60 33234.40  5528.80    17.70     6.39    1.46    1.42    3.75   0.11  46.88
sdd            4093.80  1367.80 4213.80   38.60 33185.60  5565.60    18.23     6.02    1.42    1.38    5.10   0.11  46.88
sde            4189.20  1353.80 4125.80   46.20 33394.40  5617.60    18.70     5.98    1.45    1.41    4.78   0.11  46.80
sdc            3970.60  1327.20 4338.20   54.00 33235.20  5525.60    17.65     5.55    1.26    1.23    3.73   0.10  43.20
sdb            4156.60  2670.40 4158.00   91.60 33322.40 11055.20    20.89     6.50    1.54    1.46    5.18   0.13  53.20
sdf            4370.20  2670.20 3937.80   93.40 33205.60 11080.80    21.97     7.51    1.86    1.73    7.43   0.15  60.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda            3808.80  1286.20 4317.00   70.20 32426.40  5427.20    17.26     6.03    1.38    1.36    2.61   0.10  42.40
sdd            3930.80  1289.20 4193.20   43.00 32465.60  5390.40    17.87     6.68    1.58    1.55    4.76   0.11  47.68
sde            4006.20  1300.20 4123.40   49.60 32394.40  5324.80    18.08     6.46    1.55    1.46    8.84   0.11  46.24
sdc            3713.20  1295.40 4412.40   60.80 32442.40  5427.20    16.93     5.65    1.26    1.23    3.33   0.09  42.00
sdb            3664.20  2595.20 4462.00  117.40 32444.00 10854.40    18.91     6.22    1.36    1.26    5.02   0.10  47.84
sdf            4050.80  2620.20 4075.00   92.60 32458.40 10858.40    20.79     8.04    1.93    1.77    8.75   0.14  59.28

Read tests suggest the hardware itself performs well:

$ sudo hdparm -T -t /dev/md6

/dev/md6:
 Timing cached reads:   12684 MB in  2.00 seconds = 6349.46 MB/sec
 Timing buffered disk reads: 1490 MB in  3.00 seconds = 496.21 MB/sec

$ sudo hdparm -T -t /dev/sda

/dev/sda:
 Timing cached reads:   12582 MB in  2.00 seconds = 6298.96 MB/sec
 Timing buffered disk reads: 438 MB in  3.01 seconds = 145.72 MB/sec
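To put the numbers above in perspective, here is a rough sanity check (plain shell arithmetic, using the figures from the `hdparm` and `/proc/mdstat` outputs above):

```shell
# Values taken from the outputs in this question:
single_disk_mbs=145          # hdparm buffered disk read on /dev/sda, MB/s
resync_kbs=34129             # resync speed reported in /proc/mdstat, K/sec

resync_mbs=$(( resync_kbs / 1024 ))
pct=$(( resync_mbs * 100 / single_disk_mbs ))
echo "resync ~= ${resync_mbs} MB/s, i.e. ~${pct}% of one disk's sequential rate"
```

So the resync runs at well under a quarter of what even a single member disk can stream sequentially, which is why neither CPU nor disk utilization is anywhere near saturated.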

So it seems to me that the only remaining concern is chunk-to-sector alignment. I would be grateful if someone could show how to verify whether the RAID chunk-to-sector alignment is correct.

I can determine the following about the RAID-6 array layout:

$ sudo mdadm --examine /dev/sda
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 17f8705f:e4cf176a:514d669c:04ae747f
           Name : mrs:6  (local to host mrs)
  Creation Time : Fri May 23 22:46:54 2014
     Raid Level : raid6
   Raid Devices : 6

 Avail Dev Size : 7813775024 (3725.90 GiB 4000.65 GB)
     Array Size : 15627548672 (14903.59 GiB 16002.61 GB)
  Used Dev Size : 7813774336 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : efb373eb:cd5cd27e:fee8f9cc:0e59ff15

    Update Time : Sat May 24 09:04:04 2014
       Checksum : 608b32d8 - correct
         Events : 6

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAAA ('A' == active, '.' == missing)
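For what it's worth, a quick alignment check can be sketched from the `mdadm --examine` output above. Since the array was created on whole disks (no partition table), the only offset that matters is the md data offset: it is reported in 512-byte sectors and must land on a 4 KiB boundary for drives with 4k physical sectors. The values below are taken from the output above:

```shell
# From "mdadm --examine /dev/sda" above:
data_offset_sectors=262144   # Data Offset, in 512-byte sectors
chunk_kib=512                # Chunk Size

offset_bytes=$(( data_offset_sectors * 512 ))
if [ $(( offset_bytes % 4096 )) -eq 0 ]; then
    echo "data offset ${offset_bytes} bytes: 4k-aligned"
else
    echo "data offset ${offset_bytes} bytes: NOT 4k-aligned"
fi

# The chunk size is also a whole multiple of 4 KiB:
chunk_remainder=$(( chunk_kib * 1024 % 4096 ))
echo "chunk remainder mod 4096: ${chunk_remainder}"
```

Both checks come out clean here (262144 sectors is 128 MiB, a multiple of 4 KiB), so misalignment does not appear to explain the slow resync.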

$ sudo mdadm --detail /dev/md6
/dev/md6:
        Version : 1.2
  Creation Time : Fri May 23 22:46:54 2014
     Raid Level : raid6
     Array Size : 15627548672 (14903.59 GiB 16002.61 GB)
  Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Sat May 24 09:04:04 2014
          State : active, resyncing
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 36% complete

           Name : mrs:6  (local to host mrs)
           UUID : 17f8705f:e4cf176a:514d669c:04ae747f
         Events : 6

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       4       8       64        4      active sync   /dev/sde
       5       8       80        5      active sync   /dev/sdf

Thanks a lot!

Answer 1

The amount of I/O bandwidth used for resync can be tuned via /proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/speed_limit_max.
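As a sketch (the numeric values are illustrative, not prescriptive; run the writes as root):

```shell
# speed_limit_min is the per-device floor in KiB/s that md tries to
# maintain even when other I/O is active; speed_limit_max is the ceiling.
cat /proc/sys/dev/raid/speed_limit_min    # default is typically 1000
cat /proc/sys/dev/raid/speed_limit_max    # default is typically 200000

# Raise the floor so the resync is allowed to run faster:
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min
```

The change takes effect immediately and can be watched in /proc/mdstat; note it is not persistent across reboots unless added to sysctl configuration.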
