I have two NICs bonded in a balance-rr configuration:
root@server:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 4a:76:c7:cc:8a:73 brd ff:ff:ff:ff:ff:ff
3: enp0s31f6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 4a:76:c7:cc:8a:73 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4a:76:c7:cc:8a:73 brd ff:ff:ff:ff:ff:ff
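As a quick sanity check (not part of the original output), the bonding driver's status file shows the active mode and the state of each slave; for balance-rr it should report "Bonding Mode: load balancing (round-robin)" and "MII Status: up" for both slaves:

root@server:~# cat /proc/net/bonding/bond0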
The bond itself works fine, and it is configured via netplan as follows:
network:
  ethernets:
    enp0s31f6:
      dhcp4: false
    enp1s0:
      dhcp4: false
  version: 2
  bonds:
    bond0:
      interfaces: [enp0s31f6, enp1s0]
      addresses: [10.0.10.10/16]
      gateway4: 10.0.0.1
      mtu: 9000
      nameservers:
        addresses: [10.0.0.1]
      parameters:
        mode: balance-rr
        mii-monitor-interval: 100
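For completeness, the config is applied and the effective mode verified roughly like this (commands assumed, not shown in the original post); the sysfs file should print "balance-rr 0" if the round-robin policy is active:

root@server:~# netplan apply
root@server:~# cat /sys/class/net/bond0/bonding/mode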
But I noticed something odd. When transferring large files over NFS from a single server (10G link), I get at most 180 MB/s, with roughly 120 MB/s going over enp0s31f6 and roughly 60 MB/s over enp1s0. If I unplug enp0s31f6, the remaining interface enp1s0 tops out at 120 MB/s.
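The per-interface figures above can be cross-checked by sampling the kernel's byte counters (a rough sketch, not part of the original measurement; rx_bytes is used since the transfer here is inbound, tx_bytes would be the analogue for the reverse direction):

root@server:~# a=$(cat /sys/class/net/enp0s31f6/statistics/rx_bytes)
root@server:~# b=$(cat /sys/class/net/enp1s0/statistics/rx_bytes)
root@server:~# sleep 10
root@server:~# echo "enp0s31f6: $(( ($(cat /sys/class/net/enp0s31f6/statistics/rx_bytes) - a) / 10 / 1048576 )) MB/s"
root@server:~# echo "enp1s0: $(( ($(cat /sys/class/net/enp1s0/statistics/rx_bytes) - b) / 10 / 1048576 )) MB/s"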
Any idea why the load seems to be distributed in a 2:1 ratio?