I am trying to get NIC bonding with balance-rr working so that three NIC ports are combined and we get 3 Gbps instead of 1 Gbps. We are doing this on two servers connected to the same switch, but we only get the speed of a single physical link.
We are using one Dell PowerConnect 5324, SW version 2.0.1.3, boot version 1.0.2.02, HW version 00.00.02. Both servers run CentOS 5.9 (Final) with the OnApp Hypervisor (CloudBoot).
Server 1 uses ports g5-g7 in port-channel 1; server 2 uses ports g9-g11 in port-channel 2.
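For reference, the balance-rr mode itself is not visible in the ifcfg files below; on CentOS 5 it is usually selected either with BONDING_OPTS in the bond's ifcfg file or in /etc/modprobe.conf. A minimal sketch, assuming the bond name onappstorebond used here:

# in /etc/sysconfig/network-scripts/ifcfg-onappstorebond
BONDING_OPTS="mode=balance-rr miimon=100"

# or, alternatively, in /etc/modprobe.conf
alias onappstorebond bonding
options bonding mode=balance-rr miimon=100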
Switch:
show interface status
Port     Type         Duplex  Speed  Neg      Flow ctrl  Link State   Back Pressure  Mdix Mode
-------- ------------ ------  -----  -------- ---------  -----------  -------------  ---------
g1 1G-Copper -- -- -- -- Down -- --
g2 1G-Copper Full 1000 Enabled Off Up Disabled Off
g3 1G-Copper -- -- -- -- Down -- --
g4 1G-Copper -- -- -- -- Down -- --
g5 1G-Copper Full 1000 Enabled Off Up Disabled Off
g6 1G-Copper Full 1000 Enabled Off Up Disabled Off
g7 1G-Copper Full 1000 Enabled Off Up Disabled On
g8 1G-Copper Full 1000 Enabled Off Up Disabled Off
g9 1G-Copper Full 1000 Enabled Off Up Disabled On
g10 1G-Copper Full 1000 Enabled Off Up Disabled On
g11 1G-Copper Full 1000 Enabled Off Up Disabled Off
g12 1G-Copper Full 1000 Enabled Off Up Disabled On
g13 1G-Copper -- -- -- -- Down -- --
g14 1G-Copper -- -- -- -- Down -- --
g15 1G-Copper -- -- -- -- Down -- --
g16 1G-Copper -- -- -- -- Down -- --
g17 1G-Copper -- -- -- -- Down -- --
g18 1G-Copper -- -- -- -- Down -- --
g19 1G-Copper -- -- -- -- Down -- --
g20 1G-Copper -- -- -- -- Down -- --
g21 1G-Combo-C -- -- -- -- Down -- --
g22 1G-Combo-C -- -- -- -- Down -- --
g23 1G-Combo-C -- -- -- -- Down -- --
g24 1G-Combo-C Full 100 Enabled Off Up Disabled On
Ch       Type     Duplex  Speed  Neg      Flow control  Link State
-------- -------  ------  -----  -------- ------------  -----------
ch1 1G Full 1000 Enabled Off Up
ch2 1G Full 1000 Enabled Off Up
ch3 -- -- -- -- -- Not Present
ch4 -- -- -- -- -- Not Present
ch5 -- -- -- -- -- Not Present
ch6 -- -- -- -- -- Not Present
ch7 -- -- -- -- -- Not Present
ch8 -- -- -- -- -- Not Present
Server 1:
cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=00:1b:21:ac:d5:55
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-eth4
DEVICE=eth4
HWADDR=68:05:ca:18:28:ae
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-eth5
DEVICE=eth5
HWADDR=68:05:ca:18:28:af
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond
DEVICE=onappstorebond
IPADDR=10.200.52.1
NETMASK=255.255.0.0
GATEWAY=10.200.2.254
NETWORK=10.200.0.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
cat /proc/net/bonding/onappstorebond
Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:ac:d5:55
Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:28:ae
Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:28:af
Server 2:
cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=00:1b:21:ac:d5:a7
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-eth4
DEVICE=eth4
HWADDR=68:05:ca:18:30:30
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-eth5
DEVICE=eth5
HWADDR=68:05:ca:18:30:31
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond
DEVICE=onappstorebond
IPADDR=10.200.53.1
NETMASK=255.255.0.0
GATEWAY=10.200.3.254
NETWORK=10.200.0.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
cat /proc/net/bonding/onappstorebond
Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:ac:d5:a7
Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:30:30
Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:30:31
Here are the iperf results:
------------------------------------------------------------
Client connecting to 10.200.52.1, TCP port 5001
TCP window size: 27.7 KByte (default)
------------------------------------------------------------
[ 3] local 10.200.3.254 port 53766 connected with 10.200.52.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 950 MBytes 794 Mbits/sec
Answer 1
Incoming load balancing from the switch to your system is controlled by the switch.
You are probably getting 3 Gbps of out-of-order TCP on transmit, but only 1 Gbps on receive, because the switch only sends traffic down one slave.
And you are not getting the full 1 Gbps either, because balance-rr usually produces out-of-order TCP traffic, so TCP works overtime to reorder your iperf stream.
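A quick way to see whether that reordering is actually costing you throughput is to watch the receiver's TCP counters while iperf runs; a minimal sketch (the counter wording varies between kernel versions):

# on the receiving server, before and after the test
netstat -s | egrep -i 'reorder|retrans'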
In my experience, reliably load-balancing a single TCP stream is practically impossible.
A properly configured bond can, under the right conditions, give you the aggregate bandwidth of the slaves, but the maximum throughput of any single stream is capped at the speed of one slave.
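If aggregate throughput is what you are after, testing with several parallel TCP streams is more representative than a single iperf stream; a sketch using the same endpoints as the test above:

# on server 1 (receiver)
iperf -s

# on server 2 (sender): three parallel streams for 30 seconds
iperf -c 10.200.52.1 -P 3 -t 30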
Personally, I would use mode 2 (with EtherChannel on the switch) or mode 4 (with LACP on the switch).
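As an illustration only (not taken from the question), a mode 4 setup would look roughly like the sketch below: 802.3ad on the Linux side plus a matching LACP channel-group on the switch. The PowerConnect syntax shown is an assumption and may differ by firmware version.

# Linux side, in /etc/sysconfig/network-scripts/ifcfg-onappstorebond:
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=slow"

# Switch side (PowerConnect 53xx-style CLI, shown with prompts; verify the
# exact syntax against your firmware):
console(config)# interface range ethernet g(5-7)
console(config-if)# channel-group 1 mode auto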
If you need more than 1 Gbps for a single stream, you need faster NICs.