Bonding driver on CentOS 5.5: balance-rr mode does not work

I created a bonding interface on CentOS 5.5 in balance-rr mode, with eth6 and eth7 as slaves. Both eth6 and eth7 link up fine. The configuration files are as follows:

[root@50:B3:42:00:00:A3 network-scripts]# cat ifcfg-bond1 
DEVICE=bond1
IPADDR=172.16.252.225
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
MTU=9000
ONBOOT=yes
BOOTPROTO=none
USERCTL=on
BONDING_OPTS="mode=balance-rr miimon=100"

[root@50:B3:42:00:00:A3 network-scripts]# cat ifcfg-eth6
DEVICE=eth6
USERCTL=no
ONBOOT=yes
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
[root@50:B3:42:00:00:A3 network-scripts]# cat ifcfg-eth7
DEVICE=eth7
USERCTL=no
ONBOOT=yes
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
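
The bonding module itself is loaded through the usual alias in /etc/modprobe.conf; a minimal sketch of that entry (mode and miimon are passed via BONDING_OPTS above, so no options line should be needed):

# /etc/modprobe.conf -- tell the kernel which module backs the bond1 device
alias bond1 bonding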

The bonding interface is created successfully, and the parameters in sysfs look correct:

[root@50:B3:42:00:00:A3 network-scripts]# cat /sys/class/net/bond1/bonding/slaves 
eth6 eth7
[root@50:B3:42:00:00:A3 network-scripts]# cat /sys/class/net/bond1/bonding/mode 
balance-rr 0
[root@50:B3:42:00:00:A3 network-scripts]# cat /sys/class/net/bond1/bonding/mii
miimon      mii_status  
[root@50:B3:42:00:00:A3 network-scripts]# cat /sys/class/net/bond1/bonding/miimon 
100

[root@50:B3:42:00:00:A3 network-scripts]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.4.0 (October 7, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth6
MII Status: up
Link Failure Count: 0
Permanent HW addr: 50:b3:42:00:00:74

Slave Interface: eth7
MII Status: up
Link Failure Count: 0
Permanent HW addr: 50:b3:42:00:00:75
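
For reference, the same bonding parameters can also be changed at runtime through sysfs; a minimal sketch (the driver refuses a mode change while slaves are attached, so they have to be released first):

# Take the bond down and release the slaves before changing the mode.
ifconfig bond1 down
echo -eth6 > /sys/class/net/bond1/bonding/slaves
echo -eth7 > /sys/class/net/bond1/bonding/slaves

# Set the mode and MII polling interval, then re-enslave and bring it up.
echo balance-rr > /sys/class/net/bond1/bonding/mode
echo 100 > /sys/class/net/bond1/bonding/miimon
echo +eth6 > /sys/class/net/bond1/bonding/slaves
echo +eth7 > /sys/class/net/bond1/bonding/slaves
ifconfig bond1 up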

The bond interface answers pings and the link state is normal:

# ping 172.16.252.225 
PING 172.16.252.225 (172.16.252.225) 56(84) bytes of data.
64 bytes from 172.16.252.225: icmp_seq=1 ttl=64 time=1.88 ms
64 bytes from 172.16.252.225: icmp_seq=2 ttl=64 time=0.122 ms
64 bytes from 172.16.252.225: icmp_seq=3 ttl=64 time=0.112 ms
64 bytes from 172.16.252.225: icmp_seq=4 ttl=64 time=0.110 ms
64 bytes from 172.16.252.225: icmp_seq=5 ttl=64 time=0.117 ms

I then used IOmeter to generate disk read/write traffic through the bonding interface and checked how the traffic was distributed. Since the bond is in balance-rr mode, I analyzed the traffic with:

sar -n DEV 2 100
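
(As a cross-check on the sar figures, the raw per-interface byte counters can also be sampled straight from /proc/net/dev; a minimal sketch:)

# Sample the RX/TX byte counters of the two slaves every 2 seconds;
# with balance-rr the counters of eth6 and eth7 should grow at similar rates.
watch -n 2 "grep -E 'eth6|eth7' /proc/net/dev"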

But the result was not what I expected:

# sar -n DEV 2 100
Linux 2.6.18-194.17.1.el5 (50:B3:42:00:00:A3)   04/28/2012

06:32:32 PM     IFACE   rxpck/s   txpck/s   rxbyt/s   txbyt/s   rxcmp/s   txcmp/s  rxmcst/s
06:32:34 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
06:32:34 PM      eth0    318.78   4316.24  19691.88 439105.58      0.00      0.00      0.00
06:32:34 PM    inner0      0.00      0.51      0.00     47.72      0.00      0.00      0.00
06:32:34 PM      eth5      1.02      0.00     93.40      0.00      0.00      0.00      0.00
06:32:34 PM      eth6   4499.49      0.00 2631274.62      0.00      0.00      0.00      0.00
06:32:34 PM      eth7    236.55      0.00  14350.76      0.00      0.00      0.00      0.00
06:32:34 PM      eth8      0.00      0.00      0.00      0.00      0.00      0.00      0.00
06:32:34 PM      sit0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
06:32:34 PM     bond0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
06:32:34 PM     bond1   4736.04      0.00 2645625.38      0.00      0.00      0.00      0.00

As you can see, the traffic is not balanced: almost all of it arrives on eth6. Maybe some parameter is set incorrectly, or there is some other mistake. Can you help me? Thanks a lot!

Machine and system information:

Linux 2.6.18-194.17.1.el5 x86_64  GNU/Linux

Bonding driver version:

 v3.4.0 (October 7, 2008)

NIC:

Intel 82574L 1000Mb/s

Switch:

H3C S5800

#
interface GigabitEthernet1/0/7
 port link-aggregation group 1
#
interface GigabitEthernet1/0/9
 port link-aggregation group 1
#
interface GigabitEthernet1/0/12
 port link-aggregation group 1
#
interface GigabitEthernet1/0/22
 port link-aggregation group

My machine is connected to these four ports on the switch, and I set the link-type of the corresponding switch ports to access.
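
A sketch of that per-port setting (the port number is illustrative; the same was applied to each member port):

interface GigabitEthernet 1/0/7
 port link-type access
 quit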

Answer 1

You did not show the switch configuration. balance-rr mode generally requires an EtherChannel (Cisco) or trunk-group configuration on the switch to work properly. Have you set up such a port group on your H3C S5800 switch?

Note from the bonding driver documentation:

The balance-rr, balance-xor and broadcast modes generally
require that the switch have the appropriate ports grouped together.
The nomenclature for such a group differs between switches, it may be
called an "etherchannel" (as in the Cisco example, above), a "trunk
group" or some other similar variation.
