Intel NICs on a Linux server - limited performance despite bonded ports

I took over a Debian 7 server with Intel NICs whose ports are bonded together for load balancing. This is the hardware:

lspci -vvv | egrep -i 'network|ethernet'
04:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
07:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
07:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

First of all, what confused me is that four entries show up and the system lists eth0 - eth3 (four ports), even though the NIC only has two ports according to its specs. However, only eth2 and eth3 are actually up and running, so two ports are in use:

ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN mode DEFAULT qlen 1000
    link/ether 00:25:90:19:5c:e4 brd ff:ff:ff:ff:ff:ff
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN mode DEFAULT qlen 1000
    link/ether 00:25:90:19:5c:e7 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:19:5c:e6 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT qlen 1000
    link/ether 00:25:90:19:5c:e5 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 00:25:90:19:5c:e6 brd ff:ff:ff:ff:ff:ff
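
To double-check which of the two dual-port cards each interface actually belongs to, the PCI address behind each interface can be looked up (a quick sketch; the interface names are taken from the output above):

# Print the PCI bus address behind each interface; the addresses should
# match the four 82576 entries shown by lspci above.
for i in eth0 eth1 eth2 eth3; do
    echo "$i -> $(basename "$(readlink -f /sys/class/net/$i/device)")"
done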

The problem is that I am getting lower speeds than expected. When running two iperf instances (one per port), I only get 942 Mbit/s in total, i.e. 471 Mbit/s per port. I expected more, since each port should be able to do 1 Gbps! Why - is the bonding not configured for maximum performance?

[  3] local xx.xxx.xxx.xxx port 60868 connected with xx.xxx.xxx.xxx port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-180.0 sec  9.87 GBytes   471 Mbits/sec
[  3] local xx.xxx.xxx.xxx port 49363 connected with xx.xxx.xxx.xxx port 5002
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-180.0 sec  9.87 GBytes   471 Mbits/sec
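
For reference, the two parallel runs above correspond roughly to invocations like the following (a sketch; the ports 5001/5002 and the 180-second duration are taken from the output, the server address is a placeholder):

# on the remote end: one iperf listener per port
iperf -s -p 5001 &
iperf -s -p 5002 &
# on this server: two parallel 180-second TCP streams
iperf -c xx.xxx.xxx.xxx -p 5001 -t 180 &
iperf -c xx.xxx.xxx.xxx -p 5002 -t 180 &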

The bonding configuration in /etc/network/interfaces:

auto bond0
iface bond0 inet static
    address xx.xxx.xxx.x
    netmask 255.255.255.0
    network xx.xxx.xxx.x
    broadcast xx.xxx.xxx.xxx
    gateway xx.xxx.xxx.x
    up /sbin/ifenslave bond0 eth0 eth1 eth2 eth3
    down /sbin/ifenslave -d bond0 eth0 eth1 eth2 eth3

The configured bonding mode is:

cat /proc/net/bonding/bond0

 Bonding Mode: transmit load balancing

Output of ifconfig:

bond0     Link encap:Ethernet  HWaddr 00:25:90:19:5c:e6  
          inet addr:xx.xxx.xxx.9  Bcast:xx.xxx.xxx.255  Mask:255.255.255.0
          inet6 addr: fe80::225:90ff:fe19:5ce6/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:19136117104 errors:30 dropped:232491338 overruns:0 frame:15
          TX packets:19689527247 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:20530968684525 (18.6 TiB)  TX bytes:17678982525347 (16.0 TiB)

eth0      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e4  
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:235903464 errors:0 dropped:0 overruns:0 frame:0
          TX packets:153535554 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:202899148983 (188.9 GiB)  TX bytes:173442571769 (161.5 GiB)
          Memory:fafe0000-fb000000 

eth1      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e7  
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3295412 errors:0 dropped:3276992 overruns:0 frame:0
          TX packets:152777329 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:213880307 (203.9 MiB)  TX bytes:172760941087 (160.8 GiB)
          Memory:faf60000-faf80000 

eth2      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e6  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:18667703388 errors:30 dropped:37 overruns:0 frame:15
          TX packets:9704053069 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:20314102256898 (18.4 TiB)  TX bytes:8672061985928 (7.8 TiB)
          Memory:faee0000-faf00000 

eth3      Link encap:Ethernet  HWaddr 00:25:90:19:5c:e5  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:229214840 errors:0 dropped:229214309 overruns:0 frame:0
          TX packets:9679161295 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13753398337 (12.8 GiB)  TX bytes:8660717026563 (7.8 TiB)
          Memory:fae60000-fae80000 

Edit: I found the answer, thanks to the points below. The system was running in bond mode 5 (TLB); to get double the speed, it has to run in bond mode 4 (IEEE 802.3ad dynamic link aggregation). Thanks!
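
For anyone hitting the same issue: a mode-4 stanza in /etc/network/interfaces could look roughly like this (a sketch with placeholder addresses, using the bond-* options provided by Debian's ifenslave package; the switch ports must also be configured as an LACP link aggregation group):

auto bond0
iface bond0 inet static
    address xx.xxx.xxx.x
    netmask 255.255.255.0
    gateway xx.xxx.xxx.x
    # only the two ports that are actually cabled to the switch
    bond-slaves eth2 eth3
    # IEEE 802.3ad dynamic link aggregation
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    # hash on IP and port so separate TCP streams can be spread over both links
    bond-xmit-hash-policy layer3+4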

Answer 1

If you only know about two ports, and only two ports are connected, then you should:

  1. Figure out what is going on with the other two ports you currently don't see. They may be integrated on the server's motherboard, or even be connected to something you don't expect (see the command sketch after this answer).
  2. In your software network configuration, only bond together devices that are physically connected to the same switch.

Once you have dealt with these points, you will be in a much better position to find the information you need to identify the performance problem.
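
For step 1, a quick way to match the logical interfaces to the physical ports is to blink the port LEDs and check the link state of each one (a sketch; interface names taken from the question):

# blink the LED of eth0 for 10 seconds to locate the physical port
ethtool -p eth0 10
# show negotiated speed and link state
ethtool eth0 | egrep 'Speed|Link detected'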
