I have an Intel gigabit NIC installed, and it shows up as:
[root@mail ~]# ethtool eth0
Settings for eth0:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
Link detected: yes
[root@mail ~]#
Because of this I am getting errors:
[root@mail ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:07:E9:0A:75:A5
inet addr:78.158.192.29 Bcast:78.158.192.127 Mask:255.255.255.128
inet6 addr: fe80::207:e9ff:fe0a:75a5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:29351806494030 dropped:4891967749005 overruns:0 frame:19567870996020
TX packets:0 errors:9783935498010 dropped:0 overruns:0 carrier:14675903247015
collisions:4891967749005 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Base address:0xb000 Memory:ff700000-ff720000
Other NICs show up as "MII" and work fine. Is it possible to change the port type from FIBRE to MII? ethtool cannot change it.
Thanks
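To be clear, by "ethtool cannot change it" I mean that the port selector of ethtool -s does not take effect here; what I tried looks roughly like this (mii and tp are the standard copper-style selectors):
# attempt to switch the reported port type away from FIBRE
# (standard "ethtool -s" selectors; they do not take effect on this card)
ethtool -s eth0 port mii
ethtool -s eth0 port tp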
Answer 1
You could try posting your question to the e1000 developers mailing list, which is actively monitored by the Intel NIC driver developers. Be sure to include your OS distribution, the output of "ethtool -i eth0", and the output of lspci.
You can find the mailing list information for the Intel NIC drivers on SourceForge: https://lists.sourceforge.net/lists/listinfo/e1000-devel
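Concretely, the information they will ask for can be collected roughly like this (eth0 as in your output; adjust the interface name if yours differs):
# information to include in the mailing-list post
ethtool -i eth0       # driver name, driver version, firmware version
lspci                 # PCI device listing, including the NIC
cat /etc/*release     # OS distribution and version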
Answer 2
When ethtool is not sure what to say, it just says something. In other words, it lies.
Update your kernel so that it includes the latest driver for your NIC. That way ethtool will know more and do the right thing.
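A quick way to see what you are running before and after the update, assuming the e1000 driver family mentioned in answer 1 (adjust the module and interface names if yours differ):
uname -r                          # running kernel version
ethtool -i eth0                   # driver and version actually bound to the NIC
modinfo e1000 | grep -i version   # version of the in-tree e1000 module, if that is the driver in use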
Answer 3
I doubt I will get any answers after all this time, but I just want to say "me too". I ran into the same problem as the OP with my 2009 dual-port Ethernet card, an 8086:1010 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper), on Ubuntu 22.04.2 LTS with the e1000 driver:
# ethtool eth1
Settings for eth1:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Auto-negotiation: on
Port: FIBRE
PHYAD: 0
Transceiver: internal
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes
# ethtool eth2
Settings for eth2:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Auto-negotiation: on
Port: FIBRE
PHYAD: 0
Transceiver: internal
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes
# ifconfig eth1
eth1: flags=6147<UP,BROADCAST,SLAVE,MULTICAST> mtu 1500
ether MAC_BOND0 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# ifconfig eth2
eth2: flags=6147<UP,BROADCAST,SLAVE,MULTICAST> mtu 1500
ether MAC_BOND0 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I have bond0 configured for LACP (the equivalent iproute2 commands are sketched after the output below):
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.15.0-69-generic
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 100
Down Delay (ms): 100
Peer Notification Delay (ms): 0
802.3ad info
LACP active: on
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: MAC_BOND0
bond bond0 has no active aggregator
Slave Interface: eth1
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: MAC_ETH1
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: MAC_BOND0
port key: 0
port priority: 255
port number: 1
port state: 71
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1
Slave Interface: eth2
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: MAC_ETH2
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: MAC_BOND0
port key: 0
port priority: 255
port number: 2
port state: 71
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1
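For completeness, stripped of the distro's configuration machinery, the bond above is equivalent to roughly these iproute2 commands (parameters as in the output; addressing omitted, so this is a sketch rather than my exact setup):
# recreate bond0 with the parameters shown above
ip link add bond0 type bond mode 802.3ad lacp_rate fast miimon 100 \
    xmit_hash_policy layer2+3 updelay 100 downdelay 100
ip link set eth1 down && ip link set eth1 master bond0   # slaves must be down to enslave
ip link set eth2 down && ip link set eth2 master bond0
ip link set bond0 up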
Naturally bond0 is down, since both slave interfaces have MII Status: down. dmesg shows:
# dmesg | grep -E "bond0|eth[1|2]"
[ 42.999281] e1000 0000:01:0a.0 eth1: (PCI:33MHz:32-bit) MAC_ETH1
[ 42.999292] e1000 0000:01:0a.0 eth1: Intel(R) PRO/1000 Network Connection
[ 43.323358] e1000 0000:01:0a.1 eth2: (PCI:33MHz:32-bit) MAC_ETH2
[ 43.323366] e1000 0000:01:0a.1 eth2: Intel(R) PRO/1000 Network Connection
[ 65.617020] bonding: bond0 is being created...
[ 65.787883] 8021q: adding VLAN 0 to HW filter on device eth1
[ 67.790638] 8021q: adding VLAN 0 to HW filter on device eth2
[ 70.094511] 8021q: adding VLAN 0 to HW filter on device bond0
[ 70.558364] 8021q: adding VLAN 0 to HW filter on device eth1
[ 70.558675] bond0: (slave eth1): Enslaving as a backup interface with a down link
[ 70.560050] 8021q: adding VLAN 0 to HW filter on device eth2
[ 70.560354] bond0: (slave eth2): Enslaving as a backup interface with a down link
So eth1 and eth2 are both UP and recognized, and ethtool reports Link detected: yes, yet their links are DOWN. I get the same puzzling FIBRE port type from ethtool (lshw reports capabilities: pm pcix msi cap_list rom ethernet physical fibre 1000bt-fd autonegotiation). This is strange, and I suspect a hardware or firmware problem. Any ideas are welcome.
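In case it helps anyone comparing notes, the kernel's own view of the link can also be read from sysfs, independently of ethtool (interface names as above; carrier is only readable while the interface is administratively up):
# kernel-side link state, bypassing ethtool
cat /sys/class/net/eth1/carrier     # 1 = carrier detected, 0 = no carrier
cat /sys/class/net/eth1/operstate   # up / down / lowerlayerdown
ip -br link                         # one-line state summary for all interfaces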
PS: this is not a problem with the switch or the switch ports, nor with the cables, which have already been tested.