How do I set up a bond0 interface using 2 eth devices in CentOS 7 on VirtualBox?

I want to set up a CentOS 7.x VM in VirtualBox so that I can experiment with bonded interfaces. How do I set up this VM so that it has the following interfaces:

  • eth1 (private network - 192.168.56.101)
  • eth2 (slaved to bond0)
  • eth3 (slaved to bond0)
  • bond0 (using LACP)

Using Vagrant to simplify the setup would be helpful, so that it's easier to reproduce.

Note: I want to do the setup by hand, so please show an example where NetworkManager is disabled.

Answer 1

Setting up Vagrant

To start off, you can use the following Vagrantfile to build your VM:

$ cat Vagrantfile
Vagrant.configure("2") do |config|

  config.vm.box = "centos/7"
  config.vm.hostname="box-101"
  config.ssh.forward_x11 = true

  config.vm.network "private_network", ip: "192.168.56.101"
  config.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)", auto_config: false
  config.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)", auto_config: false

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end

  config.vm.provision "shell", inline: <<-SHELL
    yum install -y git vim socat tcpdump wget sysstat
    yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  SHELL
end

Note: The NIC I'm using for public_network, bridge: "en0: Wi-Fi (Wireless)", is my Macbook's device. If you're doing this anywhere else, you'll need to change it to an appropriate NIC on the host system where Vagrant/VirtualBox is running.
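
If you're not sure what to put in the bridge: option, VirtualBox can list the host NICs that are available for bridging (names vary by host OS); the Name: values it prints are what goes into bridge::

$ VBoxManage list bridgedifs | grep '^Name'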

The file above results in 3 NICs being created when the VM is brought up. To start the VM and SSH into it:

$ vagrant up
$ vagrant ssh
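
If you want to double-check from the host side that VirtualBox attached all of the extra NICs, something along these lines works (the "<vm name>" placeholder is whatever name Vagrant generated for your box):

$ VBoxManage list runningvms
$ VBoxManage showvminfo "<vm name>" | grep -i '^NIC'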

Initial network setup

If we take a look at the resulting network, we'll see the following:

$ ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c0:42:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85127sec preferred_lft 85127sec
    inet6 fe80::5054:ff:fec0:42d5/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ce:88:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.101/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fece:8839/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
    inet6 fe80::df68:9ee2:4b5:ad5f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:59:b0:69 brd ff:ff:ff:ff:ff:ff

And the corresponding routes:

$ ip r
default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101 metric 102

Disabling NetworkManager

For this VM we're going to disable NetworkManager so that we can configure the bond + slave interfaces by hand.

$ for i in NetworkManager-dispatcher NetworkManager NetworkManager-wait-online; do
    systemctl disable $i && systemctl stop $i
  done

Confirm that NM is now disabled:

$ systemctl list-unit-files |grep NetworkManager
NetworkManager-dispatcher.service             disabled
NetworkManager-wait-online.service            disabled
NetworkManager.service                        disabled
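
With NetworkManager out of the picture, the legacy network service (from the initscripts package) is what brings the interfaces up at boot. It's typically enabled already on the centos/7 box, but it doesn't hurt to confirm:

$ systemctl enable network
$ systemctl status network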

Setting up the bonded interface

First we'll construct 3 files: 1 for the bond0 interface, and 1 each for the 2 interfaces that we'll be using as slaves (eth2 and eth3).

ifcfg-bond0

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.232
PREFIX=24
GATEWAY=192.168.1.2
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"

Note: mode=4 is a.k.a. 802.3ad (LACP). miimon=100 is a 100 ms link check interval, and lacp_rate=1 requests fast LACPDU transmission from the partner. You can see all of the parameters the bonding module accepts with modinfo bonding.
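
The network scripts should take care of loading the bonding module when they bring bond0 up, but if you want to inspect the relevant module parameters ahead of time you can load and query it by hand:

$ modprobe bonding
$ modinfo bonding | grep -E '^parm:.*(mode|miimon|lacp_rate)'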

ifcfg-eth2

$ cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
MASTER=bond0
SLAVE=yes

ifcfg-eth3

$ cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
MASTER=bond0
SLAVE=yes

Note: Above, I've statically assigned the bond0 interface the IP address 192.168.1.232 and the gateway 192.168.1.2. You'll need to change these to whatever is appropriate for your situation.
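
As an aside, if the segment your bonded NICs are bridged onto hands out addresses over DHCP, you can let the bond pick up its address that way instead of assigning it statically. A minimal sketch of that variant (same bonding options, just no static addressing):

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=dhcp
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"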

Bringing up the interfaces

At this point, the easiest way to bring the network up is to restart the network service:

$ systemctl restart network
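
If you'd rather not bounce the whole network service, the initscripts helpers can bring up just the pieces involved (ifup on the bond master should also bring up its configured slaves), though as noted above a full restart of the network service is the simplest path:

$ ifdown eth2; ifdown eth3
$ ifup bond0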

If we take a look at the interfaces and routes:

$ ip a l
..
..
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.232/24 brd 192.168.1.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fed7:c2ec/64 scope link
       valid_lft forever preferred_lft forever

$ ip r
default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003
169.254.0.0/16 dev bond0 scope link metric 1006
192.168.1.0/24 dev bond0 proto kernel scope link src 192.168.1.232
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101

Bonding details

We can also look at the bonding interface's device entry to get more details about the state of the interface:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 08:00:27:d7:c2:ec
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 1
    Actor Key: 9
    Partner Key: 1
    Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:d7:c2:ec
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 08:00:27:d7:c2:ec
    port key: 9
    port priority: 255
    port number: 1
    port state: 207
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 3

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:59:b0:69
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 08:00:27:d7:c2:ec
    port key: 9
    port priority: 255
    port number: 2
    port state: 199
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 3
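
The same state is also exposed under sysfs, which is handy for quick scripted checks of the bonding mode and the enslaved devices:

$ cat /sys/class/net/bond0/bonding/mode
802.3ad 4
$ cat /sys/class/net/bond0/bonding/slaves
eth2 eth3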

Verifying external connectivity

Below you can see ping output from another box on my network, pinging the bond0 IP address. Once we restart the network service, we can see it become reachable:

$ ping 192.168.1.232
From 192.168.1.10 icmp_seq=7414 Destination Host Unreachable
From 192.168.1.10 icmp_seq=7415 Destination Host Unreachable
64 bytes from 192.168.1.232: icmp_seq=7416 ttl=64 time=886 ms
64 bytes from 192.168.1.232: icmp_seq=7417 ttl=64 time=3.58 ms
64 bytes from 192.168.1.232: icmp_seq=7418 ttl=64 time=3.52 ms
64 bytes from 192.168.1.232: icmp_seq=7419 ttl=64 time=3.46 ms
64 bytes from 192.168.1.232: icmp_seq=7420 ttl=64 time=3.15 ms
64 bytes from 192.168.1.232: icmp_seq=7421 ttl=64 time=3.50 ms
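
You can also watch the LACP negotiation itself from inside the VM. LACPDUs use the slow-protocols ethertype (0x8809), so tcpdump (installed by the provisioning step above) can capture them on one of the slaves:

$ tcpdump -nn -e -i eth2 ether proto 0x8809

Whether you see replies from a partner depends on what sits on the other end of the bridged NICs; in this VirtualBox setup there is no real LACP partner, which is why the partner MAC in /proc/net/bonding/bond0 above is all zeros.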

Reboot tip

On CentOS 7.x there appears to be a bug/issue with the bond0 interface coming up properly during boot. The workaround is to add the following to /etc/rc.d/rc.local:

 $ echo "ifup bond0" >> /etc/rc.d/rc.local
 $ chmod +x /etc/rc.d/rc.local

This will guarantee that the bond0 interface is brought up properly during boot.
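
On CentOS 7 the rc.local file is run by the rc-local systemd unit, which only triggers when /etc/rc.d/rc.local is executable (hence the chmod above). After a reboot you can confirm the workaround ran with:

$ systemctl status rc-local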
