
Analyzing network card binding mode


There are currently seven network card bonding modes, numbered 0 through 6 (mode=0 through mode=6).


There are three commonly used ones:

mode=0: load-balancing mode with automatic failover, but it requires support and configuration on the switch.

mode=1: active-backup mode; if one link goes down, another link automatically takes over.

mode=6: load-balancing mode with automatic failover, requiring no switch support or configuration.

Note:

Note that to achieve load balancing in mode 0, setting options bond0 miimon=100 mode=0 alone is not enough; the switch that the NICs connect to must also be specially configured (the two ports must be aggregated), because the two bonded NICs use the same MAC address. Analyzing this from first principles (with the bond running in mode 0):

In mode 0, all NICs bound to the bond have their MAC addresses changed to the same value. If these NICs connect to the same switch, the switch's MAC address table will then contain multiple ports for one MAC address. To which port should the switch forward a frame destined for that address? Normally a MAC address is globally unique, and one MAC address mapping to multiple ports is bound to confuse the switch. Therefore, when a mode 0 bond connects to a single switch, the switch ports must be aggregated (Cisco calls this EtherChannel, Foundry calls it a port group), because after aggregation the switch likewise treats the bundled ports as a single MAC address. An alternative solution is to connect the two NICs to different switches.
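For reference, below is a minimal sketch of the switch-side aggregation mentioned above, in Cisco IOS syntax; the port names and channel-group number are assumptions for illustration, so consult your switch's documentation for the real values.

! Bundle the two switch ports facing the bonded NICs into one static EtherChannel
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on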

In mode 6 no switch configuration is needed, because the two bonded NICs use different MAC addresses.

Description of the seven bond modes:

mode=0, i.e. balance-rr (round-robin policy)

Characteristics: packets are transmitted in strict sequence (the first packet goes out eth0, the next goes out eth1, and so on, cycling until the last packet is sent). This mode provides load balancing and fault tolerance. However, if the packets of one connection or session leave through different interfaces and traverse different links, they are likely to arrive out of order at the client; out-of-order packets trigger retransmissions, so network throughput drops.

mode=1, i.e. active-backup (active/backup policy)

Characteristics: only one device is active at a time; when it fails, the backup immediately takes over as the primary device. Only the bond's MAC address is visible externally, which avoids confusing the switch. This mode provides fault tolerance only. Its advantage is high availability of the network connection, but resource utilization is low: only one interface works at a time, so with N network interfaces the utilization is 1/N.

mode=2, i.e. balance-xor (XOR policy)

Characteristics: packets are transmitted according to the configured transmit hash policy. The default policy is (source MAC address XOR destination MAC address) % number of slaves; other transmit policies can be selected via the xmit_hash_policy option. This mode provides load balancing and fault tolerance.
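As a worked illustration of that default hash (shown on the last MAC byte only, with two slaves and illustrative addresses), the selected slave index can be computed in the shell:

# (source MAC XOR destination MAC) % number of slaves
echo $(( (0xc2 ^ 0x1f) % 2 ))   # prints 1, so this MAC pair always uses slave 1

Because the hash depends only on the two MAC addresses, all packets of one flow take the same slave, which avoids the reordering problem of mode 0.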

mode=3, i.e. broadcast (broadcast policy)

Characteristics: every packet is transmitted on every slave interface. This mode provides fault tolerance.

mode=4, i.e. 802.3ad (IEEE 802.3ad dynamic link aggregation)

Characteristics: creates an aggregation group whose members share the same speed and duplex settings; per the 802.3ad specification, multiple slaves operate within the same active aggregate. Slave selection for outbound traffic follows the transmit hash policy, which can be changed from the default XOR policy via the xmit_hash_policy option. Note that not all transmit policies are 802.3ad-compliant, particularly with regard to the packet-reordering problem discussed in section 43.2.4 of the 802.3ad standard; different implementations tolerate this differently. A configuration sketch follows the prerequisites below.

Prerequisites:

Condition 1: ethtool supports reading each slave's speed and duplex setting

Condition 2: the switch supports IEEE 802.3ad dynamic link aggregation

Condition 3: most switches require specific configuration to enable 802.3ad mode
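As a hedged sketch, a mode 4 bond could be loaded with module options such as the following; lacp_rate and xmit_hash_policy are standard bonding driver parameters, but the values here are illustrative:

alias bond0 bonding
options bonding mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer2+3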

mode=5, i.e. balance-tlb (adaptive transmit load balancing)

Characteristics: requires no special switch support for channel bonding. Outbound traffic is distributed across the slaves according to the current load (computed relative to each slave's speed). If the slave that is receiving traffic fails, another slave takes over the failed slave's MAC address.

Prerequisite for this mode: ethtool supports reading each slave's speed
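This prerequisite can be checked per slave with ethtool; the output below is typical, though the exact fields vary by driver:

[root@woo ~]# ethtool eth0 | grep -E 'Speed|Duplex'
Speed: 1000Mb/s
Duplex: Full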

mode=6, i.e. balance-alb (adaptive load balancing)

Characteristics: this mode includes everything in balance-tlb and adds receive load balancing (rlb) for IPv4 traffic, with no switch support required. Receive load balancing is achieved through ARP negotiation: the bonding driver intercepts ARP replies sent by the local machine and rewrites the source hardware address to the unique hardware address of one of the bond's slaves, so that different peers communicate with different hardware addresses.

Traffic received from servers is balanced as well. When the local machine sends an ARP request, the bonding driver copies the peer's IP information from the ARP packet and saves it; when the peer's ARP reply arrives, the bonding driver extracts the peer's hardware address and issues an ARP reply steering that peer to one of the bond's slaves. One problem with using ARP negotiation for load balancing is that every broadcast ARP request carries the bond's hardware address, so once a peer learns that address, all of its traffic flows to the currently active slave. This is solved by sending updates (ARP replies) to all peers carrying their assigned unique hardware addresses, which redistributes the traffic. Received traffic is also redistributed when a new slave is added to the bond or an inactive slave is reactivated; the receive load is spread sequentially (round-robin) among the highest-speed slaves in the bond. When a link is reconnected, or a new slave joins the bond, receive traffic is redistributed across all currently active slaves by sending each client an ARP reply with its assigned MAC address. The updelay parameter must be set to a value greater than or equal to the switch's forwarding delay, to ensure that the ARP replies sent to peers are not blocked by the switch.
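The rewritten source hardware addresses described above can be observed by watching ARP traffic on a slave with link-level headers shown; the interface name below is just an example:

[root@woo ~]# tcpdump -e -n -i eth1 arp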

Prerequisites:

Condition 1: ethtool supports reading each slave's speed;

Condition 2: the underlying driver supports setting the hardware address of a device while it is up, so that there is always one slave (curr_active_slave) using the bond's hardware address, while each slave in the bond still has a unique hardware address. If curr_active_slave fails, its hardware address is taken over by the newly elected curr_active_slave.

In practice, the difference between mode 6 and mode 0 is visible in the traffic distribution: under mode 6, traffic fills eth0 first, then eth1, and so on through ethX, so the first port carries most of the traffic and the second only a small share; under mode 0 both ports carry very steady traffic at essentially the same bandwidth.
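One way to watch the per-port split just described is to sample per-interface throughput; this sketch assumes the sysstat package provides sar:

[root@woo ~]# sar -n DEV 1 | grep -E 'eth0|eth1'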

Linux network interface bonding:

Network interface bonding (bond) makes it easy to achieve interface redundancy and load balancing, and thus high availability and reliability. Assumed environment:

The two physical interfaces are: eth0, eth1
The bonded virtual interface is: bond0
The server IP is: 10.10.10.1

Step 1: edit the interface configuration files:

[root@woo ~]# vi  /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0

BOOTPROTO=none

ONBOOT=yes

IPADDR=10.10.10.1

NETMASK=255.255.255.0

NETWORK=10.10.10.0

[root@woo ~]# vi  /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=none

MASTER=bond0

SLAVE=yes

[root@woo ~]# vi  /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

BOOTPROTO=none

MASTER=bond0

SLAVE=yes

Step 2: modify the modprobe configuration and load the bonding module:

1. Here we create a dedicated configuration file for loading bonding, /etc/modprobe.d/bonding.conf:

[root@woo ~]# vi /etc/modprobe.d/bonding.conf

alias bond0 bonding

options bonding mode=1 miimon=100

2. Load the module (after a reboot the system loads it automatically, so this manual step is only needed the first time):

[root@woo ~]# modprobe bonding

3. Confirm that the module loaded successfully:

[root@woo ~]# lsmod | grep bonding

bonding 100065 0
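Besides lsmod, the bonding driver's sysfs interface reports the active mode directly; the output format below is that of the standard bonding driver:

[root@woo ~]# cat /sys/class/net/bond0/bonding/mode
active-backup 1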

Step 3: restart the network and verify the status:

[root@db01 ~]# service network restart

Shutting down interface bond0:  [  OK  ]

Shutting down loopback interface:  [  OK  ]

Bringing up loopback interface:  [  OK  ]

Bringing up interface bond0:  [  OK  ]

[root@db01 ~]#  cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

Bonding Mode: fault-tolerance (active-backup)

Primary Slave: None

Currently Active Slave: eth0

MII Status: up

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

Slave Interface: eth0

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 40:f2:e9:db:c9:c2

Slave Interface: eth1

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 40:f2:e9:db:c9:c3

[root@db01 ~]#  ifconfig | grep HWaddr

bond0     Link encap:Ethernet  HWaddr 40:F2:E9:DB:C9:C2

eth0      Link encap:Ethernet  HWaddr 40:F2:E9:DB:C9:C2

eth1      Link encap:Ethernet  HWaddr 40:F2:E9:DB:C9:C2

From the verification output above we can see three important things:

1. The current bonding mode is active-backup.

2. The currently active interface is eth0.

3. The MAC addresses of bond0 and eth1 are identical to that of the active interface eth0; this avoids confusing the upstream switch.

Unplug either network cable, then access your server again to check whether the network still works.
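The failover can also be simulated from the console instead of physically pulling a cable; a quick sketch, run against the currently active slave:

[root@db01 ~]# ip link set eth0 down
[root@db01 ~]# grep 'Currently Active Slave' /proc/net/bonding/bond0
Currently Active Slave: eth1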

Step 4: enslave the interfaces at system startup and add a default gateway:

[root@woo ~]# vi /etc/rc.d/rc.local

# Append the following lines

ifenslave bond0 eth0 eth1

route add default gw 10.10.10.1

# If the machine can already reach the network, the route is unnecessary; adjust the .1 gateway address to match your environment.

————————————————————————

Multi-interface bonding:

Note that the above covers only two interfaces bonded into a single bond0. To set up multiple bond interfaces, for example with physical interfaces eth0 and eth1 forming bond0 and eth2 and eth3 forming bond1, the interface configuration files are written exactly as in step 1; however, the /etc/modprobe.d/bonding.conf settings cannot simply be stacked like the following:

alias bond0 bonding

options bonding mode=1 miimon=200

alias bond1 bonding

options bonding mode=1 miimon=200

There are two correct ways to configure it:

First method (note that with this approach all bond interfaces must use the same mode):

alias bond0 bonding

alias bond1 bonding

options bonding max_bonds=2 miimon=200 mode=1

Second method (with this approach, different bond interfaces can use different modes):

alias bond0 bonding

options bond0 miimon=100 mode=1

install bond1 /sbin/modprobe bonding -o bond1 miimon=200 mode=0

Study these two methods carefully, and you should be able to extend them to three, four, or even more bond interfaces. A sketch of the second bond's interface files follows.
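For completeness, here is a sketch of the interface files for the second bond; the device names follow the example above, and the IP address is an assumption:

[root@woo ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond1

DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.10.11.1
NETMASK=255.255.255.0

The matching ifcfg-eth2 and ifcfg-eth3 are written exactly like ifcfg-eth0 and ifcfg-eth1 in step 1, with MASTER=bond1 instead of MASTER=bond0.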

Postscript:

miimon: the interval at which the link state is monitored, in milliseconds; we set 100 ms above.

max_bonds: the number of bond interfaces to create

mode: the bonding mode, as described above; in typical real-world deployments, modes 0 and 1 are the most common.
