
Re: Ethernet bonding mode 5 only using one Slave adapter.



On 10/8/2013 4:41 AM, Muhammad Yousuf Khan wrote:
> I am using bond mode balance-alb, and here is my "/etc/network/interfaces":
...
> auto bond0
> 
> iface bond0 inet static
> address 10.5.X.200
> netmask 255.255.255.0
> network 10.5.x.0
> gateway 10.5.x.9
> slaves eth2 eth3
> #bond-mode active-backup
> bond-mode balance-alb
> bond-miimon 100
> bond-downdelay 200
...
> Note: as you can see in the /proc/net/bonding/bond0 file, the active link
> is eth2, and bwm-ng shows that transmission is also on eth2. Even when I
> use two sessions (I thought it could work like round robin, as in "mode
> 0"), both sessions are transmitting data from eth2.

With balance-alb, packet distribution across links is dictated by the
receiving system's bond interface logic, not the sending system.  The
receiving system uses ARP replies to trick the sending host's interface
into transmitting to one of the receiver's multiple physical interfaces.

Thus, when balance-alb receive balancing isn't working, it's usually
because this ARP reply trickery isn't working correctly.  See
"balance-alb" in

https://www.kernel.org/doc/Documentation/networking/bonding.txt
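
If you want to see whether that trickery is actually happening, a quick
check (a rough sketch; the address below is just your bond0 address from
the config you posted) is to watch the ARP traffic on the bond and look
at what MAC each client has cached for the server:

  # on the bonded server: watch the ARP replies the rlb code hands out
  tcpdump -nei bond0 arp

  # on two different clients: when receive balancing is working, each
  # client should have cached a *different* slave MAC for the server
  arp -n 10.5.x.200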

> What I want to achieve is per-packet load balancing.

This is not possible with mode balance-alb.  The best you can get with
balance-alb is per-session load balancing, not per-packet.  If you want
per-packet transmit load balancing you must use balance-rr, but
balance-rr does not balance receive traffic.
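
For reference, here's a minimal sketch of your stanza switched to
balance-rr (same slaves and addressing as the config you posted; adjust
as needed):

  auto bond0
  iface bond0 inet static
      address 10.5.x.200
      netmask 255.255.255.0
      network 10.5.x.0
      gateway 10.5.x.9
      slaves eth2 eth3
      # per-packet round-robin transmit
      bond-mode balance-rr
      bond-miimon 100
      bond-downdelay 200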

> If I send two packets,
> they should move out from both links, eth2 and eth3, so I can combine
> 4x 1 GbE LAN cards and achieve a 4 Gb/s transmit rate and redundancy.

The only way to achieve this is between two hosts, both with the same
port count and both using balance-rr, connected either with crossover
cables or through a dumb (non-managed) switch.
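
Also be aware that balance-rr striping delivers TCP segments out of
order, so a single stream usually needs a higher reordering threshold to
come anywhere near line rate.  The bonding doc covers this; something
along these lines on both hosts is a common starting point (the value is
an example, not a recommendation):

  # tolerate more out-of-order segments before TCP treats them as lost
  sysctl -w net.ipv4.tcp_reordering=127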

> I know that achieving 4 Gb/s of network output depends upon the hardware quality. I

Raw link throughput from the hardware is the least of your worries.
The major hurdle is getting the bonding driver working the way you want
it to.  If it isn't, most of the time you'll only get a single 1 GbE
link's worth of throughput.
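
Before blaming hardware, measure what you're actually getting.  A quick
sanity check with iperf (assuming it's installed on both ends; the
address is your bond0 IP) will show whether you're stuck at a single
link's worth of throughput:

  # on the server
  iperf -s

  # on a client: one stream, then several parallel streams
  iperf -c 10.5.x.200
  iperf -c 10.5.x.200 -P 4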

> will check on that too, but at least bwm-ng could show me packet activity
> on all the links, not only the active link.

Read the kernel doc I provided.  My guess is that in the test case for
which you provided numbers, only one slave on the receiving system was
active.  Follow the examples in the kernel doc and you should be able to
straighten this out and achieve performance closer to your goal.
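
While such a test is running, check the bond on *both* hosts to confirm
that all slaves are up and negotiated at the expected speed:

  grep -E 'Bonding Mode|Slave Interface|MII Status|Speed' \
      /proc/net/bonding/bond0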

What workload do you have that requires 400 MB/s of parallel stream TCP
throughput at the server?  NFS, FTP, iSCSI?  If this is a business
requirement and you actually need this much bandwidth to/from one
server, you will achieve far better results by putting a 10GbE card in the
server and a 10GbE uplink module in your switch.  Yes, this costs more
money, but the benefit is that all client hosts get full GbE bandwidth
to/from the server, all the time, in both directions.  You'll never
achieve that with the Linux bonding driver.

-- 
Stan

