What workload do you have that requires 400 MB/s of parallel-stream TCP
throughput at the server? NFS, FTP, iSCSI? If this is a business
requirement and you actually need this much bandwidth to/from one
server, you will achieve far better results putting a 10GbE card in the
server and a 10GbE uplink module in your switch. Yes, this costs more
money, but the benefit is that all client hosts get full GbE bandwidth
to/from the server, all the time, in both directions. You'll never
achieve that with the Linux bonding driver: its hash-based modes pin
each TCP flow to a single GbE slave, so flows that hash to the same
slave share that one link, and no single stream can ever exceed 1 Gb/s.
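
For reference, a minimal sketch of what a four-slave 802.3ad (LACP)
bond looks like with iproute2. The interface names, IP address, and
slave count are placeholders, and the switch ports would also need an
LACP channel group configured. Note that even the layer3+4 hash policy
only spreads *distinct* flows across slaves; it never splits one flow:

    # Sketch: create an LACP bond with a layer3+4 transmit hash
    ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4

    # Enslave the NICs (links must be down before enslaving)
    for nic in eth0 eth1 eth2 eth3; do
        ip link set "$nic" down
        ip link set "$nic" master bond0
    done

    # Bring the bond up and address it (example address)
    ip link set bond0 up
    ip addr add 192.168.1.10/24 dev bond0

Even with four slaves, two clients whose flows hash onto the same slave
split that one GbE link between them, which is exactly the behavior the
10GbE uplink avoids.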
--
Stan