Is anyone aware of the CPU overhead incurred by bonded gigabit devices?
It is quite high... around 30% under many NetPIPE scenarios, but I was using the Intel PWLA8390MT. YMMV.
After skimming through the bonding documentation shipped with the kernel, it appears that bonding is possible over any switches; however, it seems to imply that this is for high-availability bonding rather than high-performance bonding. From previous posts to this list, I seem to recall that it's possible to have eth0 and eth1 connected through different switches on different subnets and rely on the kernel bonding to handle the rest. Is this accurate?
Not different subnets. The bond0 device disregards any assignments you make to eth0 and eth1. But you do have to keep the NICs physically separated on "dumb" switches since a given switch won't be able to distinguish them by MAC address after you bond them. See...
lists.debian.org/debian-beowulf/2001/debian-beowulf-200111/msg00020.html

The other bonding options are only available with a "good" switch (one that supports FEC and GEC, for instance).
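For reference, a minimal round-robin bonding setup of the sort discussed above can be sketched as follows. This is a generic example, not a verbatim config from the thread: the mode, IP address, and interface names are assumptions, and the commands require root plus the `ifenslave` tool.

```shell
# Load the bonding driver in round-robin mode (mode=0, balance-rr),
# which stripes packets across slaves for throughput rather than
# just failover. Use mode=1 (active-backup) for pure HA instead.
modprobe bonding mode=balance-rr miimon=100

# Bring up the bond device with its own IP; any addresses previously
# assigned to eth0/eth1 are disregarded once they are enslaved.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up

# Enslave the physical NICs to bond0. After this, both NICs share
# the bond's MAC address, which is why "dumb" switches get confused
# unless the NICs are kept on physically separate switches.
ifenslave bond0 eth0 eth1
```

Note that bond0 carries the single IP for the pair, consistent with the point above that the slaves cannot sit on different subnets.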
http://www.intel.com/support/network/adapter/1000/linux/e1000.htm