Re: Bonded gigabit cards
On Thu, 2003-08-07 at 16:02, Eric Nodwell wrote:
*snip*
> You're correct that you construct two physically independent
> networks, each with its own switch. You could use either of these
> networks in normal (non-channel-bonded) mode.
>
> Channel bonding is transparent to practically all network protocols,
> but channel bonding does *not* interoperate with non-channel-bonded
> networking. (A non-channel-bonded machine sees only every second
> packet, since the bonded sender alternates between the two networks.)
> One consequence is that network booting does not work (DHCP, BOOTP,
> Etherboot, PXE, whatever). This can be an inconvenience. For example,
> we use FAI to automatically install nodes, but if a node has a disk
> failure, we have to schedule a cluster shutdown and switch to
> non-channel-bonded networking to reinstall the node.
>
Unfortunately my cluster is long gone, but I had a theory for solving
this problem. I was going to try the following interface layout:
eth0 on 192.168.0.1
eth1 on 192.168.1.1
bond0 on 192.168.2.1
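Something like this bring-up is what I had in mind with the 2.4-era
tools (just a sketch -- I never ran it, and standard bonding normally
rewrites the slaves' addresses, so whether the driver tolerates
independently addressed slaves is exactly the untested part):

    # load the bonding driver in round-robin mode
    modprobe bonding mode=0 miimon=100

    # each physical NIC keeps its own address on its own network
    ifconfig eth0 192.168.0.1 netmask 255.255.255.0 up
    ifconfig eth1 192.168.1.1 netmask 255.255.255.0 up

    # the bonded interface gets the cluster address
    ifconfig bond0 192.168.2.1 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1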
Then I could set up routing so that all cluster traffic goes over
bond0, selected by IP address in the routing tables. This would also
let the DHCP and BOOTP daemons listen and send only on eth0 (or eth1)
for the non-bonded machines, with traffic to those machines routed
through the non-bonded interfaces. With a good DNS setup, e.g.
cluster1 = 192.168.2.1 and cluster1-eth0 = 192.168.0.1, it would be
pretty transparent, I would think.
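The routing and naming side would look something like this (again a
sketch, never run -- the /24 masks and the exact dhcpd invocation are
guesses):

    # cluster traffic goes over the bonded interface ...
    route add -net 192.168.2.0 netmask 255.255.255.0 dev bond0
    # ... and the install/boot network stays on plain eth0
    route add -net 192.168.0.0 netmask 255.255.255.0 dev eth0

    # ISC dhcpd takes interface names on the command line, so the boot
    # services only ever see the non-bonded network
    dhcpd eth0

with DNS (or /etc/hosts) entries pairing up the two names:

    192.168.2.1   cluster1
    192.168.0.1   cluster1-eth0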
I've wanted to give it a shot, but I don't have any hardware to test
the theory at the moment.
--mike