
Re: Getting Started


> I have put together a µCluster that addresses some of these concerns.

Very interesting; yours is a more solid, engineering-oriented approach,
which is no surprise considering your address!...

> I eschewed latency concerns in favor of bandwidth ones and implemented a
> channel-bonding scheme much like the one used at http://ilab.usc.edu/beo.

How does bonding of 100 Mbps cards compare with Gigabit Ethernet these
days? The cards are not all that expensive anymore, but I guess the
network equipment still is. We got a 3Com server card for about US$ 500
some time ago, but I hear some are going for less than US$ 200 now.
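For a rough back-of-the-envelope comparison, here is a small sketch; the
channel count and the usable fraction of wire speed are illustrative
assumptions, not figures from the ilab.usc.edu setup:

```python
# Rough throughput comparison: bonded Fast Ethernet vs. Gigabit Ethernet.
# The channel count and efficiency figure are illustrative assumptions.

def transfer_time_s(megabytes, link_mbps, efficiency=0.9):
    """Seconds to move `megabytes` of data over a link running at
    `link_mbps`, assuming a fixed fraction of wire speed is usable."""
    bits = megabytes * 8e6
    return bits / (link_mbps * 1e6 * efficiency)

bonded_mbps = 4 * 100      # four bonded 100 Mbps channels (assumption)
gigabit_mbps = 1000

payload_mb = 500           # e.g. one large data file
print(f"bonded : {transfer_time_s(payload_mb, bonded_mbps):6.1f} s")
print(f"gigabit: {transfer_time_s(payload_mb, gigabit_mbps):6.1f} s")
```

Of course this only captures raw bandwidth; per-packet latency, which
bonding does nothing to help, is a separate question.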

> I agree with many of Jorge's suggestions (I ran his page through the URL
> translator at worldlingo.com)

If this worked well, Andrew's translation job may already be half done!

> and for raw heat-spewing, number-crunching, beowulf-thrashing
> performance, AMD CPUs are the way to go...

Yes, and for having a bit of heat trouble too. One of the nodes has been
bothering us sporadically, so we have 50 large fans in the buying queue.

> ...now if only we could get the data to and from the AMD MoBos faster!   

Well, speaking of latency, I understand there are two major causes: the
kernel's TCP stack and the switches. With wire-speed switches I presume
the former is the main one these days. I hear there is a public driver
somewhere for 3Com Gigabit cards that uses them without going through the
kernel networking layer. Does anybody know anything about this? If true,
it could be a big plus for reducing latency.
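A quick way to see how much of the round-trip time is software rather
than wire is a loopback ping-pong, where every message crosses the
kernel networking stack but never touches a physical link. This is just
a sketch of the measurement idea, not the bypass driver mentioned above:

```python
# Loopback UDP ping-pong: each round trip crosses the kernel networking
# stack in both directions, so the measured time is almost entirely
# software (stack) overhead rather than wire time.
import socket
import threading
import time

N = 1000  # number of round trips to average over

def echo_server(sock):
    # Echo N small datagrams back to whoever sent them.
    for _ in range(N):
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))           # let the kernel pick a free port
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

t = threading.Thread(target=echo_server, args=(srv,))
t.start()

start = time.perf_counter()
for _ in range(N):
    cli.sendto(b"x" * 32, ("127.0.0.1", port))
    cli.recvfrom(64)
elapsed = time.perf_counter() - start
t.join()

print(f"mean round trip: {elapsed / N * 1e6:.1f} microseconds")
```

Running the same ping-pong across the real network and subtracting the
loopback figure gives a crude estimate of what the switch and the wire
contribute on their own.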

        Jorge L. deLyra,  Associate Professor of Physics
            The University of Sao Paulo,  IFUSP-DFMA
       For more information: finger delyra@latt.if.usp.br
