
Re: Getting Started



As far as GigE vs. fast Ethernet goes, a more important consideration is
the physical cabling. It would take 6 channel-bonded Ethernet cards to
match the roughly 600 Mbps a GigE card delivers in practice:
$80 for 2 GigE cards + $2 for 1 cable = $82
$96 for 12 100 Mb cards + $12 for 6 cables = $108

So the 6-card channel-bonded connection is more expensive and is 6x the
cabling mess at the same speed.
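
For concreteness, the same arithmetic as a tiny Python snippet, using
only the prices quoted above:

    # prices as quoted above, per two-node link
    GIGE_CARD = 40.0          # $ per GigE card (2 for $80)
    FE_CARD   = 8.0           # $ per 100 Mb card (12 for $96)
    CABLE     = 2.0           # $ per cable

    gige_total   = 2 * GIGE_CARD + 1 * CABLE    # 2 cards, 1 cable
    bonded_total = 12 * FE_CARD + 6 * CABLE     # 6 cards per node, 6 cables

    print("GigE link:      $%.0f" % gige_total)     # $82
    print("6x bonded link: $%.0f" % bonded_total)   # $108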

Therefore I think GigE is the best way to go.
In fact, I think the choice between GigE and channel-bonded 100 Mbit is
the same as asking whether to use 100 Mbit cards or channel-bonded
10 Mbit cards. It doesn't make sense to use 10 Mbit cards when you can
get the same technology (Ethernet) at higher speeds.

Rob 
On Fri, 15 Mar 2002, Jorge L. deLyra wrote:

> > I have heard, although I cannot speak from experience, that the new
> > GigE-over-copper cards deliver less than 600Mbps in practice under
> > Linux.
> 
> Well, I suppose the 100 Mbps cards don't give you nominal bandwidth
> either. Anyway, prices are coming down all the time, so if you want to
> start now with a technology that is likely to be run-of-the-mill stuff
> in a few years, Gbit may still be the way to go...
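
A minimal sketch of the kind of test behind numbers like "600 Mbps in
practice": time a bulk TCP send between two nodes with Python. The peer
address and port below are placeholders, and something has to sit on
the other end soaking up the data (netcat will do):

    import socket, time

    BUF = b"x" * 65536
    N   = 2000                            # ~128 MB total

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("10.0.0.2", 5001))         # placeholder peer address
    t0 = time.time()
    for _ in range(N):
        s.sendall(BUF)
    s.close()
    elapsed = time.time() - t0
    print("%.0f Mbps" % (len(BUF) * N * 8 / elapsed / 1e6))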
> 
> > If that is the case, Prof. Itti claims that you get arithmetically
> > better performance with progressive addition of Realtek (100Mbps) cards
> > with channel-bonding up to a point.
> 
> I think that the Realtek cards are quite bad, though inexpensive. Is there
> any evidence that it is better, price/performance-wise, to use these as
> opposed to one of the really good cards like 3COM 3C905, Digital DE500 or
> its many Tulip clones, or Intel EtherExpressPro?
> 
> > That point is determined more by shared interrupt problems than Data
> > Link Layer issues. This raises the question: how many cards could you
> > channel bond if you had interrupts to spare?
> 
> What about this ability the kernel has to deal with several PCI cards on
> the _same_ interrupt line? I suppose that several PCI network cards on a
> single interrupt line and a bonding driver should work more or less like a
> single faster card?
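
One quick way to see which cards actually share an interrupt line is to
parse /proc/interrupts; a small Python sketch (device names like eth0
are whatever your drivers registered):

    # report IRQ lines that more than one ethN device sits on (Linux)
    for line in open("/proc/interrupts"):
        fields = line.split()
        if not fields or not fields[0].rstrip(":").isdigit():
            continue                      # skip the CPU header line etc.
        irq  = fields[0].rstrip(":")
        devs = [f.rstrip(",") for f in fields if f.startswith("eth")]
        if len(devs) > 1:
            print("IRQ %s shared by: %s" % (irq, ", ".join(devs)))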
> 
> > Heat removal has certainly become a cottage industry. I have seen many
> > overclocker sites that have various suggestions (other than good heat
> > sinks / fans / thermal compound) on ways to get your CPU core down a
> > degree or 2 more by leaving the case side off or some other inexpensive
> > tricks.
> 
> Well, for those out there using K7's and having trouble, overheating
> isn't the only problem; watch the voltages of your power supplies too.
> Recently
> we had two of ours drop the CPU core voltage by 10% (from 3.3V to 3.0V),
> which caused hardware hangs after a period of operation ranging from a few
> minutes to a few hours.
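
A hedged sketch of a watchdog for that failure mode: poll lm_sensors
output and complain when a rail sags. This assumes the lm_sensors
package is installed and that its "sensors" output prints lines like
"+3.3V: +3.28 V"; label names and sane limits vary by board:

    import os, re, time

    LIMITS = {"+3.3V": (3.14, 3.46)}      # nominal +/- 5%, adjust to taste

    while True:
        for line in os.popen("sensors"):
            m = re.match(r"\s*([\w+.]+):\s*\+?([\d.]+)\s*V", line)
            if m and m.group(1) in LIMITS:
                v = float(m.group(2))
                lo, hi = LIMITS[m.group(1)]
                if not lo <= v <= hi:
                    print("WARNING: %s rail at %.2f V" % (m.group(1), v))
        time.sleep(60)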
> 
> > Well, this is the issue of the day, IMHO. Every subcomponent of the
> > Beowulf archetype has been improving at the pace of Moore's Law except
> > for the high-speed interconnect. I would certainly prefer to use a
> > protocol at the Network layer because of the application programming
> > implications, but if there were something that would give us lower
> > latency for the same bandwidth it would certainly be a hit...
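
Latency is also easy to measure at the application level; a minimal
TCP ping-pong probe in Python, assuming an echo server is listening on
the other node (address and port are placeholders):

    import socket, time

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # defeat Nagle
    s.connect(("10.0.0.2", 5002))         # placeholder echo peer
    t0 = time.time()
    for _ in range(1000):
        s.sendall(b"x")
        s.recv(1)                         # wait for the echoed byte
    print("avg RTT: %.1f us" % ((time.time() - t0) / 1000 * 1e6))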
> 
> Well, there is a group in the University Computer Center here which is
> going to try a 30-node machine with Gbit and the bypass of the kernel TCP
> stack. Their idea is to have two network cards on each node, a 100 Mbps
> one for normal TCP networking and a Gbit one for inter-node communication.
> Remote boot is to be done using the 100 Mbps cards, since there still
> aren't any Etherboot drivers for Gbit. They have chosen Gbit as opposed to
> Myrinet, I'm told. I know several people on the team and, if this really
> works out, I will let the list know...
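
From the application side, that two-network split can be as simple as
binding the data socket to the address on the Gbit card, so bulk
traffic stays off the 100 Mbps control network. Addresses here are
made up:

    import socket

    GIGE_LOCAL = "192.168.2.1"    # this node's address on the Gbit net
    PEER       = "192.168.2.7"    # remote node, also on the Gbit net

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((GIGE_LOCAL, 0))       # choose the outgoing interface explicitly
    s.connect((PEER, 5001))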
> 							Cheers,
> 
> ----------------------------------------------------------------
>         Jorge L. deLyra,  Associate Professor of Physics
>             The University of Sao Paulo,  IFUSP-DFMA
>        For more information: finger delyra@latt.if.usp.br
> ----------------------------------------------------------------