Re: Beowulf cluster



On 28 Aug, Ossama Othman wrote:
>> From what I remember, Beowulf uses PVM, and that is already packaged.
>> They basically seemed, when I read their pages, to be making "add-ons"
>> which make PVM more powerful.
> 
> From what I recall, PVM is being superseded by MPI.  There is already a
> Debian packaged implementation of MPI called "mpich."  I'd suggest using
> MPI instead of PVM.
> 
> By the way, has anyone adopted MPICH since it was orphaned (poor little
> mpich :))? If not, I'll adopt it.
> 
> -Ossama

My impression, from having looked into them both, is that PVM is better
at some things and MPI is better at others, so both are still used.
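
For anyone who hasn't looked at MPI, a minimal program looks roughly
like this (an untested sketch; it assumes the mpich package, whose
mpicc and mpirun wrappers you'd use to build and launch it):

/* hello.c - minimal MPI sketch (untested) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's number   */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of workers */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down cleanly       */
    return 0;
}

Something like "mpicc hello.c -o hello" followed by "mpirun -np 4
./hello" should start four copies, with one message from each.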

BTW, the answer to the original question (which I've since deleted) is
that yes, you can use SMP in a Beowulf cluster.  At least, there isn't any
technical reason that I know of why you couldn't.  Simply build the
Beowulf cluster out of SMP-capable systems, install multiple processors
in each one, and make sure to compile SMP into the kernels when you
set it up.  My guess is that most Beowulf-class clusters don't do this,
because the limiting factor on computation speed in most clusters is
communication speed.  Having more than one processor in the same box
only exacerbates the communication speed problem.  Also, most PC-based
SMP implementations have difficulty with memory bandwidth.  I followed
the linux-smp mailing list for a short while; one of the things that
came up was that many SMP boxes ran two memory-intensive processes
slower in parallel than sequentially.  That is, it was actually faster
not to use the second processor!
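
If you want to see the memory-bandwidth effect for yourself, something
like this untested sketch should do it; the buffer size and pass count
are just guesses, the only real requirement being a buffer much larger
than the CPU caches so every pass goes out to main memory:

/* memhog.c - rough sketch of a memory-intensive job (untested) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUFSIZE (32 * 1024 * 1024)  /* 32 MB: far bigger than L2 cache */
#define PASSES  10

int main(void)
{
    unsigned char *buf = malloc(BUFSIZE);
    unsigned long sum = 0;
    size_t i;
    int pass;

    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 1, BUFSIZE);        /* touch every page up front */

    for (pass = 0; pass < PASSES; pass++)
        for (i = 0; i < BUFSIZE; i++)
            sum += buf[i];          /* stream every byte from RAM */

    printf("sum = %lu\n", sum);     /* stop the loop being optimized away */
    return 0;
}

Time one copy with time(1), then start two copies at once (one per
CPU) and compare.  If the concurrent pair takes more than twice as long
as a single run, you've reproduced the problem: running them one after
the other would have been faster.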

My guess is that you haven't seen anything on SMP Beowulfs precisely
because of the memory and network bandwidth problems, but I've never
built a Beowulf, so I don't really want to put words in the mouth of
anyone who has.
-- 
Stephen Ryan                   Debian GNU/Linux
Mathematics graduate student, Dartmouth College

