
Re: MOSIX



On Fri, 15 Sep 2000 19:16:48 Dariush Pietrzak wrote:
> I've seen how MOSIX easily migrates processes, but I have no idea how
> delays affect performance (I've seen it used for number crunching at a
> physics department, so a large startup cost doesn't hurt much in that
> environment)
> 

As long as processes aren't too large or too small, it's going to work :)
Communication costs usually dominate in process migration, and performance
depends on the implementation (there are many design choices to be made
there). I have no first-hand experience with MOSIX; I've only read the
docs and looked at some of the papers. My supervisor told me that we won't
be dealing with any system with "home nodes", so I dropped it.

Process migration only works for processes that a single node can handle
anyway; it balances load across the cluster, it doesn't parallelize your
job. :)

My impression is that MOSIX won't be very scalable, but with up to 15-20
machines the problem might not be visible. I might be wrong, and MOSIX
might of course be the ultimate distributed OS that supports 1000s of
boxen. :)

> > note that you might go for HA quite easily but in many cases HPC will be
> > very hard to attain on any system that is just POSIX.
>  what other systems allow easy HPC?

Ones that have a fast message-passing model, which means rather few. You
can usually pass messages in systems like Mach or Amoeba in a convenient
and efficient way. Nowadays people use a message-passing / virtual machine
library for that functionality: MPI, PVM, BSPlib, p4, E/PX... This is the
idea behind Beowulf.
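
To make that concrete, here is a minimal MPI example in C (just a sketch;
the PVM version would look much the same with pvm_send/pvm_recv):

  #include <stdio.h>
  #include <mpi.h>

  /* run with at least two processes */
  int main(int argc, char **argv)
  {
      int rank, value;
      MPI_Status status;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          value = 42;
          /* send one int to process 1 with message tag 0 */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          /* receive it on the other side */
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
          printf("process 1 got %d\n", value);
      }

      MPI_Finalize();
      return 0;
  }

The library hides all the socket-level details; that convenience plus
portability is the whole point.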

There are naturally libraries that implement most of the functionality
required for scientific computing, mostly linear algebra: BLAS, ScaLAPACK,
etc.
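
For example, a matrix multiply boils down to a single dgemm call, here
through the C interface (assuming you have a BLAS with the cblas header,
e.g. the one ATLAS provides):

  #include <cblas.h>

  int main(void)
  {
      /* 2x2 row-major matrices; computes C = 1.0*A*B + 0.0*C */
      double A[4] = { 1, 2, 3, 4 };
      double B[4] = { 5, 6, 7, 8 };
      double C[4] = { 0, 0, 0, 0 };

      cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                  2, 2, 2,      /* M, N, K */
                  1.0, A, 2,    /* alpha, A, lda */
                  B, 2,         /* B, ldb */
                  0.0, C, 2);   /* beta, C, ldc */
      return 0;
  }

A tuned BLAS will almost always beat hand-rolled loops, which is why
everybody builds on top of it.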

People have also tried automatic parallelization of sequential code, but
that doesn't really work. :)

Unfortunately, there aren't many data-parallel languages/systems. If you'd
like to do some really neat work, I suggest you have a look at HPF (High
Performance Fortran); it seems to be the only serious data-parallel
language around. Not that I find the others inferior, but it is the only
one that comes close to some sort of architecture independence. And before
you recall how much you despise Fortran, rest assured that it is going to
be viable for at least a couple of decades. :)
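
To give an idea of what it buys you: in HPF you write a one-line directive
like !HPF$ DISTRIBUTE A(BLOCK) and the compiler spreads the array over the
processors for you. Hand-coded, a BLOCK distribution amounts to something
like this sketch in MPI-flavoured C:

  #include <stdlib.h>
  #include <mpi.h>

  #define N 1000

  int main(int argc, char **argv)
  {
      int rank, nprocs, i, local_n, first;
      double *a;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      local_n = N / nprocs;     /* assume nprocs divides N evenly */
      first = rank * local_n;   /* global index where my block starts */
      a = malloc(local_n * sizeof(double));

      /* each process initializes only its own block, in parallel */
      for (i = 0; i < local_n; i++)
          a[i] = 2.0 * (first + i);

      free(a);
      MPI_Finalize();
      return 0;
  }

The data-parallel language does that bookkeeping (and the communication
when blocks interact) for you.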

> > Do it on MPI; you'll probably need a paper or two about HMM computation in parallel.
> How is MPI better than PVM? (I've used PVM, and my teacher said something
> about MPI being known as the better technique.) Is MPI available for
> different types of machines? (With PVM I can make a "cluster" out of a
> bunch of PCs in labs and add to it my 2-processor SPARC running Solaris.)

Check out the LAM implementation; it works well, and MPI has a more modern
design and better utilities. MPI does have some advantages over PVM, so if
you're writing new parallel code, switch to MPI. Surely, using libraries
is the way to go if you aren't going to do low-level stuff. Most of what
you'll need is already packaged in the potato and woody distributions. LAM
can also handle heterogeneous networks, though I use it on a dedicated
Beowulf cluster.
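
Getting started with LAM is simple: list your machines in a host file,
boot the runtime, compile with LAM's wrapper, and run. Something like
(hcc is the wrapper in the LAM versions I've used; check your package,
newer ones also ship mpicc):

  lamboot -v lamhosts
  hcc -o hello hello.c -lmpi
  mpirun -np 2 hello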

When writing a parallel program, apply only tried-and-true methods;
anything else will distract you from the problem. If the available code
doesn't give you much leverage, the rest is black magic.

Some physicists also like Charm++, but I don't find it very efficient.

Thanks,

-- 
Eray (exa) Ozkural
Comp. Sci. Dept., Bilkent University, Ankara
e-mail: erayo@cs.bilkent.edu.tr
www: http://www.cs.bilkent.edu.tr/~erayo


