
Re: PETSc Debian package switch to lam?



Eray Ozkural (exa) wrote:

> LAM seems to be more practical, and since it doesn't bring performance
> penalties, it's our implementation of choice.

That's about what I thought.

> Nevertheless, I would not suggest switching to lam unless we have the
> 6.5.2 stable version available in debian, which we do not. 6.5.2 will
> require packaging changes; the debian diffs for 6.3.2 won't work. I will
> hopefully present new versions for testing soon. Anyway, the new lam
> version builds and installs with ./configure, make and make install, so
> it's quite trivial.

> The technical justification for mpich would be:
> 1. a more complete / up-to-date implementation
> 2. better performance
>
> Generally, I haven't observed (2). However, for custom hardware (e.g.
> Myrinet) mpich may be better. And note that it has been used and extended
> by many vendors as a reference implementation. As for (1), standards
> compliance is surely the more important matter. For instance, there is
> the new
>
> MPI_Alltoallw(... many args ...)
>
> function intended for matrix computations. If the new mpich has it while
> lam lacks it, then for the sake of the standard mpich would be better.

That makes sense. I'll leave it with mpich at least until lam 6.5.2 is in unstable.

Regarding standards, since PETSc is built to work with both implementations, I imagine it either doesn't use mpich-specific interfaces or else has workarounds for the interface differences, so "whichever is faster" is the correct choice. But this means I should probably do more extensive testing than just a few nonlinear finite difference problems, maybe a couple of dense matrix tests too, which could give mpich an advantage if PETSc does use MPI_Alltoallw().
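
To make the argument structure concrete, here's a minimal, made-up sketch of an MPI_Alltoallw call, assuming I have the MPI-2 signature right; the one-int-per-rank exchange is purely illustrative and not anything PETSc actually does:

   #include <mpi.h>
   #include <stdio.h>
   #include <stdlib.h>

   /* Each rank sends one int to every other rank; the point is just the
    * per-peer counts, byte displacements, and datatypes that give
    * MPI_Alltoallw its "many args". */
   int main(int argc, char **argv)
   {
       int rank, size, i;
       int *sendbuf, *recvbuf, *counts, *displs;
       MPI_Datatype *types;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);

       sendbuf = malloc(size * sizeof(int));
       recvbuf = malloc(size * sizeof(int));
       counts  = malloc(size * sizeof(int));
       displs  = malloc(size * sizeof(int));    /* byte offsets, unlike Alltoallv */
       types   = malloc(size * sizeof(MPI_Datatype));

       for (i = 0; i < size; i++) {
           sendbuf[i] = rank * 100 + i;         /* payload destined for rank i */
           counts[i]  = 1;
           displs[i]  = i * sizeof(int);        /* displacement of block i, in bytes */
           types[i]   = MPI_INT;                /* a different type per peer is allowed */
       }

       /* Here the same counts/displacements/types describe both sides. */
       MPI_Alltoallw(sendbuf, counts, displs, types,
                     recvbuf, counts, displs, types, MPI_COMM_WORLD);

       printf("rank %d received %d from rank 0\n", rank, recvbuf[0]);

       free(sendbuf); free(recvbuf); free(counts); free(displs); free(types);
       MPI_Finalize();
       return 0;
   }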

Also, since we're largely an x86 community, and Linux has only recently started to scale to larger SMP machines, I imagine most Debian PETSc users are of the Beowulf variety, with relatively inexpensive networking, i.e. not Myrinet. But I could be wrong... In any case, "whichever is faster" should mean the implementation that is fastest for the most people, and the rest can just do (assuming lam is chosen):

   apt-get source petsc
   cd petsc-2.1.0
   debian/rules PETSC_MPI=mpich binary

Thank you for bringing these issues to my attention; I'm pretty much "just a user" with barely enough knowledge of the internals to make stuff work. :-)

> Hmm, I think I should look at the neat petscgraphics ;)

Now that it's uploaded for more than just powerpc, you should be able to just apt-get install petscgraphics1-demo to drag in its dependencies, and then run chts or "mpirun -np X /usr/bin/chts". I'm working out a divide-by-zero issue, and then I'll upload for alpha too.

At this point, the slow steps are sending the triangles to geomview and rendering them there; the triangulation generation itself is fast even on one CPU. The current approach also exposes the geomview transparency bug, so for all of those reasons I need to get away from this architecture. I'm thinking of using Evas to render the semi-transparent triangles into a pixmap on each processor, then just have node 0 layer those semi-transparent pixmaps. Or is there a good, fast alternative to Evas? :-)
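
Just to make that concrete, here's a rough, made-up sketch of the node 0 layering step in plain C and MPI; it's not Evas code, it assumes each rank has already rendered its triangles into a local RGBA pixmap by some means, and the fixed rank order stands in for a real back-to-front sort:

   #include <mpi.h>
   #include <stdlib.h>

   #define W 640
   #define H 480
   #define NPIX (W * H)

   /* Composite src over dst; both are premultiplied-alpha RGBA, 8 bits/channel. */
   static void over(unsigned char *dst, const unsigned char *src)
   {
       int i, c;
       for (i = 0; i < NPIX; i++) {
           int a = src[4 * i + 3];
           for (c = 0; c < 4; c++)
               dst[4 * i + c] = (unsigned char)
                   (src[4 * i + c] + dst[4 * i + c] * (255 - a) / 255);
       }
   }

   int main(int argc, char **argv)
   {
       int rank, size, r;
       unsigned char *local, *all = NULL;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);

       local = calloc(4 * NPIX, 1);
       /* ... each rank renders its share of semi-transparent triangles into
        * "local" here, with Evas or whatever local rasterizer ... */

       if (rank == 0)
           all = malloc((size_t)size * 4 * NPIX);
       MPI_Gather(local, 4 * NPIX, MPI_UNSIGNED_CHAR,
                  all, 4 * NPIX, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);

       if (rank == 0) {
           unsigned char *frame = calloc(4 * NPIX, 1);
           for (r = 0; r < size; r++)           /* layer the per-node pixmaps */
               over(frame, all + (size_t)r * 4 * NPIX);
           /* ... hand "frame" to the display instead of streaming triangles
            * to geomview ... */
           free(frame);
           free(all);
       }

       free(local);
       MPI_Finalize();
       return 0;
   }

The appeal is that only one pixmap per node crosses the network, instead of every triangle going to geomview.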

Zeen,
--

-Adam P.

GPG fingerprint: D54D 1AEE B11C CE9B A02B  C5DD 526F 01E8 564E E4B6

Welcome to the best software in the world today cafe! <http://lyre.mit.edu/%7Epowell/The_Best_Stuff_In_The_World_Today_Cafe.ogg>




