
Re: PETSc Debian package switch to lam, openmpi, or something else?



Great, that would be awesome. I need this for my python-petsc and
libmesh packages.

Ondrej

On 6/19/07, Adam C Powell IV <hazelsct@debian.org> wrote:
Hello again,

With no discussion, I'm going to go ahead and try building petsc against
openmpi, and if that works, I'll make it the default, with lam and
mpich(1) as build-time options as before.  (If it doesn't work, I'll use
lam, which is known to work.)

-Adam

On Mon, 2007-06-11 at 14:42 -0400, Adam C Powell IV wrote:
> Greetings,
>
> I would like to revisit an issue last discussed here six years ago; see
> http://lists.debian.org/debian-beowulf/2001/06/msg00071.html
>
> Back then, the reasons to stay with mpich vs. switching to lam included:
>       * standards conformance (mpich as the reference implementation)
>       * vendor preference (many other MPI implementations are based on
>         mpich)
>       * performance (though tests run at the time showed them very even)
>       * ease of setup (no need to lamboot etc.)
>
> Recently, a new motivation to switch to lam has emerged: the petsc4py
> python bindings work with lam, but not with mpich; openmpi is not yet
> tested.  So I would like to revisit this question with the list, and if
> there are no concerns or objections, to switch to lam or openmpi as the
> default MPI implementation for PETSc.  I am open to suggestions on which
> of these is better to use; I suspect openmpi.
>
> Regarding the possibility of shipping PETSc built against more than one
> MPI implementation in the archive, the answer is no.  With eleven
> architectures (and more on the way, thanks to kfreebsd and hurd), PETSc
> takes more than 70 MiB per distribution, and we have four distributions
> now (oldstable, stable, testing, unstable).  For a package with 44 users
> out of 53,323 on popcon (for the most popular PETSc binary), I am not
> going to consider taking any more mirror space than it uses now.
>
> Furthermore, I've gone out of my way to make it very easy to build with
> alternative MPI implementations: just run "debian/rules PETSC_MPI=lam
> binary" (or, in the future, PETSC_MPI=mpich if lam becomes the default);
> a sketch of such a hook follows after this message.
>
> Thanks for your input.
>
> Regards,
> -Adam
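
For anyone curious how such a build-time switch is typically wired up, the
fragment below is a minimal, hypothetical sketch of a debian/rules hook that
honours PETSC_MPI.  Only the "debian/rules PETSC_MPI=lam binary" invocation
comes from the message above; the directory paths, the --with-mpi-dir flag,
and the variable names are assumptions, not the actual PETSc packaging.

# debian/rules fragment (GNU make) -- illustrative sketch only.

# Default MPI implementation; a command-line assignment such as
# "debian/rules PETSC_MPI=lam binary" overrides it.
PETSC_MPI ?= openmpi

# Map the chosen implementation to the tree handed to PETSc's configure
# (paths assumed; adjust to wherever the MPI packages actually install).
ifeq ($(PETSC_MPI),openmpi)
MPI_DIR := /usr/lib/openmpi
else ifeq ($(PETSC_MPI),lam)
MPI_DIR := /usr/lib/lam
else ifeq ($(PETSC_MPI),mpich)
MPI_DIR := /usr/lib/mpich
else
$(error unsupported PETSC_MPI "$(PETSC_MPI)"; use openmpi, lam or mpich)
endif

# The configure/build targets would then append this to PETSc's configure call.
CONFIGURE_FLAGS += --with-mpi-dir=$(MPI_DIR)

Run as plain "debian/rules binary" this picks openmpi; "debian/rules
PETSC_MPI=lam binary" or PETSC_MPI=mpich selects the alternatives, matching
the override described above.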

--
GPG fingerprint: D54D 1AEE B11C CE9B A02B  C5DD 526F 01E8 564E E4B6

Welcome to the best software in the world today cafe!
http://www.take6.com/albums/greatesthits.html




