Re: PETSc Debian package switch to lam, openmpi, or something else?
- To: Debian Beowulf <firstname.lastname@example.org>
- Cc: Ondrej Certik <email@example.com>, Lisandro Dalcin <firstname.lastname@example.org>
- Subject: Re: PETSc Debian package switch to lam, openmpi, or something else?
- From: Adam C Powell IV <email@example.com>
- Date: Tue, 19 Jun 2007 16:21:34 -0400
- Message-id: <1182284495.21498.28.camel@veryst>
- In-reply-to: <1181587361.17470.36.camel@veryst>
- References: <1181587361.17470.36.camel@veryst>
With no discussion, I'm going to go ahead and try using petsc with
openmpi, and if that works, will make that the default, with lam and
mpich(1) as build-time options as before. (If it doesn't work, I'll use
lam, which is known to work.)
On Mon, 2007-06-11 at 14:42 -0400, Adam C Powell IV wrote:
> I would like to revisit an issue last discussed here six years ago, see
> Back then, the reasons to stay with mpich vs. switching to lam included:
> * standards conformance (mpich as the reference implementation)
> * vendor preference (many other MPI implementations are based on mpich)
> * performance (though tests run at the time showed them very even)
> * ease of setup (no need to lamboot etc.)
> Recently, a new motivation has emerged to switch to lam: the petsc4py
> python bindings work with lam, but not with mpich; openmpi is not yet
> tested. So I would like to revisit this question with the list again,
> and if there are no concerns or objections, to switch to lam or openmpi
> as the default MPI implementation for PETSc. I am open to suggestions
> on which of these is better to use; I suspect openmpi.
> Regarding the possibility of more than one in the repository, the answer
> is no. With eleven architectures (and more on the way, thanks to
> kfreebsd and hurd), PETSc takes more than 70 MiB per distribution, and
> we have four distributions now (oldstable, stable, testing, unstable).
> For a package with 44 users out of 53,323 on popcon (for the most
> popular PETSc binary), I am not going to consider taking any more mirror
> space than we have now.
> Furthermore, I've gone out of my way to make it very easy to build with
> alternative MPI implementations, just do "debian/rules PETSC_MPI=lam
> binary" (or in the future PETSC_MPI=mpich if lam is the default).
> Thanks for your input.
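As a concrete illustration of the PETSC_MPI build hook described above, a minimal sketch follows; the apt-get and cd steps are my assumption about the surrounding workflow, not part of the packaging itself, and the version glob is illustrative:

```shell
# Rebuild the PETSc Debian package against an alternative MPI
# implementation via the PETSC_MPI variable mentioned above.
# Assumes build dependencies (and the chosen MPI's -dev packages)
# are already installed.
apt-get source petsc                 # fetch and unpack the source package
cd petsc-*                           # enter the unpacked source tree
debian/rules PETSC_MPI=lam binary    # build the .debs against lam
```

If openmpi becomes the default as proposed, the same invocation with PETSC_MPI=mpich would select mpich instead.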
GPG fingerprint: D54D 1AEE B11C CE9B A02B C5DD 526F 01E8 564E E4B6
Welcome to the best software in the world today cafe!