Re: Atlas proposal
On Wed, 18 Aug 2010, Samuel Thibault wrote:
> Don Armstrong, le Tue 17 Aug 2010 17:24:05 -0700, a écrit :
> > [Optimization is] about maximizing the throughput of a particular
> > problem, which may mean that atlas shouldn't use all of the cache,
> > or shouldn't use as many cores as exist on a particular machine,
> > etc.
> That's way harder to do in an optimized way, and that's probably why
> Atlas doesn't do it.
It's probably close to impossible at build time; you could certainly
come closer if you optimized on the fly. But this is a very difficult
problem, and Atlas may have already hit on the right combination of
automated build optimization vs. automated on-the-fly optimization
vs. expert knowledge optimization.
> I agree on this. What we still don't agree on is whether you can
> build an optimized package at all, since Atlas will optimize it for
> the machine where it got built, and the optimizations it does will
> potentially make performance worse on another machine...
You should be able to select a set of optimizations for Atlas to apply
that are relatively conservative and work across a reasonable swath of
machines, even if that means not squeezing every last iota of
performance out of some hardware. Where to strike the balance (between
the number of packages, the number of machines with a particular
hardware architecture, and so on) is something the maintainer can
decide. I think most people doing HPC are running relatively new
hardware with relatively high-end CPUs, so erring toward that end is
probably reasonable.
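For concreteness, a conservative binary package might be built from
Atlas's stored architectural defaults rather than a full search on the
build machine. Treat the flags below as assumptions from memory of
3.8/3.10-era configure options; confirm every one against
`./configure --help` for the version actually being packaged:

```shell
# Hedged sketch of a "conservative" ATLAS build, not a tested recipe.
#   -b 64         : 64-bit build
#   -A <arch>     : name a target architecture instead of probing the
#                   build host (<arch> left as a placeholder on purpose)
#   -Si archdef 1 : use ATLAS's stored architectural defaults rather
#                   than running the full machine-specific search
mkdir build && cd build
../configure --prefix=/usr -b 64 -A <arch> -Si archdef 1
make build
```

The stored defaults give up some peak performance on any one machine in
exchange for predictable behavior across the whole architecture family,
which is the trade-off argued for above.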
In any event, whether this means one package plus documentation on how
to build more, or an auto package that helps with building, either is
fine by me, so long as we're not doing debconf prompting by default.
"There's no problem so large it can't be solved by killing the user
off, deleting their files, closing their account and reporting their
REAL earnings to the IRS."
-- The B.O.F.H.