
What should an autobuilder do?



On Sun, May 27, 2001 at 03:21:45PM +0200, Marcus Brinkmann wrote:
> On Sun, May 27, 2001 at 09:13:32AM -0400, Steve M. Robbins wrote:
> > That "build-essential" packages must be installed is a fact,
> > i.e. "knowledge".  The rule to satisfy dependence on a virtual package
> > is also knowledge.  If you don't want to call this knowledge
> > "special", so be it.  I don't want to get hung up on definitions.
> 
> What you called knowledge is what I'd call an interface.  An interface
> should be defined, in this case policy defines it, and programs implement
> it.

[ ... ]
 
> > A clueful human might try reverting to autoconf2.13 if a package
> > failed to build.
> 
> There is no programmable interface which lets me know that autoconf2.13 is
> what I'd expect to be autoconf.  What you are proposing is the use of a
> non-existing interface which is not implemented.

Hey!  I wasn't proposing anything so concrete.  I was asking questions.
The main question is: what is the advantage of making the autobuilders
so brittle that they fall over at the slightest mistake, causing their
keepers a lot of extra work going through reams of failed build logs?


> The new autoconf would
> need a tag that points back to autoconf2.13 to make this work.  Such a tag
> doesn't exist, so we can't use it.

That's a neat trick!  You have shifted the debate by re-defining the
terms from "knowledge" to "interface", even after I explicitly said
that I don't want to quibble about definitions.  Now you argue against
a particular style of interface.

 
> > Why would you want autobuilders to be less smart?
> > Let's automate some of this stuff!
> 
> Such interfaces are easily cast in stone, and should not be added lightly, in
> a rush, just because of a single problem (which is more a problem of
> schedule rather than inherent technical issues anyway).  You are welcome to
> think this through and propose an interface, but we should not first
> implement it, then define it and afterwards see if it is really feasible.

I partly agree.  I think one shouldn't add "interfaces", or extra
control-file tags, willy-nilly.  The reason I believe this to be a bad
idea is that it is prone to featuritis: a tag may seem like a good idea
at first, and only afterwards does one find out that it is useful for,
say, a small number of packages while becoming a burden for a large
fraction of the others.

However, I think experimentation also has a role to play.  If careful
design were always enough, then interfaces would never need to change.
But they do, sometimes because "experience in the field" turns up
a defect that was never thought of at the design stage.

But I'm not interested in proposing an interface.  Rather, I'm curious
about how the autobuilders work, and how they might work better.

It occurs to me that there are two different roles that one might
legitimately desire an "autobuilder" to fulfill.  One role is to
populate the archive for a less popular architecture.  This is what
a user might expect the word "autobuilder" or "build daemon" to mean.
A user of, say, m68k should expect to be able to install binary
packages at will, even though most developers compile on, say, i386.
This will only happen if m68k packages are compiled automatically
at a reasonable pace.

A second function of the autobuilders, as they are currently
implemented, is an automated verification of the control information
--- notably, of the build-depends.  This is also a valuable service.

There is some tension between these two roles, of course.

As currently constituted, the build daemons simply fail when the
build-depends are not precise.  This hurts the function of populating
the binary archives.  On the other hand, if the daemon builds packages
in a robust manner (e.g. with a sufficiently rich set of packages
installed), incomplete build-depends will go undetected.  Indeed,
the main reason that build-depends are incomplete in the first place
is that the human builders *have* a large set of packages installed,
and that checking precisely which ones a build involves is a tedious
process.
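
For concreteness, here is a minimal sketch (in Python) of the first step
a strict daemon has to take: extract the declared Build-Depends so that
exactly those packages, plus build-essential, can be installed into a
pristine chroot.  The "Build-Depends" field and "dpkg -s" are real; the
single-line parsing and the standalone check below are simplifying
assumptions on my part.

    import subprocess

    def parse_build_depends(control_path):
        """Pull bare package names out of a Build-Depends field.

        Crude on purpose: assumes the field fits on one line, and drops
        version restrictions like (>= 2.13) and alternatives (a | b).
        """
        deps = []
        with open(control_path) as control:
            for line in control:
                if line.startswith("Build-Depends:"):
                    for entry in line.split(":", 1)[1].split(","):
                        name = entry.split("|")[0].strip()
                        if name:
                            deps.append(name.split()[0])
                    break
        return deps

    def installed(pkg):
        """True if dpkg reports the package as installed."""
        return subprocess.call(["dpkg", "-s", pkg],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0

    for dep in parse_build_depends("debian/control"):
        if not installed(dep):
            print("missing declared build-dependency:", dep)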

I'm not going to advocate raising one of these roles above the
other.  Both are important enough to do.  If they can't both
be performed simultaneously, then why not do them separately?

To be concrete: why not run one autobuilder per architecture with the
express purpose of populating that architecture's binary archives?
Particularly with a freeze looming, it would make sense (wouldn't it?) 
to get "testing" equalized across the board.  A second set of
autobuilders could be set up to run as they are now, to verify control
information like build-depends.
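
To sketch what that split might look like in practice (Python again; the
chroot paths and the reporting helper are invented, driving both passes
from one script is my own simplification, and "dpkg-buildpackage" is the
only real tool invoked):

    import subprocess

    def build_in(chroot, source_dir):
        """Run dpkg-buildpackage inside the given chroot; True on success."""
        cmd = ["chroot", chroot, "sh", "-c",
               f"cd {source_dir} && dpkg-buildpackage -us -uc"]
        return subprocess.call(cmd) == 0

    def report_incomplete_build_depends(source_dir):
        # Stand-in for filing a bug against the package.
        print("Build-Depends look incomplete for", source_dir)

    def build(source_dir):
        # Pass 1: a pristine chroot holding only build-essential plus the
        # declared Build-Depends, so any omission shows up as a failure.
        if build_in("/chroots/strict", source_dir):
            return "ok"
        report_incomplete_build_depends(source_dir)
        # Pass 2: a "rich" chroot with commonly forgotten packages already
        # installed, so the binary archive still gets populated.
        if build_in("/chroots/rich", source_dir):
            return "built, but Build-Depends need fixing"
        return "genuinely broken"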

An autobuilder that is intended to "just build stuff" would likely
need to be populated with a bunch of packages that are commonly used
and frequently omitted from build-depends.  It would also make sense
to be cognizant of the recent history of certain packages --- like,
for example, the package that triggered this discussion: autoconf.
For the near future, it is reasonable to just stick in autoconf2.13
and be done with it.  To keep the scheme going further out, one might
envision heuristics like the one I mentioned previously ("if it fails
with autoconf, re-try the build using autoconf2.13").
 
> Personally, I don't think such an interface is reasonable.  I don't want to
> try K packages, walking my way down in history, to see if some of these
> work.

What leads you to this feeling?  Personal aesthetics?  Or do you have
evidence that a large number of packages would require such treatment?
I don't have any evidence one way or the other, so I welcome experience
from those running autobuilders!

My expectation is that there are a small number of key tools for which
one might want to try an alternate version.  Autoconf is one.  A new
automake is on the way.  I saw two posts the other day from Brian May
suggesting that libtool 1.4 has exposed bugs in packages that use
"make install prefix=debian/tmp" rather than the more correct "make
install DESTDIR=debian/tmp".  (Overriding prefix at install time can
change paths that were baked in at compile time, whereas DESTDIR only
relocates the install step.  Conversely, some packages don't handle
DESTDIR correctly; sigh.)  I see "gcc-3.0", "gcc-2.95", "gcc272", and
"altgcc" all in the archive.  That suggests some packages may need
different compilers.

I don't expect a lot of packages to be on this list.  But, again, I
don't run an autobuilder, so I'd really like feedback from those who
do.  What *are* the common causes of build failures?  A few weeks
back, one post claimed it was mostly incomplete build-depends.
True?  What is the second largest category?


> Also, a failure might not lead to a compilation error and introduce
> subtle bugs.

Perhaps.

On the other hand, compiling an elderly package against a _newer_
library may also introduce or expose bugs (without a compile-time
error).  But this is done all the time.

I'd almost feel better rolling backwards, though.  Again, take
autoconf as an example.  Suppose the package build-depends on autoconf,
but the author made assumptions about the autoconf internals that
changed in 2.50.  The build will fail with autoconf 2.50.  Re-trying
the build using autoconf 2.13 is precisely the right thing to do.

Regards,
-Steve


-- 
by Rocket to the Moon,
by Airplane to the Rocket,
by Taxi to the Airport,
by Frontdoor to the Taxi,
by throwing back the blanket and laying down the legs ...
- They Might Be Giants


