
Re: unexpected NMUs || buildd queue



On Sat, Jul 17, 2004 at 08:35:18PM +0200, Goswin von Brederlow wrote:
> Wouter Verhelst <wouter@grep.be> writes:
> > On Sat, Jul 17, 2004 at 10:58:57AM +0200, Goswin von Brederlow wrote:
> >> Wouter Verhelst <wouter@grep.be> writes:
> >> > In any case, buildd doesn't write to disk what it's doing (the
> >> > build-progress file is written by sbuild), so if it's aborted
> >> > incorrectly (i.e., it doesn't have time to write a REDO file), that
> >> > information gets lost.
> >> >
> >> > That's probably a bug, but once you know about it, it's easy to work
> >> > around (it just means you have to clean up after a crash, but you have
> >> > to do that anyway, so...)
> >> 
> >> Which is one of the things that's really screwed up in the
> >> buildd/sbuild combination.
> >
> > What's your alternative?
> >
> > You have to clean out the chroot anyway when the system goes down
> > unexpectedly, or anything horrible might happen. The alternative would
> > be to clean out and rebuild the chroot automatically -- don't tell me
> > multibuild tries to do that?
> 
> That depends on which method of chroot cleaning / regeneration is
> configured.
> 
> One option is to have a template chroot (as a tar.gz, for example) and
> to untar it anew for every build. Cleaning is then a simple rm.
> 
> Another option is to have an LVM volume and make a new snapshot of it
> for every build. Cleaning removes the snapshot.
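
(Just to make sure I understand the clone/clean cycle you mean, here's
a rough Python sketch of it from my side -- every path, volume name and
size in it is made up, none of this is taken from multibuild:)

    import os
    import shutil
    import subprocess
    import tarfile

    TEMPLATE_TARBALL = "/srv/buildd/template-sid.tar.gz"  # hypothetical
    TEMPLATE_LV      = "/dev/vg0/chroot-sid-template"     # hypothetical
    BUILD_CHROOT     = "/srv/buildd/chroots/build-1234"   # hypothetical

    def clone_tarball_template():
        # "untar that for every build anew"
        os.makedirs(BUILD_CHROOT, exist_ok=True)
        with tarfile.open(TEMPLATE_TARBALL) as template:
            template.extractall(BUILD_CHROOT)

    def clean_tarball_clone():
        # "Cleaning is a simple rm."
        shutil.rmtree(BUILD_CHROOT)

    def clone_lvm_template():
        # "make a new snapshot of it for every build"
        subprocess.run(["lvcreate", "--snapshot", "--size", "4G",
                        "--name", "build-1234", TEMPLATE_LV], check=True)
        os.makedirs(BUILD_CHROOT, exist_ok=True)
        subprocess.run(["mount", "/dev/vg0/build-1234", BUILD_CHROOT],
                       check=True)

    def clean_lvm_clone():
        # "Cleaning removes the snapshot."
        subprocess.run(["umount", BUILD_CHROOT], check=True)
        subprocess.run(["lvremove", "--force", "/dev/vg0/build-1234"],
                       check=True)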

When and how are those template chroots or volumes updated? What steps
are taken to ensure those updates don't consume too many resources,
while still keeping the chroots (reasonably) up-to-date?

In case of the tar.gz template, what happens when - say - a bug exists
in a postrm script of one of the GNOME packages, resulting in a number
of multi-gigabyte chroots lying around on the disk?

> The current way corresponds best to having a fixed chroot and cleaning
> via debfoster.

Sorry, parse error. Do you mean to say that this is what multibuild does
by default currently? If not, I'd appreciate it if you could elaborate a
bit.

> There is also the possibility of rebuilding the chroot from scratch
> (which calls cdebootstrap and a few extra commands to configure the
> chroot).

I'd hope this is not the default, unless you've given up on
outperforming buildd/sbuild ;-P

> The buildd also has two levels of chroot creation:
> 
> 1. bootstrapping a new template (which is usually done with
> cdebootstrap but could be untarring a meta template and updating it)
> 
> 2. cloning a template for a specific build (which means untarring,
> making a snapshot, or linking the static template into the right place)
> 
> Under normal operation the buildd just clones a new chroot for every
> build and removes it afterwards (debfoster and unlink in the static
> case). If a chroot failure is detected (like repeated failures to
> install or purge packages) the buildd will try to bootstrap a fresh
> template, and might stop if that fails or doesn't help either.
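
(If I read that right, the control flow would be roughly the following
-- again a Python sketch on my part; the function names, mirror, paths
and failure threshold are all invented, not multibuild's actual code:)

    import shutil
    import subprocess

    MIRROR   = "http://ftp.debian.org/debian"  # hypothetical mirror
    TEMPLATE = "/srv/buildd/template-sid"      # hypothetical template dir
    MAX_CHROOT_FAILURES = 3                    # invented threshold

    class ChrootFailure(Exception):
        """Repeated failures to install or purge packages in a chroot."""

    def bootstrap_template():
        # Level 1: bootstrap a fresh template (usually via cdebootstrap),
        # then run "a few extra commands to configure the chroot".
        subprocess.run(["cdebootstrap", "sid", TEMPLATE, MIRROR],
                       check=True)

    def clone_template(build_id):
        # Level 2: clone the template for one specific build.
        clone = "/srv/buildd/chroots/%s" % build_id
        shutil.copytree(TEMPLATE, clone, symlinks=True)
        return clone

    def build_in_chroot(package, chroot):
        raise NotImplementedError  # stand-in for the sbuild-style build

    def run_queue(queue):
        failures = 0
        for package in queue:
            chroot = clone_template(package)
            try:
                build_in_chroot(package, chroot)
                failures = 0
            except ChrootFailure:
                failures += 1
                if failures >= MAX_CHROOT_FAILURES:
                    # The chroot looks broken: bootstrap a fresh template
                    # and stop if even that does not work.
                    try:
                        bootstrap_template()
                        failures = 0
                    except subprocess.CalledProcessError:
                        break
            finally:
                shutil.rmtree(chroot, ignore_errors=True)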

... possibly resulting in a buildd which doesn't do shit for 9 hours.
Or so. The buildd approach (a package that fails to uninstall simply
isn't uninstalled anymore) is far more effective at avoiding that
issue, I think. If not, at least it doesn't waste CPU and starts
idling sooner (so it shows up in the logs sooner, too).

> Most systems will no longer suffer from install/purge problems from
> one build being dragged into the next.

Well, that doesn't really happen all that often with buildd either. If
things break down at uninstall or purge time, buildd simply doesn't
care; it leaves the packages installed in the chroot and goes on to
build the next package. Fast and simple; no time is wasted trying to
clean up. Granted, every once in a blue moon the chroot breaks more or
less because of a postinst bug, but it doesn't happen often enough
that I'd want to waste CPU time and disk buffers on useless stuff such
as "recreating a buildd chroot from scratch, because we *think* it
might be broken". That sounds almost like the "Format C:" strategy
many would-be computer experts practice far too often.
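
(For contrast, buildd's attitude boils down to something like this; the
real code is Perl and differs in detail, this is just my paraphrase,
and the apt-get invocation is an assumption on my part:)

    import subprocess

    def remove_build_deps(chroot, packages):
        for pkg in packages:
            result = subprocess.run(
                ["chroot", chroot, "apt-get", "--yes", "purge", pkg])
            if result.returncode != 0:
                # Purge failed? Leave the package installed, note it,
                # and go build the next thing. No recreating from
                # scratch.
                print("leaving %s installed in %s" % (pkg, chroot))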

Frankly, all this trouble to get a clean chroot seems a bit excessive to
me. There's nothing requiring us to build in a perfectly clean chroot,
you know; all buildd does is make sure the build-depends and
build-conflicts are fulfilled. What more do you need?
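
(In practice that check is not much more than what dpkg-checkbuilddeps
does -- here's one way to express it; the chroot path and the idea of
calling dpkg-checkbuilddeps are mine, not necessarily what sbuild does
internally:)

    import subprocess

    def build_deps_ok(chroot, source_dir_in_chroot):
        # dpkg-checkbuilddeps reads debian/control in the current
        # directory and exits non-zero if Build-Depends are unsatisfied
        # or Build-Conflicts are installed.
        result = subprocess.run(
            ["chroot", chroot, "sh", "-c",
             "cd %s && dpkg-checkbuilddeps" % source_dir_in_chroot])
        return result.returncode == 0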

Of course, avoiding broken chroots is cool; but you'll get those anyway.
If not because an install didn't work, then probably because debootstrap
or some upgrade failed. Why waste so many of your precious CPU cycles
trying to avoid something that will happen anyway?

> I expect using a tar.gz template will be the most commonly used
> config. Also, a buildd should stop before it runs amok and fails 200
> packages. That is the plan, anyway.

Hey, that'd be a cool feature, indeed.

> As a side note, the build dir will normally be mounted into the chroot
> and unmounted before cleaning up. So failed builds still remain when
> the chroot is just wiped.
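
(I assume that means a bind mount, roughly like this -- the paths are
invented, and the bind mount itself is my guess, not something you
said:)

    import os
    import subprocess

    BUILD_DIR = "/srv/buildd/build/foo-1.0"       # outside the chroot
    CHROOT    = "/srv/buildd/chroots/build-1234"  # the per-build clone

    os.makedirs(CHROOT + "/build", exist_ok=True)
    subprocess.run(["mount", "--bind", BUILD_DIR, CHROOT + "/build"],
                   check=True)
    try:
        pass  # ... run the build inside the chroot ...
    finally:
        subprocess.run(["umount", CHROOT + "/build"], check=True)
        # The clone can now be wiped; the (possibly failed) build tree
        # is still sitting in BUILD_DIR outside it.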

Sometimes builds fail because the phase of the moon wasn't right. I
don't think I want to see the build chroot wiped out too fast, but
that's probably just me.

Forgive me for being sceptical; I might sound negative, but I'm really
just interested in how you're dealing with some issues I ran into when
I had the "wonderful" idea of fixing the numerous bugs in what I
thought was a hackish bunch of scripts called buildd and sbuild. It
was only then that I realized how great some of their concepts are.
Which is not to say that their coding style is great, but that's a
different issue altogether ;-)

-- 
         EARTH
     smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
         WATER
 -- with thanks to fortune


