
Re: unexpected NMUs || buildd queue



On Sat, Jul 17, 2004 at 09:46:33PM +0200, Goswin von Brederlow wrote:
> Wouter Verhelst <wouter@grep.be> writes:
> 
> > On Sat, Jul 17, 2004 at 08:08:21PM +0200, Thiemo Seufer wrote:
> >> Alternative: Keep a clean unstable chroot tarball around and update
> >> it regularly.
> >
> > Waste of resources. buildd doesn't crash every day (luckily), and
> > updating a chroot tarball requires quite a bit of resources (in CPU time
> > and disk buffers): "untar+gunzip tarball, chroot (which loads a number
> > of binaries to memory, thereby pushing other stuff out of the disk
> > buffers that are used to do useful things with the system), apt-get
> > update (which requires gunzip and some relatively cpu-intensive parsing
> > as well), apt-get upgrade (which could fail or loop), exit the chroot
> > and tar+gzip"
> 
> Laughable for any recently built system.

The time required for running *one* of those is, indeed, laughable. The
time required for running those "regularly", whatever that means, isn't;
especially not on a system where CPU time is the primary resource and
thus has to be considered scarce by design; you're interested in
efficiency, not in raw time. To put it another way, the important
question is "what percentage of our non-idle time is spent on the actual
dpkg-buildpackage run?"
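
For concreteness, the refresh cycle quoted above could be scripted
roughly like this. This is only a sketch: the tarball here is a tiny
stand-in (a real template comes out of (c)debootstrap), and the in-chroot
apt-get steps are shown as comments since they need root:

```shell
set -e
WORK=$(mktemp -d)
TARBALL="$WORK/unstable.tar.gz"

# Stand-in for the template chroot (in reality: a debootstrap'd tree).
mkdir -p "$WORK/chroot/etc/apt"
echo "deb http://ftp.debian.org/debian unstable main" \
    > "$WORK/chroot/etc/apt/sources.list"
tar -czf "$TARBALL" -C "$WORK/chroot" .

# --- the refresh cycle itself ---
UNPACK=$(mktemp -d)
tar -xzf "$TARBALL" -C "$UNPACK"          # untar+gunzip the template
# chroot "$UNPACK" apt-get update         # cpu-intensive parsing
# chroot "$UNPACK" apt-get -y upgrade     # could fail or loop
tar -czf "$TARBALL.new" -C "$UNPACK" .    # tar+gzip it back up
mv "$TARBALL.new" "$TARBALL"
rm -rf "$UNPACK"
```

Note that even with the apt-get steps elided, every run pays for a full
decompress and recompress of the whole tree, which is exactly the disk
buffer and CPU cost being argued about here.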

> Even creating the chroot from scratch with cdebootstrap is a matter of
> one to a few minutes nowadays.

Not bothering to clean the chroot takes no time at all; and experience
tells me that buildd is fairly capable of maintaining a chroot which,
although not perfectly clean and up-to-date, can still be used to build
packages in.

In fact, in the three years that I've been a buildd maintainer now, I
can remember only a handful of occasions where the buildd chroot had
become so badly broken that I had to intervene before buildd would start
to build again, and where the cause was *not* a power or hardware
failure.

I do remember many horror stories of (c)debootstrap failures, though
(although admittedly fewer cdebootstrap problems).

> And for m68k considering disk buffers as problem is a joke. The 128 MB
> ram will have been flushed and reflushed just by installing/purging
> those 200MB Build-Depends of the last gnome or kde build.

That's not the issue.

If you're building something which uses a lot of build-time
dependencies, disk buffers are more than just a nice-to-have (otherwise
the system will need to read stuff off the hard disk every time you
#include some header file).

If you start untarring and apt-get upgrade'ing and the like while such a
package is building, you slow down your build. Doing that to one build
doesn't really matter; doing that all the time will severely reduce the
efficiency of your system.

I was just trying to show how you always lose a lot of time: either you
lose it because the whole system is waiting for apt-get upgrade, or you
lose it because you reduce the efficiency of the disk buffers.

And no, I don't think that time is negligible just because *one*
"apt-get update; apt-get dist-upgrade" run takes up less than half a
minute.

> You also can't count the time the apt-get itself takes since with the
> current setup you do exactly the same calls to update the system.

Yes, but only once; once it's updated, it stays updated.

In a cloned chroot scenario, you either need to update your template
chroot between builds (which increases the risk of ending up with a
broken chroot, and increases the time it takes to start a build,
reducing the efficiency of your system), or you risk having to update a
certain package which is pulled in by a common build-dependency on each
and every build you do.

Either way, I think your scenarios all result in a less efficient build
system.

> So the difference is untar/gzip and tar/gzip. Yes, they can take some
> time on m68k.

They take the same percentage of time away from builds on /every/
architecture. Wasting time isn't an issue because your processor is
slow; it's an issue because the processor is the resource you're trying
to use as efficiently as possible.

It's just more visible on m68k, that's all.

> But that is easily gained by not failing a kde or gnome package build
> that installs 200Mb Build-Depends just to notice the installed version
> isn't good enough.

I agree that there are some bugs in the system currently in use; this is
one of them (well, a design issue, really). It's not related to the
issues we're discussing, though.

> >> > There are probably more things I could come up with, but I didn't try
> >> > hard. Wiping out and recreating the buildd chroot isn't an option.
> >> > Neither is creating a new one alongside the original, unless the disk
> >> > space requirements are a non-issue (which isn't true for some of our
> >> > archs).
> >> 
> >> Worst case would be to stop the buildd in such a condition. 
> >
> > You're advocating manual cleanup again here :-P
> 
> Yes. Better than keeping on building with a broken system, as is done now.

I guess this is where we differ in opinion.

I do not consider a chroot where "apt-get build-dep foo; apt-get -b
source foo" succeeds to be broken (no, I specifically do not care about
uninstallation). Only if that fails for reasons specific to the chroot
do I agree it is broken.

What buildd does, if there's a build-time dependency that cannot be
uninstalled, is simply not bother and continue building with the
superfluous dependency installed. That's far more efficient, IMO.

(not that the above command sequence is what sbuild actually does, but
you know what I mean)
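
That usability criterion could be sketched as a hypothetical helper like
the one below (the real sbuild logic is more involved; the function name
and the command-prefix convention are my own illustration):

```shell
# A chroot counts as usable if installing build-deps and building both
# succeed; leftover, non-removable build-deps are deliberately ignored.
# "$1" is a command prefix (e.g. "chroot /srv/unstable"), "$2" a package.
chroot_is_usable() {
    $1 apt-get --assume-yes build-dep "$2" || return 1
    $1 apt-get --assume-yes -b source "$2" || return 1
    return 0
}
```

The point is that failure of either step, not the presence of leftover
packages, is what marks the chroot as broken.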

[...]
> Here another feature of multibuild comes to mind.
> 
> Multibuild keeps track of the build times of packages and the relative
> speeds of buildds. Multibuild can then guess the expected build time
> for a package. If that time is exceeded by a sizeable margin the
> buildd admin can be notified and on inaction the package will be
> returned so another buildd can have a shot at it.
> 
> The same goes for packages getting stuck in other temporary states,
> like being in state uploaded for a week.

Hmm. These sound cool.

> Packages that have finished being built will remain in the buildd
> admin's control only for a limited time before getting assigned to a
> pool for that arch, or maybe even a general pool of all buildd admins.
> Packages that aren't handled by the buildd admin for some reason (like
> sickness) then get processed by any admin having some spare time to
> process the pool.

I don't like this one as much, though. Oh well; maybe it's just me.

-- 
         EARTH
     smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
         WATER
 -- with thanks to fortune
