
Re: unexpected NMUs || buildd queue



On Sun, Jul 18, 2004 at 09:32:39AM +0200, Goswin von Brederlow wrote:
> Wouter Verhelst <wouter@grep.be> writes:
> > The time required for running *one* of those is, indeed, laughable. The
> > time required for running those "regularly", whatever that means, isn't;
> > especially not on a system where CPU time is the primary resource and
> > thus has to be considered scarce by design. You're interested in
> > efficiency, not in time. To put it otherwise, the important question is
> > "what percentage of our non-idle time is spent on the actual
> > dpkg-buildpackage run?"
> 
> Relevant is how much time is idle. As long as there is a large amount
> of idle time left for future growth, I don't see a problem.
> 
> But not all archs have that. So we do care about reducing overall
> build time. If we can save 10 minutes per build but waste 1 minute
> preparing the chroot for the build, we still have a 9-minute saving.
> But can we? A successful build will always take the same amount of
> time in dpkg-buildpackage no matter what. For those cases you are
> right: reducing the time not spent in dpkg-buildpackage is the only
> way to speed things up.
> I guess we have to implement it and compare different configurations
> for the chroot handling to judge the overhead produced by different
> methods. There is not much point arguing about it now since it's mainly
> just statistics (which we don't have).

Fair point.

> But what we can reduce, and what also eats up a lot of time, is the
> time spent on build failures. If we can avoid a build failure (like
> one resulting in dep-wait) by setting the package to dep-wait directly,
> we win a lot. (Far more than the overhead of chroot maintenance, I
> think.)

Agreed, but that's orthogonal to the issue at hand; scheduling packages
more efficiently doesn't suddenly give you time available to waste...
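As a rough illustration of the dep-wait idea being discussed: before
dispatching a build, a server could compare the version a Build-Depends
entry demands against what the archive carries, and mark the package
dep-wait instead of letting the build fail. This is only a sketch, not
multibuild's actual logic; the function name and versions are invented,
and the dpkg call is guarded so the sketch is harmless elsewhere.

```shell
# dep_check WANTED AVAILABLE: decide between dispatching a build and
# putting the package into dep-wait, using dpkg's version comparison.
dep_check() {
    wanted="$1"; available="$2"
    if command -v dpkg >/dev/null 2>&1; then
        if dpkg --compare-versions "$available" ge "$wanted"; then
            echo "dispatch"
        else
            echo "dep-wait: need >= $wanted, archive has $available"
        fi
    else
        echo "sketch only: dpkg not available"
    fi
}

# Example: a Build-Depends entry wants >= 2.4-1, the archive has 2.3-2,
# so the package would go straight to dep-wait with no chroot touched.
dep_check "2.4-1" "2.3-2"
```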

> The multibuild server is designed to reduce the work wasted on the
> buildd clients or on buildd admins. The main change there is tracking
> of Build-Depends and Depends so as not to try to build sources that
> will fail and have to be put into dep-wait by the admin. But there are
> some other things planned.
> 
> Avoiding a build for gnome or kde packages, or building them in the
> right order with proper delays for dinstall runs in between, will save
> hours of wasted build time for m68k (less for others), avoid a huge
> delay until the admin comes around, and save the admins time. For
> several archs the admin time seems to be the limiting factor (not for
> m68k, true).

That's their own fault. Look up "AddPkg" in the buildd source some day;
it creates a packages repository in ~buildd which allows freshly built
packages to be installed if they're needed for future builds, even
before they've gone through dinstall. On m68k we don't use that anymore
because it's a waste of disk space for little gain, but on an
architecture with one or only a few buildd machines...
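For the curious, the mechanism is roughly this: freshly built .debs go
into a local pool, a Packages index is regenerated, and apt in the build
chroot is pointed at it. The sketch below shows the general technique,
not buildd's actual AddPkg code; the paths, function name, and the
[trusted=yes] option are illustrative assumptions, and the
dpkg-scanpackages call is guarded for systems without dpkg-dev.

```shell
# make_local_repo DIR: maintain a flat local package repository that
# apt inside the build chroot can install from before dinstall runs.
make_local_repo() {
    repo="$1"
    mkdir -p "$repo"
    # after each successful build, the buildd would do: cp ../*.deb "$repo/"
    # regenerate the index apt reads (needs dpkg-dev; guarded here)
    if command -v dpkg-scanpackages >/dev/null 2>&1; then
        ( cd "$repo" && dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz )
    fi
    # sources.list entry to drop into the chroot's apt configuration
    echo "deb [trusted=yes] file:$repo ./" > "$repo/local.list"
}

make_local_repo "$HOME/buildd-repo"
cat "$HOME/buildd-repo/local.list"
```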

> >> Even creating the chroot from scratch with cdebootstrap is a matter of
> >> one to a few minutes nowadays.
> >
> > Not bothering to clean the chroot takes no time at all; and experience
> > tells me that buildd is fairly capable of maintaining a chroot which,
> > although not perfectly clean and up-to-date, can still be used to build
> > packages in.
> >
> > In fact, in the three years that I've been a buildd maintainer now, I
> > can remember only a handful of occasions where the buildd chroot had
> > become so badly broken that I had to intervene before buildd would start
> > to build again, and where the cause was *not* a power or hardware
> > failure.
> 
> But what about power failure? It's pretty hard to recover from
> that. You can't be sure the chroot is in good health, and even apt/dpkg
> can't be trusted to tell you,

Actually, that isn't true. The build-essential packages usually remain
on the system, so you can be quite sure that they'll still be intact.
If the system was installing or removing packages at the time of the
power failure, you just need to check up on any packages that are not
build-essential.

> especially without a journaling FS, or with large disk caches that
> might have been lost.

Not having journaling filesystems doesn't help, indeed, but that's not
the problem.
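The check described above can be sketched concretely: trust that
build-essential survived, then audit everything else. dpkg --audit and
debsums are real Debian tools, but the chroot path and function name
below are illustrative assumptions, and the guards make the sketch a
no-op on systems where the tools are missing.

```shell
# audit_chroot DIR: post-power-failure health check of a build chroot.
audit_chroot() {
    chroot_dir="$1"
    if command -v dpkg >/dev/null 2>&1; then
        # list packages left half-installed or half-configured by the crash
        dpkg --root="$chroot_dir" --audit || true
    fi
    if command -v debsums >/dev/null 2>&1; then
        # verify checksums of installed files; -s reports errors only
        debsums --root="$chroot_dir" -s || echo "checksum mismatches found"
    fi
    echo "audit of $chroot_dir done"
}

audit_chroot /srv/chroot/sid    # illustrative chroot path
```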

[...]
> >> But that is easily gained by not failing a kde or gnome package build
> >> that installs 200MB of Build-Depends just to notice the installed
> >> version isn't good enough.
> >
> > I agree that there are some bugs in the system currently in use; this
> > is one of them (well, a design issue, really). It's not related to the
> > issues we're discussing, though.
> 
> Agreed. The statement that multibuild will build packages faster is
> based on the multibuild server design. That part greatly reduces the
> avoidable build failures, and will be the main selling argument in
> our eyes.

IC.

In that case, it'd be nice if you could create a wanna-build-compatible
interface, so that people preferring to keep using buildd (because they
know it better, if nothing else) can do so.

That doesn't even have to be through ssh; you could create a wrapper
which is installed on the buildd host and leave $ssh_cmd empty in the
buildd configuration.

It might even be nice to allow people to run the multibuild client with
wanna-build, I'd say...

> Compared to that, we are arguing about peanuts and non-existent
> statistics. It's all "I think it will be", so let's stop guessing and
> wait for it. OK?

OK :-)

[...]

-- 
         EARTH
     smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
         WATER
 -- with thanks to fortune
