Re: unexpected NMUs || buildd queue
Wouter Verhelst <firstname.lastname@example.org> writes:
> On Sun, Jul 18, 2004 at 08:51:30AM +0200, Goswin von Brederlow wrote:
>> Wouter Verhelst <email@example.com> writes:
>> The buildd will update build-essential and a buildd-essential package
>> when cloning the template to ensure builds are always done with
>> current core packages. The multibuild buildd could compare the version
>> of buildd-essential from the template and the updated one and notify
>> the buildd admin if they differ too much (e.g. a differing Debian
>> revision is fine, but a new major version triggers a notification).
> I would recommend against doing this, because it introduces another
> potential problem (and you don't need that). Usually, slightly outdated
> toolchains don't really matter; and when they do, buildd admins usually
> know about this and should update the toolchain anyway.
Depends: libc6-dev | libc-dev, gcc (>= 3:3.3), g++ (>= 3:3.3), make, dpkg-dev (>= 126.96.36.199)
The version requirements are very loose. Similarly loose versions would
be in buildd-essential. I wouldn't expect either package to update
very often, but the possibility is there.
The next change I expect would be amd64 switching to gcc-3.4 as
default compiler. If that happens it becomes important that no buildd
compiles with gcc-3.3 anymore.
FYI: buildd-essential Depends on dh-buildinfo, fakeroot, devscripts,
lintian (from memory). Nothing major.
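The version check sketched above could look roughly like this (a Python
sketch; buildd-essential and the "major version" threshold come from the
mail, but the exact rule — epoch or upstream change notifies the admin,
a revision-only change doesn't — is my reading):

```python
def split_debian_version(version):
    """Split "epoch:upstream-revision" into (epoch, upstream, revision)."""
    epoch, sep, rest = version.partition(":")
    if not sep:                      # no epoch present
        epoch, rest = "0", version
    upstream, sep, revision = rest.rpartition("-")
    if not sep:                      # native package, no Debian revision
        upstream, revision = rest, ""
    return epoch, upstream, revision

def needs_admin_notice(template_version, updated_version):
    # A changed Debian revision alone is fine; a changed epoch or
    # upstream version should notify the buildd admin.
    return (split_debian_version(template_version)[:2]
            != split_debian_version(updated_version)[:2])
```

In practice one would lean on dpkg's own comparison (`dpkg
--compare-versions`) instead of string equality, but the shape of the
check is the same.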
>> > In case of the tar.gz template, what happens when - say - a bug exists
>> > in a postrm script of one of the GNOME packages, resulting in a number
>> > of multi-gigabyte chroots laying around on the disk?
>> A failure in purging the Build-Depends doesn't mean the build has
>> failed.
> No, obviously not, but then that was only an example :-)
>> Normally, if a build succeeds, you don't keep a copy of the chroot,
>> and a cleanup failure will recreate the chroot from the template. On
>> the other hand, you might want to keep the chroot on build failures,
>> but that happens before the cleanup, so postrm won't be called. You
>> get those multi-gigabyte chroots lying around even with a working
>> postrm, if you so choose.
>> A normal configuration (what I consider normal) would not keep the
>> chroots around. Only on build failure is the source build tree kept,
>> together with a log of what was used in the chroot. A tool to
>> recreate a chroot by looking at a buildd log and using
>> snapshots.debian.net is on my todo list, and in my opinion that is a
>> better solution than keeping broken chroots around.
> OIC. Of course, theoretically the build could fail because it started
> modifying loads of stuff in the chroot, resulting in the new chroot
> behaving differently, but I understand why you wouldn't want to even
> consider that issue :-)
The chroot could have / and /usr read-only and just have /tmp, /var
and /buildd (or wherever we put it) read/write. That would make it even
harder to mess up than using fakeroot does.
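Such a layout could be expressed as read-only bind mounts, e.g. in an
fstab fragment like this (a sketch; all paths are assumptions, and on
older kernels/util-linux a read-only bind needs a separate
bind-then-remount step rather than a single bind,ro entry):

```
# Template mounted read-only, scratch areas writable (hypothetical paths)
/srv/template         /srv/chroot          none  bind,ro  0 0
/srv/scratch/tmp      /srv/chroot/tmp      none  bind     0 0
/srv/scratch/var      /srv/chroot/var      none  bind     0 0
/srv/scratch/buildd   /srv/chroot/buildd   none  bind     0 0
```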
>> Another thing is that multibuild plans include giving packages with
>> similar build-depends preferably to one buildd. That should optimize
>> (minimize) downloading of Build-Depends. Multibuild is also meant to
>> give packages within a build-depends chain to one buildd, which can
>> use the local debs (probably only post-signing) instead of having to
>> wait for a dinstall run.
> Hm. It makes sense to give them to the same buildd, but it doesn't make
> sense to wait 'till one of the packages has been signed; dinstall runs
> every 15 minutes, and the buildd machines have instant access, so then
> it might end up faster on a different machine.
The build could be done preemptively but only be accepted for uploading
if the Build-Depends get signed as well. If a Build-Depends package is
failed after being maybe-successful, the build would be thrown away.
It depends on how much AI programming someone is willing to add to the
multibuild server. That is something to see over time.
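The accept/discard rule above boils down to a tiny decision function (a
sketch; the state names "maybe-successful", "signed" and "failed" follow
the mail's wording, the interface itself is my invention):

```python
def resolve_preemptive_build(dep_states):
    """Decide the fate of a build done preemptively against locally
    built Build-Depends.

    dep_states: dict mapping each locally built Build-Depends package
    to one of "maybe-successful", "signed" or "failed"."""
    if any(state == "failed" for state in dep_states.values()):
        return "discard"        # rebuild later against real packages
    if all(state == "signed" for state in dep_states.values()):
        return "accept-upload"  # all local debs made it into the archive
    return "hold"               # wait for the remaining signatures
```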
>> The multibuild client could implement purging of only those packages
>> that are not needed for the next build. After each build the purge
>> function for the chroot type is called and gets a list of future
>> builds as a parameter. Currently that is completely ignored (I always
>> wipe the chroot and untar a new one in my test config), but I had
>> exactly that feature in mind when designing the buildd <-> chroot
>> interface. Think about the time saved for m68k by not purging and
>> reinstalling gnome between two gnome package builds.
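That purge interface amounts to a set difference (a sketch; the
function name and argument types are my assumptions, not multibuild's
actual API):

```python
def packages_to_purge(installed, future_builds):
    """installed: set of Build-Depends packages currently in the chroot;
    future_builds: list of sets, the Build-Depends of each queued build.

    Returns the packages no upcoming build needs, so e.g. gnome stays
    installed between two gnome package builds."""
    still_needed = set().union(*future_builds) if future_builds else set()
    return installed - still_needed
```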
>> But back to your question again: say you do keep a copy (tar.gz) of
>> the chroots around after a failure. Hopefully gnome packages will be
>> built in a batch and the postrm will only be called once.
> Oh. Have a look at that.
> Hmm. You've cleared a lot up with this mail, and taken away some of my
> scepticism. That doesn't mean I'll immediately start using it once it's
> ready, but that's a different matter.
Conversion will take time, but we will get you eventually. :) It is
also unclear how multibuild and wanna-build are going to interact.
Someone could write a component that handles multibuild-client <->
wanna-build interactions. The states and commands of both are similar
and could be mapped without losing much.
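Such a mapping could be as simple as a lookup table (illustrative only:
the mail just says the states are similar, so the multibuild-side names
here are invented; the right-hand side lists classic wanna-build
states):

```python
# Left side: hypothetical multibuild states. Right side: wanna-build's
# well-known states.
MULTIBUILD_TO_WANNA_BUILD = {
    "queued":   "Needs-Build",
    "taken":    "Building",
    "built":    "Built",
    "uploaded": "Uploaded",
    "failed":   "Failed",
    "dep-wait": "Dep-Wait",
}

def to_wanna_build_state(mb_state):
    # Raise KeyError for an unmapped state instead of guessing.
    return MULTIBUILD_TO_WANNA_BUILD[mb_state]
```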
Or someone writes code that makes the multibuild server interact with
wanna-build: ask wanna-build for the needs-build queue to set up the
multibuild queue, and take explicit packages (the top one from the
multibuild queue) from wanna-build when someone requests a package.
Either of those would allow running the old and the new buildd for an
arch side by side and comparing both. We will see.