
Re: Binaryless uploads [Was: FTBFS: architecture all packages]



Wouter Verhelst <wouter@grep.be> writes:

> On Mon 18-08-2003, at 04:38, Joe Wreschnig wrote:
> > > > Because if it doesn't build, it doesn't enter the archive, because there
> > > > are no packages to enter the archive.
> > > 
> > > Firstly, that's just wrong. The source package would enter the archive
> > > and fail to build - so you'd have a package which was out of date on
> > > _every_ architecture.
> > 
> > Under the current system. I'm envisioning one where the buildds aren't
> > tied so closely to the archive, and so it won't enter the archive until
> > it's been built by at least one (or possibly all that it claims to
> > support).
> 
> Then what are you suggesting? I don't see how we could avoid that.

See below.

> > What causes FTBFS bugs now:
> > 1. Bad Build-Depends.
> > 2. Broken code that doesn't port (either to another architecture, to
> > something that's not the maintainer's home directory, whatever).
> 
> 3. Architectures being out of sync. Which is a special case of 1., but
> ...

That only applies to packages that will be built by the buildds, which
excludes binary-all packages and uploads that already include binaries
for every architecture.

> > All of 1 and many of 2 would be solved by binaryless uploads. A new
> > type, "maintainer never tried to build package ever (and so debian/rules
> > has a syntax error or debian/control has a bad version or whatever)"
> > would be introduced - but such packages would be blocked from entering
> > the archive anyway.
> 
> Yeah, but 11 buildd machines (one for every architecture) would try to
> build it, and fail. That's a waste of precious CPU cycles.

But currently 10 buildds would fail and one (usually i386) would have
the prebuilt deb. I think i386 is the arch with the most CPU cycles
to spare (assuming DDs wouldn't start uploading sources they have
never built).

> > Plus, I would like to think that most maintainers
> > are smart enough to try building their packages at least *once* before
> > uploading them.
> 
> rotfl.

That's a bit like saying all maintainers should be smart enough to
build in a clean chroot. But a chroot is too much trouble for many.
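
(For anyone who wants to try anyway, a minimal sketch using pbuilder;
the package name is illustrative:)

    # create the base chroot once (run as root)
    pbuilder create --distribution sid
    # then build a source package inside a pristine copy of it
    pbuilder build foo_1.0-1.dsc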

> > In this particular case (webmin), binaryless uploads *would* solve the
> > problem. Since the webmin maintainer couldn't manually do whatever magic
> > he does when he builds the package, he'd have to work it into the build
> > scripts that the buildds - and users - use.
> 
> It would solve one problem at the expense of creating another.
> 
> > > Rather, what happens is that the package builds, or not, depending on
> > > a set of (usually complicated) criteria. These change over time as the
> > > rest of the packages in the archive change. Just because a package
> > > happened to build at a given time on a given system, does not mean
> > > that it will always build on all systems in the future. It's not all
> > > that unusual for a package to build on some of the buildds and fail on
> > > others. At this point the whole thing falls apart.
> > 
> > Right. But I'm not talking about the current archive and buildd
> > relationship, so it doesn't matter.
> 
> Actually, it does. I'd like to see a suggestion from you where
> * The buildds aren't linked as "tightly" to the archive as you suggest
>   they are.
> * The buildds are still able to compile packages against *current*
>   unstable. Not that of yesterday, not that of two weeks ago, that of
>   *today*.
> 
> Until you can come up with one, I suggest you leave everything as it is.

Uploads would go to an "unbuilt" queue when accepted. The buildds
would grab them there, build them, and upload the binaries. Only then
would the package go into sid. Alternatively, sources could stay in
incoming until a buildd has built them.
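
In rough outline (queue names are illustrative):

    source-only upload --> accepted --> "unbuilt" queue
    buildds fetch from "unbuilt", build, upload binaries
    >= 1 successful build --> source + debs enter sid
     0 successful builds --> nothing enters the archive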

I think requiring at least one buildd build before a package can go
into testing would be a good idea. But that would require rebuilding
binary-all packages, at the least.

> > If at any point a package doesn't build on an architecture that it a)
> > used to, and b) still claims to, that's a bug. Both under the current
> > system, and one of binaryless uploads. The change is how far they get
> > before the bug is noticed. Currently they can make it all the way to
> > testing, and maybe even stable.
> 
> Correct. Still, you're assuming an environment that does not change,
> which is an incorrect assumption.
> 
> Let me give you a real-life example: When I first uploaded doc-linux-nl
> -- an arch:all package -- I made sure the build-dependencies were
> correct, by using pbuilder. As such, when I uploaded the package,
> everything worked perfectly.
> 
> However, a few months later, I received a bug report that the package
> didn't build from source. Surprised, I tried building it under an
> up-to-date pbuilder chroot, and found out that, indeed, it didn't build
> anymore.

You can't prevent that without adding a Magic 8-Ball to the Debian
systems. :)

But do you think the reverse will happen? That the maintainer has a
broken package, but the buildds have different versions installed
that make it build correctly? At least one of the buildds should be
as up to date as the maintainer at all times, if not more current.
And updates usually break builds rather than fix them.

> What had happened? The dependencies of one of my build-dependencies
> had changed: a package that used to be a dependency (groff) had been
> demoted to a Recommends. Since I used functionality provided by
> groff, the package did not build from source anymore. The fix,
> obviously, was to add groff to the build-depends, rebuild, and
> upload.
> 
> Your suggestion will *not* prevent such things from happening again:
> at the time my package was uploaded, everything was perfectly all
> right, so had you tried to build it before letting it into the
> archive, you wouldn't have found any problem. However, had I not
> received this bug report, the package could have gone into stable,
> unbuildable.

Which isn't the point here.
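
(For the record, the fix Wouter describes is a one-line change in
debian/control. The snippet below is hypothetical, reconstructed from
his description; the other build-dependency is made up:)

    Build-Depends: debhelper (>= 4), groff

Making groff an explicit build-dependency means the build no longer
relies on it being pulled in indirectly by some other package.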

Regards,
        Goswin


