
Re: update on binary upload restrictions



James Troup <james@nocrew.org> writes:

> Hi,
>
> 			       Summary
> 			       =======
>
> I've done some work in dak to improve the binary upload restrictions
> that are currently in place to hopefully reduce some of the collateral
> damage that resulted from the initial implementation.  Binary upload
> restrictions are now per-suite/component/architecture which means that
> uploads to experimental and/or non-free should no longer be affected.
> The current restrictions are listed below[1] for reference.
>
>
> 		 Why restrictions on binary uploads?
> 		 ===================================
>
> So there are several reasons why these restrictions have been put in
> place:
>
> (o) reproducibility
>
> It's vitally important that packages in our archive can be rebuilt on
> our buildds and not require a custom environment or source
> modifications or other special treatment.  When they can't, scaled
> across as many packages and architectures as we have, it makes the job
> of the security team nearly impossible.
>
> The best (and IMO, only) pragmatic way of doing this is to actually
> have built them on a real buildd.
>
> (o) logging 
>
> The build logs at buildd.debian.org are invaluable in trying to debug
> problematic builds.  Byhand builds and other unofficial builds often
> don't send an associated log to buildd.debian.org.

Which requires buildd.d.o not to remove logs from unofficial
sources. The amd64 logs did get removed when we sent them there for
the maintainers' convenience.

Another idea would be to include the build log from debuild in the
.changes file and enforce its presence. dak could then split the log
out and forward it to buildd.d.o.
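
A minimal sketch of how that split could look on the dak side, assuming
the log is shipped as an extra file ending in .buildlog and listed in
the Files: field of the .changes (the suffix, the paths and the
forwarding spool are all my assumptions, not existing dak behaviour):

    #!/usr/bin/env python
    # Sketch: split a build log out of a .changes upload so dak could
    # forward it to buildd.debian.org.  The ".buildlog" suffix and the
    # spool directory are assumptions, not existing dak behaviour.

    import os
    import shutil
    import sys

    LOG_SPOOL = "/srv/buildd-logs/incoming"  # hypothetical spool

    def files_in_changes(path):
        """Yield the filenames listed in the Files: field of a .changes.

        Each entry has the form: md5sum size section priority filename
        """
        in_files = False
        for line in open(path):
            if line.startswith("Files:"):
                in_files = True
                continue
            if in_files:
                if not line.startswith(" "):    # end of the field
                    break
                parts = line.split()
                if len(parts) == 5:
                    yield parts[4]

    def split_out_buildlogs(changes_path):
        incoming = os.path.dirname(changes_path)
        for name in files_in_changes(changes_path):
            if name.endswith(".buildlog"):
                shutil.move(os.path.join(incoming, name),
                            os.path.join(LOG_SPOOL, name))

    if __name__ == "__main__":
        split_out_buildlogs(sys.argv[1])

Enforcing the log's presence would then just mean rejecting any upload
whose Files: field contains no such entry.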

> (o) build effort coordination
>
> There's a reason the buildd suite is called 'wanna-build'.  The core
> of it, both when Roman first wrote it all those years ago and now, is
> the sensible and efficient coordination of builds amongst multiple
> build daemons.  Having a random additional build daemon that's not
> part of the 'wanna-build' system breaks this and all the advantages it
> brings.

Which requires that you and your team members actually add new ssh keys
for new buildds to the access lists. One m68k buildd has been waiting
for over a year.

> (o) emulated/cross-compiled buildd-ing considered potentially harmful
>
> The idea of emulated buildds or cross compiling has been around for a
> long time.  Personally I don't think it's a good idea, but that's not
> really the point.  The point is that one person should not
> unilaterally make the decision that they are or are not OK.  If it's
> the consensus of the release managers and the architecture porting
> team that they want to use emulated buildds and/or cross compiling, I
> absolutely will not stop them from doing so.

The m68k port has been cross-compiling the kernel for a long time now,
and that seems to work well. Over the last months the m68k porters team
has also set up emulated building. I hope you do not consider their
team effort to produce faster m68k buildds unilateral.

...
> So, again, this is not something I personally think is a good idea but
> I won't stand in the way of consensus of the Release Managers and the
> developer community as a whole.  I think it's a bad idea for two
> reasons:
>
>  (a) we don't currently have the buildd infrastructure for this - it
>      would require a minimum of 2 (preferably 3) machines dedicated to
>      being i386 buildds.  It would also make i386 uploads much more
>      sensitive to delays and really require better coverage than one
>      human could provide.

We have the hardware, and given the increasing number of ppc and amd64
uploads, a set of i386 buildds is long overdue anyway. I wouldn't be
too surprised if amd64 overtakes i386 for lenny.

So I consider this a non-argument: it needs to be solved with or
without source-only uploads.

>  (b) source only uploads are in my experience very often badly tested
>      if they're even tested at all.  For a long time after Ubuntu
>      switched to source only uploads, it was really obvious that a
>      large number of them hadn't even been test built, never mind
>      installed or used.[3][4]

There are different levels of testing:

- tested with dpkg-buildpackage

The buildds can and do test this automatically, and far better than
humans can and do. If a source doesn't build, it already FTBFS on 11
architectures, including all the problematic ones. Having it also fail
on the upload architecture (usually i386/amd64/ppc) is no problem;
those buildds have enough spare time.

If you think this would increase the human time spent on it, then
better coordinate gross FTBFS handling to flag a source as bad for all
archs, so the other buildd admins don't have to waste time on it too.

Note that packages FTBFS with obvious bugs in their build scripts that
could never have worked even on the uploader's system, and still they
managed to upload debs. People seem to fix bugs, build without a
clean, and then upload the results, or something like that.

So requiring binary uploads doesn't even give us this minimal test.
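
For reference, that minimal test is nothing more than an unpack plus a
clean rebuild, which is trivial to automate. A rough sketch of the two
steps involved (a real buildd runs them via sbuild inside a clean
chroot, which is omitted here):

    #!/usr/bin/env python
    # Sketch of the minimal "does it even build" test a buildd
    # performs.  Real buildds do this with sbuild in a clean chroot;
    # this only shows the two steps involved.

    import subprocess
    import sys

    def test_build(dsc_path):
        """Unpack a source package and try to build it."""
        # Unpack the source into a fixed directory.
        subprocess.check_call(["dpkg-source", "-x", dsc_path,
                               "build-tree"])
        # Build unsigned binaries; a non-zero exit code means FTBFS.
        rc = subprocess.call(["dpkg-buildpackage", "-us", "-uc"],
                             cwd="build-tree")
        return rc == 0

    if __name__ == "__main__":
        sys.exit(0 if test_build(sys.argv[1]) else 1)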

- install/update/remove/purge test

You say a lot of uploads aren't test-installed. Maybe they are, maybe
they aren't. Requiring binary uploads changes nothing here: lazy
maintainers will upload without test-installing anyway.

Luckily there is a test suite for this that can automate a lot of this
testing.

Note that most archs don't get this test anyway, since their packages
are buildd-built.
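
To illustrate, the cycle such a test suite automates looks like this
(piuparts is one real implementation and runs the cycle inside a
throwaway chroot; the bare dpkg calls below only show the steps, don't
run them on a live system):

    #!/usr/bin/env python
    # Sketch of the install/update/remove/purge cycle.  A real test
    # suite (e.g. piuparts) does this in a throwaway chroot and also
    # checks for leftover files after the purge.

    import subprocess
    import sys

    def dpkg(*args):
        """Run dpkg and return its exit code."""
        return subprocess.call(["dpkg"] + list(args))

    def cycle(deb_path, package):
        steps = [
            ("install", lambda: dpkg("-i", deb_path)),
            ("update (reinstall)", lambda: dpkg("-i", deb_path)),
            ("remove", lambda: dpkg("--remove", package)),
            ("purge", lambda: dpkg("--purge", package)),
        ]
        for name, step in steps:
            if step() != 0:
                print("FAILED at step: %s" % name)
                return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if cycle(sys.argv[1], sys.argv[2]) else 1)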

- Testing functionality

Even fewer people do test runs of the package, even though this is
actually the most important thing to do. You need a human mind to see
whether a package works right.

> We should probably fix (a) regardless, but the point is that it's not
> where it needs to be right now.  And maybe I'm wrong about how much
> (b) would be a problem.  *shrug* Just MO.

I think you are.

Consider that on most architectures packages are buildd-built. The
source+bin uploads should have roughly the same arch split as the
users do, which currently means mostly i386. So the architecture where
we have the most testers gets the manually built packages that the
maintainers have already tested (or not), while the riskier buildd
builds go to the architectures with the fewest testers. Isn't that the
wrong way around?


Also, you could still require source+bin uploads but then throw away
the debs and use the buildd debs for all archs. Or stall the original
debs until that arch's buildd uploads its debs and then throw away the
buildd debs for that arch (less good, imho). Both ways would enforce
as much testing as the current system, but would make sure the source
builds on all archs and has buildd logs for all archs.
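
A minimal sketch of the first variant, just to make the mechanics
clear; the Upload type and queue_for_wanna_build are invented for
illustration, this is the policy idea, not actual dak code:

    # Sketch: require source+bin uploads, but publish only
    # buildd-built debs.  All names here are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Upload:
        source: str
        binaries: list = field(default_factory=list)
        from_buildd: bool = False

    def queue_for_wanna_build(source):
        """Stand-in for handing the source to wanna-build."""
        print("queued %s for rebuild on all architectures" % source)

    def binaries_to_publish(upload):
        """Decide which debs from an upload end up in the archive."""
        if upload.from_buildd:
            return upload.binaries      # buildd debs are kept
        # Maintainer upload: the debs only served as proof of a test
        # build; discard them and schedule normal buildd rebuilds.
        queue_for_wanna_build(upload.source)
        return []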

Regards,
        Goswin

PS: Could someone from Ubuntu give a summary of their recent
experience with source-only uploads? Is it still bad, or did it just
start out that way?


