Re: Bits (Nybbles?) from the Vancouver release team meeting
On Mon, Mar 14, 2005 at 01:02:34PM +0100, David Schmitt wrote:
> On Monday 14 March 2005 11:00, Sven Luther wrote:
> > On Mon, Mar 14, 2005 at 01:14:47AM -0800, Steve Langasek wrote:
> > > There are a few problems with trying to run testing for architectures
> > > that aren't being kept in sync. First, if they're not being kept in
> > > sync, it increases the number of matching source packages that we need
> > > to keep around (which, as has been discussed, is already a problem);
> > > second, if you want to update using the testing scripts, you either have
> > > to run a separate copy of britney for each arch (time consuming,
> > > resource-intensive) or continue processing it as part of the main
> > > britney run (we already tread the line in terms of how many archs
> > > britney can handle, and using a single britney check for archs that
> > > aren't keeping up doesn't give very good results); and third, if
> > > failures on non-release archs are not release-critical bugs (which
> > > they're not), you don't have any sane measure of bugginess for britney
> > > to use in deciding which packages to keep out.
> > What about building the scc (or tier 2, as I would say) arches from testing
> > and not unstable? This way you would have the main benefit of testing (no
> > RC bugs, no breakage-of-the-day kind of problems).
> I'm only guessing: because keeping those archs in testing didn't work out and
> is (broadly) the cause of dropping them in the first place?
No, you misunderstood. Let me restate the plan:
1) People always upload to unstable. Only sources are considered, and people
who upload unbuildable sources without having tested them are utterly flamed
for their lack of discernment :).
2) The autobuilders build those packages in unstable for the tier 1 arches.
3) After some time, the packages are moved to testing, as done by the testing
scripts for the tier 1 arches.
4) The tier 2 arches build their stuff from testing. There are two possible
outcomes:
4.1) The package builds without problem, and is added to the tier 2 archive.
4.2) The package fails to build. This used to be an RC-critical FTBFS, but
is not anymore. The porters are responsible for fixing the bug and
uploading a fixed package to unstable, as they do now.
4.2.1) The package built in unstable passes into testing rather quickly, and
is then rebuilt for the tier 2 arches; back to 4).
4.2.2) The package built in unstable is held out of testing for some issue
not relevant to the tier 2 arch. It can then be built in an
arch-specific way and uploaded directly to the arch in question, or
maybe through an arch-specific mini testing script.
This would have the following benefits:
- Not having slower arches hold up testing.
- Not overloading the testing scripts.
- Allowing the tier 2 arches to have the benefit of testing, that is an
  archive without packages suffering from RC bugs and breakage-of-the-day,
  as would happen if they built from unstable.
- Diminishing the workload of the tier 2 autobuilders, since they only have
  to build proven-good packages, not random stuff going into unstable.
- Still allowing the tier 2 arches to be part of Debian, and to hope for a
  sarge release, which leads to:
5) Once a stable release is done, the above can be repeated by the tier 2
arches, until they reach release quality and can maybe be part of a future
stable point release.
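To make the flow in steps 1) to 4.2) concrete, here is a toy model of one
source package moving through the proposed pipeline. All names here
(process, tier1_builds_ok, the archive labels) are my own illustration,
not real Debian tooling:

```python
# Toy model of the proposed tier 2 build flow (steps 1-4 above).
# Hypothetical names; nothing here corresponds to actual buildd/britney code.

UNSTABLE, TESTING, TIER2 = "unstable", "testing", "tier2"

def process(source, tier1_builds_ok, tier2_builds_ok):
    """Follow one source package through the proposed pipeline.

    Returns the list of archives the package ends up in.
    """
    archives = [UNSTABLE]          # 1) source is uploaded to unstable
    if not tier1_builds_ok:        # 2) tier 1 autobuilders build it there
        return archives            #    (a tier 1 FTBFS stays RC-critical)
    archives.append(TESTING)       # 3) testing scripts migrate it
    if tier2_builds_ok:            # 4) tier 2 arches build from testing
        archives.append(TIER2)     # 4.1) success: added to tier 2 archive
    # 4.2) failure: a porter uploads a fix to unstable and the cycle repeats
    return archives
```

The point the sketch makes is that tier 2 failures never block the
unstable-to-testing migration; they only delay that one package on the
tier 2 arch.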
Now, given this full description, does my proposal seem more reasonable?
> > > For these reasons, I think the snapshotting approach is a better option,
> > > because it puts the package selection choices directly in the hands of
> > > the porters rather than trying to munge the existing testing scripts
> > > into something that will make reasonable package selections for you.
> > So, why don't you do snapshotting for testing? Don't you think handling
> > all those thousands of packages manually, without the automated testing
> > thingy, would be an over-burden for those guys?
> Obviously britney/dak is available from cvs.d.o and meanwhile also as a
> Debian package. So the question for me (administrating two sparc boxes) is why _we_
> don't setup our own testing when obviously the ftp-masters and core release
> masters are not willing to do the work for us?
I guess this is also the message I get from them. The same happens for NEW
processing, and the solution is to set up our own unofficial archive, thus
leading to a split and maybe a future fork of Debian.
> My answer is that I don't care enough about two out of 15 boxes for the hassle.
> I will update them to sarge, be grateful for the grace time given and - iff
> nobody steps up to do the necessary porting and security work - donate them
> to Debian when etch's release leaves my current nameserver without security support.
> What would you say, if I asked you to provide security support for sparc
> because _I_ need it for my nameservers?
There was no comment from the security team about this new plan; we don't
know for sure that this is the problem. We don't even know in detail what the
problems are, nor how they relate to the drastic solutions (in France we
would say horse remedies) proposed here.
> > You are really just saying that the testing scripts don't scale well, and
> > instead of finding a solution to this, you say let's drop a bunch of
> > architectures and make it someone else's problem.
> I think you have hit the point of this reorganisation head on: the people who
> did the work until now, feel that they cannot do the work with the expected
> quality _and_ the current number of arches. Thus they make the hard decision
They don't scale well, and they have spent the past couple of years insisting
that there is no problem, apart from the vast majority of DDs liking to
complain and flame overmuch.
> to put down hard, objective and verifiable rules by which everyone can decide
> whether an arch deserves use of central Debian resources like mirror space on
> the central network.
But why, and is it the best solution? Why did they not consult with the
arch porters beforehand? Why did they not put the announcement in a more
diplomatic and inviting way?