Re: The sarge release disaster - some thoughts
Adrian Bunk wrote:
> The milestone that included the start of the official security support
> for sarge was only 6 days after the announcement, but it was missed by
> more than 6 months.
> Whyever it was expected to get testing-security for sarge that quickly, it
> should have been obvious 6 days later that it wasn't possible that fast.
> What would have been a second plan?
> Use testing-proposed-updates.
> Using testing-proposed-updates for security fixes, users might have
> gotten security updates one or two days after the DSA on some
> architectures.
> Would this have been an ideal solution? No.
> But it would have worked without a great impact on the release date.
No, it wouldn't, since t-p-u isn't autobuilt for all architectures
either. We would gain nothing by using it without manually building
the packages on the missing architectures.
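For illustration, tracking security fixes from t-p-u on a user's system would have looked roughly like this (a sketch only; the mirror URL is illustrative, the suite name follows the standard Debian archive layout):

```
# /etc/apt/sources.list -- add testing-proposed-updates next to testing
deb http://ftp.debian.org/debian testing main
deb http://ftp.debian.org/debian testing-proposed-updates main
```

Since both suites get the default pin priority, an `apt-get update && apt-get upgrade` would pick up a fixed package from t-p-u as soon as it was built, without waiting for it to migrate into testing.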
> RC bugs - only a metric
> Nowadays, it seems the main metric to measure the quality of a release
> inside Debian is the RC bug count.
> As with any metric, work on improving it might make the number look
> much better, but that doesn't imply that the overall quality improved
> that much.
> An example:
> A major problem in Debian is MIA developers.
> Consider a MUA maintained by a MIA developer with the following bugs:
> - #1 missing build dependency (RC)
> - #2 MUA segfaults twice a day (not RC)
These are fixed during BSPs, so there's no need to spend additional time on them.
> Consider the two possible solutions:
> 1. a NMU fixing #1
> 2. - ensure that the maintainer is MIA
> - orphan all packages of the MIA maintainer
> - new maintainer adopts MUA
> - new maintainer fixes both bugs
This is already the case.
> Dump testing?
> It seems noone asks the following question:
> Testing - is it worth it?
As a preparation for stable and an interim "solution" between stable
and unstable, it works quite well.
> Several people have stated that with the size of Debian today, it
> wouldn't be possible to manage a release without testing using a
> "traditional" freeze (unstable frozen at a date announced several
> months in advance), and that only testing makes releasing possible.
I believe that it helps a lot to get and keep the software in proper
shape for a release, with all supported architectures and depending
packages in sync.
> I remember that when testing was introduced, it was said that testing
> might always be in a releasable state. History has shown that testing
> was sometimes in better shape than unstable, but also sometimes in
> worse shape. Testing has some advantages over unstable (always
> fulfillable dependencies, some kinds of brown paperbag bugs are very
> unlikely), but serious data loss bugs like #220983 are always possible.
As they are in unstable...
Assuming a working testing-proposed-updates, testing sounds near perfect.
> testing was expected to make shorter freezes possible.
> Neither the woody nor the sarge freeze support this claim.
> This might not only be the fault of testing, but the positive effects of
> testing (if any) aren't visible.
Hmm, we're not waiting for testing to freeze but we're waiting for
missing infrastructure to be implemented. Or am I mistaken?
> This might make a freeze a bit longer?
> But consider the disadvantages of testing:
> - Testing causes additional work for both the release team and all
> Debian developers.
> As an example, library transitions are always a pain due to testing.
Uh? For users they're a lot better, since a transition lands in testing
all at once, something that usually doesn't work with unstable. Yes,
transitions do cause work for the release people, and some maintainers
are annoyed by the delay in getting all affected packages and
architectures in sync.
> And RC bugs already fixed in unstable but not in testing need to be
> tracked.
When testing is not partially frozen and the package runs fine in the
archive, it'll migrate into testing automatically. Tracking is only
required during a (partial) freeze.
> - Architectures have to be in sync due to testing.
... due to an upcoming release.
> An architecture without an autobuilder is dead.
> But if an architecture doesn't have any autobuilder for two weeks,
> this wouldn't cause any problems if testing didn't exist.
It would be easier if more buildds were accepted for such architectures,
or if it were easier to add an interim buildd for them.
Ten years and still binary compatible. -- XFree86
Please always Cc to me when replying to me on the lists.