
Re: Ubuntu discussion at planet.debian.org



Steve Langasek wrote:

> It is not correct.  At the time testing freezes for sarge, there are likely
> to be many packages in unstable which either have no version in testing, or
> have older versions in testing.  The list of such packages is always visible
> at <http://ftp-master.debian.org/testing/update_excuses.html.gz>.  While
> it's a goal of the release team to ensure that *incidental* issues don't
> keep package fixes/updates out of testing, there are plenty of package
> updates which will come too late for consideration, or will be RC-buggy in
> their own right, that won't be part of sarge.

That's the URL I was trying to remember; thanks.  That's what I meant
by "the interesting thing about testing is the dependency analysis". 
I think the information in update_excuses mostly supports the
"convergence is readiness" hypothesis.

It seems to me that Jérôme's observation also depends on the
existence of "experimental", which gives maintainers somewhere to put
changes they know would break britney instead of dumping them into
unstable late in the cycle.  Without that, I wouldn't expect testing
and unstable ever to converge.  But don't you think that, until
testing converges (nearly) to unstable, it's hard to know how much of
testing will FTBFS on testing itself?
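
Just to pin down what I mean by convergence: one crude measure is the
fraction of source packages whose version in testing matches the one
in unstable.  A sketch, assuming you've fetched each suite's
Sources.gz locally first (the file names here are made up):

    import gzip

    # Crude convergence measure: what fraction of source packages
    # carry the same version in testing as in unstable?  The paths
    # are hypothetical local copies of each suite's Sources.gz.

    def source_versions(path):
        versions = {}
        package = None
        with gzip.open(path, "rt", errors="replace") as f:
            for line in f:
                if line.startswith("Package:"):
                    package = line.split(":", 1)[1].strip()
                elif line.startswith("Version:") and package:
                    versions[package] = line.split(":", 1)[1].strip()
        return versions

    testing = source_versions("testing_Sources.gz")
    unstable = source_versions("unstable_Sources.gz")
    same = sum(1 for pkg, ver in testing.items()
               if unstable.get(pkg) == ver)
    print("%d of %d testing sources match unstable (%.1f%%)"
          % (same, len(testing), 100.0 * same / len(testing)))

Watching that percentage over time would show whether convergence is
actually happening.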

Although it does sometimes happen that an update breaks something that
works in the version in "testing", I think it's more common for an RC
bug to apply to earlier versions as well, even when it's an FTBFS for
something that used to build.  (That often seems to mean that one of
the build-deps evolved out from under the package or got removed
because it was old or broken, and the source that's made it into
"testing" won't build there either.)  So I would expect that the vast
majority of RC bugs filed against packages in sid have to be handled
by really fixing them -- and letting the fix propagate into testing --
or excluding the package from sarge.
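
That "build-deps evolved out from under it" failure mode is easy to
check for mechanically, by the way.  A rough sketch, again with
made-up file names, that flags sources whose Build-Depends name a
package absent from testing entirely; it ignores version constraints,
alternatives past the first, and virtual packages, so it only catches
the crudest breakage:

    import gzip

    def binary_names(packages_gz):
        # All binary package names present in a Packages.gz index.
        names = set()
        with gzip.open(packages_gz, "rt", errors="replace") as f:
            for line in f:
                if line.startswith("Package:"):
                    names.add(line.split(":", 1)[1].strip())
        return names

    def build_deps(sources_gz):
        # Map each source package to the bare names it build-depends
        # on (first alternative only, version constraints dropped).
        deps = {}
        package = None
        with gzip.open(sources_gz, "rt", errors="replace") as f:
            for line in f:
                if line.startswith("Package:"):
                    package = line.split(":", 1)[1].strip()
                elif line.startswith("Build-Depends:") and package:
                    names = set()
                    for clause in line.split(":", 1)[1].split(","):
                        name = clause.split("|")[0].split("(")[0].strip()
                        if name:
                            names.add(name)
                    deps[package] = names
        return deps

    available = binary_names("testing_Packages.gz")
    for src, needed in sorted(build_deps("testing_Sources.gz").items()):
        missing = needed - available
        if missing:
            print("%s: build-deps gone from testing: %s"
                  % (src, " ".join(sorted(missing))))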

Freezing base+standard at this stage saves those package maintainers
the trouble of uploading to experimental instead of unstable for a
while (nothing migrates without the RMs' approval anyway), and makes
it a lot easier for the RMs to let fixes in selectively.  Otherwise,
progressive freezes don't really alter this analysis.

> And immediately *after* the freeze point, I think we can safely expect
> unstable to begin diverging even further from testing.

True enough.  In a lot of commercial software development, the
interval between "code freeze" / VC branch and release is necessary so
that QA can finally do a full run through the test plan and the senior
coders are free to fix any RC bug they can.  Everybody else works on
the trunk.  So apply the "testing (almost) = unstable" criterion to
the freeze point rather than the release point, with the understanding
that the packages for which it's not true are exactly the ones that
need more / different attention during the freeze than they were
getting before.

> While getting KDE updates into testing has been a significant task in the
> past, I'm not aware of any point during the sarge release cycle when KDE has
> been a factor delaying the release.

Er, does the current situation fit?  An awful lot of update_excuses
seems to boil down to Bug#266478, and it's hard to see the RC bug
count on KDE 3.2 apps dropping by much until the debate about letting
KDE 3.3 in is resolved.  I think the C++ toolchain issues I mentioned
were a factor in delaying KDE 3.2's propagation into testing long
enough that KDE 3.3 is even worth discussing.  But I haven't been
following those issues at all lately, so don't take my opinion on this
too seriously; maybe I should just ignore that portion of
update_excuses.
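
That "awful lot" is a guess from eyeballing the page; something like
this would put a number on it.  update_excuses is HTML with (more or
less) one <li> stanza per package, so this just splits and greps, and
the count should be treated as approximate:

    import gzip
    import re
    import urllib.request

    URL = ("http://ftp-master.debian.org/testing/"
           "update_excuses.html.gz")
    BUG = "266478"

    with urllib.request.urlopen(URL) as resp:
        html = gzip.decompress(resp.read()).decode("latin-1", "replace")

    # The first element of the split precedes the first <li>, hence -1.
    stanzas = re.split(r"<li>", html, flags=re.IGNORECASE)
    hits = sum(1 for s in stanzas if BUG in s)
    print("%d of %d excuse entries mention #%s"
          % (hits, len(stanzas) - 1, BUG))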

Cheers,
- Michael


