Re: problems with the concept of unstable -> testing
On Tue, Dec 16, 2008 at 09:27:55PM +0100, Bastian Venthur wrote:
> Steve McIntyre schrieb:
> > I'm curious about that myself. We've tried that in the past, and a
> > 3-year release cycle was what happened. Experience tells us that we
> > have much too big a system to suddenly one day declare "release"
> > without a lot of preparation beforehand.
> Actually, I don't know, since I haven't been involved long enough to
> know what happened "back then". Did testing at some point fork from
> unstable and slowly develop into stable while unstable was still
Pretty much. What used to happen was that at some point the release
manager decided to freeze unstable, creating a new distribution called
frozen. This was a straight fork of unstable; there was no technical
link between them once the fork was done.
> developing concurrently? Did uploads go directly to testing or to
> something before testing (like the current frozen unstable)? What was
Uploads were done directly to frozen. An upload could also target both
distributions simultaneously (ie, you could upload to "frozen unstable" -
you'll see such uploads in older changelogs) or just one of them.
> the problem that led to slow development back then? Was it that it
> was still possible to upload into unstable and so no one was actually
> interested in fixing RC bugs?
Well, one of the problems was that you could end up with substantial
divergence between the two distributions, which tended to cause
breakage, so there was still some attempt to keep things broadly in sync.
A search through the list archives from the time AJ introduced testing
and after the first release using it should turn up plenty of discussion
around the issue.
> What I see *now* is that the freezes during the last two releases and
> the current one are getting longer and longer (~1.5 months, ~4 months,
> and at least 5 months for Lenny). To me this seems like a serious
> problem we should not ignore. Important software is outdated in
> unstable, and current hardware no longer works without resorting to
> grabbing packages from experimental or unofficial sources.
Of course, these problems would all also apply to a frozen distribution
like we used to have. My recollection of those times is that the long
freezes we had back then had pretty similar effects on general
development. The win from testing is, in theory, that testing should be
in much better shape at any given moment than a random snapshot of
unstable would be, so we should have a much better chance of getting the
freeze over with quickly.
I certainly agree that we should be looking at ways of reducing the
freeze time, but I'm not sure that the freeze mechanism is an important
factor in this. In terms of reducing the freeze time, I think the
availability of people willing to work on core packages is more of a
limiting factor than anything else.
"You grabbed my hand and we fell into it, like a daydream - or a fever."