Re: Another "testing" vs "unstable" question
David Fokkema wrote:
On Tue, Jun 22, 2004 at 04:01:20PM +0800, John Summerfield wrote:
Monique Y. Mudama wrote:
Indeed. I actually meant my statement to be in support of the stable
distribution. I guess I should have made that clearer.
Still, no one benefits from having blinders over their eyes. Stable is
the most stable, and it's also the least current. I don't see how it
could be any other way. They're on opposite ends of the same spectrum.
For me its lack of currency is becoming a serious problem. I'm deploying
new systems: do I really want to deploy software that's not going to be
supported much beyond a year? Do I really want to go through migration
to new releases just after I've got it bedded down?
That's the beauty of stable. It _is_ supported for well over a year.
Actually, make that two years. The only problem _right now_ is that if
you go with stable _now_, there is sarge coming. But apart from that,
stable is supported for years.
It's an ongoing problem because new stable releases come out so infrequently.
Someone deploying Sarge as soon as it becomes stable can look forward to
four years (going on past performance) of support for it. I have no
problem with that.
The problem is that someone deploying stable _now_ has a little over a
year of support left, and someone deploying stable in two years' time can
expect only two years of life... The cycles are too long.
I'm a refugee from Red Hat because its free support lifecycle became too
short, and there was no paid-for support I found attractive (far too dear).
No, I don't.
My choices are going with testing (and what then about security patches?)
or unstable. From my reading it's not unknown for unstable to be seriously
borked for a time: I think a new glibc did it a while ago, and gcc was
forecast to do it shortly after.
If I want to support a USB Laserjet 1200, then I really need the latest
hpoj stuff: Woody is far too old.
Woody is old, but have you looked at www.backports.org? A list of
well-supported backports is available there. Security updates will be a
tad slower than unstable, which is behind stable. But then, you're not
backporting glibc, but imap servers or whatever.
I have.
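(For anyone following along, using backports.org mostly comes down to one
apt line. A sketch only -- the exact archive path and section names are
an assumption and may differ per package:)

```shell
# /etc/apt/sources.list -- sketch, assuming backports.org's
# woody archive layout; section names are an assumption
deb http://www.backports.org/debian/ woody all

# then refresh and install the backport you need, e.g.:
apt-get update
apt-get install hpoj
```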
I need my security updates _before_ they become a problem. Don't you?
What I find myself doing increasingly is building the occasional package
from Sid for Woody: sometimes it's easy, sometimes it's too much trouble
(think xfree, where I think I found circular dependencies).
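The by-hand rebuild I mean usually looks like this. A sketch, assuming a
deb-src line for sid in sources.list and that woody can satisfy the
build-dependencies -- which is often where it becomes too much trouble:

```shell
# rebuild a sid source package on woody -- sketch, assuming
# 'deb-src http://ftp.debian.org/debian sid main' is in sources.list
apt-get update
apt-get source hpoj                  # unpacks the source into ./hpoj-*/
apt-get build-dep hpoj               # fails if woody can't satisfy them
cd hpoj-*/
dpkg-buildpackage -rfakeroot -us -uc # builds unsigned .debs in ../
dpkg -i ../hpoj_*.deb
```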
Also, see www.apt-get.org for various backports, including xfree. But
then, www.backports.org also has an xfree backport. Check it out.
I have been to www.apt-get.org and I got Mozilla from here, pine from
there, KDE from somewhere else, Xfree from another... Do you get the
picture?
What if the pine person also provided Mozilla? Or worse, glibc 2.3? It
got completely out of control.
My desktop system, while it still worked, was becoming a real mess until
I upgraded to Sarge (not without some difficulty, the upgrade wanted to
remove lots of kit I didn't want removed).
Don't tell me I could pin things, until you point to the obvious
documentation I missed.
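(For the record, what pinning boils down to is an /etc/apt/preferences
file. A minimal sketch, assuming sources.list carries both woody and
sarge lines; the priorities here are illustrative, not recommendations:)

```text
# /etc/apt/preferences -- minimal pinning sketch
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 200
```

With that in place, apt prefers stable, and individual packages can be
pulled from testing with `apt-get install -t testing packagename`.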
And what about security updates? I'm not going down that path with any
system I'm paid to support.
A coordinated, official system of backports would be a fine thing, and
the workforce to do it is already there: the people making these
unofficial backports.
Until Red Hat Linux 8.0, Red Hat had two release cycles:
Major releases (5.x, 6.x, 7.x) maintained binary compatibility within a
series. Those came out at about the same frequency as Debian releases.
Then there were the minor releases, x.[0-3], coming out at roughly
six-monthly intervals. One could take a package from x.2 and install it
with minimal bother on x.0 or x.1, with every expectation of not
breaking anything.
It's a model Debian would do well to look at, adapt and adopt. Under
this model, Sarge would be 4.0, not 3.1, because it breaks binary
compatibility (new gcc, new glibc).