
Re: Another "testing" vs "unstable" question



On Wed, Jun 23, 2004 at 01:10:11AM +0000, John Summerfield wrote:
> David Fokkema wrote:
> 
> >On Tue, Jun 22, 2004 at 04:01:20PM +0800, John Summerfield wrote:
> >
> >>Monique Y. Mudama wrote:
> >>
> >>>Indeed.  I actually meant my statement to be in support of the stable
> >>>distribution.  I guess I should have made that clearer.
> >>>
> >>>Still, no one benefits from having blinders over their eyes.  Stable is
> >>>the most stable, and it's also the least current.  I don't see how it
> >>>could be any other way.  They're on opposite ends of the same spectrum.
> >>>
> >>For me its lack of currency is becoming a serious problem. I'm deploying 
> >>new systems: do I really want to deploy software that's not going to be 
> >>supported much beyond a year? Do I really want to go through migration 
> >>to new releases just after I've got it bedded down?
> >>
> >
> >That's the beauty of stable. It _is_ supported for well over a year.
> >Actually, make that two years. The only problem _right now_ is that if
> >you go with stable _now_, there is sarge coming. But apart from that,
> >stable is supported for years.
> >
> 
> It's an on-going problem because  new stables come out so infrequently. 
> Someone deploying Sarge as soon as it becomes stable can look forward to 
> four years (going on past performance) of support for it. I have no 
> problem with that.

Ok.

> The problem is someone deploying stable _now_ has a little over a year, 
> someone deploying stable in two years can expect two years of life...
> 
> The cycles are too long.

If the cycles were shorter, people would install systems that would be
outdated in less than six months. Take a RedHat point release as an
example: right after 7.1 came out, you would have six months. After
_only three months_, you would have to start looking at 7.2.

The cycles are too short. RedHat's and some others', that is.

Just my opinion, of course.

> I'm a refugee from Red Hat because its free support model became too 
> short, and there was no paid-for support I found attractive (far too dear).

If you found RedHat's cycle too short, why is Debian's too long?

> >>No I don't.
> >>
> >>My choices are going with testing: what then about security patches? or 
> >>unstable? From my reading it's not unknown for unstable to be seriously 
> >>borked for a time: I think new glibc did it a while ago, and gcc was 
> >>forecast to do it shortly after.
> >>
> >>If I want to support a USB Laserjet 1200, then I really need the latest 
> >>hpoj stuff: Woody is far too old.
> >>   
> >>
> >
> >Woody is old, but have you looked at www.backports.org? A list of
> >
> 
> I have.
> 
> >well-supported backports is available there. Security updates will be a
> >tad slower than unstable, which is behind stable. But then, you're not
> >backporting glibc, but imap servers or whatever.
> >
> I need my security updates _before_ they become a problem. Don't you?

Of course. That's why I won't even think of installing testing on a
server, and I won't install unstable either. In my opinion, servers
should be running stable. So mine do. But then, I like mutt on my
server to be the same version as on my desktop running unstable, so I
run it as a backport. Also, a newer spamassassin is nicer than the one
from stable. These are both applications that run for local users and
don't accept incoming connections or anything like that. If there is a
security problem with them, I can live with the slight delay. In my
experience, security updates are done thoroughly and quickly by the
Debian developers, and the people at backports.org read
debian-security-announce as well.
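For the record, pulling in those two backports is just one extra line
in sources.list plus a normal install. The archive path below is from
memory, so check www.backports.org for the current layout:

```
# /etc/apt/sources.list -- plain woody, plus only the backports I asked for
deb http://ftp.debian.org/debian/ woody main
deb http://security.debian.org/ woody/updates main
deb http://www.backports.org/debian/ woody mutt spamassassin
```

Then apt-get update, apt-get install mutt spamassassin, and nothing
else on the system changes.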

Again, I'm not backporting glibc, some unsafe kernel, apache or a mail
server. Probably that wouldn't even be a problem...

> >>What I find myself doing increasingly is building the occasional package 
> >>from Sid for Woody: sometimes it's easy, sometimes it's too much trouble 
> >>(think xfree where I think I found circular dependencies).
> >>
> >
> >Also, see www.apt-get.org for various backports, including xfree. But
> >then, www.backports.org also has an xfree backport. Check it out.
> >
> 
> I have been to www.apt-get.org and I got Mozilla from here, pine from 
> there,  KDE from somewhere else, Xfree from another... Do you get the 
> picture?

Yes, I get it. Your sources.list grows. But if you have been careful
constructing it (blank lines, comments, etc.), you know what comes
from where.
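To illustrate, a sources.list that has grown but stays readable might
look like this (the hostnames are invented for the example; the point
is the grouping and the comments):

```
## official woody + security
deb http://ftp.debian.org/debian/ woody main
deb http://security.debian.org/ woody/updates main

## mozilla backport (maintainer's personal repo, example host)
deb http://people.example.org/~someone/woody/ ./

## pine packages (example host)
deb http://pine.example.net/debian/ woody main
```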

> What if the pine person also provided Mozilla?  Or worse, glibc 2.3? It 
> got completely out of control.

Then skip apt-get.org and stick with backports.org. There you have to
specify new sources for _each_ package, so you _only_ get what you want.

> My desktop system, while it still worked, was becoming a real mess until 
> I upgraded to Sarge (not without some difficulty, the upgrade wanted to 
> remove lots of kit I didn't want removed).
> 
> Don't tell me I could pin things, until you point to the  obvious 
> documentation  I missed.

I won't.
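(For anyone else following along: the pinning he means lives in
/etc/apt/preferences. A minimal sketch, with priorities picked purely
for illustration:

```
# /etc/apt/preferences
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 650
```

With this in place apt sticks to stable by default, and
apt-get -t testing install foo pulls just that one package from
testing. See apt_preferences(5) for the details I'm glossing over.)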

> And what about security updates? I'm not going down that path with any 
> system I'm paid to support.

I agree with you. No testing on servers.

> A coordinated, official system of official backports would be a fine 
> thing, and the workforce to do it is already there - they're the people 
> making these unofficial  backports.

Official? Debian has made its decisions: a great amount of freezing,
testing, installing, testing and trying to break things goes into a
stable release. The _whole_ system is tested, not individual packages.
You can't do that for individual packages between stable releases,
because updates come too fast. Run unstable for a while and you'll know
what I mean.

And hey! What's the difference between official and unofficial anyway?
What kind of extra support would you expect if Debian just labelled
www.backports.org as official?

> Until Red Hat Linux 8.0, Red Hat had two cycles of releases:
> 
> Major numbers, 5.x, 6.x, 7.x maintained binary compatibility. Those came 
> out with about the same frequencies as Debian releases.
> 
> Then there were the minor releases, x.[0-3] coming out at about 
> six-monthly intervals. One could take a package from x.2 and install it 
> with minimal bother on x.0 or x.1, with every expectation of not 
> breaking anything.

I'm sorry, but I downloaded RedHat CDs, used them for a while with _no
updates_ whatsoever, then downloaded a new set of CDs and the fun would
start all over again. I tried up2date, or whatever it was called, but I
never got it to work properly. Maybe I'm too stupid, but it took me
only a short while to figure out Debian, and within a few weeks I was
tracking unstable on my desktop. Of course, I could only do that
because of this fine list!

> It's a model Debian would do well to look at and see how it can adapt 
> it, adopt it. Using this model, Sarge would be 4.0, not 3.1 because it 
> breaks binary compatibility (new gcc, new glibc).

Please, no. Debian stable is rock solid, something RedHat, in my
opinion, has never been able to achieve. I would love to hear from
people who are still running a RedHat system older than two years. I
know of a lot of people who are running Debian systems that old and
are satisfied with them, apart from the usual thought: oh, would that
I had _both_ that stability _and_ the newer software. But still, they
choose stability.

David

-- 
Hi! I'm a .signature virus. Copy me into
your ~/.signature to help me spread!


