
Re: Old versions [LONG - lots of comments added in reply]



On Sun, Jul 23, 2006 at 02:38:13AM -0400, Godless Infidel wrote:
> On Sunday 23 July 2006 01:49, Kevin Mark wrote:
> > On Sun, Jul 23, 2006 at 01:34:42AM -0400, Godless Infidel wrote:
> > > Is it true, as I have heard, that you must run "testing" or "unstable"
> > > in order to run the recent versions of applications?
> >

It varies. "Unstable" is unstable primarily in its rate of change: large 
amounts of package churn, with new packages bringing in relatively 
bleeding-edge software, sometimes packaged within a few hours of upstream 
release. Relatively few packages are uninstallable at any one time, though 
large chunks may be uninstallable for a period - e.g. when KDE changes 
major versions or there is a GCC transition - but that usually lasts only 
a couple of days or weeks while packages gradually work through to the 
updated versions. Individual packages may break at any time, but they get 
fixed quickly. Unstable is usable all day, every day - but there may be 
some instability. It is never released as such: the "unstable" you install 
today becomes "testing" a while later ... see below. Its codename is Sid - 
the kid who broke toys in Toy Story. [For rough comparison: X.org 7.0 
moving to X.org 7.1 here.]
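
For the curious, tracking unstable is nothing more exotic than a 
sources.list entry; the mirror below is just an example, use one near you:

    # /etc/apt/sources.list - track unstable (Sid)
    deb http://ftp.debian.org/debian/ unstable main contrib non-free
    deb-src http://ftp.debian.org/debian/ unstable main contrib non-free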

After a period, packages move into "testing". Testing originally had 
the goal of being release-ready at any time: packages percolate in, get 
a period of hammering, and wait for a stable release. Some packages may 
stay in testing for a long time - the release process itself takes a long 
time - but what's in testing is generally usable for all purposes at all 
times. Things will still break occasionally: there are active updates 
tracking the changes above in unstable. Close to a stable release, 
packages may be dropped from testing if they're not release quality. 
Its current codename is Etch, which will be released as stable release 
Debian 4.0 in December 2006 if all goes to plan. 
[Was X.org 6.8.2 or so -> X.org 6.9 -> moving to X.org 7.0.]

Stable: REALLY stable - it's been in testing for anything up to 18 months. 
As it releases, it freezes solid. No changes except security-related 
fixes or fixes for really huge bugs. No major kernel changes. No sneaky 
compiler changes on updates. Debian will backport fixes to stable where 
feasible, so that stable essentially doesn't change behaviour. Some 
people can't live with this pace of change, hence backports: building 
e.g. OpenOffice.org 2.0 for stable using the compiler and libraries 
within stable, producing it as a drop-in package and putting it on a 
backports server. Similarly for Apache2. Its current codename is Sarge. 

Backports are not "officially" released - no major changes allowed to a 
stable release - but it does mean that your system may stay more up to date. 
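
In practice a backport is just another archive line plus an explicit 
request for the backported version - sketched here assuming the 
backports.org layout of the day:

    # /etc/apt/sources.list - add Sarge backports
    deb http://www.backports.org/debian/ sarge-backports main

    # backports are not pulled in by a plain upgrade; ask for them:
    apt-get update
    apt-get -t sarge-backports install openoffice.org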

It's your call: stable is essentially "fit and forget" - install it and 
know that it won't change for the lifetime of that release, plus (usually 
about) a year of support after the next stable is released. [Debian 3.0 
[Woody] has just been dropped from support: everyone is now assumed to 
have updated to 3.1, for example.] 

One limiting factor outside your control may be the pace of hardware 
change: the latest whizzo computer with a fast graphics card may not be 
well supported by a two-year-old version of XFree86; there have been 
similar problems in the past with e.g. SATA/RAID controllers, making it 
impossible to install at all. Conversely, stable may keep older hardware 
with e.g. limited RAM happy for years. [Stable has XFree86 4.3.]

Stable point release updates occur every few months, primarily to 
address security issues. If I installed from 3.1r0 disks which I'd had 
in the back of my cupboard for a year and immediately upgraded to 3.1r2, 
I'd expect no more than about 100M of changes at most: the idea is that 
point releases bring you up to date, not that they obsolete previous 
copies - essentially, they just gather all the changes since the previous 
point release in one place. Updating online from the Debian security 
server will bring you to essentially the same state.
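
For reference, staying current with security fixes is the usual two-step 
with the security archive in sources.list (shown here for Sarge):

    # /etc/apt/sources.list - Debian security updates
    deb http://security.debian.org/ sarge/updates main contrib non-free

    apt-get update
    apt-get upgrade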

> > Hi $NEW_USER,
> > Debian has many streams and each has a goal. Stable is meant to be
> > 'released' and has 'release goals' like stability and specific features.
> > Since it is 'released' about every 18 months, it does not, by
> > definition, contain the most recent versions of any software.
> 
> Does it at least contain the most recent versions available at
> release-time, or are they so afraid of introducing bugs that they
> only use versions that have been out for a while?
> 

See above: essentially you get well-tested stuff that's been out 
for a while, because it's been in testing for a while, but stable 
freezes solid at release point - hence Apache 1.3.x and XFree86.

This is why people always complain that Debian is "old" compared to 
SuSE 10.x / the latest Fedora / Mandriva / Ubuntu ... if they're 
comparing those with Debian stable, they're comparing apples and oranges; 
the fair comparison is with Solaris 2.9 / Red Hat Enterprise Linux / 
HP-UX ...

> > But people who use it get 'enterprise-ready' and easy to use software.
> > If you want something that has more recent versions, you can run testing
> > or unstable. But you must deal with the shortcomings those versions have:
> > they are less well tested, liable to not be 100% installable and have
> > changing sets of programs.
> 

This seems slightly negative: I personally think the difference between 
unstable/testing/stable is not day-to-day "stability" so much as pace of 
change. I can work fine on unstable day to day - I just have to accept 
that if I want to install 30 identical machines in a month's time, I 
won't easily be able to, because the software will have moved on.

Conversely, I can gradually "downgrade" an unstable system just by 
installing it and leaving it alone for a while (a month or two), then 
pointing the sources list at "testing", then eventually moving "testing" 
to "stable".
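
In sources.list terms that "downgrade" is just changing the suite name 
when the time feels right - same example mirror as before:

    # while tracking unstable:
    deb http://ftp.debian.org/debian/ unstable main
    # a month or two later, point it at testing instead:
    deb http://ftp.debian.org/debian/ testing main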

If you upgrade to "unstable" to run a particular package, decide it's 
a mistake and want to go back to testing/stable quickly, there may be 
no immediately apparent and straightforward way to do this - to that 
extent, downgrading is not supported. That's a good case for using 
backports, or waiting for a backport to be produced.
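
If you're determined to force the issue anyway, apt pinning in 
/etc/apt/preferences can sometimes drag a system back down to testing - 
a pin priority above 1000 permits downgrades - but this is strictly at 
your own risk and can still founder on dependency tangles:

    # /etc/apt/preferences - prefer testing even if it means downgrading
    Package: *
    Pin: release a=testing
    Pin-Priority: 1001

    # then let apt attempt the downgrade:
    apt-get update
    apt-get dist-upgrade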

> This here is my main concern with unstable. About how much of the
> unstable distribution is uninstallable right now? If it's low enough
> percentage-wise, I need not worry about how old the software in 'stable'
> is.
> 

It varies: taking a wild guess, out of 18,000 packages in unstable, 
possibly a dozen or two are uninstallable today - but they will install 
tomorrow or in a week's time, to be replaced by a few more that won't 
install :)
You may have to wait: on AMD64 unstable, though some help packages, 
language packs and support and development libraries are currently 
available, I'm waiting for OpenOffice.org 2.0.x to be 64-bit clean and 
installable. It isn't yet - the current advice is to use the 32-bit 
version in a chroot - but it's being worked on and will get here soon. 
There are packages on the maintainer's home site, he acknowledges the 
bugs, and they'll be put into unstable when they're ready. This contrasts 
with the rush to get OO.org 2.x out for other 64-bit distributions.
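
If you want to check for yourself on any given day, apt's simulate flag 
shows what would happen without touching the system (the package name 
here is just the OpenOffice.org package of the era):

    apt-get update
    apt-get -s install openoffice.org   # -s: simulate only, changes nothing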

> > I use unstable, which has the most change,
> > but I would not run it on a production server, it is better suited for
> > experienced users who don't need enterprise-ready software, not that it
> > isn't close to that already. There is one way to use more recent
> > versions of some software on stable, that is to use 'backports'. Again
> > this is a compromise but one that many make. It takes more recent
> > versions of important software for servers and recompiles them to work
> > with library versions in stable.
> 
> Are you talking about recompiling software outside of the package
> manager, or do 'backports' work within the package manager? My
> current Red Hat system is so full of such software (outside of
> RPM) that there's no longer any point in trying to use RPM. I have a
> directory full of the original source tarballs just to keep track
> of what's installed and where it went during 'make install'.
> 

You should almost never need to do this on a Debian system: pretty much 
everything should be available somewhere within Debian [if you include 
backports (and the multimedia stuff, including non-free codecs, packaged 
at debian-multimedia.org by M. Marillat) in a loosely wider definition 
of "Debian" :) ]. And yes, backports work within the package manager: 
they are ordinary .deb packages, so apt and dpkg track, upgrade and 
remove them like anything else - no directories full of tarballs needed.
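
That repository too is just a sources.list line; the exact suite layout 
below is my assumption of how Marillat's archive is arranged for Sarge:

    # /etc/apt/sources.list - Marillat's multimedia packages
    deb http://www.debian-multimedia.org/ sarge main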

> And since the subject of library versions came up, do
> you know why new libraries are binary-incompatible with
> old ones these days, even for applications that only use
> the functions provided by the older library versions? I
> remember a time when all you needed was a library with the
> right symbols in it. But now, a simple program compiled
> against the latest version of glibc won't work with an
> older minor revision of glibc, because ld.so checks a
> version number that is somehow embedded in the binary.
> 

Debian packaging will allow you to have multiple versions of
"stuff" on your system: incompatible library versions ship as separately 
named packages that can coexist side by side. (On the glibc point: glibc 
uses versioned symbols, so a binary linked against a newer glibc records 
the symbol versions it was built against, and ld.so on an older glibc 
refuses to load it - that's why having a library with the right symbol 
names is no longer enough.) Proper package and library dependency 
checking helps here and, IMHO, is the main reason for using Debian over 
any other Linux distribution. Debian is "done right" at a fairly low 
level :)
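
Two quick ways to see both halves of that, if you're curious (the exact 
output will vary from system to system):

    # incompatible library major versions coexist as separate packages
    dpkg -l | grep libstdc++

    # the versioned glibc symbols a given binary actually requires
    objdump -T /bin/ls | grep GLIBC

objdump is in the binutils package if it isn't already installed.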

> > It provides an intermediate solution
> > for stable users but it again affects the usability of stable as
> > backports introduce a possible source of instability to the 'stable'
> > release while giving a bit of improved functionality.
> > cheers,
> > Kev
> 


