
Re: stable vs testing



On Fri, Nov 09, 2001 at 03:32:29AM +1100, Jason Lim wrote:
> We run unstable on our production servers. That means we must be very
> vigilant in making sure no one else has had a problem. We download
> the updates, and install them a day or two later after other people
> have tested it and made sure it doesn't totally destroy the box. The
> reason we run unstable is because quite a few times we've needed new
> software, and it just wasn't in stable.

another good idea is to install the same packages that your server
requires on another machine (e.g. a development box or your
workstation). then test every upgrade on that box before doing it on
your production server. if the upgrade works smoothly on the workstation
then it's probably OK to run on the production server. if not, then wait
a few days and run a test upgrade again.
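
for example, a test run might look something like this (just a sketch
of the apt-get invocations i mean; -s only simulates the upgrade, so
you can see what would change before committing to anything):

    # on the test box (workstation or development machine):
    apt-get update
    apt-get -s dist-upgrade    # simulate: shows what would be installed,
                               # upgraded or removed, without doing it
    apt-get dist-upgrade       # the real thing, if the simulation looks sane

    # if the test box is still healthy a day or two later,
    # repeat the same steps on the production server.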

once you've done this a few times, you get a feel for what kinds of
problems to look out for and what to keep an eye on during & after the
upgrade.

monitoring the debian-devel and debian-bugs mailing lists is also a good
idea.


> Anyway, thats our take on it... and its never failed us so far. Takes
> quite a bit of effort though... so watch out.

i've been running unstable on numerous production servers for 5+ years.

in all that time, there have been several minor problems (i.e. easily
fixable, non-disastrous) and only a handful of major problems (i.e.
problems that require a LOT of knowledge about unix in general and
debian in particular to fix). because i test upgrades on my workstation
first, i've never killed a production server, although i have put my
workstation into a difficult state several times.


in my experience, there is far less risk in upgrading regularly & often
than there is in upgrading only when there is a new stable release. you
get small incremental changes rather than one enormous change...one
advantage of this is that if something does go wrong, it's generally
only one or two problems at a time, which is much easier to deal with
than dozens or hundreds of simultaneous problems.
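
one way of making "regularly & often" painless is a tiny cron script
that only *reports* what is pending and never installs anything by
itself. something like this sketch (the script name is made up, and it
assumes a working local mailer -- adjust to taste):

    #!/bin/sh
    # /etc/cron.daily/apt-report  -- hypothetical name, just a sketch
    # fetch the latest package lists, then mail root a summary of what
    # an upgrade *would* do. nothing is actually installed here.
    apt-get -qq update
    apt-get -s upgrade | mail -s "pending upgrades on `hostname`" root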

it's also the best way of keeping ahead of the script kiddies (although
adding security.debian.org to your apt sources.list is also a good way
of doing that).
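
for the record, the sources.list line i mean looks roughly like this
(the exact distribution name depends on which release the box runs):

    # /etc/apt/sources.list
    deb http://security.debian.org/ stable/updates main contrib non-free

after that, a routine "apt-get update && apt-get upgrade" pulls in the
security fixes as they're released.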

the truth is that the only times i've ever seen security problems on
debian boxes are when i've built machines for clients and they have
failed to keep them updated. there have been a handful of cases where
customers have rung me in a panic about a compromise on a machine i
built 12 or 18 or 24 months before.

the trick is to keep *ahead* of script kiddies, and you can't do that if
you run software that's over a year old.

IMO & IME, the security risk of running old software far outweighs the
risk of packaging bugs in debian unstable.


here's a good rule of thumb for deciding whether to run unstable:

if you are highly skilled and you need the new versions in unstable then
it's worth the risk to run unstable.

if not, then stick to stable. most packages in unstable can easily be
recompiled for stable (depending on which dependencies you also have to
recompile for stable...if there are too many, then it becomes more work
and more risk to recompile than it is to just upgrade to unstable).
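
roughly, the recompile goes like this ("somepackage" is just a
placeholder; you need a deb-src line for unstable in sources.list and
the usual build tools installed):

    # fetch the unstable source onto the stable box
    apt-get source somepackage
    # install whatever it needs to build
    apt-get build-dep somepackage
    cd somepackage-*/
    # build binary packages without signing them
    dpkg-buildpackage -rfakeroot -us -uc
    # install the result
    dpkg -i ../somepackage_*.deb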


craig

-- 
craig sanders <cas@taz.net.au>

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch


