
Re: Relative stability of Testing vs Unstable



On 07/05/2017 05:17 PM, Jason Cohen wrote:

I've been using Debian for a number of years, but my experience has
typically been with servers where I have used the Stable branch for its
reliability and security support.  However, I recently began using
Debian Stretch for my desktop and foresee a need for more frequent
software updates than the approximate 2 year cadence of the Stable
release.  While the backports repository is great, it only covers a
small subset of packages.

My question is how Debian Testing and Unstable compare in terms of
stability.  The Debian documentation suggests that Testing is more
stable than Unstable because packages are delayed by 2-10 days and can
only be promoted if no RC bugs are opened in that period [1].  Yet,
other sources indicate that Testing can stay broken for weeks due to
large transitions or the freeze before a new stable release [2].

One user described the releases this way: "Stable is never broken;
Unstable is immediately fixed; Testing is neither" [3]. A Debian
developer seemingly agreed, responding "That's because some things
might break in testing during migration.  E.g., when we upload a new
major release of something like MATE and half of the packages take a
bit longer to migrate to testing, you end up with half of the packages
of MATE in testing on the old major version and the other half being on
the new major version. This will definitely break" [4].  Chris Lamb
also seemed to agree, asking the user why he had not considered
Unstable over Testing [4].

In light of the above, it's not clear to me whether I should use
Testing or Unstable. Presumably there are situations where one is
better than the other.  From what I read, very serious bugs are likely
to be caught before making it to Testing, while Unstable benefits from
getting security updates (in the form of new upstream releases) sooner,
and is more likely to be consistent during transitions.  It would be
useful to hear more about the pros and cons of each release.
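A middle ground some desktop users adopt is to track Testing but keep Unstable available for individual packages via apt pinning. A sketch of such a configuration (the file name and the exact priority values are illustrative, not canonical):

```
# /etc/apt/preferences.d/testing-default (hypothetical file name)
# Prefer testing by default...
Package: *
Pin: release a=testing
Pin-Priority: 900

# ...but allow pulling specific packages from unstable on request.
Package: *
Pin: release a=unstable
Pin-Priority: 800
```

With both suites in sources.list, a single package can then be taken from unstable with something like 'apt-get -t unstable install foo', while routine upgrades keep following testing.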

In either case, I will be using ZFS for the root pool (I've been using
ZFS on Linux for years and I love its resiliency to hardware failure
and features) and take daily backups with bacula.  As such, I can
snapshot before an upgrade and rollback to the snapshot from an
initramfs shell if an update somehow makes the system unbootable or
otherwise causes serious breakage.  As long as I take basic precautions
such as reviewing the output of apt-listbugs and making sure that an
'apt-get dist-upgrade' doesn't want to remove half my system, am I
likely to experience frequent breakage with either release[5]?  What
other steps can I take to avoid breakage?
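The precautions described above can be sketched as a small pre-upgrade routine. This is only a sketch: the ZFS pool/dataset name (rpool/ROOT/debian) is hypothetical, and the script defaults to a dry run that prints the commands instead of executing them, so the sequence can be reviewed first.

```shell
#!/bin/sh
# Pre-upgrade routine sketch: snapshot, check bugs, simulate the upgrade.
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to run them.
set -e
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Snapshot the root dataset so a broken upgrade can be rolled back
#    from an initramfs shell. Dataset name is a placeholder.
run zfs snapshot "rpool/ROOT/debian@pre-upgrade-$(date +%Y%m%d)"

# 2. Review known release-critical bugs in the packages about to change.
run apt-listbugs list

# 3. Simulate the upgrade first; abort if apt wants to remove
#    large parts of the system.
run apt-get -s dist-upgrade
```

Running it with DRY_RUN=1 simply lists the three steps; only after reviewing them would one re-run with DRY_RUN=0.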

Thanks in advance for the information,

Jason


I'll bite!

As a user and a Linux tester for more than 20 years, I can say that everything upstream is going to have problems, including your best rolling release. Someone has to test and someone has to fix the problem, or the package gets dropped, or you just have a crappy system. Do you enjoy fixing problems and finding workarounds to see a release through to the end, and then starting to test the next release? (That is basically what you would be doing.)

Or maybe just run Testing; it is much the same. In both, the problems are seasonal: a new kernel, a new driver, new infrastructure, and so on.

For stability, the older your Debian system is, the better; the questions are whether it can handle your hardware and whether you can install the packages you need.

If you like working around problems and using the latest apps, Sid/Testing is a blast!

Cheers,
--
Jimmy Johnson

Debian Buster - KDE Plasma 5.8.7 - Intel G3220 - EXT4 at sda14
Registered Linux User #380263

