
Re: distinguish between "core" and "main"?




Hi Neil,

On 06/04/11 19:01, Neil Williams wrote:
> 
> Testing compatibility is the larger problem. Automated tests can only
> go so far. Dependencies are one thing; bugs which arise because one
> setup is using a version which has already been replaced in testing
> are another.
> 

As a consumer I would like to get a system that stays up to date
(as today's testing does), but with a stable core package set: I
would like to install the most recent stable core system and use
testing for everything else.
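For what it's worth, APT pinning can already approximate this
today. A minimal sketch of an /etc/apt/preferences fragment, with
both stable and testing in sources.list (the exact Pin-Priority
values are my own illustrative assumptions):

```
# Prefer stable by default ...
Package: *
Pin: release a=stable
Pin-Priority: 700

# ... but keep packages installed from testing upgradable within testing
Package: *
Pin: release a=testing
Pin-Priority: 650
```

With this, "apt-get install -t testing foo" pulls foo from testing
while everything else stays on stable.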

All these APIs and dynamic libraries are meant to provide backward
compatibility, and we also have versioned package names to work
around compatibility problems. But we don't really rely on any of
this, except for updates within testing or unstable.

>>> If you sincerely want the Debian system which has had the most testing
>>> of all possible variants and which Debian can honestly describe as "the
>>> most likely candidate for a system where packages work together as
>>> nicely as it is practical to achieve" you MUST use stable and then keep
>>> that up to date with the stable point releases and security updates.
>>
>> The problem with this is Debian's long release cycle (2+ years, as it
>> seems). It's difficult to get help from upstream if the source code in
> 
> Then use chroots, as explained in my first message but which you appear
> to have ignored.
> 

I had probably misunderstood your "Building stuff then takes place
in chroots, e.g. using pbuilder" - I had associated pbuilder only
with building packages.
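To make sure we mean the same thing: the usual pbuilder workflow
is roughly the following (the package name is made up, and the
exact options are from memory, so please check pbuilder(8) before
relying on them):

```
# create a sid chroot once (kept as a tarball under /var/cache/pbuilder)
sudo pbuilder create --distribution sid

# refresh the chroot before a build session
sudo pbuilder update

# build a source package inside a clean, throwaway copy of the chroot
sudo pbuilder build somepackage_1.0-1.dsc

# or, from within an unpacked source tree
pdebuild
```

The point being that the chroot is reset for every build, so the
host system itself can stay on stable.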

> I do this every single working day. I run a number of boxes using
> stable - each has pbuilder support for sid and most have pdebuild-cross
> support for cross-building for armel based on stable or unstable.
> 

Instead of pbuilder I am using virtual machines (KVM and VServer)
with testing today. Their biggest advantages (as with pbuilder, I
would guess) are that they hide the real hardware and keep things
separate from each other.
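As a concrete example, a throwaway testing guest on such a box is
typically just something like this (image name, sizes and the ISO
file name are made up for illustration):

```
# create a disk image for a disposable guest
qemu-img create -f qcow2 testing.img 10G

# boot it with KVM acceleration, 1 GB RAM and the installer attached
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=testing.img,format=qcow2 \
    -cdrom debian-testing-netinst.iso
```

Once installed, the guest runs testing while the host stays
untouched.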

The downside is that everything gets more complex, but not
necessarily more stable: you have even more systems to run and more
packages to install, not to mention that updates of the
virtualization server itself are more difficult to do.

My virtualization servers themselves were running testing, too,
but since Squeeze was released, testing has been changing rapidly.
At the moment I cannot rely on testing for server installations.

> The long release cycle arises from the very thing you appear to cherish
> - the quality and stability of the stable release. It takes time and
> people to generate quality. That's the entire problem - there are not
> enough people to do that testing twice over.
> 
> 
> Testers = people. There are as many permutations as there are testers -
> or if you really want the figures, work out how many possible
> permutations there are for installing 1,500 packages out of a total
> selection of 31,000. Then find people to test each permutation on a
> daily, regular usage basis - ensuring that each person fully tests
> EVERY application in their particular set.
> 

Understood. If you reduce the number of packages to be released by
focusing on a core package set of 1,000 or 1,500 packages instead
of 30,000+, then you can do more rapid releases because there are
fewer packages that have to meet the release criteria.

Of course this doesn't make the 29,000+ packages outside of the
proposed core repository go away. But I think we already agreed to
use testing for installations of "non-core" packages.

I would guess most testing happens in unstable anyway, before a
package gets promoted to testing.


Regards

Harri