
Re: Debian reliability growth



Hi,

sorry for taking so long to comment on this message..

On Tue, May 27, 2003 at 11:41:42PM +0200, Yann Dirson wrote:
> 
> Another problem that was not mentioned in this thread, but which I
> see as being related, is the repeated bottlenecks we currently have for
> entrance into testing.
> 
> How does it relate to quality ?  Well, there are several ways:
> 
> - some packages are held in sid because of completely unrelated
> problems.  This makes packages enter testing later than they should,
> and so they're less tested.
> 
> - one particular case of the above situation is when we must wait for
> all dependencies of a given lib to be rebuilt and free from RC bugs
> (e.g. for the g++ 3.2 ABI transition, bock is still holding libgc, see
> #180969 and #193549), for the whole set to be promoted.

Yes, very true. In general it seems that packages with many dependencies
can take a very long time to migrate to testing. Unfortunately I don't see
any blindingly obvious solution to this problem. OTOH, if such a solution
existed, it would surely have been implemented long ago. :)

> We could allow promotion into "testing"-as-described-above mostly on a
> per-package basis; only _source_ packages would be promoted (provided
> the build-deps are satisfied), and would then be automatically rebuilt
> against the libs/whatever in "testing".
> 
> That would filter from "testing" the vast majority of the bugs that
> are currently filtered, while virtually removing the bottlenecks
> caused by commonly-used support packages (glibc, libgcc's et al).  As
> a consequence, bugfixes go faster into "testing", and the overall
> quality in "testing" (and thus the quality in "stable") is raised.

This sounds like a good idea, yes. It would also solve the (perceived?)
problem of having software in stable compiled with different compiler
versions and against different libraries. E.g. package X is compiled with
gcc 2.95.3 against libc 2.2.4, but then finds its way into stable, where it
is linked at runtime against a libc 2.2.5 that was compiled with gcc 2.95.4.
I don't know how realistic a problem this is, but I guess it increases the
chance of stumbling on weird linker/compiler bugs etc.?

The big problem with this scheme, as I see it, is what happens when a new
version of a library enters "testing". As I understand your explanation,
this would require that every package depending on the library also be
recompiled. Take for example a new version of libc. Certainly a few big
honking SMP machines can recompile essentially the entire archive in a
reasonable time for i386, ia64 and other high-performance architectures.
But what about e.g. m68k? Wouldn't you need a room full of those machines
to recompile everything in a reasonable time?
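
To get a rough feeling for the scale of such a rebuild, one could count the
reverse dependencies of the library in question. A minimal sketch, assuming
an apt-cache recent enough to support "rdepends" and an up-to-date package
list (libc6 is only an example, and this counts binary packages, which would
still have to be mapped back to their source packages):

    # count binary packages declaring a dependency on libc6
    apt-cache rdepends libc6 | tail -n +3 | sort -u | wc -l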

Another problem could be incompatibilities between unstable and testing.
Say the testing scripts decide to move package foo version x.y.z from
unstable into testing (because, among other things, it builds just fine with
the toolchain and libraries available in unstable at the time it was
uploaded). But what if foo x.y.z fails to build with the toolchain and
libraries in testing? That would put an extra burden on the maintainer, who
has to figure out what went wrong.
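
One way to soften that burden would be to let the maintainer (or an
autobuilder) try the build in a chroot populated from testing before the
package is promoted. A rough sketch using pbuilder, assuming it is
installed ("foo_x.y.z" is just the hypothetical package from above):

    # create a base chroot tarball populated from testing
    sudo pbuilder create --distribution testing \
        --basetgz /var/cache/pbuilder/testing.tgz

    # try building foo against testing's toolchain and libraries
    sudo pbuilder build --basetgz /var/cache/pbuilder/testing.tgz \
        foo_x.y.z.dsc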

> Well, you noticed, that's not Janne's "release" after "testing".
> That's just an additional stage in the pipeline.  But being automated,
> it would only require more computing power and archive space, and not
> much more manpower (apart from the work needed to get this
> up-and-running at first).

I don't think the extra archive space requirement is such a big problem;
storage is getting cheaper all the time. Given enough computing power (which
I think is the real bottleneck here), your scheme could be workable.

To defend my own proposal of two supported releases at the same time: I
don't think it requires much more manpower than the current situation, in
the sense that security updates are still being made for potato. My proposal
was mainly to formalize this while also trying to please the
"newest-and-latest" crowd. That last part is not essential to my argument
anyway. But I think it's important to give users ample time to upgrade to a
new release. Yes, in many cases it's as simple as "apt-get dist-upgrade" and
going to dinner, but many users will certainly want to make more extensive
preparations. For example, many academic institutions prefer to do upgrades
during the summer holidays, when the students are away. A commitment to
producing security updates for the old stable release for one year after a
new release would go a long way toward this goal, I think.
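
(For completeness: the "simple" upgrade path amounts to roughly the
following; the sources.list line is only an example.)

    # point /etc/apt/sources.list at the new stable release, e.g.
    #   deb http://ftp.debian.org/debian woody main
    # then, as root:
    apt-get update
    apt-get dist-upgrade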

Have a nice day,
--
Janne


