Re: pilot-link in Sid and Sarge: Much bigger question



Roland Mas wrote:
> To me, you seem to express the view that improving Debian means
> throwing away our release process, including the way testing works.

Then I have expressed myself unclearly. My apologies. I think testing is a
great idea and a most necessary institution. In fact, I wish we had more than
one level of testing.

My view is simply that the current system has weaknesses that merit discussion,
in the hope of finding improvements. I have deliberately not gone into possible
solutions yet, simply because nobody has yet agreed that there is a problem to
solve in the first place! (Note: the lack of a solution does not equal the lack
of a problem.)

However, since you could almost stretch yourself to hypothetically
acknowledging that we're not quite in heaven yet, I'll say thanks and fire off
some thoughts for discussion:


One problem I see is the enforced binary compatibility. As long as a program in
testing cannot be upgraded without also upgrading to the latest shlib'ed
versions of all the libraries it uses (which are notoriously stuck in
unstable), bug fixes for individual programs don't reach testing.
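
A sketch of the mechanism, with a hypothetical libfoo standing in for any real
library: after an upload to unstable, the library's shlibs file might declare

    libfoo 2 libfoo2 (>= 2.3.7-1)

and dpkg-shlibdeps then stamps every program rebuilt against it with

    Depends: libfoo2 (>= 2.3.7-1)

If 2.3.7-1 exists only in unstable, the rebuilt program cannot enter testing
until libfoo2 does, no matter how trivial the program's own changes were.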

After all, it's sarge that's the release candidate, right? Not sid. So why is
sid allowed to dictate dependencies that sarge must conform to?

This is one reason why testing is hopelessly behind on small fixes such as
security patches. A security patch can be as small as changing strcpy to
strncpy in two lines of code. Yet that simple fix will get stuck in unstable if
any of the libraries the fixed program uses has updated its shlib dependency to
an unstable version.
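
For illustration, the kind of two-line fix meant here, as a made-up C example
(not a patch from any actual package):

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char buf[64];
        const char *input = (argc > 1) ? argv[1] : "";

        /* Before the patch: overflowed buf when input exceeded 63 bytes: */
        /* strcpy(buf, input); */

        /* After the patch: copy at most 63 bytes and always NUL-terminate. */
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        printf("%s\n", buf);
        return 0;
    }

The fix touches nothing but the program itself, yet it still has to wait for
every library the program links against.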

Kill holy cow #1: Binary compatibility. Testing is a separate release, treat it
as such. Branch it off and set up its own buildd server. Build packages in
testing with tools and libraries from testing. Don't use binary packages from
unstable; recompile them. Make sarge, not sid, the reference environment for
sarge.
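
Roughly what such a self-hosted build might look like if done by hand today,
using only real tools (the package name is hypothetical, and deb-src lines for
sarge are assumed in the chroot's sources.list):

    # Create a build environment containing nothing but sarge:
    debootstrap sarge /srv/chroot/sarge http://ftp.debian.org/debian
    chroot /srv/chroot/sarge

    # Inside the chroot, fetch and build against sarge's libraries only:
    apt-get source somepackage
    apt-get build-dep somepackage
    cd somepackage-*/
    dpkg-buildpackage -us -uc

The resulting binaries depend on library versions that are already in sarge, so
they could migrate without dragging anything out of unstable.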


A second problem is enforced platform compatibility. It creates a
lowest-common-denominator problem of the kind so often frowned upon in other
situations. Any one platform can keep all the others from progressing. Let's
take arm as an example. How many people are running the arm port of Debian,
compared to i386? Is it really in the best interest of the project to keep
lots of software from being tested due to build errors on a single port?

Yes, I too want each stable release to work on all official ports. But what is
the most efficient way to get there? Surely testing software on the ports it
works on is better than not testing it at all?

Kill holy cow #2: All-port concurrency in testing. Make a testing-i386
release, where admittance rule #2 is replaced with "It must be compiled and up
to date on i386." Statistics posted to this list earlier show that i386 has
the lowest number of build problems of all ports. And I think it's safe to say
that it has the highest number of users. Combine the two, and you get stuff
tested. A lot.


Both of these options naturally come with several drawbacks and
complications. All models have them, including the current one. But somewhere
there's a breaking point where the advantages outweigh the drawbacks and you
get a better system, producing better software.

> people prefer bitching and complaining about testing being late and stable
> being old, rather than helping fixing bugs.

Perhaps one reason is that fixing enough bugs to get stuff into testing is
currently a whack-a-mole job? With so many dependencies changing all the time,
there is no solid ground. Once you've fixed the showstoppers in package A, a
new upload of package B breaks something, you fix that, then C uploads, you fix
that, then a new version of A pops up again...

We're trying to stuff everything in at once. It can be done, but it's very
difficult and requires a good deal of luck or a freeze.

> The problem is not that this process requires software to be tested and
> declared relatively bug-free before they are admitted into testing.  The
> problem is that the software is not even remotely bug-free.

I have addressed this once already, and it is only half the truth. Packages
also get stuck in unstable not because they are buggy, but simply because one
of their dependencies evolved so much that its interface changed, breaking
binary compatibility.

> And it is so at least partly because people try to put new versions of
> software into Debian, which means the system integration and the
> synchronisation of programs and libraries are an uphill battle.  And it is
> so at least partly because people complain as soon as there's a new upstream
> release, thus delaying the testing of the whole system.

Frequent uploads would not be as much of a problem if each new upload were not
immediately used for testing other packages. See whack-a-mole above.

Distributed development is fast. That's one of the many benefits of Free
Software. Any system designed to harness this flow of creativity must take the
volatility into account and take advantage of it, not treat it as a problem.

-- 
Björn


