Re: Some observations regarding the progress towards Debian 3.1
On Tue, Nov 18, 2003 at 10:54:00PM +0100, Yann Dirson wrote:
> On Tue, Nov 18, 2003 at 07:29:29PM +0100, Adrian Bunk wrote:
> > There are some good suggestions in your proposal, e.g. you suggest to
> > check whether the build dependencies are fulfilled. The lack of checking
> > for build dependencies in the current testing scripts often leads to
> > packages in testing you can't build inside testing.
> Sure, and that can probably be added to testing scripts right now,
> can't it ?
Although I've never looked at the testing scripts, I doubt it would be a
big problem to implement this.
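A rough sketch of such a build-dependency check (a toy illustration, not
the actual testing scripts; the package names are invented, and real
Build-Depends fields also carry version constraints and architecture
qualifiers that this ignores):

```python
# Toy check: are a source package's Build-Depends satisfiable with the
# binary packages currently in testing? All names below are invented.

def build_deps_satisfiable(build_depends, testing_packages):
    """Return the build dependencies that testing cannot satisfy.

    build_depends: list of dependency strings; alternatives are
    separated by '|', as in a Build-Depends field.
    testing_packages: set of binary package names present in testing.
    """
    missing = []
    for dep in build_depends:
        alternatives = [alt.strip() for alt in dep.split("|")]
        if not any(alt in testing_packages for alt in alternatives):
            missing.append(dep)
    return missing

testing = {"gcc", "libc6-dev", "debhelper"}
deps = ["debhelper", "bison | byacc", "libfoo-dev"]
print(build_deps_satisfiable(deps, testing))
# -> ['bison | byacc', 'libfoo-dev']
```

A real implementation would have to parse the Packages/Sources files and
honour version relations, but the basic test is this simple set lookup.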
> > But you have to be aware that your proposal only works for the cases
> > where the programs actually compile and work with older versions of
> > libraries, the big tasks like getting KDE 3, GNOME 2 or a more recent
> > Mozilla into testing aren't affected by your suggestion.
> Yes. We could think about having a fast low-cost buildd (i.e. i386)
> initiating the build that will eventually migrate, and build arch-all
> debs. Only if that succeeds will other buildds start their work. If
> that initial build fails, then the unstable version has to be flagged
> on-hold, and attempted again when one of its builddeps has been promoted.
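The gating idea quoted above could be sketched like this (my reading of
the proposal, not real buildd code; try_build and the state names are
invented):

```python
# Sketch: a cheap i386 build acts as gatekeeper. Only if it succeeds do
# the remaining architectures build; otherwise the unstable version is
# flagged on-hold until one of its build-deps has been promoted.

def schedule_builds(package, try_build, other_archs):
    """try_build(package, arch) -> bool (did the build succeed?)."""
    if not try_build(package, "i386"):
        return "on-hold"          # retry after a build-dep migrates
    for arch in other_archs:      # gate passed: start the real buildds
        try_build(package, arch)
    return "building"

print(schedule_builds("kdelibs", lambda p, a: True, ["powerpc", "sparc"]))
# -> building
```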
Let me give a more concrete example of the problems I meant:
KDE 3 needed a long time until it was hinted into testing.
Due to the dependencies, it wasn't possible for KDE 3 to enter testing
without replacing KDE 2 completely (e.g. you can't recompile most
programs from KDE 3 with the KDE libs from KDE 2).
Your proposal wouldn't have been able to shorten the move of KDE 3 into
testing by a single day.
> But that would not handle the "and work" part of your statement. For
> this we'd need to be able to declare testsuites to be run (this has
> been discussed recently, IIRC, although I missed the thread).
Testsuites must first be written, and testsuites for GUI programs are
even harder to write.
> But more importantly, if a program does not "compile and work with
> older versions", then it's a case of insufficiently-narrowed
> build-dep, and we'll have the same type of breakage that we have today
> with insufficiently-narrowed deps. Could anyone using "testing" (how
> many people use testing ?) share their feelings about the frequency of
> such breakages, and how it has evolved since testing was introduced ?
> give a hint whether this is a showstopper or not.
> But that last point raises another issue: does anyone really use
> testing ? Would anyone use pre-testing after all ?
> And if we can make testing usable enough so that people do use it,
> what incentive would there be to use pre-testing ?
These are good questions.
> > There might be new problems e.g. with inter-library dependencies for
> > libraries without versioned symbols if your proposal would be
> > implemented.
> Hm ? I'm not sure I understand what the problem you mention is.
Assume mypackage has:

  Depends: libfoo0, libbar1

and that libfoo0 in unstable is linked against libbar1. If you recompile
libfoo0 for testing with libbar0, the following is still allowed through
the dependencies:

  Depends: libfoo0, libbar1

but libfoo0 now pulls in libbar0, so the program in mypackage is in this
situation linked with two different so-versions of libbar at the same
time. Without versioned symbols in libbar, it's not unlikely this
situation will show up e.g. via strange crashes in the program.
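The effect can be illustrated with a toy dependency walk (the library
names and link data are invented, standing in for what ldd would report):

```python
# Toy detector for the problem above: walk a program's shared-library
# closure and flag libraries present in two different so-versions.
from collections import defaultdict

# invented stand-in for ldd output: object -> direct NEEDED entries
links = {
    "myprogram": ["libfoo.so.0", "libbar.so.1"],
    "libfoo.so.0": ["libbar.so.0"],  # libfoo0 rebuilt against old libbar
    "libbar.so.0": [],
    "libbar.so.1": [],
}

def closure(binary):
    """All shared objects transitively linked into the process."""
    seen, stack = set(), [binary]
    while stack:
        for dep in links.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def soname_conflicts(binary):
    """Map library name -> so-versions, for libraries where more than
    one so-version ends up in the same process image."""
    by_name = defaultdict(set)
    for obj in closure(binary):
        name, _, version = obj.partition(".so.")
        by_name[name].add(version)
    return {n: sorted(v) for n, v in by_name.items() if len(v) > 1}

print(soname_conflicts("myprogram"))
# -> {'libbar': ['0', '1']}
```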
libpng and libssl are examples of libraries where this problem has
already been observed (not really related to testing; the worst problems
were in unstable).
> > > There _are_ many things to think about, but it may be worth to
> > > investigate it, and see how we could handle the potential problems we
> > > can think of.
> > >...
> > There's also a different discussion that should take place:
> > Is testing actually worth the effort?
> > Testing has its benefits, e.g. it catches build errors and dependency
> > problems.
> So what about looking for solutions for the problems ? If we drop
> testing, what do we do instead ? Go back to evolving-unstable ->
> frozen -> stable ?
If it turns out that testing isn't worth the effort, this is one
possible option.
> > testing with its lack of security fixes, approx. 500 RC bugs and daily
> > changing packages is not usable for production systems.
> What's the problem with daily changing packages ? By nature, only
> different packages can change each day. That could make it a good
> compromise between stable and unstable, e.g. for people in need of
> up-to-date desktops. But precisely, one of the problems for those
> people, is that _some_ packages _do_not_ change rapidly enough...
Consider a heterogeneous environment, e.g. in a small company, that
consists of some servers and some workstations, and you want a
homogeneous software setup to make maintenance easier.
Some users need more recent applications (e.g. a more recent gcc).
If you upgrade e.g. from Debian 2.2 to Debian 3.0 you might start with
the least important machine, check whether the upgrade went well, then
go to the next machine, then to the first server, check whether the
upgrade went well...
After upgrading some machines you decide to continue one week later.
Two months later, you want to install an additional package on one of
the machines.
Three months later, you want to install an urgent ssh security update.
Now consider the same situations when upgrading from Debian 3.0 to
testing:
Every one of the last three situations might give you completely new
problems, e.g. installing a new program or a security update might pull
a new libc6 that breaks your IMAP server.
Backports are an often proposed solution for such situations, but e.g. I
have yet to see a KDE 3 backport that handled the g++ transition
correctly, in the sense that a seamless upgrade to Debian 3.1 will later
be possible.
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed