
Re: Building against testing [was Re: "Recompile with libc6 from testing" <-- doesn't help i think!]

Daniel Jacobowitz wrote:
On Sun, Sep 15, 2002 at 12:23:32PM -0400, Christopher W. Curtis wrote:

Sorry for arriving at the party late, but I take extreme exception to this. Please compile *all* packages against testing if possible. The

No, don't.

Please, do ...

Maybe the problem lies more in the buildd system (which I am also not so familiar with). Debian developers upload source packages, and these get rebuilt by the buildds, correct? If so, perhaps it would be better to make them more intelligent -- if a package can build cleanly against testing, they should do that first. If there are problems, they try unstable. Either way, the package goes into unstable, but if it was compiled against testing, there are obviously no dependencies needed from unstable, so there is no need for it to be held up by a dependency on a new library rev (which may be incremented weekly, keeping the otherwise working package in unstable in perpetuity).
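A rough sketch of that fallback, in illustrative Python -- the `choose_build` name and the `try_build` callback are hypothetical stand-ins, not real buildd/wanna-build code:

```python
def choose_build(try_build):
    """Attempt a build against testing first, falling back to unstable.

    try_build(dist) is a caller-supplied callable that returns True on a
    clean build against the named distribution.  Either way, the resulting
    binaries would still be uploaded into unstable; a build that succeeded
    against testing simply carries no unstable-only dependencies.
    """
    for dist in ("testing", "unstable"):
        if try_build(dist):
            return dist
    return None  # failed against both; a build-failure report would be filed

# Example: a package that fails to build against testing but
# builds cleanly against unstable.
print(choose_build(lambda dist: dist == "unstable"))  # prints "unstable"
```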

It's not a "problem", it's a design.  Think about it.  I upload a new
package that builds against the version in testing - but does NOT build
against the version in unstable.  A dozen buildds get it and build it
and it goes into testing.  But it's already unbuildable in unstable and
has been since before I uploaded it.

At your request, I have given this more thought. At first what you said made sense, but on closer scrutiny, it's a red herring. My initial points, since edited out, stated:

*) Developers should run unstable
*) Builds should be done against testing
*) Uploads are always into unstable

Now, given your example (you said there are others; I'm interested to hear them), here is basically what you said [paraphrased]:

"But if we compile against testing to make testing more (stable/up-to-date), then unstable may become unstable."

Sounds rather self-evident, doesn't it?

Further, we have to take into account my first and last points. Firstly, if the developer is running unstable, it is a reasonable assumption that they have done /some/ semblance of testing (against unstable, by definition) before uploading. Secondly, you make it sound as though the upload would go directly into testing, which is certainly not what I said -- uploads would go into unstable, where they could be further tested before migrating into testing, free of dependencies on unstable (because they were compiled against testing). Of course this is not foolproof (to reiterate), but it is perhaps better than what we have now.

This is no panacea -- there are certainly logistical nightmares, but no more so than there are now. A developer may be running testing (or even stable) and uploading into unstable already. Rather than saying how dangerous this is because it may break unstable, simply "issue a recommendation" (whatever that means) that developers run unstable, and require that automatic builds be done against testing, since that is the next stable target. I think that if unstable is unstable, that is acceptable -- it's a brief waylay on the road into the slushy, ready-to-freeze-and-release testing.

That's just one example.  There are others.  The model of development
we use requires that (except for specific updates to other releases)
all new packages build against unstable.  The system only works if
packages progress into testing at a reasonable rate.

This seems contradictory ... if the system only works by getting things into testing at a reasonable rate, shouldn't that be the priority, rather than the stability of unstable? Surely compiling against testing where possible will serve that end faster than depending on libfoo4-2 in unstable to replace libfoo4-1 in testing -- especially if a bug is filed against libfoo4-2, the package gets locked while the developer readies libfoo4-3, and then waits even further for no more bugs to be filed once -3 is uploaded, when the package works perfectly fine with the libfoo4-1 already in testing.
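To make the holdup concrete, here is a tiny illustrative model (the libfoo4 names are from the example above; the `can_migrate` helper is a hypothetical simplification, not the real testing-migration scripts):

```python
def can_migrate(package_deps, in_testing):
    """A package may enter testing only if every dependency is already there."""
    return all(dep in in_testing for dep in package_deps)

testing = {"libfoo4-1"}  # testing still carries the older library

# Built against unstable: the package picks up a dependency on libfoo4-2,
# which is itself stuck in unstable, so the package is held up.
print(can_migrate({"libfoo4-2"}, testing))  # prints False

# Built against testing: the dependency is the libfoo4-1 already in
# testing, so nothing in unstable blocks the migration.
print(can_migrate({"libfoo4-1"}, testing))  # prints True
```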

