
Re: [hertzog@debian.org: Re: Woody retrospective and Sarge introspective]



On Wed, Jul 31, 2002 at 04:06:56PM +0200, Raphael Hertzog wrote:
> On Wed, Jul 31, 2002 at 11:30:48PM +1000, Anthony Towns wrote:
> > t-p-u isn't a full distribution, and doesn't get any testing. The
> > former means the testing scripts can't run on it, the latter means
> > they shouldn't.
> One day we told our users to use testing, and they did.

No, we told them that "testing" was a new distribution that contained
packages that were, generally, not much older than unstable, but that
wouldn't have any of the showstopper bugs that regularly show up in
unstable. Some of them decided, "Hey, that's useful", and switched. Others
thought, "Eh, so what", and stayed with whatever they were already using.

> We could just as well ask them to add t-p-u to their sources.list to test
> those packages ...

If they don't have a good reason to do it, they won't. If "testing+t-p-u"
doesn't serve their needs better than "testing" _and_ "unstable",
they won't use it. And it won't -- it has all the problems of unstable
(untested uploads, dependency issues), and all the problems of testing
(out of date software).
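
For concreteness, the setup being suggested here amounts to something
like this in /etc/apt/sources.list (a sketch only, assuming the mirror
carries the testing-proposed-updates suite; the hostname is just an
example):

    deb http://ftp.debian.org/debian testing main
    deb http://ftp.debian.org/debian testing-proposed-updates main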

> The recommended way to test "t-p-u" would be to have t-p-u and testing
> in the sources.list.

The way Debian gets tested is by having people use it to do stuff: work,
study, play, whatever. They'll pick the distribution that best achieves
that goal, not the one that'd be convenient for us.

Or at least, that's what I'd do; it's what most people I know would do,
and it seems perfectly sensible and reasonable.

> > Also ensuring that we can't ever transition to libraries with bumped
> > so-versions.
> I don't see why; it would probably take more time, because packages
> wouldn't get built against the new lib until the new lib is
> available in testing, but I don't see that as a big drawback.

Yes, and the new lib doesn't get into testing until all the packages that
used the old one have been rebuilt with the new one on all architectures.
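
To make the circularity concrete (the package names here are made up):

    libfoo1 (in testing)   <-  bar, baz, ... built against it
    libfoo2 (in unstable)  ->  can't enter testing until bar, baz, ...
                               have been rebuilt against it on every
                               architecture; but under your scheme those
                               rebuilds only start once libfoo2 is
                               already in testing.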

> Maybe I'm missing something else ?

You've picked a solution and you're trying to find a problem to fit it to.
That's not particularly helpful. You've also picked a solution that's a
lot of work for other people, that you don't really understand. That's
not particularly helpful, either.

Understand the problem first, then try to find a solution that changes
as _little_ as possible.

For comparison, "testing" is the result of ideas Bdale had been
ruminating on for a while that got plucked from a flamewar by Raul
and me, that then got hashed out on a private Cc list for a few weeks,
that then got documented thoroughly and explained and hashed out some
more in public, then got pretty heavily, albeit informally, revised
based on some of Wichert's thoughts from the 1999 DPL campaign, that
then got ruminated on some more, and that then had a year's worth of
hacking before hitting ftp.debian.org, and another year or so before it
actually started functioning effectively.

These really are hard problems. Expecting people to say "Sure, we'll
accept a patch" when you _don't_ know what you're talking about inside
and out is unreasonable.

> > > I'm not able to determine it; I'm just able to see that the version
> > > in unstable has gone through 2 new upstream versions while testing
> > > hasn't. I can inform him that he may want to upload the last well-tested
> > > package to testing-proposed-updates ...
> > No, what you can and should do is help fix the bugs that prevent the
> > newer version from being releasable.
> One doesn't prevent the other. There always comes a day when you have to
> say "this version is for stable, this one for unstable". This is usually
> imposed by the freeze date ... but it would be just as good if it were
> also the maintainer's decision.

That's not entirely precise. The maintainer gets to say "This version
isn't fit to be released"; she doesn't get to say "This version *is*
fit for release". She can't say the latter, because until it's been
uploaded, until it's been built on other architectures, until the effects
on other packages have been seen, until people have tried it, she just
doesn't know.

Trying to base this on "where the maintainer uploads" is wrong on two
counts: first, it requires the maintainer to have knowledge that just
isn't available to her, and second, it makes it harder to maintain any
individual package in the archive.

> > testing-proposed-updates is a solution for security updates and similar
> > high-importance / low-risk, low-frequency changes. From an archive
> > and buildd point-of-view it might be possible to make it something more
> > than that, but from a release process point-of-view, it's not.
> Now I have something feasible, but it's not ok from "the release process
> point-of-view". Can you elaborate a bit more, please?

testing-proposed-updates, as it stands, and what I'm referring to
above, is approved into testing by hand, by the RM. That's a _very_
high-overhead task -- you don't have users testing t-p-u, so the RM has
to be careful not to let untested packages get in -- and it's why t-p-u
is restricted to security updates and similar changes, not random updates
for random packages. It's quite feasible, but that's entirely due to those
restrictions. stable/proposed-updates is in a similar situation, really.

If you want to change that, you have to be very careful that you _keep_ it
feasible and maintainable while you work towards whatever you want.

Cheers,
aj

-- 
Anthony Towns <aj@humbug.org.au> <http://azure.humbug.org.au/~aj/>
I don't speak for anyone save myself. GPG signed mail preferred.

 ``If you don't do it now, you'll be one year older when you do.''
