Re: how about a real unstable?



On Thu, Mar 30, 2000 at 01:30:58PM -0600, Zed Pobre wrote:
>     This started me thinking.  Someone earlier lamented the
> difficulties in using experimental.  I would like to see experimental
> moved into the same tree as stable, frozen, unstable and have a
> Packages file generated.

experimental already has a Packages file generated, and where it is in
the tree is more or less irrelevant.

> New packages (and perhaps all new upstream
> releases) would be autoinstalled into experimental until they had been
> there for a month

This gets rid of the main use of experimental, which is to distinguish
packages that'll probably destroy your system from ones that shouldn't
but might because, well, anything's *possible*.

> (or someone could get to the overrides file for
> unstable, whichever is longer), and packages with Grave or worse bugs
> open longer than a week (or maybe 2 weeks) would be moved there.

A different way of doing it is to leave unstable as it is (ie, new packages
get lumped into unstable whether they work or not, assuming they're not
/likely/ to trash your system), and instead add a new distribution in between
stable and unstable that has some of the properties of stable (ie, packages
have more or less stabilised, they've been tested for a while, they've got
few/no RC bugs, they work on all architectures, and they don't have huge
dependency problems).

Particularly the last of these is a fairly complicated technical problem
to solve. Exercise for the interested reader: try it at home. Implement your
solution. Time it. Try to optimise it. (20pts)
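For the merely curious, the criteria above (other than the hard
dependency-consistency part) can be sketched roughly as follows. This is a
toy model, not the real scripts: the class, function names, and thresholds
are all made up for illustration, and the genuinely tricky check --- whether
promoting a package leaves every other package in testing still installable
--- is deliberately left out.

```python
from dataclasses import dataclass

# Architectures a package must be built for; illustrative list only.
ARCHES = {"i386", "m68k", "sparc", "alpha", "powerpc", "arm"}

@dataclass
class Candidate:
    name: str
    days_in_unstable: int  # how long this version has sat in unstable
    rc_bugs: int           # release-critical bugs open against this version
    built_on: set          # architectures with an up-to-date binary

def ready_to_migrate(pkg, min_age=14):
    """A version is a migration candidate only if it has aged in
    unstable, has no RC bugs, and has been built everywhere.
    (The installability check against testing is omitted here;
    that's the complicated part.)"""
    if pkg.days_in_unstable < min_age:
        return False
    if pkg.rc_bugs > 0:
        return False
    if not ARCHES <= pkg.built_on:
        return False
    return True
```

The interesting work is everything this sketch punts on: deciding, given a
set of candidates, which subset can move together without breaking any
dependencies in the target distribution.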

For the less interested reader, point your browser at
http://auric.debian.org/~ajt/. For the reader who doesn't give a stuff and
just wants to cut to the chase, point apt at, hopefully,
	deb http://auric.debian.org/~ajt/ testing main
.

It's still very alpha, and relies heavily on the autobuilders being up
to date against woody, which isn't the case while we're frozen. As such,
please be wary of mirroring this: when we think it's really worth the
effort of mirroring it'll probably go into /pub/debian/dists, and until
then, it's quite probably a waste of bandwidth.

Source is theoretically available, but only by ssh'ing to auric and poking
around in my home directory.

> This
> would allow lintian checks to become a prerequisite for unstable,
> especially now that developers can write their own overrides for
> special cases. 

Someone would need to go through all the lintian checks and see which
ones are actually worth making RC. Not all of them are by a long shot.
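To make that concrete, the triage could end up as something like the sketch
below: scan lintian's one-tag-per-line report and flag only a hand-picked
subset of error tags as release-critical. The tag names in the set are just
examples I've picked for illustration; the actual list is exactly the thing
someone would have to sit down and decide.

```python
# Example subset of lintian error tags deemed RC; illustrative only.
RC_TAGS = {"no-copyright-file", "debian-changelog-file-missing"}

def rc_problems(lintian_output):
    """Return the RC-worthy tags found in a lintian report.

    Lines look like:  "E: package: tag-name optional extra info"
    Only E: (error) lines whose tag is in RC_TAGS count."""
    found = set()
    for line in lintian_output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == "E:":
            tag = parts[2]
            if tag in RC_TAGS:
                found.add(tag)
    return found
```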

> > personally, i'm not going to hold my breath waiting for the stable
> > release cycle to speed up. it's a big job, and one that grows enormously
> > for every release. we had around 2000 packages for slink. we now have
> > approx 4000 for potato....and already nearly 5000 for woody - and potato
> > isn't even out the door!
>     One of the things that might help this would be continuous freeze.
> As soon as a release is made, whatever is in unstable at that moment
> is frozen for the next release.  This will become more feasible as
> package graduation becomes more refined, I think.

Note that I, at least, refuse to fork my packages during the freeze. It's
just too painful to work with.

>     I've been around for less than half that, but I do remember a
> nasty bash/libreadline bug that flattened a number of systems that I
> would not have wanted to encounter on a production system, as well as
> a few X problems.  Furthermore, I would not want to deal with an
> application server running unstable.  While I admit that the quality
> of Debian packages is generally quite high even in unstable, I would
> remain rather wary of recommending it for production servers.

There was a cute grep bug a while ago too, that made grep simply not work
if you specified the files to grep on the command line (or the other
way around, I forget). There are lots of cute bugs around in unstable
now and then, but they're generally easy to recover from if you have a
clue. If you don't want to have to have a clue for production servers
(and I for one don't), well, that's what stable's for.

Possibly, it'll also be what `testing' will be for, up to a point, when and
if it actually works.

BTW, I've been thinking recently. The original point of `testing' was
to make it easier for us to release (you've got a whole semi-unstable
distribution that's up-to-date and more or less bugfree from the word
go. No more bug horizons, just a few finishing touches, some organised
testing on the final product, and voila!), and hence make it easier for
us to release more often.

I wonder, though, if that's really a good idea. At some point, frequent
releases are just a downright pain, even with Debian's fetish towards
in-place and partial upgradability. Maybe it'd be better to just keep
releasing once-a-year or so (with any extra security-fixes), and let
people who really want new packages upgrade to testing. As opposed to
making a release every three, four or six months, say.

And, of course, none of this solves our current problem, which is that
there are just too many *new* RC bugs. 140 new, previously unreported,
RC bugs in the last fortnight or so. Scary.

Cheers,
aj

-- 
Anthony Towns <aj@humbug.org.au> <http://azure.humbug.org.au/~aj/>
I don't speak for anyone save myself. GPG encrypted mail preferred.

 ``The thing is: trying to be too generic is EVIL. It's stupid, it 
        results in slower code, and it results in more bugs.''
                                        -- Linus Torvalds
