
Re: Drop testing



Hello

I may end up writing exactly the same thing as Steve Langasek, but I just
have to explain why I would like to keep testing.

On Sat, Oct 23, 2004 at 12:56:36PM +0200, Eduard Bloch wrote:
> #include <hallo.h>
> 
> > > Some improvements have already been proposed by Eduard Bloch and
> > > Adrian Bunk: freezing unstable while keeping testing.
> > 
> > Jerome, please, you could have asked me. I prepare an internal GR draft
> > for exactly this issue, but it is to be made public on the day of the
> > release, and better not before. We should concentrate on making the
> > Sarge release ready, NOW. Do not start another flamewar.
> 
> To hell with it, the avalanche has already started.
> 
> The attached paper was written down as a GR draft, but if the problem
> can be solved peacefully by a consensus on d-d and in agreement with the
> release manager(s), this is the way to go. Otherwise, take it as a real
> GR draft which should be completed later.
> 
> It may sound a bit radical, but the core points have been mentioned in the
> thread already. I suggest doing it in a more radical way:
> 
>  - unstable lockdown in the freeze
>  - drop Testing and concentrate on work instead of wasting time on
>    synching stuff. This eliminates the need for testing-security. See
>    the last part of the paper for details.

Will the sync problems simply cease to exist if we drop testing?

>  - about the "filtering updates for frozen" - yes, some additional
>    manpower is required but that work must be done. The problems with
>    Testing synchronisation are not of a purely technical nature, they are
>    a social problem, and so they should be solved by people and not by
>    scripts.

So this means that we need more FTP-maintainers, or more effort spent
by them.

> Regards,
> Eduard.
> -- 
> A farmer between two lawyers is like a fish between two cats.
> 		-- proverb

> ABSTRACT
> --------
> 
> I propose that the Debian project resolve these questions:
> 
>  - should we continue using Testing and the gradual freeze model?
>  - should we change the freeze model to the tristate model (similar to the one
>    from the old days)?
> 
> Possible choices:
> 
>  - stop using the Testing distribution and change to the Tristate freeze model
>  - stop using Testing; the release manager should develop the freeze model
>  - continue using the current release model
> 
> RATIONALE
> ---------
> 
> Why is testing bad?
> ==================
> 
> One of the first and best-known things: it dumps a lot of outdated packages on
> our users' heads! Yes, testing sounds like a good compromise for people
> who want to have bleeding edge software without taking the risk, but we saw in
> the past years that testing created more problems than it was really worth.

Well, testing is not a perfect solution for people to use, as it does not
contain security updates and lacks updated packages. And no, I do not think
that is a bad thing. I actually think it is exactly what you can expect.
If people want a secure system they should use stable, and if they want
bleeding edge they should use unstable. That is their decision and not
ours. Testing is a good thing to use for development and not something
you should see as a form of stable. If that were the case it should be named
prerelease or something.
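
(Just to illustrate the point: the choice between them is a single
sources.list line, so users really can make that decision themselves. The
mirror name below is only an example.)

  # pick exactly one of these, depending on the trade-off you want
  deb http://ftp.debian.org/debian stable main
  deb http://ftp.debian.org/debian testing main
  deb http://ftp.debian.org/debian unstable main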

As a testbed, testing is a good thing. You will not have the most critical
known bugs in the system, so you can keep on testing things. We also have
a good mixture of people using unstable and testing (I know many people who
use unstable and also many who use testing), so that is not a problem.

> The only advantage guaranteed by its structure was a promise to keep an
> installable set of packages. Which does not promise a useful/bugfree piece

Yes, and this is a very good thing, as we did have problems with this before
testing was created.

> of software. I think we should be fair to our users when we tell them to
> report bugs - we should tell all of them that reporting bugs in Sarge is
> often duplicated work because the problem has been fixed in Unstable.  I

I do not know if you have noticed, but people tend to report bugs against
whatever they are using. I get a lot of bugs on packages in stable, testing
and unstable, sometimes even experimental. If we drop testing we will still
keep getting bugs from people who use unstable (and not always a fully updated
one). I do not see the big problem with this. Replying to the mail and closing
the bug is very easy and does not take much time. I maintain about 60 packages
(in my spare time) and I do not see this as a big problem.
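
(For the record, and with an invented bug number: closing such a report is a
single mail to the BTS.)

  To: 123456-done@bugs.debian.org
  Subject: already fixed

  This was fixed some versions ago in unstable, so I am closing the report.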

> think we should be fair to our users - we should not put a lot of buggy
> software onto their heads (promising some bogus stability at the same time),
> without having a working security upgrade system. Without giving the

I do not really see your point. I think it is quite clearly stated that
neither unstable nor testing has security support. Should we restrict people
from running unstable too?

> individual developer a chance to fix bugs in the development distribution
> quickly enough. Without having automatic ways to integrate an update into
> every arch in the Debian distribution.

I do not understand what you want to say by this. You can get a fix out
quickly if it is really important: raise the severity of the bug and upload
the fix with a higher urgency. If you build things that do not build on all
arches, that should be considered RC, so the package should never enter
testing anyway. The problem that is left is if you depend on packages with
RC bugs, but that is another problem. The RC bugs are still there in
unstable. We can discuss this rule, as it may need to be relaxed.
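
(A concrete sketch, again with an invented bug number: raising the severity
is just a mail to the BTS control bot.)

  To: control@bugs.debian.org

  severity 123456 important
  thanks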

> Some words about the history
> ============================
> 
> Some years ago, before almost half of the current developers joined, Testing
> did not exist in its current form. Instead, the release cycle worked differently
> (see below). At some point, the RM of those days decided to make an
> experiment, which resulted in what we call Testing now. In the years before,
> the Freeze was a real freeze (not the ice sludge we have today). Unstable
> was frozen as-is and stayed in this state until the RC bug count was down
> to zero. It was not worse than what we have today: Frozen got its own

It was not worse? Well, as far as I remember we had a very long freeze
period and a lot of complaints that packages were outdated in frozen. Now
we have a half freeze and packages can still get into testing without any
especially big problems. A lot less manpower is needed, because most of the
work can be done by the developers themselves.

> release branch name, deliberate uploads to Frozen could be detected easily and
> inspected by the RM. Almost the same work that is done now by the RM team.

The problem is that manual inspection is very tedious, and if the RM needed
to do that for all packages that are now uploaded to unstable (to end up in
the next version of stable), it would be a LOT of work.

> But OTOH the developers had to eat the dog food [1] and there was no stupid
> overhead required to move packages to Testing, working around obscure
> problems.

I do not see the difference between manually forcing packages into testing
and manually inspecting and installing them into frozen. The big difference
is that much of the work is automated in the current testing implementation.

> How does the situation look today?
> ==================================
> 
> Debian Testing is not stable and is not mature. It is full of shitty bugs
> (let me define this term as a name for ugly bugs that bother the users but do
> not appear critical to the maintainer, or not important enough to touch the
> package in the holy "frozen" state). Such bugs are a disaster, they make
> our definition of a Stable release absurd. Yes, Debian Stable has become a
> buggy stable release. Just face it.

I disagree with that description.

> The major goal (making Freeze periods shorter) was not reached. The second

I actually think it has been reached, if we exclude two major issues that we
would still have with a frozen distribution: the debian-installer and the
security update infrastructure.

> goal (faster releases) was not reached. The third goal (better software

We have a LOT bigger distribution now. Try to imagine the former structure
managing the number of packages we have today. We had very long freeze times
before.

> quality in the development branch) was not reached, at most partially (the
> users did not have to deal with PAM bugs this time, hahaha).

?

> So in my eyes, the Testing experiment failed and should be finished. Now.

And I disagree. I have not seen a single piece of evidence that dropping
testing will solve any of these problems.

> So how would the removal of Testing help?
> =========================================
> 
>  - there would be no obscure criteria that tell maintainers to hold back
>    package upgrades

I think the criteria are clear: no new RC bugs, built on all the architectures
it was built on before, installable dependencies in testing, and an aging
period of 10, 5 or 2 days depending on the upload urgency.

>  - it would eliminate the need for "testing build daemons". Instead, the free
>    resources could be used to implement an experimental buildd, for example.

The build daemons run unstable as far as I know and compile packages for
unstable. Correct me if I'm wrong. We would still need them even if we
dropped testing, unless you want each maintainer to build packages for each
architecture.

>  - Debian's development branch would be more secure

The development branch is unstable and is in no sense secure. There are
lots of security problems in unstable; the only difference is that they
are not known yet.

>  - the release date would be more predictable (assuming that there will be
>    active FTP masters) and faster

How can you predict release dates better with the frozen structure than with
the current testing-based one? I cannot see how you can do that.

>  - frozen would have better software - more up-to-date upstream versions with
>    less ugly upstream/packaging bugs

We will have a snapshot of unstable and all the bugs that exist in each
package there. If unstable were that good, we would never have had the
problems with testing either.

>  - maintainers would actually be forced to fix 

And are they not forced to do that now?

>  - there would be no or fewer bug reports about obsolete versions, leading to
>    less confusion and less work for both maintainers and users

Tell them to use unstable if it is bleeding edge they want. People will
ALWAYS complain about outdated versions, and people will ALWAYS complain that
their programs do not work the way they used to. The only solution would be
a system where a lot of different versions could be installed at the same
time. That would be even more confusing.

>  - the users that actually want fresh software get fresh software with
>    Unstable. We saw that Testing has (on average) also a large number of
>    problems, so there is no point in having two development branches with
>    different bugs and no good way to deal with bugs in the other one. Yes,
>    users would benefit from bugfixes that reach them immediately instead of
>    5..90 (or more) days later. The other kind of users, those used to waiting
>    that long for new (and stable) software, can switch to replacements, see below.

The reason why there is a delay is that there are RC bugs in the package
or in packages it depends on. They need to be fixed anyway.
If people did not have RC bugs on their packages, all problems would be
solved, right? Yes, I know I have RC bugs on my packages. I'll try to fix
them (help is always appreciated).
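
(As an aside, and with an invented package and version: the baseline delay is
set by the upload urgency, so an important fix uploaded like this normally
only has to age two days before it can migrate, if I remember the urgency
aging correctly.)

  somepackage (1.2-3) unstable; urgency=high

    * Fix grave bug that kept the package out of testing (Closes: #123456).

   -- Some Maintainer <maintainer@example.org>  Sat, 23 Oct 2004 14:00:00 +0200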

> What does the alternative plan look like?
> =========================================
> 
> We should not fork a new Testing distribution before this GR is through.
> Release managers are asked to wait.  If we decide for Testing, it will be
> forked as the plans currently stand. If not, Testing will be no more.
> When the next freeze time comes, it will be a hard freeze. Panic uploads will
> always be there, but this time the avalanche will be started only once, not
> with every phase of the "gradual freeze".

And if we have panic uploads, those will be well tested, right? I cannot see
how this big-bang scenario solves the problem. I would never want a
snapshot of unstable to become stable, even after some testing time.

> Alternative for the developers: Tristate freeze model
> =====================================================
> 
> A possible future model for the release cycle and freeze method would be the
> so-called Tristate freeze model. It consists of three states:
> 
>  - PRE-FREEZE, 2-3 weeks before the freeze day. A frozen directory is created
>    on a dedicated machine and release managers can start testing the freeze
>    management scripts. The release team and a few selected users should have
>    access to this resource. This testing environment is not stable and may
>    (or may not) be purged before the main freeze begins.
>  - MAIN FREEZE: takes three weeks, beginning from the freeze day. The unstable
>    branch will be moved to "frozen".  Packages uploaded to "frozen" or
>    "unstable" are mapped to "frozen-candidates" and are to be reviewed by the
>    release team.
>  - DEEP FREEZE: 1-2 weeks. Only important updates are allowed to be moved from
>    "frozen-candidates". The release team has to decide on these actions by
>    majority.
>  - After the release, the contents of "stable" are synchronized with "unstable"
>    and the new iteration begins.

So we will release every ... ~9 weeks, with RC bugs. I cannot see how the
RC bugs get fixed any faster this way.

I do not say that releasing faster is a bad thing. Actually I think it would
be a good thing to have snapshot or point releases, but I think they
should be done from testing and not from unstable. This would mean more work
for the security team, and that is the problem with this idea. I would like
to develop this idea further, though.

> Alternatives for the users
> ==========================
> 
> The users will be told to switch to the following alternatives:
> 
>  - Ubuntu Linux [2]. It is a good, Debian-based distro and implements what
>    many people expected from Testing. It has motivated developers behind it, so
>    it should never reach the bad state that Testing ends up in, again and again.

I actually do not think testing is in bad shape. Yes, it lacks some new
packages, but stable will always lack new software.

>  - Debian stable with backports. High quality backports of Sid packages have
>    become very mature in the last months, and if the release cycles become
>    shorter after Testing is gone, it should become an acceptable solution.
>    volatile.d.o is another good step in that direction.

Now I have said what I think about this proposal.

Regards,

// Ola

> [1] http://www.joelonsoftware.com/articles/fog0000000012.html
> [2] http://www.ubuntulinux.org


-- 
 --------------------- Ola Lundqvist ---------------------------
/  opal@debian.org                     Annebergsslingan 37      \
|  opal@lysator.liu.se                 654 65 KARLSTAD          |
|  +46 (0)54-10 14 30                  +46 (0)70-332 1551       |
|  http://www.opal.dhs.org             UIN/icq: 4912500         |
\  gpg/f.p.: 7090 A92B 18FE 7994 0C36  4FE4 18A1 B1CF 0FE5 3DD9 /
 ---------------------------------------------------------------


