
Why is only the latest unstable package considered for testing?



Hi.

Apologies in advance if this is the wrong list for this issue, but I've been looking at the "excuses" page[0] and was struck by how old some of the packages in testing are, while only the very latest bleeding-edge version from unstable appears to be considered for inclusion.

Am I misunderstanding something, or does this approach not "punish" projects that adhere to the Open Source motto "release early, release often"?

Hypothetical example:

Project X makes an effort to prepare a solid release, squashing all RC bugs and making sure each target builds flawlessly. They bag it, label it "3.0" or whatever and release it. The package goes into unstable and, being a non-critical update, needs 10 days to become a "Valid Candidate"[1] for testing.

During the release effort, a number of patches were submitted to the project but were delayed until post-3.0. This is pretty normal. Now that 3.0 is out the door and the users have a stable version to work with, these new patches go in and a new unstable version 3.1 is released. This version has some RC bugs, or doesn't build on all platforms, or otherwise breaks one of the five rules for inclusion in testing.

Since the testing scripts check only the latest version, testing will never consider the stable and working 3.0 release: it will reject the broken 3.1 and keep the old and outdated 2.0 instead.

Am I wrong? If I'm right, why does it work like this? Would it not be better to track all versions and merge the latest version that fulfills all requirements?

If I'm wrong, where can I see the status of each version? packages.qa.debian.org shows the status of only the latest unstable version.

[0] http://ftp-master.debian.org/testing/update_excuses.html
[1] http://www.debian.org/devel/testing

-- 
Björn


