[Credit for the writing of this goes to Robot101. The ideas were hashed
out by a number of us, though. We're fully expecting someone to turn
round and tell us exactly why it won't work. Robert isn't on
debian-devel, so please Cc him on replies. I am, however, so I'll read
it on-list.]

Our current release paradigm is one where non-buggy, arch-synced and
dependency-satisfied packages work their way from unstable to testing
after a suitable cooling-off period. The principle is that this makes
testing a known, less-buggy platform, so that releasing with simple
snapshots becomes easier. However, this has several flaws:

* As stable becomes more and more out of date, more and more people
  will turn to using testing. With the increased usage, it is
  inevitable that bugs will be filed against testing itself, taking it
  further from the ideal bug-free, snapshottable release platform that
  we desire.

* These bugs are far harder to fix than you might think. Besides the
  taboo against NMUs, which aj is fighting valiantly to weaken,
  unstable can progress significantly very quickly, with new problems,
  library dependencies, etc. being added. This means that to fix a
  small bug that crops up in testing, a package in unstable first has
  to be made un-buggy and buildable.

* Testing is popular, and with increased usage comes increased demand
  for security updates and advisories. While testing is fenced off and
  all updates must move through unstable, the potential for broken
  packages in unstable to hold up security fixes to testing is not
  insubstantial.

Bearing these in mind, the idea of having another way to get updates
into testing may not be as bizarre as it sounds. The proposed
testing-proposed-updates release (which already seems to exist, but I
do not know the conditions of its use) would be subject to the same
criteria for progressing into testing as packages in unstable are.
However, the conditions for upload would be similar to those that
apply when stable/testing freezes.
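As a concrete aside on the BTS tag mechanism this relies on: the bug
tracking system already supports per-distribution tags, so a bug can be
marked as applying only to sid with a control message along these lines
(the bug number is made up for illustration, and the exact tag names
for testing may differ):

    To: control@bugs.debian.org

    tags 123456 + sid
    thanks

A bug tagged like this could then be ignored by the testing scripts
when judging the corresponding package in t-p-u, and a testing-only
tag could work the other way round.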
Only security updates and RC bugfixes would be allowed into
testing-proposed-updates, and new upstream versions only when they
serve those purposes alone. Packages for upload to t-p-u
(testing-proposed-updates) should be built on testing, which means
that buildd chroots would be required to build the uploads for the
other architectures in testing. However, I think it's worth it.

Another consideration is that the testing script would need to heed
BTS tags, so that bugs in sid didn't count against things in t-p-u,
and vice versa. I believe it already does this, but I'm not sure to
what extent.

Another massive advantage of this is that it becomes possible to
upload security updates to testing without them being held up by
breakage in new-upstream and other packages in unstable. A testing
security team (perhaps sharing some members with the stable team)
could track new issues and fix them for testing with backports and
patches, as the stable team already does.

This dual-feed approach to testing should bring us much closer to the
ideal of a testing that remains bug-free and current enough that a
six-monthly snapshot release can be considered a reality. The
bug-squashing parties and NMUers could feed fixes into t-p-u and fix
RC bugs quickly without trampling on the maintainer's toes with the
latest unstable version, which is where I'm assuming most of a
maintainer's effort is dedicated.

This could be considered what should/would happen when testing
freezes. However, that looks quite unlikely while testing remains
buggy, and testing remains hard to fix while unstable lives up to its
name and tracks the cutting edge - as it should. There's no reason we
can't have t-p-u as a way to get simple and security fixes into
testing fast, so that releasing needn't be such a hassle.

J.

-- 
/-\  | Eat a prune. Start a movement.
|@/  | Debian GNU/Linux Developer
| \- |