Is the testing model flawed?
As I see it, the testing team has an impossible job. It hopes to test
each and every package, but the magnitude of the job far outweighs the
resources the team has available.
In my case, I have the interest, but not the knowledge, to do a good job.
To be able to say "this package has no bugs" would require someone who
knows in depth not only exactly how the package is supposed to work, but
also how it fits in with a myriad of other packages and with the
kernel itself. Well, that rules me out.
As an example of the problem, I read today on Brandon's web page that
sysvinit_2.75-1 was tested and passed. But I also read on the list that
there is a problem with the database (wtmp?) which shows up with "last".
So here we have a tested package which turns out to be flawed. And please
note that I am not pointing the finger here - I missed that problem too.
So assuming that I am correct, where do we go from here?
As I see it, we already have thousands of "testers" out there who install
and use the packages as they become available. Many of them submit bug
reports (we have 22000+ bugs so far) and this army has far more punch than
any testing team can hope for.
So my view is that the testing team should abandon actual testing and
take on a book-keeping role. It would keep a database of when packages
were released, monitor the lists for reports of problems, and monitor
the bug system. After a "quiet" period of, say, two weeks, with
no important bugs outstanding, a package would be "cleared." We could
even go to the point of making clearance automatic unless the package was
specifically knocked back.
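The clearance rule proposed above could be sketched roughly as follows. This is only an illustration of the idea; the function name, the two-week threshold, and the inputs are my own invention, not anything Debian or the testing team actually implements.

```python
from datetime import date, timedelta

QUIET_PERIOD = timedelta(days=14)  # the suggested two-week quiet period

def is_cleared(released, today, important_bugs_open, knocked_back=False):
    """Decide whether a package is automatically cleared.

    A package clears once the quiet period has elapsed since release,
    provided no important bugs are outstanding and nobody has
    specifically knocked it back.
    """
    if knocked_back:
        return False            # explicit veto always wins
    if important_bugs_open > 0:
        return False            # outstanding important bugs block clearance
    return today - released >= QUIET_PERIOD

# Released on the 1st, checked on the 20th, no important bugs open:
print(is_cleared(date(1998, 1, 1), date(1998, 1, 20), 0))   # True
# Same dates, but one important bug still open:
print(is_cleared(date(1998, 1, 1), date(1998, 1, 20), 1))   # False
```

The point of the sketch is that the decision needs no testing expertise at all, only the release date and the current state of the bug database.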
This would reduce the team's task to one that can be handled by the
resources we actually have.
Lindsay Allen <firstname.lastname@example.org> Perth, Western Australia
voice +61 8 9316 2486 32.0125S 115.8445E vk6lj Debian Unix