
Re: testing "testing" (was: Implementing "testing")



>>>>> "DZM" == David Z Maze <dmaze@MIT.EDU> writes:

    TM> After a successful compile, it could also run the program and
    TM> use a memory shaker, and/or software that gives the program
    TM> random input, and again files a bug report for any crashes.

    DZM> isn't necessarily the best idea.  We don't want to set up a
    DZM> situation where a package passes all of the automated tests,
    DZM> but this just means it's lint-clean, lintian-clean, and
    DZM> doesn't barf on random input, not necessarily that it works
    DZM> correctly.

I think any sort of testing is really package-specific. Even package
installation is specific to the package (which configuration options
do you want to use?). The only exceptions I can think of are package
removal, dependency checking, and file conflict checking (to a
limited degree).
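
A generic removal test, for example, could be as simple as installing
the package, purging it again, and checking that dpkg no longer knows
about it (a rough sketch; the package name "foo" is made up):

    #!/bin/sh
    # Rough sketch of a generic install/remove test ("foo" is a
    # made-up package name).
    dpkg -i foo_1.0-1_i386.deb || exit 1
    dpkg --purge foo || exit 1
    # dpkg -s fails for a package that is completely gone, which is
    # exactly what we want to see after a purge.
    dpkg -s foo >/dev/null 2>&1 && exit 1
    exit 0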

Ideally, a testing mechanism should take the differences between
packages into account, and provide some way to express them.

For instance, automake-generated Makefiles _can_ support a "check"
target, which runs tests against a compiled library and/or program to
ensure it behaves as those tests expect. The only problem is that
this works only for the source package, not the binary package.
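
For illustration, the automake side is only a few lines; a minimal
Makefile.am fragment (the test program name is made up) might look
like:

    # Makefile.am fragment; test_foo is a made-up test program.
    check_PROGRAMS = test_foo
    test_foo_SOURCES = test_foo.c
    TESTS = test_foo

"make check" then builds test_foo and runs everything listed in
TESTS, treating a non-zero exit status as a test failure.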

So, if somebody could make the same idea work for binary packages, I
think we might be on to something.
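
One way it might work: each binary package ships a small test script
in some agreed-on location, and the testing framework runs it after
the package has been installed. A sketch, where the program "foo" and
its behaviour are made up:

    #!/bin/sh
    # Hypothetical test script for a binary package that installs
    # /usr/bin/foo; a non-zero exit signals a failure to the
    # framework. "foo" and its expected behaviour are made up.
    set -e
    foo --version > /dev/null    # does the installed binary run?
    echo hello | foo > out.tmp   # smoke test against known input
    grep hello out.tmp > /dev/null
    rm -f out.tmp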

Then, of course, this raises other issues, such as where to put this
package-specific testing code, and whether a scripting language could
be developed or reused to make such tests easier to write. Also, I
think the task could be split into several categories, for instance
(a sketch for the shared library case follows the list):

1. programs.
2. shared libraries.
3. static libraries (???).
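
For category 2, a test could compile and run a trivial program
against the installed library. A sketch, where the library "libbar"
and the symbol "bar_init" are made up:

    #!/bin/sh
    # Sketch of a shared library test; libbar and bar_init are
    # made-up names.
    cat > t.c << 'EOF'
    int bar_init(void);   /* assumed to be exported by libbar */
    int main(void) { return bar_init() ? 1 : 0; }
    EOF
    cc t.c -lbar -o t && ./t
    status=$?
    rm -f t t.c
    exit $status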

That way, if a test gives false negatives or false positives, the
maintainer can adjust it to make things work.
-- 
Brian May <bam@debian.org>


