
Re: Automated testing - design and interfaces



On Thu, 2005-11-17 at 14:36 -0800, Steve Langasek wrote:
> [let's get this over to a technical list like it was supposed to be ;)]

> > Following your exit status based approach you could add to stanzas
> > something like:
> 
> >   Expected-Status: 0
> 
> > I found the above requirement the very minimum for a test interface.
> > What follows is optional (IMHO).
> 
> FWIW, I don't see that there's a clear advantage to having the test harness
> *expect* non-zero exit values (or non-empty output as you also suggested).
> It may make it easier to write tests by putting more of the logic in the
> test harness, but in exchange it makes it harder to debug a test failure
> because the debugger has to figure out how "correct" and "incorrect" are
> defined for each test, instead of just getting into the meat of seeing why
> the test returned non-zero.  Likewise, expecting successful tests to be
> silent means that you can rely on any output being error output that can be
> used for debugging a test failure.

Right. Splitting it into two bits ...

With respect to exit codes, there is generally only one way to succeed
but many ways to fail. So reserving 0 for 'test succeeded' in ALL cases
makes writing front ends, or running the tests interactively, much
easier. It's certainly possible to provide a $lang function that inverts
the relationship for you, for 'expected failure' results. One of the
things about expected failures is their granularity: is a test expected
to fail because 'file FOO is missing', or because 'something went
wrong'? The former test is best written as an explicit check, where you
invert the sense in the test script. That's best because when the
behaviour of the failing logic alters - for better or worse - the test
flags that it has changed. A handwaving 'something's broken' style of
expected failure really does not help code maintenance at all. So while
it can be useful in the test interface to have an explicit code for
'expected failure', I think it is actually best to just write the test
to catch the precise failure and report success.
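
For illustration, a minimal sketch of that 'invert the sense in the
test script' approach, using Python's unittest (load_config and the
path are hypothetical stand-ins, not part of any interface under
discussion):

  import unittest

  def load_config(path):
      # Stand-in for the code under test; assumed to raise
      # FileNotFoundError when the file is absent.
      with open(path) as f:
          return f.read()

  class TestMissingConfig(unittest.TestCase):
      def test_missing_file_reports_precise_error(self):
          # Rather than marking the test 'expected failure', assert
          # the precise failure mode. If the failing logic alters -
          # for better or worse - this assertion trips and flags it.
          with self.assertRaises(FileNotFoundError):
              load_config('/nonexistent/FOO')

  if __name__ == '__main__':
      unittest.main()  # exits 0 only if every test passed

The test itself exits 0 on the expected failure, so front ends need no
special 'expected failure' handling.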

As for silence, yes, noise is generally not helpful, although
long-running test suites can usefully give *some* feedback (a '.' per
100 tests, say) to remind people it's still running.
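
Something along these lines would do (a sketch only; run_one_test is a
hypothetical stand-in for invoking a single test):

  import sys

  def run_suite(tests, run_one_test):
      failures = 0
      for i, test in enumerate(tests, start=1):
          if not run_one_test(test):
              failures += 1
          if i % 100 == 0:
              # One dot per 100 tests: enough to show progress, quiet
              # enough not to drown real error output.
              sys.stdout.write('.')
              sys.stdout.flush()
      sys.stdout.write('\n')
      return failures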

Rob

-- 
GPG key available at: <http://www.robertcollins.net/keys.txt>.
