Re: Automated testing - design and interfaces
Anthony Towns writes ("Re: Automated testing - design and interfaces"):
> On Mon, Nov 21, 2005 at 06:22:37PM +0000, Ian Jackson wrote:
> > This is no good because we want the test environment to be able to
> > tell which tests failed, so the test cases have to be enumerated in
> > the test metadata file.
> Uh, having to have 1000 scripts, or even just 1000 entries in a metadata
> file, just to run 1000 tests is a showstopper imho. Heck, identifying
> testcases by number mightn't be particularly desirable in some instances,
> if a clearer alternative like, say, "test case failed: add 1, add 2,
> del 1, ch 2" is possible.
Sorry, as Robert Collins pointed out, I didn't mean `enumerate'.  I
meant `identify'.  That is, the test environment needs to see the
results of individual tests, not just a human-only-readable report.
I agree with you about numbers.  If you let tests have names, people
who insist can always use numbers as names, so it is sufficient for
the system to support names.
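To make that concrete, here is a minimal sketch of a test script that
reports each named test case in a form the test environment can parse.
The `name: PASS|FAIL' output format is my assumption for illustration,
not an agreed interface:

```shell
# Sketch only: report each test case by name in a machine-readable
# form, so the environment sees individual results rather than
# only a human-readable report.
run_test () {
    name="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "$name: PASS"
    else
        echo "$name: FAIL"
    fi
}

run_test add-1 true
run_test del-1 false
```

The environment can then match any failures against the names declared
in the test metadata file.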
> > You can't check that the binary works _when the .deb is installed_
> > without installing it.
> That's okay. You can't check that the binary works _on the user's system_
> without installing it on the user's system either. For Debian's purposes,
> being able to run the tests with minimal setup seems crucial.
That's true. Of course "the user's system" is a moveable feast. One
goal of my design is to allow testing on a minimal setup.
> > Also, a `Restriction' isn't right because if the test has neither of
> > those Restrictions then presumably it can do either but how would it
> > know which ?
> It would have to not care which; which it could do by expecting the
> test harness to put the binaries in the PATH, or provide an environment
> variable like INSTALL_ROOT=$(pwd)/debian/tmp .
Right.  So you're effectively adding a new bit to the spec to support
that.  I don't want to go there right now, but this is definitely
something we want to allow room for in the future.  The way I would
imagine extending it to cover this case would be to invent a new
header advertising the feature (which the old test-runner would be
updated to handle), which would mean that this was supported.  You
could state the same thing as a Restriction to mean that _only_ that
was supported.  And of course you'd have to define exactly what the
feature meant (including the INSTALL_ROOT environment variable).
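For illustration, a test written not to care which mode it is in might
resolve the binary like this.  INSTALL_ROOT itself is from AJ's
suggestion; the usr/bin layout beneath it and the `frobnicate' binary
are made up for the sketch, and the uninstalled case is simulated:

```shell
# Sketch: a test that works whether the harness installed the package
# (binary already on PATH) or only staged it under INSTALL_ROOT.
# Here we simulate the uninstalled case by staging a stub binary.
set -e
INSTALL_ROOT=$(mktemp -d)
mkdir -p "$INSTALL_ROOT/usr/bin"
printf '#!/bin/sh\necho frobnicate 1.0\n' > "$INSTALL_ROOT/usr/bin/frobnicate"
chmod +x "$INSTALL_ROOT/usr/bin/frobnicate"

# The test proper: prefer INSTALL_ROOT if the harness set it,
# otherwise fall back to whatever is on PATH.
if [ -n "${INSTALL_ROOT:-}" ]; then
    PATH="$INSTALL_ROOT/usr/bin:$PATH"
fi
frobnicate
```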
> Having test case dependencies is fairly useful; in any event the language
> "Even integration tests can be represented like this: if one package's
> tests Depend on the other's" is wrong if tests depend on other packages,
> not on other packages' tests. You'll want Conflicts: as well as Depends:
> in that case too.
Ah, I see what you mean. Yes, that language is wrong.
Adding `Conflicts' is an obvious extension but I don't propose to
implement it in my first cut.
> It would probably be quite useful to be able to write tests like:
> for mta in exim4 smail sendmail qmail; do
> install $mta
> # actual test
> uninstall $mta
> done
> too, to ensure that packages that depend on one of a number of packages
> actually work with all of them.
Quite so.  I'm not sure if my test-runner will get clever enough for
that, but the information and interfaces it needs will be there.
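As a sketch of what that cleverness might look like: the
install/uninstall helpers below are placeholders for the real
package-management steps, and the per-alternative result names are my
invention, but the shape of the loop is AJ's:

```shell
# Sketch: run the same test once per alternative dependency, reporting
# a named result for each.  install/uninstall are placeholders for the
# real package operations; the test body is simulated.
run_with () {
    alt="$1"
    echo "install $alt"              # placeholder for the install step
    echo "mta-test ($alt): PASS"     # placeholder for the actual test
    echo "uninstall $alt"
}

for mta in exim4 smail sendmail qmail; do
    run_with "$mta"
done
```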