
Are these superficial autopkgtests?



Hello,

I'm working on packaging gcc-sh-elf (ITP #986778), which provides not
just a cross C compiler but also Newlib (an implementation of the ISO
C standard library for embedded targets) and a simulator, a Wine-like
wrapper that makes it possible to run the cross-built binaries on a
Debian machine. This seems like a very good opportunity for DEP-8
tests that exercise all the components in tandem.

One test I have builds a tiny implementation of the echo command and
checks that it prints 'Hello, world'. Another builds a more
computationally intensive program that computes the number of primes
less than 2^15 and checks the result against the correct answer.

Here's what the autopkgtest specification says makes a test
superficial:
> The test does not provide significant test coverage, so if it
> passes, that does not necessarily mean that the package under test
> is actually functional.
My question probably boils down to what counts as "significant."
> If a superficial test fails, it will be treated like any other
> failing test, but if it succeeds, this is only a weak indication of
> success. Continuous integration systems should treat a package where
> all non-superficial tests are skipped as equivalent to a package
> where all tests are skipped.
> For example, a C library might have a superficial test that simply
> compiles, links and executes a "hello world" program against the
> library under test but does not attempt to make use of the library's
> functionality, while a Python or Perl library might have a
> superficial test that runs import foo or require Foo; but
> does not attempt to use the library beyond that.
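For context, a test declares itself superficial via the Restrictions
field in debian/tests/control; a hypothetical entry (the test name
here is illustrative) would look like:

```
Tests: hello-world
Depends: gcc-sh-elf
Restrictions: superficial
```

What I'm trying to decide is whether my tests need this restriction
at all.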

Note that in its reference to building a 'Hello, world' program, the
specification says that what makes the test superficial is that the
program does not actually use the library's functionality; only
compiling, linking, and executing against it is tested. Since my
tests exercise GCC, Newlib (which provides the I/O functions), and
the simulator in combination, is building and running such relatively
simple programs enough to say that the tests provide good coverage?

Because this is a toolchain for embedded devices, it's not possible
to build mainstream software with it, so otherwise I will probably
have to resign myself to not having any non-superficial tests.

All perspectives and thoughts on the matter would be appreciated.
