
Bug#901804: autopkgtest: consider using exit status 8 ("no tests found") if every test was ignored



On the GitLab MR, Paul Gevers wrote:
> I am not 100% sure that debci needs to treat the results in the same
> way as britney does.

What debci primarily needs to do is this: if britney would want to
distinguish between two scenarios, then both autopkgtest and debci need
to distinguish between them. Whether that should be done with a different
exit status or by reporting richer per-test status, I don't know: it
might be better for autopkgtest (or debci) to report "just the facts"
with no interpretation, perhaps in JSON or TAP, or by offering guarantees
about the parseability of the $output/summary file, and let debci or
britney interpret those facts in whatever way is desired.
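
For concreteness, here is a purely hypothetical sketch of what "just
the facts" could look like as TAP (this is not output that autopkgtest
produces today, and the test names are invented):

    1..3
    ok 1 - upstream-tests
    ok 2 - xattr-tests # SKIP /var/tmp does not support xattrs
    not ok 3 - integration

The useful property is that "skipped" stays distinguishable from both
"passed" and "failed" in the report itself, and the consumer gets to
decide how to weigh each outcome.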

> And maybe a regression from 0 to 2 (without changes to the autopkgtest
> that runs) should also be a regression for britney (if it is not caused
> by a flaky test :( ).

I'm not so sure about this. Suppose debci was previously giving tests
a particular capability bounding set in their LXC containers, and a
security fix to lxc removes a capability that turns out to be exploitable.
Tests that were exercising that capability would then have to be skipped
(ideally they'd already be marked 'skippable', and exit 77 when they don't
have it), so they'd appear to "regress". That seems undesirable.
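
As an illustration (my own sketch, not anything debci or any existing
test does today), such a test could probe the bounding set itself and
rely on the 'skippable' convention:

    #!/usr/bin/python3
    # Sketch of a test declared with "Restrictions: skippable" in
    # debian/tests/control: exiting 77 marks it as skipped, not failed.
    # CAP_SYS_ADMIN is a stand-in for whatever capability is needed.
    import sys

    CAP_SYS_ADMIN = 21  # bit number from <linux/capability.h>

    bounding_set = 0
    with open("/proc/self/status") as status:
        for line in status:
            if line.startswith("CapBnd:"):
                bounding_set = int(line.split()[1], 16)

    if not bounding_set & (1 << CAP_SYS_ADMIN):
        print("SKIP: CAP_SYS_ADMIN not in the capability bounding set")
        sys.exit(77)  # 'skippable' turns this into a skip, not a failure

    # ... the part of the test that needs the capability goes here ...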

Similarly, ostree and flatpak have some tests that require /var/tmp to
support xattrs. At the moment, if it doesn't, these tests exit 0, which
is a bit of a lie; ideally they would be marked 'skippable' and exit 77,
and this is a large part of why I implemented 'skippable'. If debci was
previously giving tests an ext4 /var/tmp (which does support xattrs),
but after a memory upgrade on the host machine it switched to backing
the whole container with tmpfs for better performance, then these tests
would "regress".
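
Again purely as a sketch (not ostree's or flatpak's actual test code),
the probe could look something like this:

    #!/usr/bin/python3
    # Sketch: skip, rather than silently pass, when /var/tmp does not
    # support user xattrs (for example when it is backed by tmpfs).
    import errno, os, sys, tempfile

    with tempfile.NamedTemporaryFile(dir="/var/tmp") as probe:
        try:
            os.setxattr(probe.name, "user.test-probe", b"1")
        except OSError as e:
            if e.errno != errno.ENOTSUP:  # == EOPNOTSUPP on Linux
                raise
            print("SKIP: /var/tmp does not support xattrs")
            sys.exit(77)  # again relies on 'skippable'

    # ... the xattr-dependent tests would run here ...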

I think, as a general design principle, if we are going to encourage
maintainers to add more tests, then they need to be confident that they
won't be penalized for limitations of the test infrastructure. I
personally use suitably-configured virtual machines to test my packages,
and I want to be able to turn those tests into autopkgtests so that I
don't have to run them manually every time: that way I can do more
thorough testing for the same amount of time/attention/thought. But if
running those tests on the production infrastructure stops my packages
from migrating because the production infrastructure doesn't always have
some feature, then I'd have to remove those tests, and that seems bad.

    smcv


