
Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS



Paul Gevers <elbrus@debian.org> writes:
> One can always file bug reports against the release.debian.org pseudo
> package to ask for britney to ignore the autopkgtest result.

This would again concentrate work on a relatively small team.

> One other thing that I can envision (but maybe too early to agree on or
> set in stone) is that we lower the NMU criteria for fixing (or
> temporarily disabling) autopkgtest in ones reverse dependencies. In
> the end, personally I don't think this is up to the "relevant
> maintainers" but up to the release team. And I assume that badly
> maintained autopkgtest will just be a good reason to kick a package
> out of testing.

I already gave an example where the autopkgtest is well maintained but
keeps failing.

And I think that it is the package maintainers who have the experience
to judge whether a CI test failure is critical or not.

BTW, at the moment the CI tests are run against unstable -- if you want
to kick a package out of *testing*, you would need to test the new
unstable package against testing, which would require a change in the
autopkgtest logic.

>> What is the reason not to use automated bug reports here? This would
>> allow to use all the tools the bug system has: severities, reassigning,
>> closing etc.
>
> The largest reason is that it didn't cross my mind yet and nobody else
> except you has raised the idea so far.

I already don't understand this for the piuparts blocker: we have an
established workflow for package problems that need some intervention,
and that is bugs.d.o. It has a lot of very nice features, like:

 * discussion of the problem attached to the problem itself and stored
   for reference
 * formal documentation of problem solving in the changelog (Closes: #)
 * severities, tags, re-assignments, affects etc. (see the sketch below)
 * maintainer notifications, migration blocks, autoremovals etc.
 * documented manual intervention possible
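
For illustration, a sketch of how this metadata is adjusted through the
BTS control interface (the bug number and package names here are made
up, but the commands are the standard control@bugs.debian.org ones):

  To: control@bugs.debian.org

  severity 123456 serious
  reassign 123456 src:failing-rdep
  affects 123456 + src:migration-candidate
  thanks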

I don't see a feature that one would need for piuparts complaints or for
CI test failures that is not in our bug system. And (I am not sure)
aren't package conflict bugs already autogenerated?

I would really prefer to use the bug system instead of something else.

> One caveat that I see though is which system should hold the
> logic. The current idea is that it is britney that determines which
> combinations need to be tested and thus can use the result straight
> away for the migration decision.

> As Martin Pitt described in the thread I referenced in my first reply,
> Ubuntu already experimented with this and they came to the conclusion
> that it didn't really work if two entities have to try and keep the
> logic in sync.

I don't see the need to keep things in sync: if a new failure is
detected, it creates an RC bug against the migration candidate, with an
"affects" on the package that failed the test (a sketch of such an
auto-filed bug follows the list below). The maintainer then has the
following options:

 * solve the problem in his own package, upload a new revision, and close
   the bug there

 * re-assign the problem to the package that failed the test if the
   problem lies there. In this case, that maintainer can decide whether
   the problem is RC, and if not, lower the severity.
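
Such an auto-filed report could look like this (the package names,
version, and log URL are invented; the pseudo-headers are the usual
submit@bugs.debian.org format):

  To: submit@bugs.debian.org
  Subject: migration-candidate breaks autopkgtest of failing-rdep

  Package: migration-candidate
  Version: 1.2-1
  Severity: serious
  Control: affects -1 + src:failing-rdep

  The autopkgtest of failing-rdep fails when run against
  migration-candidate 1.2-1 from unstable. Log:
  https://ci.debian.net/...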

In any case, the maintainers can follow the established workflow, and if
one needs to look up the problems a year later, one can just search for
the bug.

What else would you need to keep in sync?

>>> Possible autopkgtest extension: "Restrictions: unreliable"?
>> 
>> This is not specific enough. Often you have some tests that are
>> unreliable, and others that are important. Since one usually takes the
>> upstream test suite (which may be huge), one has to look manually first
>> to decide about the further processing.
>
> Then maybe change the wording: not-blocking, for-info, too-sensitive,
> ignore or ....

The problem is that a test suite is not that homogeneous, and often one
doesn't know this in advance. For example, the test suite of one of my
packages (python-astropy) has almost 9000 individual tests. Some of them
are critical and influence the behaviour of the whole package, but
others cover only a small subsystem and/or a very special case. I have
no documentation of the importance of each individual test; I decide
that when I see a failure (in cooperation with upstream). Even more:
these 9000 tests are combined into *one* autopkgtest result. What should
I put there?
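
For reference, such a restriction would apply to a whole stanza in
debian/tests/control, something like (a simplified sketch; "unreliable"
is only the keyword proposed above):

  Tests: upstream-test-suite
  Depends: @, python3-pytest
  Restrictions: unreliable

So all ~9000 upstream tests would share this single flag, however
critical the individual test is.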

> If you know your test suite needs investigation, you can have it not
> automatically block. But depending on the outcome of the
> investigation, you can still file (RC) bugs.

But then we are where we already are today: almost all tests of my
packages are "a bit complex", so I would just mark them all as
non-blocking. But then I would have to file the bugs myself, and in
particular there would be no formal link between the test failure and
the bug.

> Why I am so motivated on doing this is because I really believe this is
> going to improve the quality of the release and the release process.

As I already wrote: I really appreciate autopkgtest, and I would like to
have a way to automatically record and track CI test failures. I just
think that the maintainer should have the final override, and that
bugs.d.o is the superior system for this workflow because of its
flexibility.

Best regards

Ole

