Hi Chris,

On 7/21/19 4:09 PM, Chris Lamb wrote:
> So, the devil is very much in the details here, alas. Whilst I am a definite +1 on this idea, the day-to-day experience may not be ideal, so your thoughts are very welcome.
>
> Just to take one recent example: I noticed yesterday a regression whereby one of our tools (strip-nondeterminism) is causing a fair number of Java packages to be unreproducible. It would seem a little antisocial to block them from migrating to testing when it was "our" fault. It is not really within the power or remit of the maintainers in question to fix this particular issue, and we should not implicitly encourage maintainers to manually (!) fix it in each of the affected packages just to get them to migrate...
>
> Similar mishaps or problems with the testing framework itself are usually more typical than the above, but they would have the same effect of directing a lot of distracting and demotivating blowback in our direction.
Thanks for raising this concern. As with other policies, we should try to find a balance. I also expect this to be something that needs to be fine-tuned over time. The details haven't been worked out yet, and there have already been a number of interesting suggestions in this thread.
I guess a lot will depend on the impact of the delay. If the difference in migration delay is small enough, there would still be an incentive to make packages reproducible, but it wouldn't create too much of an issue when a temporary problem makes some packages unreproducible in a way the maintainer can't fix.
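Just to make the "small enough" part concrete, here is a rough sketch in Python of what that could look like next to the urgency-based age requirements britney already uses. The numbers and the function are completely made up; this is an illustration, not a proposal for concrete values:

  # Illustrative sketch only: a small, bounded extra delay for unreproducible
  # packages instead of a block. All numbers are placeholders, not a proposal.
  from typing import Optional

  BASE_AGE_DAYS = {"high": 2, "medium": 5, "low": 10}  # britney-style urgencies

  def required_age_days(urgency: str, reproducible: Optional[bool]) -> int:
      # 'reproducible' is None when the status is unknown or the
      # reproducibility infrastructure is known to be unreliable; in that
      # case no extra delay is applied (see the note further down).
      age = BASE_AGE_DAYS.get(urgency, 5)
      if reproducible is False:
          age += 2  # placeholder: enough to be an incentive, not a blocker
      return age

With something in that spirit, a regression like the strip-nondeterminism one would cost the affected maintainers a couple of extra days at worst, rather than blocking their packages outright.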
Adding a long delay or even blocking unreproducible packages doesn't seem to be appropriate at this point.
Note that (temporary) breakage causing unexpected FTBFS is (unfortunately) already fairly common in unstable; that is part of how unstable works. If some change makes a number of packages unreproducible, people can always try to fix that themselves (in the worst case with an NMU). If these kinds of issues are rare enough, it should be OK. If not, the policy might need to be relaxed.
If infrastructure issues make the reproducibility information unreliable, its impact on testing migration should be (temporarily) disabled.
As with autopkgtests, we can always (temporarily) add overrides for certain issues if development is being stalled by them.
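For example, by analogy with the force-badtest hints that exist for autopkgtest regressions, one could imagine dropping something like the following into a hints file. The hint name is made up; nothing like it exists in britney today:

  # hypothetical hint, modelled on force-badtest; not an existing britney hint
  force-ignore-unreproducible somepackage/1.2-3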
> Note that delaying migration here has quite a different consent and social dynamic to autopkgtest failures, as the maintainers have, by uploading a package that contains autopkgtests, implicitly opted into the commitment to ensure they continue to pass.
Just FTR, this isn't correct: if package A, which depends on package B, adds an autopkgtest, that test can block the migration of package B, even though the maintainer of package B never opted in.
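For anyone reading along who hasn't run into this: the trigger comes from the reverse dependency's debian/tests/control. Something as small as the following stanza in source package A (package names are placeholders here) is enough for a new upload of B to run A's tests and, on regression, delay B's migration:

  Tests: smoke
  Depends: @, package-b

The "@" expands to A's own binary packages; the explicit dependency on package-b is what makes uploads of B trigger the test.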
> Anyway, your thoughts on this important angle?
Once we have worked out what tests should actually be used for this and how the information about them is exchanged, we can create an implementation to see the impact, and improve it based on the results we see over time.
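One possible starting point for the "how the information is exchanged" part would be to consume the per-source status that the reproducible builds CI already publishes. A rough Python sketch; the JSON URL and field names below are my assumptions about the tests.reproducible-builds.org export and would need to be verified:

  # Rough sketch: collect the set of unreproducible source packages in unstable,
  # which britney (or a preprocessing step) could then use to apply the extra
  # delay. The URL and field names are assumptions, not a verified interface.
  import json
  import urllib.request

  STATUS_URL = "https://tests.reproducible-builds.org/debian/reproducible.json"  # assumed

  def unreproducible_sources(suite="unstable", arch="amd64"):
      with urllib.request.urlopen(STATUS_URL) as response:
          entries = json.load(response)
      return {
          entry["package"]
          for entry in entries
          if entry.get("suite") == suite
          and entry.get("architecture") == arch
          and entry.get("status") == "unreproducible"
      }

If the real export has a different shape, only this glue would need to change; the policy side stays the same.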
Cheers,

Ivo