
Bug#932795: marked as done (How to handle FTBFS bugs in release architectures)

Your message dated Wed, 9 Oct 2019 11:15:32 -0500
with message-id <20191009161532.GA27377@mosca.iiec.unam.mx>
and subject line Regarding archive-wide quality assurance work
has caused the Debian Bug report #932795,
regarding How to handle FTBFS bugs in release architectures
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org.)

932795: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=932795
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: tech-ctte

Dear TC:

I reported this bug:


and it was downgraded on the basis that the official autobuilders
are multi-core.

I believe this downgrade is not appropriate, for several reasons:

* The informal guideline being applied, "FTBFS bugs are serious if
and only if they happen on buildd.debian.org", is not written anywhere
and it contradicts Debian Policy, which says "it must be
possible to build the package when build-essential and the
build-dependencies are installed".

* Because this is a violation of a Policy "must" directive, I consider
the downgrade to be a tricky way to modify Debian Policy without
following the usual Policy decision-making procedure.

* I also do not recognize the informal guideline being used as
universally applicable or valid in 100% of cases. In fact,
I have yet to see why people follow such a guideline when there
is no rationale written anywhere. Packages which FTBFS on
buildd.debian.org certainly deserve a serious bug, but P => Q is not
the same as Q => P.

If we have an FTBFS bug that nobody can reproduce, then ok, downgrading
the bug if the package builds fine on the buildds may make sense as a
precautionary measure until we have more info, but a single successful
build on buildd.debian.org does not ensure that the package will build
on every system where the package must build.
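As an aside, such constrained-environment failures need not stay
unreproducible on ordinary hardware. A minimal sketch, assuming a
Linux system with taskset (from util-linux) and python3 available, of
how one could approximate a single-cpu build on a multi-core machine:

```shell
# Pinning a command to CPU 0 makes it, and every child process it
# spawns, see a single usable CPU, much like a single-core VM would.
# A build could then be attempted like this (illustrative only):
#
#   taskset -c 0 dpkg-buildpackage -us -uc
#
# Demonstration that the restricted affinity is inherited by children:
taskset -c 0 python3 -c 'import os; print(len(os.sched_getaffinity(0)))'
```

Note this only restricts scheduling; tools that probe the hardware by
other means (e.g. parsing /proc/cpuinfo) may still see all cores, so a
real single-cpu machine or VM remains the definitive test.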

To illustrate why I think this guideline can't be universal, let's
consider the case (as a "thought experiment") where we have a package
which builds ok with "dpkg-buildpackage -A" and "dpkg-buildpackage -B"
but FTBFS when built with plain "dpkg-buildpackage".

Are we truly and honestly saying this package would not deserve a
serious bug in the BTS just because it builds ok in the buildds?

Surely, the end user *must* be able to build the package as well, must
they not?

So, in the bug above, I'm asked to accept as a fact that we have
*already* deprecated building on single-cpu systems, implicitly and
automagically. Let's assume for a moment that such a deprecation is real
and suppose I would like to "undeprecate" it. What formal procedure
should I follow for that?

Would it work, for example, if I propose a change to Debian Policy so
that it reads "Packages must build from source" instead of "Packages
must build from source on multi-core systems"? No, that would be
useless, because Debian Policy already says that packages must build
from source.

Would it work, for example, if I propose a change to Release Policy so
that it reads "Packages must build on all architectures on which they
are supported" instead of "Packages must only build ok on the official
buildds"? No, that would not work either, because Release Policy
already says that packages must build on all architectures on which
they are supported.

See how Kafkaesque this is?

Currently, this is what is happening:

Whenever someone dares to report a bug like this as serious, following
both Debian Policy and Release Policy (or at least the letter of them),
we lambast them, we mock their build environment, we call
them fools, and we quote informal guidelines which are not written
anywhere. If we do this consistently, then no doubt building on
single-cpu systems will become de facto obsolete regardless of what
policy says, because nobody likes to be treated that way.

But surely there must be a better way: It is my opinion, and here is
where I'm asking the TC for support, that the burden of deprecating
building on single-cpu systems, or in general any other thing which
has always been a policy "must" directive, should be on those willing
to deprecate such things, and they are the ones who should convince
the rest of us, not the other way around.

For example, proud as we are to call ourselves the Universal Operating
System, we drop release architectures when it's increasingly difficult
for us to support them, *not* because we dislike them, *not* because
they are inefficient, and *not* because amd64 is "better".

We take a lot of care when we are about to deprecate architectures: we
examine the facts, the pros and the cons, the number of bugs affecting
such architectures, the number of people with the special skills those
architectures require, that sort of thing.

I believe this to be a much better model of what we should do if we
really wanted to deprecate building on single-cpu systems, not what
happened in Bug #907829.


Addendum: I'm going to summarize some of the reasons I'm given in
favor of deprecating building on single-cpu systems, and why I
consider those reasons mostly bogus.

* I'm told that single-cpu systems are an oddity and that most
physical machines manufactured today are multi-core, but this
completely fails to account for the fact that single-cpu systems are
today more affordable than ever thanks to virtualization and cloud
providers.

Just because most desktop systems are multi-core does not mean that we
can blindly assume that the end user will use a desktop computer to
build packages, or that users who do not build packages using a
desktop computer deserve less support. We don't discriminate against
minorities just because they are minorities.

* I'm told that building packages on single-cpu systems is worse
or less efficient, and that nobody would do that in 2019. I think this
is based on prejudice and not on real facts. The data I've collected
over the last months tells me that exactly the opposite is true:

I would call this "CPUism", which could be defined as the mistaken
belief that multi-core machines are always superior to
single-core machines. In real life, people care about the cost of
things, so there are a lot of cases where using single-cpu systems is
justified and useful.

But even if it were less efficient (which it is not), that still would
not mean at all that building on single-cpu systems is not useful. For
example, suppose that you have a Jenkins instance which is idle most of
the time. Why on earth should this instance be multi-core and cost more
if we don't care about the build time?

* I'm told that if I want to see these kinds of bugs fixed, I have to
fix them myself, and that maintainers should not spend *any* time on
fixing them. I can't buy the argument that maintainers
can't even be bothered to ensure that their packages build properly.

* I'm told that there are bugs more important than this one and that
therefore this one may not be serious. I consider such reasoning flawed
for several reasons:

1. If every time we had two different problems we had to assign them
different severities, we would easily run out of severities. This is
Dirichlet's pigeonhole principle, and it's something that everybody
here will understand. So it is normal and expected that two things
which do not have exactly the same level of importance end up sharing
a BTS severity level.

2. I have in fact seen bugs *less* important than this one reported
as serious, with all the consequences associated with that,
including the package being autoremoved from testing. Example: a wrong
maintainer address in the control file:


The package may be fully functional even if it has a maintainer
address which does not work. The bug reports are not lost, the BTS
still receives them.

3. If using the same severity is really a problem, we could always
report FTBFS bugs as "grave" in general and use "serious" only when
the build failure does not happen everywhere. The documentation says
that "grave" is appropriate when the bug makes the package in question
unusable or mostly so. What could be more unusable than not
distributing the package at all because it does not build?


I would hope that the above reasons are enough to illustrate that it's
not so "obvious" that we have to do this, as some people claim, and
therefore it's not something that we should do "willy-nilly".


--- End Message ---
--- Begin Message ---
The Debian Technical Committee, after evaluating the social and
technical consequences of handling bug #932795, has come to the
conclusion that this bug is not technical but social in nature: it
evidences a communication breakdown between two parties, because
detailed information on the motivations and consequences of the QA
work in question was not available at the beginning, and was added
only when animosity had already manifested.

Therefore, we assert that:

- Doing archive-wide quality assurance work (AWQA) is hard, important
  work and, to a good extent, one of the factors that makes Debian
  the robust distribution it is widely recognized as. It should always
  be thanked and appreciated.

- AWQA work needs to be easily recognizable as such. AWQA work usually
  does not consist of "only" rebuilding the whole archive, but of
  stress-testing it under specially constrained conditions, in order to
  achieve archive-wide results.

AWQA work often uncovers a large number of points demanding attention
(that is, bugs); of course, if we were to assume the archive is in
perfect shape, there would be no point in running said tests. When
filing bugs from said tests, we strongly suggest that submitters
visibly label them so that the maintainer understands any oddities in
the setup used.

Furthermore, we encourage AWQA drivers to document (e.g. by
creating a page inside wiki.debian.org, referenced from each of the
bug reports filed) their efforts, in enough detail for package
maintainers to understand the goals being pursued, in the interest of
not having to explain them case by case.

AWQA bugs will often (though not always -- e.g. the «Reproducible
Builds» project started off as an AWQA effort) report FTBFS failures.
Such failures are undeniably bugs. But, with the background we
presented here, and given that firing off a lot of bugs that will
potentially end up removing many packages will most likely elicit a
negative reaction, we kindly suggest that AWQA drivers thoroughly
consider whether the reported bugs are grave enough to warrant
excluding the affected packages from the current testing (and
therefore, from the next stable release) unless given immediate
attention, or whether they can be addressed in a calmer way.

Likewise, we ask package maintainers to always consider, in a
non-confrontational way, whether an affected package is really fit for
the next stable release.

Attachment: signature.asc
Description: PGP signature

--- End Message ---
