
Bug#932795: Ethics of FTBFS bug reporting



Package: tech-ctte

Dear TC:

I reported this bug:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=907829

and it was downgraded on the basis that the official autobuilders
are multi-core.

I believe this downgrade is not appropriate, for several reasons:

* The informal guideline being applied, "FTBFS bugs are serious if
and only if they happen on buildd.debian.org", is not written down
anywhere, and it contradicts Debian Policy, which says "it must be
possible to build the package when build-essential and the
build-dependencies are installed".

* Because this is a violation of a Policy "must" directive, I consider
the downgrade to be a tricky way to modify Debian Policy without
following the usual Policy decision-making procedure.

* I also do not accept that the informal guideline being used is
universally applicable and valid in 100% of cases. In fact, I have yet
to see a rationale for it written anywhere, so I see no reason why
people follow it. Packages which FTBFS on buildd.debian.org certainly
deserve a serious bug, but P => Q is not the same as Q => P.

If we have a FTBFS bug that nobody can reproduce, then fine:
downgrading the bug while the package builds fine on the buildds may
make sense as a precautionary measure until we have more information.
But a single successful build on buildd.debian.org does not ensure
that the package will build on every system where the package must
build.

To illustrate why I think this guideline can't be universal, let's
consider the case (as a "thought experiment") where we have a package
which builds ok with "dpkg-buildpackage -A" and "dpkg-buildpackage -B"
but FTBFS when built with plain "dpkg-buildpackage".

Are we truly and honestly saying this package would not deserve a
serious bug in the BTS just because it builds ok on the buildds?

Surely, the end user *must* be able to build the package as well, must
they not?
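
For the avoidance of doubt, here is a minimal sketch of the three
build variants in the thought experiment above (the flags are
documented in dpkg-buildpackage(1); the trailing comments are mine):

  $ dpkg-buildpackage -A   # architecture-independent binary packages only
  $ dpkg-buildpackage -B   # architecture-dependent binary packages only
  $ dpkg-buildpackage      # full build: source plus all binary packages

The buildds perform binary-only builds, essentially the first two, so
a package in this scenario would look perfectly healthy on
buildd.debian.org while failing for every user who runs plain
dpkg-buildpackage.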


So, in the bug above, I'm asked to accept as a fact that we have
*already* deprecated building on single-cpu systems, implicitly and
automagically. Let's assume for a moment that such deprecation is real
and suppose I would like to "undeprecate" it. What formal procedure
should I follow for that?

Would it work, for example, if I proposed a change to Debian Policy so
that it reads "Packages must build from source" instead of "Packages
must build from source on multi-core systems"? No, that would be
useless, because Debian Policy already says that packages must build
from source.

Would it work, for example, if I proposed a change to Release Policy
so that it reads "Packages must build on all architectures on which
they are supported" instead of "Packages must only build ok on the
official buildds"? No, that would not work either, because Release
Policy already says that packages must build on all architectures on
which they are supported.

See how Kafkaesque this is?

Currently, this is what is happening:

Whenever someone dares to report a bug like this as serious, following
both Debian Policy and Release Policy (or at least the letter of them),
we lambast them, we mock their build environment, we call them fools,
and we quote informal guidelines which are not written down anywhere.
If we do this consistently, then no doubt building on single-cpu
systems will become de facto obsolete regardless of what policy says,
because nobody likes to be treated that way.

But surely there must be a better way. It is my opinion, and here is
where I'm asking the TC for support, that the burden of deprecating
building on single-cpu systems, or in general anything else which has
always been a Policy "must" directive, should fall on those who wish
to deprecate it: they are the ones who should convince the rest of us,
not the other way around.

For example, proud as we are to call ourselves the Universal Operating
System, we drop release architectures when it becomes increasingly
difficult for us to support them, *not* because we dislike them, *not*
because they are inefficient, and *not* because amd64 is "better".

We take a lot of care when we are about to deprecate an architecture:
we examine the facts, the pros and the cons, the number of bugs
affecting the architecture, the number of people with the special
skills it requires, that sort of thing.

I believe this to be a much better model of what we should do if we
really wanted to deprecate building on single-cpu systems, not what
happened in Bug #907829.

---------------------------------------------------------------------

Addendum: I'm going to summarize some of the arguments I have been
given in favor of deprecating building on single-cpu systems, and why
I consider those arguments mostly bogus.


* I'm told that single-cpu systems are an oddity and that most
physical machines manufactured today are multi-core, but this
completely fails to account for the fact that single-cpu systems are
more affordable today than ever, thanks to virtualization and cloud
providers.

Just because most desktop systems are multi-core does not mean that we
can blindly assume that the end user will use a desktop computer to
build packages, or that users who do not build packages on a desktop
computer deserve less support. We do not discriminate against
minorities just because they are minorities.
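
Incidentally, anyone with a multi-core machine can approximate a
single-cpu environment to check this for themselves. A minimal sketch,
assuming the taskset tool from util-linux is available:

  $ nproc                  # e.g. 4 on a typical desktop
  $ taskset -c 0 nproc     # pinned to CPU 0, this reports 1
  $ taskset -c 0 dpkg-buildpackage -us -uc

Note that this only restricts the CPU affinity of the build; software
which reads /proc/cpuinfo directly will still see every core, so a
real single-cpu virtual machine remains the more faithful test.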


* I'm told that building packages on single-cpu systems is worse or
less efficient, and that nobody would do that in 2019. I think this is
based on prejudice rather than on real facts. The data I've collected
over the last few months tells me that exactly the opposite is true:

https://people.debian.org/~sanvila/single-cpu/

I would call this "CPUism": the mistaken belief that multi-core
machines are always superior to single-core machines. In real life,
people care about the cost of things, so there are plenty of cases
where using single-cpu systems is justified and useful.

But even if it were less efficient (which it is not), that still would
not mean that building on single-cpu systems is not useful. For
example, suppose you have a Jenkins instance which is idle most of the
time. Why on earth should this instance be multi-core and cost more if
we do not care about the build time?


* I'm told that if I want to see bugs like this fixed, I have to fix
them myself, and that maintainers should not spend *any* time on
fixing them. I cannot buy the argument that maintainers cannot even be
bothered to ensure that their packages build properly.


* I'm told that there are bugs more important than this one and that
therefore this one may not be serious. I consider such reasoning
flawed for several reasons:

1. If every time we had two problems of different importance we had to
assign them different severities, we would quickly run out of
severities: the BTS has only seven severity levels, and there are far
more than seven distinct kinds of problem. This is just the pigeonhole
principle (Dirichlet's box principle), something everybody here will
understand. So it is normal and expected that two things which do not
have exactly the same level of importance end up sharing a BTS
severity level.

2. In fact, I have seen bugs *less* important than this one reported
as serious, with all the consequences associated with that, including
the package being autoremoved from testing. Example: a wrong
maintainer address in the control file:

https://bugs.debian.org/cgi-bin/pkgreport.cgi?dist=unstable;include=subject%3Amaintainer+address;severity=serious

The package may be fully functional even if it has a maintainer
address which does not work; the bug reports are not lost, because the
BTS still receives them.

3. If sharing the same severity is really a problem, we could always
report FTBFS bugs as "grave" in general and use "serious" only when
the build failure does not happen everywhere. The BTS documentation
says that "grave" is appropriate if the bug makes the package in
question unusable or mostly so. What could be more unusable than not
distributing the package at all because it does not build?

---------------------------------------------------------------------

I would hope that the above reasons are enough to illustrate that it
is not so "obvious" that we have to do this, as some people claim, and
therefore that it is not something we should do "willy-nilly".

Thanks.

