
Bug#932795: Ethics of FTBFS bug reporting



Ansgar <ansgar@debian.org> writes:
> Adrian Bunk writes:

>> - An environment with at least 16 GB RAM is supported.
>>
>> Not sure about the exact number, but since many packages have 
>> workarounds for gcc or ld running into the 4 GB address space
>> limit on i386 it is clear that several packages wouldn't build
>> in an amd64 vm with only 8 GB RAM.

> Aren't there even packages that will not build on i386 with an i386
> kernel (non-PAE), as they require the full 4 GB address space to be
> buildable?

> Even more, from the "32 bit archs in Debian" BoF at DebConf15 I remember
> the suggestion that one might have to switch to 64-bit compilers even on
> 32-bit architectures in the future...  So building packages would in
> general require a 64-bit kernel, multi-arch and 4+ GB RAM.

Weighing in here as a Policy Editor, I think we do have a rough consensus
in the project about what sorts of resources a package may or may not
require in order to build, in that we've made firm decisions in both
directions (dropping architectures that can no longer build large packages
in a reasonable length of time, for example, but also rejecting packages
that cannot be built reasonably on our buildds).  But those decisions are
largely undocumented "tribal knowledge."

I would be in favor of writing down those guidelines, as has been
discussed on this thread, and publishing them as part of Policy.  I think
it would provide useful guide rails for developers: they would know how
many resources they can reasonably require for a package build, and what
sorts of build environments they need to support (and therefore should at
least consider simulating, to ensure that they do support them).

We could then align our archive-wide rebuild testing with the documented
minimum requirements for package builds, and all be consistently testing
the same thing, which would prevent some surprises.

I do think, as this thread has made clear, that we have some minimum
requirements and don't expect packages to build in smaller environments.
Minimum available memory is a really obvious one; I'm sure many of our
packages won't build with 128 MB of RAM, for example.
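If we did document a floor, it could even be checked mechanically before a
build starts.  A minimal sketch of such a check (the 4 GB figure and the
function name are invented here purely for illustration, not anything we
have agreed on):

```shell
# Hypothetical pre-build sanity check against a documented minimum.
# The 4 GB floor below is a placeholder, not an agreed number.
min_kb=$((4 * 1024 * 1024))

mem_total_kb() {
    # Print the MemTotal value (in kB) from a /proc/meminfo-style file.
    awk '/^MemTotal:/ { print $2 }' "$1"
}

total_kb=$(mem_total_kb /proc/meminfo)
if [ "$total_kb" -lt "$min_kb" ]; then
    echo "warning: ${total_kb} kB RAM is below the assumed ${min_kb} kB minimum" >&2
fi
```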

I'm rather dubious that it makes sense to *require* multiple cores to
build a package for exactly the reason that Santiago gave: single-core VMs
are very common and a not-very-exotic environment in which someone may
reasonably want to make changes to a package and rebuild it.  But maybe
I'm missing something that would make that restriction make sense.
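For what it's worth, single-core environments are already expressible
through the standard DEB_BUILD_OPTIONS interface (Debian Policy 4.9.1), so
packages have no excuse for assuming multiple cores.  A sketch of the
usual parsing a debian/rules build does (the helper-function name is
mine):

```shell
# Extract the optional "parallel=N" token from a DEB_BUILD_OPTIONS
# string, defaulting to a single job -- which is exactly the situation
# of a user on a single-core VM.
parallel_jobs() {
    jobs=1
    for opt in $1; do
        case "$opt" in
            parallel=*) jobs="${opt#parallel=}" ;;
        esac
    done
    echo "$jobs"
}
```

A rules file would then invoke something like
make -j"$(parallel_jobs "$DEB_BUILD_OPTIONS")", so setting
DEB_BUILD_OPTIONS="parallel=1" forces a fully serial build.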

It's possible that we may need a couple of levels of requirements: a base
minimum below which we don't expect any maintainer to worry, and a higher
tier of requirements for larger packages.  For
instance, I'm not sure that we want to say that we don't support building
*any* Debian package on a host that can't build Firefox (particularly
given our support for embedded devices); coreutils probably should build
on a lighter-weight machine than Firefox requires.  And it's possible that
multi-core may be a reasonable requirement for that "heavy package" tier.
If we do go down that path, though, it would be nice to add a metadata
field so that maintainers can flag their packages as "heavy" and our users
know not to expect them to build on commodity VMs.
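Such a flag could be as small as one new source-package field.  This is
purely an invented sketch of what it might look like; neither the field
name nor its syntax exists in dpkg today:

```
Source: firefox
Build-Resources: memory=16G, cores=4
```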

-- 
Russ Allbery (rra@debian.org)               <http://www.eyrie.org/~eagle/>

