Re: Arch qualification for buster: call for DSA, Security, toolchain concerns
On Fri, Jun 29, 2018 at 8:16 AM, Uwe Kleine-König <uwe@kleine-koenig.org> wrote:
> Hello,
>
> On Wed, Jun 27, 2018 at 08:03:00PM +0000, Niels Thykier wrote:
>> armel/armhf:
>> ------------
>>
>> * Undesirable to keep the hardware running beyond 2020. armhf VM
>> support uncertain. (DSA)
>> - Source: [DSA Sprint report]
>>
>> [DSA Sprint report]:
>> https://lists.debian.org/debian-project/2018/02/msg00004.html
>
> In this report Julien Cristau wrote:
>
>> In short, the hardware (development boards) we're currently using to
>> build armel and armhf packages aren't up to our standards, and we
>> really, really want them to go away when stretch goes EOL (expected in
>> 2020). We urge arm porters to find a way to build armhf packages in
>> VMs or chroots on server-class arm64 hardware.
from what i gather the rule is that the packages have to be built
natively. is that a correct understanding, or has the policy changed?
>
> If the concerns are mostly about the hardware not being rackable, there
> is a rackable NAS by Netgear:
>
> https://www.netgear.com/business/products/storage/readynas/RN2120.aspx#tab-techspecs
>
> with an armhf cpu. Not sure if cpu speed (1.2 GHz) and available RAM (2
> GiB) are good enough.
no matter how much RAM there is, it's never going to be "enough", and
letting systems go into swap is not a viable option either [2].
i've been endeavouring to communicate this issue - the building
(linking) of very large packages - for a long, *long* time. as it's a
strategic cross-distro problem that's been creeping up very slowly on
*all* distros as packages inexorably grow in size, getting the problem
and possible solutions across to people is extremely difficult.
eventually i raised a bug on binutils, and it took several months to
communicate the extent and scope of the problem even to the developer
of binutils:
https://sourceware.org/bugzilla/show_bug.cgi?id=22831
the problem is that ld from binutils, unlike gcc (which looks
dynamically at how much RAM is available), by default loads absolutely
all object files into memory and ASSUMES that swap space is going to
take care of any RAM shortfall.
unfortunately, due to the amount of cross-referencing that takes place
in the link phase, this "strategy" causes MASSIVE thrashing: even if
just one extra object file is enough to push the link into swap, the
whole link starts thrashing.
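as a quick way of seeing the scale of the problem (purely an
illustration: "bigprog" and the object files here are placeholders),
GNU time's -v switch will report the peak resident set size reached
during a link:

  # measure the peak resident memory used during the final link step
  $ /usr/bin/time -v cc -o bigprog *.o 2>&1 | grep "Maximum resident"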
this is particularly pertinent for builds with debug info switched on:
the extra RAM consumed makes it far more likely that the link will go
into swap.
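a trivial way to see how much extra the debug info adds (again just an
illustration, "hello.c" stands in for any source file; bigger objects
mean more RAM for ld to hold):

  # compile the same file with and without -g and compare object sizes
  $ cc -c -O2 -o nodebug.o hello.c
  $ cc -c -O2 -g -o debug.o hello.c
  $ ls -l nodebug.o debug.o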
firefox now requires 7GB of resident RAM, making it impossible to
compile on 32-bit systems. webkit-based packages require well over 2GB
of RAM (and have done for many years). i saw one scientific package a
couple of years back that could not be compiled for 32-bit systems
either.
all of this is NOT the fault of the PACKAGES [1]: it's down to the
fact that ld's default memory-allocation strategy in *binutils* is far
too aggressive.
the main developer of ld has this to say:

  Please try if "-Wl,--no-keep-memory" works.
now, that's *not* a good long-term "solution" - it's a drastic,
drastic hack that cuts the optimisation of keeping object files in
memory stone dead. it'll work... it will almost certainly result in
32-bit systems being able to successfully link applications that
previously failed... but it *is* a hack. someone really *really*
needs to work with the binutils developer to *properly* solve this.
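for anyone who wants to give it a try, here's a rough sketch of how
the flag could be passed into a debian build (assuming the package's
build system honours dpkg-buildflags; "bigprog" and the object files
in the second example are just placeholders):

  # sketch only: append the linker hack for a one-off package build
  $ export DEB_LDFLAGS_APPEND="-Wl,--no-keep-memory"
  $ dpkg-buildpackage -us -uc

  # or, outside packaging, pass it straight to the compiler driver:
  $ cc -o bigprog *.o -Wl,--no-keep-memory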
if any package maintainer manages to use the above hack to
successfully compile 32-bit packages that previously completely ran
out of RAM or otherwise took days to complete, please do put a comment
to that effect in the binutils bug report: it will help everyone in
the entire GNU/Linux community.
l.
[1] really, it is... developers could easily split packages into
dynamically-loadable modules, where each module links comfortably in
well under 2GB or even 1GB of RAM. they choose not to, choosing
instead to link hundreds of object files into a single executable (or
library). asking so many developers to change their strategy
however... yeah :) big task, i ain't taking responsibility for that
one.
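just to make concrete what "splitting into modules" means (the names
and directory layout here are invented purely for illustration, and
the objects in the shared libraries would need to be built with
-fPIC):

  # the usual approach: one giant link, every object at once
  $ cc -o foo main.o core/*.o gui/*.o

  # split into shared modules instead: each link needs far less RAM
  $ cc -shared -o libfoo-core.so core/*.o
  $ cc -shared -o libfoo-gui.so gui/*.o
  $ cc -o foo main.o -L. -lfoo-core -lfoo-gui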
[2] the amount of memory required for the link phase of large packages
goes up over time, and up, and up, and up... when is it going to stop?
never. so just adding more RAM is never going to "solve" the problem,
is it? it just *avoids* the problem. letting even 64-bit systems go
into swap is a huge waste of resources, as builds that go into swap
consume far more resources and time. so *even on 64-bit systems* this
needs solving.