
Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit



(hi edmund, i'm reinstating debian-devel on the cc list as this is not
a debian-arm problem, it's *everyone's* problem)

On Mon, Jan 7, 2019 at 12:40 PM Edmund Grimley Evans
<edmund.grimley.evans@gmail.com> wrote:

> >  i spoke with dr stallman a couple of weeks ago and confirmed that in
> > the original version of ld that he wrote, he very specifically made
> > sure that it ONLY allocated memory up to the maximum available
> > *physical* resident amount (i.e. it went into swap only as an
> > absolute last resort), and secondly that the number of object files
> > loaded into memory was kept, again, to the minimum that the spare
> > resident RAM could handle.
>
> How did ld back then determine how much physical memory was available,
> and how might a modern reimplementation do it?

 i don't know: i haven't investigated the code.  one clue: gcc does
(or at least used to do) exactly the same thing; i believe someone
*may* have tried removing the feature from recent versions of gcc.

 ... you know how gcc stays below the radar of available memory, never
going into swap-space except as a last resort?

> Perhaps you use sysconf(_SC_PHYS_PAGES) or sysconf(_SC_AVPHYS_PAGES).
> But which? I have often been annoyed by how "make -j" may attempt
> several huge linking phases in parallel.
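
 (for reference, a minimal untested sketch of that probe, assuming
glibc, where both of those sysconf names are available:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page  = sysconf(_SC_PAGESIZE);
        long phys  = sysconf(_SC_PHYS_PAGES);    /* total physical pages */
        long avail = sysconf(_SC_AVPHYS_PAGES);  /* currently-free pages */
        printf("total: %lld MiB\n", (long long)phys  * page >> 20);
        printf("avail: %lld MiB\n", (long long)avail * page >> 20);
        return 0;
    }

 a linker that wanted to stay resident could cap its allocations at
some fraction of the "avail" figure and spill to temporary files
beyond that, instead of letting the kernel push it into swap.)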

 on my current laptop, which is one of the very early quad-core i7
skylakes with 2400MHz DDR4 RAM, the PCIe bus actually shuts down if
too much data goes over it (the power draw gets too high).

 consequently, swap-thrashing is extremely risky: it causes the NVMe
SSD to go *offline*, re-initialise, and only come back again after
some delay.

 that means that i absolutely CANNOT allow the linker phase to go into
swap-thrashing, as it will result in the loadavg shooting up to over
120 within just a few seconds.


> Would it be possible to put together a small script that demonstrates
> ld's inefficient use of memory? It is easy enough to generate a big
> object file from a tiny source file, and there are no doubt easy ways
> of measuring how much memory a process used, so it may be possible to
> provide a more convenient test case than "please try building Firefox
> and watch/listen as your SSD/HDD gets t(h)rashed".
>
>     extern void *a[], *b[];
>     void *c[10000000] = { &a };
>     void *d[10000000] = { &b };
>
> If we had an easy test case we could compare GNU ld, GNU gold, and LLD.

 a simple script that auto-generates tens of thousands of functions
across a couple of hundred c files, with each function making tens to
hundreds of random cross-references (calls) to other functions
anywhere in the full set of auto-generated files, should be more than
adequate to send the linker phase into near-total meltdown.
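
 something along these lines would do as a generator (a rough,
untested sketch: the file names and constants are made up, scale them
up to taste):

    /* stress-gen.c: emit NFILES c files, each containing NFUNCS
       functions, each making NCALLS calls to randomly-chosen
       functions from the entire set.  the constants here are
       deliberately modest: scale up at your own risk. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NFILES 200
    #define NFUNCS 100    /* 200 x 100 = 20,000 functions in total */
    #define NCALLS 50     /* random cross-references per function */

    int main(void) {
        srand(12345);     /* fixed seed, so runs are reproducible */
        for (int f = 0; f < NFILES; f++) {
            char name[32];
            snprintf(name, sizeof name, "stress%03d.c", f);
            FILE *out = fopen(name, "w");
            if (!out) { perror(name); return 1; }
            /* declare every function in the whole program, so that
               any file can call into any other */
            for (int i = 0; i < NFILES * NFUNCS; i++)
                fprintf(out, "int fn%d(int);\n", i);
            for (int g = 0; g < NFUNCS; g++) {
                fprintf(out, "int fn%d(int x) {\n", f * NFUNCS + g);
                for (int c = 0; c < NCALLS; c++)
                    fprintf(out, "  x += fn%d(x & 1);\n",
                            rand() % (NFILES * NFUNCS));
                fprintf(out, "  return x;\n}\n");
            }
            fclose(out);
        }
        return 0;
    }

 compile the output with -fPIC and link the lot into a single shared
object (no main needed); /usr/bin/time -v on the link step will show
the peak resident set size, which makes comparing GNU ld, gold and
LLD straightforward.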

 the evil kid in me really *really* wants to give that a shot...
except it would be extremely risky to run on my laptop.

 i'll write something up. mwahahah :)

l.

