
Re: buildd machines vs. resource-hungry packages (ITK)

--On Saturday, September 17, 2011 08:40:16 AM +0200 Mike Hommey <mh@glandium.org> wrote:

On Fri, Sep 16, 2011 at 10:06:58PM -0500, Steve M. Robbins wrote:

I am having a huge problem getting insighttoolkit (ITK) to build due
to the fact that it takes a huge amount of disk, memory, and time to
build.  In fact, the build is now generally failing because either
disk or memory is exhausted.

The main culprit behind the resource usage is the wrappers for Tcl,
Java, and Python.  The underlying ITK codebase consists of heavily
templated C++ libraries.  The wrapping process generates a huge amount
of code since many variants of each templated class are instantiated
and compiled.

Up until the most recent upload, about 10 GB disk was required, which
exceeds the capacity of several of the buildd machines, so builds were
failing.  Builds were also failing due to exhausting the buildd
memory.  Recently, it was suggested (#640667) that the memory issue
was due to building the huge amount of wrapping code using -O3 and
that it would be better to use -O2 instead; so the latest upload
(3.20.0-14) switched to -O2.  However, I also turned on -g as a side
effect and doubled the disk usage to 22 GB!  Now it basically won't build
anywhere :-(

I can continue to fiddle with the compiler flags to reduce disk
requirements.  For example, I can remove -g again; however, ITK would
still need 10 GB or so, more than some buildds provide.  So something
more is required.
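One concrete way to drop -g again without touching upstream would be a
debian/rules override along these lines (a sketch only; it assumes the
package uses dh with ITK's CMake build system, and the exact flag values
are an assumption, not taken from the actual packaging):

```makefile
# debian/rules fragment (illustrative): CMAKE_BUILD_TYPE=None keeps CMake
# from injecting its own flag set, and -g0 explicitly disables debug info.
override_dh_auto_configure:
	dh_auto_configure -- \
		-DCMAKE_BUILD_TYPE=None \
		-DCMAKE_CXX_FLAGS="-O2 -g0"
```

That gets back to the ~10 GB figure, but as noted above it still leaves
the build too large for some of the buildds.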

Any ideas?

I think you should ask yourself why there is a need for so much disk
space when building in the first place.  Are the built packages taking
3 GB when installed?  If not, then there must be something wrong with
the build system.


I am new to maintaining buildd servers and the build of the
insighttoolkit got my attention.  Looking at the log, I see it took a
bit over 6 days on my old AlphaServer 1200 with its 1.5 gbytes of
memory and about 12 gbytes of disk.  The low memory did not seem to be
a problem if one is patient enough.  12 gbytes is a lot, but by
today's standards it does not seem outlandish.



Bill MacAllister
Infrastructure Delivery Group, Stanford University
