
Re: compile speed



lsorense@csclub.uwaterloo.ca (Lennart Sorensen) writes:

> On Sun, Jun 15, 2008 at 07:19:29PM +0200, Hans-J. Ullrich wrote:
>> Yes, that is the point I wanted to know. I suppose anything higher than
>> "-j 4" will make the system too slow.
>> 
>> Just one last question: are the entries in /etc/apt/apt-build.conf used
>> by any(!) Debian-based compiler applications? I am thinking especially
>> of commands like "make-kpkg", ncurses tools like "module-assistant",
>> and dpkg-related things that are executed while upgrading packages.
>
> The thing is, many Makefiles are NOT proper and will result in broken
> builds if you try to run them in parallel.  So there is no way you can
> just say 'always do this' because sometimes it doesn't work.
>
> I find that 2xCores is a good value, since it tries to make sure each
> core has something to compile even while doing disk io for another
> compile task.  Any more than that never seems to improve things and just
> increases the number of context switches that need to be done.
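As a quick sketch, the 2xCores rule above can be computed with nproc (from
GNU coreutils); the echo here just shows the command you would run:

```shell
# Pick -j as twice the number of CPU cores, per the suggestion above.
cores=$(nproc)
jobs=$(( cores * 2 ))
echo "make -j$jobs"
```

On a dual-core machine this prints "make -j4".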

For the kernel I found that -j3 is a few seconds faster than -j4.

But what really makes a difference for me is ccache. It won't help the
first time you compile something, but it will the second time. I would be
interested to hear about cache hit/miss ratios for apt-build usage.
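For reference, a minimal way to try this, assuming the Debian ccache
package (which installs compiler symlinks under /usr/lib/ccache):

```shell
# Put the ccache symlink directory first in PATH so plain "gcc"/"g++"
# calls go through the cache.
export PATH=/usr/lib/ccache:$PATH

# Print the cache hit/miss statistics if ccache is present;
# "ccache -s" is its statistics flag.
if command -v ccache >/dev/null 2>&1; then
    ccache -s
fi
```

The hit/miss counters from "ccache -s" are exactly what you would watch
to answer the apt-build question above.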

> Another option is to look at gcc's -pipe option, which makes it use
> pipes between multiple processes rather than temporary files with each
> compile step run in turn.  It is completely safe no matter what the
> Makefile does.  It likely won't gain as much, but it's still better
> than nothing.
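If you want to try that with apt-build, -pipe can be added to the options
line in /etc/apt/apt-build.conf (a sketch; check the field names against
the file your apt-build version shipped):

```
# /etc/apt/apt-build.conf (excerpt)
Olevel = -O2
options = " -pipe "
```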

I have half a mind to file a bug report against gcc asking it to default
to -pipe. The only reason not to use it is when you don't have much RAM,
like < 128MiB.

MfG
        Goswin

