
Re: How should stalin be handled on slower architectures?



Adam Majer <adamm@galacticasoftware.com> wrote:

> My first comment would be: "For Pete's sake, a 22MB C file!"

> The problem is that the source code is basically one humongous function,
> at least at first glance.

Not really. The two longest function bodies seem to be about 20,000
non-comment lines each, whereas the entire file has more than
400,000 non-comment lines.

Thus it ought to be possible to split the source into several batches
automatically. Its structure is fairly regular - a quick-and-dirty
job could probably be done with a day's perl hacking.
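
Something like this, say. It assumes (I have not checked stalin's
actual output) that each top-level definition ends with a closing
brace in column 0 on a line of its own; the output names and the
50,000-line target size are made up, and the pieces would still need
the shared declarations hoisted into a common header before they
compile separately:

    #!/usr/bin/perl -w
    # Sketch: split one huge generated C file into pieces, cutting only
    # immediately after a top-level closing brace so that no function
    # body ever straddles two files.
    use strict;

    my $max_lines = 50_000;   # rough target size of each piece
    my $part      = 0;
    my $count     = 0;
    my $out;

    sub next_part {
        close $out if defined $out;
        $part++;
        open $out, '>', sprintf('stalin-part%02d.c', $part)
            or die "cannot write stalin-part$part.c: $!";
        $count = 0;
    }

    next_part();
    while (my $line = <>) {
        print $out $line;
        $count++;
        # Cut once we are past the target size *and* just closed a
        # top-level definition.
        next_part() if $count >= $max_lines and $line =~ /^\}/;
    }
    close $out;

Run as e.g. "perl split-stalin.pl stalin.c" (the script name being
whatever one cares to call it).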

On the other hand, it is likely that the 20,000-line functions are
what cause the majority of the thrashing. As far as I understand, gcc
will by default compile one function at a time, flushing the locally
used memory after each. Each of the long functions has a *lot* of
local variables, which is known to cause all sorts of trouble for
compilers designed with far fewer local variables in mind.

On the third hand, perhaps splitting would allow one to select a
coarser optimization setting for the huge functions, while keeping -O2
for source files with functions that have a more manageable size.
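
Concretely, something like this once the file is split (the part
names are just whatever the splitter above happens to emit, and -O0
here stands for whatever coarser setting turns out to be bearable):

    gcc -O0 -c stalin-part01.c    # the piece with a 20,000-line function
    gcc -O2 -c stalin-part02.c    # ordinary-sized functions keep -O2
    gcc -o stalin stalin-part*.o  # plus whatever libraries stalin needs

Whether the unoptimized output for the big functions is still fast
enough is of course something one would have to measure.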

-- 
Henning Makholm              "It is sympathetic that you mock yourself. Fully
                           justified. But it does not make you a Christian."


