
Re: About the i586 / i386 'optimized releases' differences?



On Thu, Nov 15, 2001 at 10:16:37AM -0500, Timothy H. Keitt wrote:
> My guess is that getting rid of '-g' (i.e., debugging symbols) would be 
> the most profitable "optimization." My understanding is that the debug 
> symbols cannot be stripped from library code, so you are probably 
> thrashing your cpu cache unnecessarily when running debian binary 
> packages (at least io-bound processes that run lots of dynamically 
> linked code). (Does anyone have benchmark results?) If I remember 
> correctly, it is Debian policy to use '-g' and then strip non-library 
> binaries. I'm sure I'll get howls for suggesting it, but I think that 
> the policy should be to not use '-g' in the stable distribution.

You can strip shared libraries, and the Debian project appears to do so:

[ferlatte@movin ~]$ objdump -g /lib/libdb.so.2 

/lib/libdb.so.2:     file format elf32-i386

objdump: /lib/libdb.so.2: no symbols
objdump: /lib/libdb.so.2: no recognized debugging information

[ferlatte@movin ~]$ objdump -g /usr/lib/libgtk-1.2.so.0

/usr/lib/libgtk-1.2.so.0:     file format elf32-i386

objdump: /usr/lib/libgtk-1.2.so.0: no symbols
objdump: /usr/lib/libgtk-1.2.so.0: no recognized debugging information

[ferlatte@movin ~]$ objdump -g /bin/ls                 

/bin/ls:     file format elf32-i386

objdump: /bin/ls: no symbols
objdump: /bin/ls: no recognized debugging information


Compiling with -g and then stripping is no different from compiling
without -g, except that in the latter case you give up the option of
producing debug versions of the libraries.  Besides, the debugging
symbols live in ELF sections that are never mapped at run time; they
aren't read unless you actually run a debugger, so all they would
consume is disk space and bandwidth, not memory.
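
To illustrate (an untested sketch with made-up file names; strip's
--strip-debug option has been in GNU binutils for a long time):

$ gcc -g -O2 -fPIC -shared -o libfoo.so foo.c   # build with debug info
$ cp libfoo.so libfoo.so.dbg                    # keep an unstripped copy for gdb
$ strip --strip-debug libfoo.so                 # this is what would get shipped
$ objdump -g libfoo.so                          # should now report no debugging info

Either way the code the dynamic loader maps is the same; -g doesn't
change code generation, it only adds extra sections to the file.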

On Intel, most CPU-specific optimizations are a wash anyway.  I can't
find the reference (so feel free to completely disregard this part),
but I recall a discussion which mentioned that for general-purpose
apps, the only x86 processor worth optimising for specifically was the
Pentium, because its pipelining was quirky enough that if the compiler
didn't take steps to work around it, you'd lose some performance.
Even then, it only resulted in about a 10% boost, and the binaries
would be slower on i686-class machines, which have better hardware, so
it's just not worth the pain.
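
If someone wants to measure it, the comparison would look roughly like
this (bench.c is a made-up name, and the exact flag spellings vary
between gcc releases):

$ gcc -O2 -march=i386 -o bench-i386 bench.c     # lowest common denominator
$ gcc -O2 -mcpu=pentium -o bench-tuned bench.c  # schedule for the Pentium pipe, still runs on i386
$ gcc -O2 -march=pentium -o bench-i586 bench.c  # may emit Pentium-only instructions
$ time ./bench-i386 ; time ./bench-tuned ; time ./bench-i586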

There are, of course, specific cases where you do want these: purely
CPU-bound apps like Octave, or the kernel (but that's mostly so that
the kernel can take advantage of newer assembly instructions, like
atomic test-and-set ops).
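
For the Octave case, anyone who really wants a CPU-tuned build can
rebuild the Debian package locally.  Roughly (assuming deb-src lines
in sources.list, the build-dependencies installed, and a debian/rules
that lets you override CFLAGS, which many don't without a little
editing):

$ apt-get source octave            # fetch and unpack the source package
$ cd octave-*/
$ $EDITOR debian/rules             # add e.g. -mcpu=pentium to CFLAGS
$ dpkg-buildpackage -us -uc -b     # build unsigned binary packages
$ dpkg -i ../octave_*.deb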

M


