On 7 March 2011 at 23:13 +0100, Ludovic Brenta wrote:
> > gcc-4.4 -c -g -gnatwae -gnat05 -gnatwl -gnaty3abefhiklnprt
> > -I../lib/spark/current -I../common/versioning -gnatf -O1 -fstack-check
> > sem.adb
> >
> > raised STORAGE_ERROR : stack overflow (or erroneous memory access)
>
> This could be triggered by -fstack-check, which is broken in this
> particular version of GCC (and has been since 4.0; there is a long
> string of PRs on GCC bugzilla). Maybe try removing -fstack-check.

Removing -fstack-check did not help, but using -O0 instead of -O1 did.
Strangely, there is no error if I just pass all of the -f* options that
-O1 turns on, yet the error is still there if I pass -O1 together with
the -fno-* options to disable the optimizations.

> > So the largest problem for now is the definition of RefType, and I'm
> > not sure at all what to do with it. Also, I have no idea how the
> > binary distribution for x86_64 was compiled. Maybe it was compiled
> > with an older compiler that does not emit that warning, and the
> > binary distribution is broken? This is from gdb:
> >
> > (gdb) print examinerconstants.RefType(0xffffffff)
> > $5 = -1
> >
> > Any ideas?
>
> Could it be that the Examiner is compiled in 32-bit mode only, and that
> the "x86_64" moniker only applies to the compiler?

I also thought so, but it is not true:

% file spark
spark: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically
linked (uses shared libs), for GNU/Linux 2.6.9, not stripped

I think the spark executable works most of the time on x86_64 as long as
memory is allocated in the first 2 GB of the address space. That would
probably always hold if there were no heap allocation randomization; my
testing shows that heap allocation IS randomized, but it always stays
within the first 2 GB. If RefType was never used to point at objects on
the stack (which is located at addresses around 0x7fffea13a948 in my
testing), that would explain why the bug was not visible.

Regards,
Eugeniy Meshcheryakov