Re: Why not use GCC 3.0?
Christopher C. Chimelis wrote:
On Mon, 4 Mar 2002, Jerome Warnier wrote:
I just finished the (long) installation process of a Woody on an
AlphaServer and met a lot of problems during and after the install.
Like what? Did you file bug reports?
Not yet, but I will. I still have to check whether such bug reports have
already been filed.
All the problems I had were resolved by using gcc-3.0, via the "apt-get
-b source" method.
That's why I wonder why gcc-3.0 isn't used for all binary packages in Woody.
The problems resolved themselves once I changed the gcc, cpp and g++
links in /usr/bin to point to the 3.0 versions.
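For the record, the rebuild went roughly like this (netatalk is just an
example package; the commands assume the gcc-3.0, g++-3.0 and cpp-3.0
packages are already installed, and the .deb filename is illustrative):

```shell
# Point the compiler links at the 3.0 versions
cd /usr/bin
ln -sf gcc-3.0 gcc
ln -sf cpp-3.0 cpp
ln -sf g++-3.0 g++

# Fetch the source package and build binary .debs from it
apt-get -b source netatalk

# Install the locally built package (filename is illustrative)
dpkg -i netatalk_1.5.1.1-4_alpha.deb
```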
There are a lot of reasons. While gcc-3.0 is currently superior to
gcc-2.95 on alpha, that was not always the case. Also, woody's been in
freeze (for a while, unfortunately) and changing compilers at this stage
would cause more trouble than it would solve (for instance, all C++
packages would need to be recompiled). Because of woody's (hopefully)
impending release, I didn't feel that changing compilers this late in
the game was beneficial to anyone.
I understand. But isn't it possible to compile certain binary packages
with gcc-3.0 anyway, without making changes to the to-be-frozen (but
freezing really slowly) Woody?
The problems I met:
- plain kernels 2.2.20, 2.4.17 and 2.4.18 (from kernel.org; and yes, I
needed a more recent kernel because of a Mylex controller in the
machine) did not compile, failing with seemingly random compilation errors.
Which drivers? Most of these can be worked around by removing the
optimisation or reducing it to -O0 for the affected drivers.
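For instance, here is a rough sketch of rebuilding a single driver at
-O0 in a 2.4-era tree (the paths, make targets and mechanics are
illustrative; the details depend on the kernel version's build system):

```shell
cd /usr/src/linux

# Remove the object that miscompiles
rm -f drivers/block/DAC960.o

# Ask make to print (not run) the gcc command it would use
make -n SUBDIRS=drivers/block

# Re-run that gcc command by hand with -O2 replaced by -O0,
# then let the normal build relink the kernel
make vmlinux
```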
DAC960. I also read the posts on this mailing list about the firmware
version (I had that problem too, and updated the firmware to 2.73).
The thing is, I imagine I'm not the only one who has had to add a SCSI
disk on the NCR controller just to boot, compile a newer kernel, and
try again.
If I ever have to reinstall it, or install another machine like it, I
would really love for it to be as easy as on the other platforms. So I
feel I should report the problems I encountered and try to help solve
them.
I was lucky in that I had other machines available with Debian (but no
other Alpha) to read the docs.
I was not able to find out what the problem was until I used my
self-built 2.4 kernel, because the only message the 2.2.19 kernel
printed before the kernel panic was a debug dump (a lot of numbers, but
nothing really useful for understanding the problem). I discovered
afterwards that the Mylex firmware was too old for the driver's liking.
- Netatalk (1.5.1.1-4) appears to have a strange bug on 64-bit
architectures; however, recompiling it solved the problem for me (bug
#123268).
Most likely another optimiser bug. There are a few that apparently
generate bad code rather than bombing during compilation.
I personally prefer changing the gcc version to modifying any of the
parameters of the beautifully hand-made Debian (source) packages ;-)
It is more convenient, and I would rather rely on the gcc developers
than on my own suppositions.
Did you read the bug? It describes a very surprising problem of
file-size multiplication and data loss.
Maybe it is even possible to ask the packaging system to compile with a
given gcc version? I'm willing to keep updating my system (apt-get
rules!), but I would have to check on each upgrade that it doesn't
replace my packages with gcc-2.95-compiled versions.
Bump/add an epoch in the package versions. That should keep your
packages from being replaced by normal changes in the main archive
(unless the maintainers of those packages do the same, of course).
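A sketch of what that looks like (package name and version string are
just examples; dch comes from the devscripts package):

```shell
# Rebuild locally with an epoch so the archive's version never
# looks newer to apt/dpkg
apt-get source netatalk
cd netatalk-1.5.1.1

# Prepend "1:" to the version; a version with an epoch sorts above
# any version without one
dch -v 1:1.5.1.1-4local1 "Rebuilt locally with gcc-3.0"
dpkg-buildpackage -us -uc -b

# Sanity check: the epoch beats even a later archive upload
dpkg --compare-versions 1:1.5.1.1-4local1 gt 1.5.1.1-5 && echo "epoch wins"
```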
I will try that.
C
Thanks for your help