On Fri, Jul 13, 2012 at 3:57 PM, Frans van Berckel
<fberckel@xs4all.nl> wrote:
> On Fri, 2012-07-13 at 13:48 +0200, Michel Schanen wrote:
> >
> > We received these cluster nodes at no cost and we use them
> > essentially for simulation codes/testing. As the tools are all SPARC
> > and not SPARC64, we had some problems using OpenMP. For instance, the
> > atomic statement was very slow. After compiling gcc with arch=SPARC64
> > we got a considerable performance boost using OpenMP. In the end, all
> > our development tools have now been recompiled with arch=SPARC64.
> >
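For illustration, here is a minimal sketch of the kind of OpenMP atomic update that is slow with a 32-bit SPARC toolchain; the code itself and the -m64 -mcpu=niagara2 build flags are assumptions for a T2 system, not the actual code or flags used on these nodes.

/* atomic_demo.c - minimal sketch (assumed code, not from the original
 * cluster): every thread performs an OpenMP atomic increment on a
 * shared counter, the kind of operation reported above as very slow
 * when the toolchain targeted 32-bit SPARC rather than SPARC64. */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    long counter = 0;
    long i;

    #pragma omp parallel for
    for (i = 0; i < 10000000; i++) {
        #pragma omp atomic
        counter++;
    }

    printf("counter = %ld, max threads = %d\n",
           counter, omp_get_max_threads());
    return 0;
}

/* Assumed build line for a 64-bit Niagara target:
 *   gcc -O2 -fopenmp -m64 -mcpu=niagara2 atomic_demo.c -o atomic_demo */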
> Did you know there's a SPARC64 packages port at Debian-ports?
> http://buildd.debian-ports.org/status/architecture.php?a=sparc64&suite=sid
I didn't know about that yet. Maybe I'll test a SPARC64 installation on a spare node or get involved in the build/testing process. Thank you.
> > Oh and don't forget to recompile your kernel with 64 CPU support.
> > Otherwise you will be stuck with 32 or something.
> And they even built a 64-bit kernel as well. A how-to for bootstrapping
> it for SPARC64 can be found in the wiki:
> http://wiki.debian.org/Sparc64
Correct me if I'm wrong, but I think the kernel is already 64-bit on the official SPARC port of Debian; only the userland is 32-bit. Here I was only talking about the limit on the number of CPUs: the Niagara T2 has 64 hardware threads, of which only 32 will be used with the kernel provided in the official SPARC release.
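As a concrete sketch, that CPU limit is something fixed at kernel build time. The option names below are the standard kernel ones; the values are assumptions for a 64-thread T2, not the configuration actually shipped or used here.

# .config fragment (assumed values)
CONFIG_SMP=y
CONFIG_NR_CPUS=64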
> Thanks,
> Frans van Berckel
Another problem we faced with the T5120 systems was that the TFTP network boot failed. We installed all of the nodes diskless, so the kernel has to have NFS support built in; that way the root file system is mounted over NFS. While pulling the kernel over TFTP at boot, the download stalled. The problem was the size of our kernel: we had to strip it down to its current 5 MB, because above 5 MB the net boot fails.
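For reference, a diskless NFS-root setup like the one described needs the NFS-root pieces compiled into the kernel (not as modules) plus matching boot arguments. The fragment below uses standard option names, but the server address and export path are placeholders, not the actual configuration of these nodes.

# .config fragment for an NFS-mounted root (assumed values)
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y

# Typical kernel boot arguments for an NFS root (placeholder address/path):
#   root=/dev/nfs ip=dhcp nfsroot=192.168.1.1:/export/nfsroot
# Keep the resulting image small; as noted above, the TFTP download
# stalled once the kernel grew past about 5 MB.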
Michel