
[comp.os.linux.alpha] High-performance on AXP-Linux



--- Begin Message ---
Greetings,

The Linux/Alpha platform is fast becoming acknowledged as having the best
price/performance for floating point computation.  This was recognized
as early as two years ago, when Digital Domain put 200
Linux/Alpha machines together to do the rendering for the movie Titanic,
and more recently when the Avalon cluster at Los Alamos became the #114
supercomputer in the world at a total cost of around $300K.

In the last few months, several developments have enhanced the
performance of this platform, including the Free-Fast-Math library of
Joachim Wesner and Kazushige Goto, and Mr. (Dr.?) Goto's hand-coded
assembler BLAS (basic linear algebra subroutines), which multiply
matrices faster than any other single-processor machine in the world (I
think; at least, faster than anything else within ten times the price).
Johannes Hausmann has put these and other fast linear algebra routines
into a single fastmath/BLAS/LAPACK RPM builder, which greatly simplifies
installation of these disparate pieces.

Richard Payne has put together a page to summarize these and other
high-performance efforts at
http://www.alphalinux.org/docs/high_perf.html.  In response to that
page, it was suggested that a mailing list be created to discuss these
topics.

This list has been created, and is open to new members.  To join, simply
send me email.  (If you reply to this, make sure *my* email address is
in the To: field, not a list address.)  Possible discussion topics might
include:

   * Tuning of existing routines for improved speed, e.g. block size
     issues, performance on 21164PC sans on-chip L2 cache, etc.
   * New routines to accelerate FFTs, sparse matrix solvers, FEM/BEM
     matrix construction, etc.
   * Requests for help with coding of inner loops of custom codes
     (neural nets? spreadsheets?  mozilla? quake? :-), including
     assembler instruction scheduling, cache issues, etc.

I suspect the list will focus on scientific computation, but any
high-performance topic is welcome.

Note: this list runs on decade-old, non-majordomo list technology.
List updates happen once a day, and if I am unreachable, an update
request could take several days to be processed.  The list will
not have an archive right away (though a volunteer has stepped forward
to create one soon), but I will save all messages and will gladly honor
forwarding requests.  If anyone has a better list server available,
perhaps the list should be hosted there; in the meantime, I'm making
available what I have.

Cheers and happy hacking!

[P.S. If anyone's on debian-alpha, feel free to forward this.]

-Adam `Cold Fusion' Powell, IV http://www.ctcms.nist.gov/~powell/ ____
USDoC, National Institute of Standards & Technology (NIST)  |\ ||<  |
Center for Theoretical and Computational Materials Science  | \||_> |


--- End Message ---
