
Re: Performance analysis and profiling tools for sequential and parallel codes



Hi Julien,

Wouldn't it make sense to work on this inside Debian Science?

(Full quote from the Planet blog post below, for those who missed it)

- Lucas

On 09/12/10 at 10:21 -0000, jblache wrote:
> <http://blog.technologeek.org/2010/12/09/417>
> 
> As part of my work for EDF, I’ve had to package and integrate a set
> of tools for performance measurement and profiling of HPC code. This
> toolbox comprises tools for analysis of both sequential and parallel
> codes, MPI communications profiling and, of course, visualization
> frontends.
> 
> Without going into too much detail, here’s the list:
> 
> - OpenSpeedShop: a complete performance and profiling workbench,
> including I/O and MPI
> - PerfSuite: a relatively simple and easy to use performance
> analysis toolkit
> - TAU: a complete performance analysis framework including automatic
> instrumentation with PDT
> - Scalasca: a tool for performance optimization of parallel codes,
> including MPI communications
> - a number of dependencies: slog2, Open Trace Format libraries,
> VampirTrace, dyninst, monitor, perfctr, pfmon, PAPI, …
> - visualization frontends: paraprof, jumpshot4, cube3, …
> 
> If you’ve been anywhere near HPC code, the names probably ring a
> bell; they’re the best tools out there in their category, developed
> and used by the top laboratories.
> 
> Over the past year, all the tools have received some level of
> testing, so we know that they work, at least to some extent. As
> you can imagine, testing such tools is no easy task and consumes
> an insane amount of time and resources of all kinds.
> 
> Testing is all the more important because I had to produce a
> number of patches to integrate the tools properly into the
> distribution and, in some cases, even to get them to build.
> 
> Now, we would like to share this work with the HPC community in and
> around Debian. How exactly we are going to do that isn’t clear just
> yet; most probably, we’ll end up building a team with other
> interested parties and offer our packages as a base to build upon.
> 
> There are a number of challenges with these tools: they’re not easy
> to build, they’re not easy to maintain, they’re not easy to use,
> they’re not easy to understand. Basically, nothing is easy. Some of
> those tools were never meant to be packaged and integrated in a
> distribution, and no sane amount of patching will fix that, so we
> have to live with packages that aren’t quite as polished as we
> would like them to be.
> 
> And then, there are licenses. Some tools are non-free due to usage
> restrictions. Others rely on non-free dependencies. Although I’ve
> been looking at the licenses, a thorough license check will be
> required and decisions will need to be made.
> 
> It’s not for the faint of heart! If you are interested in these
> tools and in bringing them to Debian, please get in touch.
> 
> I’ve also had to package the Apache Derby database (Java); if
> someone out there cares about Derby, I’d be more than happy to
> provide my packages as a starting point for getting Derby into
> Debian. The packages need some work by someone who knows a thing
> or two about Derby and who can test and enhance the packaging of
> the server part.
> -- 
> Feed: Planet Debian
> <http://planet.debian.org/>
> Item: Julien Blache: Performance analysis and profiling tools for sequential and parallel codes
> <http://blog.technologeek.org/2010/12/09/417>
> Date: Thu Dec 09 10:21:53 UTC 2010
> Author: jblache
