
Re: h5py and hdf5-mpi




On 13/08/2019 05:01, Drew Parsons wrote:
On 2019-08-13 03:51, Steffen Möller wrote:
Hello,


There are a few data formats in bioinformatics now depending on hdf5, and
h5py is used a lot. My main concern is that the user should not need to
configure anything, like a set of hostnames. And there should not be
anything stalling while it waits to contact a server. MPI needs to be
completely transparent, and then I would very much like to see it.

MPI is generally good that way.  The program runs directly as a simple serial program if you run it on its own, so in that sense it should be transparent to the user (i.e. you won't know it's MPI-enabled unless you know to look for it).  A multi-CPU job is launched by running the program with mpirun (or mpiexec).

e.g. in the context of python and h5py, if you run
  python3 -c 'import h5py'
then the job runs as a serial job, regardless of whether h5py is built for hdf5-serial or hdf5-mpi.

If you want to run on 4 cpus, you launch the same program with
  mpirun -n 4 python3 -c 'import h5py'

Then if h5py is built against hdf5-mpi, it handles HDF5 as a multiprocess job.  If h5py here is built with hdf5-serial, it simply runs the same serial job 4 times at the same time.

To reiterate, having h5py built against hdf5-mpi will be transparent to a user interacting with HDF5 as a serial library. It doesn't break serial use; it just adds the capability to also run multi-CPU jobs.
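As a minimal sketch of what the MPI build adds (assuming mpi4py is installed and h5py was built against hdf5-mpi; the filename and dataset name are just examples):

  from mpi4py import MPI
  import h5py

  comm = MPI.COMM_WORLD

  # Collectively open one file across all ranks via the MPI-IO driver
  f = h5py.File('demo.h5', 'w', driver='mpio', comm=comm)

  # All ranks see the same dataset; each rank writes its own slice
  dset = f.create_dataset('ranks', (comm.size,), dtype='i')
  dset[comm.rank] = comm.rank

  f.close()

Run it as "python3 demo.py" and it behaves as a one-process serial job; run it as "mpirun -n 4 python3 demo.py" and the four ranks write the file cooperatively.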

I'd go with this policy in general: codes available as both serial and MPI builds should probably be shipped MPI-enabled by default.

The main reasons not to do so are normally "it drags in MPI" and "it's painful to build", but those are arguments against an end-user having to build all the software themselves; the advantage of Debian is that the stack is available prebuilt for free :-) . Disk space for the MPI libraries is typically not an issue.

At the moment the main exception is NetCDF: serial and parallel NetCDF have orthogonal features. The MPI version provides parallelism, but only the serial version provides compression with I/O (because the parallel I/O writes happen on byte ranges via POSIX, which doesn't combine with compressed, variable-sized chunks). This is changing, though (I'm not sure of the timetable); in the future a parallel version with the full feature set is expected.


How do autotests work for MPI?

We simply configure the test script to invoke the same tests using mpirun.
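For h5py that boils down to something like the following (the process count of 2 is just an example, and it assumes the installed package still exposes its h5py.run_tests() helper and has pytest available):

  mpirun -n 2 python3 -c 'import h5py; h5py.run_tests()'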

This is a bigger issue.  We have test suites that test MPI features without checking MPI processor counts (e.g. the Magics/Metview code). One workaround is to enable oversubscription so the tests can run (inefficiently), though suites that use MPI should really detect the available resources and disable such tests when they are not found. We will always have features in our codes that our build/test systems aren't capable of testing: e.g. pmix is designed to scale to > 100,000 cores. We can't test that :-)
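A rough sketch of that detect-and-skip idea, using mpi4py and pytest (the required rank count of 4 and the test body are purely illustrative):

  import pytest
  from mpi4py import MPI

  # Skip rather than fail when the suite wasn't launched with enough ranks
  @pytest.mark.skipif(MPI.COMM_WORLD.size < 4,
                      reason="needs at least 4 MPI processes")
  def test_parallel_reduce():
      comm = MPI.COMM_WORLD
      # Each rank contributes its rank number; rank 0 checks the sum
      total = comm.reduce(comm.rank, op=MPI.SUM, root=0)
      if comm.rank == 0:
          assert total == sum(range(comm.size))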
Drew

Alastair


--
Alastair McKinstry, <alastair@sceal.ie>, <mckinstry@debian.org>, https://diaspora.sceal.ie/u/amckinstry
Misentropy: doubting that the Universe is becoming more disordered.

