Dask and parallel HDF5 writing


In my code I save multiple processed images (numpy arrays) in parallel to an HDF5 file using MPI (mpi4py/h5py). To do that, the file needs to be opened with the driver='mpio' option.

import h5py
from mpi4py import MPI

# Open the file collectively across all MPI ranks using the parallel driver
file_hdl = h5py.File(file_path, 'r+', driver='mpio', comm=MPI.COMM_WORLD)

I would like to move away from MPI and use Dask for parallelization. Is it possible to use parallel HDF5 with Dask? Do I still need to rely on MPI? If so, is there a better way to store the data? Thanks


1 Answer

Answered by MRocklin (best answer):

This is a hard and complex question.

Generally HDF5 is highly optimized for parallel MPI reads and writes. It is hard to get the same level of support outside of MPI.

Additionally, this question is hard because people use Dask and HDF5 differently: some use multiple threads in the same process (h5py is not threadsafe), while others use multiple processes on the same hard drive, or multiple computers over a network file system. Users also often spread data across several HDF5 files, for example one file per day; a sketch of that pattern follows.
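For instance, the one-file-per-day layout is commonly handled by wrapping each file's dataset as a dask array and concatenating them into one logical array. A minimal sketch, assuming hypothetical file names and a '/data' dataset of images in each file:

import h5py
import dask.array as da

# Hypothetical layout: one file per day, each with a '/data' dataset
# of shape (n_images, 512, 512)
files = [h5py.File('day-%d.h5' % i, 'r') for i in range(7)]
arrays = [da.from_array(f['/data'], chunks=(1, 512, 512)) for f in files]

# Concatenate along the first axis into a single logical dask array
x = da.concatenate(arrays, axis=0)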

Dask generally handles parallel reads and writes to HDF5 by using locks. If you are in a single process, this is a normal threading.Lock object. Typically this doesn't affect performance much, because reading from HDF5 files is often I/O bound rather than CPU bound. There is a bit of contention, but it's not much to worry about.
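For example, da.from_array accepts a lock argument; a minimal sketch with a hypothetical file and dataset name:

import h5py
import dask.array as da

f = h5py.File('images.h5', 'r')  # hypothetical file name

# lock=True guards every chunk read with a shared threading.Lock,
# since h5py objects are not threadsafe
x = da.from_array(f['/data'], chunks=(1, 512, 512), lock=True)

result = x.mean(axis=0).compute()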

In a distributed setting we use serializable locks, which protect against multi-threaded concurrent access within any particular process but don't stop two processes from colliding with each other. Usually this isn't a problem: colliding on reads is fine as long as you're not in the same process, and people usually write cohesive chunks that align with the HDF5 chunks on disk.
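Writing follows the same idea. A minimal single-machine sketch (hypothetical shapes and file name): create the target dataset with HDF5 chunks that match the dask chunks, then let da.store serialize access through a lock:

import h5py
import dask.array as da

x = da.random.random((10, 512, 512), chunks=(1, 512, 512))

with h5py.File('processed.h5', 'w') as f:
    # HDF5 chunks chosen to match the dask chunks, so each task
    # writes a cohesive, non-overlapping region
    dset = f.create_dataset('images', shape=x.shape, dtype=x.dtype,
                            chunks=(1, 512, 512))
    # lock=True protects the non-threadsafe h5py object within a process
    da.store(x, dset, lock=True)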

People happily use HDF5 in parallel with Dask.array every day. However, I'm not confident that everything is foolproof. I suspect that it would be possible to engineer a breaking case.

(Also, this particular aspect is evolving quickly, so this answer may soon become out of date.)

See also: https://github.com/pydata/xarray/issues/798