Parallel dataset creation hanging

Hi folks,

I’m attempting to set up an HDF5 file for parallel writing. The issue I’m running into is that dataset creation hangs when the program is run with more than one MPI rank.

Here’s the code in question:

// Create a file-access property list and configure it for MPI-IO
const hid_t plist_id = H5Pcreate(H5P_FILE_ACCESS);

// Query the MPI communicator (mpi_size and mpi_rank are declared elsewhere)
MPI_Comm comm = MPI_COMM_WORLD;
MPI_Info info = MPI_INFO_NULL;
MPI_Comm_size(comm, &mpi_size);
MPI_Comm_rank(comm, &mpi_rank);
H5Pset_fapl_mpio(plist_id, comm, info);

// Create the file
_file_id = H5Fcreate(filename.c_str(), H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
H5Pclose(plist_id);


// Set up the dataset creation property list with chunking and compression
const auto dset_plist = H5Pcreate(H5P_DATASET_CREATE);
H5Pset_chunk(dset_plist, Dimensions, Chunks->data());
H5Pset_deflate(dset_plist, 6);

const auto filespace = H5Screate_simple(Dimensions, _dimensions.data(), nullptr);

// Create the dataset (this is the call that hangs with more than one rank)
dset_id = H5Dcreate2(_file_id, dsetname.c_str(), H5T_NATIVE_INT, filespace, H5P_DEFAULT, dset_plist,
                     H5P_DEFAULT);
H5Sclose(filespace);
H5Pclose(dset_plist);

Everything works fine when running under OpenMPI with -np 1, but with more than one rank the program hangs after the call to H5Dcreate2. I’ve tried to follow the existing examples as closely as possible, but I’m sure I’m missing something.

The MPI environment is initialized before this section of code runs, and Chunks is a template parameter holding an array of chunk dimensions (one extent per dataset dimension).
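
In case it helps, here’s a simplified sketch of the surrounding setup. The real code lives inside a templated writer class, so the names and sizes below (Dimensions, the chunk extents, the file and dataset names) are just illustrative stand-ins:

#include <array>
#include <string>
#include <hdf5.h>
#include <mpi.h>

int main(int argc, char** argv) {
    // MPI is initialized once, up front, before any HDF5 calls
    MPI_Init(&argc, &argv);

    int mpi_size = 0;
    int mpi_rank = 0;

    // Illustrative values only; in the real code Dimensions and the chunk
    // extents come in as template parameters
    constexpr int Dimensions = 2;
    std::array<hsize_t, Dimensions> chunk_dims{64, 64};
    auto* Chunks = &chunk_dims;

    // Full extent of the dataset
    std::array<hsize_t, Dimensions> _dimensions{128, 128};

    const std::string filename = "parallel_test.h5";
    const std::string dsetname = "dset";
    hid_t _file_id = -1;
    hid_t dset_id = -1;

    // ... the snippet shown above runs here ...

    MPI_Finalize();
    return 0;
}

I launch it with a plain mpirun -np <N> invocation.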

Environment: HDF5 master branch
OS: macOS 10.13.6
Compiler: Apple LLVM version 10.0.0 (clang-1000.11.45.5)
OpenMPI: 4.0.4

Any help would be greatly appreciated.

Thanks,
Nick Robison