writing one dataset per processor using MPI with HDF5

I distributed my workload among multiple CPUs (using Open MPI). Now I need to write an array of doubles to a file, and I chose to use HDF5. So I create a file, then try to create one dataset per CPU, and then close the dataset and the file. My code works fine with one CPU. With multiple CPUs, it just stops when it reaches the part that closes the file.

MPI_Info info = MPI_INFO_NULL;
herr_t status_hdf5;
hsize_t dimension_hdf5[1];
int rank_hdf5{1};

hid_t PList_ID = H5Pcreate(H5P_FILE_ACCESS);
H5Pset_fapl_mpio(PList_ID, MPI_COMM_WORLD, info);
hid_t file_id_H = H5Fcreate("hdf5_temp.h5", H5F_ACC_TRUNC, H5P_DEFAULT, PList_ID);

hid_t space_id_H = H5Screate_simple(rank_hdf5, dimension_hdf5, NULL);

std::stringstream data_set_name;
data_set_name << "data_set_" << mpi.rank;
hid_t data_set_id_H = H5Dcreate(file_id_H, data_set_name.str().c_str(), H5T_NATIVE_DOUBLE, space_id_H,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

status_hdf5 = H5Dclose(data_set_id_H);
status_hdf5 = H5Sclose(space_id_H);
status_hdf5 = H5Pclose(PList_ID);
if (status_hdf5 < 0){
    std::cout << "error in closing file";
}else{
    std::cout << "success";
}
status_hdf5 = H5Fclose(file_id_H);

Hi @mehdi.h.jenab, in parallel HDF5 any operation that modifies file metadata (H5Dcreate in this case) needs to be called by all MPI ranks with the same arguments. What this means for your example is that you will need to make multiple calls to H5Dcreate, one per MPI rank's dataset, on all MPI ranks. Essentially you'd just loop over the dataset names "data_set_0", "data_set_1" and so on, with all MPI ranks calling H5Dcreate for each one.
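Assuming each rank writes N doubles, a minimal sketch of that loop could look like the following (mpi_rank, mpi_size, N and dset_ids are placeholder names I'm introducing; file_id_H is the file handle from your snippet):

int mpi_rank, mpi_size;
MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);

hsize_t dims[1] = {N};                      // N = number of doubles per rank (placeholder)
hid_t space_id = H5Screate_simple(1, dims, NULL);

std::vector<hid_t> dset_ids(mpi_size);      // needs <vector> and <sstream>
for (int r = 0; r < mpi_size; ++r) {
    // Every rank executes every iteration, so each H5Dcreate call is
    // collective and made with identical arguments on all ranks.
    std::stringstream name;
    name << "data_set_" << r;
    dset_ids[r] = H5Dcreate(file_id_H, name.str().c_str(), H5T_NATIVE_DOUBLE,
                            space_id, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
}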

Once those datasets have been created, you can then have a single MPI rank write to a particular dataset individually, or you can even close them and later on have a single MPI rank open a dataset individually and then write to it, whichever works best for you.
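Continuing that sketch, each rank would then write only the dataset that matches its own rank (local_data stands in for your buffer of doubles); since every rank holds every dataset handle, every rank also closes all of them:

status_hdf5 = H5Dwrite(dset_ids[mpi_rank], H5T_NATIVE_DOUBLE,
                       H5S_ALL, H5S_ALL, H5P_DEFAULT, local_data);  // independent write by one rank

for (int r = 0; r < mpi_size; ++r) {
    status_hdf5 = H5Dclose(dset_ids[r]);    // close every handle on every rank
}
status_hdf5 = H5Sclose(space_id);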

Another possible approach is to just have MPI rank 0 open the file (try using H5Pset_fapl_mpio(PList_ID, MPI_COMM_SELF, info)) and create all those datasets, then close the file and re-open it on all MPI ranks for individual writing.
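A rough sketch of that variant (again with N and local_data as placeholders) might be:

if (mpi_rank == 0) {
    // Rank 0 alone creates the file and all of the per-rank datasets.
    hid_t fapl_self = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl_self, MPI_COMM_SELF, MPI_INFO_NULL);
    hid_t fid = H5Fcreate("hdf5_temp.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl_self);

    hsize_t dims[1] = {N};
    hid_t sid = H5Screate_simple(1, dims, NULL);
    for (int r = 0; r < mpi_size; ++r) {
        std::stringstream name;
        name << "data_set_" << r;
        hid_t did = H5Dcreate(fid, name.str().c_str(), H5T_NATIVE_DOUBLE, sid,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        H5Dclose(did);
    }
    H5Sclose(sid);
    H5Pclose(fapl_self);
    H5Fclose(fid);
}
MPI_Barrier(MPI_COMM_WORLD);   // make sure the file exists before anyone reopens it

// All ranks reopen the file with MPI_COMM_WORLD; each opens and writes only its own dataset.
hid_t fapl_world = H5Pcreate(H5P_FILE_ACCESS);
H5Pset_fapl_mpio(fapl_world, MPI_COMM_WORLD, MPI_INFO_NULL);
hid_t fid = H5Fopen("hdf5_temp.h5", H5F_ACC_RDWR, fapl_world);

std::stringstream name;
name << "data_set_" << mpi_rank;
hid_t did = H5Dopen(fid, name.str().c_str(), H5P_DEFAULT);
H5Dwrite(did, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, local_data);
H5Dclose(did);
H5Pclose(fapl_world);
H5Fclose(fid);

Either way, the key point is the same: creating datasets is a collective, metadata-modifying operation, while the raw data writes themselves can be done independently by a single rank.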

Here’s a link to a table of HDF5 API routines that have collective calling requirements in parallel HDF5.
