writing data in parallel (question about hyperslabs)

Hello,

I’m trying to do a parallel write to an HDF5 file using HDF5 Version 1.8.6
with OpenMPI Version 1.4.3. A simple Fortran program that demonstrates the
problem I’m having is attached.

The example program attempts to write the “phim” array to an external file.
The “phim” array is a function of position (IM, JM, and KM) and energy
(IGM). The space domain is divided between two processors along the
K-plane: Processor 0 receives K=[1,4], Processor 1 receives K=[5,8]. The
intention is to write a single file containing the combined space domains, with
the dimensions ordered IM, JM, KM, IGM, where the IM index changes most rapidly
and the IGM index changes least rapidly.
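
For reference, the per-processor selection I have in mind looks roughly like
this (a sketch of the idea rather than the attached code; koff and the
dataspace handle names are placeholders I'm using here):

! assuming the usual "use hdf5" setup and a dataset created with a
! (IM, JM, KM, IGM) file dataspace, fspace_id
integer(hsize_t), dimension(4) :: offset, cnt, block
integer :: koff   ! 0 on processor 0, 4 on processor 1

offset = (/ 0, 0, koff, 0 /)
cnt    = (/ 1, 1, 1, 1 /)            ! one block in each dimension
block  = (/ IM, JM, KM/2, IGM /)     ! each processor owns half of the K-planes

! select this processor's region of the file dataspace
call h5sselect_hyperslab_f(fspace_id, H5S_SELECT_SET_F, offset, cnt, ierr, block=block)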

The example code that I've attached doesn't appear to produce any errors, but
the output is clearly incorrect. At IG=34, processor 0 writes its data
correctly, but processor 1 does not, and beyond IG=34 neither processor writes
the desired data.

I suspect that I’m doing something wrong with my hyperslab selection. In
particular, I wonder if my specification of the “block” and “cnt” arrays is
correct. My (non-example) code seems to work properly with a single
processor, but fails with errors when I try splitting up the space domain
between two or more processors.

Any suggestions? Thanks!

Greg

hdfproblem.f90 (2.87 KB)

I solved the problem. In my h5dwrite_f() call I was not passing the optional
"mem_space_id" argument. To fix it, I defined a new memory dataspace as:

call h5screate_simple_f(rank, block*cnt, mspace_id, ierr)

and then passed "mspace_id" as the mem_space_id argument in the h5dwrite_f()
call. This makes the program behave as expected. (The updated source code is
attached. I also deleted the if-statements around the attribute writes, which
makes them work correctly.)
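
With that, the write becomes roughly the following (again a sketch rather than
the attached file; the handle names, the double-precision datatype, and the
transfer property list are assumptions on my part):

! memory dataspace describing just the local portion of phim on this processor
call h5screate_simple_f(rank, block*cnt, mspace_id, ierr)

! select this processor's region of the file dataspace
call h5sselect_hyperslab_f(fspace_id, H5S_SELECT_SET_F, offset, cnt, ierr, block=block)

! pass both the memory and file dataspaces explicitly
call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, phim, block*cnt, ierr, &
                mem_space_id=mspace_id, file_space_id=fspace_id, xfer_prp=plist_id)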

My understanding of why it works now is that, since a memory dataspace wasn't
supplied, h5dwrite_f defaulted to using the file dataspace (and its hyperslab
selection) for the memory buffer as well. With the offsets used for each
processor, that selection skips roughly half of the in-memory data on each
processor, which matches the observed symptom.
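
In other words, the original call was effectively (my paraphrase, with the same
placeholder handle names as above):

! no mem_space_id: the file dataspace and its offset hyperslab selection are
! applied to the memory buffer too, so the wrong elements of phim get written
call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, phim, block*cnt, ierr, &
                file_space_id=fspace_id, xfer_prp=plist_id)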

Greg

hdfproblem.f90 (2.92 KB)