some values not read from chunked dataset


I have an HDF5 file with a 129x128x128 dataset that is chunked in
32x32x32 blocks. When I try to read the dataset in parallel, from a
small Fortran 90 program running with 8 MPI ranks, it seems that some
values get skipped. Attached is a small program demonstrating the
problem, as well as the HDF5 file in question. I've tested on Linux,
with GCC 5.4.0 and HDF5 1.10.0-patch1, built against OpenMPI 2.0.1.
To build the program:
   mpifort -o foo foo.f90 -lhdf5_fortran -lhdf5
Then, unpack the archive with the HDF5 file:
   tar xfv foo.tgz
and then run the program:
   mpirun -np 8 foo

All values in the HDF5 file are set to 0, while the array in the
program is initialized to very large values. Then the dataset is read
in parallel. Note that there is a small overlap in reading between the
MPI ranks; this is because the actual problem occurred in a finite
difference code, where the overlap comes from halo regions. After
reading the values from the HDF5 file, the program checks whether all
array elements are set to 0, and reports any elements that still hold
the very large initial values. Quite a few elements get reported with
wrong values when I run the code. However, if I repack the HDF5 file
so that chunking is removed:
   h5repack -l CONTI foo.h5 bar.h5 && mv bar.h5 foo.h5
then all values are read properly.
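To make the access pattern concrete, here is a minimal Python sketch
of the kind of overlapping selection described above. It assumes the 8
ranks are arranged in a 2x2x2 grid over the 129x128x128 dataset, with
a one-cell halo overlap between neighbouring ranks; the grid shape and
halo width are illustrative assumptions, and the actual Fortran
program may decompose the domain differently. Each per-rank selection
here is a valid (rectangular) hyperslab, and together they cover every
cell of the dataset, so a correct parallel read should touch every
element at least once:

```python
# Assumed parameters (not taken from the attached program):
DIMS = (129, 128, 128)   # dataset extent
GRID = (2, 2, 2)         # assumed rank decomposition, 8 ranks total
HALO = 1                 # assumed halo (overlap) width

def axis_range(d, c):
    """Half-open index range [lo, hi) read along dimension d by the
    rank at grid coordinate c, extended by the halo where a
    neighbouring rank exists."""
    base = DIMS[d] // GRID[d]
    lo = c * base
    # the last rank along a dimension absorbs the remainder (129 = 64 + 65)
    hi = DIMS[d] if c == GRID[d] - 1 else lo + base
    if c > 0:
        lo -= HALO           # overlap with the lower neighbour
    if c < GRID[d] - 1:
        hi += HALO           # overlap with the upper neighbour
    return lo, hi

def rank_slab(coords):
    """Hyperslab (start, count) read by the rank at grid position coords."""
    start, count = [], []
    for d in range(3):
        lo, hi = axis_range(d, coords[d])
        start.append(lo)
        count.append(hi - lo)
    return tuple(start), tuple(count)

# Sanity check: along each dimension, the union of the per-rank ranges
# covers the whole extent, so the 3D selections cover the whole dataset.
for d in range(3):
    covered = set()
    for c in range(GRID[d]):
        lo, hi = axis_range(d, c)
        covered.update(range(lo, hi))
    assert covered == set(range(DIMS[d]))

# Adjacent ranks overlap, e.g. along dimension 0:
# axis_range(0, 0) -> (0, 65) and axis_range(0, 1) -> (63, 129),
# so indices 63 and 64 are read by both ranks.
```

Since the per-rank selections are plain rectangular hyperslabs that
jointly cover the dataset, the overlap itself should be harmless; the
only variable that changes between the failing and working runs is the
chunked versus contiguous layout of the file.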

I tried to find out whether there are any limitations on selections
when the dataset is chunked in the HDF5 file, but was not able to
confirm that this could be the issue here. Otherwise, I really have no
idea what's wrong. Any suggestions?


foo.f90 (4.1 KB)

foo.tgz (20.8 KB)