Dear HDF developers,
I have stumbled upon a severe performance bug in H5Ocopy when using
parallel HDF5. Please see the attached test programs to reproduce the issue.
In my MPI program I achieve collective write speeds of 2000 MB/s from
16 nodes on a GPFS filesystem, so parallel HDF5 is working fine in
general. However, when copying datasets between two parallel files,
the copy time increases roughly linearly with the number of nodes.
In the following, each test was repeated 10 times and the smallest time
was chosen. The environment was Parallel HDF5 1.8.14, Intel MPI 4.1.2.040,
GPFS 3.5.0 and CentOS 6.4 on Linux x86_64.
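
For reference, the core of both test programs follows roughly the
pattern sketched below (this is only a minimal sketch; the attached
files contain the exact code, and the file and dataset names here are
illustrative):

    #include <hdf5.h>
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Open source and destination files collectively via the MPI-IO driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t src = H5Fopen("source.h5", H5F_ACC_RDONLY, fapl);  /* contains "dset" */
        hid_t dst = H5Fcreate("dest.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Time the (collective) H5Ocopy call. */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        H5Ocopy(src, "dset", dst, "dset", H5P_DEFAULT, H5P_DEFAULT);
        MPI_Barrier(MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("copy time: %.3f s\n", t1 - t0);

        H5Fclose(src);
        H5Fclose(dst);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }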
Consider first a small compact dataset (32K):
# mpirun -np 1 -ppn 1 ./h5copy_mpio_compact
# mpirun -np 2 -ppn 1 ./h5copy_mpio_compact
# mpirun -np 4 -ppn 1 ./h5copy_mpio_compact
# mpirun -np 8 -ppn 1 ./h5copy_mpio_compact
# mpirun -np 16 -ppn 1 ./h5copy_mpio_compact
The copy time remains constant as the number of MPI nodes grows. The
dataset has a compact layout, so it consists purely of metadata. This
test indicates that metadata copying is working fine.
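
For completeness, the compact dataset is created along these lines
when the source file is first written (a sketch with illustrative
names; src is the source file handle, opened for writing at that
point, and 32K fits comfortably under the 64K compact-layout limit):

    /* 32K of raw data, stored directly in the object header (compact layout). */
    hsize_t dims[1] = { 32768 / sizeof(int) };
    hid_t   space   = H5Screate_simple(1, dims, NULL);

    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_layout(dcpl, H5D_COMPACT);

    hid_t dset = H5Dcreate2(src, "dset", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);
    /* ... H5Dwrite(), then H5Dclose()/H5Sclose()/H5Pclose() ... */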
Now consider a larger contiguous dataset (32M):
# mpirun -np 1 -ppn 1 ./h5copy_mpio
# mpirun -np 2 -ppn 1 ./h5copy_mpio
# mpirun -np 4 -ppn 1 ./h5copy_mpio
# mpirun -np 8 -ppn 1 ./h5copy_mpio
# mpirun -np 16 -ppn 1 ./h5copy_mpio
The copy time increases roughly linearly with the number of MPI nodes,
even though the size of the raw data being copied is the same for all
cases. Could it be that all processes are trying to write the same raw
data to the destination object, causing serious write contention?
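
The larger dataset is created in much the same way, only with a
contiguous layout and a collective raw-data write, roughly along these
lines (again a simplified sketch with illustrative names; the attached
file has the exact code):

    /* 32M contiguous dataset; each rank writes its own hyperslab collectively. */
    hsize_t dims[1] = { 32UL * 1024 * 1024 / sizeof(int) };
    hid_t   space   = H5Screate_simple(1, dims, NULL);

    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_layout(dcpl, H5D_CONTIGUOUS);

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    hid_t dset = H5Dcreate2(src, "dset", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);
    /* ... select per-rank hyperslabs, then
       H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, buf) ... */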
I would expect that, while all processes copy the metadata into their
respective metadata caches, only one process copies the raw data to
the output file. However, while trying to understand the source code
of H5Ocopy, I could not find any special handling of the MPIO case.
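
To illustrate what I mean: a manual workaround along these lines,
where only rank 0 performs the copy through serially reopened file
handles, would be the behaviour I effectively expect from H5Ocopy
itself (just a sketch, not verified as a fix; it assumes all ranks
have already closed their parallel handles to both files, and the
names are illustrative):

    /* All ranks have closed their MPI-IO handles to source.h5 and dest.h5. */
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) {
        hid_t s = H5Fopen("source.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t d = H5Fopen("dest.h5",   H5F_ACC_RDWR,   H5P_DEFAULT);
        H5Ocopy(s, "dset", d, "dset_copy", H5P_DEFAULT, H5P_DEFAULT);
        H5Fclose(s);
        H5Fclose(d);
    }
    MPI_Barrier(MPI_COMM_WORLD);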
Can you reproduce the issue on your parallel filesystem?
Which part of H5Ocopy might be causing the issue?
h5copy_mpio.c (1.35 KB)
h5copy_mpio_compact.c (1.44 KB)