I'm updating an MPI-parallel code that does serial HDF5 output (data gathered to rank 0 before being written) to use parallel HDF5. In certain cases the code will write a 0-sized array, i.e., an array where one of the dimensions is 0. This has worked fine in serial: the H5Dwrite call has no issues. But in parallel, H5Dwrite throws an error:
H5Dio.c line 234 in H5Dwrite(): can't prepare for writing data
In that call, the xfer_plist_id argument is a transfer property list that has been set with:
H5Pset_dxpl_mpio(xfer_plist_id, H5FD_MPIO_COLLECTIVE)
Is this a known issue with parallel HDF5?
I've experimented, and it seems that skipping the H5Dwrite call in the case of a 0-sized array works. Is that a legitimate thing to do? From a naive user's perspective (mine), that call is a no-op, though I don't know how else it might be altering the file (metadata?).
For background, the code does not use HDF5 directly, but indirectly
through third party libraries (one serial, and a different one parallel).
So I'm debugging code I have little understanding of.
Thanks for any advice.
-Neil
Hi Neil,
Is the code using a 0-sized array because a process does not have data to write?
We have an FAQ with examples of how to write data collectively and independently when one process does not have data
or does not need to write data. See:
https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata
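For reference, the pattern that FAQ describes looks roughly like the following. This is a minimal sketch in C, not a tested implementation; `dset`, `local_n`, `local_offset`, and `buf` are assumed to be set up by the caller, and the dataset is assumed to be 1-D for simplicity. The key idea is that every rank still makes the collective H5Dwrite call, but a rank with no data makes an empty selection in both dataspaces:

```c
#include "hdf5.h"

/* Collective write where some (or even all) ranks may have zero
   elements. Ranks with no data select nothing in both the memory
   and file dataspaces but still participate in the collective call. */
void write_collective(hid_t dset, hsize_t local_n, hsize_t local_offset,
                      const double *buf)
{
    hid_t filespace = H5Dget_space(dset);
    hid_t memspace  = H5Screate_simple(1, &local_n, NULL);

    if (local_n > 0) {
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET,
                            &local_offset, NULL, &local_n, NULL);
    } else {
        /* Empty selection: this rank contributes no data. */
        H5Sselect_none(filespace);
        H5Sselect_none(memspace);
    }

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl);
    H5Sclose(memspace);
    H5Sclose(filespace);
}
```

The collective requirement is why skipping H5Dwrite on only some ranks would hang or fail; the empty-selection route lets all ranks stay in lockstep.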
-Barbara
help@hdfgroup.org
···
From: Hdf-forum [mailto:hdf-forum-bounces@lists.hdfgroup.org] On Behalf Of Carlson, Neil
Sent: Monday, February 20, 2017 11:46 AM
To: hdf-forum@lists.hdfgroup.org
Subject: [Hdf-forum] Parallel HDF5 and 0-sized arrays
Yes, it is 0-sized because a process has nothing to write. I'll certainly take a look at your link; however, I've found that things appear to work just fine doing nothing special, i.e., making all the same HDF5 calls one would make if the array had a non-zero size. (I may find this is not the proper way to do things after reading your link.) The error only occurs if the array is 0-sized on every process.
Note that the client code is Fortran, where 0-sized arrays are quite natural, and we are interfacing it to the HDF5 C library.
Thanks for your reply, Barbara.
-Neil
···
From: Hdf-forum <hdf-forum-bounces@lists.hdfgroup.org> on behalf of Barbara Jones <bljones@hdfgroup.org>
Sent: Friday, March 10, 2017 12:20 PM
To: HDF Users Discussion List
Subject: Re: [Hdf-forum] Parallel HDF5 and 0-sized arrays