slices in a 3D domain: null dims?

Hello!

Here is my context: I compute velocities on a 3D mesh. The mesh is divided into subdomains (cubes), one per MPI job. I can dump the whole domain thanks to the parallel HDF5 routines.

But I would also like to dump a slice of this domain via parallel HDF5 (I mean, I want to use a collective write). In this case the slice intersects only a few subdomains, so for the other subdomains/MPI jobs the dimensions of their hyperslab part are zero.

For collective I/O, every MPI job must go through the CALL h5dwrite_f(..., ..., data, dims, ...). Some MPI jobs (in fact, a lot of them!) have data and dims equal to zero.
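For reference, here is roughly the call sequence every rank goes through (a trimmed sketch, not my actual code; the names write_slice_piece, buf, slice_dims and offset are just placeholders):

SUBROUTINE write_slice_piece(dset_id, buf, slice_dims, offset)
  USE HDF5
  IMPLICIT NONE
  INTEGER(HID_T),   INTENT(IN) :: dset_id
  DOUBLE PRECISION, INTENT(IN) :: buf(:,:,:)
  INTEGER(HSIZE_T), INTENT(IN) :: slice_dims(3), offset(3)
  INTEGER(HID_T) :: fspace_id, mspace_id, plist_id
  INTEGER        :: hdferr

  ! Memory dataspace for this rank's piece of the slice; on ranks that
  ! do not intersect the slice, slice_dims is all zeros and this call
  ! is where the error appears.
  CALL h5screate_simple_f(3, slice_dims, mspace_id, hdferr)

  ! Matching hyperslab selection in the file dataspace.
  CALL h5dget_space_f(dset_id, fspace_id, hdferr)
  CALL h5sselect_hyperslab_f(fspace_id, H5S_SELECT_SET_F, offset, slice_dims, hdferr)

  ! Collective transfer: every MPI rank must reach this h5dwrite_f.
  CALL h5pcreate_f(H5P_DATASET_XFER_F, plist_id, hdferr)
  CALL h5pset_dxpl_mpio_f(plist_id, H5FD_MPIO_COLLECTIVE_F, hdferr)
  CALL h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, buf, slice_dims, hdferr, &
                  mem_space_id=mspace_id, file_space_id=fspace_id, xfer_prp=plist_id)

  CALL h5pclose_f(plist_id, hdferr)
  CALL h5sclose_f(mspace_id, hdferr)
  CALL h5sclose_f(fspace_id, hdferr)
END SUBROUTINE write_slice_piece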

My code crashes! HDF5 complains:

"H5Screate_simple(): zero sized dimension for non-unlimited dimension"

How can I handle this without creating a smaller communicator?

I can post my code if that would help!
Thanks!

Stephane

Hi Stephane,

On Jun 15, 2011, at 4:44 PM, Stéphane Backaert wrote:

···

  I believe that we have addressed this limitation with the 1.8.7 release. Can you give that a try?

  Quincey

Yes, it has been addressed!
Unfortunately, the HDF5 library on the cluster I have to use is older. The workaround is to add something like "if this rank has no data, call h5sselect_none on fspace_id and mspace_id" just before the h5dwrite call.
I found this trick on this forum...
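In case it helps someone else, here is a minimal sketch of that trick, reusing the placeholder names from the sketch above (mspace_id, fspace_id, slice_dims, ...); for the empty ranks the dataspaces still have to be created with non-zero dims (e.g. 1) before selecting nothing in them:

IF (ANY(slice_dims == 0)) THEN
   ! This rank holds no part of the slice: keep valid (non-zero sized)
   ! dataspaces but make both selections empty.
   CALL h5sselect_none_f(mspace_id, hdferr)
   CALL h5sselect_none_f(fspace_id, hdferr)
ELSE
   CALL h5sselect_hyperslab_f(fspace_id, H5S_SELECT_SET_F, offset, slice_dims, hdferr)
END IF

! Every rank, including those with an empty selection, still takes part
! in the collective h5dwrite_f.
CALL h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, buf, slice_dims, hdferr, &
                mem_space_id=mspace_id, file_space_id=fspace_id, xfer_prp=plist_id)

With both selections empty, those ranks contribute zero bytes but still participate in the collective call, so no separate communicator is needed.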

Thanks!

Stephane

On 20 June 2011, at 22:55, Quincey Koziol wrote:

···
