Parallel IO in HDF5

Hello All,

I am getting an error whose cause is not obvious. I am trying to write an HDF5
file in parallel. Although I am able to create the file and dataset collectively,
I get an error when I call the h5dwrite_f subroutine. This is the error I get:

[cli_1]: aborting job:
Fatal error in MPI_Type_free: Invalid datatype, error stack:
MPI_Type_free(145): MPI_Type_free(datatype_p=0x125b840) failed
MPI_Type_free(96).: Cannot free permanent data type
[cli_0]: aborting job:
Fatal error in MPI_Type_free: Invalid datatype, error stack:
MPI_Type_free(145): MPI_Type_free(datatype_p=0x130b250) failed
MPI_Type_free(96).: Cannot free permanent data type
[cli_2]: aborting job:
Fatal error in MPI_Type_free: Invalid datatype, error stack:
MPI_Type_free(145): MPI_Type_free(datatype_p=0x125d420) failed
MPI_Type_free(96).: Cannot free permanent data type
HDF5: infinite loop closing library
     
D,G,S,T,D,S,F,D,G,S,T,F,AC,FD,P,FD,P,FD,P,E,E,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL
HDF5: infinite loop closing library
     
D,G,S,T,D,S,F,D,G,S,T,F,AC,FD,P,FD,P,FD,P,E,E,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL

Has anybody encountered a similar error, or has any idea what the reason for
this is?

One more small question I have about PHDF5 (this may be a silly one, but I am
new to PHDF5):

None of the library calls takes the MPI communicator as a parameter, so how
does HDF5 decide which processes to use for each call? I run the application
on 6 processes, and only 3 of them (in communicator 'new_comm') need to write
to the file, so I am wondering whether this can be a problem.
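
For context, here is a stripped-down sketch of what I am doing. The dataset
name, sizes, and variable names below are placeholders rather than my actual
code, and the real 'new_comm' is a 3-process sub-communicator built elsewhere:

PROGRAM phdf5_sketch
  USE HDF5
  IMPLICIT NONE
  INCLUDE 'mpif.h'

  INTEGER(HID_T)   :: file_id, dset_id, filespace, memspace, fapl_id, dxpl_id
  INTEGER(HSIZE_T) :: dims(1) = (/30/)          ! global size: placeholder
  INTEGER(HSIZE_T) :: offset(1), count(1)
  INTEGER          :: buf(10)                   ! 10 elements per rank: placeholder
  INTEGER          :: new_comm, rank, ierr, hdferr

  CALL MPI_Init(ierr)
  new_comm = MPI_COMM_WORLD                     ! placeholder for the real sub-communicator
  CALL MPI_Comm_rank(new_comm, rank, ierr)
  buf = rank

  CALL h5open_f(hdferr)

  ! File created collectively by every rank of new_comm via an MPI-IO access list.
  CALL h5pcreate_f(H5P_FILE_ACCESS_F, fapl_id, hdferr)
  CALL h5pset_fapl_mpio_f(fapl_id, new_comm, MPI_INFO_NULL, hdferr)
  CALL h5fcreate_f('sketch.h5', H5F_ACC_TRUNC_F, file_id, hdferr, access_prp = fapl_id)
  CALL h5pclose_f(fapl_id, hdferr)

  ! Dataset also created collectively.
  CALL h5screate_simple_f(1, dims, filespace, hdferr)
  CALL h5dcreate_f(file_id, 'data', H5T_NATIVE_INTEGER, filespace, dset_id, hdferr)

  ! Each rank selects its own hyperslab and writes collectively -- the h5dwrite_f
  ! call below is where the MPI_Type_free abort shows up.
  offset(1) = rank * 10
  count(1)  = 10
  CALL h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, count, hdferr)
  CALL h5screate_simple_f(1, count, memspace, hdferr)
  CALL h5pcreate_f(H5P_DATASET_XFER_F, dxpl_id, hdferr)
  CALL h5pset_dxpl_mpio_f(dxpl_id, H5FD_MPIO_COLLECTIVE_F, hdferr)
  CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, buf, count, hdferr, &
                  mem_space_id = memspace, file_space_id = filespace, xfer_prp = dxpl_id)

  CALL h5pclose_f(dxpl_id, hdferr)
  CALL h5sclose_f(memspace, hdferr)
  CALL h5sclose_f(filespace, hdferr)
  CALL h5dclose_f(dset_id, hdferr)
  CALL h5fclose_f(file_id, hdferr)
  CALL h5close_f(hdferr)
  CALL MPI_Finalize(ierr)
END PROGRAM phdf5_sketch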

Thank you.

Regards,
Nikhil

----------------------------------------------------------------------
This mailing list is for HDF software users discussion.
To subscribe to this list, send a message to hdf-forum-subscribe@hdfgroup.org.
To unsubscribe, send a message to hdf-forum-unsubscribe@hdfgroup.org.

Hi Nikhil,

On Jun 27, 2008, at 9:53 AM, Nikhil Laghave wrote:

> Hello All,

> I am getting an error whose cause is not obvious. I am trying to write an HDF5
> file in parallel. Although I am able to create the file and dataset collectively,
> I get an error when I call the h5dwrite_f subroutine. This is the error I get:

> [MPI_Type_free error stack and "HDF5: infinite loop closing library" output
> snipped; it is identical to the output quoted in full above]

> Has anybody encountered a similar error, or has any idea what the reason for
> this is?

  I'm not certain about how this happens, sorry...

> One more small question I have about PHDF5 (this may be a silly one, but I am
> new to PHDF5):

> None of the library calls takes the MPI communicator as a parameter, so how
> does HDF5 decide which processes to use for each call? I run the application
> on 6 processes, and only 3 of them (in communicator 'new_comm') need to write
> to the file, so I am wondering whether this can be a problem.

  HDF5 makes a copy of the communicator used to open the file and uses that for all I/O. Since MPI I/O requires a communicator when opening a file, the processes from that communicator must be the ones that perform I/O.
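
  So, as an illustrative sketch (the names 'new_comm', 'fapl_id', 'file_id', and
'hdferr' below are placeholders, not taken from your code), only the ranks of
'new_comm' should open the file, and all of them must then take part in every
collective call on it:

! Only ranks that belong to new_comm execute this; HDF5 duplicates the
! communicator internally when the file is created or opened.
CALL h5pcreate_f(H5P_FILE_ACCESS_F, fapl_id, hdferr)
CALL h5pset_fapl_mpio_f(fapl_id, new_comm, MPI_INFO_NULL, hdferr)
CALL h5fcreate_f('out.h5', H5F_ACC_TRUNC_F, file_id, hdferr, access_prp = fapl_id)
CALL h5pclose_f(fapl_id, hdferr)
! Every subsequent collective operation on file_id (h5dcreate_f, collective
! h5dwrite_f, h5fclose_f, ...) must be called by all ranks of new_comm and
! by no rank outside it.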

  Quincey
