Good morning Rob,
First of all, I would like to thank the HDF community for the help and support. As I said, I really need to find a solution to this problem: I have already adopted parallel HDF5 as the file format for my application, and I am surprised by the poor performance of scattered reads and writes. I think the overhead is in the HDF5 layer. In fact, one of my colleagues is using MPI-2 for scattered reads and writes, and the performance is very good. If it would help, I can try to write a simple program using MPI-2.
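For instance, the core of such a program would look something like the sketch below (untested; the element offsets, count, and file name are placeholders, not my application's real data). Each rank describes all of its scattered targets in one derived datatype, sets that as the file view, and then issues a single collective write:

    program mpi_scatter_write
      use mpi
      implicit none
      integer, parameter :: nelem = 5
      ! Placeholder 0-based element offsets; the real code would
      ! compute a different list on every rank.
      integer :: offsets(nelem) = (/ 2, 7, 11, 20, 31 /)
      double precision :: buf(nelem)
      integer :: fh, filetype, ierr
      integer :: status(MPI_STATUS_SIZE)
      integer(kind=MPI_OFFSET_KIND) :: disp

      call MPI_Init(ierr)
      buf = 1.0d0

      ! One derived datatype describes every scattered target location.
      call MPI_Type_create_indexed_block(nelem, 1, offsets, &
           MPI_DOUBLE_PRECISION, filetype, ierr)
      call MPI_Type_commit(filetype, ierr)

      call MPI_File_open(MPI_COMM_WORLD, 'scatter.dat', &
           MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)

      ! Install the datatype as the file view, then do ONE collective write.
      disp = 0
      call MPI_File_set_view(fh, disp, MPI_DOUBLE_PRECISION, filetype, &
           'native', MPI_INFO_NULL, ierr)
      call MPI_File_write_all(fh, buf, nelem, MPI_DOUBLE_PRECISION, &
           status, ierr)

      call MPI_File_close(fh, ierr)
      call MPI_Type_free(filetype, ierr)
      call MPI_Finalize(ierr)
    end program mpi_scatter_write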
Any suggestions are greatly appreciated.
Regards,
Mokhles
________________________________________
From: Rob Latham [robl@mcs.anl.gov]
Sent: Wednesday, May 20, 2009 7:57 PM
To: Mezghani, Mokhles B
Cc: hdf-forum@hdfgroup.org
Subject: Re: [hdf-forum] Scattered read and write
On Wed, May 20, 2009 at 09:08:32AM +0300, Mezghani, Mokhles B wrote:
> Good morning Rob,
> The bulk time is spent in the reading/writing phase. You will find
> attached a small Fortran program that writes scattered data. You can
> use this program to see the problem. Please let me know if you need
> any additional information or examples.
What I meant to determine is whether the overhead is in the MPI-IO layer or in the HDF5 layer.
Thank you very much for the testcase. It's exactly what I hoped you'd
send. I can confirm that the code you sent is slow -- dirt slow:
roughly 1 MB per 10 minutes. I had to cut the number of points down to
100k just so it would finish in a reasonable amount of time :>
I can see that for me, HDF5 is turning a collective h5dwrite_f into N
individual MPI_File_write_at calls. I don't know anything about HDF5
internals, but you've described all the elements of the dataset you
want with h5sselect_elements_f. I would have expected HDF5 to
construct a monster datatype, feed that into MPI_File_write_at_all ...
and then send me a bug report when that doesn't work :>
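For anyone following along without the attachment, the pattern in
question is roughly the following (my reconstruction, not the actual
testcase; the dataset size and scatter pattern are made up, and it is
written for a single rank):

    program h5_scatter_write
      use hdf5
      use mpi
      implicit none
      integer, parameter :: npoints = 100000
      integer(hsize_t) :: dims(1), mdims(1)
      integer(hsize_t) :: coord(1, npoints)  ! 1-based in the Fortran API
      integer(size_t)  :: num_points = npoints
      double precision :: buf(npoints)
      integer(hid_t)   :: fapl, xfer, file_id, filespace, memspace, dset_id
      integer :: hdferr, ierr, i

      call MPI_Init(ierr)
      call h5open_f(hdferr)

      dims(1) = 2_hsize_t * npoints        ! made-up dataset size
      mdims(1) = npoints
      do i = 1, npoints                    ! made-up scatter pattern
         coord(1, i) = int(2 * i, hsize_t)
         buf(i) = dble(i)
      end do

      ! Open the file through the MPI-IO driver.
      call h5pcreate_f(H5P_FILE_ACCESS_F, fapl, hdferr)
      call h5pset_fapl_mpio_f(fapl, MPI_COMM_WORLD, MPI_INFO_NULL, hdferr)
      call h5fcreate_f('scatter.h5', H5F_ACC_TRUNC_F, file_id, hdferr, &
                       access_prp=fapl)

      call h5screate_simple_f(1, dims, filespace, hdferr)
      call h5dcreate_f(file_id, 'data', H5T_NATIVE_DOUBLE, filespace, &
                       dset_id, hdferr)

      ! Select the scattered target elements in the file ...
      call h5sselect_elements_f(filespace, H5S_SELECT_SET_F, 1, &
                                num_points, coord, hdferr)
      call h5screate_simple_f(1, mdims, memspace, hdferr)

      ! ... and request a collective transfer.  This h5dwrite_f is the
      ! call that decomposes into npoints independent MPI_File_write_at's.
      call h5pcreate_f(H5P_DATASET_XFER_F, xfer, hdferr)
      call h5pset_dxpl_mpio_f(xfer, H5FD_MPIO_COLLECTIVE_F, hdferr)
      call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, buf, mdims, hdferr, &
                      mem_space_id=memspace, file_space_id=filespace, &
                      xfer_prp=xfer)

      call h5pclose_f(xfer, hdferr)
      call h5pclose_f(fapl, hdferr)
      call h5sclose_f(memspace, hdferr)
      call h5sclose_f(filespace, hdferr)
      call h5dclose_f(dset_id, hdferr)
      call h5fclose_f(file_id, hdferr)
      call h5close_f(hdferr)
      call MPI_Finalize(ierr)
    end program h5_scatter_write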
I'm testing with HDF5-1.8.0.
HDF5 folks: is it possible I have an improperly built HDF5? That is,
would you expect h5sselect_elements_f to result in a single call (or a
few calls) to MPI_File_write_at_all, rather than the behavior I
described?
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA