HDF5 1.8.6-pre2 is available for testing

Hi everyone,

A new pre-release candidate of our HDF5 1.8.6 release is available for testing, and can be downloaded at the following link:

http://www.hdfgroup.uiuc.edu/ftp/pub/outgoing/hdf5/hdf5-1.8.6-pre2.tar

If you have some time to build and test this within the next week, it would be highly appreciated. We have addressed some issues related to parallel HDF5 that were exposed during the previous round of release candidate testing, so a focus on parallel applications and tests would be especially appreciated. If no critical errors are reported, we hope to release on or around November 16th.

Thank you, all!

The HDF Team
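
For anyone who wants a quick way to exercise the parallel code paths beyond "make check", a minimal collective-write program along the following lines can serve as a smoke test. This is only a sketch (the file name, dataset name, and sizes are arbitrary, and error checking is omitted), not one of the HDF Group's own tests:

/* Minimal parallel HDF5 smoke test: each rank writes one row of a 2-D
 * dataset through the MPI-IO driver with a collective transfer.
 * Build with something like:  mpicc smoke.c -lhdf5  (paths vary by install) */
#include <mpi.h>
#include <hdf5.h>

#define NCOLS 16

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Access the file through the MPI-IO virtual file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("smoke.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One row per rank in an nprocs x NCOLS dataset. */
    hsize_t dims[2] = {(hsize_t)nprocs, NCOLS};
    hid_t filespace = H5Screate_simple(2, dims, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Select this rank's row in the file and a matching buffer in memory. */
    hsize_t start[2] = {(hsize_t)rank, 0}, count[2] = {1, NCOLS};
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t memspace = H5Screate_simple(2, count, NULL);

    int buf[NCOLS];
    for (int i = 0; i < NCOLS; i++)
        buf[i] = rank * NCOLS + i;

    /* Request a collective write. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl);
    H5Sclose(memspace);
    H5Sclose(filespace);
    H5Dclose(dset);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}

Running it with, say, "mpiexec -n 4 ./smoke" and inspecting smoke.h5 with h5dump is enough to confirm that the MPI-IO driver and collective transfers are wired up.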

A compressed archive (gzip / bzip2 / ...) would be better.

--- Tue, 9/11/10, Mike McGreevy <mamcgree@hdfgroup.org> wrote:

···


- Configured, compiled, and passed "make check" on an x86-64 Linux
  system with MPICH2, even when I manually enabled the "complex
  derived datatype" and "special collective I/O" options.

Oh, good news: with the recent release of MPICH2-1.3.0, "mpicc
--version" finally reports a real MPICH2 version number instead of
recycling the compiler's. Now you can check for 1.3.x or newer and
know that both of those features work. Probably a bit late in the
game for 1.8.6, but just letting you know.
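
A related option for build scripts: MPICH2's mpi.h is assumed here to also expose its release version as MPICH2_VERSION / MPICH2_NUMVERSION macros (worth double-checking against your installation), in which case a tiny compile-time probe avoids parsing the "mpicc --version" output at all:

/* Compile-time probe for the MPICH2 release version.
 * ASSUMPTION: mpi.h defines MPICH2_VERSION (a string) and MPICH2_NUMVERSION
 * (an integer with the major version in the 10^7 digit); verify against
 * your mpi.h before relying on this. */
#include <stdio.h>
#include <mpi.h>

int main(void)
{
#if defined(MPICH2_NUMVERSION) && MPICH2_NUMVERSION >= 10300000
    printf("MPICH2 1.3 or newer (%s)\n", MPICH2_VERSION);
#elif defined(MPICH2_VERSION)
    printf("older MPICH2 (%s)\n", MPICH2_VERSION);
#else
    printf("not MPICH2, or no version macros in this mpi.h\n");
#endif
    return 0;
}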

- Configured and compiled on Blue Gene. Did not run 'make check', but
  did run the flash-io kernel on 256 processors.

I can see, thanks to Jumpshot, that you now have many processors doing
metadata updates before closing the file. For this small test, 22 of
the 256 processors (ranks 0-21) each do an MPI_FILE_WRITE_AT; those
writes all used to come from rank 0. Neat!
Could that final metadata update be done collectively? I think you've
explained this to me before, but I'm drawing a blank.

==rob

···

On Tue, Nov 09, 2010 at 09:51:03AM -0600, Mike McGreevy wrote:


--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA

Done!

.gz and .bz2 compressed versions of the pre-release tarball are now available at http://www.hdfgroup.uiuc.edu/ftp/pub/outgoing/hdf5/

- Mike

Marco Atzeri wrote:

···


Hi Rob,

You are probably seeing the "round-robin" metadata writing
optimization that came out of our work tuning HDF5 for Lustre that we
presented at IASDS2010. I'm guessing that you are seeing only 22
writes because that is how many individual pieces of metadata are in
your file. I'm surprised that you are seeing independent MPI writes,
because I thought it did generate collective calls when we were
testing it.

Eventually, the metadata writes will be consolidated into larger
contiguous pieces once a newer "pagefile" mechanism for metadata is
developed. The round-robin approach is more of a stop-gap until the
pagefile mechanism is available (at which point the metadata writes
should be large enough to interact much better with parallel file
systems).

(Quincey should correct me on anything I've gotten wrong here.)

Mark

···

On Wed, Nov 10, 2010 at 11:47 AM, Rob Latham <robl@mcs.anl.gov> wrote:


Hi all,

···

On Nov 10, 2010, at 11:04 AM, Mark Howison wrote:

On Wed, Nov 10, 2010 at 11:47 AM, Rob Latham <robl@mcs.anl.gov> wrote:

I see, thanks to jumpshot, that you now have many processors doing
metadata updates before closing the file. for this small test, 21 out
of 256 processors (ranks 0-21) do an MPI_FILE_WRITE_AT. Used to all
come from rank 0. Neat!
Could that final metadata update be done collectively? I think you've
explained to me why it could before, but I'm drawing a blank.

Hi Rob,

You are probably seeing the "round-robin" metadata writing
optimization that came out of our work tuning HDF5 for Lustre that we
presented at IASDS2010. I'm guessing that you are seeing only 22
writes because that is how many individual pieces of metadata are in
your file. I'm surprised that you are seeing independent MPI writes,
because I thought it did generate collective calls when we were
testing it.

Eventually, the metadata writes will be consolidated into larger
contiguous pieces once a newer "pagefile" mechanism for metadata is
developed. The round-robin approach is more of a stop-gap until the
pagefile mechanism is available (at which point the metadata writes
should be large enough to interact much better with parallel file
systems).

(Quincey should correct me on anything I've gotten wrong here.)

Nope, you are correct on all counts, except that we are still doing independent I/O calls. I think the next phase of work is to make the metadata I/O collective. I'm working on the page cache design right now and should have something reasonable in about a month or so.

Quincey
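
For readers who don't follow the independent-versus-collective distinction discussed above, the difference at the MPI-IO layer is roughly the following. This is only an illustrative sketch (the file name, sizes, and offsets are invented), not the code HDF5 uses internally:

/* Contrast between an independent and a collective MPI-IO write.
 * Each rank pretends to own one small piece of "metadata". */
#include <mpi.h>

#define BLOCK 512  /* invented size of one metadata piece */

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    char buf[BLOCK];
    for (int i = 0; i < BLOCK; i++)
        buf[i] = (char)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "meta.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset off = (MPI_Offset)rank * BLOCK;

    /* Independent: each rank that holds a piece issues its own write, and
     * the MPI-IO layer cannot coordinate or aggregate across ranks. This is
     * the pattern that shows up in Jumpshot as per-rank MPI_FILE_WRITE_AT. */
    MPI_File_write_at(fh, off, buf, BLOCK, MPI_CHAR, MPI_STATUS_IGNORE);

    /* Collective: every rank in the communicator participates (a rank with
     * nothing to write would pass a count of 0), which lets the MPI-IO layer
     * merge many small pieces into fewer, larger requests. */
    MPI_File_write_at_all(fh, off + (MPI_Offset)nprocs * BLOCK, buf, BLOCK,
                          MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

In HDF5's case the data would of course come from its metadata cache rather than a user buffer; the point is only which MPI call gets issued, since the collective form is what gives the MPI-IO layer room to aggregate.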