Parallel I/O with HDF5

Hi All,

Hope all is well.

I am trying to use the HDF5 parallel I/O feature for extreme-scale computing.
I would like each processor to write out a separate dataset.
This question is actually addressed on the HDF5 website.
Because dataset creation is a collective call, every processor has to take part in creating every dataset.

https://www.hdfgroup.org/HDF5/faq/parallel.html "How do you write to a single file in parallel in which different processes write to separate datasets?" Please advise on this matter.

The answer is not satisfying for extreme-scale computing, where hundreds of thousands of cores are involved.

Is there a better way of overcoming this issue?

Your advice on this issue is greatly appreciated.

Thanks

Dr J

Only the dataset creation has to be called on all ranks, not the actual writing of the array data.
So all ranks should call H5Dcreate() for all the datasets, but then each rank can write to its corresponding dataset.
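
Something like the following shows that pattern in full. It is a minimal, untested sketch; the file name, dataset names, and sizes are made up for illustration:

/* Minimal sketch (untested): every rank collectively creates one
 * dataset per rank, then each rank writes only its own dataset
 * independently. Build against a parallel HDF5, e.g. with h5pcc. */
#include <hdf5.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NELEMS 1024 /* elements per rank, for illustration */

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Open one shared file with the MPI-IO file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    H5Pclose(fapl);

    /* Collective part: every rank creates ALL the datasets, so the
     * file metadata stays identical across ranks. */
    hsize_t dims[1] = { NELEMS };
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t *dsets = malloc((size_t)nranks * sizeof(hid_t));
    for (int i = 0; i < nranks; i++) {
        char name[32];
        snprintf(name, sizeof(name), "data_rank_%06d", i);
        dsets[i] = H5Dcreate2(file, name, H5T_NATIVE_DOUBLE, space,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    }

    /* Independent part: each rank writes raw data only to its own
     * dataset; the default transfer property list is independent. */
    double *buf = malloc(NELEMS * sizeof(double));
    for (int j = 0; j < NELEMS; j++) buf[j] = (double)rank;
    H5Dwrite(dsets[rank], H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
             H5P_DEFAULT, buf);

    for (int i = 0; i < nranks; i++) H5Dclose(dsets[i]);
    H5Sclose(space);
    H5Fclose(file);
    free(buf);
    free(dsets);
    MPI_Finalize();
    return 0;
}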

Alternatively, you can have one rank create the entire file serially, then close the file, then all ranks open it in parallel and write the raw data.
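
In sketch form (again untested, reusing rank, nranks, NELEMS, and buf from the sketch above; the early-allocation setting is my own assumption, added so the later raw-data writes never have to touch metadata):

/* Sketch of the alternative (untested): rank 0 creates the file and
 * all dataset metadata serially, then every rank reopens the file in
 * parallel and writes its raw data independently. */
if (rank == 0) {
    hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hsize_t dims[1] = { NELEMS };
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    /* Assumption: allocate raw-data space at creation time so the
     * parallel writes below do not modify any metadata. */
    H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);
    for (int i = 0; i < nranks; i++) {
        char name[32];
        snprintf(name, sizeof(name), "data_rank_%06d", i);
        H5Dclose(H5Dcreate2(file, name, H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT));
    }
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
}
MPI_Barrier(MPI_COMM_WORLD); /* everyone waits until the file exists */

hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
hid_t file = H5Fopen("out.h5", H5F_ACC_RDWR, fapl);
H5Pclose(fapl);

char name[32];
snprintf(name, sizeof(name), "data_rank_%06d", rank);
hid_t dset = H5Dopen2(file, name, H5P_DEFAULT);
H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);
H5Dclose(dset);
H5Fclose(file);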

Thanks,
Mohamad

···

From: Hdf-forum <hdf-forum-bounces@lists.hdfgroup.org> on behalf of jaber javanshir <jaberjavanshir@hotmail.com>
Reply-To: hdf-forum <hdf-forum@lists.hdfgroup.org>
Date: Tuesday, August 30, 2016 at 4:21 PM
To: hdf-forum <hdf-forum@lists.hdfgroup.org>
Subject: [Hdf-forum] Parallel I/O with HDF5

Hi,

AFAIK, the issue is that if you create a new dataset in the file, all ranks
have to know about it (I think the reason is obvious). I don't think there
is a file format that can solve this issue. The best idea is still to use
different files if you are not writing to the same dataset!

Cheers,

Matthieu

···

2016-08-30 22:21 GMT+01:00 jaber javanshir <jaberjavanshir@hotmail.com>:

--
Information System Engineer, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher

If you take those hundreds of thousands of cores and issue I/O to the parallel file system from all of them, you will probably break your file system.

So are you imagining, say, 1,000 datasets, with 100 cores writing to each dataset? Or one dataset per core? Can the HDF5 visualization and analysis tools deal reasonably well with 100,000 datasets?

A single shared dataset has a lot of workflow advantages. It also maps nicely to collective MPI-IO optimizations.
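
For comparison, a rough sketch of that single-shared-dataset pattern (untested; it assumes file, rank, nranks, NELEMS, and buf as in the earlier sketches), with each rank writing one hyperslab of a common dataset using collective transfer:

/* Sketch (untested): all ranks write disjoint hyperslabs of ONE
 * shared dataset, letting MPI-IO aggregate the accesses. */
hsize_t gdims[1] = { (hsize_t)nranks * NELEMS };
hid_t fspace = H5Screate_simple(1, gdims, NULL);
hid_t dset = H5Dcreate2(file, "shared_data", H5T_NATIVE_DOUBLE, fspace,
                        H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

/* Select this rank's contiguous slab of the global array. */
hsize_t start[1] = { (hsize_t)rank * NELEMS };
hsize_t count[1] = { NELEMS };
H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
hid_t mspace = H5Screate_simple(1, count, NULL);

/* Ask for collective MPI-IO on the transfer. */
hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, buf);

H5Pclose(dxpl);
H5Sclose(mspace);
H5Sclose(fspace);
H5Dclose(dset);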

If you really need one dataset per process, then you probably also need to use the multi-dataset I/O routines (H5Dread_multi() and H5Dwrite_multi() -- are those released yet?)
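
For reference, a hypothetical sketch of what a batched multi-dataset write could look like. The signature below is the one H5Dwrite_multi() eventually shipped with, in HDF5 1.14, well after this thread; dset_a, dset_b, buf_a, and buf_b are placeholder handles and buffers:

/* Hypothetical sketch: write to two datasets in one call with
 * H5Dwrite_multi() (HDF5 1.14 signature; not yet released at the
 * time of this thread). */
hid_t       dsets[2]   = { dset_a, dset_b };
hid_t       mtypes[2]  = { H5T_NATIVE_DOUBLE, H5T_NATIVE_DOUBLE };
hid_t       mspaces[2] = { H5S_ALL, H5S_ALL };
hid_t       fspaces[2] = { H5S_ALL, H5S_ALL };
const void *bufs[2]    = { buf_a, buf_b };

hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE); /* one collective call covers both datasets */
H5Dwrite_multi(2, dsets, mtypes, mspaces, fspaces, dxpl, bufs);
H5Pclose(dxpl);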

==rob

···

On 08/30/2016 04:21 PM, jaber javanshir wrote:

My recollection is that a developer somewhere in Europe (maybe CERN) developed a convenience API on top of HDF5
that simplified collective dataset creation a bit by providing an interface where processors work independently to _define_
(names, types, sizes and shapes) the datasets they need to create and then call a collective _sync_ method where
all the collective dataset creation happens down in HDF5. Datasets from different ranks that have the same attributes
(e.g. name, type, size and shape) and are marked with a 'tag' wind up being common across the ranks that passed
the same tag. After the collective _sync_ operation, processors can again engage in either independent or collective
I/O to the datasets.

I have never used that API and I'll be darned if I can remember the name of it (I spent 20 minutes looking on Google), and
I don't even know if it is still being maintained. But it does provide a much simpler way of interacting with HDF5's
collective dataset creation requirement when that is necessary.

It might be an option if you can find it, or if another user here familiar with what I am talking about can send a link ;-)

Mark

···

From: Hdf-forum <hdf-forum-bounces@lists.hdfgroup.org> on behalf of Mohamad Chaarawi <chaarawi@hdfgroup.org>
Reply-To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>
Date: Wednesday, August 31, 2016 at 6:19 AM
To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>
Subject: Re: [Hdf-forum] Parallel I/O with HDF5

The collective calls to create datasets are required because the metadata needs to be consistent across all ranks for the datasets to be created correctly. You can write a sample program that violates this rule, and you'll see how the datasets get clobbered when creation is not coordinated; you don't get the results you would like.
Same with groups, attributes, and anything that affects the metadata, hence the list of required collective calls.

Mohamad's solution should work, since collective calls are only required when the file is opened in parallel across multiple ranks. Once the metadata for all the datasets has been created and written, and the file has been closed, you can open the file in parallel and each rank can write into it without affecting the other ranks' datasets.

Jarom

···

From: Hdf-forum [mailto:hdf-forum-bounces@lists.hdfgroup.org] On Behalf Of Mohamad Chaarawi
Sent: Wednesday, August 31, 2016 6:19 AM
To: HDF Users Discussion List
Subject: Re: [Hdf-forum] Parallel I/O with HDF5

Rob,

If you really need one dataset per process, then you probably also need to use the multi-dataset I/O routines (H5Dread_multi() and H5Dwrite_multi() -- are those released yet?)

Not yet... ;-(

···

On Aug 31, 2016, at 11:36 PM, Rob Latham <robl@mcs.anl.gov> wrote:

_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@lists.hdfgroup.org
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5