hdf5 parallel h5py

Dear all,

I work on a web application that should store and retrieve data from an HDF5 file.

Because several people work with this file, and because of long-running processes, I would like to use mpi4py, h5py, and HDF5.

I work on Debian Linux Stretch, 64-bit.

What is the way to parallel h5py?

Thanks and regards

Friedhelm Matten

It doesn't sound like parallel HDF5 is what you want here. Parallel HDF5 is for applications in which all processes write in a tightly coordinated manner: every process must make the same "collective" calls to the library, with the same arguments and in the same order, whenever it modifies file metadata (creating files, datasets, or groups, writing attributes, etc.; see https://support.hdfgroup.org/HDF5/doc/RM/CollectiveCalls.html).

It sounds like you have separate applications that execute somewhat independently. That will not work with parallel HDF5.

Using the serial library, I can think of at least one approach that might work well for you. HDF5 1.10 introduced a single-writer/multiple-reader (SWMR) mode for opening a file. With one SWMR file per process, each process would open its own file as the writer in SWMR mode and open the files of all the other processes read-only in SWMR mode.

http://docs.h5py.org/en/latest/swmr.html
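
For illustration, here is a minimal h5py SWMR sketch (the file and dataset names are invented for the example; the writer and reader would normally be separate processes):

```python
import h5py

# Writer: create the file with the latest file format, then switch it
# into SWMR mode before readers attach.
f = h5py.File("telemetry.h5", "w", libver="latest")
dset = f.create_dataset("samples", shape=(0,), maxshape=(None,), dtype="f8")
f.swmr_mode = True
for value in (1.0, 2.0, 3.0):
    dset.resize((dset.shape[0] + 1,))
    dset[-1] = value
    dset.flush()  # make the appended element visible to SWMR readers

# Reader (normally running concurrently elsewhere): open read-only in SWMR mode.
r = h5py.File("telemetry.h5", "r", libver="latest", swmr=True)
samples = r["samples"]
samples.refresh()  # pick up elements appended since the last refresh
print(samples[:])
r.close()
f.close()
```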

Jarom

···

From: Hdf-forum [mailto:hdf-forum-bounces@lists.hdfgroup.org] On Behalf Of ISCaD GmbH
Sent: Thursday, August 3, 2017 1:19 AM
To: hdf-forum@lists.hdfgroup.org
Subject: [Hdf-forum] hdf5 parallel h5py


I would also look at h5serv (https://github.com/HDFGroup/h5serv), which puts a server in front of your HDF5 file. A single process owns the file and serves as the serialization point, which side-steps almost all of the multiple-client issues. h5pyd (https://github.com/HDFGroup/h5pyd) is a client of h5serv with a high-level API identical to h5py's.
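
As a rough sketch of what that looks like from the client side (the domain name and endpoint below are placeholders for a local h5serv instance, not real values):

```python
import h5pyd  # h5py-compatible client for h5serv

# Hypothetical domain and endpoint; h5serv addresses files by
# DNS-style domain names served from a configurable endpoint.
f = h5pyd.File("mydata.exampleuser.home", "r",
               endpoint="http://127.0.0.1:5000")
print(list(f.keys()))  # the high-level API mirrors h5py
f.close()
```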

Tom

···


Friedhelm,

   I'm not familiar with the specifics of your web app, but another possibility is to have the app call h5serv directly.

   Anika @ NASA Goddard wrote a nice blog article on this approach: https://www.hdfgroup.org/2017/04/the-gfed-analysis-tool-an-hdf-server-implementation/.

John

···

From: Hdf-forum <hdf-forum-bounces@lists.hdfgroup.org> on behalf of Thomas Caswell <tcaswell@gmail.com>
Reply-To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>
Date: Saturday, August 5, 2017 at 9:48 AM
To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>, "friedhelm.matten@iscad-it.de" <friedhelm.matten@iscad-it.de>
Subject: Re: [Hdf-forum] hdf5 parallel h5py


Hi,

The way to parallel h5py is first to compile h5py against a parallel build of HDF5. If you are looking for examples of parallel h5py, take a look at the NERSC page:
http://www.nersc.gov/users/data-analytics/data-management/i-o-libraries/hdf5-2/h5py/
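
As a minimal sketch of what a parallel h5py program looks like once h5py is built with MPI support (the file name is arbitrary; run it under mpiexec), note that opening the file and creating the dataset are collective operations that every rank performs identically:

```python
# Run with, e.g.: mpiexec -n 4 python demo_parallel.py
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Collective: all ranks open the file and create the dataset with
# identical arguments, as parallel HDF5 requires.
with h5py.File("demo_parallel.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("ranks", (comm.Get_size(),), dtype="i8")
    # Independent raw-data write: each rank fills only its own element.
    dset[rank] = rank
```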

Best,
Jialin

···


Hi,

The database for the web application holds a lot of measured stream data and data that comes in on events. The data is collected in a staging area and then transferred into the web app's data area. Some data must be collected via forms.

That is the orchestra: a lot of hierarchical data, which, as you all know, is not the best fit for traditional SQL.

Other people use the system for reporting, summaries, and statistics.

We have a database version with high availability and a lot of work invested in stability, and so on.

We are searching for new ways!

Thanks and regards

Friedhelm

···

-----Original Message-----
From: Hdf-forum [mailto:hdf-forum-bounces@lists.hdfgroup.org] On Behalf Of
hdf-forum-request@lists.hdfgroup.org
Sent: Monday, 7 August 2017 17:06
To: hdf-forum@lists.hdfgroup.org
Subject: Hdf-forum Digest, Vol 98, Issue 6


Hi,

I looked at SWMR and h5serv.

h5serv: thanks, a good way, but I read the h5serv release notes and noticed:

"High performance usage - the current release of h5serv serializes all requests, so it would not be suitable for a demanding environment with a large number of clients and/or high throughput rates."

and

"Variable length datatypes - Variable length datatypes are now supported."

These are noteworthy points for me.

SWMR sounds good, but from what I can see it is at an early stage, and I also need parallel writes for big data loads.

Thanks and regards

···

-----Original Message-----
From: Hdf-forum [mailto:hdf-forum-bounces@lists.hdfgroup.org] On Behalf Of hdf-forum-request@lists.hdfgroup.org
Sent: Monday, 7 August 2017 17:13
To: hdf-forum@lists.hdfgroup.org
Subject: Hdf-forum Digest, Vol 98, Issue 7


Friedhelm,

"h5serv: thanks, a good way, but I read the h5serv release notes and noticed: High performance usage - the current release of h5serv serializes all requests, so it would not be suitable for a demanding environment with a large number of clients and/or high throughput rates..."

We have actually been working on a successor product to h5serv, called HDF Cloud, which we will be announcing in a couple of weeks. HDF Cloud adds significant functionality beyond h5serv, including scalability to large numbers of clients, support for AWS S3 object storage, etc. We already have several beta clients testing it.

"SWMR sounds good but: ... What I can see is at an early stage. I also need parallel writes for big data loads."

SWMR is fairly mature. There is always room for improvement, but plenty of HDF users rely on this capability and can share their experience here.

Feel free to reach out to me or Dax if you would like to learn more. Cheers,

-- Dave Pearah

···

________________________________
From: Hdf-forum <hdf-forum-bounces@lists.hdfgroup.org> on behalf of ISCaD GmbH <friedhelm.matten@iscad-it.de>
Sent: Tuesday, August 8, 2017 2:24:46 AM
To: hdf-forum@lists.hdfgroup.org
Subject: Re: [Hdf-forum] hdf5 parallel h5py

Hi,

I look for SWMR and h5serv:

H5serv thanks good way but:

I read the h5serv release notes and notify:

High Performance usage ©\ the current release of h5serv serializes all requests, so would not be suitable for a demanding environment with a large number of clients and/or high throughput rates.

and

Variable length datatypes - Variable length datatypes are now supported.

This are remarkable issues for me.

SWMR sounds good but:

What I see, is in early stage.
I need also parallel write for big data loads.

Thanks and regards

-----Urspr¨¹ngliche Nachricht-----
Von: Hdf-forum [mailto:hdf-forum-bounces@lists.hdfgroup.org] Im Auftrag von hdf-forum-request@lists.hdfgroup.org
Gesendet: Montag, 7. August 2017 17:13
An: hdf-forum@lists.hdfgroup.org
Betreff: Hdf-forum Digest, Vol 98, Issue 7

Send Hdf-forum mailing list submissions to
        hdf-forum@lists.hdfgroup.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org

or, via email, send a message with subject or body 'help' to
        hdf-forum-request@lists.hdfgroup.org

You can reach the person managing the list at
        hdf-forum-owner@lists.hdfgroup.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of Hdf-forum digest..."

Today's Topics:

   1. Re: hdf5 parallel h5py (Jialin Liu)

----------------------------------------------------------------------

Message: 1
Date: Mon, 7 Aug 2017 11:12:23 -0400
From: Jialin Liu <jalnliu@lbl.gov>
To: hdf-forum@lists.hdfgroup.org
Subject: Re: [Hdf-forum] hdf5 parallel h5py
Message-ID:
        <CAOWawpGNLZz38c0CnPv00RyctgAdW9bGWLc+gNqVx_0Z7KcQ_A@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi,
The way to parallel h5py is first to compile h5py with parallel HDF5. If you are looking for examples of parallel H5py, take a look at the nersc
webpage:
http://www.nersc.gov/users/data-analytics/data-management/i-o-libraries/hdf5-2/h5py/

Best,
Jialin

On Mon, Aug 7, 2017 at 11:06 AM, <hdf-forum-request@lists.hdfgroup.org> wrote:


------------------------------

Message: 2
Date: Mon, 7 Aug 2017 09:14:28 -0500
From: Quincey Koziol <koziol@lbl.gov>
To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>
Subject: Re: [Hdf-forum] "File too large" error, seemingly related to
        MPI

Hi Frederic,
        Could you give us some more details about your file and the call(s) you are making to HDF5? I can't think of any reason that it would crash when creating a file like this, but something interesting could be going on... :-)

        Quincey

> On Aug 7, 2017, at 5:28 AM, Frederic Perez <fredericperez1@gmail.com> wrote:
>
> Hi,
>
> While writing a significant amount of data in parallel, I obtain the
> following error stack:
>
> HDF5-DIAG: Error detected in HDF5 (1.8.16) MPI-process 66:
> #000: H5D.c line 194 in H5Dcreate2(): unable to create dataset
> major: Dataset
> minor: Unable to initialize object
> #001: H5Dint.c line 453 in H5D__create_named(): unable to create
> and link to dataset
> major: Dataset
> minor: Unable to initialize object
> #002: H5L.c line 1638 in H5L_link_object(): unable to create new
> link to object
> major: Links
> minor: Unable to initialize object
> #003: H5L.c line 1882 in H5L_create_real(): can't insert link
> major: Symbol table
> minor: Unable to insert object
> #004: H5Gtraverse.c line 861 in H5G_traverse(): internal path traversal failed
> major: Symbol table
> minor: Object not found
> #005: H5Gtraverse.c line 641 in H5G_traverse_real(): traversal operator failed
> major: Symbol table
> minor: Callback failed
> #006: H5L.c line 1685 in H5L_link_cb(): unable to create object
> major: Object header
> minor: Unable to initialize object
> #007: H5O.c line 3016 in H5O_obj_create(): unable to open object
> major: Object header
> minor: Can't open object
> #008: H5Doh.c line 293 in H5O__dset_create(): unable to create dataset
> major: Dataset
> minor: Unable to initialize object
> #009: H5Dint.c line 1060 in H5D__create(): can't update the metadata cache
> major: Dataset
> minor: Unable to initialize object
> #010: H5Dint.c line 852 in H5D__update_oh_info(): unable to update
> layout/pline/efl header message
> major: Dataset
> minor: Unable to initialize object
> #011: H5Dlayout.c line 238 in H5D__layout_oh_create(): unable to
> initialize storage
> major: Dataset
> minor: Unable to initialize object
> #012: H5Dint.c line 1713 in H5D__alloc_storage(): unable to
> initialize dataset with fill value
> major: Dataset
> minor: Unable to initialize object
> #013: H5Dint.c line 1805 in H5D__init_storage(): unable to allocate
> all chunks of dataset
> major: Dataset
> minor: Unable to initialize object
> #014: H5Dchunk.c line 3575 in H5D__chunk_allocate(): unable to
> write raw data to file
> major: Low-level I/O
> minor: Write failed
> #015: H5Dchunk.c line 3745 in H5D__chunk_collective_fill(): unable
> to write raw data to file
> major: Low-level I/O
> minor: Write failed
> #016: H5Fio.c line 171 in H5F_block_write(): write through metadata
> accumulator failed
> major: Low-level I/O
> minor: Write failed
> #017: H5Faccum.c line 825 in H5F__accum_write(): file write failed
> major: Low-level I/O
> minor: Write failed
> #018: H5FDint.c line 260 in H5FD_write(): driver write request failed
> major: Virtual File Layer
> minor: Write failed
> #019: H5FDmpio.c line 1846 in H5FD_mpio_write(): MPI_File_write_at_all failed
> major: Internal error (too specific to document in detail)
> minor: Some MPI function failed
> #020: H5FDmpio.c line 1846 in H5FD_mpio_write(): Other I/O error ,
> error stack:
> ADIOI_NFS_WRITESTRIDED(672): Other I/O error File too large
> major: Internal error (too specific to document in detail)
> minor: MPI Error String
>
>
> It basically claims that the file I am creating is too large. But I
> verified that the filesystem is capable of handling such a size. In
> my case, the file is around 4 TB when it crashes. Where could this
> problem come from? I thought HDF5 had no problem with very large
> files. Plus, I am dividing the file into several datasets, and the
> write operations work perfectly until, at some point, it crashes
> with the errors above.
>
> Could it be an issue with HDF5? Or could it be an MPI limitation? I
> am skeptical about the latter option: at the beginning, the program
> writes several datasets inside the file successfully (all the
> datasets being the same size). If MPI were to blame, why wouldn't it
> crash at the first write?
>
> Thank you for your help.
> Fred
>
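
One detail stands out in the stack: it dies inside H5D__chunk_collective_fill, i.e. while collectively writing fill values to every chunk at dataset creation, before any user data is written. Purely as a diagnostic, and only as a sketch (names and sizes are invented, and this is in no way a confirmed fix), that fill pass can be switched off with the equivalent of the C call H5Pset_fill_time(dcpl, H5D_FILL_TIME_NEVER), shown here via h5py's low-level bindings:

import h5py

f = h5py.File("big_demo.h5", "w")  # in the real run: driver="mpio", comm=...
space = h5py.h5s.create_simple((1000000,))
dcpl = h5py.h5p.create(h5py.h5p.DATASET_CREATE)
dcpl.set_chunk((4096,))
dcpl.set_fill_time(h5py.h5d.FILL_TIME_NEVER)  # skip the collective fill at allocation
dsid = h5py.h5d.create(f.id, b"data", h5py.h5t.NATIVE_DOUBLE, space, dcpl)
dset = h5py.Dataset(dsid)

If the error then moves from dataset creation to the first large write, that would point at the MPI-IO/NFS layer (ADIOI_NFS_WRITESTRIDED in the stack) rather than at HDF5 itself.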

------------------------------

Message: 3
Date: Mon, 7 Aug 2017 09:15:41 -0500
From: Quincey Koziol <koziol@lbl.gov>
To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>
Subject: Re: [Hdf-forum] VCS URI

Hi David,
        Sure, the git repo is here: https://bitbucket.hdfgroup.org/projects/HDFFV/repos/hdf5/browse

                Quincey

> On Aug 7, 2017, at 5:27 AM, David Seifert <soap@gentoo.org> wrote:
>
> Hi HDF5 team and users,
> is there any possibility for me to develop against the current VCS
> sources, for adding pkgconfig + Meson support? This would make
> development and feature addition a lot easier, as there won't be any
> conflicts.
>
> Regards
> David
>

------------------------------

Message: 4
Date: Mon, 7 Aug 2017 15:05:28 +0000
From: John Readey <jreadey@hdfgroup.org>
To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>,
        "friedhelm.matten@iscad-it.de" <friedhelm.matten@iscad-it.de>
Subject: Re: [Hdf-forum] hdf5 parallel h5py

Friedhelm,

   I'm not familiar with the specifics of your web app, but another
possibility is to just have the app call h5serv directly.

   Anika @ NASA Goddard wrote a nice blog article on this approach:
https://www.hdfgroup.org/2017/04/the-gfed-analysis-tool-an-hdf-server-implementation/.
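
For a rough idea of what calling h5serv directly looks like from Python, here is a sketch using requests. The endpoint, domain name, and dataset UUID are all placeholders, and the exact JSON layout is described in the HDF REST API documentation:

import requests

endpoint = "http://127.0.0.1:5000"         # wherever h5serv is listening
headers = {"Host": "mydata.hdfgroup.org"}  # h5serv maps DNS-style domains to .h5 files

# The domain root returns JSON that includes the UUID of the root group.
info = requests.get(endpoint + "/", headers=headers).json()
print(info["root"])

# Dataset values are read by UUID (placeholder below; a real app would walk
# the links under the root group to find the dataset it wants).
dset_uuid = "<dataset-uuid>"
value = requests.get(endpoint + "/datasets/" + dset_uuid + "/value",
                     headers=headers).json()["value"]
print(value)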

John



_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@lists.hdfgroup.org
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5