Portable way to write / read data

Hello,

What is the portable way to write / read data?

Say my data is an array of doubles (C type).
My understanding is that if I write them to a dataset with the H5T_NATIVE_DOUBLE type, then:
1. the type is mapped to H5T_IEEE_F64BE or H5T_IEEE_F64LE according to the endianness (say it is BE) of the computer I use;
2. the data are written to the HDF5 file.
Now I have to read the data from another computer (say it is an LE one).
My understanding is that I still must use the H5T_NATIVE_DOUBLE type, and that:
1. the data are read as BE;
2. the data are converted to LE;
3. the data are handed back to me as LE.

Am I correct? Is that the right way to do things? If not, what is the right way?
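
For concreteness, here is roughly the write / read pattern I have in mind (the file and dataset names are just placeholders):

```c
#include "hdf5.h"

/* Writer, on the (say BE) machine: the memory buffer is described with
 * H5T_NATIVE_DOUBLE; the file ends up with H5T_IEEE_F64BE or
 * H5T_IEEE_F64LE depending on that machine. */
void write_values(void)
{
    double  data[4] = {1.0, 2.0, 3.0, 4.0};
    hsize_t dims[1] = {4};

    hid_t file  = H5Fcreate("data.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "values", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
}

/* Reader, on the other (say LE) machine: again H5T_NATIVE_DOUBLE,
 * whatever byte order the file actually contains. */
void read_values(void)
{
    double data[4];

    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset = H5Dopen2(file, "values", H5P_DEFAULT);

    H5Dread(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Fclose(file);
}
```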

Happy new year,

Franck

Any answer? No clue on that?

FH


I don't think this is right. HDF deals with the endian-ness under the hood and you don't have to muck with it (in either #1 or #2).


I believe the type your data is stored as and the type you write or read with are separate issues, and HDF deals with it all for you if you use the types properly.

When you create a dataset you select the actual storage type. If you use a "NATIVE" type then the type on disk matches the endianness of your system.

When reading and writing you also give a type for the buffer your data occupies in memory. I believe this can be any type that is convertible to the type used at dataset creation. So you could make a dataset in BE and then write data from memory that is LE, and HDF does the translation as long as you told it the correct types. The useful thing here is that if you use "NATIVE" types for read/write, it doesn't matter what type the dataset was created with; HDF works it out for you. I imagine performance will suffer a bit if they don't match, though.
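
As a rough sketch of what I mean (untested, names made up): create the dataset with an explicit big-endian file type but hand HDF a native memory buffer, and it does the conversion on the write:

```c
#include "hdf5.h"

void write_be_dataset(hid_t file)
{
    double  buf[3]  = {1.0, 2.0, 3.0};   /* native doubles in memory */
    hsize_t dims[1] = {3};

    hid_t space = H5Screate_simple(1, dims, NULL);

    /* File (storage) type is explicitly big-endian IEEE 64-bit float... */
    hid_t dset = H5Dcreate2(file, "be_values", H5T_IEEE_F64BE, space,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* ...while the memory type is whatever this machine uses natively;
     * HDF translates between the two during the write. */
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

    H5Dclose(dset);
    H5Sclose(space);
}
```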

CAUTION: I don't actually know that this is true, but that's always been my impression of how it works.

- David


David's comments are correct. HDF5 knows the types of data in memory and the types of data in the file. When there is a mismatch between them, it will automagically convert the types.

Mainly, you just need to tell HDF5 the correct type you have in memory when you write it, and then the correct buffer type you expect to have back in memory when you read it. If you move data between BE and LE machines, HDF5 will handle that. If you move it between IEEE-754 and the old Cray floating-point formats, HDF5 will handle the conversion for you. If you wanna write doubles but read them back as floats, HDF5 can do that too. Etc., etc.
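
For instance, reading a dataset that was written as doubles back into a float buffer is just a matter of passing the float memory type on the read (dataset name made up, error checks omitted, and the buffer is assumed big enough):

```c
#include "hdf5.h"

/* The file holds 64-bit doubles (BE or LE, whatever it was written as);
 * we ask for H5T_NATIVE_FLOAT in memory and HDF5 converts on the fly. */
void read_doubles_as_floats(hid_t file, float *out)
{
    hid_t dset = H5Dopen2(file, "values", H5P_DEFAULT);
    H5Dread(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL, H5P_DEFAULT, out);
    H5Dclose(dset);
}
```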

When you *first* create a dataset, you get to choose how you want it stored in the file. Most people just choose the native memory type, but you don't have to do it that way. Say you are running on an IEEE-754 machine but anticipate that a majority of your work involving the data will be on some old Cray-float-based machines. Then you can, if you wanna bother, choose for the initially created dataset to be Cray float in the file even though you are initially creating the dataset on an IEEE-754 machine. Then, when you write the data there, it'll convert on write.
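
And if you ever want to double-check what actually ended up in the file, you can ask the dataset for its stored type (again, just a sketch):

```c
#include <stdio.h>
#include "hdf5.h"

/* Print the on-disk size and byte order of a dataset's type. */
void show_file_type(hid_t file, const char *name)
{
    hid_t       dset  = H5Dopen2(file, name, H5P_DEFAULT);
    hid_t       ftype = H5Dget_type(dset);      /* type as stored in the file */
    size_t      size  = H5Tget_size(ftype);
    H5T_order_t order = H5Tget_order(ftype);

    printf("%s: %zu-byte, %s\n", name, size,
           order == H5T_ORDER_BE ? "big-endian" :
           order == H5T_ORDER_LE ? "little-endian" : "other/none");

    H5Tclose(ftype);
    H5Dclose(dset);
}
```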

Performance? I am fairly certain that all of HDF5's primitive data conversion routines run well above disk I/O rates, so you won't even notice any performance cost when doing actual I/O. Years ago, when CPU speeds weren't so far ahead of disk speeds, this was a bit bigger issue. For data in HDF5 files on SSDs, conversion performance may play a noticeable role, but if so, it's probably *just barely* noticeable. Long story short, I wouldn't worry about the performance of the conversion routines.

Lastly, you can even create your own floating-point formats. Say you want a 36-bit floating-point format that represents some old IBM 7600 format for some data you have from the 1960s. You can do that too. You just need to create the type and define its base, bias, exponent, mantissa and sign bit fields. Once you do that, HDF5 can handle the conversion between it and any of the other formats it supports. Or maybe you want some 16-bit fixed-point format. You can do that too. It involves some nitty-gritty programming via the HDF5 C library interface, but it is possible.
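
To give a flavor of the nitty-gritty part, here is a sketch of building an IEEE-style 16-bit float (1 sign bit, 5 exponent bits, 10 mantissa bits) with the H5T calls; the call order matters in that you shrink the precision only after the fields fit inside it:

```c
#include "hdf5.h"

/* Custom 16-bit float: sign at bit 15, 5 exponent bits at bit 10,
 * 10 mantissa bits at bit 0, exponent bias 15 (IEEE half precision). */
hid_t make_half_float_type(void)
{
    hid_t t = H5Tcopy(H5T_IEEE_F32LE);   /* start from a standard float       */

    H5Tset_fields(t, 15, 10, 5, 0, 10);  /* sign / exponent / mantissa layout */
    H5Tset_precision(t, 16);             /* the fields now fit in 16 bits     */
    H5Tset_size(t, 2);                   /* 2 bytes on disk                   */
    H5Tset_ebias(t, 15);                 /* exponent bias                     */

    return t;   /* use as the file type at H5Dcreate time, then H5Tclose() it */
}
```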

Hope that helps.

Mark


Yes, it helps! :smiley:

Thanks,

Franck
