committed types question

I've run into an issue that puzzles me and I was hoping for some kind of explanation. I need to have multiple HDF5 files open for writing simultaneously, and I have tried to use committed datatypes in these files in the hope of making them easier to share. The H5Tcommit2 function fails if I try to commit the same datatype to a second file. Is this because, internally, the library associates that hid_t with something specific to the file and is only designed to handle one at a time? It seems kind of a pain for me to have to create a dozen copies of the same datatype definitions (as that is roughly how many files we'll have open).
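For reference, here is a stripped-down sketch of what I'm doing; the compound type and file names below are just placeholders, not my real code:

#include <stdio.h>
#include "hdf5.h"

/* Placeholder compound type, for illustration only. */
typedef struct {
    int    id;
    double value;
} record_t;

int main(void)
{
    hid_t f1 = H5Fcreate("a.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t f2 = H5Fcreate("b.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    hid_t dtype = H5Tcreate(H5T_COMPOUND, sizeof(record_t));
    H5Tinsert(dtype, "id",    HOFFSET(record_t, id),    H5T_NATIVE_INT);
    H5Tinsert(dtype, "value", HOFFSET(record_t, value), H5T_NATIVE_DOUBLE);

    /* First commit succeeds and turns dtype into a named type in f1. */
    herr_t s1 = H5Tcommit2(f1, "record_t", dtype,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Committing the same hid_t to the second file fails. */
    herr_t s2 = H5Tcommit2(f2, "record_t", dtype,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    printf("first commit: %d, second commit: %d\n", (int)s1, (int)s2);

    H5Tclose(dtype);
    H5Fclose(f1);
    H5Fclose(f2);
    return 0;
}

The second H5Tcommit2 call is the one that fails for me.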

Another thing I've noticed with committed datatypes is that even when I use them and create datasets based on the committed types, the datasets still show the entire structure of the compound datatypes when I use h5dump on them. Am I doing something wrong, or is h5dump just being verbose?

Hi John,

  For what purpose do you actually use named datatypes?

Basically, once committed, they are bound to a file, just like a dataset ID,
so indeed you would need as many committed datatype IDs as you have files open.
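As a rough sketch of what that looks like in practice (the helper name and the type name "record_t" below are just examples, not anything required by the library), you would keep one transient compound type in memory and commit a fresh H5Tcopy of it into each open file:

#include "hdf5.h"

/* Commit an independent copy of a transient "master" type into one file
 * and hand back the committed ID, so datasets created in that file can
 * reference the named type.  Returns a negative value on failure. */
hid_t commit_copy(hid_t file_id, const char *name, hid_t master_type)
{
    hid_t copy = H5Tcopy(master_type);        /* fresh transient copy */
    if (copy < 0)
        return -1;
    if (H5Tcommit2(file_id, name, copy,       /* bind the copy to this file */
                   H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT) < 0) {
        H5Tclose(copy);
        return -1;
    }
    return copy;   /* now a committed type; close with H5Tclose when done */
}

Calling something like this once per file and keeping the returned hid_t around for creating datasets in that file means the type definition itself still only exists once in your code.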

h5ls and h5dump both still show the full type information, and repeat
the internal details of the datatypes regardless of whether they are
committed or not, so everything is correct on your side.

  Werner




--
___________________________________________________________________________
Dr. Werner Benger, Visualization Research
Laboratory for Creative Arts and Technology (LCAT)
Center for Computation & Technology at Louisiana State University (CCT/LSU)
211 Johnston Hall, Baton Rouge, Louisiana 70803
Tel.: +1 225 578 4809   Fax: +1 225 578 5362