Hi,
I have some questions regarding H5Dcreate_anon() as implemented in
version 1.8.18 of the HDF5 library...
I'd like to use this function to create a temporary test dataset. If
it meets a certain condition, one that I basically can't determine
until the test dataset has been completely written to disk, I'll make
it accessible in the HDF5 file on disk with H5Olink(). Otherwise I'll
discard the temporary dataset and try again with relevant changes.
I'd like to be certain of two things that are needed for this approach
to work well:
1) Does the dataset created by H5Dcreate_anon() actually exist
(transiently) on disk, rather than being a clever wrapper around some
memory buffer? I am creating the dataset chunked and writing it out
chunk by chunk, so insufficient RAM isn't a problem UNLESS there is a
concern with using H5Dcreate_anon() for a dataset too large to fit in
memory at once.
2) I understand that a "normal" H5Dcreate() and a dataset write,
followed some time later by H5Ldelete(), can (in 1.8.18) leave wasted
space in the file on disk. Can H5Dcreate_anon() produce wasted space
in the same way when no later call to H5Olink() is made? [Assume that
H5Dclose() gets properly called.] I'm hoping not ... ?
Thanks in advance for info on this subject!
Regarding the wasted space, I have secondary questions.
3) I know that h5repack can be used to produce a new file without the
wasted space. But without h5repack, would creating more datasets in
the same file (with library version 1.8.18) re-use that wasted disk
space when possible?
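(For anyone else following along, the h5repack route I mean is just
the standard tool invocation; the file names here are placeholders:

```shell
# Rewrite file.h5 into a freshly packed copy, leaving no unreferenced
# free space, then replace the original. File names are placeholders.
h5repack file.h5 packed.h5
mv packed.h5 file.h5
```

I'd prefer to avoid the extra full-file rewrite if the library can
re-use the space itself.)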
4) Apparently 1.10.x has some mechanisms for managing and reclaiming
wasted space on disk in HDF5 files? Does that happen automatically
upon any call to H5Ldelete() with the 1.10.x library, or are some
additional function calls needed? I can't really find anything in the
docs about this, so a pointer would be much appreciated. (As noted on
this list previously, my employer can't upgrade to 1.10.x until there
is a way to produce 1.8.x-backwards-compatible output, but eventually
I guess we'll all get there...)
Thanks again,
--
Kevin B. McCarty
<kmccarty@gmail.com>