I’m working on an HDF5 filter that needs access to arbitrary metadata in the underlying file. The filter API won’t let me do that, however; the callbacks only provide access to the dataset’s property list, datatype, and dataspace:
htri_t (*H5Z_can_apply_func_t)(hid_t dcpl_id, hid_t type_id, hid_t space_id);
herr_t (*H5Z_set_local_func_t)(hid_t dcpl_id, hid_t type_id, hid_t space_id);
I have also been working on another filter that, for performance reasons, needs to allocate memory once and reuse it across the various calls to the filter. While I can allocate that memory in set_local(), the filter never knows when it’s safe to deallocate it, as the current filter design does not include a teardown callback.
I have a working patchset that introduces two new optional callbacks, the second of them being teardown(). In the first, the file id that’s embedded in the pipeline object (H5O_shared_t) is provided to the user, along with the three other well-known hid_t objects. The signature of the latter is the same as set_local()/can_apply(), and it’s called when the filter is being torn down.
My first question to you is: is this the right place to discuss API changes? The second: are you willing to accept such modifications? Last, my understanding is that we’d need a new H5Z_class3_t structure to avoid breaking existing applications. If that’s the case, then we could probably also have a new version of set_local() that simply takes an extra file handle argument, as opposed to introducing one more callback.
Thank you in advance for your attention and guidance.