[quote="derobins, post:2, topic:10003, full:true"]
Some things I can think of right off the bat:
Smarter handling of library-allocated memory (esp. filters & API calls that return library-allocated buffers). Can you include space performance testing of this too?
Actually remove calls marked as “deprecated” +1
Retire the multi VFD (but keep the split VFD aspect - we just don’t need multiple metadata channels) +1
Sanitize the metadata read code (the source of most CVE issues) +1
a time and space performance test suite
in-memory groups or “windowed” groups, where a whole group subtree is kept entirely in memory once opened, can be explicitly synced with disk by the caller, and is synced when closed.
thread-parallel reads (and maybe writes) for the same and different datasets in the same file
decouple parallel HDF5 from serial HDF5 so that a single install point serves both (e.g. -L/path/to/install -lhdf5 -lpar_hdf5 gets the parallel features)
A cook-book suite of real-world examples (i.e. not contrived for testing purposes but drawn from real-world use cases), documented and linked to other documentation sources (API reference, design docs, etc.), which demonstrate how to use HDF5 for common cases as well as how NOT to use HDF5. The best how-not-to-use example I have (https://github.com/markcmiller86/hdf5stuff/tree/master/graph_of_udts) is serialization of hierarchical data structures: naive users often wind up using HDF5 groups as the nodes in their hierarchy, and this has huge negative performance implications.
A way for apps to specify default properties (which may be different from the library’s deployed/installed defaults) to be followed within the current executable (somewhat related to the next item)
compression “strategies”, where callers don’t have to manipulate compression directly on each and every dataset written but can tell the library what “strategy” they wish to follow, and then on each write it does something useful (e.g. compress int types with gzip but compress float types with ZFP); see the sketch after this list
A simplified mode for error stack reporting that reports just the caller’s failed call (not internals)
A compression test suite / test bed with appropriate raw-data files, where compression of the same raw data via HDF5 is routinely compared with compression via common Unix command-line tools and the performance differences are understood.
routine (3-4 x per year) scalability testing to tens of thousands of parallel tasks (we can provide compute resources)
A better way to handle the “direct write” (or read) case, so that it behaves like an ordinary write (or read) and objects that are compressed in memory can be written to (or read from) files without having to uncompress and recompress them. Bottom line: some consumers might want the data uncompressed when read, while others might want the data to remain compressed even after having been read. On writes, if the data is already compressed in memory (maybe the caller needs to tell HDF5 that it is, with a property), it should just go to disk compressed.
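Regarding the compression “strategies” item above, here is a minimal sketch of what a caller-side strategy layer could look like with today’s API. The wrapper name and the int-vs-float policy are hypothetical, and the float branch assumes the H5Z-ZFP plugin (registered filter ID 32013) is installed; real use would pass actual ZFP parameters in cd_values.

```c
#include "hdf5.h"

#define H5Z_FILTER_ZFP 32013   /* registered ID of the H5Z-ZFP plugin (assumed installed) */

/* Hypothetical strategy wrapper: pick a compressor from the datatype class so
 * callers never touch filter settings on individual datasets. */
static hid_t
create_dataset_with_strategy(hid_t loc, const char *name, hid_t dtype,
                             hid_t space, int rank, const hsize_t *chunk_dims)
{
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, rank, chunk_dims);          /* filters require chunked layout */

    switch (H5Tget_class(dtype)) {
        case H5T_INTEGER:
            H5Pset_deflate(dcpl, 6);               /* gzip for integer data */
            break;
        case H5T_FLOAT:
            /* ZFP via the generic filter call; empty cd_values only for illustration */
            H5Pset_filter(dcpl, H5Z_FILTER_ZFP, H5Z_FLAG_MANDATORY, 0, NULL);
            break;
        default:
            break;                                 /* other classes left uncompressed */
    }

    hid_t dset = H5Dcreate2(loc, name, dtype, space, H5P_DEFAULT, dcpl, H5P_DEFAULT);
    H5Pclose(dcpl);
    return dset;
}
```

Something like this works today, but every team ends up writing its own version of it, which is why a library-level “strategy” hook would be nice.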
H5Dwrite_chunk (formerly H5DOwrite_chunk) allows you to write compressed or uncompressed data directly to disk. H5Dread_chunk allows you to read compressed data directly from disk.
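For reference, a minimal sketch of how those direct chunk I/O calls are used. The function and variable names are placeholders; dset is assumed to be a chunked, gzip-compressed dataset, and the caller has already compressed the data itself.

```c
#include <stdint.h>
#include "hdf5.h"

/* Sketch: write one caller-compressed chunk directly, bypassing the filter
 * pipeline, then read the raw chunk bytes back still compressed. */
static void
direct_chunk_io(hid_t dset, const void *comp_buf, size_t comp_size, void *read_buf)
{
    hsize_t  offset[2]   = {0, 0};  /* logical offset of the chunk, in elements */
    uint32_t filter_mask = 0;       /* 0 = all dataset filters recorded as applied */

    /* data goes to disk exactly as given, no recompression */
    H5Dwrite_chunk(dset, H5P_DEFAULT, filter_mask, offset, comp_size, comp_buf);

    /* read the same chunk back without running the filter pipeline */
    uint32_t read_mask = 0;
    H5Dread_chunk(dset, H5P_DEFAULT, offset, &read_mask, read_buf);
}
```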
Here’s another suggestion that is hopefully worthy of the “2.0” version…
Alternative storage representations of HDF5 data in cloud object stores indicate there is very little difference between a contiguous dataset and a chunked dataset with only one chunk and no filters applied. How about removing the contiguous storage layout and having only chunked and compact?
You’ve made me realize that there is an internal HDF5 issue at the moment: chunk sizes are stored internally as 32-bit values, i.e., the number of bytes in a chunk is stored in a 32-bit integer.
Furthermore, H5D_chunk_iter_op_t is about to expose this 32-bit value to the public API rather than using hsize_t.
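A sketch of the callback shape in question (parameter names are illustrative and the signature that finally ships may differ; the point is the 32-bit chunk size):

```c
/* Illustrative only, not the final public signature.  hsize_t and haddr_t are
 * the usual HDF5 types.  The concern is the chunk size being exposed as a
 * 32-bit value instead of hsize_t. */
typedef int (*H5D_chunk_iter_op_t)(const hsize_t *offset,
                                   uint32_t       filter_mask,
                                   haddr_t        addr,
                                   uint32_t       nbytes,   /* 32-bit chunk size */
                                   void          *op_data);
```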
Actually, we are almost there after the new indexing for chunked datasets was introduced in 1.10.0.
Current APIs and the programming model still require the use of H5Pset_chunk, but this call could be omitted if there is only one chunk (i.e., contiguous storage). Then compression could also be used on a “contiguous” dataset.
I couldn’t convince Quincey to introduce this change in 1.10.0 and the rest is history.
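For illustration, this is roughly what the current programming model requires to get compression on what is logically a contiguous dataset (the function name, dataset name, and sizes here are arbitrary):

```c
#include "hdf5.h"

/* Sketch: compression today requires chunked layout, so a logically
 * "contiguous" compressed dataset is expressed as one chunk covering the
 * whole extent.  'file' is an already-open file handle. */
static hid_t
create_compressed_contiguous(hid_t file)
{
    hsize_t dims[2] = {1000, 1000};
    hid_t   space   = H5Screate_simple(2, dims, NULL);
    hid_t   dcpl    = H5Pcreate(H5P_DATASET_CREATE);

    H5Pset_chunk(dcpl, 2, dims);   /* one chunk == whole dataset; the call that
                                      could arguably be implied automatically */
    H5Pset_deflate(dcpl, 6);       /* gzip */

    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);
    H5Pclose(dcpl);
    H5Sclose(space);
    return dset;
}
```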
Set filter parameters with string keyword arguments.
Official registry for filter ID code strings. The current registry is a good start, if made official for existing ID code strings, not just the numbers.
The use of contiguous storage is well entrenched in the HDF5 universe. It has certain advantages, such as plain simplicity and optimal subset access on local storage. I would prefer that support for contiguous be sustained. If compression is desired, just go to chunked storage, as intended by design.
Why should contiguous storage not just be a simple case of chunked storage? Contiguous storage is basically just chunked storage with a single chunk and no filters, no?
I think I understand and agree with the spirit of this request. It’s much easier to remember “gzip” or maybe “lzma2” as the identifier for a filter than “032105”. That said, can’t this already be achieved by adding a layer on top of the existing interface that keeps a mapping between strings and numbers? I don’t think the table would ever get so large that a linear search of it would have a negative performance impact. And, I honestly don’t think this needs to wait for an HDF5 2.0 or for THG to implement it to make it happen. It may already be implemented somewhere in the world.
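To make that concrete, here is a sketch of such a mapping layer. The table and the lookup_filter_id helper are hypothetical; the built-in IDs use the library’s own constants, and the others are registered IDs from the public filter registry (partial list).

```c
#include <string.h>
#include "hdf5.h"

/* Hypothetical caller-side mapping from human-readable filter names to
 * filter IDs (partial list). */
static const struct { const char *name; H5Z_filter_t id; } filter_names[] = {
    { "gzip",    H5Z_FILTER_DEFLATE },  /* built-in, ID 1 */
    { "shuffle", H5Z_FILTER_SHUFFLE },  /* built-in, ID 2 */
    { "szip",    H5Z_FILTER_SZIP    },  /* built-in, ID 4 */
    { "lz4",     32004 },               /* registered third-party IDs */
    { "zfp",     32013 },
    { "zstd",    32015 },
};

static H5Z_filter_t
lookup_filter_id(const char *name)
{
    for (size_t i = 0; i < sizeof filter_names / sizeof filter_names[0]; i++)
        if (strcmp(name, filter_names[i].name) == 0)
            return filter_names[i].id;
    return -1;  /* unknown name */
}
```

A linear search over a table this size costs nothing; the only real question is who maintains the name-to-ID table, which is where an official registry of ID code strings would help.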
“Filter ID code strings”. That is a mouthful, sorry, but I was trying to be complete in more than one way. I am referring to the “Name” column in the HDF5 registry, in addition to simple names for the built-in filters. Here are a few examples.
Another thing to think about for HDF5 2.0: if you examine a lot of the functionality needed to manage metadata in an HDF5 file, you will find it is quite similar to what file systems have to do to manage their “media”. This kind of layering of the same abstractions, one upon the other, for the purpose of providing ever more abstract storage objects is very similar to the IP protocol stack. On the one hand, this layering permits implementation of various abstractions. On the other hand, it feels duplicative, wasteful, and unnecessarily complex. Are there ways that an HDF5 2.0 could avoid this and instead utilize some of the pieces of lower-level media abstractions directly, rather than reimplementing its own to produce an HDF5 “container” inside of a file system container (inside of a spinning-disk container), etc.?