In our application we log real-time data from a system that produces data frames of about 5 kB each. We currently log this data at 25 Hz and would like to increase the logging rate significantly in the future. Each frame holds the values of many different variables (about 1200) from our system, so we keep a mapping that describes each variable's offset and data type within the frame.
In the logging application, which writes to an HDF5 file, each time we receive a frame we iterate over the mapping and log each data element individually into its own dataset in the HDF5 file. Effectively:
- For each data element:
  1. Get the variable name, offset, and data type from the mapping
  2. Index into the data frame and copy the value
  3. Get a handle to the HDF5 dataset with the same name and write the value to it
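To make the current approach concrete, here is a minimal sketch of the loop above using h5py. The mapping entries (`motor/speed` etc.), the 16-byte frame, and the file name are all made up for illustration; our real table has ~1200 entries:

```python
import numpy as np
import h5py

# Hypothetical mapping: variable name -> (byte offset, dtype).
# Three entries shown; the real mapping has ~1200.
MAPPING = {
    "motor/speed":   (0,  np.dtype(np.float64)),
    "motor/current": (8,  np.dtype(np.float32)),
    "status/flags":  (12, np.dtype(np.uint32)),
}

def log_frame(h5file, frame_bytes):
    """Append each variable of one frame to its own extendable 1-D dataset."""
    for name, (offset, dtype) in MAPPING.items():
        # Index into the raw frame and copy out one value
        value = np.frombuffer(frame_bytes, dtype=dtype, count=1, offset=offset)[0]
        if name in h5file:
            ds = h5file[name]
        else:
            # chunks + maxshape=(None,) make the dataset appendable; the "/"
            # in the name creates the group hierarchy automatically
            ds = h5file.create_dataset(name, shape=(0,), maxshape=(None,),
                                       dtype=dtype, chunks=True)
        n = ds.shape[0]
        ds.resize(n + 1, axis=0)
        ds[n] = value

# Log two fake 16-byte frames
with h5py.File("per_variable.h5", "w") as f:
    for i in range(2):
        frame = bytearray(16)
        frame[0:8]   = np.float64(i).tobytes()
        frame[8:12]  = np.float32(i * 0.5).tobytes()
        frame[12:16] = np.uint32(i).tobytes()
        log_frame(f, bytes(frame))
```

The point is that every frame triggers ~1200 tiny dataset writes, which is the overhead we would like to eliminate.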
This seems like it could be greatly improved by writing the whole data frame in one go each time a frame is received, but I'm not sure HDF5 supports this use case, especially since we want to keep the hierarchical structure of the datasets in the HDF5 file.
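For reference, this sketch shows roughly what I mean by "writing the whole frame in one go": a NumPy structured dtype built from the mapping's offsets lets the raw frame be appended as a single record with one write call. The field names, offsets, and 16-byte layout are hypothetical, and note that this flattens everything into one compound dataset rather than keeping the hierarchy, which is exactly the part I'm unsure about:

```python
import numpy as np
import h5py

# Structured dtype built straight from the (hypothetical) mapping table:
# field offsets mirror the byte offsets within the raw frame.
FRAME_DTYPE = np.dtype({
    "names":    ["motor_speed", "motor_current", "status_flags"],
    "formats":  [np.float64, np.float32, np.uint32],
    "offsets":  [0, 8, 12],
    "itemsize": 16,
})

def log_frame(ds, frame_bytes):
    """Append one raw frame as a single record, written in one call."""
    record = np.frombuffer(frame_bytes, dtype=FRAME_DTYPE, count=1)
    n = ds.shape[0]
    ds.resize(n + 1, axis=0)
    ds[n] = record[0]  # the whole frame goes to disk in one write

with h5py.File("whole_frame.h5", "w") as f:
    # One compound dataset instead of ~1200 scalar datasets
    ds = f.create_dataset("frames", shape=(0,), maxshape=(None,),
                          dtype=FRAME_DTYPE, chunks=True)
    frame = bytearray(16)
    frame[0:8]   = np.float64(1.5).tobytes()
    frame[8:12]  = np.float32(2.0).tobytes()
    frame[12:16] = np.uint32(7).tobytes()
    log_frame(ds, bytes(frame))
```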
Is it possible to set up an HDF5 file in this way?