So, the HDF5 library has the ability to checksum any data passing between client and file via H5Dwrite/H5Dread, correct? This can be used to detect data corruption. Just curious: has there ever been any discussion of extending this functionality to attribute data, HDF5 library metadata, etc.?
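For context, here is a minimal sketch of turning that feature on from h5py (the filter in question is Fletcher-32, and it applies per chunk, so the dataset must be chunked; the file and dataset names here are just placeholders):

```python
import h5py
import numpy as np

# fletcher32=True enables HDF5's built-in checksum filter; the filter
# only applies to chunked datasets, so a chunk shape is given explicitly.
with h5py.File("example.h5", "w") as f:
    f.create_dataset(
        "data", data=np.arange(1000), chunks=(100,), fletcher32=True
    )

# On read, the library recomputes each chunk's checksum and raises an
# error if it does not match the value stored at write time.
with h5py.File("example.h5", "r") as f:
    values = f["data"][:]
```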
The way checksumming currently works, data is checksummed on write and the checksum is stored for each chunk of a dataset. Detection of possible corruption is only actually performed on read, by comparing the checksum of the chunk as read against whatever was stored for that chunk at write time, correct? If so, it means we can only detect data corruption upon read-back.
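To make that detect-on-read behavior concrete, here is a rough pure-Python sketch of the idea (a Fletcher-32-style checksum; the word order and modulus details of HDF5's actual implementation may differ):

```python
def fletcher32(data: bytes) -> int:
    """Fletcher-32-style checksum over 16-bit little-endian words."""
    if len(data) % 2:                # pad odd-length input with a zero byte
        data += b"\x00"
    sum1 = sum2 = 0
    for i in range(0, len(data), 2):
        word = data[i] | (data[i + 1] << 8)
        sum1 = (sum1 + word) % 65535
        sum2 = (sum2 + sum1) % 65535
    return (sum2 << 16) | sum1

# "Write": checksum the chunk and store the result alongside the data.
chunk = b"some chunk of dataset bytes"
stored_checksum = fletcher32(chunk)

# "Read": recompute and compare -- a mismatch means corruption,
# but it is only discovered now, at read time.
corrupted = bytearray(chunk)
corrupted[4] ^= 0x01                 # simulate a single flipped bit
detected = fletcher32(bytes(corrupted)) != stored_checksum
```

The point of the sketch is the timing: the corruption happened "at rest", but nothing notices until the chunk is actually read and re-checksummed.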
Has anyone considered extending this idea to parity and data correction? That is, storing parity bits and enabling correction of corruptions upon read?
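One simple shape such correction could take is RAID-5-style XOR parity across chunks: the existing per-chunk checksum identifies which chunk is bad, and a stored parity block reconstructs it. A sketch under that assumption (xor_parity and reconstruct are hypothetical names, not HDF5 API):

```python
from functools import reduce

def xor_parity(blocks):
    """XOR equal-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(blocks, bad_index, parity):
    """Rebuild the block at bad_index from the survivors plus parity."""
    survivors = [b for i, b in enumerate(blocks) if i != bad_index]
    return xor_parity(survivors + [parity])

chunks = [b"chunk000", b"chunk111", b"chunk222"]
parity = xor_parity(chunks)          # computed and stored at write time

# Suppose the read-time checksum flags chunk 1 as corrupt; XORing the
# surviving chunks with the parity block recovers its original bytes.
repaired = reconstruct(chunks, 1, parity)
```

This tolerates exactly one bad chunk per parity group; correcting multiple failures would need a stronger code (e.g. Reed-Solomon).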
Also, do the HDF5 library tools (maybe h5unjam?) have some minimal ability to repair corrupt HDF5 metadata, much like fsck can repair a corrupt file system?