First of all, I am very excited about the new ros3 VFD offered by libhdf5 1.12; it will be very useful for porting our existing software to the cloud if it works as expected.
I wonder how partial reads work against remote S3 storage. For example, if I only want to load one dataset, or a subset of one big dataset, from an HDF5 file, how does libhdf5 know to fetch only the requested bytes from S3 via ranged GETs? I guess the bytes actually read from S3 will not exactly match the requested subset. If so, how many extra bytes get downloaded? Or is partial I/O currently not implemented, so the entire file is always downloaded?
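To illustrate what I mean by a ranged GET, here is a minimal sketch (plain Python, no HDF5 or network involved) of the behavior I am imagining: an S3 object treated as a byte buffer, with a hypothetical `ranged_get` helper standing in for an HTTP `Range: bytes=start-end` request. This is purely illustrative, not how the ros3 VFD is actually implemented:

```python
# Hypothetical sketch: an S3 "object" modeled as an in-memory byte
# buffer, and a ranged_get that mimics an HTTP "Range: bytes=start-end"
# (inclusive) request, which is what I imagine the ros3 VFD issuing.

def ranged_get(obj: bytes, start: int, end: int) -> bytes:
    """Return obj[start..end] inclusive, like an S3 ranged GET."""
    return obj[start:end + 1]

# Pretend this 1 MiB buffer holds a dataset's raw bytes in S3.
obj = bytes(range(256)) * 4096  # 1 MiB

# Requesting 64 bytes at offset 1000 should transfer only those
# bytes, not the whole object.
chunk = ranged_get(obj, 1000, 1063)
print(len(chunk))  # 64
```

My question is essentially how closely the byte ranges libhdf5 requests track the subset I ask for, versus how much surrounding data (metadata, whole chunks, etc.) comes along with them.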
Also, how soon will S3 write support be available? And how does it affect (or help) partial I/O?