Hello,
When testing different chunk sizes for dataset compression, I get the
following results when writing a dataset of size 384 x 256 x 1024:
Chunk size   Global max memory   File size (MB)   Write time (s)
2^15         22%                 15               24
2^12         35%                 16               22
2^10         58%                 26               22
Tests were run on an Intel Xeon CPU E5530 @ 2.40GHz (4 cores), 4 GB RAM,
Linux SLED 11, HDF5 1.8.3. I'm using the Nexus library, which is a thin
layer on top of HDF5.
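For reference, here is a minimal sketch of an equivalent test written
directly against the HDF5 C API instead of going through Nexus. The chunk
shape (1 x 256 x 128 = 2^15 elements), the float datatype, and the gzip
level are assumptions for illustration, not necessarily what my
Nexus-based test does:

/* Minimal sketch of the test with the raw HDF5 C API (the real test
 * goes through Nexus).  Chunk shape, datatype and gzip level are
 * assumptions for illustration. */
#include "hdf5.h"
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    hsize_t dims[3]  = {384, 256, 1024};
    hsize_t chunk[3] = {1, 256, 128};   /* 1*256*128 = 2^15 elements */

    hid_t file  = H5Fcreate("chunk_test.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(3, dims, NULL);

    /* Dataset creation property list: chunking + gzip compression */
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 3, chunk);
    H5Pset_deflate(dcpl, 6);            /* gzip level 6 (assumed) */

    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_FLOAT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Zero-filled test buffer (384 x 256 x 1024 floats, ~384 MB) */
    size_t n = (size_t)384 * 256 * 1024;
    float *buf = calloc(n, sizeof(float));
    if (buf == NULL) { fprintf(stderr, "out of memory\n"); return 1; }

    H5Dwrite(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

    free(buf);
    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}

Varying the chunk[] dimensions reproduces the three configurations in the
table above (2^15, 2^12, and 2^10 elements per chunk).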
The memory footprint increases as the chunk size decreases, which is not
intuitive behaviour. Does anybody have an explanation?
Yannick