Hi all,
Does the set_cache function have any effect with contiguously stored
datasets, or does it work only with chunked ones?
If the latter, can the same caching functionality be achieved with
set_sieve_buffer_size when using the default sec2 driver?
I am planning to use a large cache, like 0.5GB.
OS is Linux and the HDF version is 1.6.5. I know it is old, but I have no
option to upgrade.
Thank you for your answers!
Balint
Hi Balint,
Hi all,
Does the set_cache function have any effect with contiguously stored datasets, or does it work only with chunked ones?
The latter.
If the latter, can the same caching functionality be achieved with set_sieve_buffer_size when using the default sec2 driver?
To some extent, possibly.
I am planning to use a large cache, like 0.5GB.
OS is Linux and the HDF version is 1.6.5. I know it is old, but I have no option to upgrade.
It's certainly worth a try. However, if you've got that much memory, have you considered using the "core" file driver instead? (See H5Pset_fapl_core in the reference manual)
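For reference, a minimal sketch of what Quincey is suggesting: selecting the "core" (in-memory) driver via a file access property list before opening the file. The file name and the 64 MB allocation increment below are placeholders, not values from the thread.

```c
#include <hdf5.h>
#include <stdio.h>

int main(void)
{
    /* File access property list selecting the in-memory "core" driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* Grow the in-memory file image in 64 MB steps; the final hbool_t
     * flag controls whether the image is written back to disk on close. */
    if (H5Pset_fapl_core(fapl, 64 * 1024 * 1024, 1) < 0) {
        fprintf(stderr, "H5Pset_fapl_core failed\n");
        return 1;
    }

    /* "data.h5" is a placeholder file name. */
    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl);
    if (file >= 0)
        H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```

With the core driver the entire file lives in memory, so all reads are memory-speed regardless of whether datasets are contiguous or chunked, at the cost of holding the whole file in RAM.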
Quincey
On Jan 13, 2012, at 5:27 AM, Balint Takacs wrote:
Hi Balint,
Your question is a bit outside my area of expertise, but since
no one else has responded, I'll take a crack at it.
In 1.6.5, H5Pset_cache() allows you to configure the chunk
cache only (the metadata cache was redesigned and re-implemented in
1.6.4, and API calls to configure it were not added until 1.8).
While I haven't had occasion to work on the chunk cache code, it
is my understanding that the configuration of the chunk cache only
affects I/O for chunked datasets.
As to chunk cache size: while I am not sure exactly what
happens in 1.6.5, at least some versions of HDF5 created a separate
chunk cache for each open dataset. Thus, if you use a large chunk
cache, you will want to watch your memory footprint if you open
multiple chunked datasets simultaneously.
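As a concrete sketch of the call being discussed: in the 1.6 API, H5Pset_cache() takes a metadata-cache element count (largely ignored after the 1.6.4 metadata cache redesign), a chunk-cache slot count, a chunk-cache byte size, and a preemption policy. The 0.5 GB figure matches the size mentioned in the question; the slot count and file name are illustrative guesses.

```c
#include <hdf5.h>

int main(void)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* 1.6 signature: H5Pset_cache(fapl, mdc_nelmts, rdcc_nelmts,
     * rdcc_nbytes, rdcc_w0).  rdcc_nbytes is the raw-data chunk cache
     * size (0.5 GB here); 521 is an illustrative hash-table size,
     * ideally a prime larger than the number of chunks you expect to
     * keep cached; 0.75 is the default preemption policy. */
    H5Pset_cache(fapl, 0, 521, 512UL * 1024 * 1024, 0.75);

    /* Every dataset opened from this file inherits the enlarged
     * chunk cache ("data.h5" is a placeholder). */
    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl);
    if (file >= 0)
        H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```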
I'm afraid I don't know enough about H5Pset_sieve_buffer_size()
to comment without digging into the code.
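For what it's worth, the call the original poster presumably means is H5Pset_sieve_buf_size(), which sets the per-file sieve buffer used to coalesce partial I/O on contiguous (non-chunked) raw data. A sketch, with the 0.5 GB size taken from the question and a placeholder file name:

```c
#include <hdf5.h>

int main(void)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* The sieve buffer batches small reads/writes of contiguous raw
     * data into larger file accesses; set it to 0.5 GB as in the
     * question.  Whether this approximates chunk-cache behaviour for
     * the sec2 driver is, as noted above, uncertain. */
    H5Pset_sieve_buf_size(fapl, 512UL * 1024 * 1024);

    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl); /* placeholder */
    if (file >= 0)
        H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```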
I hope this helps.
Best regards,
John Mainzer