Way to force a flushing H5Dwrite

Apologies if this winds up being a duplicate. I searched for a while before posting.

I have trouble debugging HDF5 applications that use dataset chunking because the failures typically occur in the H5Dwrite call(s), but I don’t get any error stack or reporting from HDF5 until H5Dclose. At that point, I get a slew of errors as it tries to preempt each chunk from memory to the file, and I can’t tell where one chunk’s errors end and another’s begin.

So, I am wondering if there is a way to sort of force H5Dwrite to “do all its stuff” in the write call itself rather than piling everything up in the cache and doing its work in H5Dclose or H5Fclose. That way, I can more easily localize which writes are failing.

If setting chunk cache size to zero will have the intended effect, that is fine as a debugging aid…though I would hate to accidentally leave such debugging code in any real application.

Mark,

Have you tried H5Dflush? See https://portal.hdfgroup.org/display/HDF5/H5D_FLUSH.

My understanding is that the call should flush associated metadata from the metadata cache and all raw data from the chunk cache.
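
A minimal sketch of how you might use it after each write (write_and_flush is a hypothetical helper, and the datatype/dataspace arguments are placeholders for whatever your dataset actually uses):

```c
#include <stdio.h>
#include "hdf5.h"

/* Hypothetical helper: write one buffer, then force it out immediately
 * so any chunk/filter errors surface here instead of at H5Dclose. */
static herr_t write_and_flush(hid_t dset_id, const int *buf)
{
    if (H5Dwrite(dset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
                 H5P_DEFAULT, buf) < 0) {
        fprintf(stderr, "H5Dwrite itself failed\n");
        return -1;
    }
    /* Flush this dataset's cached raw data and metadata to the file. */
    if (H5Dflush(dset_id) < 0) {
        fprintf(stderr, "flush failed, the preceding write is suspect\n");
        return -1;
    }
    return 0;
}
```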

Elena

It’s probably not exactly what you are trying to get from H5Dwrite though…

Right…ideally, I’d like to be able to set a breakpoint in H5Dwrite and then get a stack dump.

I guess I am neglecting to add that this is often within the context of some kind of user-defined filter (compression) I am writing :wink:
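
For concreteness, such a filter is registered roughly like this (the filter ID, names, and empty bodies here are just placeholders, not my actual code):

```c
#include "hdf5.h"

/* Hypothetical filter ID; user-defined filters use values from 256 up. */
#define MY_FILTER_ID 256

/* The filter callback only runs when a chunk is actually encoded or
 * decoded, which (with chunk caching on) can be as late as H5Dclose. */
static size_t my_filter(unsigned flags, size_t cd_nelmts,
                        const unsigned cd_values[], size_t nbytes,
                        size_t *buf_size, void **buf)
{
    (void)cd_nelmts; (void)cd_values; (void)buf_size; (void)buf;
    if (flags & H5Z_FLAG_REVERSE) {
        /* decompress *buf in place ... */
    } else {
        /* compress *buf in place ... */
    }
    return nbytes; /* return 0 to report failure to the library */
}

static const H5Z_class2_t my_filter_class = {
    H5Z_CLASS_T_VERS, /* struct version           */
    MY_FILTER_ID,     /* filter identifier        */
    1, 1,             /* encoder/decoder present  */
    "my_filter",      /* name shown in error logs */
    NULL, NULL,       /* can_apply, set_local     */
    my_filter         /* the callback above       */
};

/* Registered once at startup: H5Zregister(&my_filter_class); */
```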

You could disable the chunk caching for the dataset. Then I/O will occur within H5Dwrite.

Quincey

So, just to be clear…disabling chunk caching is achieved by setting rdcc_nslots to zero in H5Pset_chunk_cache()?

Yes, or setting rdcc_nbytes to zero.
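
Something like this sketch, assuming you open the dataset through a dataset access property list (open_dataset_uncached is a hypothetical helper name):

```c
#include "hdf5.h"

/* Hypothetical helper: open a chunked dataset with its chunk cache
 * disabled, so each H5Dwrite filters and writes chunks immediately. */
static hid_t open_dataset_uncached(hid_t file_id, const char *name)
{
    hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);

    /* Zero slots and zero bytes: nothing can ever be cached. */
    H5Pset_chunk_cache(dapl, 0 /* rdcc_nslots */, 0 /* rdcc_nbytes */,
                       H5D_CHUNK_CACHE_W0_DEFAULT);

    hid_t dset = H5Dopen2(file_id, name, dapl);
    H5Pclose(dapl);
    return dset;
}
```

With the cache disabled, a breakpoint set in H5Dwrite (or in your filter callback) will fire at the offending write, and keeping this behind a debug flag makes it easy to strip from production builds.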

Quincey