Chunk Cache Necessary for Write-Only App

I am developing a server application that, over time, will generate numerous HDF5 files of varying content. My app only writes files and immediately closes them; it never reads or reopens them.

In my case, is the chunk cache necessary or even helpful? If I leave it enabled, will I pay a performance cost in the form of memory use growing over time?
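For context on what "leaving it enabled" means in code: HDF5 exposes per-dataset cache tuning through the dataset access property list, and setting the cache byte size to zero effectively disables the raw-data chunk cache for that dataset. A hedged sketch using the HDF5 C API (verify the defaults against your library version):

```c
#include <hdf5.h>

/* Shrink the raw-data chunk cache for one dataset: rdcc_nbytes = 0
   disables chunk caching on this access property list.             */
hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
H5Pset_chunk_cache(dapl,
                   H5D_CHUNK_CACHE_NSLOTS_DEFAULT, /* hash-table slots  */
                   0,                              /* rdcc_nbytes: 0 => off */
                   H5D_CHUNK_CACHE_W0_DEFAULT);    /* preemption policy */
/* pass dapl to H5Dcreate2 / H5Dopen2, then H5Pclose(dapl) when done */
```

Whether this helps at all is exactly the profiling question raised below; it only changes anything if your write pattern actually touches the cache.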

There is one certain way to answer your question: profile it and see whether it is a bottleneck. The rest is educated speculation:
The purpose of caching is to minimise IOPS, maximise throughput, and reduce latency. Picture a Pareto front if you will.
When you write pieces smaller than the chunk size, they are accumulated in the chunk cache (call it a bucket). Once the bucket is full, it is dumped into lower-level IO calls.
Depending on the IO driver layer in play, these chunks may be broken into 4K pages and delegated to OS-level IO calls.
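The bucketing idea above can be sketched in a few lines. This is a toy model, not HDF5's actual implementation: sub-chunk writes accumulate in a buffer, and only a full buffer triggers a lower-level IO call.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of chunk-cache bucketing (illustration only, not HDF5 code). */
#define CHUNK 4096

static unsigned char bucket[CHUNK];
static size_t fill = 0;      /* bytes currently buffered          */
static size_t io_calls = 0;  /* count of lower-level IO calls     */

static void flush_bucket(void) {
    if (fill == 0) return;
    io_calls++;              /* stand-in for a write(2)/pwrite call */
    fill = 0;
}

static void cached_write(const unsigned char *buf, size_t len) {
    while (len > 0) {
        size_t room = CHUNK - fill;
        size_t n = len < room ? len : room;
        memcpy(bucket + fill, buf, n);   /* the copy the cache costs you */
        fill += n;
        buf  += n;
        len  -= n;
        if (fill == CHUNK) flush_bucket();
    }
}
```

With 64-byte writes, 64 of them fill exactly one 4096-byte chunk and produce a single IO call instead of 64, which is the whole point of the cache for sub-chunk writers.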

It is easy to see the special case where you write exactly one chunk per IO call. A bad implementation will double-copy the buffer; a good one writes directly from the passed pointer.
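The good-implementation shortcut can be shown on the same toy model (again an illustration, not HDF5 internals): when the bucket is empty and the write is exactly one chunk, skip the staging copy and hand the caller's pointer straight to the lower-level IO call.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CHUNK 4096

static unsigned char bucket[CHUNK];
static size_t fill = 0;
static size_t bytes_copied = 0;  /* memcpy traffic into the bucket */
static size_t io_calls = 0;

static void lowlevel_write(const unsigned char *p, size_t n) {
    (void)p; (void)n;
    io_calls++;                  /* stand-in for the real IO call */
}

static void cached_write(const unsigned char *buf, size_t len) {
    /* "good" path: whole chunk, empty bucket => write directly
       from the passed pointer, zero staging copies */
    if (fill == 0 && len == CHUNK) {
        lowlevel_write(buf, CHUNK);
        return;
    }
    /* otherwise fall back to bucketing */
    while (len > 0) {
        size_t room = CHUNK - fill;
        size_t n = len < room ? len : room;
        memcpy(bucket + fill, buf, n);
        bytes_copied += n;
        fill += n;
        buf  += n;
        len  -= n;
        if (fill == CHUNK) { lowlevel_write(bucket, CHUNK); fill = 0; }
    }
}
```

A "bad" implementation would take the memcpy branch unconditionally, paying one full buffer copy per chunk even when the caller already hands over whole chunks.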

FYI: H5CPP does the right thing, and it has been profiled…
BTW: why the ‘numerous’ files (as opposed to a single container with numerous datasets)?