I have a piece of code that writes very large files (>4 TB) in parallel. I am worried about the behaviour of the library when the file size grows beyond the per-process limit set by ulimit -f (i.e. RLIMIT_FSIZE, which is a process resource limit rather than a file-system setting). As far as I can tell, this is not specified in the documentation.
Will HDF5 produce some form of error during the write? Or will it fail silently and produce a truncated file? Or is that limit somehow bypassed?
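In case it is useful, here is the kind of minimal test I had in mind, a sketch rather than my actual code: it lowers RLIMIT_FSIZE for the current process (the same limit that ulimit -f controls) and then deliberately writes past it with plain serial HDF5 calls. The file name, dataset name, and sizes are placeholders I made up for the test.

```c
/* Sketch: shrink RLIMIT_FSIZE for this process, then try to write past it
 * with serial HDF5 and see what the library reports. Names and sizes are
 * arbitrary placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/resource.h>
#include <hdf5.h>

int main(void)
{
    /* By default a write past RLIMIT_FSIZE raises SIGXFSZ, which kills the
     * process outright; ignore it so the write fails with EFBIG instead and
     * HDF5 gets a chance to surface an error. */
    signal(SIGXFSZ, SIG_IGN);

    /* Cap this process's file size at 1 MiB (what ulimit -f would do). */
    struct rlimit rl = { .rlim_cur = 1 << 20, .rlim_max = 1 << 20 };
    if (setrlimit(RLIMIT_FSIZE, &rl) != 0) { perror("setrlimit"); return 1; }

    /* Attempt an 8 MiB dataset, well past the 1 MiB cap. */
    hsize_t dims[1] = { 1 << 20 };                 /* 1M doubles = 8 MiB */
    double *buf = calloc(dims[0], sizeof *buf);
    if (buf == NULL) { perror("calloc"); return 1; }

    hid_t file  = H5Fcreate("limit_test.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    herr_t status = H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
                             H5P_DEFAULT, buf);
    printf("H5Dwrite returned %d\n", (int)status); /* negative on failure */

    H5Dclose(dset);
    H5Sclose(space);
    status = H5Fclose(file);
    printf("H5Fclose returned %d\n", (int)status);

    free(buf);
    return 0;
}
```

Building this with h5cc and inspecting the result should show whether the failure is loud or silent in the serial case, but I would still like to know whether the behaviour is guaranteed, and whether it differs for parallel writes through the MPI-IO driver.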
Thanks in advance for any clarification of the behaviour.