Memory leak when reading chunked data [SEC=UNCLASSIFIED]


I have encountered a "memory leak" problem using h5py and netcdf4-python, so I attempted to replicate it in C with the HDF5 API. The problem is:
When using chunked storage, memory usage grows linearly with read and write operations and is never released until the process is killed by the kernel. When I run the same test without chunking, memory use (as measured) stays static.

I see the same behaviour with the attached C example. However, in the chunked read case, if I select the same dataspace hyperslab for every read (rather than a new hyperslab on each iteration), memory use stays static.

My test dataset has dimensions (250, 400, 300), and the test loops through it extracting (1, 1, 300) hyperslabs.
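For reference, the access pattern can be sketched in Python with h5py (one of the bindings where I first saw the growth). This is a minimal sketch, not the attached C test: the dataset is shrunk from (250, 400, 300) so it runs quickly, and the chunk shape is an assumption, since the original layout is not stated here.

```python
import os
import tempfile

import h5py
import numpy as np

# Smaller stand-in for the (250, 400, 300) test dataset; same access
# pattern, reduced first two dimensions so the sketch runs quickly.
dims = (25, 40, 300)
path = os.path.join(tempfile.mkdtemp(), "test.h5")

# Create a chunked dataset. The chunk shape here is an assumption,
# chosen so each (1, 1, 300) read touches a single chunk row.
with h5py.File(path, "w") as f:
    dset = f.create_dataset("data", shape=dims, dtype="f8",
                            chunks=(1, 40, 300))
    dset[...] = np.random.random(dims)

# Loop over (1, 1, 300) hyperslabs, making a fresh selection on each
# iteration -- the pattern under which memory use grows in my tests.
with h5py.File(path, "r") as f:
    dset = f["data"]
    total = 0
    for i in range(dims[0]):
        for j in range(dims[1]):
            row = dset[i, j, :]  # one (1, 1, 300) hyperslab
            total += row.size

print(total)  # 25 * 40 * 300 = 300000 elements read
```

In the C version the equivalent per-iteration step is a new `H5Sselect_hyperslab` call on the file dataspace before each `H5Dread`; it is that per-iteration selection that distinguishes the growing case from the static one.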

I have tested this with:
HDF5 Version: 1.8.13
Linux 2.6.32-431.11.2.el6.x86_64

The files I used for testing are ~120 MB, but I can upload similar ones with smaller dimensions if required.
Has anyone encountered this problem, or can anyone suggest where I am going wrong?
Any suggestions would be greatly appreciated.


h5_read_test.c (2.53 KB)