Valgrind shows memory leak during H5Dread


I am a newbie to this HDF forum. We created an application that creates, reads, and writes HDF5 data in a file.
We are trying to run a Valgrind memory-leak analysis on the part of the application that reads the HDF5 file using H5Dread.

The Valgrind output reports some memory leaks inside H5Dread. The stack trace captured from the Valgrind log is given below.

==3413== 10,485,840 bytes in 10 blocks are definitely lost in loss record 3,189 of 3,191
==3413== at 0x4C28F09: malloc (in /usr/lib64/valgrind/
==3413== by 0x793FE36: H5FL_blk_malloc (in /usr/PET/parc/lib64/
==3413== by 0x78F0F68: H5D__chunk_lock (in /usr/PET/parc/lib64/
==3413== by 0x78F235F: H5D__chunk_read (in /usr/PET/parc/lib64/
==3413== by 0x7902A7C: H5D__read (in /usr/PET/parc/lib64/
==3413== by 0x7902EF3: H5Dread (in /usr/PET/parc/lib64/
==3413== by 0x84174AA: H5Wrap::ReadWrapper(std::string const&, void*, unsigned long*, unsigned long*)

The signature of the read method is
herr_t H5Dread( hid_t dataset_id, hid_t mem_type_id, hid_t mem_space_id, hid_t file_space_id, hid_t xfer_plist_id, void * buf )
Here "buf" is an application-allocated buffer that we release in our application, and we close all the other handles with the appropriate close calls.

Why should a read method allocate memory? Is there a specific reason why H5D__chunk_read does this allocation?
Are we doing something wrong, is there a cleanup method that needs to be called to free these allocations, or are these false positives from Valgrind?

Thank you in advance and would really appreciate your help.

Thanks and Regards,
Leninraj K