Compiling tarray.cpp with g++


I am trying to build the tarray.cpp example with g++ directly, not with cmake. I do not need the entire HDF5 build to run, just the example. I put all of the dependencies in one folder and ran g++. I need to link against the right library, but I am not sure which one. For anyone who has compiled HDF5 examples with g++: what is the correct way to compile one?
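For reference, HDF5 installs a compiler wrapper, h5c++, that adds the include and link flags for you; alternatively you can link the C++ and core libraries explicitly with plain g++. A minimal sketch, assuming a typical Debian/Ubuntu package layout (adjust the paths to your install):

```shell
# Option 1: the h5c++ wrapper shipped with HDF5 sets include/link flags for you
h5c++ tarray.cpp -o tarray

# Option 2: plain g++, linking the C++ wrapper library and the core C library
g++ tarray.cpp -o tarray \
    -I/usr/include/hdf5/serial \
    -L/usr/lib/x86_64-linux-gnu/hdf5/serial \
    -lhdf5_cpp -lhdf5
```

The order matters: `-lhdf5_cpp` (the C++ API) must come before `-lhdf5` (the C core it depends on).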

Thank you for the help.


Would you be interested in an alternative, header-only C++ solution for HDF5?


Yes. Ideally I am trying to get this running on a GPU, but first I want to run it with g++.


Currently I don’t support GPUs directly, but std::vector and typed memory regions are supported and well documented. Once you have the data in main memory, you can use the CUDA device-transfer functions.

Compound datatypes are supported through LLVM-based, compiler-assisted reflection; you can read about it here.
Presentation slides are on this page. The official version supports gcc on Linux; the one I haven’t released yet will support most major compilers I could get hold of.

If you have questions, ask away in the C++ section of this forum or on the GitHub page.


Sorry, I mistakenly thought cmake was a compiler; I am willing to use it. What I am trying to do is take the code from the hdf5 or h5cpp GitHub repository and GPU-accelerate a few functions. I don’t quite understand the compilation and linking strategy used for this project, so I am asking for the simplest way to use the nvc++ compiler on the project’s C++ code and add a few CUDA kernel functions to process the data. If this is the wrong way to extend the project, I am open to any and all suggestions for accelerating parts of the code with CUDA.


Yes, I duly noted your email from a few days ago about cmake and compilers.

Interesting goal indeed; please do post your profiling results here… As for the compilation and linking strategy: consult the g++ and ld documentation; for CUDA PTX, refer to the NVIDIA documentation.
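One common way to mix the two toolchains is separate compilation: compile the device code with the CUDA compiler, the HDF5-facing host code with g++, and link once. A sketch, assuming hypothetical file names (kernels.cu, main.cpp) and the same Debian-style HDF5 paths as above:

```shell
# Device code through nvcc, host code through g++, single link step:
nvcc -c kernels.cu -o kernels.o
g++  -c main.cpp   -o main.o -I/usr/include/hdf5/serial
g++  main.o kernels.o -o app \
     -L/usr/lib/x86_64-linux-gnu/hdf5/serial \
     -lhdf5_cpp -lhdf5 -lcudart
```

Linking against `-lcudart` (the CUDA runtime) is what resolves the cudaMalloc/cudaMemcpy symbols when g++ performs the final link.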

Before you do anything, I suggest you study the bandwidth and latency difference between CPU cache memory and IO devices – notice how much time the CPU thread spends waiting for data. Doing DMA transfers between disk storage and a CUDA device is interesting… so is this DAOS plugin…