I want to read an HDF5 dataset directly into GPU memory using kvikio.
Things to note:
- We do not want to copy data from CPU to GPU or vice versa
- We want to use the same methods that are currently in h5py
- We want to save the result data directly to an HDF5 file without moving the result data to the CPU
- We want to do this only in Python
Can anyone please tell me whether this is possible? If so, how?
h5py doesn’t have anything built in to read to or write from GPU memory, and I don’t think the HDF5 C library does either.
For reading, you can get the offset & length of either the entire dataset (for contiguous datasets), or of each allocated chunk for chunked datasets, and use those to tell a library like kvikio to read the relevant data from the file. If you want to select part of the dataset, you’d need to handle that yourself; likewise if the data is stored compressed, you’d need to decompress what you read. Chunked & contiguous are the most common storage formats; if you have compact or virtual datasets, this is harder.
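To make the reading path concrete, here is a minimal sketch for the simplest case: a contiguous, uncompressed dataset. The file name and dataset name are placeholders, and the kvikio/cupy part assumes a machine with a CUDA GPU, so it is guarded; the h5py offset lookup works everywhere.

```python
# Sketch: read a contiguous, uncompressed HDF5 dataset straight into GPU
# memory with kvikio, using h5py only to locate the raw bytes in the file.
import h5py
import numpy as np

path, name = "data.h5", "mydset"  # placeholder names

# Create a small contiguous dataset so the example is self-contained.
with h5py.File(path, "w") as f:
    f.create_dataset(name, data=np.arange(1000, dtype="f4"))

with h5py.File(path, "r") as f:
    dset = f[name]
    offset = dset.id.get_offset()        # byte offset of the raw data in the file
    nbytes = dset.id.get_storage_size()  # length of the stored data in bytes
    shape, dtype = dset.shape, dset.dtype

try:
    import cupy as cp
    import kvikio

    gpu = cp.empty(shape, dtype=dtype)   # destination buffer on the GPU
    with kvikio.CuFile(path, "r") as cf:
        # Read the dataset's bytes directly into GPU memory (no host copy).
        cf.read(gpu, size=nbytes, file_offset=offset)
except (ImportError, RuntimeError):
    pass  # kvikio/cupy need a CUDA GPU; the offset lookup above still works
```

Note that this bypasses HDF5 entirely for the data transfer, so any dataset-level filters (compression, shuffle, etc.) would have to be handled on the GPU side yourself.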
For writing, it’s a similar story, but you also need to tell HDF5 to allocate space in the file before you try to write.
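One way to force that allocation is the “early” allocation time on the dataset creation property list, set through h5py’s low-level API. This is a hedged sketch, not a tested recipe: the file and dataset names are placeholders, and the kvikio write assumes a CUDA GPU, so it is guarded.

```python
# Sketch: pre-allocate space for a dataset with HDF5's "early" allocation
# time, find its file offset, then write GPU data to that offset with kvikio.
import h5py
from h5py import h5d, h5p, h5s, h5t
import numpy as np

path = "out.h5"  # placeholder name
n = 1000

with h5py.File(path, "w") as f:
    space = h5s.create_simple((n,))
    dcpl = h5p.create(h5p.DATASET_CREATE)
    dcpl.set_alloc_time(h5d.ALLOC_TIME_EARLY)  # allocate space at creation time
    dsid = h5d.create(f.id, b"result", h5t.NATIVE_FLOAT, space, dcpl)
    offset = dsid.get_offset()  # valid now, because space is already allocated
    f.flush()

# The h5py file is closed; now write the raw bytes at that offset from the GPU.
try:
    import cupy as cp
    import kvikio

    gpu = cp.arange(n, dtype="f4")
    with kvikio.CuFile(path, "r+") as cf:
        cf.write(gpu, file_offset=offset)  # bytes go straight from GPU to file
except (ImportError, RuntimeError):
    pass  # kvikio/cupy need a CUDA GPU; the allocation above still happens
```

Closing (or at least flushing) the h5py file before kvikio touches it avoids the two libraries disagreeing about the file’s state.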
It’s entirely possible that someone has written part or all of this already - h5py seems to be used for saving & loading some types of neural network models, which are often run on GPUs. But I don’t know of a project to point to. Perhaps the users just live with copying the weights between GPU & main memory.
Hello @thomas1, thanks for your valuable reply.
As you mentioned, h5py doesn’t have anything built in to read to or write from GPU memory. So is there any other way to use an HDF5 file directly from GPU memory?
For reading, let’s keep it simple for now: if I want to read the entire dataset, how do we tell kvikio to read the relevant data from the file?
It would be good if you could give an example of read/write to understand it better.
The idea is this: we have millions of input records, and the operations we perform on them can produce trillions of output values. That output is far too big for our limited GPU memory, so we cannot keep it there. After each operation, I therefore want to save the output directly to an HDF5 file rather than holding it in GPU memory.
It depends on your dataset. You’ll need to work out first which HDF5 storage layout it uses, which you can do from h5py with `dset.id.get_create_plist().get_layout()`. The simplest case is contiguous data (`get_layout()` returns 1). In that case you can use `dset.id.get_offset()` to find where in the file it starts, and `dset.id.get_storage_size()` to find the length of the stored data. You can see the docs on the low-level dataset API here.
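A small sketch of that layout check, using only h5py (no GPU needed). The file and dataset names are made up for the example; the chunk inspection via `get_chunk_info` assumes a reasonably recent h5py and HDF5 (≥ 1.10.5).

```python
# Sketch: inspect a dataset's storage layout with h5py's low-level API.
import h5py
from h5py import h5d
import numpy as np

path = "layout_demo.h5"  # placeholder name
with h5py.File(path, "w") as f:
    f.create_dataset("contig", data=np.zeros(100, dtype="f8"))
    f.create_dataset("chunked", data=np.zeros(100, dtype="f8"), chunks=(10,))

layouts = {}
with h5py.File(path, "r") as f:
    for name in ("contig", "chunked"):
        dset = f[name]
        # h5d.COMPACT == 0, h5d.CONTIGUOUS == 1, h5d.CHUNKED == 2
        layouts[name] = dset.id.get_create_plist().get_layout()
        if layouts[name] == h5d.CONTIGUOUS:
            print(name, "starts at byte", dset.id.get_offset())
        elif layouts[name] == h5d.CHUNKED:
            # For chunked data, each allocated chunk has its own offset/size.
            info = dset.id.get_chunk_info(0)
            print(name, "first chunk at byte", info.byte_offset,
                  "size", info.size)
```

For a chunked dataset you would loop over all allocated chunks and issue one kvikio read per chunk, placing each at the right position in the GPU array.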
I haven’t done what you’re talking about, so please don’t expect a complete working example. I’m trying to give you a useful starting point to figure this out, and an idea of what might make it easier or harder.
Thanks @thomas1 and @gheber I will try this.