Thanks a lot for the response. Initially I was using h5c++ to process the
file in the following manner; the .h5 file contains 7 columns and more
than 40,000,000 rows of double-precision numbers.
Here is the relevant part of the code.
int main (void)
{
    H5File file( FILE_NAME, H5F_ACC_RDONLY );
    DataSet dataset = file.openDataSet( DATASET_NAME );

    // Get filespace for rank and dimensions
    DataSpace filespace = dataset.getSpace();

    // Get number of dimensions in the file dataspace
    int rank = filespace.getSimpleExtentNdims();

    // Get and print the dimension sizes of the file dataspace
    hsize_t dims[2];    // dataset dimensions
    filespace.getSimpleExtentDims( dims );
    //cout << "dataset rank = " << rank << ", dimensions "
    //     << (unsigned long)(dims[0]) << " x "
    //     << (unsigned long)(dims[1]) << endl;

    // Define the memory space to read the dataset
    DataSpace mspace1(RANK, dims);

    // Read the dataset back and display
    dataset.read( data, PredType::NATIVE_DOUBLE, mspace1, filespace );
I have pasted only the part of the code that is relevant. This works well,
but as soon as the dataset grows beyond what the `double data` array can
hold, it gives a segmentation fault; 40,000,000 rows x 7 columns x 8 bytes
is roughly 2.2 GB, far more than fits in a stack-allocated array.
To overcome this I started using h5dump: I dump the dataset into a text
file, which I can then read and process line by line. That way I don't
have to define a huge array, and hence there is no memory problem.
So the point is that instead of reading the entire dataset into memory,
if I can read line by line from the dataset and process it line by line,
my problem will also be solved. But I don't know how to do this (reading
line by line from the dataset in the HDF5 file without using h5dump), so
please advise.
Sushil Arun Samant.