Hdf-forum Digest, Vol 19, Issue 7

Namaskar Sir
Thanks a lot for the response. Initially I was using h5c++ to process the
file in the following manner. The .h5 file contains 7 columns and more than
40,000,000 rows of double-precision numbers.

Here is part of the code.
#include "H5Cpp.h"
using namespace H5;

const char* FILE_NAME    = "data.h5";   /* placeholder, the real name is defined elsewhere */
const char* DATASET_NAME = "dataset";   /* placeholder, the real name is defined elsewhere */
const int   RANK         = 2;

double data[35000000][7];

int main (void)
{
    H5File file( FILE_NAME, H5F_ACC_RDONLY );
    DataSet dataset = file.openDataSet( DATASET_NAME );

    /*
     * Get the filespace for rank and dimensions.
     */
    DataSpace filespace = dataset.getSpace();

    /*
     * Get the number of dimensions in the file dataspace.
     */
    int rank = filespace.getSimpleExtentNdims();

    /*
     * Get (and optionally print) the dimension sizes of the file dataspace.
     */
    hsize_t dims[2];    // dataset dimensions
    rank = filespace.getSimpleExtentDims( dims );
    //cout << "dataset rank = " << rank << ", dimensions "
    //     << (unsigned long)(dims[0]) << " x "
    //     << (unsigned long)(dims[1]) << endl;

    /*
     * Define the memory space to read the dataset.
     */
    DataSpace mspace1( RANK, dims );

    /*
     * Read the whole dataset into the data array.
     */
    dataset.read( data, PredType::NATIVE_DOUBLE, mspace1, filespace );

    return 0;
}

I have pasted only the part of the code that is relevant. This works well,
but as soon as the array grows beyond double data[40000000][7]; it gives a
segmentation fault.
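
If the whole table really had to stay in memory, I suppose one could also put
the buffer on the heap instead of in a huge static array. This is only a
sketch, with the same placeholder file and dataset names as above, and it
would still need a 64-bit build with enough RAM:

#include <vector>
#include "H5Cpp.h"
using namespace H5;

int main (void)
{
    const char* FILE_NAME    = "data.h5";   // placeholder names, as in the snippet above
    const char* DATASET_NAME = "dataset";

    H5File  file( FILE_NAME, H5F_ACC_RDONLY );
    DataSet dataset = file.openDataSet( DATASET_NAME );

    DataSpace filespace = dataset.getSpace();
    hsize_t dims[2];                         // {nrows, ncols}
    filespace.getSimpleExtentDims( dims );

    // One contiguous heap buffer of nrows * ncols doubles.
    std::vector<double> data( dims[0] * dims[1] );

    // Read the whole dataset; element (r, c) is data[ r * dims[1] + c ].
    dataset.read( data.data(), PredType::NATIVE_DOUBLE );
    return 0;
}

But for 40,000,000 rows this still needs more than 2 GB of memory, so it does
not really solve my problem.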

To overcome this I started using h5dump. With it I dump the dataset into a
text file, which I can then read and process line by line. Because of this I
don't have to define a huge array, and hence there is no memory problem.

  So the point is that, instead of reading the entire dataset into memory, if
I can read the dataset row by row and process it row by row, my problem will
also be solved. But I don't know how to do this (reading the dataset from the
HDF5 file row by row without using h5dump). Please help.
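
From what I understand of the C++ API, selecting a hyperslab on the file
dataspace might allow reading one row (or a block of rows) at a time, but I
am not sure this is the right approach. A minimal sketch of what I have in
mind, again with placeholder file and dataset names and assuming 7 columns:

#include <iostream>
#include "H5Cpp.h"
using namespace H5;

int main (void)
{
    const char* FILE_NAME    = "data.h5";   // placeholder names
    const char* DATASET_NAME = "dataset";
    const int   NCOLS        = 7;

    H5File  file( FILE_NAME, H5F_ACC_RDONLY );
    DataSet dataset = file.openDataSet( DATASET_NAME );

    DataSpace filespace = dataset.getSpace();
    hsize_t dims[2];                        // {nrows, ncols}
    filespace.getSimpleExtentDims( dims );

    double row[NCOLS];                      // buffer for a single row
    hsize_t count[2]  = { 1, dims[1] };     // one row, all columns
    hsize_t offset[2] = { 0, 0 };
    DataSpace memspace( 2, count );         // memory space for one row

    for ( hsize_t r = 0; r < dims[0]; ++r )
    {
        offset[0] = r;                      // move the selection to row r
        filespace.selectHyperslab( H5S_SELECT_SET, count, offset );
        dataset.read( row, PredType::NATIVE_DOUBLE, memspace, filespace );

        // ... process the 7 values of this row here ...
        std::cout << row[0] << std::endl;
    }
    return 0;
}

I imagine that reading a block of many rows per iteration (count[0] greater
than 1) would be much faster than one row at a time, but even this simple
version would be enough for me if it is the correct way to do it.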

Pranam,
Sushil Arun Samant.