Reading chunked String dataset

Hi All,

Centos 7, HDF5 1.10.5

I am trying to read a relatively large (size = 59,555) 1D chunked string dataset, but I run into segfaults and other errors when running the following code. With the same code I am able to read smaller datasets of ~24,000 elements without issue. I have also tried hyperslabs, but can't seem to read more than the first line of the dataset.
I think it must be due to the way I set up the char* buffer. Hopefully someone can help me resolve this. Happy to provide additional info if needed.

HDF5 "sampleForRyan (copy).hdf5" {
DATASET "HelmetGeneration/Helmet" {
   DATATYPE  H5T_STRING {
      STRSIZE 142;
      STRPAD H5T_STR_NULLPAD;
      CSET H5T_CSET_ASCII;
      CTYPE H5T_C_S1;
   }
   DATASPACE  SIMPLE { ( 59555 ) / ( 59555 ) }
}
}

H5::DataSpace dataspace = inputDataset->getSpace();

H5::StrType stype = inputDataset->getStrType();

size_t size = stype.getSize();

int rank = dataspace.getSimpleExtentNdims();

// 1D 
hsize_t dims_out[1];
int ndims = dataspace.getSimpleExtentDims( dims_out, NULL);

// char outdata[dims_out[0]][size];  // this works fine for smaller datasets
char outdata[59555][142];

// Other attempts:
// char *outdata = new char[dims_out[0] * size];
// char *outdata = new char[NX * NY];
// auto outdata = new char[dims_out[0]][142]();

inputDataset->read( outdata, stype);

Thanks,
Ryan

Maybe a stack size issue? 59,555 * 142 = 8,456,810 bytes, which is just over the common 8 MB (8,388,608-byte) default stack limit. What's the output of ulimit -s? There's a linker option (-Wl,...) to bump the stack size. G.