SWMR with Compression Fails to Compress

I’m using the C++ interface to HDF5 (1.12.0) and have found that when SWMR mode is enabled, compression is not honored. If I create the file with a line like:

_hdf5File = new H5File(_fileName, H5F_ACC_TRUNC|H5F_ACC_SWMR_WRITE);

and then create a data set with lines like:

DSetCreatPropList* plist = new DSetCreatPropList;
hsize_t chunkDims[2] = {1, nelems};
plist->setChunk(rank, chunkDims);
plist->setDeflate(4); // (0-9) lower levels are faster but result in less compression
DataSpace dataSpace(rank, dims, maxdims);
ds = _hdf5File->createDataSet(dataset, hid2DataType(dataType), dataSpace, *plist);

and then append to the open file by extending the dataset with each write, I find that I have to run h5repack after closing the file to free the ‘unaccounted’ space. If I turn off SWMR, the compression works as I expect. Something about enabling SWMR prevents the ‘unaccounted’ space from being reclaimed.

Hi,

The compression feature should work with SWMR. Could you please provide the code to reproduce the problem, along with your C++ compiler and OS specifications?

Thank you!

Elena

I wrote a short demo to simulate how I’m using HDF5 to write individual scalar values one element at a time (as they are reported by an asynchronous delivery source), so the code may look inefficient here, but it’s how the design needed to be done.

A couple of comments: if I remove the “| H5F_ACC_SWMR_WRITE” when creating the file, the result looks perfectly fine, with very little unaccounted space. If I include the SWMR flag, the file is huge, but in this example, for some reason, I can’t run h5stat (I can run h5dump) and the data looks normal.

File size without SWMR -> 11KB
File size with SWMR -> 3.9MB

Based on my experience with more complex examples (not this one), if I could run h5repack, the file would shrink from 3.9MB to roughly 11KB.
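For reference, the repack step I mean is just a straight copy of the file (the file names here are placeholders), which rewrites it and reclaims the unaccounted space:

%h5repack mylog.h5 mylog_repacked.h5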

C++ compiler: g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
OS: Linux acnlin04.pbn.bnl.gov 3.10.0-957.10.1.el7.x86_64 #1 SMP Thu Feb 7 07:12:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
The data is being written to a local linux file system disk (/dev/sda)

HDF5Demo.cxx

#include <iostream>
#include <string.h>
#include <stdlib.h>
#include <time.h> // for time(), used to seed rand()
#include <H5Cpp.h>
#include <H5Ppublic.h>
#include <H5Fpublic.h> // for SWMR support
#include <H5File.h>
#include <H5Location.h>

using namespace std;
using namespace H5;

int main (int argc, char* argv[])
{
	H5File* _hdf5File = new H5File("CompressionIssue.h5", H5F_ACC_TRUNC|H5F_ACC_SWMR_WRITE);

	// Create a scalar data set
	hsize_t dims[1] = {1}; // dataset dimensions at creation
	hsize_t maxdims[1] = {H5S_UNLIMITED}; // dataset maximum potential size
	int rank = 1; // 1 dimensional arrays
	DataSpace dataspace(rank, dims, maxdims);
	DSetCreatPropList* plist = new DSetCreatPropList;
	hsize_t chunkdims[1] = {3600};
	plist->setChunk(rank, chunkdims);
	plist->setDeflate(4); 
	DataSet ds = _hdf5File->createDataSet("MyScalars", DataType(PredType::NATIVE_DOUBLE), dataspace, *plist);
	delete plist;
	ds.close();

	// write to data set one element at a time - this will be how the data is reported
	DataSpace* filespace = NULL;
	DataSpace* memspace = NULL;
	srand(time(NULL));
	for (int i=0; i<1000; i++) {
		ds = _hdf5File->openDataSet("MyScalars");
		unsigned long npointsToAdd = 1;
		hsize_t npoints = ds.getSpace().getSimpleExtentNpoints();
		if (i==0) npoints--; // replace the first point if this is the initial add to the data set
		// We'll be appending to an existing data set so lets set our "pointer" to the end of the existing data set in the file
		// extend the data - let's add some points to the data set
		hsize_t extend[1] = { npointsToAdd }; // number of points being added
		hsize_t size[1];
		size[0] = npoints + extend[0]; // new size of the data set
		hsize_t offset[1];
		offset[0] = npoints; // where should new value(s) be inserted

		ds.extend(size);
		// Select a "hyperslab" in extended portion of the dataset.
		filespace = new DataSpace(ds.getSpace());
		filespace->selectHyperslab(H5S_SELECT_SET, extend, offset);

		// Define memory space.
		memspace = new DataSpace(1, extend, NULL);
		
		// generate random value and write to file
		double val = (double)rand()/RAND_MAX*1000.0;
		ds.write(&val, DataType(PredType::NATIVE_DOUBLE), *memspace, *filespace);

		// clean up
		delete memspace;
		delete filespace;

		// flush the file
		H5F_scope_t scope = H5F_SCOPE_GLOBAL;
		ds.flush(scope);
		ds.close();
	}

	_hdf5File->close();
	delete _hdf5File;
  return 0; 
}

Thank you for the example. Unfortunately, I cannot easily cut and paste it to run it. Would it be possible to attach the program? Anyway… I think HDF5 has a bug, and the program has some flaws.

Several observations:

  1. According to the SWMR programming model, one has to create a file, create a dataset, then close the dataset and the file, then reopen the file with the SWMR flag and open the dataset for writing; alternatively, one can call H5Fstart_swmr_write after the dataset is created (see the sketch after this list). I am surprised that HDF5 allowed the program to proceed with writing data without an error; HDF5 should enforce the programming model.

  2. One possible scenario in which the file size grows is when a chunk is written to a new place in the file each time new data is added and the chunk size changes (which is the case when compression is used). I don’t know why the SWMR flag makes a difference, so we will need to reproduce the problem and investigate it.

  3. Don’t add one element at a time, and don’t flush on each iteration; performance will be awful. Fill the whole chunk, write it, and then flush the file.
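To illustrate #1, here is a minimal sketch of the intended sequence (it reuses the dataset name and chunk settings from your demo and is only meant to show the programming model, not a tuned example):

#include <H5Cpp.h>

using namespace H5;

int main()
{
	// create the file and the chunked, compressed dataset, then close both
	H5File file("CompressionIssue.h5", H5F_ACC_TRUNC);
	hsize_t dims[1] = {1};
	hsize_t maxdims[1] = {H5S_UNLIMITED};
	hsize_t chunkdims[1] = {3600};
	DSetCreatPropList plist;
	plist.setChunk(1, chunkdims);
	plist.setDeflate(4);
	DataSpace dataspace(1, dims, maxdims);
	file.createDataSet("MyScalars", PredType::NATIVE_DOUBLE, dataspace, plist).close();
	file.close();

	// reopen the file for SWMR writing (alternatively, keep the file open and
	// call the C API H5Fstart_swmr_write(file.getId()) after the dataset is created)
	H5File swmrFile("CompressionIssue.h5", H5F_ACC_RDWR | H5F_ACC_SWMR_WRITE);
	DataSet ds = swmrFile.openDataSet("MyScalars");
	// ... extend the dataset, write whole chunks, and flush here ...
	ds.close();
	swmrFile.close();
	return 0;
}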

Thank you!

Elena

Hi Elena.

I hope this email gets back to you. I’ve attached the source. Thanks for the recommendations.

I will make the suggested changes and see what effect it has on the files.

WRT #1 – I was surprised by this as well. I was actually closing and opening the file and data sets with each write but realized that it wasn’t necessary (it worked without closing), so I figured I would eliminate the extra overhead of opening/closing. I will add that back in.

WRT #2 – I was wondering the same thing. Depending on the data, the compression algorithm, and how chunks are handled, compression will create variable-size chunks. I figured the algorithms being used know how to handle that.

WRT #3 – Typical scenarios for our logging system deliver data at 1Hz, which is why I was writing one point at a time. Collecting data and only writing at the chunk-size interval (say, 600 points) would mean 10 minutes between writes. Our users understand that there might be some delay in the availability of logged data, but 10 minutes may not be acceptable to them. I will experiment with chunk sizes to see if smaller chunks can work.
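For what it’s worth, the batched version I have in mind would look roughly like this fragment (the buffer size and variable names are placeholders, ds is the open dataset from the demo, and it needs #include <vector>):

// accumulate incoming values and append/flush once per batch instead of per point
std::vector<double> buffer;
const hsize_t batchSize = 600; // placeholder: roughly one chunk's worth of points
// ... on each 1 Hz update: buffer.push_back(newValue); then, once full: ...
if (buffer.size() >= batchSize) {
	hsize_t npoints = ds.getSpace().getSimpleExtentNpoints();
	hsize_t extend[1] = { buffer.size() };
	hsize_t size[1] = { npoints + extend[0] };
	hsize_t offset[1] = { npoints };
	ds.extend(size); // grow the dataset by one batch
	DataSpace filespace = ds.getSpace();
	filespace.selectHyperslab(H5S_SELECT_SET, extend, offset);
	DataSpace memspace(1, extend);
	ds.write(buffer.data(), PredType::NATIVE_DOUBLE, memspace, filespace);
	ds.flush(H5F_SCOPE_GLOBAL); // one flush per batch, not per point
	buffer.clear();
}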

Thanks for your help.

Seth

HDF5Demo.cxx (2.32 KB)

Minor update: after making the changes suggested in your reply, I see no difference in the results. I create the file with H5F_ACC_TRUNC|H5F_ACC_SWMR_WRITE, create the dataset as in the demo, close the dataset, and close the file. Then on each iteration I open the file with H5F_ACC_RDWR|H5F_ACC_SWMR_WRITE and open the dataset.

If I comment out the plist->setDeflate(4) line, the file is 31KB. If I leave it in, the file size is 4MB.

Last comment: I recognize that the chunk size I am using in the demo is too big considering I am only adding 1,000 points. It should be noted that a smaller chunk size (10, 100) when writing 1000 points does make the compression work better, but the file is still about 3-4 times larger than when SWMR is off.

Thank you for the explanation (it makes perfect sense now) and the program! I was able to run it and can reproduce the issue. We will investigate.

Elena

Forgot to add:

Compression does indeed work when SWMR is enabled. Try running

%h5dump -pH CompressionIssue.h5
HDF5 "CompressionIssue.h5" {
GROUP "/" {
   DATASET "MyScalars" {
      DATATYPE  H5T_IEEE_F64LE
      DATASPACE  SIMPLE { ( 1000 ) / ( H5S_UNLIMITED ) }
      STORAGE_LAYOUT {
         CHUNKED ( 3600 )
         SIZE 7670 (1.043:1 COMPRESSION)
      }
      FILTERS {
         COMPRESSION DEFLATE { LEVEL 4 }
      }
      FILLVALUE {
         FILL_TIME H5D_FILL_TIME_IFSET
         VALUE  H5D_FILL_VALUE_DEFAULT
      }
      ALLOCATION_TIME {
         H5D_ALLOC_TIME_INCR
      }
   }
}
}

but there is a lot of unused space in the file:

%h5stat CompressionIssue.h5
    .....
    Summary of file space information:
      File metadata: 645 bytes
      Raw data: 7670 bytes
      Amount/Percent of tracked free space: 0 bytes/0.0%
      Unaccounted space: 3908458 bytes
    Total space: 3916773 bytes

I turned shuffle mode on and that helps too. It really does seem to be a sensitivity to chunk size that drives the unaccounted space. The problem in my situation is that choosing an optimal chunk size is not always obvious, since the data reporting is at 1Hz or thereabouts. As I mentioned previously, I could accumulate data and write when the chunk size is reached, but that would delay readers’ access to the most recent values in the file.

Could you please try H5Pset_chunk_opts and see if it helps reduce the amount of unused space that inflates the file size when SWMR is used?
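Something along these lines should work; H5Pset_chunk_opts is a C API call, so in this sketch it is applied through the property list’s id (chunkdims is the array from your demo):

DSetCreatPropList plist;
plist.setChunk(1, chunkdims);
plist.setShuffle(); // optional, as you found it helps
plist.setDeflate(4);
// do not apply the filter pipeline to partial edge chunks
H5Pset_chunk_opts(plist.getId(), H5D_CHUNK_DONT_FILTER_PARTIAL_CHUNKS);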

Thank you!

That definitely helps. In the example used in this thread, the file sizes I see for 1000 scalar doubles with 10-element chunks are:

68KB with gzip compression, SWMR and shuffle mode on (DSetCreatPropList::setShuffle)

29KB with gzip compression, SWMR, shuffle mode, and filtering of partial chunks disabled -> big improvement!

For comparison, when SWMR is off

18KB with gzip compression and shuffle mode

22KB with gzip compression, shuffle mode and filtering of partial chunks disabled (yes, slightly larger file size!)

I think we’re in the right ballpark now with these two options (shuffle and H5Pset_chunk_opts). I’m going to apply this to my more complex program, which includes array data and attributes, to see the effect.

Thanks!

Seth