Problem writing a correct HDF5 file for an Xdmf file

Hi,

I am trying to write an Xdmf file from my tool for visualization in ParaView or VisIt, using the HDF5 format for the heavy data.

I initially created a VTK file from my model and used ParaView → File → Save Data → Xdmf Data File (*.xmf) to create a working combination of light and heavy data in Xdmf/HDF5. Now I am trying to reverse-engineer that file from my tool. I am using the Java HDF5 object package from HDFView to create my own HDF5 file. The setup is described in this thread.

When opening the Xdmf converted from the original VTK, everything works fine and I see my model. However, when I open my self-written Xdmf/HDF5 file (File → Open → mesh.xmf), ParaView and VisIt immediately crash. A similar problem was described in this thread.

I compared the two HDF5 files in HDFView, but the values as well as the data types seem to be identical. I set up my file to match the “original” from the VTK conversion as closely as possible, including maximum dimension sizes and chunking.

I also compared the two HDF5 files using VBinDiff. Although the values in the datasets seem to be identical, the structure of the two files differs. Are there any guidelines on how to create a proper HDF5 file for Xdmf? Are there any preferences I have to set in the Java HDF5 library from HDFView?

Can someone please give me a hint where the corruption of my files comes from?

I attached the two versions:
error2.rar (196.4 KB)
working.rar (199.0 KB)


I know it's not a complete MWE, but maybe these code snippets show how I set up the datasets in the HDF5 files (a sketch using the plain library constants follows the list):

  • Create the HDF5-File

    // Imports needed by the snippets below; the object package is
    // hdf.object.* in current HDFView releases (ncsa.hdf.object.* in older ones)
    import java.io.File;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import hdf.hdf5lib.HDF5Constants;
    import hdf.object.Dataset;
    import hdf.object.Datatype;
    import hdf.object.FileFormat;
    import hdf.object.Group;
    import hdf.object.h5.H5Datatype;
    import hdf.object.h5.H5File;

    // Retrieve an instance of the implementing class for the HDF5 format
    FileFormat fileFormat = FileFormat.getFileFormat(FileFormat.FILE_TYPE_HDF5);
    
    // If the implementing class wasn't found, it's an error.
    assert (fileFormat != null) : "Cannot find HDF5 " + FileFormat.class.getSimpleName() + ".";
      
    try {
        // Create a new file with a given file name
        // (backslashes in the Windows path must be escaped in Java)
        String fname = "C:\\temp\\mesh.h5";
    
        // If the implementing class was found, use it to create a new HDF5 file
        // with a specific file name.
        //
        // If the specified file already exists, it is truncated.
        // The default HDF5 file creation and access properties are used.
        //
        File f = new File(fname);
        if(f.exists() && !f.isDirectory()) {f.delete();}
        H5File testFile = (H5File) fileFormat.createFile(fname, FileFormat.FILE_CREATE_DELETE);
          
        // open the file
        testFile.open();
          
        // Retrieve the root group
        Group root = (Group)(testFile.getRootObject());
    
        // Fill the file
        ...
    
        // close file resource
        testFile.close();
          
    } catch (Exception ex) {
        Logger.getLogger(HDFHeavyDataWriter.class.getName()).log(Level.SEVERE, null, ex);
    }
    
  • Nodes/Geometry

    // Number of nodes
    int nrNodes = nodes.size();
     
    int CHUNK_X = 2446;
    int CHUNK_Y = 1;
      
    // Set the dimensions
    long[] dims = { nrNodes, 3 };
    long[] maxdims = { HDF5Constants.H5S_UNLIMITED, HDF5Constants.H5S_UNLIMITED };
    long[] chunk_dims = { CHUNK_X, CHUNK_Y };
    
    // Data - float[] or double[] dependent on precision
    XYZNodeDataCreator c = new XYZNodeDataCreator(nodes);
    c.create();
    Object data = c.getCoordinates();
      
    // Datatype
    dataType = new H5Datatype(
            HDF5DataClass.FLOAT                                             // 1
           ,writerPrefs.getPrecision().getNumber()                          // 8
           ,HDF5DataByteOrder.NATIVE                                        // -1
           ,HDF5DataSign.NATIVE                                             // -1
      );
      
    // Set the dataset
    dataset = doc.createScalarDS(
            "Data0"
           ,grp                                                             // Group
           ,dataType                                                        // Datatype
           ,dims                                                            // Dimension sizes of the new dataset
           ,maxdims                                                         // Maximum dimension sizes of the new dataset, null if maxdims is the same as dims.
           ,chunk_dims                                                      // Chunk sizes of the new dataset, null for no chunking
           ,writerPrefs.getCompressionLevel().getNumber()                   // Compression level - 0
           ,null                                                            // No initial data values
    );
    dataset.init();
    dataset.write(data);
    
  • Elements/Topology

    // Calculate the vector length
    ElementVectorLengthCalculator lc = new ElementVectorLengthCalculator(elements);
    lc.calc();
    int lenArr = lc.get();
     
    int CHUNK_X = 1000;
    
    // Set the dimensions
    long[] dims = {lenArr};
    long[] maxdims = {HDF5Constants.H5S_UNLIMITED};
    long[] chunk_dims = {CHUNK_X};
      
    // Data
    ElementVectorCalculator c = new ElementVectorCalculator(
            lenArr
           ,elements
    );
    c.calc();
    Object data = c.get();
    
    // Datatype
    dataType = new H5Datatype(
          HDF5DataClass.INT                                               // 0
         ,writerPrefs.getPrecision().getNumber()                          // 4
         ,HDF5DataByteOrder.NATIVE                                        // -1
         ,HDF5DataSign.NATIVE                                             // -1
    );
      
    // Dataset
    dataset = doc.createScalarDS(
            "Data1"
            ,grp                                                            // Group
            ,dataType                                                       // Datatype
            ,dims                                                           // Dimension sizes of the new dataset
            ,maxdims                                                        // Maximum dimension sizes of the new dataset, null if maxdims is the same as dims.
            ,chunk_dims                                                     // Chunk sizes of the new dataset, null for no chunking
            ,writerPrefs.getCompressionLevel().getNumber()                  // Compression level
            ,null                                                           // No initial data values
    );
    dataset.init();
    dataset.write(data);
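
For clarity, here is roughly the equivalent datatype construction with the plain constants from the HDF object package (HDF5DataClass, HDF5DataByteOrder and HDF5DataSign appear to be thin wrappers from my tool; the // 1, // 8, // -1 comments above correspond to these library constants):

    // Datatype.CLASS_FLOAT = 1, Datatype.CLASS_INTEGER = 0, Datatype.NATIVE = -1
    Datatype floatType = new H5Datatype(
            Datatype.CLASS_FLOAT    // class: floating point
           ,8                       // size in bytes: double precision
           ,Datatype.NATIVE         // byte order: native (little-endian here, hence H5T_IEEE_F64LE)
           ,Datatype.NATIVE         // sign: not applicable to floats
    );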
    

Edit:

I noticed that, although the values inside the two HDF5 files are identical, the file sizes differ: the one created by ParaView is 820 KB, mine is 814 KB. As the data dimensions are equal, where might this difference come from?

Edit2:
I used h5dump to create an ASCII representation of both HDF5 files; the data is identical. h5diff reports no differences either. Still, the file written by ParaView is 820 KB while mine is 814 KB.
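
For reference, the comparison was done along these lines:

h5dump working.h5 > working.txt
h5dump mesh.h5 > mesh.txt
h5diff working.h5 mesh.h5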

I noticed a slight difference in the headers:

This is the header from the working HDF5-file generated by ParaView:

HDF5 "working.h5" {
GROUP "/" {
   DATASET "Data0" {
	  DATATYPE  H5T_IEEE_F64LE
	  DATASPACE  SIMPLE { ( 17952, 3 ) / ( H5S_UNLIMITED, H5S_UNLIMITED ) }
   }
   DATASET "Data1" {
	  DATATYPE  H5T_STD_I32LE
	  DATASPACE  SIMPLE { ( 89352 ) / ( H5S_UNLIMITED ) }
   }
}
}

and this is the header from my file:

HDF5 "mesh.h5" {
GROUP "/" {
   DATASET "Data0" {
	  DATATYPE  H5T_IEEE_F64LE
	  DATASPACE  SIMPLE { ( 17952, 3 ) / ( H5S_UNLIMITED, H5S_UNLIMITED ) }
	  DATA {
	  }
   }
   DATASET "Data1" {
	  DATATYPE  H5T_STD_I32LE
	  DATASPACE  SIMPLE { ( 89352 ) / ( H5S_UNLIMITED ) }
	  DATA {
	  }
   }
}
}

Notice the two empty data sections:

	  DATA {
	  }

Any idea where these come from?


Edit3:
The difference in the headers seems to come from my modifying the file after it was created. If I generate a clean new file, both headers are identical.

I used h5debug to get some more information.

This is the result for the working file:

h5debug working.h5
Reading signature at address 0 (rel)
File Super Block...
File name (as opened):                             working.h5
File name (after resolving symlinks):              working.h5
File access flags                                  0x00000000
File open reference count:                         1
Address of super block:                            0 (abs)
Size of userblock:                                 0 bytes
Superblock version number:                         0
Free list version number:                          0
Root group symbol table entry version number:      0
Shared header version number:                      0
Size of file offsets (haddr_t type):               8 bytes
Size of file lengths (hsize_t type):               8 bytes
Symbol table leaf node 1/2 rank:                   4
Symbol table internal node 1/2 rank:               16
Indexed storage internal node 1/2 rank:            32
File status flags:                                 0x00
Superblock extension address:                      UNDEF (rel)
Shared object header message table address:        UNDEF (rel)
Shared object header message version number:       0
Number of shared object header message indexes:    0
Address of driver information block:               UNDEF (rel)
Root group symbol table entry:
   Name offset into private heap:                  0
   Object header address:                          96
   Cache info type:                                Symbol Table
   Cached entry information:
	  B-tree address:                              136
	  Heap address:                                680

and this is the result for my file:

h5debug mesh.h5
Reading signature at address 0 (rel)
File Super Block...
File name (as opened):                             mesh.h5
File name (after resolving symlinks):              mesh.h5
File access flags                                  0x00000000
File open reference count:                         1
Address of super block:                            0 (abs)
Size of userblock:                                 0 bytes
Superblock version number:                         3
Free list version number:                          0
Root group symbol table entry version number:      0
Shared header version number:                      0
Size of file offsets (haddr_t type):               8 bytes
Size of file lengths (hsize_t type):               8 bytes
Symbol table leaf node 1/2 rank:                   4
Symbol table internal node 1/2 rank:               16
Indexed storage internal node 1/2 rank:            32
File status flags:                                 0x00
Superblock extension address:                      UNDEF (rel)
Shared object header message table address:        UNDEF (rel)
Shared object header message version number:       0
Number of shared object header message indexes:    0
Address of driver information block:               UNDEF (rel)
Root group symbol table entry:
   Name offset into private heap:                  0
   Object header address:                          48
   Cache info type:                                Nothing Cached

The differences are in the superblock version number and the root group symbol table entry.

Could a different HDF5 library version explain these differences? Is it possible to set the superblock version number? From what I have found here, maybe some feature in my file forces version 3?

OK, this seems to be it. I tried the whole thing with HDFView 2.13.0 and the older HDF5 version it bundles. h5debug now reports:

h5debug mesh.h5
Reading signature at address 0 (rel)
File Super Block...
File name (as opened):                             mesh.h5
File name (after resolving symlinks):              mesh.h5
File access flags                                  0x00000000
File open reference count:                         1
Address of super block:                            0 (abs)
Size of userblock:                                 0 bytes
Superblock version number:                         2
Free list version number:                          0
Root group symbol table entry version number:      0
Shared header version number:                      0
Size of file offsets (haddr_t type):               8 bytes
Size of file lengths (hsize_t type):               8 bytes
Symbol table leaf node 1/2 rank:                   4
Symbol table internal node 1/2 rank:               16
Indexed storage internal node 1/2 rank:            32
File status flags:                                 0x00
Superblock extension address:                      UNDEF (rel)
Shared object header message table address:        UNDEF (rel)
Shared object header message version number:       0
Number of shared object header message indexes:    0
Address of driver information block:               UNDEF (rel)
Root group symbol table entry:
   Name offset into private heap:                  0
   Object header address:                          48
   Cache info type:                                Nothing Cached

This is obviously still not identical to the output for the file from the VTK XdmfWriter, but it works: with superblock version number 2, I can open my model in ParaView 5.5 and 5.6.

Martin,

I think you are right about this being an HDF5 library version issue. It appears that you originally created a 1.10-compatible HDF5 file because your Xdmf/HDF5 writer was using an HDF5 1.10.x library, and you then tried to read the file with a ParaView linked against HDF5 1.8.x. 1.8 library versions are not able to read 1.10-compatible files, and superblock version 3 is a symptom of the 1.10-compatible format.

If you want to keep up with current HDF5 1.10 library versions, there are a couple of different strategies. You can upgrade your reader and display tools, such as ParaView, to use the current HDF5 1.10.x version; your tools will then be able to read both 1.8- and 1.10-compatible files.

Alternatively, you can use one of the backward compatibility options when creating new files, so that 1.8-compatible files are written with the 1.10 library. Then you don't need to upgrade your display tools, and your files will also work with other people's down-level software.

Because HDF5 1.10 has now been out for almost three years, I recommend upgrading all your tools to 1.10. Use one of the backward compatibility options when creating files only if you will be sharing files with people who might be using down-level tools.

Here is more information about the backward compatibility options. This documents the C API; you may want to look up the equivalent for Java:
https://portal.hdfgroup.org/display/HDF5/H5P_SET_LIBVER_BOUNDS
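
In the Java JNI wrapper bundled with HDFView (hdf.hdf5lib), the equivalent call is H5.H5Pset_libver_bounds. Here is a minimal sketch, assuming the HDF5 1.10 Java bindings (where identifiers are long) and an illustrative file name:

    import hdf.hdf5lib.H5;
    import hdf.hdf5lib.HDF5Constants;

    // File access property list carrying backward-compatible version bounds
    long fapl = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
    H5.H5Pset_libver_bounds(fapl,
            HDF5Constants.H5F_LIBVER_EARLIEST,  // low bound: oldest possible format version
            HDF5Constants.H5F_LIBVER_LATEST);   // high bound

    // Create the file with that property list; the superblock then stays at
    // the oldest version the file's features allow, so a 1.8-linked reader
    // such as ParaView can open it
    long fid = H5.H5Fcreate("C:\\temp\\mesh.h5",
            HDF5Constants.H5F_ACC_TRUNC,
            HDF5Constants.H5P_DEFAULT, fapl);

    H5.H5Fclose(fid);
    H5.H5Pclose(fapl);

If you stay with the HDFView object layer, check whether your release exposes a corresponding creation flag (newer hdf.object.FileFormat versions have FILE_CREATE_EARLY_LIB); otherwise you can create the file through the low-level API as above and continue with the object API afterwards.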