Hi,
I am trying to write an Xdmf file from my tool for visualization in ParaView or VisIt, with the heavy data stored in HDF5 format.
I initially created a VTK file from my model and used ParaView → File → Save Data → Xdmf Data File (*.xmf) to create a working combination of light and heavy data in Xdmf/HDF5. Now I am trying to reverse-engineer that file from my tool. I am using the Java HDF5 object package from HDFView to create my own HDF5 file; the setup is described in this thread.
When I open the Xdmf created from the original VTK file, everything works fine and I see my model. However, when I open my self-written Xdmf/HDF5 file (File → Open → mesh.xmf), ParaView and VisIt crash immediately. A similar problem was described in this thread.
I compared the two HDF5 files in HDFView: the values as well as the data types appear to be identical. I set up my file to match the “original” from the VTK conversion as closely as possible, including maximum dimension sizes and chunking.
I also compared the two HDF5 files using VBinDiff. Although the values in the datasets seem to be identical, the binary structure of the two files differs. Are there any guidelines on how to create a proper HDF5 file for Xdmf? Are there any preferences I have to set in the Java HDF5 library from HDFView?
Can someone please give me a hint as to where the corruption of my file comes from?
I attached the two versions:
error2.rar (196.4 KB)
working.rar (199.0 KB)
I know it's not a complete MWE, but maybe these code snippets show how I set up the datasets in the HDF5 file:
-
Create the HDF5-File
// Retrieve an instance of the implementing class for the HDF5 format
FileFormat fileFormat = FileFormat.getFileFormat(FileFormat.FILE_TYPE_HDF5);

// If the implementing class wasn't found, it's an error.
assert (fileFormat != null) : "Cannot find HDF5 " + FileFormat.class.getSimpleName() + ".";

try {
    // Create a new file with a given file name.
    // Note: backslashes must be escaped in Java string literals.
    String fname = "C:\\temp\\mesh.h5";

    // If the implementing class was found, use it to create a new HDF5 file
    // with a specific file name.
    //
    // If the specified file already exists, it is truncated.
    // The default HDF5 file creation and access properties are used.
    File f = new File(fname);
    if (f.exists() && !f.isDirectory()) {
        f.delete();
    }
    H5File testFile = (H5File) fileFormat.createFile(fname, FileFormat.FILE_CREATE_DELETE);

    // Open the file
    testFile.open();

    // Retrieve the root group
    Group root = (Group) testFile.getRootObject();

    // Fill the file ...

    // Close the file resource
    testFile.close();
} catch (Exception ex) {
    Logger.getLogger(HDFHeavyDataWriter.class.getName()).log(Level.SEVERE, null, ex);
}
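For orientation, the two dataset snippets below are called from inside this try block, roughly like this (writeGeometryDataset and writeTopologyDataset are just placeholder names for those snippets, not the actual method names in my tool; grp is simply the root group, so Data0 and Data1 end up directly under "/" as in the ParaView-generated file):

// Sketch only - placeholder helper names
Group grp = root;                       // parent group: datasets go directly under "/"
writeGeometryDataset(testFile, grp);    // the "Nodes/Geometry" snippet below ("doc" = testFile)
writeTopologyDataset(testFile, grp);    // the "Elements/Topology" snippet below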
-
Nodes/Geometry
int nrNodes = nodes.size();
int CHUNK_X = 2446;
int CHUNK_Y = 1;

// Set the dimensions
long[] dims = { nrNodes, 3 };
long[] maxdims = { HDF5Constants.H5S_UNLIMITED, HDF5Constants.H5S_UNLIMITED };
long[] chunk_dims = { CHUNK_X, CHUNK_Y };

// Data - float[] or double[] depending on precision
XYZNodeDataCreator c = new XYZNodeDataCreator(nodes);
c.create();
Object data = c.getCoordinates();

// Datatype
dataType = new H5Datatype(
     HDF5DataClass.FLOAT                        // 1
    ,writerPrefs.getPrecision().getNumber()     // 8
    ,HDF5DataByteOrder.NATIVE                   // -1
    ,HDF5DataSign.NATIVE                        // -1
);

// Set the dataset
dataset = doc.createScalarDS(
     "Data0"
    ,grp                                        // Group
    ,dataType                                   // Datatype
    ,dims                                       // Dimension sizes of the new dataset
    ,maxdims                                    // Maximum dimension sizes; null if maxdims is the same as dims
    ,chunk_dims                                 // Chunk sizes (null would mean no chunking)
    ,writerPrefs.getCompressionLevel().getNumber()  // Compression level - 0
    ,null                                       // No initial data values
);
dataset.init();
dataset.write(data);
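One way to sanity-check the geometry dataset is to read it back with the same object package. This is only a sketch, assuming double precision was selected; getData() returns the 2D dataset as a flat 1D array by default:

try {
    H5File check = new H5File("C:\\temp\\mesh.h5", FileFormat.READ);
    check.open();
    Dataset ds = (Dataset) check.get("/Data0");   // the geometry dataset written above
    ds.init();
    double[] coords = (double[]) ds.getData();    // flat array: x0, y0, z0, x1, y1, z1, ...
    System.out.printf("first node: %f %f %f%n", coords[0], coords[1], coords[2]);
    check.close();
} catch (Exception ex) {
    ex.printStackTrace();
}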
-
Elements/Topology
// Calculate the vector length
ElementVectorLengthCalculator lc = new ElementVectorLengthCalculator(elements);
lc.calc();
int lenArr = lc.get();
int CHUNK_X = 1000;

// Set the dimensions
long[] dims = { lenArr };
long[] maxdims = { HDF5Constants.H5S_UNLIMITED };
long[] chunk_dims = { CHUNK_X };

// Data
ElementVectorCalculator c = new ElementVectorCalculator(
     lenArr
    ,elements
);
c.calc();
Object data = c.get();

// Datatype
dataType = new H5Datatype(
     HDF5DataClass.INT                          // 0
    ,writerPrefs.getPrecision().getNumber()     // 4
    ,HDF5DataByteOrder.NATIVE                   // -1
    ,HDF5DataSign.NATIVE                        // -1
);

// Dataset
dataset = doc.createScalarDS(
     "Data1"
    ,grp                                        // Group
    ,dataType                                   // Datatype
    ,dims                                       // Dimension sizes of the new dataset
    ,maxdims                                    // Maximum dimension sizes; null if maxdims is the same as dims
    ,chunk_dims                                 // Chunk sizes (null would mean no chunking)
    ,writerPrefs.getCompressionLevel().getNumber()  // Compression level
    ,null                                       // No initial data values
);
dataset.init();
dataset.write(data);
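For completeness, the light-data side I am aiming for is a small XML file next to mesh.h5. Below is only a rough sketch of how it could be emitted from Java; the Mixed topology type is an assumption (it is what ParaView's exporter typically writes for an unstructured grid with a flat connectivity vector), and nrElements/nrNodes/lenArr are the counts from my own model, not something stored in the HDF5 file:

// Sketch only: writes the Xdmf light data that points at the two datasets above.
int nrElements = elements.size();   // assumption: one entry per element in my model
try (java.io.PrintWriter xmf = new java.io.PrintWriter("C:\\temp\\mesh.xmf")) {
    xmf.println("<?xml version=\"1.0\" ?>");
    xmf.println("<Xdmf Version=\"2.0\">");
    xmf.println(" <Domain>");
    xmf.println("  <Grid Name=\"mesh\" GridType=\"Uniform\">");
    xmf.println("   <Topology TopologyType=\"Mixed\" NumberOfElements=\"" + nrElements + "\">");
    xmf.println("    <DataItem Dimensions=\"" + lenArr + "\" NumberType=\"Int\" Format=\"HDF\">mesh.h5:/Data1</DataItem>");
    xmf.println("   </Topology>");
    xmf.println("   <Geometry GeometryType=\"XYZ\">");
    xmf.println("    <DataItem Dimensions=\"" + nrNodes + " 3\" NumberType=\"Float\" Precision=\"8\" Format=\"HDF\">mesh.h5:/Data0</DataItem>");
    xmf.println("   </Geometry>");
    xmf.println("  </Grid>");
    xmf.println(" </Domain>");
    xmf.println("</Xdmf>");
} catch (java.io.FileNotFoundException e) {
    e.printStackTrace();
}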
Edit:
I noticed that, although the values inside the two HDF5 files are identical, the file sizes differ: the one created by ParaView is 820 kB and mine is 814 kB. As the data dimensions are equal, where might this difference come from?
Edit2:
I used h5dump to create an ASCII representation of both HDF5 files. The data is identical. Same result for h5diff. However, the one written by ParaView is 820 kB and mine is 814 kB.
I noticed a slight difference in the headers:
This is the header from the working HDF5-file generated by ParaView:
HDF5 "working.h5" {
GROUP "/" {
DATASET "Data0" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 17952, 3 ) / ( H5S_UNLIMITED, H5S_UNLIMITED ) }
}
DATASET "Data1" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 89352 ) / ( H5S_UNLIMITED ) }
}
}
}
and this is the header from my file:
HDF5 "mesh.h5" {
GROUP "/" {
DATASET "Data0" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 17952, 3 ) / ( H5S_UNLIMITED, H5S_UNLIMITED ) }
DATA {
}
}
DATASET "Data1" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 89352 ) / ( H5S_UNLIMITED ) }
DATA {
}
}
}
}
Notice the two empty DATA { } blocks. Any idea where these come from?
Edit3:
The difference in the headers seems to come from having worked on my file afterwards. If I generate a clean new file, both headers are identical.