On Nov 14, 2017 3:27 PM, "Miller, Mark C." <miller86@llnl.gov> wrote:
Hey Michael,
This is all in parallel? That's awesome to get working!
Just a very brief look at the h5dump output suggests the chunking is
all Nx1.
Filters are applied on a per-chunk basis, with a parameter set that is
nonetheless fixed for all chunks (unless the filter encodes params *into*
the chunks themselves, as opposed to dataset header messages). So, in the
case of ZFP, Nx1 chunking isn't really giving ZFP the greatest
opportunity to compress, because it essentially limits its work to one
dimension.
Next, the chunk sizes do seem to yield pretty small chunks.
Is this a constraint in the way parallel is being handled? Can you easily
switch to larger chunks, with non-unity extents in more than one dimension?
Mark
"Hdf-forum on behalf of Michael K. Edwards" wrote:
It's good to have a reference example when testing an integration like
this. I've attached the patch I've been using against the "maint"
(3.8.x) branch of PETSc. It's obviously not suitable for integration
(it blindly applies ZFP to floating point Vecs and BloscLZ to integer
Vecs), but it does exercise the code paths in interesting ways.
Here's how I configure the HDF5 develop branch (for debug purposes):
./configure --prefix=/usr/local 'MAKE=/usr/bin/gmake' 'CC=mpicc'
'CFLAGS=-fPIC -fstack-protector -g3 -fopenmp' 'AR=/usr/bin/ar'
'ARFLAGS=cr' 'CXX=mpicxx' 'CXXFLAGS=-fstack-protector -g -fopenmp
-fPIC' 'F90=mpif90' 'F90FLAGS=-fPIC -ffree-line-length-0 -g -fopenmp'
'F77=mpif90' 'FFLAGS=-fPIC -ffree-line-length-0 -g -fopenmp'
'FC=mpif90' 'FCFLAGS=-fPIC -ffree-line-length-0 -g -fopenmp'
'--enable-shared' '--with-default-api-version=v18' '--enable-parallel'
'--enable-fortran' 'F9X=mpif90' '--with-zlib=yes' '--with-szlib=yes'
And here's how I configure and run PETSc (again, for debug purposes):
./configure --without-x --with-openmp
--with-blaslapack-dir=/opt/intel/mkl --with-hdf5 --download-p4est=yes
--download-triangle=yes --download-pragmatic=yes --download-metis=yes
--download-eigen=yes
make PETSC_DIR=/home/centos/p4est/petsc/petsc
PETSC_ARCH=arch-linux2-c-debug all
cd src/snes/examples/tutorials
make PETSC_DIR=/home/centos/p4est/petsc/petsc
PETSC_ARCH=arch-linux2-c-debug ex12
/usr/local/bin/mpiexec -n 4 ./ex12 -run_type full
-variable_coefficient nonlinear -nonzero_initial_guess 1 -interpolate
1 -petscspace_order 2 -snes_max_it 10 -snes_type fas
-snes_linesearch_type bt -snes_fas_levels 3 -fas_coarse_snes_type
newtonls -fas_coarse_snes_linesearch_type basic -fas_coarse_ksp_type
cg -fas_coarse_pc_type jacobi -fas_coarse_snes_monitor_short
-fas_levels_snes_max_it 4 -fas_levels_snes_type newtonls
-fas_levels_snes_linesearch_type bt -fas_levels_ksp_type cg
-fas_levels_pc_type jacobi -fas_levels_snes_monitor_short
-fas_levels_cycle_snes_linesearch_type bt -snes_monitor_short
-snes_converged_reason -snes_view -simplex 0 -petscspace_poly_tensor
-dm_plex_convert_type p4est -dm_forest_minimum_refinement 0
-dm_forest_initial_refinement 2 -dm_forest_maximum_refinement 4
-dm_p4est_refine_pattern hash -dm_view_hierarchy
Basically this is a smoke test for the various shapes and sizes of objects
that occur in an adaptive mesh refinement use case. The
"-dm_view_hierarchy" flag is what triggers the write of three HDF5
files. The typical structure looks like this:
[centos@centos74 tutorials]$ h5dump -pH ex12-2.h5
HDF5 "ex12-2.h5" {
GROUP "/" {
GROUP "fields" {
DATASET "solution error" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 636 ) / ( 636 ) }
STORAGE_LAYOUT {
CHUNKED ( 636 )
SIZE 1566 (3.249:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32013
COMMENT H5Z-ZFP-0.7.0 (ZFP-0.5.2) github.com/LLNL/H5Z-ZFP
PARAMS { 5374064 91252346 10163 -924844032 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
GROUP "geometry" {
DATASET "vertices" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 170, 2 ) / ( 170, 2 ) }
STORAGE_LAYOUT {
CHUNKED ( 170, 2 )
SIZE 3318 (0.820:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32013
COMMENT H5Z-ZFP-0.7.0 (ZFP-0.5.2) github.com/LLNL/H5Z-ZFP
PARAMS { 5374064 91252346 -1879048169 -924844022 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
GROUP "labels" {
GROUP "Face Sets" {
GROUP "1" {
DATASET "indices" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 20, 1 ) / ( 20, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 20, 1 )
SIZE 96 (0.833:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 80 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
GROUP "2" {
DATASET "indices" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 20, 1 ) / ( 20, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 20, 1 )
SIZE 96 (0.833:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 80 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
GROUP "3" {
DATASET "indices" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 14, 1 ) / ( 14, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 14, 1 )
SIZE 72 (0.778:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 56 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
GROUP "4" {
DATASET "indices" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 22, 1 ) / ( 22, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 22, 1 )
SIZE 104 (0.846:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 88 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
}
GROUP "marker" {
GROUP "1" {
DATASET "indices" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 80, 1 ) / ( 80, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 80, 1 )
SIZE 336 (0.952:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 320 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
}
}
GROUP "topology" {
DATASET "cells" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 1136, 1 ) / ( 1136, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 1136, 1 )
SIZE 1476 (3.079:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 4544 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
ATTRIBUTE "cell_dim" {
DATATYPE H5T_STD_I32LE
DATASPACE SCALAR
}
}
DATASET "cones" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 636, 1 ) / ( 636, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 636, 1 )
SIZE 155 (16.413:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 2544 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
DATASET "order" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 636, 1 ) / ( 636, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 636, 1 )
SIZE 755 (3.370:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 2544 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
DATASET "orientation" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 1136, 1 ) / ( 1136, 1 ) }
STORAGE_LAYOUT {
CHUNKED ( 1136, 1 )
SIZE 216 (21.037:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 4544 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
}
}
GROUP "vertex_fields" {
DATASET "solution error_potential" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 170 ) / ( 170 ) }
STORAGE_LAYOUT {
CHUNKED ( 170 )
SIZE 617 (2.204:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32013
COMMENT H5Z-ZFP-0.7.0 (ZFP-0.5.2) github.com/LLNL/H5Z-ZFP
PARAMS { 5374064 91252346 2707 -924844032 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
ATTRIBUTE "vector_field_type" {
DATATYPE H5T_STRING {
STRSIZE 7;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
}
}
}
GROUP "viz" {
GROUP "topology" {
DATASET "cells" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 130, 4 ) / ( 130, 4 ) }
STORAGE_LAYOUT {
CHUNKED ( 130, 4 )
SIZE 592 (3.514:1 COMPRESSION)
}
FILTERS {
USER_DEFINED_FILTER {
FILTER_ID 32001
COMMENT blosc
PARAMS { 2 2 4 2080 5 1 0 }
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_EARLY
}
ATTRIBUTE "cell_corners" {
DATATYPE H5T_STD_I32LE
DATASPACE SCALAR
}
ATTRIBUTE "cell_dim" {
DATATYPE H5T_STD_I32LE
DATASPACE SCALAR
}
}
}
}
}
}
On Thu, Nov 9, 2017 at 3:27 PM, Michael K. Edwards <m.k.edwards@gmail.com> wrote:
It's exciting to be able to show the collective filtered IO feature as
part of a full software stack. Thank you for your hard work on this,
and please let me know what more I can do to help keep it on the glide
path for release.
On Thu, Nov 9, 2017 at 3:22 PM, Jordan Henderson <jhenderson@hdfgroup.org> wrote:
Thanks! I'll discuss this with others and see what the best way to proceed
forward from this is. I think this has been a very productive discussion and
very useful feedback.
________________________________
From: Michael K. Edwards <m.k.edwards@gmail.com>
Sent: Thursday, November 9, 2017 5:01:33 PM
To: Jordan Henderson
Cc: HDF Users Discussion List
Subject: Re: [Hdf-forum] Collective IO and filters
And here's the change to H5Z-blosc (still using the private H5MM APIs):
diff --git a/src/blosc_filter.c b/src/blosc_filter.c
index bfd8c3e..9bc1a42 100644
--- a/src/blosc_filter.c
+++ b/src/blosc_filter.c
@@ -16,6 +16,7 @@
#include <string.h>
#include <errno.h>
#include "hdf5.h"
+#include "H5MMprivate.h"
#include "blosc_filter.h"
#if defined(__GNUC__)
@@ -194,20 +195,21 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
/* We're compressing */
if (!(flags & H5Z_FLAG_REVERSE)) {
- /* Allocate an output buffer exactly as long as the input data; if
- the result is larger, we simply return 0. The filter is flagged
- as optional, so HDF5 marks the chunk as uncompressed and
- proceeds.
+ /* Allocate an output buffer BLOSC_MAX_OVERHEAD (currently 16) bytes
+ larger than the input data, to accommodate the BLOSC header.
+ If compression with the requested parameters causes the data itself
+ to grow (thereby causing the compressed data, with header, to exceed
+ the output buffer size), fall back to memcpy mode (clevel=0).
*/
- outbuf_size = (*buf_size);
+ outbuf_size = nbytes + BLOSC_MAX_OVERHEAD;
#ifdef BLOSC_DEBUG
fprintf(stderr, "Blosc: Compress %zd chunk w/buffer %zd\n",
nbytes, outbuf_size);
#endif
- outbuf = malloc(outbuf_size);
+ outbuf = H5MM_malloc(outbuf_size);
if (outbuf == NULL) {
PUSH_ERR("blosc_filter", H5E_CALLBACK,
@@ -217,7 +219,11 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
blosc_set_compressor(compname);
status = blosc_compress(clevel, doshuffle, typesize, nbytes,
- *buf, outbuf, nbytes);
+ *buf, outbuf, outbuf_size);
+ if (status < 0) {
+ status = blosc_compress(0, doshuffle, typesize, nbytes,
+ *buf, outbuf, outbuf_size);
+ }
if (status < 0) {
PUSH_ERR("blosc_filter", H5E_CALLBACK, "Blosc compression error");
goto failed;
@@ -228,7 +234,7 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
/* declare dummy variables */
size_t cbytes, blocksize;
- free(outbuf);
+ H5MM_xfree(outbuf);
/* Extract the exact outbuf_size from the buffer header.
*
@@ -243,7 +249,14 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
fprintf(stderr, "Blosc: Decompress %zd chunk w/buffer %zd\n",
nbytes, outbuf_size);
#endif
- outbuf = malloc(outbuf_size);
+ if (outbuf_size == 0) {
+ H5MM_xfree(*buf);
+ *buf = NULL;
+ *buf_size = outbuf_size;
+ return 0; /* Size of compressed/decompressed data */
+ }
+
+ outbuf = H5MM_malloc(outbuf_size);
if (outbuf == NULL) {
PUSH_ERR("blosc_filter", H5E_CALLBACK, "Can't allocate decompression buffer");
@@ -259,14 +272,14 @@ size_t blosc_filter(unsigned flags, size_t cd_nelmts,
} /* compressing vs decompressing */
if (status != 0) {
- free(*buf);
+ H5MM_xfree(*buf);
*buf = outbuf;
*buf_size = outbuf_size;
return status; /* Size of compressed/decompressed data */
}
failed:
- free(outbuf);
+ H5MM_xfree(outbuf);
return 0;
} /* End filter function */
On Thu, Nov 9, 2017 at 2:45 PM, Jordan Henderson <jhenderson@hdfgroup.org> wrote:
As the filtered collective path simply calls through the filter pipeline
by way of the H5Z_pipeline() function, it would seem that either the
filter pipeline itself is not handling this case correctly, or this is
somewhat unexpected behavior for the pipeline to deal with.
Either way, I think a pull request/diff file would be very useful for
going over this. If you're able to generate a diff between what you have
now and the current develop branch/H5Z-blosc code and put it here, that
would be useful. I don't think that there should be too much in the way
of logistics for getting this code in; we just want to make sure that we
approach the solution in the right way without breaking something else.