Avoiding zero-length chunks in parallel extensible datasets

Greetings,

I am using parallel HDF5 to store time-dependent data, writing to an extensible dataset with hyperslab selections. Everything works fine as long as every process has data available. However, I did not succeed with zero-length chunks when a process has no data to contribute. I am sure there is a very simple solution to this problem, but I did not find anything in the forum or on the web.

I modified the repository example ph5_f90_hyperslab_by_chunk.F90 to illustrate the problem.

With the partSize initialization (/3,1,2,2/) the code works; with (/3,0,3,2/), where the second process has no data, it does not.

Any help is greatly appreciated.

Kind regards,

Dieke

As a new user I am not allowed to attach files.
The Fortran code follows:

!
! Number of processes is assumed to be 4
!
PROGRAM DATASET_BY_CHUNK

 USE HDF5 ! This module contains all necessary modules
 USE MPI

 IMPLICIT NONE

 CHARACTER(LEN=11), PARAMETER :: filename = "sds_chnk.h5"  ! File name
 CHARACTER(LEN=8), PARAMETER :: dsetname = "IntArray" ! Dataset name

 INTEGER(HID_T) :: file_id       ! File identifier
 INTEGER(HID_T) :: dset_id       ! Dataset identifier
 INTEGER(HID_T) :: filespace     ! Dataspace identifier in file
 INTEGER(HID_T) :: memspace      ! Dataspace identifier in memory
 INTEGER(HID_T) :: plist_id      ! Property list identifier

 INTEGER(HSIZE_T), DIMENSION(2) :: dimsf = (/8,1/) ! Dataset dimensions
                                                   ! in the file.
 INTEGER(HSIZE_T), DIMENSION(2) :: dimsfi = (/8,1/)
 INTEGER(HSIZE_T), DIMENSION(2) :: chunk_dims ! Chunk dimensions

 INTEGER(HSIZE_T),  DIMENSION(2) :: count
 INTEGER(HSSIZE_T), DIMENSION(2) :: offset

 INTEGER(HSIZE_T), DIMENSION(1:2) :: maxdims
 
 
 ! Distribution of data across processes:
 
 ! does not work: one rank has no data to write
 !INTEGER, DIMENSION(4) :: partSize = (/3,0,3,2/)
 ! does work: every rank has data
 INTEGER, DIMENSION(4) :: partSize = (/3,1,2,2/)
 
 INTEGER, ALLOCATABLE :: data (:,:)  ! Data to write
 INTEGER :: rank = 2 ! Dataset rank

 INTEGER :: error, error_n  ! Error flags
 !
 ! MPI definitions and calls.
 !
 INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror       ! MPI error flag
 INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
 INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank

 comm = MPI_COMM_WORLD
 info = MPI_INFO_NULL

 CALL MPI_INIT(mpierror)
 CALL MPI_COMM_SIZE(comm, mpi_size, mpierror)
 CALL MPI_COMM_RANK(comm, mpi_rank, mpierror)
 ! Quit if mpi_size is not 4
 if (mpi_size .NE. 4) then
    write(*,*) 'This example is set up to use only 4 processes'
    write(*,*) 'Quitting....'
    goto 100
 endif

 !
 ! Initialize HDF5 library and Fortran interfaces.
 !
 CALL h5open_f(error)

 !
 ! Setup file access property list with parallel I/O access.
 !
 CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
 CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)

 !
 ! Create the file collectively.
 !
 CALL h5fcreate_f(filename, H5F_ACC_TRUNC_F, file_id, error, access_prp = plist_id)
 CALL h5pclose_f(plist_id, error)
 !
 ! Create the data space for the  dataset.
 !
 maxdims(2) = H5S_UNLIMITED_F
 maxdims(1) = H5S_UNLIMITED_F
 
 count(1) =  partSize(mpi_rank+1)
 count(2) =  1
 offset(1) = SUM(partSize(1:mpi_rank))
 offset(2) = 0
 chunk_dims(1) = partSize(mpi_rank+1)
 chunk_dims(2) = 1
 CALL h5screate_simple_f(rank, dimsf, filespace, error, maxdims)
 CALL h5screate_simple_f(rank, chunk_dims, memspace, error)

 !
 ! Create chunked dataset.
 !
 CALL h5pcreate_f(H5P_DATASET_CREATE_F, plist_id, error)
 CALL h5pset_chunk_f(plist_id, rank, chunk_dims, error)
 CALL h5dcreate_f(file_id, dsetname, H5T_NATIVE_INTEGER, filespace, &
                  dset_id, error, plist_id)
 CALL h5sclose_f(filespace, error)
 !
 ! Select hyperslab in the file.
 !
 CALL h5dget_space_f(dset_id, filespace, error)
 CALL h5sselect_hyperslab_f (filespace, H5S_SELECT_SET_F, offset, count, error)
 !
 ! Initialize data buffer with trivial data.
 !
 ALLOCATE (data(partSize(mpi_rank+1),1)) ! shape matches the memory dataspace
 data = mpi_rank + 1
 !
 ! Create property list for collective dataset write
 !
 CALL h5pcreate_f(H5P_DATASET_XFER_F, plist_id, error)
 CALL h5pset_dxpl_mpio_f(plist_id, H5FD_MPIO_COLLECTIVE_F, error)
 IF(error/=0) STOP
 !
 ! Write the dataset collectively.
 !
 CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, data, dimsf, error, &
                 file_space_id = filespace, mem_space_id = memspace, xfer_prp = plist_id)
 IF(error/=0) STOP
 
 CALL h5sclose_f(filespace, error)

 !
 ! Write modified data into a second row in the file.
 !

 ! Extend the dataset by one row.
 dimsfi = (/SUM(partSize),2/)
 CALL h5dset_extent_f(dset_id, dimsfi, error)
 IF(error/=0) STOP

 CALL h5dget_space_f(dset_id, filespace, error)

 ! Modify the data.
 data = data + 100
 ! New offset in the file (second row).
 offset(2) = 1
 CALL h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, count, error)
 CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, data, dimsf, error, &
                 file_space_id = filespace, mem_space_id = memspace, xfer_prp = plist_id)
 IF(error/=0) STOP

 !
 ! Deallocate data buffer.
 !
 DEALLOCATE(data)

 !
 ! Close dataspaces.
 !
 CALL h5sclose_f(filespace, error)
 CALL h5sclose_f(memspace, error)
 !
 ! Close the dataset.
 !
 CALL h5dclose_f(dset_id, error)
 !
 ! Close the property list.
 !
 CALL h5pclose_f(plist_id, error)
 !
 ! Close the file.
 !
 CALL h5fclose_f(file_id, error)

 !
 ! Close FORTRAN interfaces and HDF5 library.
 !
 CALL h5close_f(error)
 IF(mpi_rank.EQ.0) WRITE(*,'(A)') "PHDF5 example finished with no errors"

100 continue
CALL MPI_FINALIZE(mpierror)

 END PROGRAM DATASET_BY_CHUNK
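
For reference, I build and run it like this (assuming the parallel HDF5 Fortran compiler wrapper h5pfc is installed):

 h5pfc hyperslab_by_chunk.F90 -o hyperslab_by_chunk
 mpirun -np 4 ./hyperslab_by_chunk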

You should be able to upload attachments now. Can you try again, though, since the post’s code formatting is hard to follow?

Hi,

thanks for enabling me to upload code.
I have attached the code in question.
I am looking forward to hints pointing me in the right direction.

Best regards,

Dieke

hyperslab_by_chunk.F90 (5.3 KB)

Thanks for uploading the source; it’s much easier to read. I would take a look at the ph5_f90_filtered_writes_no_sel.F90 example; you can ignore the filtering bits.

For ranks that don’t have anything to write, you need to select none on the dataspace (h5sselect_none_f); see the sketch below. Let me know if this doesn’t help.
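
Roughly, the idea looks like this (a minimal sketch using your variable names; partSize, filespace, memspace, and error are assumed from your code, and the linked example shows the full pattern; note that the chunk dimensions passed to h5pset_chunk_f must still be nonzero on every rank):

 ! Ranks with nothing to write keep an empty selection on both
 ! dataspaces, but still take part in the collective h5dwrite_f call.
 IF (partSize(mpi_rank+1) == 0) THEN
    CALL h5sselect_none_f(filespace, error)
    CALL h5sselect_none_f(memspace, error)
 END IF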

Thanks for the quick reply.
After realizing I needed version 1.14.4.3, I was able to run ph5_f90_filtered_writes_no_sel.F90
(H5S_BLOCK_F was missing in older versions such as 1.14.0).
Using that example, I was able to adapt my code accordingly.
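
In short, the relevant change looks roughly like this (a sketch, not the exact attached code): every rank selects its possibly empty part of the file space and passes H5S_BLOCK_F as the memory space, so no per-rank memory dataspace is needed:

 ! Empty ranks select nothing; H5S_BLOCK_F then describes a
 ! contiguous buffer with as many elements as the file selection,
 ! i.e. zero for those ranks. Requires HDF5 1.14.4 or newer.
 IF (partSize(mpi_rank+1) > 0) THEN
    CALL h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, count, error)
 ELSE
    CALL h5sselect_none_f(filespace, error)
 END IF
 CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, data, dimsf, error, &
                 file_space_id = filespace, mem_space_id = H5S_BLOCK_F, &
                 xfer_prp = plist_id)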

I attach the working solution; maybe it is useful to somebody else.

Best regards and thanks again

Dieke
hyperslab_by_chunk.F90 (5.0 KB)