Write attributes collectively in MPI run

Good day,

I am puzzled about how to work with attributes in parallel.

Code description:
The master processor (Pid=0) creates the file, plus one dataset and one attribute per processor in the MPI run. Then every processor opens the created file, opens its own dataset (by name), writes data to it (this works perfectly), and should rewrite the attributes of its own dataset.

I found that even though every processor opens the attribute attached to its own dataset, only one processor actually rewrites its attribute, while all the others leave theirs untouched.

I found that attributes must be written collectively. However, in my code each processor has its own set of parameters to be written as attributes, and it knows only its own set, not those of the other processors. So I cannot simply write the attributes collectively, because the processors do not know each other's parameters.

What is a good way to solve this problem? I could first send all parameters to, say, the master processor and let it write all the attributes, but is there an easier way that lets every processor write its own parameters as attributes?
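(For reference, one workaround that keeps the attributes, sketched against the identifiers used in the code below: since attribute writes are metadata operations and must be collective, first distribute every rank's parameters to all ranks with MPI_Allgather, then have all ranks loop over all datasets and issue identical attribute writes. The buffer names my_params/all_params are illustrative, not from the original code.)

```fortran
 ! sketch: make every rank's parameters known to all ranks, then
 ! let ALL ranks rewrite ALL attributes with identical, collective calls
 character(len=30) :: my_params(2)                 ! this rank's strings
 character(len=30), allocatable :: all_params(:,:) ! everyone's strings
 integer :: p

 allocate(all_params(2, numtasks))
 ! 2 strings of 30 characters per rank, gathered on every rank
 call mpi_allgather(my_params,  2*30, mpi_character, &
                    all_params, 2*30, mpi_character, &
                    mpi_comm_world, ierr)

 data_dims(1) = 2
 do p = 1, numtasks          ! same loop, same order, on every rank
    write(c,"(i0)") p
    call h5dopen_f(file_id, "dataset"//trim(c), dataset_id, ierr)
    call h5aopen_f(dataset_id, "attributesMY", attrib_id, ierr)
    call h5awrite_f(attrib_id, atype_id, all_params(:,p), data_dims, ierr)
    call h5aclose_f(attrib_id, ierr)
    call h5dclose_f(dataset_id, ierr)
 end do
 deallocate(all_params)
```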

Thank you in advance!


  program test1

 use hdf5
 use mpi
 implicit none

 integer :: numtasks, Pid, len, ierr , mpierror, info, c_len
 integer :: i, j
 !------------------ HDF variables ------------------!
 integer(hid_t) :: plist_id      ! property list identifier (file access / transfer)
 integer(hid_t) :: dcpl          ! dataset creation property list
 integer(hid_t) :: file_id       ! file identifier
 integer(hid_t) :: dataset_id    ! dataset identifier
 integer(hid_t) :: dataspace_id  ! dataspace identifier for dataset
 integer(hid_t) :: attrib_id     ! attribute identifier (reopen phase)
 integer(hid_t) :: attr_id       ! attribute identifier (creation phase)
 integer(hsize_t), dimension(2) :: dimsf = (/10,10/) ! dataset dimensions
 integer :: rank = 2             ! dataset rank
 integer(hid_t) :: aspace_id     ! dataspace identifier for attributes
 integer(hid_t) :: atype_id      ! attribute datatype
 integer(hsize_t), dimension(1) :: adims = (/2/) ! attribute dimensions
 integer(hsize_t), dimension(1) :: data_dims     ! attribute data dimensions
 integer :: arank = 1            ! attribute rank
 integer(size_t) :: attrlen      ! length of the attribute string

 character(len=30), dimension(2) :: attr_data ! attribute data
 integer, allocatable, target :: data(:,:)   ! data to write to dataset
 integer(hsize_t), dimension(2) :: cdims = (/5,5/) ! chunk size
 character(mpi_max_processor_name) hostname
 character(len=100) :: filename  ! file name
 character(len=3) :: c
 character(len=10) :: dataset_name ! dataset name for specific processor
 ! initialize mpi
 call mpi_init(ierr)
 ! get number of tasks
 call mpi_comm_size(mpi_comm_world, numtasks, mpierror)
 ! get my rank
 call mpi_comm_rank(mpi_comm_world, Pid, mpierror)
 ! initialize HDF5 fortran interface
 call h5open_f(ierr)
 info = mpi_info_null
 filename = "test1.hdf5"   ! define file name
 attr_data(1) = "My first attribute"
 attr_data(2) = "My second attribute"
 attrlen = 30

 ! initialize some data to write
 allocate ( data(dimsf(1),dimsf(2)))
 do i = 1, dimsf(2)
    do j = 1, dimsf(1)
       data(j,i) = Pid + 1
    end do
 end do

 ! create datatype for attributes
 call h5tcopy_f(H5T_NATIVE_CHARACTER,atype_id,ierr)
 ! Set the total size for a datatype
 call h5tset_size_f(atype_id,attrlen,ierr)

  ! have processor 0 create the hdf5 data and attributes layout

  if (Pid == 0) then

 ! create a simple dataspace for the attribute (two strings)
 call h5screate_simple_f(arank, adims, aspace_id, ierr)

     ! create the hdf5 file
     call h5fcreate_f(filename, h5f_acc_trunc_f, file_id, ierr)
     ! create the dataspace for the dataset
     call h5screate_simple_f(rank, dimsf, dataspace_id, ierr)
     ! create properties variable for the data
     call h5pcreate_f(h5p_dataset_create_f, dcpl, ierr)
    ! set chunk size
     call h5pset_chunk_f(dcpl, 2, cdims, ierr)

     ! data pattern
     call h5pset_alloc_time_f(dcpl, h5d_alloc_time_early_f, ierr)
     ! now create datasets for every processor

    do i=1,numtasks

     ! create name for every dataset
        write(c,"(i0)") i
        dataset_name = "dataset" // trim(c)

     ! create datasets for every processor
     call h5dcreate_f(file_id, dataset_name, h5t_native_integer, &
                         dataspace_id, dataset_id, ierr, dcpl_id=dcpl)
     ! Create attribute for dataset
     call h5acreate_f(dataset_id,"attributesMY",atype_id,aspace_id,attr_id, ierr)
     ! Write initial data for attributes
     data_dims(1) = 2
     call h5awrite_f(attr_id, atype_id, attr_data, data_dims, ierr)
     ! close attribute
     call h5aclose_f(attr_id, ierr)
     ! close created datasets
     call h5dclose_f(dataset_id, ierr)

    end do

     ! close dataspace for attribute
     call h5sclose_f(aspace_id, ierr)
     ! close dataspace
     call h5sclose_f(dataspace_id, ierr)

     ! close the properties
     call h5pclose_f(dcpl, ierr)

     ! close the file
     call h5fclose_f(file_id, ierr)
  end if

  ! use an mpi barrier to make sure everything is in sync
  call mpi_barrier(mpi_comm_world, ierr)


  ! setup file access property variable with parallel i/o access
  call h5pcreate_f(h5p_file_access_f, plist_id, ierr)
  call h5pset_fapl_mpio_f(plist_id, mpi_comm_world, info, ierr)  

  ! every core opens the hdf5 file created by Pid=0 to write data
  call h5fopen_f(filename, h5f_acc_rdwr_f, file_id, ierr, plist_id)

  ! close the property list
  call h5pclose_f(plist_id, ierr)

 ! create the dataset names based on processor Pid
 write(c,"(i0)") Pid + 1
 dataset_name = "dataset" // trim(c)
 ! open dataset (each processor opens its own dataset)
 call h5dopen_f(file_id, dataset_name, dataset_id, ierr)

 ! open properties and define mpio model (collective)
 call h5pcreate_f(h5p_dataset_xfer_f, plist_id, ierr)
 call h5pset_dxpl_mpio_f(plist_id, h5fd_mpio_collective_f, ierr)
 ! open the attribute on every processor (note: the optional argument after
 ! ierr must be an attribute access property list; a transfer-mode constant
 ! such as h5fd_mpio_collective_f is not valid there, so use the default)
 call h5aopen_by_name_f(dataset_id, ".", "attributesMY", attrib_id, ierr)
 ! new data which should be written in attributes by every processor
     data_dims(1) = 2
     attr_data(1) = "Updated attribute 1!"
     attr_data(2) = "Updated attribute 2!"
 ! here every processor should rewrite attributes to its dataset
 call h5awrite_f(attrib_id, atype_id, attr_data, data_dims, ierr)
 call h5aclose_f(attrib_id, ierr)
 ! here every processor writes its own data to its own dataset
 call h5dwrite_f(dataset_id, h5t_native_integer, data, dimsf, ierr, xfer_prp = plist_id)
 ! close the property list, the data set and the file
 call h5pclose_f(plist_id, ierr)
 call h5dclose_f(dataset_id,ierr)
 call h5fclose_f(file_id, ierr)

 ! close fortran interface
 call h5close_f(ierr)

 ! deallocate fortran data
 deallocate(data)

 ! finalize mpi
 call mpi_finalize(ierr)

 end program test1


Just curious, but how important are attributes to your workflow with this data? The obvious solution is to replace your use of attributes with a companion dataset. If you have many attributes that are all of type int, you might be inclined to write them to a dataset of type int. But I would suggest instead that you define a compound datatype covering all of your attributes (which lets you give each one a name and its own type) and then write a single-datum dataset holding all the data you would ordinarily have handled as attributes. Then each processor can write its own companion dataset independently, and you sidestep the collective-attribute issue entirely. It does mean, though, that other parts of your workflow that currently operate on those attributes would have to be adjusted to deal with the data in a slightly different way.
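(A rough sketch of that companion-dataset idea, reusing identifiers from the code above; the field names n_iterations/tolerance, the example values, and the "paramsN" dataset names are made up for illustration. As with the datasets in the question, the companion datasets themselves would be created up front by rank 0, since dataset creation is also a metadata operation; only the raw-data write may be independent.)

```fortran
 ! sketch: a compound "parameters" record per rank instead of attributes
 integer(hid_t)   :: ptype_id, pspace_id, pdset_id
 integer(hid_t)   :: m_int, m_dbl                ! single-field memory types
 integer(hsize_t) :: one(1) = (/1/)
 integer, dimension(1) :: n_iter = (/100/)       ! example field values
 real(8), dimension(1) :: tol    = (/1.0d-6/)

 ! file type: one 4-byte integer field followed by one 8-byte double field
 call h5tcreate_f(H5T_COMPOUND_F, int(4+8, size_t), ptype_id, ierr)
 call h5tinsert_f(ptype_id, "n_iterations", int(0,size_t), H5T_NATIVE_INTEGER, ierr)
 call h5tinsert_f(ptype_id, "tolerance",    int(4,size_t), H5T_NATIVE_DOUBLE,  ierr)

 ! one single-datum dataset per rank, e.g. "params1", "params2", ...
 ! (assumed to have been created earlier by rank 0, like the datasets above)
 call h5screate_simple_f(1, one, pspace_id, ierr)
 write(c,"(i0)") Pid + 1
 call h5dopen_f(file_id, "params"//trim(c), pdset_id, ierr)

 ! write one field at a time via single-field memory types; this raw-data
 ! write can be independent, so each rank touches only its own record
 call h5tcreate_f(H5T_COMPOUND_F, int(4,size_t), m_int, ierr)
 call h5tinsert_f(m_int, "n_iterations", int(0,size_t), H5T_NATIVE_INTEGER, ierr)
 call h5dwrite_f(pdset_id, m_int, n_iter, one, ierr)
 call h5tcreate_f(H5T_COMPOUND_F, int(8,size_t), m_dbl, ierr)
 call h5tinsert_f(m_dbl, "tolerance", int(0,size_t), H5T_NATIVE_DOUBLE, ierr)
 call h5dwrite_f(pdset_id, m_dbl, tol, one, ierr)

 call h5tclose_f(m_dbl, ierr)
 call h5tclose_f(m_int, ierr)
 call h5dclose_f(pdset_id, ierr)
 call h5sclose_f(pspace_id, ierr)
 call h5tclose_f(ptype_id, ierr)
```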

Thank you for the comment! Your solution is actually how I finally approached the problem. I am still curious what the right way is to have every core separately write its own parameters as attributes; I feel I do not fully understand the steps of working with attributes. But yes, using a separate dataset is a solution too.

Well, I think certain aspects of the HDF5 API have been designed with parallelism in mind more than others. So you have this sort of odd dichotomy where some objects (e.g. datasets) support parallelism fairly well, while others (e.g. attributes) do not. And to make matters worse, attributes look, smell, and behave a lot like datasets, except that they a) ride along with metadata (not raw data) during I/O operations, b) are limited in size as a result, and c) do not support partial I/O.

It does feel odd that attributes are not implemented under the covers as constrained datasets: constrained in size so that they are small enough to ride along with the metadata, and constrained in read/write so that H5S_ALL is the only selection allowed for them, but otherwise using the same implementation under the covers. If they had been, they might have been able to behave just like datasets where parallelism is concerned. I am sure there is some rationale for this, and it would be interesting to find out what it is.


Good questions!

HDF5 attributes are part of the HDF5 metadata (stored in the object headers or in the heaps). The current parallel implementation of HDF5 requires that all operations on HDF5 metadata be collective. This is clearly a performance impediment and a trap, especially for novice users. There are some design ideas for solving this problem, but nothing implemented yet.
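(Concretely, the rule this implies, as a minimal illustration rather than code from the original post:)

```fortran
 ! metadata-modifying calls must be collective: every rank makes the
 ! same call, with the same arguments, in the same order

 ! WRONG under parallel HDF5: only one rank modifies the attribute,
 ! so the ranks disagree on the metadata (hangs or corruption may follow)
 if (Pid == 0) call h5awrite_f(attr_id, atype_id, buf, dims, ierr)

 ! RIGHT: every rank issues the identical write
 call h5awrite_f(attr_id, atype_id, buf, dims, ierr)
```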

Thank you!


In the near future I am adding attribute and MPI capability to H5CPP, an easy-to-use C++11 compiler-assisted template library.

According to this discussion, the best approach is to make attributes and MPI mutually exclusive.
Does this seem correct? Is anyone against it?


As a first implementation it sounds reasonable, but would magic be possible to prevent users from modifying attributes independently? :grinning:


Since the early 2000s I’ve been afraid of the magic bureau and stopped doing magic :).
How about doing mutual exclusivity at first? Because it is so simple: deny access to the h5::att_t handle.**
Then, on a bright day, I could look into whether it is viable to track parallelism at compile time.
If so, a compile error can be thrown; the runtime-error case is of course much easier, and I know that can be done.

Other ideas?

** Of course, C/C++ is about freedom; there is no way to prevent anyone knowledgeable from carrying out the evil plan of domination over global memory!

Mark - one of the ECP-funded projects (the one Elena is probably referring to) is to allow independent metadata modifications, freeing users from the “collective metadata modification” restriction. I’m guessing that we’ll have a prototype in the next 4-6 months.


Thought I recalled something about that. That said, is there a place where the (public) community can go to track ongoing high-level activity like this?