How do I impose that only certain processors write the HDF5 file?

Dear all,
When I use
CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)
I assume that all processors are called to write the solution.
However, I only want the processors whose coord(3) equals a certain value to write the solution (a constant-z plane).
The figure below shows an example of what I want and what the program is doing.
How do I impose this condition? I have already tried

  IF (coord(3) == 1) THEN
     CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
     etc…
  ENDIF

However, it doesn't like it.

Thank you in advance,
Cheers,
Maria

Maria Luis CASTELA
PhD student
Ecole Centrale Paris
Laboratoire EM2C
Grande Voie des Vignes
92295 Chatenay-Malabry, France
Tel. : +33 (0)1 41 13 10 34
E-mail: maria.castela@ecp.fr

Hi Maria,

Maria Castela <maria.castela@centralesupelec.fr> writes:

Dear all,
When I use
CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)
I assume that all processors are called to write the solution.
However, I only want the processors whose coord(3) equals a certain value to write the solution (a constant-z plane).
The figure below shows an example of what I want and what the program is doing.
How do I impose this condition? I have already tried

  IF (coord(3) == 1) THEN
     CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
     etc…
  ENDIF

However, it doesn't like it.

We do need this in our code as well, and we have two different ways of
doing it:

1) if all processors are going to take part in the I/O operation, then
   when you define the amount of data each processor contributes, it
   will be 0 for all processors except those with coord(3) .EQ. 1. All
   processors then perform exactly the same collective operations; only
   the amount of data they read/write differs (see the sketch after this list).

2) you create an MPI communicator that groups those processors where
   coord(3) .EQ. 1. Then only those processors contribute to the I/O
   operation, which is close to what your post seems to imply you want
   to do, but you then have to make sure that the communicator in your
   calls is not the global one (MPI_COMM_WORLD), but rather the one you
   created specifically for coord(3) .EQ. 1.
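
A minimal sketch of how option 1 can look (the handles fspace_id, mspace_id, dset_id, dxpl_id and the offset/counts arrays are placeholders for objects created elsewhere in your code, not taken from your post):

   ! every rank uses a collective transfer property list
   CALL h5pcreate_f(H5P_DATASET_XFER_F, dxpl_id, error)
   CALL h5pset_dxpl_mpio_f(dxpl_id, H5FD_MPIO_COLLECTIVE_F, error)

   IF (coord(3) == 1) THEN
      ! ranks on the slice select their piece of the file dataspace
      ! (offset and counts are INTEGER(HSIZE_T) arrays)
      CALL h5sselect_hyperslab_f(fspace_id, H5S_SELECT_SET_F, offset, counts, error)
   ELSE
      ! all other ranks select nothing, so they contribute 0 elements
      CALL h5sselect_none_f(fspace_id, error)
      CALL h5sselect_none_f(mspace_id, error)
   END IF

   ! every rank makes the same collective call, even those writing nothing
   CALL h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, buffer, dims, error, &
        file_space_id = fspace_id, mem_space_id = mspace_id, xfer_prp = dxpl_id)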

If this doesn't lead you very far, I can give you further details.

Cheers,

···

--
Ángel de Vicente
http://www.iac.es/galeria/angelv/

Yes! Indeed, the second is the efficient way to store my 2D-slice.
So, following option 2:

1- I’ve grouped the processors with coord(3).EQ.1 from original_group:

  CALL MPI_COMM_GROUP(MPI_COMM_WORLD, original_group, code)
  CALL MPI_GROUP_INCL(original_group, nb_process_2D_SLICE, processes_2D_SLICE, group_2D, code)

2- I've created an MPI communicator for this group:

  CALL MPI_COMM_CREATE(MPI_COMM_WORLD, group_2D, MPI_COMM_2D_SLICE, code)

—Problem—

When I do:
  CALL MPI_COMM_RANK(MPI_COMM_2D_SLICE, rank_2D, code)
I get a segmentation fault… Did you have this problem?

—After solving this problem —

3 - Do you mean that instead of MPI_COMM_WORLD I should use MPI_COMM_2D_SLICE here?

    comm = MPI_COMM_2D_SLICE
    CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)

Thank you a lot!
It was very helpful
Cheers,
Maria

···


Hi,

sorry for the delay, I was sucked into a hole last week...

Maria Luis Castela <maria.castela@centralesupelec.fr> writes:

Yes! Indeed, the second is the efficient way to store my 2D-slice.

well, the way the data is stored (in RAM or in the file) is exactly
the same for both options. The only thing that changes is how you call
the I/O writing routines. But if we stick to option 2:

So, following option 2:

1- I’ve grouped the processors with coord(3).EQ.1 from original_group:

  CALL MPI_COMM_GROUP(MPI_COMM_WORLD, original_group, code)
  CALL MPI_GROUP_INCL(original_group, nb_process_2D_SLICE, processes_2D_SLICE, group_2D, code)

2- I've created an MPI communicator for this group:

  CALL MPI_COMM_CREATE(MPI_COMM_WORLD, group_2D, MPI_COMM_2D_SLICE, code)

OK. I do it with MPI_CART_SUB, but I guess it doesn't matter, as long as
you have a communicator that holds only those processors with coord(3) .EQ. 1.
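
For reference, a minimal sketch of the MPI_CART_SUB way of getting such a communicator (cart_comm stands for the 3D Cartesian communicator; the names here are placeholders):

   LOGICAL :: remain_dims(3)
   INTEGER :: slice_comm, ierr

   remain_dims = (/ .TRUE., .TRUE., .FALSE. /)   ! keep x and y, drop the z direction
   CALL MPI_CART_SUB(cart_comm, remain_dims, slice_comm, ierr)
   ! each process gets the sub-communicator of its own constant-z plane;
   ! only the processes with coord(3) .EQ. 1 then use theirs for the I/O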

—Problem—

When I do:
  CALL MPI_COMM_RANK(MPI_COMM_2D_SLICE, rank_2D, code)
I get a segmentation fault… Did you have this problem?

I never need to call MPI_COMM_RANK with the 'slice' communicator.
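
One thing worth checking, in case it is the cause of the crash: if MPI_COMM_CREATE is called on every rank of MPI_COMM_WORLD, the ranks that are not in the group get MPI_COMM_NULL back, and calling MPI_COMM_RANK on MPI_COMM_NULL is not valid. A guard along these lines (sketched with the variable names from your post) would avoid that:

   IF (MPI_COMM_2D_SLICE /= MPI_COMM_NULL) THEN
      CALL MPI_COMM_RANK(MPI_COMM_2D_SLICE, rank_2D, code)
   END IF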

3 - Do you mean that instead of MPI_COMM_WORLD I should use MPI_COMM_2D_SLICE here?

    comm = MPI_COMM_2D_SLICE
    CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)

the relevant part of my code to make sure that only those at the bottom
of my domain write to file looks like this:

            IF (my_coordinates(3) .EQ. 0) THEN
               CALL h5pcreate_f(H5P_FILE_ACCESS_F, fapl_id, error)
               CALL h5pset_fapl_mpio_f(fapl_id, pmlzcomm, MPI_INFO_NULL, error)
               CALL h5fopen_f(H5filename, H5F_ACC_RDWR_F, file_id, error, &
                    access_prp = fapl_id)

               !!! I only need this barrier in one of the clusters we use, don't know why...
               CALL MPI_BARRIER(pmlzcomm, error)

               [...]

               CALL h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, &
                    pml_v2%z_low(slx:elx,sly:ely,:,:), dims_pml, error, &
                    file_space_id = fspace_id, mem_space_id = mspace_id, &
                    xfer_prp = dxpl_id)

               CALL h5sclose_f(fspace_id, error)
               CALL h5sclose_f(mspace_id, error)
               CALL h5dclose_f(dset_id, error)
               CALL h5pclose_f(fapl_id, error)
               CALL h5pclose_f(dxpl_id, error)
               CALL h5fclose_f(file_id, error)
            END IF

Cheers,

···

--
Ángel de Vicente
http://www.iac.es/galeria/angelv/

Angel,
Thank you for your help, I now have my horizontal and vertical slices :smiley:
Cheers!
Maria

···
