Hi! I've been using this amazing tool for a couple of months. I've already
implemented it in several serial codes of mine and it has worked perfectly.
My problem appears when I start using more than one process: I have compiled
all the tutorials presented on the site, but I have not been able to
parallelize my own datasets. Below this message I have pasted the part of the
code where I try to parallelize the writing routine. In this case I am using
a one-dimensional coordinate variable "z"; the variable ztot holds the number
of faces of the domain owned by each process (I write all my variables on the
faces).

Another problem I think I will have is related to the 3D variables. All the
examples on the site work with a 2D array (parallelization and hyperslabs).
What will be different when the dataset is 3D? I have put my guesses for both
questions after the pasted snippet. Thanks for your time!
dims(1)=ztot
dims(2)=ytot
dims(3)=xtot
!
dimsz(1)=ztot
dimsy(1)=ytot
dimsx(1)=xtot
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL
CALL MPI_INIT(mpierror)
CALL MPI_COMM_SIZE(comm, mpi_size, mpierror)
CALL MPI_COMM_RANK(comm, mpi_rank, mpierror)
CALL h5open_f(error)
CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)
!
CALL h5fcreate_f(filename, H5F_ACC_TRUNC_F, file_id, error, &
                 access_prp = plist_id)
CALL h5pclose_f(plist_id, error)
CALL h5screate_simple_f(rank2, dimsz, z_space_id, error)
CALL h5dcreate_f(file_id, dsetname_z, H5T_NATIVE_DOUBLE, z_space_id, &
z_id, error)!, plist_id)
CALL h5pcreate_f(H5P_DATASET_XFER_F, plist_id, error)
CALL h5pset_dxpl_mpio_f(plist_id, H5FD_MPIO_COLLECTIVE_F, error)
CALL h5dwrite_f(z_id, H5T_NATIVE_DOUBLE, z(1:ztot), data_dims2, error, &
                xfer_prp = plist_id)
CALL h5sclose_f(z_space_id, error)
CALL h5dclose_f(z_id, error)
CALL h5pclose_f(plist_id, error)
CALL h5fclose_f(file_id, error)
!
CALL h5close_f(error)
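
To make the question easier to test, here is a condensed, standalone version of
what I think the z write should look like once each rank selects its own
hyperslab. This is only my sketch, not working code from my solver: the names
ztot_global, zoff, filespace, memspace, the dataset name "z", the file name
"coords.h5" and the toy value ztot=4 are all mine, and I use MPI_ALLREDUCE /
MPI_EXSCAN just to get the global size and the per-rank offset.

PROGRAM write_z_parallel
  USE HDF5
  USE MPI
  IMPLICIT NONE
  INTEGER :: mpierror, error, mpi_size, mpi_rank, comm, info
  INTEGER :: ztot, ztot_global, zoff
  INTEGER(HID_T) :: file_id, plist_id, z_id, filespace, memspace
  INTEGER(HSIZE_T), DIMENSION(1) :: dims_global, dims_local, offset
  DOUBLE PRECISION, ALLOCATABLE :: z(:)

  comm = MPI_COMM_WORLD
  info = MPI_INFO_NULL
  CALL MPI_INIT(mpierror)
  CALL MPI_COMM_SIZE(comm, mpi_size, mpierror)
  CALL MPI_COMM_RANK(comm, mpi_rank, mpierror)

  ! toy data: here every rank owns ztot faces; in my real code ztot varies per rank
  ztot = 4
  ALLOCATE(z(ztot))
  z = DBLE(mpi_rank)

  ! global length of the dataset and this rank's starting offset along z
  CALL MPI_ALLREDUCE(ztot, ztot_global, 1, MPI_INTEGER, MPI_SUM, comm, mpierror)
  CALL MPI_EXSCAN(ztot, zoff, 1, MPI_INTEGER, MPI_SUM, comm, mpierror)
  IF (mpi_rank == 0) zoff = 0      ! MPI_EXSCAN leaves the result undefined on rank 0

  CALL h5open_f(error)

  ! create the file collectively with an MPI-IO file access property list
  CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
  CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)
  CALL h5fcreate_f("coords.h5", H5F_ACC_TRUNC_F, file_id, error, access_prp = plist_id)
  CALL h5pclose_f(plist_id, error)

  ! the dataset is sized for ALL ranks; h5dcreate_f is called by every rank
  dims_global(1) = ztot_global
  CALL h5screate_simple_f(1, dims_global, filespace, error)
  CALL h5dcreate_f(file_id, "z", H5T_NATIVE_DOUBLE, filespace, z_id, error)

  ! each rank selects the slab it owns in the file and a matching memory space
  dims_local(1) = ztot
  offset(1)     = zoff
  CALL h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, dims_local, error)
  CALL h5screate_simple_f(1, dims_local, memspace, error)

  ! collective write of only the local piece
  CALL h5pcreate_f(H5P_DATASET_XFER_F, plist_id, error)
  CALL h5pset_dxpl_mpio_f(plist_id, H5FD_MPIO_COLLECTIVE_F, error)
  CALL h5dwrite_f(z_id, H5T_NATIVE_DOUBLE, z, dims_local, error, &
                  mem_space_id = memspace, file_space_id = filespace, xfer_prp = plist_id)

  CALL h5sclose_f(memspace, error)
  CALL h5sclose_f(filespace, error)
  CALL h5dclose_f(z_id, error)
  CALL h5pclose_f(plist_id, error)
  CALL h5fclose_f(file_id, error)
  CALL h5close_f(error)
  CALL MPI_FINALIZE(mpierror)
END PROGRAM write_z_parallel

If I understand the tutorials, the only real change from my snippet above is
creating the dataset with the global size, selecting each rank's slab with
h5sselect_hyperslab_f, and passing both dataspaces to h5dwrite_f; please
correct me if that is wrong.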
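
And this is my guess at how the selection changes for a 3D variable, assuming
each rank owns a contiguous slab along z (the first index, following the dims
ordering in my snippet) of a global (ztot_global, ytot, xtot) dataset. It
reuses the setup from the sketch above; dims3_global, count3, offset3,
dset3_id and var3d are again made-up names:

! 3D sketch only (my guess): the dataspaces and the start/count arrays become
! rank 3, everything else (property lists, collective write) stays the same.
INTEGER(HID_T) :: dset3_id
INTEGER(HSIZE_T), DIMENSION(3) :: dims3_global, count3, offset3
DOUBLE PRECISION, ALLOCATABLE :: var3d(:,:,:)     ! local block, shape (ztot, ytot, xtot)

dims3_global = (/ ztot_global, ytot, xtot /)      ! dataset sized for the whole domain
CALL h5screate_simple_f(3, dims3_global, filespace, error)
CALL h5dcreate_f(file_id, "var3d", H5T_NATIVE_DOUBLE, filespace, dset3_id, error)

count3  = (/ ztot, ytot, xtot /)                  ! block held by this rank
offset3 = (/ zoff, 0, 0 /)                        ! where that block starts globally
CALL h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset3, count3, error)
CALL h5screate_simple_f(3, count3, memspace, error)

CALL h5dwrite_f(dset3_id, H5T_NATIVE_DOUBLE, var3d, count3, error, &
                mem_space_id = memspace, file_space_id = filespace, xfer_prp = plist_id)

So, if I understand correctly, the only difference for a 3D dataset is the rank
of the dataspaces and the length of the start/count arrays?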
···
--
Marcelo Maia Ribeiro Damasceno
MFLab - Laboratório de Mecânica dos Fluidos
Universidade Federal de Uberlândia
Uberlândia, MG - Brasil
Phone: +55 34 9922-1979