no specific subroutine for the generic ‘h5dread_f’

Hello,

I am having trouble with a code that uses HDF5. The code is written in
Fortran 90 and consists of a main program (procorr.f90) and a
module (module_correlation_functions.f90).

After using the makefile to compile the code, I get the following error:

gfortran -O3 -c module_correlation_functions.f90 -I/usr/local/hdf5/include
-L/usr/local/hdf5/lib /usr/local/hdf5/lib/libhdf5hl_fortran.a
/usr/local/hdf5/lib/libhdf5_hl.a /usr/local/hdf5/lib/libhdf5_fortran.a
/usr/local/hdf5/lib/libhdf5.a -lz -ldl -lm -Wl,-rpath
-Wl,/usr/local/hdf5/lib
module_correlation_functions.f90:1583:68:

                 call h5dread_f(dset_id,H5T_IEEE_F32BE,x,npart,error)
                                                                    1
Error: There is no specific subroutine for the generic ‘h5dread_f’ at (1)
module_correlation_functions.f90:1588:68:

                 call h5dread_f(dset_id,H5T_IEEE_F32BE,y,npart,error)
                                                                    1
Error: There is no specific subroutine for the generic ‘h5dread_f’ at (1)
module_correlation_functions.f90:1593:68:

                 call h5dread_f(dset_id,H5T_IEEE_F32BE,z,npart,error)
                                                                    1
Error: There is no specific subroutine for the generic ‘h5dread_f’ at (1)
makefile:38: recipe for target 'module_correlation_functions.o' failed
make: *** [module_correlation_functions.o] Error 1

The makefile I used to compile the code includes the location of the HDF5
library:

LIBSHDF=-I/usr/local/hdf5/include -L/usr/local/hdf5/lib /usr/local/hdf5/lib/libhdf5hl_fortran.a /usr/local/hdf5/lib/libhdf5_hl.a /usr/local/hdf5/lib/libhdf5_fortran.a /usr/local/hdf5/lib/libhdf5.a -lz -ldl -lm -Wl,-rpath -Wl,/usr/local/hdf5/lib
FCFLAGS = -O3
# List of executables to be built within the package
PROGRAMS = procorr

# "make" builds all
all: $(PROGRAMS)

procorr.o: module_correlation_functions.o
procorr: module_correlation_functions.o

# ======================================================================
# And now the general rules, these should not require modification
# ======================================================================

# General rule for building prog from prog.o; $^ (GNU extension) is
# used in order to list additional object files on which the
# executable depends
%: %.o
        $(FC) $(FCFLAGS) -o $@ $^ $(LIBSHDF)

# General rules for building prog.o from prog.f90 or prog.F90; $< is
# used in order to list only the first prerequisite (the source file)
# and not the additional prerequisites such as module or include files
%.o: %.f90
        $(FC) $(FCFLAGS) -c $^ $(LIBSHDF)

# Utility targets
.PHONY: clean veryclean

clean:
        rm -f *.o *.mod *.MOD
        rm -f .last_fourier_transform
        rm -f cdm_redshift0_*
        rm -f *~ $(PROGRAMS)

The h5dread_f calls are located in the module file
(module_correlation_functions.f90).

I am probably doing something wrong in the makefile, because I used the same
HDF5 library location to compile another Fortran 90 + HDF5 code without any
trouble. Could you please help me?

The gfortran version used is GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0,
and the HDF5 version is hdf5-1.8.19, compiled with the --enable-fortran
option.

Kind Regards,


--
Guido

Can you include how you declared your arguments in h5dread_f? I would suspect that one of your arguments is wrong and the compiler is not finding the correct interface.

Scot
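In the HDF5 1.8 Fortran interface, h5dread_f is a generic: the compiler only accepts a call whose argument types exactly match one of the specific subroutines, and anything else produces exactly this "no specific subroutine" message. A rough sketch of the expected shape (paraphrased for illustration, not the library's literal declaration):

```fortran
! Rough shape of one specific h5dread_f interface in HDF5 1.8
! (paraphrased sketch; names here are illustrative, not the library source):
subroutine h5dread_f_real_1d(dset_id, mem_type_id, buf, dims, hdferr)
  use hdf5, only: hid_t, hsize_t
  integer(hid_t),   intent(in)    :: dset_id      ! dataset identifier
  integer(hid_t),   intent(in)    :: mem_type_id  ! in-memory datatype
  real(4),          intent(inout) :: buf(:)       ! data buffer
  integer(hsize_t), intent(in)    :: dims(:)      ! array of dimensions
  integer,          intent(out)   :: hdferr       ! error code
end subroutine
```

Note in particular that dims must be an INTEGER(HSIZE_T) array; a default-integer scalar in that position will not match any specific subroutine.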


On Nov 4, 2017, at 12:03 AM, Guido granda muñoz <guidogranda@gmail.com> wrote:

> […]
_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@lists.hdfgroup.org
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5

Also, some ".f90" programs require HDF5's "fortran2003" feature as well. This is a
possible cause of the "no specific subroutine for the generic" problem.

Pierre
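For context: in the HDF5 1.8 series the Fortran 2003 interfaces are an opt-in build feature, so a library configured without them exposes fewer specific subroutines. A build enabling them might look like the following (the prefix is just an example matching the paths used in this thread):

```shell
# Example HDF5 1.8.x source build with Fortran 2003 interfaces enabled;
# the prefix /usr/local/hdf5 mirrors the library location used above.
./configure --prefix=/usr/local/hdf5 \
            --enable-fortran \
            --enable-fortran2003
make && make install
```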

On Mon, Nov 06, 2017 at 02:49:44PM +0000, Scot Breitenfeld wrote:
> Can you include how you declared your arguments in h5dread_f? […]
Hello Pierre,

What do you mean by Fortran 90 programs requiring Fortran 2003 features?

Regards,

--
Guido

Hello Scot,

The subroutine that includes the declaration of the arguments is below. The
argument declarations are the following:

        integer(8) :: recl_test,flen_test,iolength_test
        integer :: error,hdferr,hdferr2,rank
        INTEGER(HID_T) :: file_id,dset_id,dataspace_id


-----------------------------------------------------------------
subroutine load_single_file(filename,filetype)

        ! loads particles into
        ! x(:),y(:),z(:),mass(:)
        ! and updates the variables
        ! npart = number of particles

        implicit none
        character(*),intent(in) :: filename
        integer*4,intent(in) :: filetype
        logical*4 :: loadmasses
        integer*8 :: file_size
        integer*4 :: bytes_per_particle,i
        integer :: allocate_status

        ! guido debug
        integer(8) :: recl_test,flen_test,iolength_test
        integer :: error,hdferr,hdferr2,rank
        INTEGER(HID_T) :: file_id,dset_id,dataspace_id

        if (allocated(x)) deallocate(x)
        if (allocated(y)) deallocate(y)
        if (allocated(z)) deallocate(z)
        if (allocated(mass)) deallocate(mass)

        loadmasses = filetype<0

        if (loadmasses) then
            bytes_per_particle = 16
        else
            bytes_per_particle = 12
        end if

        if (abs(filetype) == 2) then ! Simple binary file

            ! determine number of particles
            inquire(file=trim(filename), size=file_size)
            npart = file_size/int(bytes_per_particle,8)
            if (int(npart,8)*int(bytes_per_particle,8).ne.file_size) then
                write(*,'(A)')
                write(*,'(A)') 'Format of input file not recognized. Consider specifying a different format using -input.'
                stop
            end if

            ! load particles
            allocate(x(npart),y(npart),z(npart),mass(npart))
            open(1,file=trim(filename),action='read',form='unformatted',status='old',access='stream')
            if (loadmasses) then
                read(1) (x(i),y(i),z(i),mass(i),i=1,npart)
            else
                read(1) (x(i),y(i),z(i),i=1,npart)
                mass = 1.0
            end if
            close(1)

        else if (abs(filetype) == 3) then ! Simple ascii file

            ! determine number of particles
            npart = 0
            open(1,file=trim(filename),action='read',form='formatted',status='old')
            stat = 0
            do while (stat==0)
                read(1,*,IOSTAT=stat) xempty
                if (stat.ne.0) exit
                npart = npart+1
            end do
            close(1)

            allocate(x(npart),y(npart),z(npart),mass(npart))
            open(1,file=trim(filename),action='read',form='formatted',status='old')
            if (loadmasses) then
                do i = 1,npart
                    read(1,*) x(i),y(i),z(i),mass(i)
                end do
            else
                do i = 1,npart
                    read(1,*) x(i),y(i),z(i)
                end do
                mass = 1.0
            end if
            close(1)

        else if (abs(filetype) == 4) then ! Gadget file binary

            write(*,*) "The record length is: ",recl_test

            if(.true.) then
                call h5open_f(error)
                call h5fopen_f(trim(filename)//'.hdf5',H5F_ACC_RDONLY_F,file_id,error)

                call h5dopen_f(file_id,'x',dset_id,error)
                call h5dget_space_f(dset_id,dataspace_id,hdferr)
                call h5sget_simple_extent_npoints_f(dataspace_id,npart,hdferr2)
                write(*,*) 'The number of particles is :',npart
                allocate(x(npart),y(npart),z(npart),mass(npart),stat=allocate_status)
                call H5LTread_dataset_float_f(dset_id,'x',x)
                call h5dclose_f(dset_id,error)

                call h5dopen_f(file_id,'y',dset_id,error)
                call H5LTread_dataset_float_f(dset_id,'y',y)
                !call h5dread_f(dset_id,H5T_IEEE_F32BE,y,npart,error)
                call h5dclose_f(dset_id,error)

                call h5dopen_f(file_id,'z',dset_id,error)
                call H5LTread_dataset_float_f(dset_id,'z',z)
                !call h5dread_f(dset_id,H5T_IEEE_F32BE,z,npart,error)
                call h5dclose_f(dset_id,error)

                call h5fclose_f(file_id,error)
            endif
            if(allocate_status /= 0) then
                 write(*,'(A)') 'memory problem.!'
            else
                 write(*,'(A)') 'memory is ok.'
            endif
            if (loadmasses) then
                read(1) (mass(i),i=1,npart)
            else
                mass = 1.0
            end if
            !close(1)

        end if

        if (npart==huge(npart)) then
            write(*,'(A)')
            write(*,'(A)') 'No single file can contain more than 2^31 particles.'
            stop
        end if

    end subroutine load_single_file
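One likely cause of the original error: the commented-out calls pass npart, a scalar, where the generic expects an INTEGER(HSIZE_T) dimensions array. A sketch of the x-dataset read shaped to match the generic (untested here; dims_h is a hypothetical local name):

```fortran
! Sketch only: h5dread_f arguments shaped to match a specific interface.
! dims_h is a hypothetical INTEGER(HSIZE_T) array replacing the scalar npart.
integer(hsize_t) :: dims_h(1)

dims_h(1) = int(npart, hsize_t)
call h5dopen_f(file_id, 'x', dset_id, error)
! H5T_NATIVE_REAL describes the in-memory buffer; HDF5 converts from the
! stored (possibly big-endian) file type during the read.
call h5dread_f(dset_id, H5T_NATIVE_REAL, x, dims_h, error)
call h5dclose_f(dset_id, error)
```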

2017-11-06 13:00 GMT-05:00 <hdf-forum-request@lists.hdfgroup.org>:

Send Hdf-forum mailing list submissions to
        hdf-forum@lists.hdfgroup.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_
lists.hdfgroup.org

or, via email, send a message with subject or body 'help' to
        hdf-forum-request@lists.hdfgroup.org

You can reach the person managing the list at
        hdf-forum-owner@lists.hdfgroup.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Hdf-forum digest..."

Today's Topics:

   1. Memory allocation/deallocation (Andreas Derler)
   2. Re: no specific subroutine for the generic ?h5dread_f?
      (Scot Breitenfeld)

----------------------------------------------------------------------

Message: 1
Date: Mon, 6 Nov 2017 09:59:43 +0100
From: Andreas Derler <andreas.derler@wirecube.at>
To: hdf-forum@lists.hdfgroup.org.
Subject: [Hdf-forum] Memory allocation/deallocation
Message-ID: <5e5c0745-3a23-a188-a535-9e662af499ec@wirecube.at>
Content-Type: text/plain; charset=utf-8; format=flowed

Hi,

I am trying to use the Java HDF5 interface (JHI5) in an application server
environment, where I am writing
to many different HDF5 files within a single JVM instance.

However, while using HDF5 I am running into memory issues. Basically, I am
facing the issue that writing
to an HDF5 file causes memory to be allocated, which, even after
successful writing and
closing the file is not deallocated.

So I would like to know if there is a way to clear all allocated memory
after writing to the file using the JHI5 library.

I already made sure that everything is closed and also tried to limit the
cache size using H5Pset_cache and H5Pset_chunk_cache.
However, changing cache sizes does not eliminate the problem, that the
memory is not deallocated after closing the file.
Also calling the function H5garbage_collect does not seem to change this
behaviour.
I saw in the docs that the native implementation provides the call
H5Pset_evict_on_close (https://support.hdfgroup.org/
HDF5/doc/RM/RM_H5P.html#Property-SetEvictOnClose),
however, this call seems not to be available in the JHI5 version.

Is there any other way to make sure that all memory is deallocated, or am
I doing something wrong? To this end, I am posting an example code I am
using:

final long[] dims = { 0, 0 };
final long[] maxdims = { HDF5Constants.H5S_UNLIMITED,
HDF5Constants.H5S_UNLIMITED };
final int RANK = 2;
long cache_size = 1024L*1024; // cache size in bytes

try {
   dims[0] = data.length; // num rows
   dims[1] = data[0].length; // num cols
   int file_id = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
   H5.H5Pset_cache(file_id, 0, 521L, cache_size, 1);
   file_id = H5.H5Fcreate(filename, HDF5Constants.H5F_ACC_TRUNC,
HDF5Constants.H5P_DEFAULT, file_id);
   int dataspace_id = H5.H5Screate_simple(RANK, dims, maxdims);
   int dataset_access_property_list_id = H5.H5Pcreate(HDF5Constants.
H5P_DATASET_ACCESS);
   H5.H5Pset_chunk_cache(dataset_access_property_list_id, 521L,
cache_size, 1);
   int dataset_creation_property_list_id = H5.H5Pcreate(HDF5Constants.
H5P_DATASET_CREATE);
   long[] dim_chunk = { dims[1], 1 };
   H5.H5Pset_chunk(dataset_creation_property_list_id, RANK, dim_chunk);
   int dataset_id = H5.H5Dcreate(file_id, path, HDF5Constants.H5T_NATIVE_DOUBLE,
dataspace_id,
                 HDF5Constants.H5P_DEFAULT, dataset_creation_property_list_id,
dataset_access_property_list_id);
   H5.H5Dwrite(dataset_id, HDF5Constants.H5T_NATIVE_DOUBLE,
HDF5Constants.H5S_ALL, HDF5Constants.H5S_ALL,
         HDF5Constants.H5P_DEFAULT, data);

   H5.H5Fflush(dataset_id, HDF5Constants.H5F_SCOPE_GLOBAL);
   H5.H5Dclose(dataset_id);
   H5.H5Sclose(dataspace_id);
   H5.H5Pclose(dataset_creation_property_list_id);
   H5.H5Pclose(dataset_access_property_list_id);
   H5.H5Fclear_elink_file_cache(file_id);
   H5.H5Pclose(file_id);
   H5.H5Fclose(file_id);
   H5.H5garbage_collect();
} catch (final Exception e) {
   e.printStackTrace();
}

------------------------------

Message: 2
Date: Mon, 6 Nov 2017 14:49:44 +0000
From: Scot Breitenfeld <brtnfld@hdfgroup.org>
To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>
Subject: Re: [Hdf-forum] no specific subroutine for the generic
        ?h5dread_f?
Message-ID: <E3DC4B0F-7690-4408-B94E-8F12CE3FA8EB@hdfgroup.org>
Content-Type: text/plain; charset="utf-8"

Can you include how you declared your arguments in h5dread_f? I would
suspect that one of your arguments is wrong and the compiler is not finding
the correct interface.

Scot

> On Nov 4, 2017, at 12:03 AM, Guido granda mu?oz <guidogranda@gmail.com> > wrote:
>
> Hello,
>
> I am having trouble with a code that uses hdf5. The code is written in
fortran90 it consists of a main program(proccor.f90) and a
module(module_correlation_functions.f90).
>
> After using the makefile to compile the code, I get the following error:
>
> gfortran -O3 -c module_correlation_functions.f90
-I/usr/local/hdf5/include -L/usr/local/hdf5/lib
/usr/local/hdf5/lib/libhdf5hl_fortran.a /usr/local/hdf5/lib/libhdf5_hl.a
/usr/local/hdf5/lib/libhdf5_fortran.a /usr/local/hdf5/lib/libhdf5.a -lz
-ldl -lm -Wl,-rpath -Wl,/usr/local/hdf5/lib
> module_correlation_functions.f90:1583:68:
>
> call h5dread_f(dset_id,H5T_IEEE_F32BE,x,npart,error)
> 1
> Error: There is no specific subroutine for the generic ?h5dread_f? at (1)
> module_correlation_functions.f90:1588:68:
>
> call h5dread_f(dset_id,H5T_IEEE_F32BE,y,npart,error)
> 1
> Error: There is no specific subroutine for the generic ?h5dread_f? at (1)
> module_correlation_functions.f90:1593:68:
>
> call h5dread_f(dset_id,H5T_IEEE_F32BE,z,npart,error)
> 1
> Error: There is no specific subroutine for the generic ‘h5dread_f’ at (1)
> makefile:38: recipe for target 'module_correlation_functions.o' failed
> make: *** [module_correlation_functions.o] Error 1
>
>
> The makefile I used to compile the code includes the location of the
hdf5 library:
>
>
> LIBSHDF=-I/usr/local/hdf5/include -L/usr/local/hdf5/lib
/usr/local/hdf5/lib/libhdf5hl_fortran.a /usr/local/hdf5/lib/libhdf5_hl.a
/usr/local/hdf5/lib/libhdf5_fortran.a /usr/local/hdf5/lib/libhdf5.a -lz
-ldl -lm -Wl,-rpath -Wl,/usr/local/hdf5/lib
> FCFLAGS = -O3
> # List of executables to be built within the package
> PROGRAMS = procorr
>
> # "make" builds all
> all: $(PROGRAMS)
>
> procorr.o: module_correlation_functions.o
> procorr: module_correlation_functions.o
>
> # ======================================================================
> # And now the general rules, these should not require modification
> # ======================================================================
>
> # General rule for building prog from prog.o; $^ (GNU extension) is
> # used in order to list additional object files on which the
> # executable depends
> %: %.o
> $(FC) $(FCFLAGS) -o $@ $^ $(LIBSHDF)
>
> # General rules for building prog.o from prog.f90 or prog.F90; $< is
> # used in order to list only the first prerequisite (the source file)
> # and not the additional prerequisites such as module or include files
> %.o: %.f90
> $(FC) $(FCFLAGS) -c $^ $(LIBSHDF)
>
> # Utility targets
> .PHONY: clean veryclean
>
> clean:
> rm -f *.o *.mod *.MOD
> rm -f .last_fourier_transform
> rm -f cdm_redshift0_*
> rm -f *~ $(PROGRAMS)
>
> The call hdf5 statement is located on the module file
(module_correlation_functions.f90)
>
> I am probably doing something wrong in the makefile, because I used the
same hdf5 library location to compile another fortran90+hdf5 code without
any trouble. Could you please help me?
>
> The gfortran version used is: GNU Fortran (Ubuntu
5.4.0-6ubuntu1~16.04.4) 5.4.0
> and the hdf5 version is: hdf5-1.8.19 compiled with the enable fortran
option.
>
>
> Kind Regards,
>
> --
> Guido
> _______________________________________________
> Hdf-forum is for HDF software users discussion.
> Hdf-forum@lists.hdfgroup.org
> http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
> Twitter: https://twitter.com/hdf5

------------------------------

Subject: Digest Footer

_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@lists.hdfgroup.org
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org

------------------------------

End of Hdf-forum Digest, Vol 101, Issue 5
*****************************************

--
Guido

npart should be an array and not a scalar. If you use the F2003 interface, then you don’t need to supply npart.

Scot
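In other words, the dims argument of h5dread_f must be a rank-1 INTEGER(HSIZE_T) array, not an integer scalar. A minimal sketch of a call that matches one of the specific interfaces (assuming dset_id, npart and a default-real x(:) as in the quoted code; note the memory datatype should normally be H5T_NATIVE_REAL — the file datatype H5T_IEEE_F32BE describes the on-disk layout, not the buffer in memory):

```fortran
! Sketch only: dims must be a rank-1 INTEGER(HSIZE_T) array for the
! generic h5dread_f to resolve to a specific subroutine.
integer(HSIZE_T) :: dims(1)

dims(1) = npart                                   ! number of elements to read
call h5dread_f(dset_id, H5T_NATIVE_REAL, x, dims, error)
```

The same kind check applies to h5sget_simple_extent_npoints_f: its output argument is also INTEGER(HSIZE_T), so passing a default-integer npart there would fail to match that interface as well.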


On Nov 8, 2017, at 5:59 PM, Guido granda muñoz <guidogranda@gmail.com> wrote:

Hello Scot,

The subroutine that contains the h5dread_f calls is below. The relevant argument declarations are:

        integer(8) :: recl_test,flen_test,iolength_test
        integer :: error,hdferr,hdferr2,rank
        INTEGER(HID_T) :: file_id,dset_id,dataspace_id
-----------------------------------------------------------------
subroutine load_single_file(filename,filetype)

        ! loads particles into
        ! x(:),y(:),z(:),mass(:)
        ! and updates the variables
        ! npart = number of particles

        implicit none
        character(*),intent(in) :: filename
        integer*4,intent(in) :: filetype
        logical*4 :: loadmasses
        integer*8 :: file_size
        integer*4 :: bytes_per_particle,i
        integer :: allocate_status

        ! guido debug
        integer(8) :: recl_test,flen_test,iolength_test
        integer :: error,hdferr,hdferr2,rank
        INTEGER(HID_T) :: file_id,dset_id,dataspace_id

        if (allocated(x)) deallocate(x)
        if (allocated(y)) deallocate(y)
        if (allocated(z)) deallocate(z)
        if (allocated(mass)) deallocate(mass)

        loadmasses = filetype<0

        if (loadmasses) then
            bytes_per_particle = 16
        else
            bytes_per_particle = 12
        end if

        if (abs(filetype) == 2) then ! Simple binary file

            ! determine number of particles
            inquire(file=trim(filename), size=file_size)
            npart = file_size/int(bytes_per_particle,8)
            if (int(npart,8)*int(bytes_per_particle,8).ne.file_size) then
                write(*,'(A)')
                write(*,'(A)') 'Format of input file not recognized. Consider specifying a different format using -input.'
                stop
            end if

            ! load particles
            allocate(x(npart),y(npart),z(npart),mass(npart))
            open(1,file=trim(filename),action='read',form='unformatted',status='old',access='stream')
            if (loadmasses) then
                read(1) (x(i),y(i),z(i),mass(i),i=1,npart)
            else
                read(1) (x(i),y(i),z(i),i=1,npart)
                mass = 1.0
            end if
            close(1)

        else if (abs(filetype) == 3) then ! Simple ascii file

            ! determine number of particles
            npart = 0
            open(1,file=trim(filename),action='read',form='formatted',status='old')
            stat = 0
            do while (stat==0)
                read(1,*,IOSTAT=stat) xempty
                if (stat.ne.0) exit
                npart = npart+1
            end do
            close(1)

            allocate(x(npart),y(npart),z(npart),mass(npart))
            open(1,file=trim(filename),action='read',form='formatted',status='old')
            if (loadmasses) then
                do i = 1,npart
                    read(1,*) x(i),y(i),z(i),mass(i)
                end do
            else
                do i = 1,npart
                    read(1,*) x(i),y(i),z(i)
                end do
                mass = 1.0
            end if
            close(1)

        else if (abs(filetype) == 4) then ! Gadget file binary

            write(*,*) "The record length is: ",recl_test

            if(.true.) then
                call h5open_f(error)
                call h5fopen_f(trim(filename)//'.hdf5',H5F_ACC_RDONLY_F,file_id,error)

                call h5dopen_f(file_id,'x',dset_id,error)
                call h5dget_space_f(dset_id,dataspace_id,hdferr)
                call h5sget_simple_extent_npoints_f(dataspace_id,npart,hdferr2)
                write(*,*) 'The number of particles is :',npart
                allocate(x(npart),y(npart),z(npart),mass(npart),stat=allocate_status)
                call H5LTread_dataset_float_f(dset_id,'x',x)
                call h5dclose_f(dset_id,error)

                call h5dopen_f(file_id,'y',dset_id,error)
                call H5LTread_dataset_float_f(dset_id,'y',y)
                !call h5dread_f(dset_id,H5T_IEEE_F32BE,y,npart,error)
                call h5dclose_f(dset_id,error)

                call h5dopen_f(file_id,'z',dset_id,error)
                call H5LTread_dataset_float_f(dset_id,'z',z)
                !call h5dread_f(dset_id,H5T_IEEE_F32BE,z,npart,error)
                call h5dclose_f(dset_id,error)

                call h5fclose_f(file_id,error)
            endif
            if(allocate_status /= 0) then
                 write(*,'(A)') 'memory problem.!'
            else
                 write(*,'(A)') 'memory is ok.'
            endif
            if (loadmasses) then
                read(1) (mass(i),i=1,npart)
            else
                mass = 1.0
            end if
            !close(1)

        end if

        if (npart==huge(npart)) then
            write(*,'(A)')
            write(*,'(A)') 'No single file can contain more than 2^31 particles.'
            stop
        end if

    end subroutine load_single_file
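For completeness, the F2003-style interface Scot mentions drops the dims argument entirely: the buffer is passed as a TYPE(C_PTR). A hedged sketch (this assumes HDF5 was built with the Fortran 2003 bindings enabled, and that x carries the TARGET attribute so C_LOC can be applied):

```fortran
! Sketch only: the F2003 interface takes a TYPE(C_PTR) buffer and no dims.
use iso_c_binding
real, allocatable, target :: x(:)
type(c_ptr) :: buf

buf = c_loc(x(1))
call h5dread_f(dset_id, H5T_NATIVE_REAL, buf, error)
```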

--
Guido