How to compile v1.10.7 on RHEL with oneAPI: error: C compiler cannot create executables

What options do I need to get --enable-fortran to work? Just --enable-parallel works without errors.

I’ve tried:
./configure --prefix=/cluster/shared/apps/hdf5p/1.10.7 --enable-fortran --enable-parallel CC=mpiicc FC=mpif90 CXX=mpiicpc CFLAGS="-fPIC -O3 -xHost -ip -fno-alias -align" FFLAGS="-fPIC -O3 -xHost -ip -fno-alias -align" CXXFLAGS="-fPIC -O3 -xHost -ip -fno-alias -align" FFLAGS="-I$INTEL/oneapi/mpi/latest/include -L$INTEL/oneapi/mpi/latest/lib"

As well as FC=mpiifort

checking for gcc... mpiicc
checking whether the C compiler works... no
configure: error: in `/cluster/home/me/hdf5-hdf5-1_10_7':
configure: error: C compiler cannot create executables
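
For what it's worth, a quick way to see the underlying error without digging through config.log is to repeat configure's first probe by hand. This is only a sketch: `mpiicc` stands in for whichever wrapper you passed via `CC=`.

```shell
# Reproduce configure's "C compiler works" check manually so the real
# error message is printed directly instead of buried in config.log.
CC=${CC:-mpiicc}

cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF

if "$CC" conftest.c -o conftest 2>compile.err; then
    echo "$CC can create executables"
else
    # In this thread the wrapper itself ran but called 'icc', which
    # oneAPI 2024 no longer ships -- hence exit status 127 in config.log.
    echo "$CC is broken:"
    cat compile.err
fi
```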

./configure --prefix=/cluster/shared/apps/hdf5p/1.10.7 --enable-fortran --enable-parallel CC=mpiicc FC=mpiifort CXX=mpiicpc CFLAGS="-fPIC -O3 -xHost -ip -fno-alias -align" FFLAGS="-fPIC -O3 -xHost -ip -fno-alias -align" CXXFLAGS="-fPIC -O3 -xHost -ip -fno-alias -align" FFLAGS="-I$INTEL/oneapi/mpi/latest/include -L$INTEL/oneapi/mpi/latest/lib"
Modules loaded:

1) oneapi/tbb/2021.11
2) oneapi/oclfpga/2024.0.0
3) oneapi/compiler-rt/2024.0.2
4) oneapi/vtune/2024.0
5) oneapi/mpi/2021.11
6) oneapi/mkl/2024.0
7) oneapi/intel_ippcp_intel64/2021.9
8) oneapi/intel_ipp_intel64/2021.10
9) oneapi/ifort/2024.0.2
10) oneapi/dpl/2022.3
11) oneapi/dpct/2024.0.0
12) oneapi/dnnl/3.3.0
13) oneapi/dev-utilities/2024.0.0
14) oneapi/debugger/2024.0.0
15) oneapi/dal/2024.0.0
16) oneapi/compiler/2024.0.2
17) oneapi/ccl/2021.11.2
18) oneapi/advisor/2024.0
19) oneapi/2024.0.0/2024.0.0
20) oneapi/hpctoolkit/mpi/2021.11
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by HDF5 configure 1.10.7, which was
generated by GNU Autoconf 2.69.  Invocation command line was

$ ./configure --prefix=/cluster/shared/apps/hdf5p/1.10.7 --enable-fortran --enable-parallel CC=mpiicc FC=mpiifort CXX=mpiicpc CFLAGS=-fPIC -O3 -xHost -ip -fno-alias -align FFLAGS=-fPIC -O3 -xHost -ip -fno-alias -align CXXFLAGS=-fPIC -O3 -xHost -ip -fno-alias -align FFLAGS=-I/cluster/shared/apps/oneapi/mpi/latest/include -L/cluster/shared/apps/oneapi/mpi/latest/lib

config.log.

The config.log file is not accessible. Can you upload it here?

Hi, @rk3199!

Please try

  1. HDF5 1.14.4.2 (latest release)
  2. Intel oneAPI 2024.1 (latest release)
  3. ./configure --prefix=/tmp CXX="$(which mpiicpc) -cc=$(which icpx)" CC="$(which mpiicc) -cc=$(which icx)" FC="$(which mpiifort) -fc=$(which ifx)" LDFLAGS="-L/opt/intel/oneapi/mpi/latest/lib" --enable-fortran --enable-parallel

Here’s a complete working example of a GitHub Action:

Here’s a proof:

I hope this helps!

See also: instaltion of hdf5 with latest intel compilers - #5 by hyoklee

1 Like

[quote]The config.log file is not accessible. Can you upload it here?[/quote]

Here is a copy of rk3199’s original config.log, which I looked at yesterday.
config.log (45.2 KB)

That upload site “file.io” is weird. I think it was set up to delete the file after a single download, which I may have done accidentally.

Yes, it is right there on their website: “As soon as it has been received by the intended recipient, your file is gone forever!”

“Intended recipient.” Huh!

1 Like

From the log file, it appears to be an issue with your mpiicc wrapper. Maybe you are supposed to use a different wrapper instead of mpiicc on your cluster?

configure:4692: mpiicc  -fPIC -O3 -xHost -ip -fno-alias -align     conftest.c  >&5
/cluster/shared/apps/oneapi/hpctoolkit/mpi/2021.11/bin/mpiicx: line 539: icc: command not found
configure:4696: $? = 127
configure:4734: result: no
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "HDF5"
| #define PACKAGE_TARNAME "hdf5"
| #define PACKAGE_VERSION "1.10.7"
| #define PACKAGE_STRING "HDF5 1.10.7"
| #define PACKAGE_BUGREPORT "help@hdfgroup.org"
| #define PACKAGE_URL ""
| #define PACKAGE "hdf5"
| #define VERSION "1.10.7"
| /* end confdefs.h.  */
| 
| int
| main ()
| {
| 
|   ;
|   return 0;
| }

Thanks for the replies! One of our researchers was asking for the older version. I tried the suggestion:
./configure --prefix=/tmp CXX="$(which mpiicpc) -cc=$(which icpx)" CC="$(which mpiicc) -cc=$(which icx)" FC="$(which mpiifort) -fc=$(which ifx)" LDFLAGS="-L/cluster/shared/apps/oneapi/hpctoolkit/mpi/latest/lib" --enable-fortran --enable-parallel

These errors seem fatal:

icx: error: unknown argument '-qversion'; did you mean '--version'?
conftest.c:11:10: fatal error: 'ac_nonexistent.h' file not found
   11 | #include <ac_nonexistent.h>
      |          ^~~~~~~~~~~~~~~~~~
1 error generated.

config.log (124.6 KB)

Ah, yes, if you want Fortran enabled, you need Autoconf 2.71 and a rerun of autogen.sh because, for some reason, Intel introduced a compiler flag starting with “-l” (the -loopopt=1 below), which Autoconf 2.69 interprets as a -l&lt;library&gt; request. This made us require Autoconf 2.71, which causes all sorts of headaches on older systems. You might have more success using CMake instead.
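
The parsing problem can be sketched in a few lines. This is roughly what Autoconf 2.69's Fortran link-flag detection (the AC_FC_LIBRARY_LDFLAGS macro) does; the sample link line below is an abridged, made-up stand-in for real verbose compiler output, not verbatim ifx output.

```shell
# Autoconf scrapes the Fortran compiler's verbose link line and keeps
# every token shaped like -l<library>. ifx's optimizer flag -loopopt=1
# matches that pattern, so it gets kept as a bogus "library".
verbose_link_line='-loopopt=1 -lifcore -limf -O3'
for tok in $verbose_link_line; do
    case $tok in
        -l*) echo "kept as a library: $tok" ;;   # -loopopt=1 matches too!
    esac
done
```

The bogus -loopopt=1 “library” then lands on later link lines, producing exactly the ld error quoted below; newer Autoconf filters it out.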

/usr/bin/ld: cannot find -loopopt=1
icx: error: linker command failed with exit code 1 (use -v to see invocation)
configure:7644: $? = 1
configure: failed program was:
1 Like

Fortunately we have a test node where I installed Autoconf 2.71 from Fedora, and sure enough it works. Hope this helps someone down the line.

1 Like

More notes for hdf5-1.10.7 + oneAPI 2024.1 users:

  1. GitHub provides hdf5-1_10_7.tar.gz. You can also run autoreconf instead of autogen.sh.
  2. 1.10.7 test fails:
    Testing contiguous, no data type conversion (char->char) *FAILED*
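
A small sketch for checking the Autoconf prerequisite before regenerating configure (the 2.71 floor comes from the discussion above; `sort -V` does the version comparison):

```shell
# Check that the installed autoconf is new enough for the ifx -loopopt fix
# before running autoreconf -ivf (or ./autogen.sh) in the source tree.
need=2.71
have=$(autoconf --version 2>/dev/null | sed -n '1s/.* //p')
if [ -n "$have" ] && [ "$(printf '%s\n' "$need" "$have" | sort -V | head -1)" = "$need" ]; then
    echo "autoconf $have is new enough; run autoreconf -ivf"
else
    echo "autoconf too old or missing; install >= $need first"
fi
```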

Reference: actions/.github/workflows/lin-auto-icx-f-10.7.yml at main · hyoklee/actions

1 Like

I do see an autogen.sh file from https://github.com/HDFGroup/hdf5/archive/refs/tags/hdf5-1_10_7.tar.gz

-rwxr-xr-x 1 me user 8471 Oct 16 2020 autogen.sh

Thank you so much for checking it thoroughly!
My first GH Action failed and I thought it was due to a missing autogen.sh.

Intel shared the article referencing this.

1 Like

[quote=“brtnfld, post:5, topic:12257”]
From the log file, it appears to be an issue with your mpiicc wrapper. Maybe you are supposed to use a different wrapper instead of mpiicc on your cluster?
[/quote] Your cluster requires a different wrapper instead of mpiicc?

Some HPC facilities have their own MPI wrappers that they would rather have users use instead of the stock mpi* ones. This is common on Cray systems, where you have PrgEnv-* modules and the compiler wrappers (cc for C, ftn for Fortran) add the required link options and compiler flags.
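
To illustrate what such a site wrapper is, here is a toy mock (not any real facility's script; the paths are invented): it is just a shell script that injects site-specific flags and then delegates to the real compiler.

```shell
# Toy mock of a site compiler wrapper -- illustrative paths only.
mkdir -p wrapdemo
cat > wrapdemo/cc <<'EOF'
#!/bin/sh
# inject site MPI flags, then delegate to the real compiler
exec ${REAL_CC:-gcc} -I/opt/site/mpi/include "$@" -L/opt/site/mpi/lib
EOF
chmod +x wrapdemo/cc

wrapdemo/cc --version | head -1   # delegates to the underlying compiler
```

configure is then pointed at the wrapper (e.g. CC=cc CXX=CC FC=ftn on Cray) exactly as with CC=mpiicc, and the wrapper decides which back-end compiler and MPI flags get used.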