HDF5 Linux soname versioning


Hi HDF5 folks,

I was wondering: how is the soname of HDF5 shared object files (.so files) on Linux decided?

For the HDF5 1.10.5 and 1.10.6 shared libraries, the on-disk filenames differ (libhdf5.so.103.1.0 versus libhdf5.so.103.2.0, respectively), but the built-in sonames (and therefore the filenames of the symlinks to these libraries) are the same:

$ objdump -x libhdf5.so.103.1.0 | grep SONAME
SONAME libhdf5.so.103
$ objdump -x libhdf5.so.103.2.0 | grep SONAME
SONAME libhdf5.so.103

Nevertheless, trying to run an application that was built against HDF5 1.10.5, using a shared library built from HDF5 1.10.6, causes an immediate abort:

Warning! HDF5 library version mismatched error
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Headers are 1.10.5, library is 1.10.6
Aborted (core dumped)

This doesn’t seem to me to be in the spirit of how SONAME versioning is supposed to work on Linux. When two different library versions have the same SONAME, this is supposed to mean that applications linked against the older version can be safely run against the newer library version.

If the two library versions are actually incompatible and it is not safe to use them this way (as implied by the error message and forced abort), then the build process ought to give them different sonames. Otherwise – that is, if they are compatible – this error message should not be issued at runtime.



The libraries are compatible; the headers are another story. It is possible to avoid that error, perhaps with an environment variable or a compile definition (you would need to check the docs).

Now, a different answer to the same question: the library is runtime compatible – you can drop the 1.10.6 version in over 1.10.5. At compile time, though, you really should use headers that match the library.



I don’t really understand what it means for two library versions to be (runtime) compatible but their headers not, I’m afraid. One cannot reasonably write libhdf5 client code [at least in C or C++] without ever #include’ing a library header.

So if you’re saying that different header versions are not compatible, then this is functionally the same as saying that library versions can never be mixed. Which is fine, but then it seems to me that the soname of HDF5 built as a shared object ought to change with every single release.

On the other hand, if there are cases where it would be OK to mix versions…

Let’s posit that it is never guaranteed to be safe to use an OLDER libhdf5.so version with an application built against NEWER HDF5 headers. I agree it makes perfect sense to have a runtime check for that, and optionally to let users override it with the HDF5_DISABLE_VERSION_CHECK environment variable.

But for an application that was built against OLDER HDF5 headers, is it OK to upgrade libhdf5.so.<soversion> underneath it or not? For any two adjacent library versions, this question presumably has a clear yes-or-no answer. If “no”, then the soname should be changed between those library versions – that’s the whole rationale behind soname versioning. (Then it is the runtime linker that prevents the user from getting in trouble.)

If this rule is strictly followed, then when the soname hasn’t changed but the runtime library is newer than the version in the headers the application was originally compiled with, there is no reason for the runtime error. The user shouldn’t have to set an environment variable to get “safe” behavior rather than a program abort.

I agree that it’s always ideal for an application to be compiled against, and loaded at runtime with, the same library version. But for whatever reason that isn’t always feasible or convenient.

Thanks for considering,


What I mean by runtime compatible:
Say you have an application built with 1.10.5 installed. 1.10.6 is API compatible, and the 1.10.6 library can replace the 1.10.5 library in the installation. You can continue to rebuild your application with the 1.10.5 headers and library, and it will work with an installed 1.10.6 or 1.10.7.

However, 1.10.6 introduced compile-time header changes that only 1.10.6 knows about (the public API did not change!), and that is why the header check happens. Options to override that check are available.



You can continue to rebuild your application with the 1.10.5 headers and library and it will work with installed 1.10.6, 1.10.7.

I am glad to hear that this is the intended behavior. My observation is that in this situation, it gives the error I partly quoted above and then aborts – unless I have set the HDF5_DISABLE_VERSION_CHECK environment variable.

My concern about the current situation has two parts. The first is that, as an ISV employee, it would be nice to be able to send our users updated copies of HDF5 shared objects – say, to fix a CVE – without rebuilding all our own software. Requiring our users to set the HDF5_DISABLE_VERSION_CHECK variable themselves is not feasible, and setting it ourselves (with putenv() or in a wrapper shell script) could mask situations where a version mismatch is actually a problem, e.g. if the user has a polluted LD_LIBRARY_PATH from, for instance, a Conda install.
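To make the masking concern concrete, here is a minimal sketch of what such a wrapper would do before any HDF5 call runs. HDF5_DISABLE_VERSION_CHECK is the real HDF5 environment variable; the helper function is hypothetical, and only the C standard library is used:

```c
#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>

/* Hypothetical helper an ISV wrapper might call before the first HDF5
   API call (which is when the library performs its version check).
   Setting the variable to "1" prints the mismatch warning but lets the
   program continue; "2" suppresses the warning as well. Returns 0 on
   success. Note that this disables the check for BOTH directions of
   version skew, which is exactly the masking problem described above. */
int disable_hdf5_version_check(const char *level) {
    return setenv("HDF5_DISABLE_VERSION_CHECK", level, /*overwrite=*/1);
}
```

(setenv() is used instead of putenv() only to avoid putenv()'s string-lifetime pitfalls; either works, as does exporting the variable in a wrapper shell script.)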

The second part of my concern is that version skew in both directions is treated as an equal problem. In fact, per your comments it should generally be OK to use an application built against 1.10.5 headers with the 1.10.6 runtime library. But (as you state) because of the header changes in 1.10.6, it will not be OK to use an application built against 1.10.6 headers with the 1.10.5 runtime library. I understand and accept this. However, the version check treats these two situations equally even though they are asymmetric – and if we (or our users) have to override it to accommodate the “safe” situation, then we also have to accept the possibility of accidentally masking the “unsafe” situation.

Would it be possible to get the best of both worlds, where the header version check is triggered only in the “unsafe” situation where the runtime library is an older version than the headers used when the application was compiled? Or does something about the implementation make this not feasible?
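As a sketch of what such an asymmetric check could look like – this is illustrative only; the real check lives in HDF5's H5check_version(), and the function name and policy below are my own assumptions:

```c
#include <stdbool.h>

/* Hypothetical one-directional version check: version skew is treated
   as safe only when the runtime library is at least as new as the
   headers the application was compiled against, within the same
   major.minor branch. A different branch is left for soname versioning
   to catch. (cmaj, cmin, crel) is the compiled-against header version;
   (rmaj, rmin, rrel) is the runtime library version. */
bool hdf5_runtime_is_safe(unsigned cmaj, unsigned cmin, unsigned crel,
                          unsigned rmaj, unsigned rmin, unsigned rrel) {
    if (rmaj != cmaj || rmin != cmin)
        return false;      /* different branch: let the soname decide */
    return rrel >= crel;   /* same branch: newer-or-equal runtime is OK */
}
```

Under this policy, an application built against 1.10.5 headers would load fine with a 1.10.6 runtime (the common distribution case), while an application built against 1.10.6 headers would still be stopped from running against a 1.10.5 library.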

(By the way, the situation of an application built against older headers but using a newer runtime is very common in Linux distributions. And for this reason, it appears that Debian has simply patched out the version check in its copy of libhdf5 [1], and Fedora has run up against it multiple times [2], [3].)

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=693610
[2] https://lists.fedoraproject.org/pipermail/devel/2014-January/193470.html
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1648654

Thanks again for your time! Apologies if I seem to be beating a dead horse here.



Short answer: yes, there has been plenty of anguish about this. That check has a long history of arguments for and against it.

However, I think you could use a “runtime” 1.10.5 library in place of a 1.10.6 library. The .so number in question is the “API” compatibility version: as long as that number is the same, the public API matches. I believe the third number is the issue at hand – if that is different, then the headers are different. So it’s possible that if you use the CPP interface, those headers may not have changed even though the core library did. NOTE: I have not verified this.



I’ve been arguing for years (decades?) for HDF5 to drop this ridiculous runtime version check, to no avail. It makes maintaining HDF5 for Linux distributions far more difficult than it should be, and it is an antiquated practice.