What to do when two shared libraries use different versions of the same 3rd-party library? (Linux)

I have a process A that uses two shared libraries, libA.so and libB.so, which were written by different people. Unfortunately, libA.so uses version 1.0 of the third-party library libD.so, while libB.so uses version 2.0 of that library in static form, libD.a. I know that if libA.so and libB.so both used libD.so, errors might occur because of global symbol interposition. Does this situation have the same problem?
I know the link flag -Bsymbolic can be used when building libA.so or libB.so to force symbols to be resolved within that library first. To make process A run correctly, must both libraries be linked with this flag? However, I don't have the source code of libA.so, so I cannot re-link it.
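For illustration, relinking libB.so with that flag would look roughly like this (the object and archive names are hypothetical, and libD.a would have to be built with -fPIC to go into a shared object):
gcc -shared -o libB.so b1.o b2.o -Wl,-Bsymbolic ./libD.a   # bind libB's own references to libD symbols inside libB.so first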
More generally, if one process uses two third-party libraries that both contain the same additional third-party library, will the same thing happen? Is there anything I can do to solve this problem?

This may or may not help you, but given the lack of information I'm hoping it at least sparks an idea or leads you to something similar.
This is an application that allows you to alter your shell settings on a per-directory basis:
https://github.com/zimbatm/direnv
It sounds like you actually have an issue that would require recompiling one of your libraries from source, though. That's not ideal, but if there is no build against a compatible third-party version, you might look for a completely different library to accomplish the original task.

Related

Avoid dynamic linking in dependencies

I am developing a project against a custom Linux system and I am having trouble with dynamic libraries that are referenced by dependencies.
Is there a way to know beforehand whether a dependency pulls in dynamically linked libraries? Is it possible to somehow avoid those libraries? I want to have a static binary (musl didn't work for me because one dependency doesn't compile with it).
Thanks
If you're compiling against glibc, you'll need to have at least some dynamic linking. While it is possible to statically link glibc, that isn't a supported configuration since the name service switch won't work in such a case.
In general, you should expect a build-dependency on cc or pkg-config to be an indicator of the use of a C or C++ library. That isn't a guarantee either way, but it is probably going to be the case the vast majority of the time. Some of those libraries will be able to be linked statically, but of course if you do that you must recompile your code every time any of your dependencies has a security update or you'll have a vulnerability. There's unfortunately no clear way to tell whether static linking is an option in such a case other than looking at the build.rs or the documentation of the crate.
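As a quick check after building (the binary path here is just an example), you can see which shared libraries the result still requires:
ldd target/release/myapp                        # lists the shared libraries the binary needs at run time
readelf -d target/release/myapp | grep NEEDED   # the same information straight from the ELF dynamic section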

How to detect missing symbols in shared library with libtool

As stated, I want to be able to check that a shared library, created by libtool, is not missing any symbols.
I have written a library that is built as a shared library, 'A'. It depends in turn on another library 'B'.
The other library 'B' does not follow strict semver, and so sometimes introduces new functions in minor or patch releases.
Although I try to put appropriate #if B_LIB_VERSION >= 42 guards in my library's code so that it does not attempt to call a function in library B that won't be available, I apparently sometimes get the version wrong. This causes an error when the program is run.
Is it possible with libtool, or any other tool, to ask it to produce a list of all the symbols that are not found in a shared library, or any of the libraries that it will load?
As stated, I want to be able to check that a shared library, created by libtool, is not missing any symbols.
That's hard to do with shared libraries, as they are designed to allow for late symbol resolution. If you're not using dlopen-style features, you might be able to build a static executable from static versions of A and B and look for missing symbols.
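A rough sketch of that idea, with hypothetical file names; linking a small test program fully statically turns any symbol A expects from B but cannot find into a hard link-time error:
gcc -static -o selftest selftest.c libA.a libB.a   # unresolved references are reported by ld instead of failing at run time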
The other library 'B' does not follow strict semver, and so sometimes introduces new functions in minor or patch releases.
I'd seriously consider searching for a replacement library rather than having to keep dealing with their dependency issues.
Is it possible with libtool, or any other tool, to ask it to produce a list of all the symbols that are not found in a shared library, or any of the libraries that it will load?
No, not really. nm will give you a list of symbols that are undefined (and referenced) in a shared library. objdump might be of some use as well. On Linux, ldd might do some of what you want. But generally there is no way of knowing exactly what a shared library will load, even without considering dlopen.
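For example (the library file name is a placeholder), to list the undefined dynamic symbols of a shared library and check whether they resolve:
nm -D --undefined-only libA.so    # symbols libA.so expects to get from somewhere else
objdump -T libA.so | grep UND     # the same list via objdump
ldd -r libA.so                    # -r additionally reports function/data symbols that fail to resolve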
libltdl might be of some use also if you have to stick with the misbehaving library. At least you can figure out at runtime if libB.42 has symbol xyz or not. It's not as easy as the conditional code way of doing things.
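A crude command-line analogue of that runtime check, if you just want to know up front whether the installed copy of B exports a given symbol (path and names hypothetical):
nm -D --defined-only /usr/lib/libB.so.42 | grep -w xyz && echo "xyz is available"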

GNU/Debian Linux and LD

Let's say I have a massive project consisting of multiple dynamic libraries that will all be installed to /usr/lib or /usr/lib64. Now let's say that one of the libraries calls into another of the compiled libraries. If I place both of the interdependent libraries in the same location, will ld allow the two libraries to call each other?
The answer is perhaps yes, but it is very bad design to have circular references between two libraries (e.g. liba.so contains function fa, which calls function fb from libb.so, which in turn calls function ga from liba.so).
You should merge the two libraries into one libbig.so. And don't worry, libraries can be quite big (some corporations have Linux libraries of several hundred megabytes of code).
The gold linker from package binutils-gold on Debian should be useful to you. It works faster than the older linker from binutils.
Yes, as long as their location is in the set of directories the dynamic linker searches for libraries. You can extend this set with the LD_LIBRARY_PATH environment variable.
See this manual; it will resolve your questions.
If you mean the runtime dynamic linker /lib/ld-linux* (as opposed to /usr/bin/ld), it will look for libraries in your LD_LIBRARY_PATH and in the default search path, which typically includes /usr/lib and /usr/lib64.
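For example, to make a non-standard install location visible to the runtime linker (paths are hypothetical):
export LD_LIBRARY_PATH=/opt/myproj/lib:$LD_LIBRARY_PATH                          # per-session override
ldd /opt/myproj/bin/app                                                          # verify that both libraries now resolve
echo /opt/myproj/lib | sudo tee /etc/ld.so.conf.d/myproj.conf && sudo ldconfig   # or register the directory system-wide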
In general, /lib/ld-* are used for .so libraries at run-time; /usr/bin/ld is used for .a libraries at compile-time.
However, if your libraries use dlopen() or similar to find one another (e.g. plug-ins), they may have other mechanisms for doing so. For example, many plug-in systems use dlopen to load every library found in one or more specific directories.

Any downsides to using statically linked applications on Linux?

I've seen several discussions here on the subject, but wanted to ask about my particular situation:
If I have some third-party libraries that my application uses, and I'd like to link them in statically to save myself the hassle with LD_LIBRARY_PATH, etc., is there any downside to this on Linux other than a larger file size?
Also, is it possible to statically link only some libraries, and link others (the standard Linux libraries) dynamically?
Thanks.
It is indeed possible to dynamically link against some libraries and statically link against others.
It sounds like what you really want to do is dynamically link against the system libraries, and statically link against the nonstandard ones that a user may not have installed (or that different users may have different installations of).
That's perfectly reasonable.
It's not generally a good idea to statically link against system libraries, especially libc.
It can often make sense to statically link against libraries that do not come with the OS and that will not be distributed with your application.
There are some bits of libc - those that use nsswitch - that need to load libraries dynamically. This can cause problems if you want to produce a completely static binary.
Statically linking your third-party libraries into your application should be completely fine.
The statically linked binary will be larger than if you had used a shared library, but I find that disadvantage is outweighed by avoiding library-path hassles, provided I control the distribution of all the libraries involved. If you are dependent on a particular distro's shared libraries, then you have no choice but to use dynamic linking.
The main disadvantage I see is that your application loses any automatic bug fixes that might be applied to a shared library. On the flip side, you don't pick up new bugs either.
Static linking does not just affect the file size of the library, it also affects the memory footprint and start up time of the application. Dynamically linked libraries are loaded once no matter how many programs use them. Statically linked libraries must be loaded once per program that uses them (because they are now part of that program).
To answer your second question, yes, it is possible to have dynamic and static libraries linked to the same application. Just be careful to avoid interlibrary dependencies so you don't have a problem with library order. You should be able to list the libraries in any arbitrary order. Where I work, we prefer to list them alphabetically.
Edit: To link against a library, use the flag -lfoo; note that the linker prefers the shared libfoo.so over the static libfoo.a when both are found, so to force static linking either name libfoo.a on the command line or use -Wl,-Bstatic. To add a directory to the library search path, use -L/path/to/libdir.
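A sketch of such a mixed link line (library names and paths are hypothetical):
gcc -o myapp main.o -L/opt/thirdparty/lib -Wl,-Bstatic -lfoo -lbar -Wl,-Bdynamic -lpthread -lm
# -Bstatic forces the archive versions of libfoo/libbar; -Bdynamic switches back so libc, pthread and m stay shared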
Edit: You don't have to link a dynamic library at all. Your program can use a function such as dlopen() to open a dynamic library at run time, or you can link it at compile time and the linker will resolve the symbols but not include their code in the binary.
Static linking will make your binary bulky, but you won't need a shared version of the library in the target runtime environment. This is especially useful when developing embedded apps.

Loading multiple shared libraries with different versions

I have an executable on Linux that loads libfoo.so.1 (that's its SONAME) as one of its dependencies (via another shared library). It also links to another system library which, in turn, links to the system version, libfoo.so.2. As a result, both libfoo.so.1 and libfoo.so.2 are loaded during execution, and code that was supposed to call functions from the version 1 library ends up calling (binary-incompatible) functions from the newer system library, because some symbol names are the same in both versions. The result is usually stack smashing and a subsequent segfault.
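One can confirm that both versions really do end up in the process, for example (the binary name is a placeholder):
ldd ./myapp | grep libfoo               # shows both libfoo.so.1 and libfoo.so.2 being pulled in
grep libfoo /proc/$(pidof myapp)/maps   # or inspect the mappings of the running process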
Now, the library which links against the older version is a closed-source third-party library, and I can't control what version of libfoo it compiles against. Assuming that, the only other option left is rebuilding a bunch of system libraries that currently link with libfoo.so.2 to link with libfoo.so.1.
Is there any way to avoid replacing system libraries with local copies that link to the older libfoo? Can I load both libraries and have the code call the correct versions of the symbols? Do I need some kind of special symbol-level versioning?
You may be able to do some version script tricks:
http://sunsite.ualberta.ca/Documentation/Gnu/binutils-2.9.1/html_node/ld_26.html
This may require writing a wrapper around your lib that pulls in libfoo.so.1, exporting some symbols explicitly and marking all others as local. For example:
MYSYMS {
    global:
        foo1;
        foo2;
    local:
        *;
};
and use it when you link the wrapper, like this:
gcc -shared -Wl,--version-script,mysyms.map -o libwrapper.so wrapper.o -L/path/containing/libfoo.so.1 -lfoo
This should make libfoo.so.1's symbols local to the wrapper and not available to the main exe.
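To double-check the result, dump the wrapper's dynamic symbol table and confirm that only the intended symbols are exported (names taken from the example above):
nm -D --defined-only libwrapper.so                       # should show little beyond foo1 and foo2
readelf --dyn-syms libwrapper.so | grep -E 'foo1|foo2'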
I can only come up with a work-around, which would be to statically link the version of the "system library" that you are using. For your static build, you could make it link against the same old version as the third-party library, provided it does not rely on the newer version.
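A sketch of what that link might look like, with every name hypothetical (libsys.a stands for the system library rebuilt against libfoo 1.x):
gcc -o app main.o /usr/local/lib/libsys.a -L/opt/foo1/lib -lthirdparty -lfoo
# linking the rebuilt static libsys.a means no libfoo.so.2 dependency is pulled in;
# -lfoo then resolves to the old libfoo.so.1 that the closed-source third-party library expects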
Perhaps it is also possible to avoid these problems by not linking against the third-party library in the ordinary way. Instead, your program could load it at execution time (with dlopen), which might keep its symbols isolated from the rest. But I don't know much about that.
