Linking to a specific .so version with gcc/ld/cmake, rejecting others - linux

Is it possible to link to a specific shared library with g++/cmake such that my application will not run if the exact version is not present on the target machine? Ultimately, I don't want to use any library versions I haven't directly tested with.
I've seen this question, but it doesn't handle the case of rejecting versions.
I understand that the dynamic linker does do this to some extent via the SONAME, e.g. libmylib.so.0 won't satisfy an application requiring libmylib.so.1. But is there a way to discriminate at finer version resolution than the SONAME provides (e.g. only link if libmylib.so.1.5.3 is present)? Or is this just bad practice?

Related

Linking 2 conflicting versions of a library

A third-party vendor is releasing a prebuilt security library to me and I do not have access to its code or makefiles. This library is compiled against specific versions of openssl & protobuf. Problem is, the app I work on, Chromium, is also using modified versions of these 2 libraries (well, technically boringssl is not openssl, but they share symbols). They are compiled with the Chromium source and linked in statically. When I add the security library to Chromium, I end up with 2 conflicting versions of the libraries and objects that are compiled against different headers. This of course leads to runtime crashes and unpredictable results. Is there anything I can do to make sure that everything is linked properly and symbols do not clash?
Is there anything I can do to make sure that everything is linked properly and symbols do not clash?
Your only real choices are:
Use dlopen(..., RTLD_LOCAL) on the 3rd-party library.
Ask the vendor to give you a version built against the Chromium tree.
Stop using this 3rd party library altogether.
The solution proposed by Petesh -- link openssl and protobuf statically into the 3rd party library and hide their symbols -- looks like another possibility, but has licensing implications. It looks like both protobuf and openssl allow for binary redistribution, so this may actually work, but IANAL.

Is it possible to compile a portable executable on Linux based on yum or rpm?

Usually one rpm depends on many other packages or libs, which makes massive deployment without internet access difficult. Since yum can automatically resolve dependencies, is it possible to build a portable executable, so that we can copy it to other machines with the same OS?
If you want a known collection of RPMs to install, yum offers a downloadonly plugin. With that, you should be able to collect all the associated RPMs in one shot to install what you wanted on a disconnected machine.
The general way to build a binary without runtime library dependencies is to build it statically, i.e. using the -static argument to gcc, which links static versions of the required libraries into the resulting executable. This doesn't bundle in any data-file dependencies or external executables (i.e. libexec-style helpers), but simpler applications often don't need them.
For more complex needs (where data files are involved, or elements of the dependency chain can't be linked in for one reason or another), consider using AppImageKit -- which bundles an application and its dependency chain into a runnable ISO. See docs/links at PortableLinuxApps.org.
In neither of these cases does rpm or yum have anything to do with it. It's certainly possible to build an RPM that packages static executables, but that's a matter of changing the %build section of the spec file such that it passes -static to gcc, not of doing anything RPM-specific.
To be clear, by the way -- there are compelling reasons why we don't use static libraries all the time!
Using shared libraries means that applying a security update to a library only means replacing the library itself, not recompiling all applications using it.
Using shared libraries is more memory-efficient, since the single shared copy of the library in memory can be used by multiple applications.
Using shared libraries means your executables don't need to include full copies of all the libraries they use, making them much smaller.

rpm upgrading shared object used by other program

I am generating rpm-A that has program P-A.1.1, and two libs L-A.1.1 and L-B.1.1.
L-A.1.1 changes some APIs it used to expose compared to its previous version - L-A.1.0
Say the machine had another program P-B.1.0 that uses L-A.1.0.
Will installing rpm-A break program P-B.1.0?
Will L-A.1.1 co-exist with L-A.1.0?
If you are upgrading the package that previously provided L-A.1.0, and the new version of the package only provides the L-A.1.1 version of the library and no longer provides the old one, then RPM will not allow that upgrade to occur without being forced, because it would break P-B.1.0.
You have a number of options to handle this sort of thing.
You can provide both libraries in the same package.
You can change the package name (e.g. gnupg/gnupg2 or iptables/iptables-ipv6, though those were both done for slightly different reasons than this).
You can use library symbol versioning to have your library expose both APIs at the same time (I believe).
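The symbol-versioning option uses a GNU ld version script plus .symver directives, so a single libL-A.so.1 can carry both the old and the new flavor of a changed function. A rough, untested sketch (the version-node names LIBLA_1.0/LIBLA_1.1 and the function foo are hypothetical):

```
/* libla.map -- GNU ld version script defining two interface versions */
LIBLA_1.0 { global: foo; local: *; };
LIBLA_1.1 { global: foo; } LIBLA_1.0;

/* In the library source, bind one implementation to each version node;
 * "@@" marks the default version used when linking new programs:
 *
 *   __asm__(".symver foo_v10, foo@LIBLA_1.0");
 *   __asm__(".symver foo_v11, foo@@LIBLA_1.1");
 *
 * Build with:
 *   gcc -shared -fPIC -Wl,--version-script=libla.map -o libL-A.so.1 la.c
 */
```

Old binaries linked against foo@LIBLA_1.0 keep resolving to the old implementation, while newly linked programs get foo@@LIBLA_1.1.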

Why do NSS modules have to end in .so.2 on Linux?

I've built a Name Service Switch Module for Red Hat Linux.
Using strace, I've determined that the OS looks for the library in various directories, but only for files with the extension .so.2 (e.g. libnss_xxx.so.2, where xxx is the service name)
Why doesn't it look for .so or .so.1 libraries? Is there any guarantee that it won't stop looking for .so.2 libraries and start looking for .so.3 libraries in future?
EDIT: http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html, says that the 2 is 'a version number that is incremented whenever the interface changes'.
So I guess that:
The version of NSS requires version 2 of the libraries.
An OS update with an updated NSS might require a different version number.
Can someone confirm whether that is true?
Your assumption is generally true, with a minor edit:
The version of NSS requires a version of the libraries with interface version 2.
An OS update with an updated NSS might require a different version number.
The version of an interface does not necessarily need to change with the version of the library, i.e. a newer version of the library might still provide the same interface.
There are two types of so files: shared libraries (scanned for symbols at link time, then loaded and linked at program startup) and modules (loaded and linked at run time). The idea of shared libraries is that your program requires a certain version of the library. This version is determined at compile time. Once the program is compiled, it should continue to work even if a new (incompatible) version of the library is installed. This means that the new version must be a different file, so old programs can still use the old library while newer (or more recently compiled) programs use the newer version.
To properly use this system, your program must somehow make sure that the library version it needs will continue to be installed. This is one of the tasks of a distribution's packaging system. The package containing your program must have a dependency on the required version of the library package.
However, you seem to be talking about modules. Things are different there. They do not carry such a version, because ld.so (which takes care of loading shared libraries) isn't the one loading them. Your program should be bundled with those modules, so the module versions are always compatible with the program using them. This works for most programs.
But it doesn't work if your program allows third-party modules, so such programs can come up with their own versioning system. This seems to be what NSS has done (I'm not familiar with it, though). It has defined a protocol version (currently 2), which specifies what a module must look like: which symbols need to be defined, what arguments its functions expect, that sort of thing. If you create a module following version 2 of the protocol, you should name your module .so.2 (because that's their way of checking your supported version). If they create a new, incompatible protocol 3, they will start looking for .so.3. Your module will no longer be found, and that's a good thing, because it also does not support the new protocol.

Why would it be impossible to fully statically link an application?

I'm trying to compile a statically linked binary with GCC and I'm getting warning messages like:
warning: Using 'getpwnam_r' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
I don't even know what getpwnam_r does, but I assume it's getting called from inside some higher-level API. I receive a similar message for gethostbyname.
Why would it not be possible to just statically link these functions in like every other function?
Functions that need NSS or iconv will open other libraries dynamically at run time, since NSS works through plugins (helper modules such as libnss_files.so.2). When the NSS system dlopen()s these modules, there will be two conflicting versions of glibc: the one your program brought with it (statically compiled in) and the one pulled in by the dlopen()ed NSS dependencies. Shit will happen.
This is why you can't build static programs using getpwnam_r and a few other functions.
AFAIK, it's not impossible to fully statically link an application.
The problem would be incompatibility with newer library versions, which might be completely different. Take printf(), for example: you can statically link it, but what if a future printf() implementation changes radically and is not backward-compatible? Your application would be broken.
Please someone correct me if I'm wrong here.
