I'm trying to compile a statically linked binary with GCC and I'm getting warning messages like:
warning: Using 'getpwnam_r' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
I don't even know what getpwnam_r does, but I assume it's getting called from inside some higher-level API. I receive a similar message for gethostbyname.
Why would it not be possible to just statically link these functions in like every other function?
Functions that need access to NSS or iconv will open other libraries dynamically at runtime, since NSS works through plugins (helper modules such as libnss_files.so.2). When the NSS system dlopen()s these modules, there will be two conflicting copies of glibc: the one your program brought with it (statically compiled in) and the one dlopen()ed by the NSS modules. Shit will happen.
This is why you can't build static programs using getpwnam_r and a few other functions.
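A minimal sketch that reproduces the warning (assuming a glibc toolchain; build with something like gcc -static lookup.c -o lookup):

    /* lookup.c: getpwnam_r consults NSS, which dlopen()s modules such as
       libnss_files.so.2 at run time -- hence the static-linking warning. */
    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        struct passwd pwd, *result;
        char buf[16384];

        if (getpwnam_r("root", &pwd, buf, sizeof buf, &result) == 0 && result)
            printf("uid of root: %d\n", (int)pwd.pw_uid);
        return 0;
    }

The binary builds and may even run, but only as long as the runtime glibc's NSS modules match the glibc version you linked against.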
AFAIK, it's not impossible to fully statically link an application.
The problem would be incompatibility with newer library versions, which might be completely different. Take printf(), for example: you can statically link it, but what if a future printf() implementation changes radically and the new implementation is not backward-compatible? Your application would be broken.
Please someone correct me if I'm wrong here.
Is it possible to link to a specific shared library with g++/cmake such that my application will not run if the exact version is not present on the target machine? Ultimately, I don't want to use any library versions I haven't directly tested with.
I've seen this question, but it doesn't handle the case of rejecting versions.
I understand that the dynamic linker does do this to some extent via the SONAME, e.g. libmylib.so.0 won't link to an application requiring libmylib.so.1. But is there a way to discriminate at higher version resolution than the SONAME supplies (e.g. only link if libmylib.so.1.5.3 is present)? Or is this just bad practice?
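One workaround I can imagine is dlopen()ing the fully versioned file name myself and refusing to start otherwise; a minimal sketch (libmylib.so.1.5.3 is a hypothetical name, and this checks only the file name, not the actual build I tested against):

    /* versioncheck.c: build with gcc versioncheck.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* dlopen() by the fully versioned file name: this fails unless
           exactly libmylib.so.1.5.3 is present on the target machine. */
        void *h = dlopen("libmylib.so.1.5.3", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "required library version missing: %s\n", dlerror());
            return EXIT_FAILURE;
        }
        /* ... look up symbols with dlsym() and proceed ... */
        dlclose(h);
        return EXIT_SUCCESS;
    }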
I have a libsomething.a file which is a static library with all dependencies included.
I need to be able to import this in Python as it is a Python C library. According to this, it is not possible to use a static library as a CPython library.
How can I take my .a file and make it a .so, keeping all static dependencies baked in?
Background: I am using Crowbar to build a CPython shared library which can be called from Python in AWS Lambda. Until now, it has worked flawlessly, but as soon as I added in dependencies which require OpenSSL, I get linker problems when running the code in Lambda.
The issue here is that the Amazon Linux image used to execute code has an ancient OpenSSL version. I have recreated the runtime environment, but the old version of OpenSSL no longer exists in Amazon's yum repository. This means that installing openssl-devel pulls down OpenSSL 1.0.2k, whereas the runtime provides OpenSSL 1.0.1.
This results in linking failing at runtime in Lambda. Therefore, I need a way to build a (mostly) statically linked shared library. The only shared dependencies I want my .so to have are libc and the kernel, with everything else statically compiled in.
In the Lambda execution environment, LD_LIBRARY_PATH is set to /usr/lib64:/lib64:./lib, so anything in the lib folder will be loaded, but only as a last resort, and if I link against OpenSSL, I get the wrong version every time.
In Rust, I have the option of producing liblambda.a or liblambda.so, a static or a shared library. My assumption is that I should produce a *.a and then convert it into a shared library that links only against glibc and the kernel.
No, you cannot do that conversion from static library to shared one (at least not in practice). Read How To Write Shared Libraries by Drepper.
One of the main reasons is that shared libraries want (in fact nearly need) position-independent code, which static libraries usually don't contain.
However, on Linux most libraries are free software. So why don't you recompile your library from source code into a shared library?
(you might perhaps recompile that specific version of OpenSSL from source)
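If you do rebuild from source, the essential point is compiling with position-independent code; a minimal sketch (all file and symbol names here are hypothetical):

    /* mylib.c -- rebuild as a shared object with PIC.
       Hypothetical build line:
         gcc -fPIC -shared mylib.c -o libmylib.so
       To fold an existing static archive into the shared object
       (the archive must itself have been built with -fPIC, which is
       exactly the catch described above):
         gcc -shared -o libcombined.so \
             -Wl,--whole-archive libsomething.a -Wl,--no-whole-archive */
    int mylib_answer(void) { return 42; }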
I'm trying to log the calls made by an app prior to a crash, including libc calls. I've used the -finstrument-functions support in gcc with my own libs but I can't build glibc with this instrumentation.
I added -finstrument-functions to libc_extra_cflags in libc/configure but the build fails with "undefined reference to __libc_multiple_libcs" when linking ld.so.
Just running CFLAGS=-finstrument-functions ./configure doesn't work because the configure tests fail since they don't define __cyg_profile_func_enter/_exit.
I'm currently trying to figure out how to add instrumentation per module (stdlib, io, string, etc.), and from reading the libc/foo/Makefiles it should be possible using e.g. CFLAGS_stdlib = -finstrument-functions, but the flag doesn't show up in the gcc commands.
Is there a way to add per-module flags to the glibc build, or is glibc known not to work with -finstrument-functions ?
I'm trying to log the calls made by an app prior to a crash, including libc calls.
You can use ltrace to trace calls made by the application to any shared library, including GLIBC.
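For example, something like ltrace -o app.trace ./app writes every shared-library call the app makes to app.trace (the -o flag selects the output file). Note that ltrace only sees calls that cross into shared libraries; anything statically linked into the binary is invisible to it.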
is glibc known not to work with -finstrument-functions
Pretty much.
If you think about it, what is your __cyg_profile_func_enter going to do? It can't call into libc, or you'll end up with infinite recursion. It's possible to use direct system calls, but it's far from trivial.
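For illustration only, a hedged sketch of the shape such a hook would take; this is not glibc-ready, since syscall() itself lives in libc, and a production hook would have to issue the syscall instruction directly in assembly:

    /* hooks.c: instrumentation hooks that try to avoid re-entering libc. */
    #include <unistd.h>
    #include <sys/syscall.h>

    /* no_instrument_function keeps gcc from instrumenting the hooks
       themselves, which would recurse infinitely. */
    void __attribute__((no_instrument_function))
    __cyg_profile_func_enter(void *func, void *caller)
    {
        char msg[] = "enter\n";
        syscall(SYS_write, 2, msg, sizeof msg - 1);  /* fd 2 = stderr */
        (void)func; (void)caller;
    }

    void __attribute__((no_instrument_function))
    __cyg_profile_func_exit(void *func, void *caller)
    {
        (void)func; (void)caller;
    }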
A third-party vendor is releasing a prebuilt security library to me, and I do not have access to its code or makefiles. This library is compiled against specific versions of openssl and protobuf. The problem is that the app I work on, chromium, also uses modified versions of these two libraries (well, technically boringssl is not openssl, but they share symbols). They are compiled with the chromium source and linked in statically. When I add the security library to chromium, I end up with two conflicting versions of the libraries and objects compiled against different headers. This of course leads to runtime crashes and unpredictable results. Is there anything I can do to make sure that everything is linked properly and symbols do not clash?
Is there anything I can do to make sure that everything is linked properly and symbols do not clash?
Your only real choices are:
Use dlopen(..., RTLD_LOCAL) on the 3rd-party library (see the sketch below).
Ask the vendor to give you a version built against the Chromium tree.
Stop using this 3rd party library altogether.
The solution proposed by Petesh -- link openssl and protobuf statically into the 3rd party library and hide their symbols -- looks like another possibility, but has licensing implications. It looks like both protobuf and openssl allow for binary redistribution, so this may actually work, but IANAL.
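A minimal sketch of the RTLD_LOCAL route (libvendor.so and vendor_init are hypothetical names):

    /* loadvendor.c: build with gcc loadvendor.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* RTLD_LOCAL keeps the vendor library's symbols (including any
           private openssl/protobuf copies) out of the global namespace,
           so they can't clash with the ones linked into the main binary. */
        void *h = dlopen("libvendor.so", RTLD_NOW | RTLD_LOCAL);
        if (!h) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        int (*vendor_init)(void) = (int (*)(void))dlsym(h, "vendor_init");
        if (vendor_init)
            vendor_init();

        dlclose(h);
        return 0;
    }

Note this only helps if the vendor library bundles its own openssl/protobuf copies; if it expects to resolve those symbols from your process, the clash remains.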
I've built a Name Service Switch Module for Red Hat Linux.
Using strace, I've determined that the OS looks for the library in various directories, but only for files with the extension .so.2 (e.g. libnss_xxx.so.2, where xxx is the service name)
Why doesn't it look for .so or .so.1 libraries? Is there any guarantee that it won't stop looking for .so.2 libraries and start looking for .so.3 libraries in future?
EDIT: http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html, says that the 2 is 'a version number that is incremented whenever the interface changes'.
So I guess that:
The version of NSS requires version 2 of the libraries.
An OS update with an updated NSS might require a different version number.
Can someone confirm whether that is true?
Your assumption is generally true, with a minor edit:
The version of NSS requires a version of the libraries with interface version 2.
An OS update with an updated NSS might require a different version number.
The version of an interface does not necessarily need to change with the version of the library, i.e. a newer version of the library might still provide the same interface.
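For concreteness, a hedged sketch of how the interface version (the SONAME) is set independently of the release version in the file name (all names hypothetical):

    /* foo.c -- hypothetical build line:
         gcc -fPIC -shared foo.c -Wl,-soname,libfoo.so.2 -o libfoo.so.2.7.1
       A later release 2.8.0 can keep the SONAME libfoo.so.2 as long as
       its interface stays compatible, so programs linked against
       interface version 2 keep working. */
    int foo_api(void) { return 2; }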
There are two types of .so files: shared libraries (scanned for symbols at link time, then loaded and linked again at program startup) and modules (loaded and linked at run time). The idea of shared libraries is that your program requires a certain version of the library, determined at link time. Once the program is built, it should continue to work even if a new (incompatible) version of the library is installed. This means that the new version must be a different file, so old programs can still use the old library while newer (or more recently built) programs use the newer version.
To properly use this system, your program must somehow make sure that the library version it needs will continue to be installed. This is one of the tasks of a distribution's packaging system. The package containing your program must have a dependency on the required version of the library package.
However, you seem to be talking about modules. Things are different there. They do not carry such a version, because ld.so (which takes care of loading shared libraries) isn't the one loading them. Your program should be bundled with those modules, so the module versions are always compatible with the program using them. This works for most programs.
But it doesn't work if your program allows third-party modules, so those need to come up with their own versioning system. This seems to be what NSS has done (I'm not familiar with it, though). They have defined a protocol version (currently 2), which specifies what a module should look like: which symbols need to be defined, what arguments the functions expect, that sort of thing. If you create a module following version 2 of the protocol, you should name it .so.2 (because that's their way of checking your supported version). If they create a new, incompatible protocol 3, they will start looking for .so.3. Your module will no longer be found, and that's a good thing, because it also won't support the new protocol.
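For concreteness, a hedged sketch of what a protocol-version-2 NSS module exposes (xxx stands for your service name; a real module would consult its backing store instead of always returning not-found):

    /* nss_xxx.c -- build as: gcc -fPIC -shared nss_xxx.c -o libnss_xxx.so.2
       glibc looks up symbols named _nss_<service>_<function>_r after
       dlopen()ing the module. */
    #include <nss.h>
    #include <pwd.h>
    #include <errno.h>

    enum nss_status
    _nss_xxx_getpwnam_r(const char *name, struct passwd *result,
                        char *buffer, size_t buflen, int *errnop)
    {
        (void)name; (void)result; (void)buffer; (void)buflen;
        *errnop = ENOENT;
        return NSS_STATUS_NOTFOUND;
    }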