Some people link shared object files so that they have no entries in their NEEDED list, and when linking those SO files into a binary, they instead put the transitive dependencies of all the SO files into the NEEDED list of the resulting binary, regardless of whether the binary actually needs those SO files.
The former practice of not linking required SO files against a certain SO file is called underlinking, and the latter practice of linking too many SO files into a certain binary is called overlinking.
I am in a discussion with my coworker about whether this scheme of building an application and its SO files imposes any performance cost, either when building or when running the application or its SO files. For example, is there an additional cost in dynamic symbol resolution through the PLT? Can someone shed some light on this please?
It will certainly change the performance of symbol lookup, probably for the worse. ELF symbol resolution is a breadth-first search starting from the symbol table of the executable itself, then the symbol tables of the DT_NEEDED libraries of the executable, then those libraries' DT_NEEDEDs' symbol tables, etc. By overlinking the main executable you will probably force more symbol lookups to iterate through more libraries' symbol tables.
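If you want to see the effect concretely, you can compare an overlinked and a normally linked build and watch the dynamic linker's work. A rough sketch; libfoo/libbar/libbaz are hypothetical libraries and LD_DEBUG assumes glibc:
# Normal link: --as-needed drops libraries whose symbols are never referenced.
gcc -o app main.o -Wl,--as-needed -lfoo -lbar
# Overlinked: --no-as-needed records every listed library in NEEDED, used or not.
gcc -o app-over main.o -Wl,--no-as-needed -lfoo -lbar -lbaz
# Compare the DT_NEEDED lists of the two binaries.
readelf -d app | grep NEEDED
readelf -d app-over | grep NEEDED
# glibc only: trace how many objects are searched for each symbol lookup.
LD_DEBUG=symbols ./app-over 2>&1 | head -n 40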
I am maintaining a package that uses RcppEigen. The package itself has a modest amount of code (roughly 1000 lines at the moment).
What I don't understand is why the resulting library is so large: 14 MB for my <packagename>.so and 11 MB for <packagename>.o.
I would have imagined that the package would link dynamically to the RcppEigen libraries (thus keeping the binaries of my package relatively small). My guess is that instead it links the libraries statically into my .o and .so files.
Am I correct that this is what happens?
Can I/should I avoid this?
If so, how?
I see in the RcppEigen.package.skeleton documentation that NAMESPACE should include "a useDynLib directive"; it is also present in my NAMESPACE file.
(On a side note, when I submit to CRAN the large package size is NOTEd, but it has not been cause for rejection.)
This is expected behavior. I have not checked, but I expect that the majority of packages using RcppEigen (or RcppArmadillo) get this NOTE. That's because Eigen (and Armadillo) is a header-only library, i.e. it is not dynamically linked. Instead the respective function is compiled into each *.o file. This is potentially even worse than static linking: If a function is used in multiple compilation units, it will end up in multiple *.o files, leading to multiple versions of the same function in the *.so. That is the price we all have to pay for the convenience of header-only libraries. Getting dynamic (or static) linking correct can be really difficult, in particular on Windows.
Concerning the useDynLib: If you look into the NAMESPACE file in your package, you should see a line like useDynLib(<packagename> [...]). That tells R to load the dynamic library associated with your package and is required for any R package using compiled code.
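To convince yourself that the Eigen code really is compiled into the package's shared object rather than pulled in from some external library, a quick check from a shell (the src/ path is an assumption; adjust it to wherever your built .so lives) is:
# No NEEDED entry for any Eigen library should show up...
readelf -d src/<packagename>.so | grep NEEDED
# ...but the object should be full of locally defined Eigen symbols.
nm -C src/<packagename>.so | grep -c 'Eigen::'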
There's a well-known technique for interposing dynamically linked binaries: creating a shared library and using the LD_PRELOAD variable. But it doesn't work for statically linked binaries.
One way is to write a static library that interposes the functions and to link it with the application at compile time. But this isn't practical because re-compiling isn't always possible (think of third-party binaries, libraries, etc.).
So I am wondering if there's a way to interpose statically linked binaries in the same way LD_PRELOAD works for dynamically linked binaries, i.e., with no code changes or re-compilation of existing binaries.
I am only interested in ELF on Linux. So it's not an issue if a potential solution is not "portable".
One way is to write a static library that interposes the functions and to link it with the application at compile time.
One difficulty with such an interposer is that it can't easily call the original function (since it has the same name).
The linker --wrap=<symbol> option can help here.
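A minimal sketch of --wrap, interposing malloc as a stand-in for whatever symbol you actually care about (app.o is a hypothetical, already compiled object file):
# wrap.c: with -Wl,--wrap=malloc the linker redirects calls to malloc to
# __wrap_malloc, and resolves __real_malloc to the original malloc.
cat > wrap.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>

void *__real_malloc(size_t size);   /* bound to the real malloc by ld */

void *__wrap_malloc(size_t size)
{
    void *p = __real_malloc(size);
    fprintf(stderr, "malloc(%zu) = %p\n", size, p);
    return p;
}
EOF
# Re-link only; app.o does not have to be recompiled.
gcc -o app app.o wrap.c -Wl,--wrap=malloc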
But this isn't practical because re-compiling
Re-compiling is not necessary here, only re-linking.
isn't always possible (think of third-party binaries, libraries, etc).
Third-party libraries work fine (relinking), but binaries are trickier.
It is still possible using a displaced-execution technique, but the implementation is quite tricky to get right.
I'll assume you want to interpose symbols in the main executable that came from a static library, which is equivalent to interposing a symbol defined in the executable itself. The question thus reduces to whether it's possible to intercept a function defined in the executable.
This is not possible (EDIT: at least not without a lot of work - see comments to this answer) for two reasons:
by default, symbols defined in the executable are not exported, so they are not accessible to the dynamic linker (you can alter this via --export-dynamic or export lists, but that has unpleasant performance or maintenance side effects)
even if you export the necessary symbols, ELF requires the executable's dynamic symtab to always be searched first during symbol resolution (see section 1.5.4 "Lookup Scope" in dsohowto); the symtab of an LD_PRELOAD-ed library will always follow that of the executable and thus won't have a chance to intercept the symbols
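Both points are easy to check from a shell. A sketch, assuming a hypothetical main.c that defines a function foo() you would like to interpose and a hypothetical preload library myfoo.so:
# Without export flags, foo is in the regular symtab but not the dynamic one,
# so the dynamic linker (and therefore LD_PRELOAD) never sees it.
gcc -o app main.c
nm app | grep ' foo$'       # present
nm -D app | grep ' foo$'    # absent
# With --export-dynamic (gcc spelling: -rdynamic) foo reaches the dynamic symtab...
gcc -rdynamic -o app main.c
nm -D app | grep ' foo$'    # now present
# ...but the executable still comes first in the lookup scope, so the
# definition in myfoo.so still loses to the one in the executable.
LD_PRELOAD=./myfoo.so ./app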
What you are looking for is called binary instrumentation (e.g., using Dyninst or ptrace). The idea is you write a mutator program that attaches to (or statically rewrites) your original program (called mutatee) and inserts code of your choice at specific points in the mutatee. The main challenge usually revolves around finding those insertion points using the API provided by the instrumentation engine. In your case, since you are mainly looking for static symbols, this can be quite challenging and would likely require heuristics if the mutatee is stripped of non-dynamic symbols.
I have a bunch of .a files whose generation process is not under my control, nor are their sources. When I use them for linking, I want to know their dependencies (libA.a depends on libB.a if there is some symbol undefined in libA.a but defined in libB.a), so that I can put them in the correct order on the ld/gcc command line.
I don't want to overlink (i.e. specify those libraries twice), because I want to persist those dependencies into a Bazel BUILD file, so I want to know the precise dependencies.
I wonder if there is a command-line tool that, given libA.a and libB.a, can tell whether libA.a depends on libB.a? If there is no such tool, how would I write such a script?
Note: my definition of dependency may not be 100% accurate. Let me know if there are other types of dependency besides defined/undefined symbols.
The simplest way is to process the output of nm libA.a and nm libB.a and look for U symbols, but there are many types of symbols listed in man nm, each with different semantics, so I am concerned I might miss some if I use such a simplified approach.
I would use the approach beginning with U symbols. In practice, the uppercase symbol types are all you need to be concerned with (those are what you link against). I wrote scripts to print the exported and imported symbols, and for this case, it would be enough to do
exports libB.a >libB-exports
externs libA.a >libA-externs
comm -12 libB-exports libA-externs >libA-needs-libB
to list symbols where libA would use a symbol from libB (the lists are sorted, so comm should "just work"). If those were shared libraries, the scripts would have to be modified (adding a -D option to nm).
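If you don't have those scripts handy, a rough equivalent built only on nm, sort and comm (a sketch; it ignores weak-symbol and common-symbol subtleties) looks like this:
# Global symbols defined (exported) by libB.a: every non-'U' entry.
nm -g libB.a | awk 'NF == 3 && $2 != "U" {print $3}' | sort -u > libB-exports
# Global symbols libA.a needs from outside itself: the 'U' entries.
nm -g libA.a | awk '$1 == "U" {print $2}' | sort -u > libA-externs
# Lines common to both lists are the symbols libA.a takes from libB.a.
comm -12 libB-exports libA-externs > libA-needs-libB
# A non-empty result means libA.a depends on libB.a.
test -s libA-needs-libB && echo "libA.a depends on libB.a"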
Further reading:
exports script to show which symbols are exported from a collection of object files
externs display all external symbols used by a collection of object files
In the middle of its build, the Linux kernel creates liba.a, which contains many built-in.o and other object files from different directories, and uses it as a major component of the final vmlinux link. I have seen a similar use of archive files in the glibc build, and am now wondering why those projects use archive files and what the benefit is.
As far as I know, archive files generated with ar are simply containers for the individual files included in them. I do not see much benefit in using them other than reducing file-search time for each of the object files. Is this the reasoning behind the use of archive files in the middle of the build?
If so, I would be surprised that file-name lookup is significant enough for the kernel people to care about, and I wonder how large the cost of not using archive files is, and whether there is an alternative that solves the same problem without the space overhead of .a files.
Re: I do not see much benefit in using them other than reducing file-search time for each of the object files.
Your understanding of the benefit is not quite right. Archives reduce the workload of resolving individual symbols.
If you link a program out of many individual .o files, the linker has to consider them all at the same time. The references can go in any direction. The very last .o on the command line can call a function in the very first .o and vice versa.
This is not the case (by default, at least) with archives. With archives, functions in the earlier archives can only make references to symbols whose definitions appear in the later archives. (This is also related to the traditional Unix convention that the -l linker options go at the end of the command line: your .o files first, then the libraries.)
This means that once an archive appears which defines a symbol, you can be sure that the later archives do not use that symbol any more. Which means that you can remove it from your data structures. You are basically "done" linking that particular library; it has satisfied the prior references, and all that remains is to satisfy ITS unresolved references. If you order the linking process right, and the software is nicely layered, you can minimize how many symbols are outstanding at any time.
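A tiny illustration of that ordering rule (hypothetical files, default single-pass GNU ld behaviour): main.o calls foo() from libfoo.a, and foo() in turn calls bar() from libbar.a.
gcc -o app main.o libfoo.a libbar.a   # works: references flow left to right
gcc -o app main.o libbar.a libfoo.a   # fails: when libfoo.a pulls in foo.o and
                                      # needs bar, libbar.a has already been
                                      # scanned and put aside
# GNU ld can paper over such cycles with --start-group ... --end-group,
# at the cost of rescanning the archives.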
Linux is more than 20 years old now and its build system has a long and rich history, just like the code. Archives were not used originally; I think that started only in 2.6. Also, dependencies were once generated by a GNU awk script. People built the kernel on 25 MHz 386 boxes with 4 megs of RAM, haha.
Archives are used today because there was a need for them with the kernel getting larger. It's not just for the heck of it!
A few reasons off the top of my head:
To expand on what Duck says: the "link editor" ("ld"), a.k.a. "linker" ("man ld"), takes a bunch of compiled object files (.o files) and libraries ("archives", as you call them), which can be "static" libraries (.a files) or "shared libraries" (.so files), and "links" them into an "executable" (a "program"). One tells ld which libraries to use by specifying multiple occurrences of the -l option. Imagine having to specify a few thousand individual .o files on the command line instead of a couple of dozen -l options or fewer, one for each library.
Code relating to one area of functionality can be put into one library for use and re-use by other code. For example, /usr/lib/libcrypt.* provides encryption capabilities, /usr/lib/libssl* provides supporting code for Secure Socket Layer, etc.
Also, I don't know which point in time Kaz meant when s/he said "Archives were not used originally..." but "archives", static libraries, were already in use as "recently" as 1983 (!). I did not encounter dynamic shared libraries until the early 90s.
I've got an application that loads .so files as plugins at startup, using dlopen().
The build environment is running on x86 hardware, but the application is being cross compiled for another platform.
It would be great if I could (as part of the automated build process) do a check to make sure that there aren't any unresolved symbols in a combination of the .so files and the application, without having to actually deploy the application.
Before I write a script to test symbols using the output of nm, I'm wondering if anyone knows of a utility that already does this?
edit 1: changed the description slightly - I'm not just trying to test symbols in one .so, but rather in a combination of several .so's and the application itself - i.e. whether, after the application has loaded all of the .so's, there would still be unresolved symbols.
As has been suggested in answers (thanks Martin v. Löwis and tgamblin), nm will easily identify missing symbols in a single file but won't easily identify which of those symbols has been resolved in one of the other loaded modules.
Ideally, a cross-nm tool is part of your cross-compiler suite. For example, if you build GNU binutils for cross-compilation, a cross-nm will be provided as well (along with a cross-objdump).
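For example, with a hypothetical arm-linux-gnueabihf toolchain prefix and a plugin.so built for the target, the cross tools are used exactly like the native ones:
# List the dynamic symbols the plugin expects someone else to provide.
arm-linux-gnueabihf-nm -D --undefined-only plugin.so
# Or dump the whole dynamic symbol table.
arm-linux-gnueabihf-objdump -T plugin.so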
Could you use a recursive version of ldd for this? Someone seems to have written a script that might help. This would at least tell you that all the dependent libraries could be resolved, if they were specified in the .so correctly in the first place. You can guarantee that all the dependencies are referenced in the .so with linker options, and this plus recursive ldd would guarantee you no unresolved symbols.
Linkers will often have an option to make unresolved symbols in shared libraries an error, and you could use this to avoid having to check at all. For GNU ld you can just pass --no-allow-shlib-undefined and you're guaranteed that if it makes a .so, it won't have unresolved symbols. From the GNU ld docs:
--no-undefined
Report unresolved symbol references from regular object files.
This is done even if the linker is creating a non-symbolic shared
library. The switch --[no-]allow-shlib-undefined controls the
behaviour for reporting unresolved references found in shared
libraries being linked in.
--allow-shlib-undefined
--no-allow-shlib-undefined
Allows (the default) or disallows undefined symbols in shared
libraries. This switch is similar to --no-undefined except
that it determines the behaviour when the undefined symbols are
in a shared library rather than a regular object file. It does
not affect how undefined symbols in regular object files are
handled.
The reason that --allow-shlib-undefined is the default is that the
shared library being specified at link time may not be the
same as the one that is available at load time, so the symbols might
actually be resolvable at load time. Plus there are some systems,
(eg BeOS) where undefined symbols in shared libraries is normal.
(The kernel patches them at load time to select which function is most
appropriate for the current architecture. This is used for example to
dynamically select an appropriate memset function). Apparently it is
also normal for HPPA shared libraries to have undefined symbols.
If you are going to go with a post-link check, I agree with Martin that nm is probably your best bet. I usually just grep for ' U ' in the output to check for unresolved symbols, so I think it would be a pretty simple script to write.
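A sketch of such a script, assuming (purely as an example) that the application is ./app, the plugins live in ./plugins, any bundled libraries in ./libs, and that nm here is the cross-nm for your target:
#!/bin/sh
# Dynamic symbols the plugins need from somewhere else (version suffixes stripped).
nm -D --undefined-only ./plugins/*.so | awk '$1 == "U" {print $2}' \
    | sed 's/@.*//' | sort -u > needed
# Dynamic symbols the application and bundled libraries define.
nm -D --defined-only ./app ./libs/*.so | awk 'NF == 3 {print $3}' \
    | sed 's/@.*//' | sort -u > provided
# Anything needed but not provided would be unresolved at run time
# (apart from symbols supplied by system libraries on the target).
comm -23 needed provided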
The restrictions in nm turned out to mean that it wasn't possible to use it for a comprehensive symbol checker.
In particular, nm would only list exported symbols.
However, readelf will produce a comprehensive list, along with all of the library dependencies.
Using readelf it was possible to build up a script that would:
Create a list of all of the libraries used,
Build up a list of symbols in an executable (or .so)
Build up a list of unresolved symbols - if there are any unresolved symbols at this point, there would have been an error at load time.
This is then repeated until no new libraries are found.
If this is done for the executable and all of the dlopen()ed .so files it will give a good check on unresolved dependencies that would be encountered at run time.
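A sketch of that readelf-based approach for a single binary plus the libraries shipped next to it (the ./libs location and the single pass over DT_NEEDED are my simplifications; the real script repeats the walk until no new libraries appear):
#!/bin/sh
BIN=./app
LIBDIR=./libs

# Collect the binary and whichever of its DT_NEEDED libraries we ship ourselves.
FILES="$BIN"
for lib in $(readelf -d "$BIN" | awk '/NEEDED/ {gsub(/[][]/, "", $NF); print $NF}'); do
    [ -f "$LIBDIR/$lib" ] && FILES="$FILES $LIBDIR/$lib"
done

# Undefined vs. defined dynamic symbols across the whole set (versions stripped).
readelf -W --dyn-syms $FILES | awk '$7 == "UND" && $8 != "" {print $8}' \
    | sed 's/@.*//' | sort -u > und
readelf -W --dyn-syms $FILES | awk '$7 != "UND" && $7 != "Ndx" && $8 != "" {print $8}' \
    | sed 's/@.*//' | sort -u > def
# Whatever remains would be unresolved at load time, unless the target
# system itself provides it.
comm -23 und def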