Dynamically loaded library remains loaded despite dlclose - linux

Today I'm looking for some enlightenment about the deep magic inside the dynamic loader. I'm debugging/troubleshooting a plugin system for a C++ application running on Linux. It loads plugins via dlopen (RTLD_NOW | RTLD_LOCAL) and releases them using dlclose. Nothing extraordinary - one would think.
However, I noticed that some plugins remain loaded even after dlclose is successfully called*. I concluded this after looking at the running process's memory map with pmap. Some libs get removed from process memory immediately, while others keep lingering around, apparently indefinitely.
Continuing on, the dlopen man page states:
The function dlclose() decrements the reference count on the dynamic
library handle handle. If the reference count drops to zero and no
other loaded libraries use symbols in it, then the dynamic library is
unloaded.
That means the problem boils down to these two possibilities: either the reference counts aren't zero, or other loaded libs are using symbols from some - but not all - of the plugins.
I'm pretty sure (although not 100%) that the reference counts are zero. The application's plugin manager handles all plugins exactly the same. It also makes sure that a plugin doesn't get loaded multiple times. IMO loading & unloading should therefore behave the same for all plugins.
That leaves the second possibility: other loaded libs are using symbols from the plugins. Another typical case of 'that should never happen', although it's certainly possible. We're using gcc with default visibility, and as far as I've seen nothing is stripped, so tons of symbols are being exported. Actually that worries me much more, as these plugins should be independent.
Here then are my open questions at this point:
Are my conclusions so far correct?
Do you know a way to verify dlopen's reference count?
If there are internal symbols of my plugins (accidentally) used by other libs, is there a way to track down who is using which symbols?
My machine is:
Linux 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:44 UTC 2014 i686 i686 i686 GNU/Linux
* I should mention that all of the loading and unloading happens in the main thread, so there should be no multi threading issue here.

other loaded libs are using symbols from the plugins
If the other libs were not linked against that shared library at link time, merely referring to its symbols does not prevent that shared library from being unloaded.
To debug the run-time linker, set the environment variable LD_DEBUG to all, e.g. LD_DEBUG=all ./my_app. See man ld.so for details.
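glibc does not expose dlopen's reference count through a public API, but you can at least verify whether a plugin is still resident after dlclose by re-opening it with RTLD_NOLOAD. A minimal sketch; note that a successful probe takes a reference of its own, so it must be paired with its own dlclose:

    #define _GNU_SOURCE   /* RTLD_NOLOAD is a GNU extension */
    #include <dlfcn.h>

    /* Returns 1 if the library at 'path' is still mapped, 0 if it was unloaded. */
    static int still_loaded(const char *path)
    {
        void *h = dlopen(path, RTLD_NOLOAD | RTLD_LAZY);
        if (h) {
            dlclose(h);   /* drop the extra reference taken by the probe */
            return 1;
        }
        return 0;
    }

For the third question, LD_DEBUG=bindings (optionally combined with files) logs every symbol binding as it happens, so grepping the output for a plugin's name shows which object resolved which of its symbols.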

Related

How does library size affect an application's load time and memory footprint?

Some other people in the office are discussing reducing the memory footprint and load time by splitting non-essential parts of an internal library into separate libraries that are loaded on demand.
I did some Googling tonight and learned that ld.so loads libraries with lazy binding, and that both ld.so and dlopen map the library with mmap. This seems to mean that the non-essential parts of the library are effectively loaded on demand already, i.e. as long as the functions are not called and the data pages in the library are not read:
they don't take up space, and
they don't take time to load and allocate those pages.
So I thought they just need to make sure they don't actively touch the non-essential components.
Is this true? Is it (practically) true for all POSIX systems?
Any library that your program links against directly, or that is a direct dependency, still gets mapped and loaded at run time. To see what libraries will be opened, you can use ldd some_program. Some of these will be indirect dependencies that will also be loaded. To see only the direct dependencies, you can use objdump -x my_program | grep NEEDED. Not only does every library have to be read from disk (which can be relatively slow), but all the necessary symbols (sometimes many) from them must be resolved.
It may be that one dependency that isn't actually needed at startup is pulling in the majority of your total library size. If you can split the different capabilities into functional units that can be loaded as modules, it can be straightforward to reduce startup time and, to some extent, the memory footprint.
The easiest example I can come up with is an image viewer application where the GUI libraries are linked into the binary, but the actual loading of images is done by modules. (feh would be an example of this, using Imlib2 or optionally gdk-pixbuf for the image loading.) This allows the main GUI to start up without ever loading libpng, libjpeg, libXPM, libcairo, librsvg, etc. This reduces the startup time because none of those need to be loaded at startup, and some may never be loaded at all, which also reduces the memory footprint.
In C this is commonly done with dlopen(), dlsym() and dlclose() (embedded systems using a flat binary format can use a specially linked module) but most toolkits will have their own module implementation (for instance Gtk+ uses glib's cross-platform module loading).
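As a rough sketch of that module approach (the module path, symbol name and signature here are invented for illustration, not a real API):

    #include <dlfcn.h>
    #include <stdio.h>

    /* Load the PNG decoder module only when a PNG actually has to be displayed. */
    static int decode_png(const char *filename)
    {
        void *mod = dlopen("./png_module.so", RTLD_NOW | RTLD_LOCAL);
        if (!mod) {
            fprintf(stderr, "%s\n", dlerror());
            return -1;
        }

        int (*decode)(const char *) = (int (*)(const char *))dlsym(mod, "png_decode");
        int rc = decode ? decode(filename) : -1;

        dlclose(mod);   /* the module (and libpng behind it) can be unmapped again */
        return rc;
    }

The main binary never links against libpng; only the module does, so libpng is mapped only while the module itself is loaded.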
It doesn't have to be that cut and dried. You could just as easily split your initial GUI into parts and sequentially dlopen() them as an alternative to having a separate "splash screen". I don't know of a toolkit that allows this, but one could even open an X11 window with title and dimensions, then load the GUI and associated libraries into that window. You could load different binary format parsers depending on the file's extension/magic value, or load different binary format generators depending on what format the user decides to Save As.
Another alternative is to statically link library dependencies so that only the required symbols get loaded (as long as there are no licensing issues with doing so). Why load 100+ widgets when you only use 10? This works best with libraries that are built with static linking in mind, such as musl-libc (glibc is notoriously useless for static linking). Compiling with the -flto compiler flag (or -combine -fwhole-program on older versions of gcc) along with -ffunction-sections and -fdata-sections, and linking with --gc-sections, typically helps (I normally see a 15-50% size reduction). This is also a good option if you can't rely on the shared system libraries to be sane.
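A representative invocation with those flags might look like this (the source and output names are placeholders, and -static assumes a libc that supports static linking cleanly):

    gcc -Os -static -flto -ffunction-sections -fdata-sections main.c -o app -Wl,--gc-sections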

Which libraries appear in /proc/$PID/maps?

On Linux you can inspect /proc/$PID/maps to see the libraries loaded by a particular program, and a program can open /proc/self/maps to examine the libraries it itself has loaded.
I know maps will only contain dynamic libraries, and obviously the kernel can't predict which libraries we might dlopen at a later point, so I expect those aren't included in /proc/self/maps. But I'm unsure about a few other scenarios:
Are libraries that have been linked at build time but we haven't called any function in yet included? My understanding is that Linux delays linking symbols until the first time they are used, so I'm not sure if they'll show up.
Does maps contain all the libraries used recursively? E.g. if I look at each library in maps and run ldd on it, and then run ldd on those, ad nauseam, I shouldn't find any new libraries that weren't in the original maps? I tried this on a couple of binaries and it appears to be so, but maybe I'm getting lucky.
Are libraries that have been linked at build time but we haven't called any function in yet included?
Yes: the runtime loader will mmap every library that your executable directly depends on, before your program starts running.
You can find the list of such libraries by running
readelf -d a.out | grep NEEDED
Does maps contain all the libraries used recursively?
Yes: if a library that you directly depend on itself depends on some other library, the runtime loader will mmap the recursive dependencies as well.
My understanding is that Linux delays linking symbols until the first time they are used
That is mostly correct for function symbols, but false for data symbols, which can't be resolved lazily.
Also, whether the symbols are resolved lazily or not depends on the LD_BIND_NOW environment variable and on an equivalent setting in the executable's dynamic section, controlled by the -z now linker flag.
None of that changes the mmap picture though: if you have a DT_NEEDED entry for foo.so in your dynamic section, then foo.so will be mmapped (and will show in /proc/self/*map*) independent of lazy or non-lazy resolution.
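If you want to enumerate the loaded objects from inside the process without parsing /proc/self/maps, glibc also provides dl_iterate_phdr. A minimal sketch:

    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    /* Called once per loaded object; an empty name means the main executable. */
    static int print_object(struct dl_phdr_info *info, size_t size, void *data)
    {
        (void)size; (void)data;
        printf("%s\n", info->dlpi_name[0] ? info->dlpi_name : "(main executable)");
        return 0;   /* 0 = keep iterating */
    }

    int main(void)
    {
        dl_iterate_phdr(print_object, NULL);
        return 0;
    }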
/proc/$pid/maps is not only going to list libraries that are loaded, but also ALL other mapped memory segments.
Read this thread and the article in there:
Understanding Linux /proc/id/maps

Loading and Unloading shared libraries in Mac OS X

I am sorry if this question has been asked before in this forum. I am having a problem where loading and unloading of dylibs aren't working as expected on Mac (especially the unloading part).
The situation: I have an executable, I load a shared library, say A.dylib, and then use that loaded library to load another library, say B.dylib. When I try unloading B.dylib at a later stage, no error code is returned (the return value is 0 - I am using the regular dlopen and dlclose functions to load and unload libraries, and 0 means the unload succeeded), but when I check using Activity Monitor or lsof, B.dylib is still in memory.
We are now porting this code to Windows, Linux & Mac. Windows and Linux work as expected; only Mac is giving me problems.
I was reading in the Mac developer library and found out that: "There are a couple of cases in which a dynamic library will never be unloaded: 1) the main executable links against it, 2) an API that does not support unloading (e.g. NSAddImage()) was used to load it or some other dynamic library that depends on it, 3) the dynamic library is in dyld's shared cache."
In my case I don't fall into either of the first two cases. I suspect case 3.
Here is my question:
1. What can I do to make sure I am hitting case 3?
2. If yes, how do I fix it?
3. If not, how do I fix it?
4. Why is Mac so different?
Any help in this regard is appreciated!
Thanks,
Jan
When you load a shared library into an executable, all of the symbols exported by that library are candidates to resolve symbols required by the executable, causing the library to remain loaded if the DYLD linker binds to an unintended symbol. You can list the symbols in a shared library by using nm, and you can set environment variables to enable debugging output for the dynamic linker (see this man page on dyld). You need to set the DYLD_PRINT_BINDINGS environment variable.
Most likely, you need to limit the exported symbols to the specific subset that is meant to be used by the executable, so that only the symbols you intend to use are bound. This can be done by placing the required symbols in a file and passing it to the linker via the -exported_symbols_list option. Without doing so, you can end up binding a symbol in the dynamically loaded library; it will then not be unloaded, since it is required to resolve a symbol in the executable, and it won't unload when dlclose() is called.
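A sketch of that setup (the file names and the exported symbol are invented for illustration; on Mac OS X, C symbols carry a leading underscore):

    # exports.txt - the only symbols the dylib should export
    _plugin_create

    clang -dynamiclib plugin.c -o B.dylib -Wl,-exported_symbols_list,exports.txt

Running the executable with DYLD_PRINT_BINDINGS=1 set then makes dyld print each binding as it happens, which shows whether something outside B.dylib has bound to one of its symbols.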

Why does a shared object fail if it has extra symbols compared to the original

I have a stripped ld.so that I want to replace with the unstripped version (so that valgrind works). I have ensured that I have the same version of glibc and the same cross compiler.
I have compiled the shared object; calling 'file' on it shows that it is compiled correctly (the only differences from the original being that it is unstripped and about 15% bigger). Unfortunately, it then causes a kernel panic (unable to init) on startup. Stripping the newly compiled .so, readelf-ing it and diff-ing it with the original shows that there are extra symbols in the new version of the .so. All of the old symbols are still present, so what I don't understand is why the kernel panics with those extra symbols there.
I would expect the extra symbols to have no effect on kernel startup, as they should never be called, so why do I get a kernel panic?
NB: To be clear - I will still need to investigate why there are extra symbols, but my question is about why these unused symbols cause problems.
The kernel (assuming Linux) doesn't depend on or use ld.so in any way, shape or form. The reason it panics is most likely that it can't exec any of the user-level programs (such as /bin/init and /bin/sh), which do use ld.so.
As to why your init doesn't like your new ld.so, it's hard to tell. One common mistake is to try to replace ld.so with the contents of /usr/lib/debug/ld-X.Y.so. While that file looks like it's not much different from the original /lib/ld-X.Y.so, it is in fact very different and can't be used to replace the original, only to debug it (/usr/lib/debug/ld-X.Y.so usually contains only the debug sections, but none of the code and data sections of /lib/ld-X.Y.so, so attempts to run it usually cause an immediate SIGSEGV).
Perhaps you can set up a chroot, mimicking your embedded environment, and run /bin/ls in it? The error (or a core dump) this will produce will likely tell you what's wrong with your ld.so.

Why is the startup of an app on Linux slower when using shared libs?

On the embedded device I'm working on, the startup time is an important issue. The whole application consists of several executables that use a set of libraries. Because space in FLASH memory is limited we'd like to use shared libraries.
The application works as usual when compiled and linked with shared libraries, and the amount of FLASH memory used is reduced as expected.
The difference from the version that is linked against static libs is that the startup time of the application is about 20 s longer, and I have no idea why.
The application runs on an ARM9 CPU at 180 MHz with Linux 2.6.17 OS,
16 MB FLASH (JFFS File System) and 32 MB RAM.
Because shared libraries have to be linked at runtime, usually by dlopen() or something similar. There's no such step for static libraries.
Edit: some more detail. dlopen has to perform the following tasks.
Find the shared library
Load it into memory
Recursively load all dependencies (and their dependencies....)
Resolve all symbols
This requires quite a lot of IO operations to accomplish.
In a statically linked program all of the above is done at (static) link time, not at runtime. Therefore it's much faster to load a statically linked program.
In your case, the difference is exaggerated by the relatively slow hardware your code has to run on.
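To see how much of those 20 seconds the dynamic loader itself accounts for, glibc's ld.so can print its own timing statistics (this assumes a glibc-based system; ./my_app stands in for your binary):

    LD_DEBUG=statistics ./my_app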
This is a fine example of the classic tradeoff of speed and space.
You can statically link all your executables so that they are faster but then they will take more space
OR
You can have shared libraries that take less space but also more time to load.
So decide what you want to sacrifice.
There are many factors behind this difference (OS, compiler, etc.), but a good list of reasons can be found here. Basically, shared libraries were created to save space, and much of the "magic" involved in making them work takes a performance hit.
(As a historical note the original Netscape navigator on Linux/Unix was a statically linked big fat executable).
This may help others with similar problems:
The reason startup took so long in my case was that the default setting of GCC is to export all symbols of a library.
A big improvement is the compiler setting -fvisibility=hidden.
All symbols that the lib has to export then have to be marked with the attribute
__attribute__ ((visibility("default")))
See the gcc wiki and the very fine article on how to write shared libraries.
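A common pattern is to wrap the attribute in a macro in the library's public header. A minimal sketch (the macro and function names here are made up):

    /* plugin_api.h - the library is compiled with -fvisibility=hidden */
    #if defined(__GNUC__)
    #define PLUGIN_API __attribute__ ((visibility ("default")))
    #else
    #define PLUGIN_API
    #endif

    PLUGIN_API int plugin_init(void);   /* still exported */
    int helper(void);                   /* hidden by -fvisibility=hidden, never a dynamic symbol */

Fewer exported symbols means fewer dynamic symbols for the loader to look up and relocate at startup, which is where much of the improvement comes from.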
OK, I have now learned that the use of shared libraries has its disadvantages concerning speed. I found this article about dynamic linking and loading enlightening. The loading process seems to be much lengthier than I expected.
Interesting... typically the loading time for a shared library is unnoticeable compared to a fat app that is statically linked. So I can only surmise that the system is either very slow to load a library from flash memory, or the library that is loaded is being checked in some way (e.g. .NET apps run a checksum on all loaded DLLs, increasing startup time considerably in some cases). It could also be that the shared libraries are being loaded as needed and unloaded afterwards, which could indicate a configuration problem.
So, sorry, I can't say why, but I think it's an issue with your ARM device/OS. Have you tried instrumenting the startup code, or statically linking just one of the most commonly used libraries to see if that makes a large difference? Also, put the shared libs in the same directory as the app to reduce the time it takes to search the filesystem for them.
One option which seems obvious to me is to statically link the several programs into a single binary. That way you continue to share as much code as possible (probably more than before), but you also avoid the overhead of the dynamic linker AND save the space of having the dynamic linker on the system at all.
It's pretty easy to combine several executables into the same one; you normally just examine argv and decide which routine to call based on that.
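A minimal busybox-style dispatcher along those lines (the applet names here are made up):

    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    /* In the real program these would be the original main() functions. */
    static int tool_foo(int argc, char **argv) { (void)argc; (void)argv; puts("foo"); return 0; }
    static int tool_bar(int argc, char **argv) { (void)argc; (void)argv; puts("bar"); return 0; }

    int main(int argc, char **argv)
    {
        const char *name = basename(argv[0]);   /* invoked through symlinks named foo, bar, ... */

        if (strcmp(name, "foo") == 0)
            return tool_foo(argc, argv);
        if (strcmp(name, "bar") == 0)
            return tool_bar(argc, argv);

        fprintf(stderr, "unknown applet: %s\n", name);
        return 1;
    }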
