How to compile with make but also include all dependencies - linux

I'm compiling a C++ program on Linux, and I can run make and it all compiles, but when I need to downgrade or change one of its dependencies for another program, the build breaks. I was wondering if it's possible to create a standalone executable with the dependencies bundled inside. There aren't many dependencies, so size isn't an issue.

So what you're asking is: can you link with static versions of libraries (which are included directly in the program) instead of dynamic versions of libraries (shared libraries), which are kept external to your program?
The answer is "yes", but it's not always straightforward. First you have to ensure you actually have the static versions of the libraries installed in your system: the static and dynamic libraries are different files and often the "standard" installation provides only the dynamic library.
If you're already compiling code against those libraries you probably already have the static libraries installed because, at least on GNU/Linux systems, the static libraries are often included in the "dev" packages along with the header files etc. needed to compile code.
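If you're not sure, you can check for the archives directly. A quick sketch (the multiarch path and package names here are examples and vary by distribution):

ls /usr/lib/x86_64-linux-gnu/libssl.a    # does the static archive exist?
dpkg -S libssl.a                         # on Debian/Ubuntu, which package ships it?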
To make this work you need to modify your linker command line. If you have a sufficiently new version of the binutils package (which provides the linker), you can change your link line to replace arguments like -lssl -lcrypto with arguments like -l:libssl.a -l:libcrypto.a (don't forget the colon after the -l) and that should do it.
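For example, a link line like the following (a hypothetical sketch; the program and object names are placeholders):

g++ -o myprog main.o -lssl -lcrypto

would become:

g++ -o myprog main.o -l:libssl.a -l:libcrypto.a

The -l:filename form tells GNU ld to search its library paths for exactly that file name, instead of expanding -lssl to libssl.so or libssl.a.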

Related

Which libraries should go into a pkg-config file as dependencies?

I'm writing a shared library that itself depends on boost and pcl libraries.
When generating the .pc file for my library, should I also add all these libraries to the .pc file as dependencies?
It's been a long time since I last studied these things, and I'm a bit confused about how this works on Linux. When my test app links against my lib, I have to add all these pcl and boost libs to the build again, even though the lib has already been linked against them.
But when I look at the dependencies of libQtGui.so, for example, it links against tens of libs of all kinds, yet I don't need to make my app link against those libs... only -lQtGui is enough.
I have just used CMake and link_libraries to add boost and pcl libs.
When generating the .pc file for my library, should I also add all these libraries to the .pc file as dependencies?
It depends on the API of your library:
if the public (i.e. installed) headers of your lib use boost/pcl (i.e. contain #include <boost/...>) -- in other words, if you used the PUBLIC (or INTERFACE) keyword when linking your library against boost/pcl with CMake's target_link_libraries -- then yes, you need to add them;
otherwise, it depends on what exactly you have at the end -- i.e. whether your DSO has DT_NEEDED entries for the boost/pcl libs (most likely) or not (you can check with ldd <your-lib>.so or readelf -d <your-lib>.so). In the latter case, you also need to add your dependencies to the *.pc file.
Also, in the case of a binary dependency on boost/pcl (I don't know whether the latter ships any DSOs), please make sure you specify the exact location of the linked libs, because a user may have multiple co-existing (and potentially incompatible) boost installations, or may later upgrade to another, binary-incompatible version, and there is nothing you can really do about that. It is important to be linked to the same (or at least a binary-compatible, which is rather hard to guarantee for boost) library as you were.
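As a concrete illustration, a .pc file for the first case might look roughly like this (a hypothetical sketch; the name, paths, and Boost component are placeholders, and note that Boost itself ships no .pc files, so its flags go into Cflags and Libs.private rather than Requires):

# mylib.pc
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: mylib
Description: Example library whose public headers include <boost/...>
Version: 1.0.0
# Consumers compile against my headers, which include Boost headers
Cflags: -I${includedir} -I/usr/include
Libs: -L${libdir} -lmylib
# Needed only when consumers link statically against mylib
Libs.private: -lboost_system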
I have just used CMake and link_libraries to add boost and pcl libs.
Please read something about "Modern CMake" and stop using link_libraries :-) -- use target_link_libraries instead.
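In Modern CMake terms, the distinction above looks roughly like this (a sketch; the target and dependency names are placeholders):

find_package(Boost REQUIRED)

add_library(mylib SHARED mylib.cpp)

# PUBLIC: mylib's installed headers use Boost, so consumers of mylib
# automatically inherit Boost's include directories and link flags.
target_link_libraries(mylib PUBLIC Boost::boost)

# PRIVATE: an implementation-only dependency that consumers never see.
target_link_libraries(mylib PRIVATE pcl_common)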

In Ubuntu (14.04), is there an equivalent to /etc/ld.so.conf.d for the linker?

This is a question about centrally-located path specs, like PATH, LD_LIBRARY_PATH, and LIBRARY_PATH.
I know that there are two ways of specifying shared library paths for the loader: add them to LD_LIBRARY_PATH, or add files to /etc/ld.so.conf.d/. I also know that the latter is considered the more modern and preferred way to do it.
I also know that you can specify standard library paths for the linker by editing LIBRARY_PATH. Is this still the "modern" way to do it, or is there now a "ld.so.conf.d-style" alternative that I should be using?
EDIT: People are asking "why", so:
I'm using a Python package (Theano) that dynamically generates and compiles CUDA and C++ code when run. One of the libraries it links to is NVIDIA's cuDNN. I don't know why Theano's developers have it link to the static lib and not the dynamic lib.
There isn't any equivalent to /etc/ld.so.conf.d/ for the compile-time linker. You still specify the standard linker search paths via the LIBRARY_PATH environment variable, and additional paths through command-line flags (such as -L) to the linker.
To be clear:
LIBRARY_PATH: used at link time. The compiler driver searches it to find both static and dynamic libraries.
LD_LIBRARY_PATH: used by the loader at run time to find dynamic libraries.
Static libraries are resolved entirely at (static) link time and by definition have no runtime aspect.
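Concretely, the two variables act at different stages. A sketch (libfoo is a placeholder):

export LIBRARY_PATH=$LIBRARY_PATH:~/local/lib       # searched by gcc/ld at link time
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/local/lib # searched by the loader at run time

g++ -o app app.cpp -lfoo   # link time: finds ~/local/lib/libfoo.so or libfoo.a
./app                      # run time: the loader finds ~/local/lib/libfoo.so

Passing -L ~/local/lib on the g++ command line is the per-invocation equivalent of LIBRARY_PATH.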
My opinion is that you should avoid using static libraries and always prefer shared libraries.

Can I install both shared .so and static .a versions of a library?

My question is related to this: Creating both static and shared C++ libraries
I'm compiling a library in order to install it in ~/local on two different systems. It seems that every time I do this I end up with linker problems that take hours to figure out. The specific library I'm looking at is primesieve. In that library, the default is to build only the static library. Unfortunately the example code count_primes.cpp wouldn't link with the static version of the library on one of my systems, for whatever reason. Eventually I figured out how to build the shared version, and the code now compiles nicely with no ugly hacks necessary.
Given the above, it seems that building both static and shared versions is a good idea if you're working with multiple systems and want the best chance of having your code compile. Is this true? Are there reasons not to build both versions? I realize this is a bit of a subjective question, but it's a serious programming issue that I think many people here have probably encountered.
PS.
This is what I ended up using to compile and install both shared and static versions of primesieve to ~/local:
make
make lib
make install PREFIX=~/local
make clean
make lib SHARED=yes
make install PREFIX=~/local
The make clean is because of this. I then added this to my .bash_profile:
export LIBRARY_PATH=$LIBRARY_PATH:~/local/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/local/lib
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:~/local/include
Alternatively, without changing the environment variables I was able to compile the example program count_primes.cpp like this:
g++ -I ~/local/include/ -L ~/local/lib/ count_primes.cpp -lprimesieve
To use a static library you can just include it in the compilation as if it were a regular object file, e.g.
g++ -o foo foo.cpp /path/to/mylib.a
Of course, this means the library is linked in statically.
Note that linking against a dynamic library is still resolved at (static) link time -- only the loading is deferred until run time -- so there's not much use for static libraries, really.
There is no reason not to build both. Neither library will "do" anything on its own. The shared library will only be loaded if it is in a path visible to the dynamic loader (as you arranged by adding it to your LD_LIBRARY_PATH). The static one won't be used unless you explicitly link against it, and that only happens at link time.
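Note that when both versions are installed side by side, the linker picks the shared one by default; if you want the static archive you have to ask for it explicitly. A sketch, reusing the primesieve example:

g++ count_primes.cpp -L ~/local/lib -lprimesieve        # prefers libprimesieve.so if present
g++ count_primes.cpp -L ~/local/lib -l:libprimesieve.a  # forces the static archive (GNU ld)
g++ count_primes.cpp ~/local/lib/libprimesieve.a        # or name the archive directly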

Installing package from source on an initial ram filesystem

I'm trying to install multiple packages into an initial RAM file system. I'm using uClibc as my C library. This could be a stupid question, but...
Would the compiled program also need a C library installed onto the initramfs?
Am I right in thinking that when a program is compiled from source, it is compiled into some sort of executable? Will the application on the initramfs be ready to run once I have run make install (with the correct prefix, and provided its dependencies are met)?
Whether a compiled program needs a C library - or any kind of library, for that matter - depends on how it was linked.
In general, if your program was linked statically then it does not have any external dependencies - it only needs a working kernel. The executable code of any library that it depends on will have been incorporated into the final executable.
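For example, with GCC you request a fully static build with the -static flag. A sketch (it only succeeds if static versions of every needed library, e.g. uClibc's, are installed):

g++ -static -o myprog myprog.cpp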
If, on the other hand, it is linked dynamically, then it still needs the shared object files of the libraries it depends on. On Linux, most library shared objects (also known as shared libraries) follow the convention of having a filename with either a .so extension or, in general, a *.so.* format. For example /lib/libssl3.so and /lib/libncurses.so.5.9 are both shared libraries on my system.
It is also possible to have an executable that is statically linked against some libraries and dynamically linked against others. A common case where this happens is when rare or proprietary libraries are linked in statically, while standard system libraries are linked in dynamically.
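To find out which case applies to a given binary before copying it into the initramfs, you can inspect it with standard tools. A sketch (myprog is a placeholder):

file myprog   # reports "statically linked" or "dynamically linked"
ldd myprog    # for a dynamic executable, lists the shared objects it needs at run time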

Loading multiple shared libraries with different versions

I have an executable on Linux that loads libfoo.so.1 (that's a SONAME) as one of its dependencies (via another shared library). It also links to another system library, which, in turn, links to a system version, libfoo.so.2. As a result, both libfoo.so.1 and libfoo.so.2 are loaded during execution, and code that was supposed to call functions from library with version 1 ends up calling (binary-incompatible) functions from a newer system library with version 2, because some symbols stay the same. The result is usually stack smashing and a subsequent segfault.
Now, the library which links against the older version is a closed-source third-party library, and I can't control what version of libfoo it compiles against. Assuming that, the only other option left is rebuilding a bunch of system libraries that currently link with libfoo.so.2 to link with libfoo.so.1.
Is there any way to avoid replacing system libraries with local copies that link to the older libfoo? Can I load both libraries and have the code call the correct versions of the symbols? Do I need some special symbol-level versioning?
You may be able to do some version script tricks:
http://sunsite.ualberta.ca/Documentation/Gnu/binutils-2.9.1/html_node/ld_26.html
This may require that you write a wrapper around the lib that pulls in libfoo.so.1, exporting some symbols explicitly and marking all others as local. For example:
MYSYMS {
  global:
    foo1;
    foo2;
  local:
    *;
};
and use this when you link that wrapper like:
gcc -shared -Wl,--version-script,mysyms.map -o libwrapper.so wrapper.o -L/path/to/libfoo1 -lfoo
(note that -L takes the directory containing libfoo.so.1, not the library file itself)
This should make libfoo.so.1's symbols local to the wrapper and not available to the main exe.
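You can verify the effect by dumping the wrapper's dynamic symbol table. A sketch, assuming the wrapper was linked as libwrapper.so above:

nm -D --defined-only libwrapper.so   # should list foo1 and foo2, and nothing else from libfoo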
I can only come up with a workaround, which would be to statically link the "system library" that you are using. For your static build, you could link it against the same old version as the third-party library, provided it does not rely on the newer version.
Perhaps it is also possible to avoid these problems by not linking to the third-party library in the ordinary way. Instead, your program could load it at execution time with dlopen(), so that its symbols can be kept isolated from the rest. But I don't know much about that.
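A minimal sketch of that approach, assuming glibc (RTLD_DEEPBIND is glibc-specific) and hypothetical library and function names:

#include <dlfcn.h>
#include <cstdio>

int main() {
    // RTLD_LOCAL keeps the library's symbols out of the global scope;
    // RTLD_DEEPBIND makes the library resolve symbols against its own
    // dependencies before identically named symbols already loaded.
    void *handle = dlopen("libthirdparty.so", RTLD_NOW | RTLD_LOCAL | RTLD_DEEPBIND);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // Look up each needed function explicitly (hypothetical name).
    auto init = reinterpret_cast<int (*)()>(dlsym(handle, "thirdparty_init"));
    if (init)
        init();
    dlclose(handle);
    return 0;
}

Build with g++ main.cpp -ldl (the -ldl is unneeded on glibc 2.34 and later, where dlopen lives in libc itself).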
