How to static link libexpat.so.1 with GCC? - linux

I want to build a statically linked program with GCC/G++, without shared dependencies, but I don't know how to do that.
With the command below in the NetBeans IDE I can build with a shared dependency, but on some OSes the library cannot be found (and I don't want to have to install it on every new system):
-Wl,--dynamic-linker=/usr/lib/libexpat.so.1

To statically link a program, you need a static library, which is a library with a filename ending in .a.
By default, when the linker uses its standard search paths (as you do with /usr/lib), it will select the .so version of the library and link it dynamically, so if you want a particular static library you'll need to specify its full path name instead of using the -l option. So,
gcc -o your_program mod_a.o mod_b.o ... /usr/lib/libexpat.a
is better than
gcc -o your_program mod_a.o mod_b.o ... -lexpat
(the latter will select the file /usr/lib/libexpat.so instead, which should be a link to /usr/lib/libexpat.so.1, normally the soname of the library, which is in turn a symbolic link to /usr/lib/libexpat.so.1.xx.xx)
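If you prefer to keep the -l form, GNU ld (assuming that is the linker behind your gcc) also lets you switch between static and dynamic selection on the command line:
gcc -o your_program mod_a.o mod_b.o ... -Wl,-Bstatic -lexpat -Wl,-Bdynamic
Everything between -Bstatic and -Bdynamic is resolved from .a archives only; the trailing -Bdynamic restores the default so that the C library itself is still linked dynamically.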
NOTE
In the examples, I call the linker through the compiler, as the default C runtime and libraries are automatically selected by the compiler when it is invoked this way. If you prefer to call the linker directly, the procedure doesn't change, but then you have to add the C runtime startup module and the standard C library yourself.
NOTE 2
If you want to statically link everything, then you have to use the static versions of all the libraries you are going to use. They are normally installed in the same directory as the dynamic ones, so you would have to specify the full pathname of every one of them on the command line. To cope with this in a permanent development setup, you can make symbolic links to all of them from another directory and then specify that directory as the search path for the projects that must be statically linked.
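For example, a sketch of that setup (the ~/static-libs directory is just a hypothetical choice):
mkdir -p ~/static-libs
ln -s /usr/lib/libexpat.a ~/static-libs/libexpat.a
gcc -o your_program mod_a.o mod_b.o ... -L"$HOME/static-libs" -lexpat
Because ~/static-libs contains only .a files, that -L directory is searched first and the static archive is found before the dynamic /usr/lib/libexpat.so is ever considered.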
If you always want some library to be statically linked, just erase the .so link (not the .so.X and .so.X.YY links, which are not tried by the compiler) in /usr/lib, and the .a file will be selected by default. Of course, if you want this for every library, you can erase all the .so links, but you'll end up with much larger executables than the original dynamically linked versions.

Related

How to force .so dependency to be in same directory as library

I have a libA.so that depends on libB.so, and the dynamic linker has trouble finding libB.so even though it's in the same directory.
ldd libA.so
linux-vdso.so.1 (0x00007fff50bdb000)
libB.so => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4aeb902000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4aebadb000)
I'm wondering if there is a way to make libA.so always look for libB.so in the same directory, as that will be the case for my application. I know updating LD_LIBRARY_PATH is an option as well, but I wanted to reduce the amount of work required.
The .dynamic section of an ELF file (.so libraries on Linux use ELF format) contains information to help the library find its dependencies. .dynamic entries with type DT_NEEDED contain the names of other .so files for the dynamic linker to find, but they do not contain any information on where to find those files. For that, as you mentioned, you can use LD_LIBRARY_PATH, but the ELF format also provides a way to specify it in the file itself.
A .dynamic entry with type DT_RUNPATH gives the dynamic linker a path to a directory where the dynamic linker should look for DT_NEEDED files. DT_RUNPATH allows a special variable, $ORIGIN, which refers to the file's current directory. This allows you to use relative paths, without requiring the user to invoke an executable from a specific working directory.
You use the -rpath linker flag to specify a DT_RUNPATH entry. In order to pass the literal string $ORIGIN, however, you must wrap it in single quotes to prevent your shell from interpreting it as an environment variable.
Assuming you are using gcc, you should add this argument to the link step:
-Wl,-rpath,'$ORIGIN'
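For example, a sketch with hypothetical sources a.c and b.c, where libA.so will look for libB.so next to itself at run time:
gcc -shared -fPIC -o libB.so b.c
gcc -shared -fPIC -o libA.so a.c -L. -lB -Wl,-rpath,'$ORIGIN'
Note that in a Makefile you have to write '$$ORIGIN' so that make itself does not expand the $.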
From 'man 8 ld.so':
If a shared object dependency does not contain a slash, then it
is searched for in the following order:
o Using the directories specified in the DT_RPATH dynamic
section attribute of the binary if present and DT_RUNPATH
attribute does not exist. Use of DT_RPATH is deprecated.
o Using the environment variable LD_LIBRARY_PATH, unless the
executable is being run in secure-execution mode (see below),
in which case this variable is ignored.
o Using the directories specified in the DT_RUNPATH dynamic
section attribute of the binary if present. Such directories
are searched only to find those objects required by DT_NEEDED
(direct dependencies) entries and do not apply to those
objects' children, which must themselves have their own
DT_RUNPATH entries. This is unlike DT_RPATH, which is applied
to searches for all children in the dependency tree.
o From the cache file /etc/ld.so.cache, which contains a
compiled list of candidate shared objects previously found in
the augmented library path. If, however, the binary was
linked with the -z nodeflib linker option, shared objects in
the default paths are skipped. Shared objects installed in
hardware capability directories (see below) are preferred to
other shared objects.
o In the default path /lib, and then /usr/lib. (On some 64-bit
architectures, the default paths for 64-bit shared objects are
/lib64, and then /usr/lib64.) If the binary was linked with
the -z nodeflib linker option, this step is skipped.
The key here is to use the DT_RUNPATH, which can be embedded into the binary you are creating. You can link it so that it points to the same directory, as you wanted.
See this post about how to do this: https://stackoverflow.com/a/67131878
On top of the answers already provided, it is also possible to change DT_RUNPATH using patchelf:
patchelf --set-rpath . <your-binary>
Obviously, be careful to understand the security implications of allowing libraries to be loaded from the local directory.
patchelf lets you change DT_RUNPATH without recompiling or re-linking, so it can be convenient if you don't have the source or don't want to deal with recompiling (which can be a genuinely painful process on distros like Alpine).
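If what you actually want is the library's own directory rather than the process's current working directory, the same $ORIGIN token works with patchelf too (a sketch, using the libA.so from the question):
patchelf --set-rpath '$ORIGIN' libA.so
patchelf --print-rpath libA.so
The second command just confirms what is now embedded; with both files in the same directory, ldd libA.so should then report libB.so as found.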

If there are multiple directories in LDFLAGS, how does the linker know where to look first?

If I have two libraries with the same name but stored in different directories (and they may contain different code), and I list both directories in the LDFLAGS variable in a makefile, how does the linker know where to look first and which library to use?
LDFLAGS+= \
-L${INSTALL_DIR}/lib\
-L${EVO_INSTALL_DIR}/lib\
Will it look in the INSTALL_DIR path first or in the EVO_INSTALL_DIR path?
INSTALL_DIR. The linker searches the -L directories in the order they are listed on the command line.
By the way, it's your linker (probably the same program as your compiler) that's making this choice, not the Makefile. Make (which is reading your Makefile) only runs the build tools.
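If you want to see which copy the linker actually picked, you can ask it to trace the input files it opens (a sketch with a hypothetical libfoo present in both directories):
gcc -o prog main.o -L${INSTALL_DIR}/lib -L${EVO_INSTALL_DIR}/lib -lfoo -Wl,-t
The -Wl,-t (trace) option makes the linker print every archive and shared object it reads, in order, so the path shown for libfoo tells you which directory won.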

Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work?

I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context:
-- I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so.
-- An application is linked against libsome1.so.
-- This application uses libdl.so to dynamically load another module, say libmagic.so.
-- Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION.
-- So I try next to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... Or at least libVersion() and LIB_VERSION values match (and it reports version 2 not 1).
-- However, when some data structures are serialized to disk, I noticed some corruption. In the application's directory if I delete libsome1.so and create a soft link in its place to point to libsome2.so, everything works as expected and the same corruption does not happen.
I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2)... Nothing seems to work!!! Help!!!!!!
Edit: I should have mentioned it earlier, but the app in question is Firefox, and libsome1.so is libsqlite3.so shipped with it. I don't quite have the option of recompiling them. Also, using version scripts to hide symbols seems to be the only solution right now. So what really happens when symbols are hidden? Do they become 'local' to the SO? Does rtld have no knowledge of their existence? What happens when an exported function refers to a hidden symbol?
Try compiling both libsome1.so and libsome2.so with symbol versioning added, each with its own version (use the --version-script option to ld). Then link the application and libmagic.so against the new libraries; after that, libsome1.so and libsome2.so should be completely separate.
Problems can still occur if there are unversioned references to symbols. Such references can be satisfied by versioned definitions (so that it is possible to add symbol versioning to a library without breaking binary compatibility). If there are multiple symbols of the same name, it can sometimes be hard to predict which one will be used.
Regarding tools, nm -D does not display any information about symbol versioning. Try objdump -T or readelf -s instead.
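A minimal sketch of such a version script and of passing it through gcc (the file name libsome2.map and the symbol names are only placeholders):
cat > libsome2.map <<'EOF'
VER_2 {
    global:
        libVersion;
    local:
        *;
};
EOF
gcc -shared -o libsome2.so some2.o -Wl,--version-script=libsome2.map
objdump -T libsome2.so | grep VER_2
After this, the exported symbols should be tagged with VER_2 in the objdump -T output, and an analogous script with VER_1 applied to libsome1.so keeps the two sets of definitions from colliding.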

How to link to shared lib from shared lib with relative path?

I'm working on a Firefox plugin that uses external libraries to render 3D graphics on the browser.
The problem is that I want the plugin to use external libraries packed with it without changing the LD_LIBRARY_PATH variable.
The libraries are installed in a position relative to the plugin (a shared library too), while the actual executable (i.e. the browser) can be located somewhere entirely else.
I'm testing it on Ubuntu (there is no such problem with the Windows version of the plugin).
My dependencies are the OpenSceneGraph libraries, and static compilation would make the plugin really big (not an option if there is any alternative).
Use the rpath option when linking and specify the 'special' path $ORIGIN.
Example:
-Wl,-R,'$ORIGIN/../lib'
Here's a site that elaborates on using $ORIGIN:
http://www.itee.uq.edu.au/~daniel/using_origin/
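Assuming, for illustration, that the OpenSceneGraph libraries are shipped in a lib/ directory next to the directory holding the plugin, the link step could look roughly like this (file and library names are placeholders):
g++ -shared -fPIC -o npplugin.so plugin.o -L../lib -losg -losgViewer -Wl,-rpath,'$ORIGIN/../lib'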
You could maybe use the -L flag during the compilation to specify the relative path where the linker can find your shared objects.
If you have already generated your lib, you can link by directly invoking the ld command.
Tip: you can easily check whether some symbols are defined in a lib using the Unix command nm. This is a useful way to check that the linking went well.
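For instance (with placeholder names):
nm -D ../lib/libosg.so | grep SomeSymbol
nm -u yourplugin.so
The first command checks whether the dependency actually exports the symbol; nm -u lists the symbols your plugin still expects to be resolved at load time.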
(If I were you, I would just change LD_LIBRARY_PATH temporarily, as you said in your post. Why don't you want to do this?)
It's wrong to use a relative rpath for security reasons; you should use the libdl functions (dlopen, etc.) instead.

Resources