Can I change 'rpath' in an already compiled binary? - linux

I have an old executable that's scheduled for the scrap heap, but it's not there yet. It relies on some libs that have been removed from my environment, but I have some stub libs somewhere that work fine. I'd like to point this executable at those stub libs. Yes, I could set LD_LIBRARY_PATH, but this executable is called from many scripts by many users, and I'd love to fix it in one spot.
I don't have the source for this, and it would be hard to get. I was thinking: can I edit this file with an ELF-aware editor and add a simple path to the rpath so it picks up the new libs? Is this possible, or once you create an ELF binary are the locations fixed and impossible to change?

There is a more universal tool than chrpath called patchelf. It was originally created for use in making packages for Nix and NixOS (packaging system and a GNU/Linux distribution).
In case there is no rpath in a binary (here called rdsamp), chrpath fails:
chrpath -r '$ORIGIN/../lib64' rdsamp
rdsamp: no rpath or runpath tag found.
On the other hand,
patchelf --set-rpath '$ORIGIN/../lib64' rdsamp
succeeds just fine.
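To confirm what was written, you can read the dynamic section back afterwards (rdsamp is the same example binary as above; the output shows whichever tag was set):
readelf -d rdsamp | grep -E 'RPATH|RUNPATH'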

There is a tool called chrpath which can do this - it's probably available in your distribution's packages.
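For example (hypothetical binary and path; note that, as the rdsamp example above shows, chrpath can only replace an rpath that is already present, it cannot add one):
chrpath -l ./oldprog                  # show the current rpath, if any
chrpath -r /opt/stub-libs ./oldprog   # replace it with the stub-lib directory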

As @user7610 said, the right way to go is the patchelf tool.
But, I feel that I can give a more comprehensive answer, covering all the commands one needs to do exactly that.
For a comprehensive article on the subject, click here
First of all, many developers talk about RPATH, but they actually mean RUNPATH. These are two different optional dynamic sections, and the loader handles them very differently. You can read more about the difference between them in the link I mentioned before.
For now, just remember:
If RUNPATH is set, RPATH is ignored
RPATH is deprecated and should be avoided
RUNPATH is preferred because it can be overridden by LD_LIBRARY_PATH
See the current R[UN]PATH
readelf -d <path-to-elf> | egrep "RPATH|RUNPATH"
Clear the R[UN]PATH
patchelf --remove-rpath <path-to-elf>
Notes:
Removes both RPATH and RUNPATH
Add values to R[UN]PATH
patchelf [--force-rpath] --set-rpath "<desired-rpath>" <path-to-elf>
Notes:
<desired-rpath> is a colon-separated list of directories, e.g. /my/libs:/my/other/libs
If you specify --force-rpath, it sets RPATH; otherwise it sets RUNPATH
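Putting the commands above together, a minimal end-to-end sketch (myprog and the directories are placeholders):
readelf -d myprog | grep -E 'RPATH|RUNPATH'     # see what is currently set
patchelf --remove-rpath myprog                  # clear both tags
patchelf --set-rpath '/my/libs:/my/other/libs' myprog
readelf -d myprog | grep -E 'RPATH|RUNPATH'     # should now show a RUNPATH entry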

This worked for me, replacing XORIGIN with $ORIGIN (the single quotes keep the shell from expanding it):
chrpath -r '$ORIGIN/../lib64' httpd

Related

CMake: Don't set rpath for a single library used in link

What I'd like to do is configure my CMakeLists file so that while building my project the linker uses a copy of a shared library (.so) that resides in my build tree to link the executable against but then does not set the rpath in the linked executable so that the system must provide the library when the loader requests it.
Specifically, I want to link against libOpenCL.so during build time on a build farm that doesn't have libOpenCL.so installed as a system library. To do this, libOpenCL.so is in the project build tree and referenced using an absolute path in the CMakeLists file. This absolute path is to ensure that if the system does happen to have libOpenCL.so installed then it is not used.
However, when running the final executable, CMake has added the absolute path to the rpath which stops the system version of libOpenCL.so being picked up by the library loader and used.
Seems simple but I can't quite figure it out.
Thanks!
I know this answer is super late. I faced the same requirement as yours.
Either we need a whitelist approach, where we set CMAKE_BUILD_RPATH explicitly to what we need, or a blacklist approach, where we tell CMake which RPATHs we don't want in the executable. The way to remove an RPATH from the build tree is not documented yet: https://gitlab.kitware.com/cmake/cmake/issues/16825
The solution I took is:
Set RUNPATH instead of RPATH. You can achieve this by the statement:
SET(CMAKE_EXE_LINKER_FLAGS "-Wl,--enable-new-dtags")
When RUNPATH is present, RPATH is ignored.
RUNPATH - same as RPATH, but searched after LD_LIBRARY_PATH, supported only on most recent UNIX
Then I can override the library at run time using the environment variable LD_LIBRARY_PATH.
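A quick way to confirm the effect after building (the executable name and override directory are hypothetical):
readelf -d ./myexe | grep -E 'RPATH|RUNPATH'    # should report RUNPATH rather than RPATH
LD_LIBRARY_PATH=/opt/system-opencl ./myexe      # this directory is now searched before the RUNPATH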
According to the CMake Wiki this should not be a problem:
By default if you don't change any RPATH related settings, CMake will link the executables and shared libraries with full RPATH to all used libraries in the build tree. When installing, it will clear the RPATH of these targets so they are installed with an empty RPATH.
So you might try to simply install it?

Why I cannot override search path of dynamic libraries with LD_LIBRARY_PATH?

Edit: I resolved this issue, the solution is below.
I am building a code in a shared computing cluster dedicated for scientific computing, thus I can only control files in my home folder. Although I am using fftw as an example, I would like to understand the specific reason, why my attempt to setup LD_LIBRARY_PATH does not work.
I build the fftw and fftw_mpi libraries in my home folder like this
./configure --prefix=$HOME/install/fftw --enable-mpi --enable-shared
make install
It builds fine, but in install/fftw/lib, I find that the freshly built libfftw3_mpi.so links to the wrong version of the fftw library.
$ ldd libfftw3_mpi.so |grep fftw
libfftw3.so.3 => /usr/lib64/libfftw3.so.3 (0x00007f7df0979000)
If I now try to set the LD_LIBRARY_PATH correctly pointing to this directory, it still prefers the wrong library:
$ export LD_LIBRARY_PATH=$HOME/install/fftw/lib
$ ldd libfftw3_mpi.so |grep fftw
libfftw3.so.3 => /usr/lib64/libfftw3.so.3 (0x00007f32b4794000)
Only if I explicitly use LD_PRELOAD can I override this behavior. I don't think LD_PRELOAD is a proper solution, though.
$ export LD_PRELOAD=$HOME/install/fftw/lib/libfftw3.so.3
$ ldd libfftw3_mpi.so |grep fftw
$HOME/install/fftw/lib/libfftw3.so.3 (0x00007f5ca3d14000)
Here is what I would have expected: a small test done on an Ubuntu desktop, where I installed fftw to /usr/lib first and then overrode this search path with LD_LIBRARY_PATH.
$ export LD_LIBRARY_PATH=
$ ldd q0test_mpi |grep fftw3
libfftw3.so.3 => /usr/lib/x86_64-linux-gnu/libfftw3.so.3
$ export LD_LIBRARY_PATH=$HOME/install/fftw-3.3.4/lib
$ ldd q0test_mpi |grep fftw3
libfftw3.so.3 => $HOME/install/fftw-3.3.4/lib/libfftw3.so.3
In short: why is the libfftw3_mpi library still finding the wrong dynamic fftw3 library? Where is this search path hard-coded in such a way that it is prioritized over LD_LIBRARY_PATH? Why is this not the case on another computer?
I am using intel compilers 13.1.2, mkl 11.0.4.183 and openmpi 1.6.2 if this matters.
Edit: Thanks for all the answers. With help of those, we were able to isolate the problem to RPATH, and from there, the cluster support was able to figure out the problem. I accepted the first answer, but both answers were good.
The reason, why this was so hard to figure out, is that we did not know that the compilers were actually wrapper scripts, adding things to compiler command line. Here a part of a reply from the support:
[The] compilation goes through our compiler wrapper. We do RPATH-ing
by default as it helps most users in correctly running their jobs
without loading LD-LIBRARY_PATH etc. However we exclude certain
library paths from default RPATH which includes /lib, /lib64 /proj
/home etc. Earlier the /usr/lib64 was not excluded by mistake
(mostly). Now we have added that path in the exclusion list.
From http://man7.org/linux/man-pages/man8/ld.so.8.html
When resolving shared object dependencies, the dynamic linker first
inspects each dependency string to see if it contains a slash (this
can occur if a shared object pathname containing slashes was
specified at link time). If a slash is found, then the dependency
string is interpreted as a (relative or absolute) pathname, and the
shared object is loaded using that pathname.
If a shared object dependency does not contain a slash, then it is
searched for in the following order:
o (ELF only) Using the directories specified in the DT_RPATH dynamic
section attribute of the binary if present and DT_RUNPATH
attribute does not exist. Use of DT_RPATH is deprecated.
o Using the environment variable LD_LIBRARY_PATH. Except if the
executable is a set-user-ID/set-group-ID binary, in which case it
is ignored.
o (ELF only) Using the directories specified in the DT_RUNPATH
dynamic section attribute of the binary if present.
o From the cache file /etc/ld.so.cache, which contains a compiled
list of candidate shared objects previously found in the augmented
library path. If, however, the binary was linked with the -z
nodeflib linker option, shared objects in the default paths are
skipped. Shared objects installed in hardware capability
directories (see below) are preferred to other shared objects.
o In the default path /lib, and then /usr/lib. (On some 64-bit
architectures, the default paths for 64-bit shared objects are
/lib64, and then /usr/lib64.) If the binary was linked with the
-z nodeflib linker option, this step is skipped.
With readelf -d libfftw3_mpi.so you can check whether your lib contains such an attribute in the dynamic section.
With export LD_DEBUG=libs you can debug the search path used to find your libs.
With chrpath -r <new_path> <executable> the rpath can be changed; the three steps are combined in the sketch below.
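A combined sketch, run from the build's lib directory (the chrpath step assumes the library already carries an rpath, as it does here; deleting it is often enough to let LD_LIBRARY_PATH take over):
readelf -d libfftw3_mpi.so | grep -E 'RPATH|RUNPATH'   # does it hard-code /usr/lib64?
LD_DEBUG=libs ldd libfftw3_mpi.so 2>&1 | grep fftw     # trace where the loader actually searches
chrpath -d libfftw3_mpi.so                             # delete the embedded rpath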
I see two possible reasons for this.
First, libfftw3_mpi.so may be linked with /usr/lib64/ as RPATH. In that case, providing LD_LIBRARY_PATH will have no effect. To check if it is your case, run readelf -d libfftw3_mpi.so | grep RPATH and see if it has /usr/lib64/ as a library path. If it does, use chrpath utility to change or remove it.
Alternatively, you may be running a system that does not support LD_LIBRARY_PATH at all (like HP-UX).

How to set RUNPATH without setting RPATH

I built a binary using premake (gmake) that links dynamically to another. When I then tried to run the binary, it complained that it can't find the dynamic library.
Running ldd on the binary shows that, of course, the dynamic library is => Not Found!
Of course I can export LD_LIBRARY_PATH=<path of the dynamic library> but I don't want that.
I would like that the binary to work out of the box, on different machines (assuming the dynamic library location doesn't change of course)
1- How do people do this? Do they set RPATH all the time through some linker flags?
From what I gathered, RUNPATH can be over-ridden by LD_LIBRARY_PATH but that's not the case for RPATH.
There's the -rpath and --enable-new-dtags options that will instruct gcc (or the linker to be more precise) to set both RUNPATH and RPATH to the same value, but that's not what I want really, and I can't even see the point of that.
2- What is the point of that?
3- Am I missing something? How can I set RUNPATH only, so that in general the dependencies are found automatically (via RUNPATH) unless instructed to search a specific path using LD_LIBRARY_PATH first?
on my laptop the paths are probably different from yours, but the "-d" option should do it.
Usage: c:/strawberry/c/bin/../lib/gcc/x86_64-w64-mingw32/4.7.3/../../../../x86_64-w64-mingw32/bin/ld.exe [options] file...
Options:
-d, -dc, -dp Force common symbols to be defined
"to set both RUNPATH and RPATH to the same value"
That is only for backward compatibility purposes for old ld.sos which did not yet know RUNPATH.
New ld.sos according to its man page:
Using the directories specified in the DT_RPATH dynamic section
attribute of the binary if present and DT_RUNPATH attribute does not
exist. Use of DT_RPATH is deprecated.
So using both RUNPATH and RPATH is in fact the best option of all, and it behaves as if only RUNPATH had been set.
Moreover when I try it on Fedora 36 x86_64 it even no longer sets RPATH:
$ :|gcc -fPIC -shared -Wl,-rpath,$PWD,--enable-new-dtags -x c -;readelf -Wd a.out|grep -i path
 0x000000000000001d (RUNPATH)            Library runpath: [/tmp]
Also, Fedora 36 has even started to default to this behavior (--enable-new-dtags).
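If your toolchain still emits both tags and you only want RUNPATH in the finished binary, a hedged follow-up (myprog is a placeholder) is to rewrite the tag after linking; patchelf converts DT_RPATH to DT_RUNPATH unless --force-rpath is given:
patchelf --set-rpath '$ORIGIN/../lib' myprog
readelf -d myprog | grep -E 'RPATH|RUNPATH'     # expect only a RUNPATH entry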

Why, after setting LD_LIBRARY_PATH and ld.so.cache properly, are there still library-finding problems?

I have a certain shared object library in a special directory, and I:
make sure the special directory is in $LD_LIBRARY_PATH
make sure this directory has read and execute permissions for all
make sure the appropriate library directory is in ld.so.conf and that root has run ldconfig
(verified by checking for the library with ldconfig -p as a normal user)
make sure it has no soname problems (i.e. create a few symlinks if necessary)
Now, say I compile a program that needs that special library, a program packaged in the typical open-source manner (./configure && make, etc.), and it says -lspecialibrary cannot be found, an error that a failure of any of the above checks would probably also produce.
A workaround I have used is to symlink the library into /usr/local/lib64, and suddenly the library is found. Also, when compiling a relatively simple package, I manually add -L/path/to/spec/lib and that works too. But I regard those two methods as hacks, so I was looking for any clues as to why my list of checks isn't good enough to find the library.
(I particularly find $LD_LIBRARY_PATH of little use. In fact I can exclude certain libraries from it, and they will still be found during compilation.)
$LD_LIBRARY_PATH and ldconfig are only used to locate libraries when running programs that need them, i.e. they are used by the loader, not the compiler. Your program depends on libspeciallibrary.so. When running your program, $LD_LIBRARY_PATH and ldconfig are consulted to find libspeciallibrary.so.
These methods are not used by your compiler to find libraries. For your compiler, the -L option is the right way to go. Since your package uses the autotools, you should set the $LDFLAGS environment variable:
LDFLAGS=-L/path/to/lib ./configure && make
This is also documented in the configure help:
./configure --help
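If you also want the resulting binary to find the library at run time without LD_LIBRARY_PATH, a variant of the same idea (same hypothetical /path/to/lib) embeds a run-time search path while linking:
LDFLAGS="-L/path/to/lib -Wl,-rpath,/path/to/lib" ./configure && make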

Globally use Google's malloc?

I'd like to experiment with Google's tcmalloc on Linux... I have a huge project here, with hundreds of qmake-generated Makefiles... I'd like to find a way to get gcc to globally link against tcmalloc (like it does with libc)... Is this possible? Or will I have to edit every Makefile?
(I'd prefer not to edit all the pro files as there are hundreds of them)
(Also, we've already tried the LD_PRELOAD method and it's not working quite right)...
How do your makefiles access the compiler (gcc/g++/cc/c++)?
If it's just by name (g++), and not by explicit path (/usr/bin/g++), you can simply create a replacement g++ in whatever directory you prefer, and prepend that directory to your path.
E.g.: Create a ~/mytmpgccdir/g++ file:
#!/bin/tcsh -f
exec /usr/bin/g++ -Lfoo -lfoo $*:q
Adding whatever extras (-Lfoo -lfoo) you like, either before or after the other arguments ($*:q).
Then pre-pend it to your path and make normally.
#tcsh version
% set path = ( ~/mytmpgccdir/ $path:q )
% make clean
% make
p.s. If it is by explicit name, you may be able to override it on the command line. Something like: make all GCC=~/mytmpgccdir/gcc
p.p.s. If you do use LD_PRELOAD, you might want a script like the one below to setenv LD_PRELOAD only while running your program. Otherwise it's easy to wind up LD_PRELOAD'ing every command, like /bin/ls, make, g++, etc.
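A minimal sketch of such a wrapper, mirroring the tcsh example above (the tcmalloc path is the hypothetical one from the other answer; save it as, say, ~/with-tcmalloc):
#!/bin/tcsh -f
# preload tcmalloc only for the command given as arguments
setenv LD_PRELOAD /usr/lib/foo/libtcmalloc.so
exec $*:q
Then run your program as ~/with-tcmalloc ./your_program.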
First, check the qmake documentation. There is an easy way to specify (in a .pro file) that a certain library should always be linked in.
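If editing even one .pro file is undesirable, qmake also accepts variable assignments on its command line, so a hedged alternative (whether it reaches every sub-project depends on how the tree regenerates its Makefiles) is:
qmake "LIBS += -ltcmalloc"
make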
Also, since you are just experimenting, simply use LD_PRELOAD - no recompilation necessary:
LD_PRELOAD="/usr/lib/foo/libtcmalloc.so" ./your_program
You do not have to have linked "your_program" against google's tcmalloc library.
