Define location of libgcc_s.so.1 - linux

First of all I am in a Debian VPS without SUDO permissions, without possibility of installing anything.
I want to run a program:
./program
And it informs me that it needs libgcc_s.so.1:
ERROR: ld.so: object '/lib/snoopy.so' from /etc/ld.so.preload cannot be preloaded (wrong ELF class: ELFCLASS64): ignored.
./program: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory
My question is, is there a way to run the program without having to install gcc-multilib (without sudo)?
I thought that maybe I could download gcc-multilib locally and specify the path in execution, but I don't know how.

Depending on how you compiled your program and on the system you are running it on, you should be able to tell the dynamic linker to search additional directories for libraries by using LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/custom/path/
From the manpage of ld.so:
LD_LIBRARY_PATH
A list of directories in which to search for ELF libraries at execution time. The items in the list are separated by either colons or semicolons, and there is no support for escaping either separator.
This variable is ignored in secure-execution mode.
Within the pathnames specified in LD_LIBRARY_PATH, the dynamic linker expands the tokens $ORIGIN, $LIB, and $PLATFORM (or the versions using curly braces around the names) as described above in Rpath token expansion. Thus, for example, the following would cause a library to be searched for in either the lib or lib64 subdirectory below the directory containing the program to be executed:
$ LD_LIBRARY_PATH='$ORIGIN/$LIB' prog
(Note the use of single quotes, which prevent expansion of $ORIGIN and $LIB as shell variables!)
From the error message it looks like you are trying to run a 32-bit binary on a 64-bit system. Adding the matching 32-bit libraries should allow you to run your program.
Another option would be to "simply" compile your binary for a 64-bit system. But that might be problematic if you are not on a native 64-bit system, and it would probably force you to cross-compile.
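For the original question, a rough sketch of doing that without root might look like this: download the 32-bit libgcc package for your Debian release (for example from packages.debian.org; the package is libgcc-s1 on current Debian, libgcc1 on older releases), unpack it into your home directory, and point LD_LIBRARY_PATH at it. The paths and file names below are illustrative, not exact:
$ dpkg-deb -x libgcc-s1_*_i386.deb $HOME/locallibs          # unpacking a .deb needs no root
$ LD_LIBRARY_PATH=$HOME/locallibs/lib/i386-linux-gnu:$LD_LIBRARY_PATH ./program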

Related

Shared Library on Linux is not found

I have built some shared libraries on Ubuntu Linux 16.0.2 from source.
They are 64-bit libs.
I manually copied them to /usr/local/lib.
I verified that the /usr/local/lib path is indeed in one of the .conf files that ld.so.conf includes.
I then ran: sudo ldconfig to update the cache.
But when I then try to run my executable, which uses dlopen to load one of the .so files I previously copied into /usr/local/lib, it fails.
In my code, I have:
dlopen ("foobar.so", RTLD_LAZY);
Can anybody tell me what I am doing wrong?
The dynamic linker normally doesn't read the directories listed in (and recursively included from) /etc/ld.so.conf directly; it uses a cache.
You can update the cache with
sudo ldconfig
See ldconfig(8) for more details.
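If the library still isn't found, you can check whether it actually ended up in the cache (the name below follows the question's example):
$ ldconfig -p | grep foobar        # list the cached libraries and filter for yours
Note that ldconfig generally only picks up files following the lib*.so* naming convention, which is also relevant to the naming issue discussed in the next answer.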
For dlopen to work, the current directory is not automatically searched, so doing dlopen("foobar.so", ...); probably won't work unless the file can be found through the loader's normal search path.
You don't need to install the shared object in any particular path, or follow the lib*.so naming conventions, to use it via dlopen(3). Those are only requirements of the dynamic linker (ld.so) when it loads and links all the shared libraries at launch time, and of ldconfig when it builds the cache.
To test, just put the shared object in your current directory and open it with an explicit relative path, e.g. dlopen("./foobar.so", RTLD_LAZY);. A dependency string containing a slash is treated as a pathname and bypasses the search altogether.
For a system library there are more requirements, like defining a soname for the library, which is used by the loader to load it and by ldconfig to build the cache index; if you don't follow these conventions, the automatic loading procedure will not find your file. If you want to see whether an executable finds all the libraries it needs, and where the loader finds them, run ldd(1) with the executable as an argument; it shows the dependencies needed for automatic loading and how the dynamic linker resolves their paths.
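If you want to watch the search happen, the loader's debug output traces dlopen lookups too; a quick check might look like this (the program name is a placeholder):
$ LD_DEBUG=libs ./your_program 2>&1 | grep foobar.so    # shows every directory tried for foobar.so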

Why I cannot override search path of dynamic libraries with LD_LIBRARY_PATH?

Edit: I resolved this issue, the solution is below.
I am building code on a shared computing cluster dedicated to scientific computing, so I can only control files in my home folder. Although I am using fftw as an example, I would like to understand the specific reason why my attempt to set up LD_LIBRARY_PATH does not work.
I build the fftw and fftw_mpi libraries in my home folder like this
./configure --prefix=$HOME/install/fftw --enable-mpi --enable-shared
make install
It builds fine, but in install/fftw/lib I find that the freshly built libfftw3_mpi.so links against the wrong version of the fftw library.
$ ldd libfftw3_mpi.so |grep fftw
libfftw3.so.3 => /usr/lib64/libfftw3.so.3 (0x00007f7df0979000)
If I now set LD_LIBRARY_PATH to point to this directory, it still prefers the wrong library:
$ export LD_LIBRARY_PATH=$HOME/install/fftw/lib
$ ldd libfftw3_mpi.so |grep fftw
libfftw3.so.3 => /usr/lib64/libfftw3.so.3 (0x00007f32b4794000)
Only if I explicitly use LD_PRELOAD can I override this behavior. I don't think LD_PRELOAD is a proper solution, though.
$ export LD_PRELOAD=$HOME/install/fftw/lib/libfftw3.so.3
$ ldd libfftw3_mpi.so |grep fftw
$HOME/install/fftw/lib/libfftw3.so.3 (0x00007f5ca3d14000)
Here is what I would have expected, from a small test done on an Ubuntu desktop, where I installed fftw to /usr/lib first and then overrode this search path with LD_LIBRARY_PATH.
$ export LD_LIBRARY_PATH=
$ ldd q0test_mpi |grep fftw3
libfftw3.so.3 => /usr/lib/x86_64-linux-gnu/libfftw3.so.3
$ export LD_LIBRARY_PATH=$HOME/install/fftw-3.3.4/lib
$ ldd q0test_mpi |grep fftw3
libfftw3.so.3 => $HOME/install/fftw-3.3.4/lib/libfftw3.so.3
In short: why does the libfftw3_mpi library still find the wrong dynamic fftw3 library? Where is this search path hard-coded in such a way that it takes priority over LD_LIBRARY_PATH? And why is this not the case on another computer?
I am using intel compilers 13.1.2, mkl 11.0.4.183 and openmpi 1.6.2 if this matters.
Edit: Thanks for all the answers. With their help, we were able to isolate the problem to RPATH, and from there the cluster support was able to figure out the problem. I accepted the first answer, but both answers were good.
The reason why this was so hard to figure out is that we did not know the compilers were actually wrapper scripts that add things to the compiler command line. Here is part of the reply from the support:
[The] compilation goes through our compiler wrapper. We do RPATH-ing by default as it helps most users in correctly running their jobs without loading LD_LIBRARY_PATH etc. However, we exclude certain library paths from the default RPATH, which includes /lib, /lib64, /proj, /home etc. Earlier, /usr/lib64 was not excluded by mistake (mostly). Now we have added that path to the exclusion list.
From http://man7.org/linux/man-pages/man8/ld.so.8.html
When resolving shared object dependencies, the dynamic linker first inspects each dependency string to see if it contains a slash (this can occur if a shared object pathname containing slashes was specified at link time). If a slash is found, then the dependency string is interpreted as a (relative or absolute) pathname, and the shared object is loaded using that pathname.
If a shared object dependency does not contain a slash, then it is searched for in the following order:
o (ELF only) Using the directories specified in the DT_RPATH dynamic section attribute of the binary if present and the DT_RUNPATH attribute does not exist. Use of DT_RPATH is deprecated.
o Using the environment variable LD_LIBRARY_PATH. Except if the executable is a set-user-ID/set-group-ID binary, in which case it is ignored.
o (ELF only) Using the directories specified in the DT_RUNPATH dynamic section attribute of the binary if present.
o From the cache file /etc/ld.so.cache, which contains a compiled list of candidate shared objects previously found in the augmented library path. If, however, the binary was linked with the -z nodeflib linker option, shared objects in the default paths are skipped. Shared objects installed in hardware capability directories (see below) are preferred to other shared objects.
o In the default path /lib, and then /usr/lib. (On some 64-bit architectures, the default paths for 64-bit shared objects are /lib64, and then /usr/lib64.) If the binary was linked with the -z nodeflib linker option, this step is skipped.
With readelf -d libfftw3_mpi.so you can check whether your lib contains such an attribute in its dynamic section.
With export LD_DEBUG=libs you can debug the search path used to find your libs.
With chrpath -r <new_path> <executable> the rpath can be changed.
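Put together, a check of the freshly built library might look like this (paths follow the question; note that chrpath can only overwrite an rpath that is already present):
$ readelf -d libfftw3_mpi.so | grep -E 'RPATH|RUNPATH'     # is a search path baked into the lib?
$ LD_DEBUG=libs ldd libfftw3_mpi.so 2>&1 | grep fftw       # trace where the loader actually looks
$ chrpath -r $HOME/install/fftw/lib libfftw3_mpi.so        # rewrite the baked-in rpath if it is wrong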
I see two possible reasons for this.
First, libfftw3_mpi.so may be linked with /usr/lib64/ as its RPATH. In that case, setting LD_LIBRARY_PATH will have no effect. To check whether this is your case, run readelf -d libfftw3_mpi.so | grep RPATH and see if it lists /usr/lib64/ as a library path. If it does, use the chrpath utility to change or remove it.
Alternatively, you may be running a system that does not support LD_LIBRARY_PATH at all (like HP-UX).

My library does not load on other systems but works fine on the system it was compiled on

My project includes a library and example projects for how to use it. I place the library in the "bin" folder along with all executable examples. I can run the example projects on the machine where they were compiled but when I try to run them on another machine I get:
./example: error while loading shared libraries: libMyLib.so: cannot
open shared object file: No such file or directory
This makes no sense to me, since the library is in the same folder. What causes it to ignore the library on other machines?
Just because the library is in the same directory as the executable doesn't mean it will be looked for there. By default on Linux, executables only look in a limited set of directories, determined by ldconfig and the LD_LIBRARY_PATH environment variable.
One trick that is very useful is to link your program with the extra linker option
-Wl,-rpath,'$ORIGIN'
which will cause the executable to also look in the directory the executable is in for shared objects.
You can usually set this by adding to your Makefile:
LDFLAGS := -Wl,-rpath,'$$ORIGIN'
Note the double $ here: make interprets $$ as an escape that expands to a single $.
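As a sketch (the file names follow the question, the paths are illustrative), the link line and a quick check that the tag was actually recorded could look like:
$ gcc -o example example.c -L. -lMyLib -Wl,-rpath,'$ORIGIN'
$ readelf -d example | grep -E 'RPATH|RUNPATH'              # should show a runpath containing $ORIGIN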
The current directory is not necessarily a place where the dynamic linker will look for dynamic libraries, and the directory containing the executable even less so.
You might want to check ldconfig to see where it looks for them.

Problems using a shared library

I am following the explanation in this page and this page trying to build and use shared libraries on Ubuntu Linux.
I am building the libraries and the application using a cross-compiler on my PC, then copying the files to the target system and running them there.
Finally, I am at the stage where all symlinks are defined correctly and I am able to run the application, but not in the required form.
Let's say that I have a shared library libtest.so.1.0 in a directory /home/ysap/libs. I then created the symlinks libtest.so.1 and libtest.so in the same directory, both pointing to the library file.
In the directory /home/ysap/apps I have an application program app.e that uses the test library.
Now, to run the application, I can type:
> LD_LIBRARY_PATH=/home/ysap/libs ./app.e
and the application runs nicely. However, I'd like to eliminate the assignment, so I tried typing:
> export LD_LIBRARY_PATH=/home/ysap/libs
> ./app.e
but unfortunately I get an error message, saying:
./app.e: error while loading shared libraries: libtest.so.1: cannot open shared object file: No such file or directory
I also tried typing:
> ldconfig -n /home/ysap/libs
and
> sudo ldconfig -n /home/ysap/libs
but it does not help.
What am I doing wrong? How can I make app.e run w/o the variable assignment?
Update 1:
The application uses the mmap() call, so it has to be run with sudo privileges. The actual invocation line is:
> sudo LD_LIBRARY_PATH=/home/ysap/libs ./app.e
Is it possible that the exported variable is not carried over into the sudo environment?
Update 2:
Output of ldd ./app.e:
libtest.so.1 => /home/ysap/libs/libtest.so.1 (0xb6faa000)
libgcc_s.so.1 => /lib/arm-linux-gnueabi/libgcc_s.so.1 (0xb6f85000)
libc.so.6 => /lib/arm-linux-gnueabi/libc.so.6 (0xb6ea4000)
/lib/ld-linux.so.3 (0xb6fb7000)
The sudo problem is as #duskwuff states, but if you want to compile an application without needing to modify the LD_LIBRARY_PATH variable, you can use the $ORIGIN token when linking the application; it is recognized by the dynamic linker on most recent versions of Linux.
If all the libraries are in the current directory, then when you link the application, you use the extra option:
-Wl,-R'$ORIGIN'
You need to quote the option to prevent it being expanded by the shell when compiling.
If you're putting it into a Makefile then you use:
-Wl,-R\$$ORIGIN
The $$ is so that make passes a literal $ through to the shell, and the \ prevents the shell that make invokes from expanding the variable before it is passed to the linker.
You can use any symbolic path reference, so if you had a structure where binaries were in bin/ and libraries were in lib/, you can use $ORIGIN/../lib.
This works for dlopen as well, so libraries will also be found when they are loaded dynamically at run time.
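For the bin/ and lib/ layout just mentioned, a sketch of the link step (file names are illustrative, following the question's libtest) would be:
$ gcc -o bin/app.e app.o -Llib -ltest -Wl,-R'$ORIGIN/../lib'
$ ldd bin/app.e | grep libtest          # should now resolve relative to the executable's directory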
Loading libraries from a user-specified path is a security risk, so sudo always strips out all LD_ environment variables, including LD_LIBRARY_PATH.
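That also explains why the exported variable disappears under sudo in Update 1. A commonly suggested workaround, assuming the sudo policy allows it, is to set the variable inside the command that sudo runs, for example via env:
$ sudo env LD_LIBRARY_PATH=/home/ysap/libs ./app.e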

How do I prepend a directory to the library path when loading a core file in gdb on Linux

I have a core file generated on a remote system that I don't have direct access to. I also have local copies of the library files from the remote system, and the executable file for the crashing program.
I'd like to analyse this core dump in gdb.
For example:
gdb path/to/executable path/to/corefile
My libraries are in the current directory.
In the past I've seen debuggers implement this by supplying the option "-p ." or "-p /=."; so my question is:
How can I specify that libraries be loaded first from paths relative to my current directory when analysing a corefile in gdb?
Start gdb without specifying the executable or core file, then type the following commands:
set solib-absolute-prefix ./usr
file path/to/executable
core-file path/to/corefile
You will need to mirror your library path exactly from the target system. The above is meant for debugging targets that don't match your host, which is why it's important to replicate the root filesystem structure containing your libraries.
If you are remote debugging a server that is the same architecture and Linux/glibc version as your host, then you can do as fd suggested:
set solib-search-path <path>
If you are trying to override some of the libraries, but not all then you can copy the target library directory structure into a temporary place and use the solib-absolute-prefix solution described above.
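For example, assuming you have copied the target's libraries into a local directory that mirrors the target's layout (directory names here are placeholders), a session might look like:
$ mkdir -p sysroot/lib sysroot/usr/lib
$ cp /path/to/copied/libs/* sysroot/lib/
$ gdb
(gdb) set solib-absolute-prefix ./sysroot
(gdb) file path/to/executable
(gdb) core-file path/to/corefile
(gdb) info sharedlibrary                 # verify the libraries were picked up from ./sysroot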
I'm not sure this is possible at all within gdb but then I'm no expert.
However I can comment on the Linux dynamic linker. The following should print the path of all resolved shared libraries and the unresolved ones.
ldd path/to/executable
We need to know how your shared libraries were linked with your executable. To do this, use the following command:
readelf -d path/to/executable | grep RPATH
Should the command print nothing, the dynamic linker will use standard locations plus the LD_LIBRARY_PATH environment variable to find the shared libraries.
If the command prints some lines, the dynamic linker will ignore LD_LIBRARY_PATH and use the hardcoded rpaths instead.
If the listed rpaths are absolute, the only solution I know is to copy (or symlink) your libraries to the listed locations.
If the listed rpaths are relative, they will contain a $ORIGIN which will be replaced at run time by the path of the executable. Move either the executable or the libraries to match.
For further information, you could start with:
man ld.so
I found this excerpt on developer.apple.com
set solib-search-path path
If this variable is set, path is a colon-separated list of directories to search for shared libraries. solib-search-path is used after solib-absolute-prefix fails to locate the library, or if the path to the library is relative instead of absolute. If you want to use solib-search-path instead of solib-absolute-prefix, be sure to set solib-absolute-prefix to a nonexistent directory to prevent GDB from finding your host's libraries.
EDIT:
I don't think using the above setting prepends the directories I added, but it does seem to append them, so files missing from my current system are picked up in the paths I added. I guess setting the solib-absolute-prefix to something bogus and adding directories in the solib-search-path in the order I need might be a full solution.
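Based on that, a sketch of the full setup (directory names are examples) would be:
(gdb) set solib-absolute-prefix /nonexistent       # keep GDB away from the host's own libraries
(gdb) set solib-search-path .:./libs               # searched in this order
(gdb) file path/to/executable
(gdb) core-file path/to/corefile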
You can also just set LD_PRELOAD to each of the libraries or LD_LIBRARY_PATH to the current directory when invoking gdb. This will only cause problems if gdb itself tries to use any of the libraries you're preloading.
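For example, following that suggestion, with the libraries sitting in the current directory:
$ LD_LIBRARY_PATH=. gdb path/to/executable path/to/corefile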
One important note:
If you are cross-compiling and trying to debug with gdb, then after you've done file EXECUTABLE_NAME, if you see something like:
Using host libthread_db library "/lib/libthread_db.so.1"
then check whether you have a libthread_db built for your target system. I found a lot of similar problems on the web. Such a problem cannot be solved just by using the "set solib-..." commands; you have to build libthread_db with your cross-compiler as well.
