My library does not load on other systems but works fine on the system it was compiled on - linux

My project includes a library and example projects showing how to use it. I place the library in the "bin" folder along with all the example executables. I can run the examples on the machine where they were compiled, but when I try to run them on another machine I get:
./example: error while loading shared libraries: libMyLib.so: cannot open shared object file: No such file or directory
This makes no sense to me, since the library is in the same folder. What causes it to ignore the library on other machines?

Just because the library is in the same directory as the executable doesn't mean the dynamic linker will look there for it. By default on Linux, the dynamic linker only searches a limited set of directories, determined by ldconfig (/etc/ld.so.conf and its cache) and the LD_LIBRARY_PATH environment variable.
One trick that is very useful is to link your program with the extra linker option
-Wl,-rpath,'$ORIGIN'
which causes the dynamic linker to also search the directory containing the executable for shared objects.
You can usually set this by adding to your Makefile:
LDFLAGS := -Wl,-rpath,'$$ORIGIN'
Note the doubled $ here: make treats $$ as an escaped $, so the literal string $ORIGIN reaches the linker instead of being expanded as a make variable.
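For a concrete picture, a minimal Makefile sketch (it reuses the bin directory and library name from the question; everything else is illustrative):

CXX     := g++
LDFLAGS := -Lbin -Wl,-rpath,'$$ORIGIN'

bin/example: example.o
	$(CXX) example.o $(LDFLAGS) -lMyLib -o bin/example

After linking like this, readelf -d bin/example should show an RPATH or RUNPATH entry containing $ORIGIN, and the executable will find libMyLib.so in its own directory on any machine you copy the pair to.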

The current directory is not necessarily a place where the dynamic linker will look for shared libraries, and the directory containing the executable even less so.
You might want to check the ldconfig configuration (and ldconfig -p) to see where it does look for them.
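On the machine where it fails, ldd can also show which dependencies resolve and which don't (the executable name is the one from the question):

ldd ./example

Anything the dynamic linker cannot find is reported as "not found".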

Related

How to set up build and run of an installed executable that depends on shared libraries

I want to build an executable foobar on Linux that depends on shared libraries such as libfoobar.so. So I do
gcc foobar.o -Xlinker -rpath <relative path from executable to library> -lfoobar -o foobar
I have to provide the path to the library so that the linker does not complain, even though the library is not needed at link time.
Then the executable and libraries get installed to system directories (which are unrelated to the initial locations and to each other).
After that I can't run it, because the path to the library used at build time is no longer valid, and the library is not found.
What is the best way to setup linker options so that they work both at link time and at runtime?
I cannot use absolute paths: they are a bad idea during the build in general, and they are also not possible at runtime, since different users will have different installation roots. Only the relative paths, at build time and at runtime, are guaranteed to always be the same.
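One common approach (a sketch, assuming the executable installs to a bin/ directory and the library to a lib/ directory next to it; adjust the relative path to your actual layout):

gcc foobar.o -L path/to/built/lib -lfoobar -Wl,-rpath,'$ORIGIN/../lib' -o foobar

The -L path satisfies the linker at build time, while the $ORIGIN rpath is what the dynamic linker uses at run time, resolved relative to wherever the executable ends up installed.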

Shared Library on Linux is not found

I have built some shared libraries on Ubuntu Linux 16.0.2 from source.
They are 64-bit libs.
I manually copied them to /usr/local/lib.
I verified that the /usr/local/lib path is indeed in one of the .conf files that ld.so.conf includes.
I then ran: sudo ldconfig to update the cache.
But when I run my executable, which uses dlopen to load one of the .so files I copied into /usr/local/lib, the dlopen call fails.
In my code, I have:
dlopen ("foobar.so", RTLD_LAZY);
Can anybody tell me what I am doing wrong?
The dynamic linker normally doesn't read /etc/ld.so.conf (and the files it includes) directly; it uses a cache, /etc/ld.so.cache.
You can update the cache with
sudo ldconfig
See ldconfig(8) for more details.
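To confirm whether a particular library actually made it into the cache, you can grep the listing (foobar is the name from the question):

ldconfig -p | grep foobar

If nothing comes back, the dynamic linker does not know about the file.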
Note that when the name you pass to dlopen() contains no slash, the dynamic linker searches for it the same way it searches for any other shared library (the caller's RPATH/RUNPATH, LD_LIBRARY_PATH, the ldconfig cache, then the default directories); it does not look in the current directory. A file named foobar.so also doesn't follow the usual lib*.so* naming convention, so ldconfig will typically not have added it to the cache even though it sits in /usr/local/lib, which is why dlopen("foobar.so", RTLD_LAZY) fails.
You don't need to install the shared object in a system directory, or follow the library naming conventions, to use it via dlopen(3): pass a name containing a slash and dlopen will open exactly that file. Those conventions are only requirements of the dynamic linker (/lib64/ld-linux-x86-64.so.2 on 64-bit x86) when it loads and links all the shared libraries at launch time.
To test, just put the shared object in your local directory and open it with an explicit path such as "./foobar.so".
For a system library there are more requirements, such as giving the library a soname, which the loader uses to load the library and which ldconfig uses to build the cache index. If you want to see whether an executable has all the libraries it needs, and where the loader finds them, run ldd(1) with the executable as argument; it prints the sonames the executable depends on and how the dynamic linker resolves each one to a path.
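A minimal sketch of the corrected call, with error reporting (the path and file name are simply the ones from the question):

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Use an explicit path so dlopen does not go through the normal
       library search; dlerror() explains any failure. */
    void *handle = dlopen("/usr/local/lib/foobar.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}

Compile with gcc test.c -ldl (on glibc 2.34 and later, dlopen lives in libc itself and -ldl is no longer needed).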

What is the best way to look for ELF Shared Libraries under linux in C

I am currently working on a userland ELF file loader in C. LD_LIBRARY_PATH does not seem to be an option for me, as it does not seem to be set by default on my system (x86_64 openSUSE). What is the best way to get all of the directories where libraries are stored?
/usr/lib64 and /lib64 for 64-bit binaries, or /usr/lib and /lib for 32-bit binaries, then the paths taken from /etc/ld.so.conf and the config files it includes.
From man ldconfig
ldconfig creates the necessary links and cache to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories, /lib and /usr/lib (on some 64-bit architectures such as x86-64, /lib and /usr/lib are the trusted directories for 32-bit libraries, while /lib64 and /usr/lib64 are used for 64-bit libraries).
The cache is used by the run-time linker, ld.so or ld-linux.so.
...
/etc/ld.so.conf File containing a list of directories, one per line, in which to search for libraries.
Note that this info is for openSUSE, other distros may use different paths.
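A rough C sketch of what this answer describes: start from the trusted 64-bit directories and append whatever /etc/ld.so.conf lists. It deliberately skips comments and include directives (real configs usually contain "include /etc/ld.so.conf.d/*.conf"), so it is a starting point rather than a complete parser:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Trusted default directories for 64-bit binaries (see man ldconfig). */
    puts("/lib64");
    puts("/usr/lib64");

    /* Append the directories listed in /etc/ld.so.conf. */
    FILE *f = fopen("/etc/ld.so.conf", "r");
    if (!f)
        return 0;

    char line[4096];
    while (fgets(line, sizeof line, f)) {
        line[strcspn(line, "#\n")] = '\0';          /* strip comment and newline */
        if (line[0] == '\0' || strncmp(line, "include", 7) == 0)
            continue;                               /* skip blanks and include lines */
        puts(line);
    }
    fclose(f);
    return 0;
}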
LD_LIBRARY_PATH is the standard environment variable users set to add their own library directories when they cannot (or are not allowed to) install shared libraries into the system directories.
There's also a file normally maintained by ldconfig: it reads /etc/ld.so.conf and builds a binary cache file, /etc/ld.so.cache, with a hash table for quick lookup of the paths to use when loading shared objects. That cache is consumed by the dynamic loader (ld.so / ld-linux.so, which ships with the C library), so it works the same regardless of which distribution you run.
To see which sonames are registered (the soname is the name a shared object exposes for its interface, which is what guarantees that an executable linked against it stays compatible with the library), just run
ldconfig -p
and you'll get all the sonames registered, and the path to the library actually loaded for that soname.
If you want to know which libraries will be loaded by some specific executable by the dynamic loader, just execute this:
ldd your_executable
and it will print the sonames that executable needs and where in the system they are located.
What ldconfig(8) does is search all the directories listed in /etc/ld.so.conf for shared object files, read the soname stored in each one, and record a mapping from that soname to the file it found. Once the table is complete, it writes /etc/ld.so.cache, which is used by /lib64/ld-linux-x86-64.so.2, the shared module responsible for loading, in user mode, the rest of the shared libraries a program needs.
There's no problem in having a local $HOME/lib directory for your locally developed shared libraries, but since that directory will not normally be listed in /etc/ld.so.conf, you'll need to set LD_LIBRARY_PATH=${HOME}/lib and remember to export it. Keep in mind that LD_LIBRARY_PATH is ignored in secure-execution mode (set-user-ID and set-group-ID programs), so don't rely on it there.
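For example, in your shell startup file (prepending $HOME/lib while preserving any existing value):

export LD_LIBRARY_PATH="${HOME}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"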
EDIT 1
By the way, if you need to load a shared library on demand (which is possibly what you actually need), read about dlopen(3) and its friends, as that is the method most programs use to dynamically load modules that weren't known when the main program was compiled. You'll need to load the module, look up the symbols you need (dlsym(3) or dlfunc(3)), store the references the module gives you, and finally call them.
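A minimal, self-contained sketch of that pattern (the library path ./libplugin.so and the symbol name plugin_entry are made up for illustration):

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* 1. Load the module. */
    void *handle = dlopen("./libplugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* 2. Look up the symbol and store the reference. */
    int (*entry)(void) = (int (*)(void))dlsym(handle, "plugin_entry");
    if (!entry) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    /* 3. Call it. */
    printf("plugin_entry returned %d\n", entry());

    dlclose(handle);
    return 0;
}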

how to use my own dynamic library in linux (Makefile)

I have a C++ project (g++ and a hand-written Makefile) targeting Linux. I used to statically link everything, which worked fine for ages. Now I want to build binaries both statically and dynamically linked. The following command in my Makefile builds the dynamic library (say libtest):
$(CXX) -shared -Wl,-soname,libtest.so.1 -o libtest.so.1.0.0 $(LIBTEST_OBJS)
The output is libtest.so.1.0.0, which has the soname libtest.so.1.
I found that at least a symbolic link libtest.so -> libtest.so.1.0.0 is required to link the client program that actually uses the generated libtest.so.1.0.0.
My question is: if I want to build my software, what is the standard way of managing this symbolic link? Clearly I don't want this extra file in my source directory, but it is required to build the client binary. Should I create it as a temporary link for building the client and remove it when done? Or should I create a directory to host the generated .so library and its links, and leave everything there until "make install" copies them to their final directories? It would be good to know the standard way of doing this.
Or maybe the way I generate the library is incorrect? Should I just generate libtest.so (as the actual library, not a link) to link my executable, then rename it and create the links when doing "make install"?
Any input will be appreciated. :)
Certainly don't generate libtest.so as the actual library (rather than a link). Typically, installing the shared library's development files installs the .h files and creates the libtest.so symbolic link as part of an install script you have to write.
If you're not installing development files, but only using the library while building your own binary, just create the symbolic link from your Makefile.
There's not much of a standard here: some prefer to build artifacts into a separate build directory, some don't mind building in the source directory. I'd build into a separate directory, though, and keep the source tree clean of any .o/.so/executable files.
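As one possible arrangement (a sketch only; the build/ directory and the PREFIX install variable are conventions assumed here, not something from the question), build into a separate directory, create the links there, and point the client at it:

BUILD := build

$(BUILD)/libtest.so.1.0.0: $(LIBTEST_OBJS)
	mkdir -p $(BUILD)
	$(CXX) -shared -Wl,-soname,libtest.so.1 -o $@ $(LIBTEST_OBJS)
	ln -sf libtest.so.1.0.0 $(BUILD)/libtest.so.1
	ln -sf libtest.so.1.0.0 $(BUILD)/libtest.so

client: client.o $(BUILD)/libtest.so.1.0.0
	$(CXX) client.o -L$(BUILD) -ltest -o $@

install: $(BUILD)/libtest.so.1.0.0
	install -D $(BUILD)/libtest.so.1.0.0 $(PREFIX)/lib/libtest.so.1.0.0
	ldconfig -n $(PREFIX)/lib
	ln -sf libtest.so.1.0.0 $(PREFIX)/lib/libtest.so

At run time the installed client still has to find libtest.so.1 through ldconfig, an rpath, or LD_LIBRARY_PATH, as discussed elsewhere on this page.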
You might find useful information here
My suggestion is to use libtool, which handles situations like this.

How do I prepend a directory to the library path when loading a core file in gdb on Linux

I have a core file generated on a remote system that I don't have direct access to. I also have local copies of the library files from the remote system, and the executable file for the crashing program.
I'd like to analyse this core dump in gdb.
For example:
gdb path/to/executable path/to/corefile
My libraries are in the current directory.
In the past I've seen debuggers implement this by supplying the option "-p ." or "-p /=."; so my question is:
How can I specify that libraries be loaded first from paths relative to my current directory when analysing a corefile in gdb?
Start gdb without specifying the executable or core file, then type the following commands:
set solib-absolute-prefix ./usr
file path/to/executable
core-file path/to/corefile
You will need to make sure to mirror your library path exactly from the target system. The above is meant for debugging targets that don't match your host, which is why it's important to replicate the target's root filesystem structure containing your libraries.
If you are remote debugging a server that is the same architecture and Linux/glibc version as your host, then you can do as fd suggested:
set solib-search-path <path>
If you are trying to override some of the libraries, but not all then you can copy the target library directory structure into a temporary place and use the solib-absolute-prefix solution described above.
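For example, if the files copied from the target live under ./sysroot (a directory name made up here), mirroring the target layout as ./sysroot/lib, ./sysroot/usr/lib, and so on:

gdb
(gdb) set solib-absolute-prefix ./sysroot
(gdb) file path/to/executable
(gdb) core-file path/to/corefile
(gdb) info sharedlibrary

info sharedlibrary then shows which libraries were resolved from under ./sysroot and which are still missing.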
I'm not sure this is possible at all within gdb but then I'm no expert.
However I can comment on the Linux dynamic linker. The following should print the path of all resolved shared libraries and the unresolved ones.
ldd path/to/executable
We need to know how your shared libraries were linked with your executable. To do this, use the following command:
readelf -d path/to/executable | grep RPATH
Should the command print nothing, the dynamic linker will use standard locations plus the LD_LIBRARY_PATH environment variable to find the shared libraries.
If the command prints some lines, the dynamic linker will ignore LD_LIBRARY_PATH and use the hardcoded rpaths instead.
If the listed rpaths are absolute, the only solution I know is to copy (or symlink) your libraries to the listed locations.
If the listed rpaths are relative, they will contain $ORIGIN, which is replaced at run time by the directory containing the executable. Move either the executable or the libraries so that they match.
For further informations, you could start with:
man ld.so
I found this excerpt on developer.apple.com
set solib-search-path path
If this variable is set, path is a colon-separated list of directories to search for shared libraries. 'solib-search-path' is used after 'solib-absolute-prefix' fails to locate the library, or if the path to the library is relative instead of absolute. If you want to use 'solib-search-path' instead of 'solib-absolute-prefix', be sure to set 'solib-absolute-prefix' to a nonexistent directory to prevent GDB from finding your host's libraries.
EDIT:
I don't think using the above setting prepends the directories I added, but it does seem to append them, so files missing from my current system are picked up in the paths I added. I guess setting the solib-absolute-prefix to something bogus and adding directories in the solib-search-path in the order I need might be a full solution.
You can also just set LD_PRELOAD to each of the libraries or LD_LIBRARY_PATH to the current directory when invoking gdb. This will only cause problems if gdb itself tries to use any of the libraries you're preloading.
One important note:
if you're cross-compiling and trying to debug with gdb, then after you've done file EXECUTABLE_NAME, if you see something like:
Using host libthread_db library "/lib/libthread_db.so.1"
then check whether you have a libthread_db for your target system. I found a lot of similar problems on the web. Such a problem cannot be solved just by using "set solib-..."; you have to build libthread_db with your cross-compiler as well.
