Finding my Linux shared libraries at runtime

I'm porting an SDK written in C++ from Windows to Linux. There are other binaries, but at its simplest, our SDK is this:
core.dll - implicitly loaded DLL ("libcore.so" shared library on Linux)
tests.exe - an app used to test the DLL (uses Google Test)
All of my binaries must live in one folder somewhere that apps can find. I've achieved that on Windows, and I wanted to achieve the same thing on Linux. I'm failing miserably.
To illustrate, here's the basic project tree. We use CMake. After I build, I've got:
mysdk
|---CMakeLists.txt (has add_subdirectory() statements for "tests" and "core")
|---/tests (source code + CMakeLists.txt)
|---/core (source code + CMakeLists.txt)
|---/build (all build output, CMake output, etc.)
|   |---/tests (build output)
|   |---/core (build output)
The goal is to "flatten" the "build" tree and put all the binary outputs of tests, core, etc into one folder.
I tried adding CMake's install command to each of my CMakeLists.txt files (e.g. install(TARGETS core DESTINATION bin)). I then executed sudo make install after my normal build. This put all my binaries in /usr/local/bin with no errors. But when I ran tests from there, it failed to find libcore.so, even though it was sitting right there in the same folder:
tests: error while loading shared libraries: libcore.so: Cannot open shared object file: No such file or directory
I read up on the LD_LIBRARY_PATH environment variable, so I tried adding that folder (/usr/local/bin) to it and running again. I can see I've properly altered LD_LIBRARY_PATH, but it still doesn't work: tests still can't find libcore.so. I even tried changing the PATH environment variable as well. Same result.
In frustration, I tried brute-force copying the output binaries to a temporary subfolder (of /mysdk/build) and running tests from there. To my surprise it ran.
Then I realized why: Instead of loading the local copy of libcore.so it had loaded the one from the build output folder (as if the full path were "baked in" to the app at build time). Subsequently deleting that build-output copy of libcore.so made "tests" fail altogether as before, instead of loading the local copy. So maybe the path really was baked in.
I'm at a loss. I've read the CMake tutorial and reference; they make this sound so easy. Aside from the obvious (what am I doing wrong?), I would appreciate it if anyone could answer any of the following questions:
What is the correct way to control where my app looks for my shared libraries?
Is there a relationship between my project build structure and how my binaries must then appear when installed?
Am I even close to the right way of doing this?
Is it possible I've somehow inadvertently "baked" (into my app) full paths to my shared libraries? Is that a thing? I use CMake variables for everything in my CMakeLists files.

You can run ldd file to print the shared object dependencies for file. It will tell you where its dependencies are being read from.
You can export the environment variable LD_LIBRARY_PATH with the paths you want the dynamic linker to search. If a dependency is not found, try adding the path where that dependency is located to LD_LIBRARY_PATH and then run ldd again (make sure you export the variable).
Also, make sure the dependencies have the right permissions.

Updating LD_LIBRARY_PATH is one option. Another option is to use RPATH. Please check the example below:
https://github.com/mustafagonul/cmake-examples/blob/master/005-executable-with-shared-library/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
# Project
project(005-executable-with-shared-library)
# Directories
set(example_BIN_DIR bin)
set(example_INC_DIR include)
set(example_LIB_DIR lib)
set(example_SRC_DIR src)
# Library files
set(library_SOURCES ${example_SRC_DIR}/library.cpp)
set(library_HEADERS ${example_INC_DIR}/library.h)
set(executable_SOURCES ${example_SRC_DIR}/main.cpp)
# Setting RPATH
# See https://cmake.org/Wiki/CMake_RPATH_handling
set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/${example_LIB_DIR})
# Add library to project
add_library(library SHARED ${library_SOURCES})
# Include directories
target_include_directories(library PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/${example_INC_DIR})
# Add executable to project
add_executable(executable ${executable_SOURCES})
# Linking
target_link_libraries(executable PRIVATE library)
# Install
install(TARGETS executable DESTINATION ${example_BIN_DIR})
install(TARGETS library DESTINATION ${example_LIB_DIR})
install(FILES ${library_HEADERS} DESTINATION ${example_INC_DIR})
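Applied to the question's layout, a minimal sketch could look like the following (the target names core and tests are assumptions; the question doesn't show the actual CMakeLists.txt files). It sets an $ORIGIN-relative install RPATH so that tests finds libcore.so next to itself in the flattened bin folder.
# Hypothetical top-level CMakeLists.txt for the "mysdk" layout in the question.
# Target names "core" and "tests" are assumed.
cmake_minimum_required(VERSION 3.13)  # 3.13+ allows install(TARGETS) for targets defined in subdirectories
project(mysdk CXX)

# Embed a relative RPATH so the installed executable looks for libcore.so
# in its own directory instead of an absolute build-tree path.
set(CMAKE_INSTALL_RPATH "$ORIGIN")

add_subdirectory(core)    # add_library(core SHARED ...)
add_subdirectory(tests)   # add_executable(tests ...) + target_link_libraries(tests PRIVATE core)

# "Flatten" the install: the executable and the shared library end up in one folder.
install(TARGETS core tests DESTINATION bin)
This also explains what the asker observed: by default CMake gives build-tree binaries an RPATH containing the absolute build directories of the libraries they link against, and replaces it at install time with CMAKE_INSTALL_RPATH, which is empty unless you set it.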

Related

C++ executable fails to link to shared library after scp

So I am working on a project that is intended to run on a remote server. I develop the program on a local PC, compile it, then upload it to the remote server. Both the local PC and the remote server run CentOS 7.7.
The program is developed using the CLion IDE, configured with CMake. The program depends on a few shared libraries, which are supposed to be linked to the executable according to what I wrote in CMake. On my local PC, I can compile and run the program perfectly. However, after I scp the whole directory of the project to the remote server, the executable fails to run. It cannot find any of the .so files, according to what ldd says.
This is my CMakeLists.txt, with every path being a relative path instead of an absolute path:
cmake_minimum_required(VERSION 3.15)
project(YS_Test)
set(CMAKE_CXX_STANDARD 11)
set(SOURCE_PATH_ src)
file(GLOB SOURCE_FILES_ ${SOURCE_PATH_}/*.*)
set(PROJECT_LIBS_ libTapQuoteAPI.so libTapTradeAPI.so libTapDataCollectAPI.so)
include_directories(api/include)
link_directories(api/lib/linux)
add_executable(YS_Test ${SOURCE_FILES_})
target_link_libraries(YS_Test ${PROJECT_LIBS_})
Please do not tell me to set LD_LIBRARY_PATH to fix my issue. The program worked fine on my local PC without LD_LIBRARY_PATH, so I expect it to run on the remote server without it. I would like to know what is really going on here, instead of a workaround. Thanks!
If I understand your problem correctly, you want to ship your compiled YS_Test program along with some dependencies and have it run on a remote server. By default an executable will only look in the directories configured in /etc/ld.so.conf (plus the standard system library directories), which will not include the deploy path.
Note: Typically you do not deploy your entire build directory but only the compiled artifacts and dependencies. For this answer I will assume you deploy the binary and its dependencies to the same directory.
You have two options:
Require users of your program to set LD_LIBRARY_PATH, either by themselves or by a wrapper script. This variable will instruct the dynamic linker to look in the specified directories as well. Even if you do not like this solution, it is by far the most common approach.
Add -Wl,-rpath='$ORIGIN' to your linker options. This will add a DT_RUNPATH attribute to the executable's dynamic section. As you are using CMake, you can also set this using the BUILD_RPATH and/or INSTALL_RPATH target properties (see the sketch after the manpage excerpt below).
The ld.so manpage describes this attribute as follows:
If a shared object dependency does not contain a slash, then it is searched for in the following order:
...
Using the directories specified in the DT_RUNPATH dynamic section attribute of the binary if present.
The $ORIGIN part expands to the directory containing the program or shared object.
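A minimal CMake sketch of that second option (the target name YS_Test is taken from the question's CMakeLists.txt; the property values shown are just one way to do it):
# Placed after add_executable(YS_Test ...): embed an $ORIGIN-relative
# DT_RUNPATH through target properties instead of raw linker flags.
set_target_properties(YS_Test PROPERTIES
    BUILD_RPATH   "$ORIGIN"   # the binary in the build tree looks next to itself
    INSTALL_RPATH "$ORIGIN"   # an installed/deployed binary does the same
)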
If you really insist on shipping your build directory (e.g. during development), you can take a look at the CMake BUILD_RPATH_USE_ORIGIN target property (and its usual global counterpart CMAKE_BUILD_RPATH_USE_ORIGIN); this will embed relative paths into binaries instead of absolute paths.
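If you go that route, enabling it is a one-line change near the top of the question's CMakeLists.txt (a sketch; the variable needs CMake 3.14 or newer, which the file's 3.15 minimum already satisfies):
cmake_minimum_required(VERSION 3.15)
project(YS_Test)
# Emit $ORIGIN-relative RPATH entries for build-tree binaries so the build
# directory can be copied to another machine as-is.
set(CMAKE_BUILD_RPATH_USE_ORIGIN TRUE)
# ... the rest of the file stays unchanged ...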
As you don't want a workaround (@Botje has given you two already), I will try an explanation instead. On your development machine, if you use this command:
ldd YS_Test
You will see all the shared libraries used by your program, with their corresponding paths. libTapQuoteAPI.so, libTapTradeAPI.so and libTapDataCollectAPI.so are found in your api/lib/linux directory, but resolved with full absolute paths. If you do the same on your server, some shared objects can't be resolved because they aren't at the same location.
If you use one of these commands (not sure which are available in CentOS):
chrpath --list YS_Test
or
patchelf --print-rpath YS_Test
You will see the RPATH or RUNPATH tag embedded in your program. This is the path used by the Linux dynamic linker to locate dependencies that are outside the standard ld locations. You can find extended explanations of this on the Internet, for example in the Wikipedia article on rpath.
Breaking my promise, I give you a third workaround: use patchelf or chrpath on your server after scp to change the embedded RPATH tag, pointing it at $ORIGIN (which represents the program's location).

SCons: When adding a Node to the LIBS variable, how do I make it use just the file without the directory?

I have SCons code in which I am using SConscripts to build different directories separately. In my Src directory, my SConscript builds a shared library, and then returns the resulting Node as the Python variable libMyLibrary. I typically use the install option to copy this library to a directory that is on my system's LD_LIBRARY_PATH (I'm using OpenSUSE).
So far, so good. Now, in another directory, Src/Test, another SConscript imports libMyLibrary and builds some Programs using code like this:
env.Program('myProgram', 'myProgram.cpp', LIBS=[env['LIBS'], libMyLibrary])
The program then gets installed to my local bin folder. This code does track the library dependency and build the program, but the problem is that since the library is in a sub-directory (Src), that sub-directory gets included in the linker command. Here is an abbreviated example of the linker command that SCons generates:
g++ -o Src/Test/myProgram Src/Test/myProgram.o Src/libMyLibrary.so
I believe this happens because the Node, libMyLibrary, is essentially a path. The problem is that when I try to run the program, it is not looking for libMyLibrary.so in my library folder, but rather for Src/libMyLibrary.so, and of course it doesn't find it.
I do NOT want the libraries I build to be installed in sub-directories of my install folder.
I already add the Src folder to LIBPATH, so SCons adds the -LSrc option to the linker command, but that doesn't solve the problem. My preference would be that when I add a Node, the path should automatically get parsed out to add the appropriate -L and -l options.
I know that I can get around this problem by adding the string 'MyLibrary' to the LIBS variable instead of the libMyLibrary Node, but then I have to explicitly tell SCons that each Program Depends() on libMyLibrary. It seems very inefficient to short-circuit SCons's built-in dependency tracking this way. Does anyone know the correct, SCons-y way to do this?
I'm referring to your latest comment: It looks to me as if this is not really a SCons problem, but more a general linker question (XY problem). Are you perhaps simply searching for RPATH? Please also check this old SO question: scons executable + shared library in project directory

uic can't find shared library

I am trying to make Qt5 part of my source tree, so I haven't installed it on my machine, just copied it from source control. I am having a problem when I try to run uic:
stiopa#stiopa-VirtualBox:~/ct/LinuxLibs/Qt/bin > ./uic
./uic: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory
I am still getting the same error even when I copy the libQt5Core library to the bin directory. How does uic look for shared libraries? Is there any environment variable I need to set to fix it?
This is yet another case of not putting the dependent shared libraries in a defined location that is supported by the program.
If you're planning on doing the 'copy the files to the same directory as the executable', the fast solution is to reference the directory in the library load path; e.g. if the binary is in $HOME/foo, you do:
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}${LD_LIBRARY_PATH:+:}$HOME/foo
This adds $HOME/foo to (or makes it) the run-time linker's load path. As a result, any programs you run will look in this directory for libraries, in addition to the default set for the OS (defined by the ld.so configuration) and the paths defined within the application itself (the rpath).
If you're going to follow this route, you can move the binary to target.bin and create a target bash script that invokes the .bin file automatically; e.g.
#!/bin/bash -p
# Add this script's own directory to the library search path, then run the real binary.
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}${LD_LIBRARY_PATH:+:}$(dirname "$0")
exec "$0.bin" "$@"   # "$@" passes all arguments through unchanged
A secondary mechanism, which lets you change the search location for a binary without requiring an environment variable, is to modify the binary itself so that it searches in different locations than it usually does; this takes advantage of some features of the run-time linker (which looks up libraries).
There is a program called chrpath, available from various package managers, which allows you to edit the rpath directly. In this case, you can change the additional search path of the binary using:
chrpath -r '$ORIGIN' foo
This means that the program will look in the same directory as the binary for .so files, thus allowing it to run.

Problems with porting a fortran program from ubuntu to windows

I previously had some trouble updating old code, which still needed an unsupported compiler and expensive libraries, to a version using gfortran in Eclipse on Windows. I let it rest for a while and recently took a whole other approach, rebuilding the program from scratch and developing on an Ubuntu machine, but now I want to bring it back to a Windows machine so that my co-workers can contribute to it.
The status:
The program compiles, runs and gives good results on an Ubuntu machine with the GNU GCC compiler
Windows 7 machine, 64-bit
Cygwin installation (for GNU Fortran) with lapack and liblapack-devel (however, I don't use these, because I compile BLAS and LAPACK manually)
(C:/cygwin/lib added to the Windows Path)
Original Issue:
The program compiles in Cygwin by calling the make command with the makefile found here: http://thijsvandenbrande.be/phd/hamfemInstall/makefile
This produces the file hamfem.exe, which gives the following error when run by double-clicking it in Windows: The program can't start because cyglapack-0.dll is missing from your computer. Try reinstalling the program to fix this problem.
When running the executable from Cygwin by calling ./hamfem.exe, it starts fine. However, I want a solution that lets me give this executable to my co-workers so that they can change the input files (located in a folder that has a relative path to the executable).
Going further on the comments below, I tried the following:
Adding the exact path to the C:\cygwin\lib\lapack\cyglapack-0.dll file to the Windows Path, and even rebooting afterwards, doesn't help.
Adding -static to the makefile before calling the library results in dependency errors, because I use two LAPACK routines (DPBTRF and DPBTRS) that depend on quite a lot of other routines. These routines are used in the mainprog.f90 module. The error: /usr/lib/gcc/i686-pc-cygwin/4.7.3/../../../liblapack.a(dpbtrf.f.o): In function 'dpbtrf':
/usr/src/debug/lapack-3.4.2-1/SRC/dpbtrf.f:277: undefined reference to 'dtrsm_'
and a couple more lines stating the dependencies.
Adding the liblapack.a file to the src folder, but the compiler always goes back to the LAPACK in Cygwin.
On the LAPACK website you can normally download the routines with their dependencies (for example DPBTRF), but these are not available anymore. Does anyone have another idea for including these two routines and their dependencies in a static library file that I can compile beforehand and add to the src folder?
Current (semi-)Fix
The following worked (a bit) for me: following the instructions on http://gcc.gnu.org/wiki/GfortranBuild to manually build libblas.a and liblapack.a in the /usr/src folder of Cygwin and referring to this folder in the makefile. The updated makefile can be found here: http://thijsvandenbrande.be/phd/hamfemInstall/makefileNew
The code compiles nicely on Windows by running the make command from Cygwin (next step in the process: running it out of Eclipse), and I get a .exe file that can be run by double-clicking it and that keeps running if I move it, together with its folder, to another location. Because figuring out this process is quite labour-intensive, I added an answer below stating the commands you have to pass to Cygwin in order to make it work.
For your information, my file structure looks like this (after the build, I move the .exe file one folder up, in both the Linux version and the Windows version):
hamfem.exe
in
    input.txt
    NGCR_building01.txt
out
    (empty folder for output files of the routine)
src
    hamfem.f90 (main file)
    mainprog.f90 (file that contains the LAPACK calls)
    ... (a bunch of other modules)
    makefile
I figured things out myself, with some pointers from all over Stack Overflow. To help others resolve similar issues, I am leaving my working method here so that the question is fully documented.
The issue can be resolved by clean-building the LAPACK library and the BLAS library on your local machine in Cygwin and copying the resulting liblapack.a and libblas.a files to the library folder that you refer to in the makefile. The errors that were thrown when linking LAPACK statically were a result of some BLAS routines used by the two LAPACK routines.
These are the steps I followed:
Download the lapack.tgz and blas.tgz files from the website and paste them into the C:\Cygwin\usr\src folder.
Extract these files with the following commands in Cygwin:
cd /usr/src
tar -xvzf lapack.tgz
tar -xvzf blas.tgz
Build the two library files with the commands shown below in Cygwin. Compiling LAPACK can take a while and will result in some errors at the end because of some missing links in the test files; these tests are only run as accuracy checks. A more detailed look into the make.inc file would be needed to resolve those issues.
cd $HOME
cd /usr/src/BLAS
make
mv blas_LINUX.a ../libblas.a
cd ../lapack-3.4.2
mv make.inc.example make.inc
make
mv liblapack.a ../liblapack.a
Check the makefile included in this repository for the correct linking to the libraries. It should say /usr/src and -static -llapack -lblas; the other options are for the Linux compiler.

How to manage development and installed versions of a shared library?

In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course, the executables in the development directory need to link with the latest version of the library, while the other programs would want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, not to mention not completely correct since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so"; they usually carry a version as well, such as libasdf.so.42.1. The .so file for development is typically a symlink to the fully-versioned filename. The linker will look for the .so file and resolve it to the full filename, and the loader will then load the fully-versioned library instead.
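As an illustration only (this question does not mention CMake, but it is the build tool used elsewhere on this page; the library name asdf and the version numbers come from the example above, and asdf.cpp is a placeholder source file), the versioned file and both symlinks are what CMake's VERSION and SOVERSION target properties produce:
# Produces libasdf.so.42.1 (the real file), plus the libasdf.so.42 (soname)
# and libasdf.so (development) symlinks described above.
add_library(asdf SHARED asdf.cpp)
set_target_properties(asdf PROPERTIES
    VERSION   42.1   # full version -> libasdf.so.42.1
    SOVERSION 42     # soname       -> libasdf.so.42
)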
