I know the "search path" of an executable or library can be set using the -rpath linker option. This is useful when dependencies are installed in non-standard locations not normally considered by the object loader, and usually considered superior to setting the LD_LIBRARY_PATH environment variable.
However, in my specific case, the path is different between integration test machines and production machines (how little sense that makes is, unfortunately, not up for discussion.)
I would really prefer using the same package for both the integration test and the production platform. My idea is to provide an installation script that selects the proper target directory, and then patches the library / executable RPATH to the correct value.
At which point I found that I don't know...
How can a library / executable RPATH be patched to a different value?
I assume there is some command line tool available for that, but I don't know it.
In case it is of importance, I am using a CMake setup to build and create the .tgz package (cpack -G TGZ). "Real" packages (RPM, DEB) are, unfortunately, not an option.
You are looking for patchelf and/or chrpath.
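Both can rewrite the RPATH in place from an installation script. A minimal sketch, assuming the binary is called myapp and /opt/mypkg/lib is the directory your script selects (both names are placeholders):

patchelf --print-rpath myapp              # inspect the current value
patchelf --set-rpath /opt/mypkg/lib myapp

chrpath -r /opt/mypkg/lib myapp           # alternative with chrpath

One caveat worth knowing: chrpath can only replace an existing RPATH with one of equal or shorter length, while patchelf can set a string of any length.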
Related
So I am working on a project that is intended to run on a remote server. I develop the program on a local PC, compile it, then upload it to the remote server. Both the local PC and the remote server run CentOS 7.7.
The program is developed using the CLion IDE, configured with CMake. The program depends on a few shared libraries, which are supposed to be linked to the executable according to what I wrote in CMake. On my local PC, I can compile and run the program perfectly. However, after I scp the whole directory of the project to the remote server, the executable fails to run. It cannot find any of the ".so" files, according to what ldd says.
This is my CMakeLists.txt, with every path being a relative path instead of an absolute path.
cmake_minimum_required(VERSION 3.15)
project(YS_Test)
set(CMAKE_CXX_STANDARD 11)
set(SOURCE_PATH_ src)
file(GLOB SOURCE_FILES_ ${SOURCE_PATH_}/*.*)
set(PROJECT_LIBS_ libTapQuoteAPI.so libTapTradeAPI.so libTapDataCollectAPI.so)
include_directories(api/include)
link_directories(api/lib/linux)
add_executable(YS_Test ${SOURCE_FILES_})
target_link_libraries(YS_Test ${PROJECT_LIBS_})
Please do not tell me to set LD_LIBRARY_PATH to fix my issue. The program worked fine on my local PC without LD_LIBRARY_PATH, so I expect it to run on the remote server without LD_LIBRARY_PATH. I would like to know what is really going on here, instead of a workaround. Thanks!
If I understand your problem correctly, you want to ship your compiled YS_Test program along with some dependencies and have it run on a remote server. By default an executable will only look in the directories configured in /etc/ld.so.conf, which will not include the deploy path.
Note: Typically you do not deploy your entire build directory but only the compiled artifacts and dependencies. For this answer I will assume you deploy the binary and its dependencies to the same directory.
You have two options:
Require users of your program to set LD_LIBRARY_PATH, either by themselves or by a wrapper script. This variable will instruct the dynamic linker to look in the specified directories as well. Even if you do not like this solution, it is by far the most common approach.
Add -Wl,-rpath='$ORIGIN' to your linker options. This will add a DT_RUNPATH attribute to the executable's dynamic section. As you are using CMake you can also set this using the BUILD_RPATH and/or INSTALL_RPATH target properties (see the sketch after the manpage excerpt below).
The ld.so manpage describes this attribute as follows:
If a shared object dependency does not contain a slash, then it is
searched for in the following order:
...
Using the directories specified in the DT_RUNPATH dynamic section
attribute of the binary if present.
The $ORIGIN part expands to the directory containing the program or shared
object.
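For example, a sketch of a manual link step and a quick check of the result (readelf is part of binutils; the library names are the ones from your CMakeLists.txt):

g++ src/*.cpp -Iapi/include -Lapi/lib/linux \
    -lTapQuoteAPI -lTapTradeAPI -lTapDataCollectAPI \
    -Wl,-rpath,'$ORIGIN' -o YS_Test
readelf -d YS_Test | grep -E 'RPATH|RUNPATH'   # should print $ORIGIN

The single quotes matter: they stop the shell from expanding $ORIGIN before it reaches the linker.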
If you really insist on shipping your build directory (e.g. during development), you can take a look at the CMake BUILD_RPATH_USE_ORIGIN property (and its usual global counterpart CMAKE_BUILD_RPATH_USE_ORIGIN); this will embed relative paths into binaries instead of absolute paths.
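A sketch of switching that on from the command line, assuming a CMake recent enough (3.14 or later) to know the variable; run it in the build directory:

cmake -DCMAKE_BUILD_RPATH_USE_ORIGIN=ON ..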
As you don't want a workaround (@Botje has given you two already), I will try an explanation instead. On your development machine, if you use this command:
ldd YS_Test
You will see all the shared libraries used by your program, with their corresponding paths. libTapQuoteAPI.so, libTapTradeAPI.so and libTapDataCollectAPI.so are found in your api/lib/linux directory, but resolved with full absolute paths. If you do the same on your server, some shared objects can't be resolved because they aren't in the same location.
If you use one of these commands (not sure which are available in CentOS):
chrpath --list YS_Test
or
patchelf --print-rpath YS_Test
You will see the RPATH or RUNPATH tag embedded in your program. This is the path used by the Linux dynamic linker to locate dependencies that live outside the standard ld locations. You may find extended explanations on the Internet, like this one or the Wikipedia article.
Breaking my promise, I give you a third workaround: use patchelf or chrpath at your server after scp to change the embedded RPATH tag, pointing it relative to $ORIGIN (which represents the program location).
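For instance, a sketch of the post-copy fix-up on the server, assuming the .so files sit in the same directory as the executable:

patchelf --set-rpath '$ORIGIN' YS_Test
# or, if an RPATH of sufficient length is already embedded:
chrpath -r '$ORIGIN' YS_Test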
I am making a configure.ac file which checks for library dependencies.
The complete code is,
AC_CONFIG_AUX_DIR([build-aux])
AC_INIT([myprogram], [0.1], [])
AM_INIT_AUTOMAKE
AC_PROG_CC
AC_CHECK_LIB([curl], [curl_easy_setopt], [echo "libcurl library is present" > /dev/tty], [echo "libcurl library is not present" > /dev/tty] )
AC_CHECK_LIB([sqlite3], [sqlite3_open], [echo "sqlite3 library is present" > /dev/tty], [echo "sqlite library is not present" > /dev/tty] )
AC_CHECK_LIB([pthread], [pthread_create], [echo "pthread library is present" > /dev/tty], [echo "pthread library is not present" > /dev/tty] )
AC_CHECK_LIB([crypto], [SHA256], [echo "crypto library is present" > /dev/tty], [echo "crypto library is not present" > /dev/tty] )
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
"myprogram" is a program which needs to be installed in numerous user pcs.So, dependency check needs to be done in the begining, to find whether those four libraries are installed.
On systems where /usr/lib/i386-linux-gnu/libcurl.so exists, it gives the message "libcurl library is present" when I run the configure script. But on systems where only /usr/lib/i386-linux-gnu/libcurl.so.1.0 or something similar is present, it reports that libcurl is not present. If I create a soft link to libcurl.so, then it correctly reports that libcurl is present:
ln -s /usr/lib/i386-linux-gnu/libcurl.so.1.0.0 /usr/lib/i386-linux-gnu/libcurl.so
The same holds good for the other libraries as well.
Actually, I want to automate this process. Is there a way to do this without manually making a soft link? I mean, by making changes in the configure.ac file itself, so that configure will run on any machine without the need for making a soft link.
While installing a library, the installer program will typically create a symbolic link from the library's real name (libcurl.so.1.0.0) to its linker name (libcurl.so) to allow the linker to find the actual library file. But this is not always true. Sometimes it will not create the linker name. That is why these complications are happening: the program which checks for the linker name thinks that the library is not installed.
On systems where /usr/lib/i386-linux-gnu/libcurl.so exists, it gives the message "libcurl library is present" when I run the configure file. But on systems where /usr/lib/i386-linux-gnu/libcurl.so.1.0 or something similar is present, it reports that libcurl is not present.
Right, this is the behavior I would expect. What's going on here is that AC_CHECK_LIB emits a test program that references the symbol you gave it (in this case curl_easy_setopt), then does a compilation step and a link step to make sure the linker can link. On a typical Linux distro you'll want to make sure that some package called libcurl-dev (or something like that) is installed, so you'll have the header files and the libcurl.so symlink installed.
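Under the hood the check boils down to something like this (a sketch, not the exact conftest source that autoconf generates):

cat > conftest.c <<'EOF'
char curl_easy_setopt ();
int main () { return curl_easy_setopt (); }
EOF
gcc conftest.c -lcurl -o conftest && echo "link against -lcurl works"

The link step needs the libcurl.so development symlink; the versioned libcurl.so.1.0.0 alone will not satisfy -lcurl.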
But I want to automate this process. Is there a way to do this, without manually making a soft link?
Installation of the libcurl-dev package can be easily automated. It can be accomplished in several ways, depending on how you want to do it. Linux packaging systems (e.g. rpmbuild, debhelper, etc.) have ways of pulling in build dependencies before building if they aren't installed. Configuration management tools that you use to set up the build machine (e.g. Ansible, SaltStack, etc.) could install it. The dependency should be listed in the release documentation at a minimum, so that someone who has no access to these tools (or doesn't care to use them) can figure it out and build.
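For example, on a Debian-style system the build dependencies could be pulled in non-interactively; exact package names differ per distro and release, so treat these as placeholders:

sudo apt-get install -y libcurl4-openssl-dev libsqlite3-dev libssl-dev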
I wouldn't create a symlink in configure.ac -- it would likely break any future install of libcurl-dev. Furthermore you would have to run configure with elevated privileges (e.g. sudo) to create the link.
While installing a library, the installer program will typically create a symbolic link from the library's real name (libcurl.so.1.0.0) to its linker name (libcurl.so) to allow the linker to find the actual library file. But it is not always true.
Actually, I don't ever remember seeing anything like this. Typically when a DSO gets installed to the ldconfig "trusted directories" (e.g. /usr/lib, etc.) ldconfig gets run so the real library (e.g. libcurl.so.1.0.0) gets a symlink (libcurl.so.1) in the same directory, but not the development symlink (libcurl.so).
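You can see the difference on any machine that has the runtime package but not the -dev package (a sketch; the exact sonames vary):

ls -l /usr/lib/i386-linux-gnu/libcurl.so*
# libcurl.so.1 -> libcurl.so.1.0.0   (runtime symlink, maintained by ldconfig)
# libcurl.so.1.0.0                   (the real library)
# libcurl.so is absent until the -dev package is installed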
EDIT: Adding responses to comments
But why does ./configure also expect development symlinks (libcurl.so, libcrypto.so, etc.)?
Because configure can be told to run the linker, as you discovered with AC_CHECK_LIB, and if those symlinks aren't there, the link will fail.
configure checks whether the binary can run in the system, and not whether a program which uses these libraries can be built.
configure also has runtime tests as well as compile and link time tests, so it can do some limited testing of whether the output of compilation can run. configure's primary role is to ensure that prerequisites are installed/configured so make will work, so testing that tools, headers, and libraries are installed and work in some fashion is what configure mostly does. The runtime tests will not work in some environments (cross-compilation), so lots of packages don't use them.
If I am not wrong, ./configure cannot be used for checking whether a binary can run on a system, as it is only used in the case of building a program.
configure can do some runtime testing of things configure has built as mentioned in the link above (e.g. AC_RUN_IFELSE).
If ./configure succeeds, then the binary can run on the machine.
But the reverse is not true. That is, even if ./configure fails, the binary may run, as it does not depend on the development symlink (e.g. libcurl.so). Am I right?
Which binary are you referring to? The test created as part of AC_RUN_IFELSE or the output of make? If configure succeeds, the output of make still might not work. That's what make check is for. If configure fails, it's likely make won't work, and you won't get to the part where you can test the output of make.
If the scenario is a missing libcurl.so, and configure fails to link the AC_TRY_LINK test, how's that same link step going to work for your executable then, because it's also going to depend on libcurl.so for the link step? It does depend on that file (just for the link step), because you may have multiple libcurl.so.x libraries installed.
By binary... I mean the program that has been successfully built on some other system having all the dependencies installed. What I was saying is that the binary will run on a machine even if the development symlink (libcurl.so) is not there.
Sure, it's already gone past the link step and is linked to, say, libcurl.so.x and whatever other dependencies it may have.
I have an application that relies on Qt, GDCM, and VTK, with the main build environment being Qt. All of these libraries are cross-platform and compile on Windows, Mac, and Linux. I need to deploy the application to Linux after deploying on Windows. The versions of VTK and GDCM I'm using are trunk versions from git (about a month old), more recent than what I can get via apt-get on Ubuntu 11.04, which is my current (and only) Linux deployment target.
What is the accepted method for deploying an application that relies on these kinds of libraries?
Should I be statically linking here, to avoid LD_LIBRARY_PATH? I see conflicting reports on LD_LIBRARY_PATH; tutorials like this one suggest that it's the 'right way' to modify the library path to use shared libraries through system reboots. Others suggest that I should never set LD_LIBRARY_PATH. In the default version of GDCM, the installation already puts libraries into the /usr/local/lib directory, so those libraries get seen when I run ldd <my program>. VTK, on the other hand, puts its libraries into /usr/local/lib/vtk-5.9, which is not part of the LD_LIBRARY_PATH on most users' machines, and so is not found unless some change is made to the system. Copying the VTK files into /usr/local/lib does not allow ldd to see the files.
So, how can I make my application see VTK to use the libraries?
On Windows, deploying the DLLs is very straightforward, because I can just include them in the installer, and the application finds them because they are in the local directory. That approach does not work on Linux, so I was going to have the users install Qt, GDCM, and VTK from whatever appropriate source and use the default locations, and then have the application point to those default locations. However, since VTK is putting things into a non-standard location, should I also expect users to modify LD_LIBRARY_PATH? Should I include the specific versions of the libraries that I want and then figure out how to make the executable look in the local directory for those libraries and ignore the ones it finds in the library path?
Every "serious" commercial application I have ever seen uses LD_LIBRARY_PATH. They invariably include a shell script that looks something like this:
#!/bin/sh
here="${0%/*}" # or you can use `dirname "$0"`
LD_LIBRARY_PATH="$here"/lib:"$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
exec "$0".bin "$#"
They name this script something like .wrapper and create a directory tree that looks like this:
.wrapper
lib/ (directory full of .so files)
app1 -> .wrapper (symlink)
app1.bin (executable)
app2 -> .wrapper (symlink)
app2.bin (executable)
Now you can copy this whole tree to wherever you want, and you can run "/path/to/tree/app1" or "/path/to/tree/app2 --with --some --arguments" and it will work. So will putting /path/to/tree in your PATH.
Incidentally, this is also how Firefox and Chrome do it, more or less.
Whoever told you not to use LD_LIBRARY_PATH is full of it, IMHO.
Which system libraries you want to put in lib depends on which Linux versions you want to officially support.
Do not even think about static linking. The glibc developers do not like it, they do not care about supporting it, and they somehow manage to break it a little harder with every release.
Good luck.
In general, you're best off depending on the 'normal' versions of the libraries for whatever distribution you're targeting (and saying you don't support dists that don't ship recent enough versions of the lib). But if you REALLY need to depend on a bleeding-edge version of some shared lib, you can link your app with -Wl,-rpath,'$ORIGIN' and then install a copy of the exact version you want in the same directory as your executable.
Note that if you use make, you'll need $$ in the makefile to get a single $ into the argument that is actually sent to the linker. The single quotes are needed so the shell doesn't munge things...
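A sketch of both forms, with hypothetical file and library names:

# direct shell invocation: single quotes keep $ORIGIN away from the shell
g++ main.o -o myapp -L. -lfoo -Wl,-rpath,'$ORIGIN'
# the same flag inside a Makefile rule needs a doubled dollar sign:
#     $(CXX) main.o -o myapp -Wl,-rpath,'$$ORIGIN'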
Well, there are two options for deploying a Linux application.
The correct way:
make a package for your app and for the libraries, if they are so special that they can't be installed from standard repositories
There are two major package formats: RPM and DEB.
The easy way:
make a self-extracting file that will install the "Windows way" into /opt.
You can have libraries in the same directory as the executable, it's just not the preferred way.
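If you go the self-extracting route, one common approach is the makeself tool; a sketch, where the payload directory and install.sh script are placeholders you would provide:

makeself ./payload myapp-installer.run "My App" ./install.sh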
I have some shared/dynamic libraries installed in a sandbox directory. I'm building some applications which link against the libraries. I'm running into what appears to be a difference between OSX and Linux in this regard and I'm not sure what the (best) solution is.
On OSX the location of the library itself is recorded in the library, so that if your application links against it, the executable knows where to look for the library at runtime. This works as expected with my sandbox, because the executable looks there instead of in the system-wide install paths.
On Linux I can't get this to work. Apparently the library location is not present in the library itself. As I understand it you have to add the folders which contain libraries to /etc/ld.so.conf and regenerate the ld cache by running ldconfig.
This doesn't seem to do the trick for me because my libraries are located inside a users home directory. It looks like ldconfig doesn't like that, which makes sense actually.
How can I solve this? I don't want to move the libraries out of my sandbox.
On Linux, run your program with the environment variable LD_LIBRARY_PATH set to your sandbox dir.
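For example (the path is a placeholder for your sandbox):

LD_LIBRARY_PATH="$HOME/sandbox/lib:$LD_LIBRARY_PATH" ./myapp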
(I remember having used a flag -R to include library paths in the binary, but either it has been removed from gcc or it was only available on BSD systems.)
On Linux you should set LD_RUN_PATH to your sandbox dir. This is better than setting LD_LIBRARY_PATH because you're telling the linker where the library is at link time, rather than telling the shared library loader at run time.
See: Link
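A sketch of the link-time approach, with a hypothetical libfoo in the sandbox:

# LD_RUN_PATH is read by the linker at link time (ignored if -rpath is given)
LD_RUN_PATH="$HOME/sandbox/lib" gcc main.o -L"$HOME/sandbox/lib" -lfoo -o myapp
# equivalent explicit form:
gcc main.o -L"$HOME/sandbox/lib" -lfoo -Wl,-rpath,"$HOME/sandbox/lib" -o myapp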
I'm trying to build a Win32 DLL from an audio-DSP related Linux library (http://breakfastquay.com/rubberband/). There are makefiles and config scripts for Linux, but no help for Windows. The author provides a Win32 binary of a sample app using the library, and I see a number of "#ifdef MSVC" and "#ifdef WIN32" scattered around, so I don't think I'm starting completely from scratch but I'm stuck nevertheless.
As my programming knowledge in either platform is rather limited, I'd appreciate any help.
First of all, what is the right way to get started here? Visual Studio? Cygwin? Initially I started off creating a Win32 DLL project in Visual Studio, adding the source files, thinking about adding a .def file, etc, but at some point I felt like this was going nowhere.
As for Cygwin, this was the first time using it, and I don't even know if this is the sort of thing that Cygwin is designed for. Is it?
On Cygwin, I ran ./configure and got stuck at something like this:
"checking for SRC... configure: error: Package requirements (samplerate) were not met: No package 'samplerate' found"
After looking through the log, it appears that pkg-config is looking for samplerate.pc. How do I handle packages in Windows? libsamplerate is just an open source library, and I have source and a DLL for this. But I'm not sure how to use them to satisfy the dependency requirements for librubberband (which is what I'm trying to build)
I'm completely lost at this point and if anyone can give me a nudge in the right direction... and, is there an easier way to do this?
Many thanks in advance.
If you're still stuck on this I can throw a little light.
You may have to build everything from source (or have the libraries installed in your environment). You're using Cygwin; I would recommend MinGW and MSYS too, but sometimes it's just not possible to use that combination to build the program or library.
So if using Cygwin, first ensure that you have a proper environment installed, that is, that you have the correct development headers installed.
Then download libsndfile. Extract the sources to a directory and from the Cygwin bash shell navigate to that directory. There perform:
./configure
make
make install prefix=/cygdrive/c/cygwin
Notice that I use a prefix; that prefix should point to the directory Cygwin is installed in, in order to correctly install the libraries (the same goes for MinGW and MSYS: the prefix should point to the MinGW installation directory). Maybe using the usr directory in the prefix works too; I've never tried it.
Now download FFTW, as it will be needed for libsamplerate and rubberband. Same procedure as with libsndfile: extract, configure, make & make install using the prefix. Now copy the header files of FFTW (in the example they'd be in /cygdrive/c/cygwin/include) to the include directory in the usr directory (in the example /cygdrive/c/cygwin/usr/include).
Next SRC (libsamplerate), same procedure.
Then the Vamp plugin SDK. In order to compile it you may need to edit the file src\vamp-hostsdk\PluginLoader.cpp, deleting RTLD_LOCAL from a dlopen() call (it's safe; it's already the default behaviour).
Also, you may need to install it by hand (in my experience it didn't like the prefix). Or set the PKG_CONFIG_PATH environment variable so it points to the pkgconfig directories, e.g.:
export PKG_CONFIG_PATH=/cygdrive/c/cygwin/lib/pkgconfig:/usr/local/lib/pkgconfig
Now, create a file called ladspa.h in the include directory with the contents of the LADSPA header.
Finally, configure and build rubberband, it should find everything it needs.
To build in MSYS using MinGW follow the same procedure, using the according prefix. Using Visual Studio is another alternative, but you may need to use some of the pre-built libraries (for example for libsndfile) as building Linux libraries natively in Windows may be complicated or even impossible (without hacking the source code) in VS.
Anyway, the author of rubberband provides binaries; I think you should consider using them instead of going through all of this.
Going from Linux to Win32 is mostly a tricky thing.
For each of your dependencies, download the source and:
./configure
make
sudo make install
Also, I recommend using MinGW + MSYS in place of Cygwin (as the latter produces executables that depend on its libraries). However, in your situation, use the VS approach; it will save you a lot of time.