Standard Linux `make install` of an application, linking to the correct libs

I am working on an application that consists of a number of binaries, scripts, and libs. So far during development, I've built and run everything inside my repository:
myapp:
  bin/
  include/
  lib/
  scripts/
  src/
  Makefile
src/ contains the code for several modules, either libs or binaries. Each has its own makefile.
Running make from myapp/ sets up environment variables for the target install directories, then recursively runs make install (which uses those environment variables) for each submodule in src/.
This installs the binaries, includes, and libs into the relevant subdirectories of myapp/, since that is how the environment variables are set up.
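A minimal sketch of what the top-level Makefile does (the variable names here are illustrative, not my exact ones):

PREFIX ?= $(CURDIR)              # defaults to the repository root, i.e. myapp/
export BINDIR = $(PREFIX)/bin
export LIBDIR = $(PREFIX)/lib
export INCDIR = $(PREFIX)/include

all:
	for d in src/*/; do $(MAKE) -C $$d install || exit 1; done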
Now I am reaching the point where I want to install system-wide, presumably in /usr/local. I also want to keep the ability to build and install locally in myapp/ while developing; it is convenient to be able to run the binaries in myapp/bin/ without having to install them system-wide first.
My first plan was to keep the default make target creating the installables (binaries, libs, includes, scripts) under myapp/, then add a new install target to myapp/Makefile which would copy these installables into /usr/local/ (requiring sudo).
My problem is that during development, the binaries need to know where the libs are. I have been linking to the libs in myapp/lib/ with -Wl,-rpath=/path/to/myapp/lib. However, this is not appropriate for system-installed binaries, which should refer to /usr/local/lib/ instead.
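Concretely, a link line in one of the submodule makefiles currently looks something like this (names are illustrative, matching the sketch above):

$(CC) -o $(BINDIR)/myprog main.o -L$(LIBDIR) -lmymodule -Wl,-rpath=$(LIBDIR)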
I can see several solutions, none of them very good:
1. Have make install rebuild with the environment-variable target directories set to /usr/local instead of myapp/, rather than just copying. Drawback: I think this will require sudo for the whole rebuild process instead of only for the install (see the invocation sketch after this list).
2. Remove the -Wl,-rpath linking, and instead set LD_LIBRARY_PATH to include myapp/lib during development, but not otherwise. Apparently this is considered harmful. I could easily forget to unset it when I want to run the system-wide install, and the local libs would wrongly be used.
3. Remove the -Wl,-rpath linking, and require the libs to be installed system-wide before building the binaries locally in myapp/. This is cumbersome; I would like to keep the ability to clone my repo and build locally in one step.
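For reference, with the Makefile sketch above, option 1 would amount to something like:

make                            # development build, installs under myapp/
sudo make PREFIX=/usr/local     # option 1: rebuild + install system-wide, all under sudo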
Others have probably had this very problem, and I would like to know if there is a standard solution.
This was interesting, but does not deal with my issue of linking libs.

Related

Create a ready-to-use install package on Linux with CMake or gcc, including shared dependencies

I have completed a small project that uses several libraries. The CMakeLists.txt looks like this:
cmake_minimum_required(VERSION 3.5)
project(tf_ct_log C)
set(CMAKE_C_STANDARD 99)
include_directories(include /usr/local/include/hiredis /usr/include/openssl)
link_directories(/usr/lib/x86_64-linux-gnu)
set(HDR include/ct_logger.h)
add_executable(tf_ct_logger src/main.c src/ct_logger.c ${HDR})
find_package(OpenSSL REQUIRED)
find_package(Threads REQUIRED)
find_package(PostgreSQL REQUIRED)
find_library(JANSSON_LIBRARY jansson)
target_link_libraries(tf_ct_logger OpenSSL::SSL ${JANSSON_LIBRARY} Threads::Threads ${PostgreSQL_LIBRARIES})
I would like to be able to build a package that can be installed on another machine, without downloading any dependencies. With ldd, I've got all the dependencies of the application and copied those files (libxyz.so...) into a subdirectory deps in my project. How can I create a package using those dependencies, so that the end user can just use the object files of my project along with the dependency libraries to create the executable?
It gets really hairy really quickly when you need to create a native package for multiple flavors of installers (Debian, RH, Arch, etc.), especially if customization is involved.
If you just need a clean, reproducible way to get it onto a box and run it, I would strongly suggest looking at packaging it as a Docker container.
You start from some lightweight Linux distro container (Alpine is the latest trend), derive an image with the C and C++ runtimes and anything else you depend on, and call this "my prod container". From that you derive one with a C++ compiler and debugger installed and call it "my dev container".
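A bare-bones sketch of that layering (the base image, package names, and image tag are assumptions to adapt):

# Dockerfile.prod -- "my prod container": runtimes only, plus your binary
FROM alpine:3.18
RUN apk add --no-cache libstdc++ libgcc
COPY tf_ct_logger /usr/local/bin/

# Dockerfile.dev -- "my dev container": derived from the prod image,
# adds the toolchain and debugger
FROM my-prod-container
RUN apk add --no-cache build-base gdb cmake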
We actually wrote a little memo a while back, while making our open source hobby usable for others.
You will probably still need to clean up your CMake file to an extent that the "install" target works (mine is here).
I may not have expressed myself correctly. I just wanted to compile and build the executables from my project, then find a way to copy them to another machine without having to install all the dependencies before running the app.
The solution I found (with the help of a more experienced developer) was as follows:
1- Get all dependencies using ldd
2- Copy the dependencies into a directory dependencies
3- On the target environment, copy the contents of dependencies into /usr/local/lib/myapp/
4- On the target environment, go to /etc/ld.so.conf.d/
5- Create the file myapp.conf with one line in it: /usr/local/lib/myapp
6- Run ldconfig
Then the executable I created on my development machine runs there smoothly!
Of course, all the steps described above should be put in a script for automation.
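For example, steps 1 and 2 can be scripted along these lines (the binary name is a placeholder):

#!/bin/sh
# steps 1-2: collect the shared objects ldd reports for the executable
APP=./tf_ct_logger
mkdir -p dependencies
ldd "$APP" | awk '/=> \//{print $3}' | xargs -I{} cp -v {} dependencies/
# steps 3-6, run as root on the target machine:
#   mkdir -p /usr/local/lib/myapp
#   cp dependencies/* /usr/local/lib/myapp/
#   echo "/usr/local/lib/myapp" > /etc/ld.so.conf.d/myapp.conf
#   ldconfig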

Finding my Linux shared libraries at runtime

I'm porting an SDK written in C++ from Windows to Linux. There are other binaries, but at its simplest, our SDK is this:
core.dll - implicitly loaded DLL ("libcore.so" shared library on Linux)
tests.exe - an app used to test the DLL (uses Google Test)
All of my binaries must live in one folder somewhere that apps can find. I've achieved that on Windows. I wanted to achieve the same thing on Linux. I'm failing miserably.
To illustrate, here's the basic project tree. We use CMake. After I build, I've got:
mysdk
|---CMakeLists.txt (has add_subdirectory() statements for "tests" and "core")
|---/tests (source code + CMakeLists.txt)
|---/core (source code + CMakeLists.txt)
|---/build (all build output, CMake output, etc.)
|   |---tests (build output)
|   |---core (build output)
The goal is to "flatten" the "build" tree and put all the binary outputs of tests, core, etc into one folder.
I tried adding CMake's "install" command to each of my CMakeLists.txt files (e.g. install(TARGETS core DESTINATION bin)). I then executed sudo make install after my normal build. This put all my binaries in /usr/local/bin with no errors. But when I ran tests from there, it failed to find libcore.so, even though it was sitting right there in the same folder:
tests: error while loading shared libraries: libcore.so: Cannot open shared object file: No such file or directory
I read up on the LD_LIBRARY_PATH environment variable and so tried adding that folder (/usr/local/bin) to it and running. I can see I've properly altered LD_LIBRARY_PATH, but it still doesn't work: "tests" still can't find libcore.so. I even tried changing the PATH environment variable as well. Same result.
In frustration, I tried brute-force copying the output binaries to a temporary subfolder (of /mysdk/build) and running tests from there. To my surprise it ran.
Then I realized why: Instead of loading the local copy of libcore.so it had loaded the one from the build output folder (as if the full path were "baked in" to the app at build time). Subsequently deleting that build-output copy of libcore.so made "tests" fail altogether as before, instead of loading the local copy. So maybe the path really was baked in.
I'm at a loss. I've read the CMake tutorial and reference. It makes this sound so easy. Aside from the obvious (What am I doing wrong?) I would appreciate if anyone could answer any of the following questions:
What is the correct way to control where my app looks for my shared libraries?
Is there a relationship between my project build structure and how my binaries must then appear when installed?
Am I even close to the right way of doing this?
Is it possible I've somehow inadvertently "baked" (into my app) full paths to my shared libraries? Is that a thing? I use only CMake variables in my CMakeLists files.
You can run ldd file to print the shared object dependencies for file. It will tell you where its dependencies are being read from.
You can export the environment variable LD_LIBRARY_PATH with the paths where you want the dynamic linker to look. If a dependency is not found, try adding the path where that dependency is located to LD_LIBRARY_PATH and then run ldd again (make sure you export the variable).
Also, make sure the dependencies have the right permissions.
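For instance (the paths are examples):

$ ldd ./tests              # shows where each dependency currently resolves
$ export LD_LIBRARY_PATH=/usr/local/bin:$LD_LIBRARY_PATH
$ ldd ./tests              # libcore.so should now resolve to /usr/local/bin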
Updating LD_LIBRARY_PATH is one option. Another option is using RPATH. Please check the example:
https://github.com/mustafagonul/cmake-examples/blob/master/005-executable-with-shared-library/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
# Project
project(005-executable-with-shared-library)
# Directories
set(example_BIN_DIR bin)
set(example_INC_DIR include)
set(example_LIB_DIR lib)
set(example_SRC_DIR src)
# Library files
set(library_SOURCES ${example_SRC_DIR}/library.cpp)
set(library_HEADERS ${example_INC_DIR}/library.h)
set(executable_SOURCES ${example_SRC_DIR}/main.cpp)
# Setting RPATH
# See https://cmake.org/Wiki/CMake_RPATH_handling
set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/${example_LIB_DIR})
# Add library to project
add_library(library SHARED ${library_SOURCES})
# Include directories
target_include_directories(library PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/${example_INC_DIR})
# Add executable to project
add_executable(executable ${executable_SOURCES})
# Linking
target_link_libraries(executable PRIVATE library)
# Install
install(TARGETS executable DESTINATION ${example_BIN_DIR})
install(TARGETS library DESTINATION ${example_LIB_DIR})
install(FILES ${library_HEADERS} DESTINATION ${example_INC_DIR})
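A typical way to drive this example would be something like the following (the prefix is just an illustration):

$ cmake -DCMAKE_INSTALL_PREFIX=/usr/local ..
$ make
$ sudo make install
# the installed executable finds the library through the RPATH set above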

How to make CMake globally available

I just installed CMake from source (wget http://www.cmake.org/files/v2.8/cmake-2.8.3.tar.gz) in a new folder on a Linux server. The compilation worked, but the cmake command is not recognized from other paths. Should I copy the entire contents of the cmake-2.8.3 folder to /usr/local/bin? Or is it just the contents of the bin folder that need to be copied?
On Linux and other Unix-based systems, a common arrangement is to install packages to /opt and add relevant entries to the PATH environment variable to make them available. This is intended for packages not provided by the native package manager or distribution. By choosing an appropriate directory structure, this can be done in a way which also allows different versions to be installed simultaneously and the user can pick which one they want by adding the relevant directory to the PATH.
For the specific case of CMake asked about in the question, you can use a directory structure like /opt/cmake/<version> and then add the relevant /opt/cmake/<version>/bin directory to your PATH (e.g. /opt/cmake/3.8.2/bin for the 3.8.2 CMake release). You can even just download the official pre-built CMake tarballs, unpack them and move the top level directory into the /opt/cmake area as the particular version you downloaded. I've used this successfully on Linux, MacOS and Solaris, as I'm sure have many others.
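Concretely, for the 3.8.2 example (the exact tarball name depends on the release you download):

$ wget https://cmake.org/files/v3.8/cmake-3.8.2-Linux-x86_64.tar.gz
$ tar xf cmake-3.8.2-Linux-x86_64.tar.gz
$ sudo mkdir -p /opt/cmake
$ sudo mv cmake-3.8.2-Linux-x86_64 /opt/cmake/3.8.2
$ export PATH=/opt/cmake/3.8.2/bin:$PATH     # e.g. in your shell profile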
Note that once you've run CMake on a particular source tree, the cmake executable doesn't need to be on the PATH any more. If cmake needs to be re-run, the build will do so itself and it records the full path to the cmake executable in its own cache, so the PATH isn't even consulted (this is essential in ensuring the same version of CMake continues to be used for all builds regardless of the PATH, since PATH can change between login sessions, etc.). You would only need cmake on your PATH if you intend to invoke cmake manually or for the first time you run it on a source tree, but in both of these cases you can always just use the full path to the cmake executable if you preferred.
I should also add that the entire set of files provided in the CMake package are required, not just the bin directory. CMake makes extensive use of files in its other directories, such as the various modules it comes with. If you are building CMake from source, you may want to build the package target so you get a relocatable tarball or similar which will contain everything that should be included when you provide a CMake package on your system.
After the build, use 'sudo make install'. This will make sure the correct libraries and binaries are copied to their proper places.
Usually this will install the binary to /usr/local/bin.
Make sure the PATH variable has this included.
sudo make install did not copy to /usr/local/bin/ for some reason, so I copied the contents of CMake's bin/ directory to /usr/local/bin and it worked:
cp -a bin/. /usr/local/bin/

Removing source code directory after compiling libraries

I have downloaded the source code for some third-party libraries from git; it contains a makefile, so I have run make to compile the libraries. During compilation, this seems to add various libraries to directories such as /usr/bin. My question is: now that I have compiled the code and the libraries have been written to other locations on my machine, can I delete the original folder with the source code in it? Or will I still need it to run these libraries?
Generally, a source code installation needs 3 steps: ./configure; make; make install.
make install copies the compiled binaries and libraries to the target system directories. After that, removing the source code folder will not affect them, since they have been deployed on your system.
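That is, the typical sequence looks like:

./configure          # often takes a --prefix to choose the install location
make
sudo make install    # copies the results out of the source tree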

How to set cabal extra dirs for all packages in a sandbox

I'm currently working on a Haskell project that uses lots of native code. This means that include files and libraries have to be accessible to cabal, which I'm doing via the --extra-lib-dirs and --extra-include-dirs command-line flags.
I'm also using cabal sandboxes feature to avoid global dependency hell.
The trouble is that cabal often needs to reinstall some of my packages and thus rebuilds them, which requires the native include files and libraries. So I have to specify --extra-lib-dirs and --extra-include-dirs on the command line when building any of my packages at all, even those that don't require native code, which is very annoying.
I know I can use extra-lib-dirs and extra-include-dirs in .cabal files, but those don't allow relative paths, and I prefer not to commit files with absolute paths from my computer to a centralized repository.
So I wonder, is there any way to add directories to extra-lib-dirs or extra-include-dirs for all the packages in a sandbox? Or maybe globally for a computer?
You can simply create a local cabal.config in the directory where your sandbox is located. (Don't modify cabal.sandbox.config, as that file is auto-generated.)
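For example, a cabal.config placed next to cabal.sandbox.config could contain (the paths are placeholders):

extra-include-dirs: /absolute/path/to/native/include
extra-lib-dirs: /absolute/path/to/native/lib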
