Why isn't spack adding external packages to environment filesystem views?

Suppose I am in my_env:
spack env activate my_env
Also suppose that I have external packages, e.g. a system-installed openmpi.
And then I generate filesystem views:
spack env view regenerate
Then I get this warning:
Skipping external package: openmpi
And indeed the binaries of openmpi are not symlinked into my filesystem view. My question is: is there a particular reason it is done that way? And is there a way to tell spack that it should also put external packages in the filesystem view?
[Note that, in contrast to the filesystem view, external packages are included when generating modules through spack env loads.]
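For reference, the external is declared roughly like this in packages.yaml (or the environment's spack.yaml); the version and prefix here are invented:
packages:
  openmpi:
    externals:
    - spec: openmpi@4.1.5
      prefix: /usr
    buildable: false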

The external packages are skipped because three different types of views are supported: symlink, hardlink, and copy.
When Spack copies binaries it seeks to make them relocatable, but with system/external binaries it is a bit of the Wild West. So externals are skipped to keep the behavior consistent across view types, and to keep potential problem binaries that Spack did not create out of copy and hardlink views. This could be changed in the future; if it is a big enough concern, please file an issue on GitHub.
As you've noticed, spack env loads picks up externals automatically. That's because the generated loads file just loads modules, and Spack creates a module for every package in the environment, including externals.
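A quick way to see the contrast (assuming modules are set up; spack env loads writes the loads file into the environment directory):
spack env activate my_env
spack env view regenerate   # warns: Skipping external package: openmpi
spack env loads             # writes one "module load" line per package
source loads                # externals such as openmpi get loaded as well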

Related

Finding my Linux shared libraries at runtime

I'm porting an SDK written in C++ from Windows to Linux. There are other binaries, but at its simplest, our SDK is this:
core.dll - implicitly loaded DLL ("libcore.so" shared library on Linux)
tests.exe - an app used to test the DLL (uses Google Test)
All of my binaries must live in one folder somewhere that apps can find. I've achieved that on Windows. I wanted to achieve the same thing on Linux. I'm failing miserably.
To illustrate, here's the basic project tree. We use CMake. After I build, I've got:
mysdk
|---CMakeLists.txt (has add_subdirectory() statements for "tests" and "core")
|---/tests (source code + CMakeLists.txt)
|---/core (source code + CMakeLists.txt)
|---/build (all build output, CMake output, etc.)
|   |---tests (build output)
|   |---core (build output)
The goal is to "flatten" the "build" tree and put all the binary outputs of tests, core, etc into one folder.
I tried adding CMake's "install" command to each of my CMakeLists.txt files (e.g. install(TARGETS core DESTINATION bin)). I then executed sudo make install after my normal build. This put all my binaries in /usr/local/bin with no errors. But when I ran tests from there, it failed to find libcore.so, even though it was sitting right there in the same folder:
tests: error while loading shared libraries: libcore.so: Cannot open shared object file: No such file or directory
I read up on the LD_LIBRARY_PATH environment variable and tried adding that folder (/usr/local/bin) to it. I can see I've properly altered LD_LIBRARY_PATH, but it still doesn't work: tests still can't find libcore.so. I even tried changing the PATH environment variable as well. Same result.
In frustration, I tried brute-force copying the output binaries to a temporary subfolder (of /mysdk/build) and running tests from there. To my surprise it ran.
Then I realized why: instead of loading the local copy of libcore.so, it had loaded the one from the build output folder (as if the full path were "baked in" to the app at build time). Subsequently deleting that build-output copy of libcore.so made tests fail altogether as before, instead of loading the local copy. So maybe the path really was baked in.
I'm at a loss. I've read the CMake tutorial and reference. It makes this sound so easy. Aside from the obvious (What am I doing wrong?) I would appreciate if anyone could answer any of the following questions:
What is the correct way to control where my app looks for my shared libraries?
Is there a relationship between my project build structure and how my binaries must then appear when installed?
Am I even close to the right way of doing this?
Is it possible I've somehow inadvertently "baked" full paths to my shared libraries into my app? Is that a thing? I use CMake variables throughout my CMakeLists files.
You can run ldd file to print the shared object dependencies of file. It will tell you where its dependencies are being read from.
You can export the LD_LIBRARY_PATH environment variable with the paths you want the dynamic linker to search. If a dependency is not found, try adding the path where that dependency is located to LD_LIBRARY_PATH and run ldd again (make sure you export the variable).
Also, make sure the dependencies have the right permissions.
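For example, with the layout from the question (output abbreviated and illustrative):
$ ldd ./tests
        libcore.so => not found
$ export LD_LIBRARY_PATH=/usr/local/bin:$LD_LIBRARY_PATH
$ ldd ./tests
        libcore.so => /usr/local/bin/libcore.so (0x00007f1a2b300000)
One caveat: if the executable carries a DT_RPATH entry (visible with readelf -d ./tests), that path is searched before LD_LIBRARY_PATH, which would explain the "baked in" behavior described in the question.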
Updating LD_LIBRARY_PATH is one option. Another is using an RPATH; see the example below.
https://github.com/mustafagonul/cmake-examples/blob/master/005-executable-with-shared-library/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
# Project
project(005-executable-with-shared-library)
# Directories
set(example_BIN_DIR bin)
set(example_INC_DIR include)
set(example_LIB_DIR lib)
set(example_SRC_DIR src)
# Library files
set(library_SOURCES ${example_SRC_DIR}/library.cpp)
set(library_HEADERS ${example_INC_DIR}/library.h)
set(executable_SOURCES ${example_SRC_DIR}/main.cpp)
# Setting RPATH
# See https://cmake.org/Wiki/CMake_RPATH_handling
set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/${example_LIB_DIR})
# Add library to project
add_library(library SHARED ${library_SOURCES})
# Include directories
target_include_directories(library PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/${example_INC_DIR})
# Add executable to project
add_executable(executable ${executable_SOURCES})
# Linking
target_link_libraries(executable PRIVATE library)
# Install
install(TARGETS executable DESTINATION ${example_BIN_DIR})
install(TARGETS library DESTINATION ${example_LIB_DIR})
install(FILES ${library_HEADERS} DESTINATION ${example_INC_DIR})
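Configured and installed the usual way (prefix shown only as an example), the installed executable then resolves the library through the baked-in install RPATH:
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr/local ..
make
sudo make install
/usr/local/bin/executable   # finds liblibrary.so in /usr/local/lib via RPATH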

rpmbuild not including symlink shared object as a Provides

I am building an RPM which essentially just packages a set of vendor-provided .so binaries so that they are available for our internal application, which is also installed via RPM.
One of the libraries comes with multiple versions - libexample_base.so and libexample_debug.so (among others). I am currently trying to package these so that they are all included in the RPM (so that developers can switch between them if needed), while picking libexample_base.so as the default version by creating a symlink during %install, which then gets packaged as a file in the RPM.
%install
[... Copy the files from the tarball to the buildroot ...]
pushd %{buildroot}%{_libdir}
ln -sf libexample_base.so libexample.so
popd
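The symlink is then listed in %files alongside the real libraries (abridged, names as above):
%files
%{_libdir}/libexample_base.so
%{_libdir}/libexample_debug.so
%{_libdir}/libexample.so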
This works great... except for one problem. I'm using automatic dependency generation, and while it provides all of the shared objects for which it has actual files, it does not provide libexample.so, despite the symlink being in %files and installing properly. Unfortunately, the vendor libraries do not provide SONAME entries, and as they are binary blobs I can't readily add them, so RPM is depending on the actual file names. All of the downstream RPMs require libexample.so, and since this RPM does not list it as a Provides, they refuse to install due to missing dependencies, even though they do actually work (ldconfig can find libexample.so without issue).
Any ideas on how to prompt rpmbuild to treat the symlink as a Provides?
After some further research, I have determined that what I am trying to do is not possible, as well as why. The core problem comes down to rpmbuild's behavior, which was built to handle correctly generated shared objects with SONAME entries. It explicitly does not list symlinks to shared objects that end in .so as provides because with normal behavior, those point to a versioned shared object (.so.#.#), and your application is supposed to depend on those versioned objects - the symlink is meant to be included in the devel package just to give your linker a way to find the latest one.
What I'm running into is the fallback case used when things aren't done correctly: for both RPM and GCC, when no SONAME is present, the file name is used instead. When I link with GCC, the file name is libexample.so (the symlink to the real one) and it has no SONAME, so the executable records a dependency on libexample.so; rpmbuild sees this and sets that as a Requires for the application RPM. However, since rpmbuild explicitly excludes that name from the library RPM's Provides because it looks like a devel symlink, there is no way to reconcile the two RPMs.
The solution? For now, I'm keeping my workaround of making a real copy of the file - when there is a physical file with that name, the provides generator picks it up. The correct fix is to repair the shared objects' SONAMEs, although I'm going to need to go to the upstream vendor to get that done.
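A sketch of both routes; the exact provide string below is a guess and should be checked against rpm -q --provides on the built package:
%install
pushd %{buildroot}%{_libdir}
# workaround: a real copy instead of a symlink, so the provides generator sees a file
cp -p libexample_base.so libexample.so
popd

# alternative: declare the provide by hand in the spec preamble
Provides: libexample.so()(64bit)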

Standard linux `make install` of an application, linking to correct libs

I am working on an application that consists of a number of binaries, scripts, and libs. So far during development, I've built and run inside my repository:
myapp:
bin/
include/
lib/
scripts/
src/
Makefile
src/ contains the code for several modules, either libs or binaries. Each has its own makefile.
Running make from myapp/ sets up environment variables for target install directories, then recursively runs make install (which uses the environment variables) for each submodule in src/.
This installs the binaries, includes, and libs in the relevant subdirectory of myapp/, since that is how the environment variables are setup.
Now I am reaching a time where I want to install system-wide, presumably in /usr/local. I am also interested in keeping the ability to build and install locally in myapp/ while developing. It is convenient to be able to run the binaries in myapp/bin/ without having to install them system wide first.
My first plan was to keep the default make target creating the installables (binaries, libs, includes, scripts) under myapp, then have a new install target in myapp/Makefile which would copy these installables in /usr/local/ (requiring sudo).
My problem is that under development, the binaries need to know where the libs are. I have been linking to the libs in myapp/lib/ with -Wl,-rpath=/path/to/myapp/lib. However, this is not appropriate for system-installed binaries, which should refer to /usr/local/lib/ instead.
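Concretely, the link step in each submodule looks something like this (target and library names invented; TOP is one of the environment variables pointing back at myapp/):
LDFLAGS += -L$(TOP)/lib -Wl,-rpath=$(TOP)/lib
mytool: main.o
	$(CC) main.o $(LDFLAGS) -lmylib -o mytool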
I can see several solutions, but none very good:
make install rebuilds instead of just copying, with the environment-variable target directories set to /usr/local instead of myapp/. Drawback: I think this will require sudo for the whole rebuild process, instead of only for the install.
remove linking with -Wl,-rpath, and instead set LD_LIBRARY_PATH to include myapp/lib while in development, but not otherwise. Apparently this is considered harmful. I could easily forget to unset it when I want to run system wide, and the local libs would wrongly be used.
remove linking with -Wl,-rpath, and require to install the libs system wide before building the binaries locally in myapp/. This is cumbersome, I would like to keep the ability to clone my repo and build locally in one step.
Others have probably had this very problem, and I would like to know if there is a standard solution.
This was interesting, but does not deal with my issue of linking libs.

How to set cabal extra dirs for all packages in a sandbox

I'm currently working on a Haskell project that uses lots of native code. This means that include files and libraries have to be accessible to cabal. I'm doing that with the --extra-lib-dirs and --extra-include-dirs command-line flags.
I'm also using the cabal sandboxes feature to avoid global dependency hell.
The trouble is that cabal often needs to reinstall some of my packages and thus rebuilds them, which requires the native include files and libraries. So I have to specify --extra-lib-dirs and --extra-include-dirs on the command line when building any of my packages at all, even those that don't use native code, which is very annoying.
I know I can use extra-lib-dirs and extra-include-dirs in .cabal files, but those don't allow relative paths, and I prefer not to commit files with absolute paths from my computer to a centralized repository.
So I wonder, is there any way to add directories to extra-lib-dirs or extra-include-dirs for all the packages in a sandbox? Or maybe globally for a computer?
You can simply create a local cabal.config in the directory where your sandbox is located. (Don't modify cabal.sandbox.config, as that file is auto-generated.)
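For example, a cabal.config next to cabal.sandbox.config (paths are placeholders; the field names are the same ones used in the global ~/.cabal/config):
extra-lib-dirs: /home/user/native/lib
extra-include-dirs: /home/user/native/include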

How to manage development and installed versions of a shared library?

In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test, and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course, the executables in the development directory need to link against the latest version of the library, while the other programs want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, and it isn't completely correct either, since you then can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so". Usually they also include their version, such as libasdf.so.42.1. The .so file for development is typically a symlink to the fully-versioned file name. The linker will look for the .so file and resolve it to the full file name, and the loader will then load the fully-versioned library instead.
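A sketch of the usual arrangement, using the names from the question (version numbers invented):
# build the library with an embedded soname
gcc -shared -fPIC -Wl,-soname,libasdf.so.42 -o libasdf.so.42.1 asdf.c
# loader-visible name: what the soname resolves to at runtime
ln -s libasdf.so.42.1 libasdf.so.42
# linker-visible dev name: what -lasdf resolves to at link time
ln -s libasdf.so.42 libasdf.so
# an executable linked with -lasdf records the soname, not the dev name
gcc use_asdf.c -L. -lasdf -o use_asdf
readelf -d use_asdf | grep NEEDED   # shows libasdf.so.42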
