Changing .bashrc after manually installing a tool - Linux

I am working on a cluster, and since I do not have sudo privileges there, I had to install a toolkit at a non-standard path, ~/bin/tool_kit. This path now contains the following directories: bin, include, and lib. This may be a very newbie question, but what changes do I make to my .bashrc so that I am able to use this toolkit?
For example, the $PATH variable might be augmented like:
export PATH=~/bin/tool_kit/bin:$PATH
How do I include lib and include?

bin is the only location you definitely need to do anything with:
# append to the end of PATH, unless you know you want to override like-named system binaries
PATH=$PATH:$HOME/bin/tool_kit/bin
No export is needed here, as the PATH variable is already in the environment.
If and only if your software didn't compile in an rpath pointing to the anticipated runtime library locations, you may also wish to set LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH+$LD_LIBRARY_PATH:}$HOME/bin/tool_kit/lib
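(The ${LD_LIBRARY_PATH+...} expansion inserts the old value and a colon only when the variable is already set; a naive $LD_LIBRARY_PATH:... on an unset variable would leave a leading empty entry, which the dynamic linker treats as the current directory.)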
include files are used only when compiling other software, and are not generally needed at runtime.
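If you do later compile other software against this toolkit, a minimal sketch of the variables involved (assuming gcc or clang, both of which honor CPATH and LIBRARY_PATH):
export CPATH=$HOME/bin/tool_kit/include
export LIBRARY_PATH=$HOME/bin/tool_kit/lib
Alternatively, pass the paths per build, e.g. gcc -I$HOME/bin/tool_kit/include -L$HOME/bin/tool_kit/lib app.c -o app.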

Related

Python mechanism to load site-packages relative to program/script location?

For a set of programs written in most languages (C, for instance), a script can normally run those programs without any interference between dynamic link libraries and with no special hand-holding, so long as they are all found on PATH. That is, the following will work:
#!/bin/bash
prog1
prog2
prog3
However, if these three programs are written in Python and they import conflicting package versions, then to run each one successfully it must either be installed into a virtualenv or have a separate site-packages directory referenced by PYTHONPATH. Either way, each needs a set-up and possibly a tear-down before running. That is, for virtualenv:
#!/bin/bash
source $PROG1_ROOT/bin/activate
prog1
deactivate
source $PROG2_ROOT/bin/activate
prog2
deactivate
source $PROG3_ROOT/bin/activate
prog3
deactivate
and for separate site-packages:
#!/bin/bash
export PYTHONPATH=$PROG1_ROOT/lib/python3.6/site-packages
prog1
export PYTHONPATH=$PROG2_ROOT/lib/python3.6/site-packages
prog2
export PYTHONPATH=$PROG3_ROOT/lib/python3.6/site-packages
prog3
This problem results because
import pkg_resources
(at least through Python 3.6) cannot reliably import the proper versions when multiple versions of a package share the same site-packages directory, even when it is preceded by a __requires__ listing all the version restrictions.
It occurs to me that if PYTHONPATH, or some equivalent, could be specified relative to the program instead of to $PWD, and some consistency in directory layout were observed, then it would only have to be set once. That is, if prog1 is in $PROG1_ROOT/bin and its libraries are in $PROG1_ROOT/lib/python3.6/site-packages, then setting PYTHONPATH to "../lib/python3.6/site-packages" would work not only for prog1, but also for prog2, prog3, and for as many more as are needed through progN.
However, PYTHONPATH is normally given as an absolute path, and relative paths in it are, I believe, resolved with respect to $PWD, not to the Python program (prog1). Is there some other Python path variable which has the desired property? Failing that, is there some type of file which could be dropped into $PROG1_ROOT/bin that would normally be picked up by a Python program when it starts and could direct it to use $PROG1_ROOT/lib/python3.6/site-packages? It would be OK to have either the relative or the absolute path in that file, although the former would still be preferred, because then one could move the entire PROG1_ROOT directory tree to another location in the file system without having to rewrite this special file. I really want to avoid solutions which would require modifying the programs themselves (i.e., prog1 in the example).
Thanks.
EDITED:
I wrote this:
https://sourceforge.net/projects/python-devirtualizer/
to implement some of these ideas. At this point it is Linux (or at least POSIX) specific. It slightly modifies python scripts in a package's "bin" directory by changing the first line, and it "wraps" everything in that directory with a replacement native binary which injects a custom PYTHONPATH into the true target's environment. That binary looks up its location using a function from libSDL2 and then specifies the PYTHONPATH relative to that. So far it has worked pretty well, and the "programs" in installed python packages (the "bin" directory's contents) are run based on PATH just like any other program, no futzing about with PYTHONPATH in the shell.
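In shell terms, each wrapper behaves roughly like this sketch (the .prog1.real name is hypothetical; the real tool uses a native binary rather than a script):
#!/bin/bash
# Resolve the wrapper's own directory, following symlinks.
HERE=$(cd "$(dirname "$(readlink -f "$0")")" && pwd)
# Point Python at the site-packages tree relative to that location.
export PYTHONPATH="$HERE/../lib/python3.6/site-packages"
exec "$HERE/.prog1.real" "$@"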
Making search paths relative to the executable is a Very Bad Idea (TM). Move the executable or libraries around, and all hell breaks loose. Some enterprising miscreant might notice the path settings and place a script just right to get their own doctored libraries (or just flawed old versions) to be used. And so on.
Clean up the misbehaving scripts. Chances are that by using old versions they are vulnerable to by now fixed security boo-boos, or other misbehaviours. Or find a way to load the stuff in the script itself.

Add the software's bin to PATH, or just add a soft link to the executable file in bin, when installing software on Linux?

I'm not root on the Linux server, so I chose to install software in $HOME/local/bin. I have already added the $HOME/local/bin directory to the PATH environment variable in my .bashrc.
Some software installs this way:
tar xvzf ncurses-5.9.tar.gz
cd ncurses-5.9
./configure --prefix=$HOME/local
make
make install
cd ..
So it installs directly into my $HOME/local/bin.
But some software, such as sbt-1.2.1.zip (based on Java), after download and decompression yields just a folder sbt containing three folders: bin, conf, and lib. Its bin contains one executable file named sbt, along with java9-rt-export.jar, sbt-launch-lib.bash, sbt-launch.jar, and sbt.bat.
Here I wonder:
Should I just soft-link this executable sbt file under my $HOME/local/bin, then source my .bashrc?
Or, after decompression, add one line to my .bashrc: export PATH="downloadpath/sbt/bin:$PATH"?
Since downloadpath/sbt/bin holds just one executable, I'm not sure it is right to add the whole bin folder to PATH. When a software's bin folder contains executable files (one or many), I think it is more convenient to just add its bin in .bashrc, but even so, I'm not sure that's right.
I'm not familiar with installing software; I usually know the how but not the why. I have shown two ways to install here (there are more not shown). Is the executable file always placed in bin or src? Some software has no bin, only src, with no executable files in it...
Slurm clusters can also use modules to install software, and conda is yet another way, but I want to confirm that the two traditional ways I mentioned can still be used alongside Slurm or conda.
Any suggestion, even a reminder about a single aspect, would be appreciated!
For precompiled software or, in general, software that does not offer configure scripts or (C)Make files, it is often better to leave it in its target directory and adapt the *PATH environment variables (PATH for binaries, but also LD_LIBRARY_PATH and LIBRARY_PATH for libraries, CPATH for include files, and MANPATH for the man pages).
The reason is that the software might be configured to read files with hardcoded paths, relative to the position of the executable, such as libraries, etc.
In your case, you might also need to set the CLASSPATH env variable to point at the directory with the jar files, as in the sketch below.
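A minimal .bashrc sketch, assuming the archive was unpacked at $HOME/local/sbt (that location is an assumption):
export PATH="$HOME/local/sbt/bin:$PATH"
export CLASSPATH="$HOME/local/sbt/lib${CLASSPATH:+:$CLASSPATH}"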
To ease software installation, you can use tools such as easybuild that can help, and even create user modules just like the system module installed by the system administrators.
There is something wrong, in my opinion, with your setup. If you don't have a root account on your server, isn't it better to test what you have to test in a safer environment, for example a VM/container on your development machine?
However, in your situation it may be better to start sbt from a separate bash script (see the sketch below) rather than modifying your .bashrc.
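A minimal sketch of such a launcher, where downloadpath stands for wherever you unpacked the archive; save it somewhere already on PATH, e.g. $HOME/local/bin/sbt, and chmod +x it:
#!/bin/bash
# Forward all arguments to the real sbt without touching .bashrc.
exec "downloadpath/sbt/bin/sbt" "$@"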

Finding my Linux shared libraries at runtime

I'm porting an SDK written in C++ from Windows to Linux. There are other binaries, but at its simplest, our SDK is this:
core.dll - implicitly loaded DLL ("libcore.so" shared library on Linux)
tests.exe - an app use to test the DLL (uses google test)
All of my binaries must live in one folder somewhere that apps can find. I've achieved that on Windows. I wanted to achieve the same thing on Linux. I'm failing miserably.
To illustrate, here's the basic project tree. We use CMake. After I build, I've got:
mysdk
|---CMakeLists.txt (has add_subdirectory() statements for "tests" and "core")
|---/tests (source code + CMakeLists.txt)
|---/core (source code + CMakeLists.txt)
|---/build (all build output, CMake output, etc.)
|   |---tests (build output)
|   |---core (build output)
The goal is to "flatten" the "build" tree and put all the binary outputs of tests, core, etc into one folder.
I tried adding CMake's install command to each of my CMakeLists.txt files (e.g. install(TARGETS core DESTINATION bin)). I then executed sudo make install after my normal build. This put all my binaries in /usr/local/bin with no errors. But when I ran tests from there, it failed to find libcore.so, even though it was sitting right there in the same folder:
tests: error while loading shared libraries: libcore.so: Cannot open shared object file: No such file or directory
I read up on the LD_LIBRARY_PATH environment variable and so tried adding that folder (/usr/local/bin) into it and running. I can see I've properly altered LD_LIBRARY_PATH but it still doesn't work. "tests" still can't find libcore.so. I even tried changing the PATH environment variable as well. Same result.
In frustration, I tried brute-force copying the output binaries to a temporary subfolder (of /mysdk/build) and running tests from there. To my surprise it ran.
Then I realized why: Instead of loading the local copy of libcore.so it had loaded the one from the build output folder (as if the full path were "baked in" to the app at build time). Subsequently deleting that build-output copy of libcore.so made "tests" fail altogether as before, instead of loading the local copy. So maybe the path really was baked in.
I'm at a loss. I've read the CMake tutorial and reference. It makes this sound so easy. Aside from the obvious (What am I doing wrong?) I would appreciate if anyone could answer any of the following questions:
What is the correct way to control where my app looks for my shared libraries?
Is there a relationship between my project build structure and how my binaries must then appear when installed?
Am I even close to the right way of doing this?
Is it possible I've somehow inadvertently "baked" (into my app) full paths to my shared libraries? Is that a thing? I use CMake variables throughout my CMakeLists files.
You can run ldd file to print the shared object dependencies of file. It will tell you where its dependencies are being read from.
You can export the environment variable LD_LIBRARY_PATH with the paths you want the dynamic linker to search. If a dependency is not found, try adding the path where that dependency is located to LD_LIBRARY_PATH, then run ldd again (make sure you export the variable).
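An illustrative session (the load addresses are made up):
$ ldd ./tests
        libcore.so => not found
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)
$ export LD_LIBRARY_PATH=/usr/local/bin:$LD_LIBRARY_PATH
$ ldd ./tests
        libcore.so => /usr/local/bin/libcore.so (0x00007f...)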
Also, make sure the dependencies have the right permissions.
Updating LD_LIBRARY_PATH is an option. Another option is using RPATH. Please check the example.
https://github.com/mustafagonul/cmake-examples/blob/master/005-executable-with-shared-library/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
# Project
project(005-executable-with-shared-library)
# Directories
set(example_BIN_DIR bin)
set(example_INC_DIR include)
set(example_LIB_DIR lib)
set(example_SRC_DIR src)
# Library files
set(library_SOURCES ${example_SRC_DIR}/library.cpp)
set(library_HEADERS ${example_INC_DIR}/library.h)
set(executable_SOURCES ${example_SRC_DIR}/main.cpp)
# Setting RPATH
# See https://cmake.org/Wiki/CMake_RPATH_handling
set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/${example_LIB_DIR})
# Add library to project
add_library(library SHARED ${library_SOURCES})
# Include directories
target_include_directories(library PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/${example_INC_DIR})
# Add executable to project
add_executable(executable ${executable_SOURCES})
# Linking
target_link_libraries(executable PRIVATE library)
# Install
install(TARGETS executable DESTINATION ${example_BIN_DIR})
install(TARGETS library DESTINATION ${example_LIB_DIR})
install(FILES ${library_HEADERS} DESTINATION ${example_INC_DIR})
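An illustrative way to build and install with this layout (the prefix is an assumption):
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=$HOME/sdk ..
make
make install
Because CMAKE_INSTALL_RPATH was set to ${CMAKE_INSTALL_PREFIX}/lib, the installed executable then looks in $HOME/sdk/lib for the shared library, with no LD_LIBRARY_PATH needed.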

How can I use a private installation of wxWidgets

I have two installations of wxWidgets, one in /usr/... and one private.
I don't have permissions to change the central directory.
My question is: how can I make my private installation the active one?
If I only add it to PATH with setenv, it doesn't work correctly, since I'm missing the libs.
The usual way to do something like that is to add the folder with your "private" libs to your LD_LIBRARY_PATH environment variable (assuming you're using bash and your shared libraries are in /home/myLogin/wxWidgets/lib):
export LD_LIBRARY_PATH="/home/myLogin/wxWidgets/lib:$LD_LIBRARY_PATH"
From The Linux Documentation Project - 3.3.1. LD_LIBRARY_PATH
You can temporarily substitute a different library for this particular execution. In Linux, the environment variable LD_LIBRARY_PATH is a colon-separated set of directories where libraries should be searched for first, before the standard set of directories
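If you also build against the private copy, putting its bin directory first on PATH makes its wx-config the one that configure scripts find; a sketch, assuming the same install location as above:
export PATH="/home/myLogin/wxWidgets/bin:$PATH"
export LD_LIBRARY_PATH="/home/myLogin/wxWidgets/lib:$LD_LIBRARY_PATH"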

Can I avoid exporting LD_LIBRARY_PATH by hardcoding library paths in the executable?

I'm zipping a pre-built (no source/object files) binary application for distribution. The binary application requires a couple of libraries not included by default. The only way I seem to be able to get the application to start on the end-user's machine is by including a run.sh that sets the library path to the current directory:
export LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH
./MyApp.out
However, I'd really like to allow the user to just unzip the zip and doubleclick MyApp.out (without the shell script). Can I edit MyApp.out to search the current directory for the library? I've done something similar on OSX using install_name_tool, but that tool isn't available here.
You want to set the rpath. See this answer. So link using
gcc yourobjects*.o -L/some/lib/dir/ -lsome -Wl,-rpath,.
But you might even want to use -Wl,-rpath,$PWD or perhaps -Wl,-rpath,'$ORIGIN' (see the sketch below). See this.
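A sketch with $ORIGIN, which makes the executable search its own directory for libraries at runtime (object and library names are hypothetical; the single quotes keep the shell from expanding $ORIGIN):
gcc main.o -L. -lmylib -Wl,-rpath,'$ORIGIN' -o MyApp.out
You can then verify the entry with readelf -d MyApp.out | grep -i rpath.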
You could also (and this should work for a pre-built executable) configure your /etc/ld.so.conf by adding a line there with an absolute path (of the directory containing the lib), then running ldconfig -v ... See ldconfig(8)
I would suggest adding /usr/local/lib into /etc/ld.so.conf and making a symlink from /usr/local/lib/libfoo.so to e.g. $HOME/libfoo.so etc... (then run ldconfig ...). I don't think adding a user specific directory to /etc/ld.so.conf is reasonable ...
PS. What you really want is to package your application (e.g. as a *.deb package for Debian or Ubuntu, or an *.rpm for Fedora or Red Hat). Package management systems handle dependencies!
