How to do runtime linking in make using LDFLAGS -R option, or some other way - linux

This is a question on run time linking in make, in general.
I am trying to install tmux from source on a Linux system. It has a dependency on "libevent", which I have installed in my home directory. I am not root on this system, so I can't install it in the system-wide area.
DIR=$HOME/libevent
./configure --prefix=$HOME/site/tmux/ CFLAGS="-I$DIR/include" LDFLAGS="-L$DIR/lib/"
Though the above command works, I need to have $HOME/libevent included in LD_LIBRARY_PATH all the time for tmux to work. I think there should be a better way.
I want run-time linking so that I don't have to mess with LD_LIBRARY_PATH. I read here http://www.ilkda.com/compile/Environment_Variables.htm that this can be achieved using the "-R" option.
./configure --prefix=$HOME/site/tmux/ CFLAGS="-I$DIR/include" LDFLAGS="-L$DIR/lib/" LDFLAGS="-R$DIR/lib/"
But this is not working and produces the following error:
configure: error: "libevent not found"
Can someone let me know how to do run-time linking in make while running the configure script?

LDFLAGS="-L$DIR/lib/" LDFLAGS="-R$DIR/lib/"
This sets LDFLAGS to -L$DIR/lib/ and then immediately overrides it with -R$DIR/lib/, much as x = 1; x = 2; results in x == 2.
What you want is: LDFLAGS="-L$DIR/lib/ -R$DIR/lib/"
"libevent not found"
I trusted you to read the man page, but you didn't. The -R flag means RUNPATH to the Solaris linker, but it means something else to the Linux (GNU) linker.
What you want then is:
LDFLAGS="-L$DIR/lib/ -Wl,--rpath=$DIR/lib/"

Related

shared library not found during compilation

So I have several shared libraries that I am trying to permanently install on my Ubuntu system, but I am having some difficulty with it.
I want to install the libraries and the headers in a separate folder under /usr/local/lib and /usr/local/include (for example a folder named agony), so it stays clean and removing them would just require deleting those folders. So it looks something like this:
/usr/local/lib/agony/libbtiGPIO.so
/usr/local/lib/agony/libbtiDSP.so
...
/usr/local/include/agony/GPIO.h
/usr/local/include/agony/DSP.h
...
And I added a file, /etc/ld.so.conf.d/agony.conf, which includes a line giving the path to the library folder:
$ cat /etc/ld.so.conf.d/agony.conf
/usr/local/lib/agony
and I run sudo ldconfig to update the library cache.
So to double check if the library is found I do ldconfig -p | grep bti* and
I see the following result:
$ ldconfig -p | grep bti
...
libbtiGPIO.so (libc6,x86-64) => /usr/local/lib/agony/libbtiGPIO.so
libbtiDSP.so (libc6,x86-64) => /usr/local/lib/agony/libbtiDSP.so
...
At this point I should be able to use the libraries without specifying the library path. But when I attempt to compile an application without providing the library path (-L), it fails. However, when I supply gcc with the library path, e.g.:
gcc source.c -L /usr/local/lib/agony -o output -lbtiGPIO -lbtiDSP
it works!!
I don't want to use LD_LIBRARY_PATH environment variable because this library is going to be used everywhere on the system and I don't want other compilers to worry about providing LD_LIBRARY_PATH.
What am I doing wrong here?
At this point I should be able to use the libraries without specifying the library path
Here lies the confusion.
You have built your shared library libbtiGPIO.so (just sticking with that one),
placed it in /usr/local/lib/agony, and updated the ldconfig database accordingly.
The effect of that is when you run a program that has been linked with libbtiGPIO
then the dynamic linker (/lib/x86_64-linux-gnu/ld-2.21.so, or similar) will know where to look
to load that library into the process and you will not need to tell it by setting an LD_LIBRARY_PATH in the environment.
However, you haven't done anything that affects the list of default library
search directories that are hardwired into your build of gcc, that it passes to
the linker (/usr/bin/ld) when you link a program with libbtiGPIO in the first place.
That list of default search directories is what you will find if you do a verbose
build of your program - gcc -v ... - and then pick out the value of LIBRARY_PATH
from the output, e.g.
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/5/:\
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/:\
/usr/lib/gcc/x86_64-linux-gnu/5/../../../../lib/:\
/lib/x86_64-linux-gnu/:\
/lib/../lib/:\
/usr/lib/x86_64-linux-gnu/:\
/usr/lib/../lib/:\
/usr/lib/gcc/x86_64-linux-gnu/5/../../../:\
/lib/:\
/usr/lib
/usr/local/lib/agony is not one of those and to make it one of those you
would have to build gcc from source yourself. Hence, in order to link your
program with libbtiGPIO you still need to tell ld where to find it with
-L/usr/local/lib/agony -lbtiGPIO.
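A minimal sketch of the two stages (main.c is a hypothetical program that calls into libbtiGPIO):
gcc main.c -o main -L/usr/local/lib/agony -lbtiGPIO   # link time: -L tells ld where to look
./main                                                # run time: the ldconfig entry is enough, no LD_LIBRARY_PATH needed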
You are misunderstanding how compiling and linking work.
First, libbtiGPIO.so is a shared library, not a static library; the difference matters here.
Second, changing ld.so.conf.d/*.conf and running sudo ldconfig affects the run-time loading step, not the compile-time link. In other words, if you don't add agony.conf and run sudo ldconfig, you will get an error when you run ./a.out, not when you run gcc source.c -L ....; the gcc command succeeds even if you never run ldconfig.
Finally, if you don't want to pollute the LD_LIBRARY_PATH environment variable, you have to add the -L ... options to your gcc command. What's more, if you don't want to type them into your shell over and over, you can put them in a Makefile, as sketched below.
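A minimal Makefile sketch (assuming a hypothetical main.c that uses the agony libraries); GNU make's built-in rules pick up CFLAGS, LDFLAGS and LDLIBS automatically:
CFLAGS  = -I/usr/local/include/agony
LDFLAGS = -L/usr/local/lib/agony
LDLIBS  = -lbtiGPIO -lbtiDSP

main: main.c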

Sys_error using ocamlmklib on an object file

I am compiling a theorem prover on cygwin and I get this error:
$ make
ocamlmklib -o bin/minisatinterface minisat/core/Solver.o minisat/simp/SimpSolver.o bin/Ointerface.o -lstdc++
** Fatal error: Error while reading minisat/core/Solver.o: Sys_error("Invalid argument")
Makefile:49: recipe for target `bin/libminisatinterface.a' failed
make: *** [bin/libminisatinterface.a] Error 2
It is not clear what kind of invalid argument this is.
The only documentation I have found for ocamlmklib did not help me understand the error message. Could it not read the file itself, or is there a problem with the contents? ls does list the file:
$ ls -l minisat/core/Solver.o
-rw-r--r-- 1 gbuday mkpasswd 2096 jan. 22 10.42 minisat/core/Solver.o
update: if I remove Solver.o I get a different error message:
** Fatal error: Cannot find file "minisat/core/Solver.o"
So the above error message is about the contents of the object file.
I happen to know that this specifically has to do with the build of the ATP Satallax, which can be used with Isabelle Sledgehammer, and I was asked to look at this.
I have no expertise with make files and ocaml. My success at building Satallax v2.7 came purely from following the instructions in INSTALL, with some minimal ability to guess what error messages meant, which I mainly needed when building Satallax v2.6 over a year ago.
The first important thing to do is make sure that the tar file is unzipped while working in a Cygwin terminal, rather than under Windows with something like WinZip.
Assuming that you're working in a Cygwin terminal, these are the notes which I made. After that I'll include text from the Satallax INSTALL, and a few comments.
Sources: http://www.ps.uni-saarland.de/~cebrown/satallax/
0) tar xvzf satallax-2.7.tar.gz
1) Cygwin packages (these are also needed for others like Leo-II):
zlib-devel, make, OCaml devel, gcc devel, g++ devel, libstdc++6-devel
Ubuntu 12 Packages:
sudo apt-get install build-essential
zlib1g-dev, using the Ubuntu Software Center
ocaml and g++ if they don't come with "build-essential"
2) Put eprover.exe in the path so that ./configure can find it.
a) There are the following lines in the configure file, which show how it finds picomus and eprover; eprover has to be in the path, or the `which eprover` line has to be edited.
# Optionally set picomus to your picomus executable
picomus=${PWD}/picosat-936/picomus
# Optionally set eprover to your E theorem prover executable
eprover=`which eprover`
3) Follow the instructions in INSTALL.
a) export MROOT=`pwd` takes care of the next note, which I had to do for v2.6; I keep the info here in case I need it in the future.
b) export MROOT=<minisat-dir>, where you replace "minisat-dir" with the
/cygdrive/e\E_2\binp\isaprove\satallax-2.6\cygwin\minisat
4) OLD v2.6 NOTE: If you get an error, delete the old source and try untarring the sources again.
My build of v2.7 went through without problems, other than the test giving errors.
With Satallax v2.7, there is now the requirement that the build find the eprover. Note STEP 3 of INSTALL tells you to modify configure, or put eprover.exe in the path before the build. I put it in the path, which for me is
E:\E_2\dev\Isabelle2013-2\contrib\e-1.8\x86-cygwin
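In a Cygwin shell that Windows directory is reachable under /cygdrive, so something along these lines should put eprover on the path (a sketch based on the path above; adjust it to your own layout):
export PATH="/cygdrive/e/E_2/dev/Isabelle2013-2/contrib/e-1.8/x86-cygwin:$PATH"
which eprover     # should now print the location of eprover.exe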
The INSTALL file then gives short instructions:
* Short Instructions
cd minisat
export MROOT=`pwd`
cd core
make Solver.o
cd ../simp
make SimpSolver.o
cd ../../picosat-936
./configure
make
cd ..
./configure
make
./test | grep ERROR
After downloading all needed packages, and putting eprover.exe in the path, it built without errors for me other than the test, but the executable works when used by Isabelle Sledgehammer.
STEP 3 of INSTALL talks about providing the location of the picomus executable, but I'm pretty sure there's no need to do that because picosat-936\picomus.exe gets built in this build.
If you watch the build messages, it'll tell you what it's looking for and what it finds.
For completeness, I include the text from INSTALL, except for the instructions related to what's pertinent for Coq.
There are a number of requirements in order to compile Satallax.
In short, you need make, ocaml, g++ and the zlib header files.
In Debian and derived Linux systems, you can get these from
the build-essential and zlib1g-dev packages. You need
ocamlopt to obtain a standalone executable.
If you're not the administrator of the computer on which you're installing,
you can quote the previous paragraph to the administrator.
* Short Instructions
cd minisat
export MROOT=`pwd`
cd core
make Solver.o
cd ../simp
make SimpSolver.o
cd ../../picosat-936
./configure
make
cd ..
./configure
make
./test | grep ERROR
./bin/satallax.opt is the native code executable to use.
See test for examples of how to use it.
* Long Instructions
STEP 1:
Compile minisat (see minisat/README)
cd minisat
export MROOT=<minisat-dir> (or setenv in cshell)
cd core
make Solver.o
cd ../simp
make SimpSolver.o
cd ../..
STEP 2 (Optional. Only needed to extract proof information for proof terms.) :
Build picosat (including picomus):
cd picosat-936
./configure
make
cd ..
STEP 3:
If desired, edit the configure script to give the location of your picomus
and eprover executables. (If the executables are not found by the configure script,
you will need to give the location of the executables to satallax via the command line
options -P <picomus> -E <eprover> if they are needed.)
Run the configure script for Satallax.
./configure
STEP 4:
make
uses ocamlopt to make a standalone executable
./bin/satallax.opt
and uses ocamlc to make a bytecode executable
./bin/satallax
that depends on ocamlrun
STEP 5:
Test satallax using the examples in the script file:
./test
As long as you don't see a line with the word ERROR, it should be working.

MPI - error loading shared libraries

The problem I faced has been solved here:
Loading shared library in open-mpi/ mpi-run
I do not know how setting LD_LIBRARY_PATH or specifying -x LD_LIBRARY_PATH fixes the problem, when my installation itself specifies the necessary -L arguments. My installation is in ~/mpi/.
I have also included my compile-link configs.
$ mpic++ -showme:version
mpic++: Open MPI 1.6.3 (Language: C++)
$ mpic++ -showme
g++ -I/home/vigneshwaren/mpi/include -pthread -L/home/vigneshwaren/mpi/lib
-lmpi_cxx -lmpi -ldl -lm -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$ mpic++ -showme:libdirs
/home/vigneshwaren/mpi/lib
$ mpic++ -showme:libs
mpi_cxx mpi dl m rt nsl util m dl % Notice mpi_cxx here %
When I compiled with mpic++ <file> and ran with mpirun a.out I got a (shared library) linker error
error while loading shared libraries: libmpi_cxx.so.1:
cannot open shared object file: No such file or directory
The error was fixed by setting LD_LIBRARY_PATH. The question is how and why? What am I missing? Why is LD_LIBRARY_PATH required when my installation looks just fine?
libdl, libm, librt, libnsl and libutil are all essential system-wide libraries and they come as part of the very basic OS installation. libmpi and libmpi_cxx are part of the Open MPI installation and in your case are located in a non-standard location that must be explicitly added to the run-time linker's search path, e.g. via LD_LIBRARY_PATH.
It is possible to modify the configuration of the Open MPI compiler wrappers and make them pass the -rpath option to the linker. -rpath takes a library path and appends it to a list, stored inside the executable file, which tells the runtime link editor (a.k.a. the dynamic linker) where to search for libraries before it consults the LD_LIBRARY_PATH variable. For example, in your case the following option would suffice:
-Wl,-rpath,/home/vigneshwaren/mpi/lib
This would embed the path to the Open MPI libraries inside the executable and it would not matter if that path is part of LD_LIBRARY_PATH at run time or not.
To make the corresponding wrapper add that option to the list of compiler flags, you would have to modify the mpiXX-wrapper-data.txt file (where XX is cc, c++, CC, f90, etc.), located in mpi/share/openmpi/. For example, to make mpicc pass the option, you would have to modify /home/vigneshwaren/mpi/share/openmpi/mpicc-wrapper-data.txt and add the following to the line that starts with linker_flags=:
linker_flags= ... -Wl,-rpath,${prefix}/lib
${prefix} is automatically expanded by the wrapper to the current Open MPI installation path.
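To confirm that the flag actually made it into a binary, you could rebuild and inspect it (a sketch; hello.c stands in for any of your MPI sources):
mpic++ hello.c -o a.out              # rebuilt with the modified wrapper
readelf -d a.out | grep -i rpath     # should list /home/vigneshwaren/mpi/lib
ldd a.out | grep libmpi              # libmpi/libmpi_cxx should now resolve without LD_LIBRARY_PATH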
In my case, I simply appended
export LD_LIBRARY_PATH=/PATH_TO_openmpi-version/lib:$LD_LIBRARY_PATH
for example
export LD_LIBRARY_PATH=/usr/local/openmpi-1.8.1/lib:$LD_LIBRARY_PATH
to my $HOME/.bashrc file and then sourced it so it takes effect again: source $HOME/.bashrc.
I installed mpich 3.2 using the following command on Ubuntu.
sudo apt-get install mpich
When I tried to run the mpi process using mpiexec, I got the same error.
/home/node1/examples/.libs/lt-cpi: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
Configuring LD_LIBRARY_PATH didn't fix my problem.
I did a search for the file 'libmpi.so.0' on my machine but couldn't find it. It took me some time to figure out that the file is named 'libmpi.so' on my machine, so I renamed it to 'libmpi.so.0'.
It solved my problem!
If you are having the same problem and you installed the library through apt-get, then do the following.
The file 'libmpi.so' should be in the location '/usr/lib/'. Rename the file to 'libmpi.so.0'
mv /usr/lib/libmpi.so /usr/lib/libmpi.so.0
After that MPI jobs should run without any problem.
If 'libmpi.so' is not found in '/usr/lib', you can get its location using the following command.
whereis libmpi.so
First, run this command:
$ sudo apt-get install libcr-dev
If you still have this problem, then configure LD_LIBRARY_PATH like this:
export LD_LIBRARY_PATH=/usr/local/mpich-3.2.1/lib:$LD_LIBRARY_PATH
Then add it to ~/.bashrc before this line:
[ -z "$PS1" ] && return
Simply running
$ ldconfig
appears to me to be a better way to solve the problem (taken from a comment on this question), in particular since it avoids misuse of the LD_LIBRARY_PATH environment variable. See here and here for why I believe it is misused to solve the problem at hand.
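For ldconfig to help, the loader has to know about the Open MPI library directory, e.g. via a drop-in file under /etc/ld.so.conf.d (a sketch, assuming root access and the install path from the earlier answer; the file name openmpi.conf is arbitrary):
echo '/usr/local/openmpi-1.8.1/lib' | sudo tee /etc/ld.so.conf.d/openmpi.conf
sudo ldconfig
ldconfig -p | grep libmpi      # verify the loader cache now lists the MPI libraries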

why, after setting LD_LIBRARY_PATH and ld.so.cache properly, are there still library-finding problems?

I have a certain shared object library in a special directory, and I
make sure the special directory is in $LD_LIBRARY_PATH
make sure this directory has read and execute permissions for all
make sure the appropriate library directory is in ld.so.conf and that root has run ldconfig
(verified by checking for the library using ldconfig -p as a normal user)
make sure it has no soname problems (i.e. create a few symlinks if necessary)
Now, say I compile a program that needs that special library, a program packaged in the typical open-source manner (./configure && make, etc.), and it says -lspeciallibrary cannot be found, an error which a lack of any of the above checks would probably also cause.
A workaround I have used is to symlink the library into /usr/local/lib64, and suddenly the library is found. Also, when compiling a relatively simple package, I manually add -L/path/to/spec/lib and that has worked too. But I regard those two methods as hacks, so I was looking for clues as to why my list of checks isn't good enough to find the library.
(I find $LD_LIBRARY_PATH of particularly shallow use. In fact I can exclude certain libraries from it and they will still be found during compilation.)
$LD_LIBRARY_PATH and ldconfig are only used to locate libraries when running programs that need libraries, i.e. they are used by the loader, not the compiler. Your program depends on libspeciallibrary.so. When running your program, $LD_LIBRARY_PATH and ldconfig are consulted to find libspeciallibrary.so.
These methods are not used by your compiler to find libraries. For your compiler, the -L option is the right way to go. Since your package uses the autotools, you should set the $LDFLAGS environment variable:
LDFLAGS=-L/path/to/lib ./configure && make
This is also documented in the configure help:
./configure --help
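If you also want the resulting binaries to find the library at run time without LD_LIBRARY_PATH or an ldconfig entry, you can additionally embed an rpath when configuring (a sketch, combining this answer with the rpath approach discussed earlier on this page):
LDFLAGS="-L/path/to/lib -Wl,-rpath,/path/to/lib" ./configure && make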

Force gcc to compile 32 bit programs on 64 bit platform

I've got a proprietary program that I'm trying to use on a 64 bit system.
When I launch the setup it works OK, but afterwards it tries to update itself and compile some modules, and it fails to load them.
I suspect it's because it uses gcc, and gcc tries to compile them for a 64-bit system, so this program cannot use these modules.
Is there any way (some environment variable or something like that) to force gcc to do everything for a 32-bit platform? Would a 32-bit chroot work?
You need to make GCC use the -m32 flag.
You could try writing a simple shell script, placing it in your $PATH, and calling it gcc (make sure you don't overwrite the original gcc, that the new script comes earlier in $PATH, and that it uses the full path to the real GCC).
I think the code you need is just something like /bin/gcc -m32 $*, depending on your shell (the $* is there to pass along all the arguments, although it might be something else in your shell; getting this right is very important!).
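A minimal sketch of such a wrapper, assuming the real compiler is /usr/bin/gcc and the script is saved as gcc in a directory that comes earlier in $PATH:
#!/bin/sh
# Forward everything to the real compiler, forcing 32-bit code generation.
# "$@" (rather than $*) preserves arguments that contain spaces.
exec /usr/bin/gcc -m32 "$@"
Remember to chmod +x the script.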
You may get a 32-bit binary by applying Alan Pearce's method, but you may also get errors as follows:
fatal error: bits/predefs.h: No such file or directory
If this is the case and if you have apt-get, just install gcc-multilib
sudo apt-get install gcc-multilib
For any code that you compile directly using gcc/g++, you will need to add the -m32 option to the compilation command line; simply edit your CFLAGS, CXXFLAGS and LDFLAGS variables in your Makefile.
For any 3rd party code you might be using, you must make sure when you build it to configure it for cross compilation. Run ./configure --help and see which options are available. In most cases you can provide your CFLAGS, CXXFLAGS and LDFLAGS variables to the configure script. You also might need to add --build and --host to the configure script, so that you end up with something like
./configure CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32 --build=x86_64-pc-linux-gnu --host=i686-pc-linux-gnu
If compilation fails, this probably means that you need to install some 32-bit development packages on your 64-bit machine.
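To check that you actually ended up with 32-bit objects, the file utility is handy (hello.c here is just a stand-in test program):
gcc -m32 hello.c -o hello
file hello      # should report an ELF 32-bit executable rather than ELF 64-bit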
