What env variable does node use to load libraries? - node.js

I am getting this error on server:
node: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
however a simple find shows:
/usr/lib/x86_64-linux-gnu/libstdc++.so.6
I already have the following set in .bashrc:
export PATH=$PATH:~/.local/bin:/usr/lib
export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib:~/.local/lib
export LIBRARY_PATH=/usr/lib:/usr/local/lib:~/.local/lib
yet node can't find the existing libstdc++ library.
I have done
source .bashrc
and also
echo $LIBRARY_PATH
This is a really common problem judging by my search, but most solutions recommend installing the library; in my case the file is already installed, node just doesn't see it.

Try installing the missing lib32stdc++ with apt-get install lib32stdc++6.
Set LD_DEBUG for better diagnostics (an example invocation follows the option list below).
If the LD_DEBUG variable is set then the Linux dynamic linker will dump debug information which can be used to resolve most loading problems very quickly. To see the available options just run any program with the variable set to help.
Valid options for the LD_DEBUG environment variable are:
libs display library search paths
reloc display relocation processing
files display progress for input file
symbols display symbol table processing
bindings display information about symbol binding
versions display version dependencies
all all previous options combined
statistics display relocation statistics
unused determined unused DSOs
help display this help message and exit
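For example, to watch which directories the dynamic linker actually searches while starting node (this assumes node is on your PATH; the LD_DEBUG output goes to stderr):
$ LD_DEBUG=libs node --version 2>&1 | grep libstdc++
$ LD_DEBUG=help node --version    # prints the option list above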

Related

wrong soname of shared library loaded at runtime Ubuntu

I'm using Eclipse CDT to compile and run a C++ application.
My_main_program specifically needs libjpeg.so.62.
My Ubuntu system previously had libjpeg.so.9 at /usr/local/lib/. I happened to compile and run using libjpeg.so.9 before run-time compatibility errors were raised.
Then I deleted all libjpeg.* and installed libjpeg.la, libjpeg.so, libjpeg.so.62 and libjpeg.so.62.0.0 from source. Then I ran ldconfig.
I can build the project. The problem is the dynamic linker keeps searching for libjpeg.so.9 and throwing
'error while loading shared libraries: libjpeg.so.9: cannot open shared object file: No such file or directory'
at run-time.
This problem is killing me.
I have checked that the symlink of libjpeg.so is correct.
Please help!
I can build the project. The problem is the dynamic linker keeps searching for libjpeg.so.9 and throwing
'error while loading shared libraries: libjpeg.so.9: ... No such file ...
You need to understand a couple of things:
A shared library may have SONAME dynamic tag (visible with readelf -d foo.so | grep SONAME).
If an executable is linked against such a library, the SONAME is recorded as a NEEDED dynamic tag (in the executable), regardless of what the library itself is called. That is, you can name the library foo.so, foo.so.1234, or anything else. If the library has a SONAME of libbar.so.7, then the executable will require libbar.so.7, no matter what [1].
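As a quick illustration of both tags (the library and executable names here are placeholders, not files from the question):
$ readelf -d libfoo.so | grep SONAME   # e.g. "Library soname: [libbar.so.7]"
$ readelf -d ./my_exe | grep NEEDED    # the executable then lists "Shared library: [libbar.so.7]"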
On to your problem. Your executable fails to load libjpeg.so.9, therefore we conclude that it is being linked (at build time) against a shared library which has SONAME: libjpeg.so.9.
I deleted all libjpeg.* and installed libjpeg.so.62
You must not have deleted the libjpeg.so that is used at executable build time (which is somewhere other than /usr/local/lib). That library still has SONAME: libjpeg.so.9, and is causing you grief.
You can find out which libraries are being used at link time by passing the -Wl,-t flag on the link line.
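For instance, on a plain g++ link step (file names are placeholders), the trace lists every library file the linker actually opens:
$ g++ -o my_exe main.o -ljpeg -Wl,-t 2>&1 | grep jpeg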
[1] Not strictly true: if the executable doesn't need any symbols from foo.so, and if --as-needed linker option is in effect, then NEEDED: libbar.so.7 will not be recorded after all.
Update:
I have also checked ldd executable and it returns libjpeg.so.62
This means that the executable that you run ldd on is correct, but the executable that actually runs is not, and they must be different executables.
Update 2:
You're right. ldd executable shows both libjpeg.so.62 and libjpeg.so.9 are included
Actually, no, I wasn't. But I will be right this time.
What's happening is that your executable correctly records NEEDED: libjpeg.so.62 (you can verify this with the following command: readelf -d /path/to/exe | grep 'NEEDED.*libjpeg').
But you also have some other shared library (one of the ones listed in ldd output), which has not been rebuilt, and still has a dependency on libjpeg.so.9.
You can find that library by running readelf -d /path/to/libXXX.so | grep 'NEEDED.*libjpeg\.so\.9' on all libraries listed in ldd output.
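A one-liner sketch of that search (the executable name my_app is a placeholder):
$ for lib in $(ldd ./my_app | awk '/=> \// {print $3}'); do readelf -d "$lib" | grep -q 'NEEDED.*libjpeg\.so\.9' && echo "$lib"; done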
Once you find it, you'll have to rebuild it so it also depends on libjpeg.so.62.

MPI - error loading shared libraries

The problem I faced has been solved here:
Loading shared library in open-mpi/ mpi-run
I don't know how, but setting LD_LIBRARY_PATH or specifying -x LD_LIBRARY_PATH fixes the problem, even though my installation itself specifies the necessary -L arguments. My installation is in ~/mpi/.
I have also included my compile-link configs.
$ mpic++ -showme:version
mpic++: Open MPI 1.6.3 (Language: C++)
$ mpic++ -showme
g++ -I/home/vigneshwaren/mpi/include -pthread -L/home/vigneshwaren/mpi/lib
-lmpi_cxx -lmpi -ldl -lm -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$ mpic++ -showme:libdirs
/home/vigneshwaren/mpi/lib
$ mpic++ -showme:libs
mpi_cxx mpi dl m rt nsl util m dl % Notice mpi_cxx here %
When I compiled with mpic++ <file> and ran with mpirun a.out I got a (shared library) linker error
error while loading shared libraries: libmpi_cxx.so.1:
cannot open shared object file: No such file or directory
The error has been fixed by setting LD_LIBRARY_PATH. The question is how and why? What am I missing? Why is LD_LIBRARY_PATH required when my installation looks just fine?
libdl, libm, librt, libnsl and libutil are all essential system-wide libraries and they come as part of the very basic OS installation. libmpi and libmpi_cxx are part of the Open MPI installation and in your case are located in a non-standard location that must be explicitly included in the runtime library search path, LD_LIBRARY_PATH.
It is possible to modify the configuration of the Open MPI compiler wrappers and make them pass the -rpath option to the linker. -rpath takes a library path and appends it to a list, stored inside the executable file, which tells the runtime link editor (a.k.a. the dynamic linker) where to search for libraries before it consults the LD_LIBRARY_PATH variable. For example, in your case the following option would suffice:
-Wl,-rpath,/home/vigneshwaren/mpi/lib
This would embed the path to the Open MPI libraries inside the executable and it would not matter if that path is part of LD_LIBRARY_PATH at run time or not.
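For example, instead of editing the wrapper configuration you could pass the option directly on the command line (the source file name is a placeholder):
$ mpic++ my_prog.cpp -o my_prog -Wl,-rpath,/home/vigneshwaren/mpi/lib
$ readelf -d my_prog | grep -E 'RPATH|RUNPATH'   # the embedded path should show up here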
To make the corresponding wrapper add that option to the list of compiler flags, you would have to modify the mpiXX-wrapper-data.txt file (where XX is cc, c++, CC, f90, etc.), located in mpi/share/openmpi/. For example, to make mpicc pass the option, you would have to modify /home/vigneshwaren/mpi/share/openmpi/mpicc-wrapper-data.txt and add the following to the line that starts with linker_flags=:
linker_flags= ... -Wl,-rpath,${prefix}/lib
${prefix} is automatically expanded by the wrapper to the current Open MPI installation path.
In my case, I simply appended
export LD_LIBRARY_PATH=/PATH_TO_openmpi-version/lib:$LD_LIBRARY_PATH
For example
export LD_LIBRARY_PATH=/usr/local/openmpi-1.8.1/lib:$LD_LIBRARY_PATH
to the $HOME/.bashrc file and then sourced it to make it take effect: source $HOME/.bashrc.
I installed mpich 3.2 using the following command on Ubuntu.
sudo apt-get install mpich
When I tried to run the mpi process using mpiexec, I got the same error.
/home/node1/examples/.libs/lt-cpi: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
Configuring LD_LIBRARY_PATH didn't fix my problem.
I did a search for the file 'libmpi.so.0' on my machine but couldn't find it. It took me some time to figure out that the file is named 'libmpi.so' on my machine, so I renamed it to 'libmpi.so.0'.
It solved my problem!
If you are having the same problem and you installed the library through apt-get, then do the following.
The file 'libmpi.so' should be in the location '/usr/lib/'. Rename the file to 'libmpi.so.0'
mv /usr/lib/libmpi.so /usr/lib/libmpi.so.0
After that MPI jobs should run without any problem.
If 'libmpi.so' is not found in '/usr/lib', you can get its location using the following command.
whereis libmpi.so
First, run this command:
$ sudo apt-get install libcr-dev
If you still have this problem, then configure LD_LIBRARY_PATH like this:
export LD_LIBRARY_PATH=/usr/local/mpich-3.2.1/lib:$LD_LIBRARY_PATH
then add it to ~/.bashrc before this line:
[ -z "$PS1" ] && return
Simply running
$ ldconfig
seems to me a better way to solve the problem (taken from a comment on this question), in particular since it avoids misuse of the LD_LIBRARY_PATH environment variable. See here and here for why I believe it's misused to solve the problem at hand.

Problems using a shared library

I am following the explanation in this page and this page trying to build and use shared libraries on Ubuntu Linux.
I am building the libraries and application using a cross-compiler on my PC, then copying the files to the target system and running them there.
Finally, I am at the stage where all symlinks are defined correctly and I am able to run the application - but not in the required form.
Let's say that I have a shared library libtest.so.1.0 in a directory /home/ysap/libs. I then created the symlinks libtest.so.1 and libtest.so in the same directory, both pointing to the library file.
In the directory /home/ysap/apps I have an application program app.e that uses the test library.
Now, to run the application, I can type:
> LD_LIBRARY_PATH=/home/ysap/libs ./app.e
and the application runs nicely. However, I'd like to eliminate the assignment, so I tried typing:
> export LD_LIBRARY_PATH=/home/ysap/libs
> ./app.e
but unfortunately I get an error message, saying:
./app.e: error while loading shared libraries: libtest.so.1: cannot open shared object file: No such file or directory
I also tried typing:
> ldconfig -n /home/ysap/libs
and
> sudo ldconfig -n /home/ysap/libs
but it does not help.
What am I doing wrong? How can I make app.e run w/o the variable assignment?
Update 1:
The application uses the mmap() call, so it has to be run with sudo privileges. The actual invocation line is:
> sudo LD_LIBRARY_PATH=/home/ysap/libs ./app.e
Is it possible that the export-ed variable is not updated in the sudo environment?
Update 2:
Output of ldd ./app.e:
libtest.so.1 => /home/ysap/libs/libtest.so.1 (0xb6faa000)
libgcc_s.so.1 => /lib/arm-linux-gnueabi/libgcc_s.so.1 (0xb6f85000)
libc.so.6 => /lib/arm-linux-gnueabi/libc.so.6 (0xb6ea4000)
/lib/ld-linux.so.3 (0xb6fb7000)
The sudo problem is as @duskwuff states, but if you want to build an application that does not need the LD_LIBRARY_PATH variable at all, you can use the $ORIGIN variable when linking it, which is recognized by most recent versions of Linux.
If all the libraries are in the current directory, then when you link the application, you use the extra option:
-Wl,-R'$ORIGIN'
You need to quote the option to prevent it being expanded by the shell when compiling.
If you're putting it into a Makefile then you use:
-Wl,-R\$$ORIGIN
the $$ is how make produces a literal $, and the \ prevents the shell that make invokes from expanding the variable before it is passed to the linker.
You can use any symbolic path reference, so if you had a structure where binaries were in bin/ and libraries were in lib/, you can use $ORIGIN/../lib.
This works for dlopen as well, so libraries will also be found when they are loaded dynamically at run time.
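As an illustration with the directory layout from the question (this gcc line is a sketch, not the asker's actual build command):
$ gcc app.c -o app.e -L/home/ysap/libs -ltest -Wl,-R'$ORIGIN/../libs'
$ readelf -d app.e | grep -E 'RPATH|RUNPATH'     # should list $ORIGIN/../libs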
Loading libraries from a user-specified path is a security risk, so sudo always strips out all LD_ environment variables, including LD_LIBRARY_PATH.
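You can see this on a default sudo configuration (sh -c is only used here to print the variable from inside each environment):
$ export LD_LIBRARY_PATH=/home/ysap/libs
$ sh -c 'echo $LD_LIBRARY_PATH'        # prints /home/ysap/libs
$ sudo sh -c 'echo $LD_LIBRARY_PATH'   # prints an empty line, because sudo dropped it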

Get rid of "gcc - /usr/bin/ld: warning lib not found"

I have the following warning during link:
/usr/bin/ld: warning: libxxx.so.6, needed by /a/b/c/libyyy.so, not found (try using -rpath or -rpath-link)
Setting the environment variable LD_LIBRARY_PATH=path_to_libxxx.so.6 silences the warning (adding -Lpath_to_libxxx.so.6 doesn't help).
I have a separate compilation server, where the resulting binary is only compiled.
The binary is executed on another server, and there libxxx.so.6 is found by the binary (checked with ldd executable).
Is there another way to get rid of the warning at compile time? (It appears several times and is very annoying.)
You need to add the dynamic library equivalent of -L:
-Wl,-rpath-link,/path/to/lib
This will cause the linker to look for shared libraries in non-standard places, but only for the purpose of verifying the link is correct.
If you want the program to find the library at that location at run-time, then there's a similar option to do that:
-Wl,-rpath,/path/to/lib
But, if your program runs fine without this then you don't need it.
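Put together on a link line like the one that produced the warning above (object and library names are placeholders; the path is wherever libxxx.so.6 lives on the build server):
$ gcc -o my_prog main.o -L/a/b/c -lyyy -Wl,-rpath-link,/path/to/dir_containing_libxxx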
Make sure the paths to the needed libraries are known to the runtime linker. This is done by adding a file in /etc/ld.so.conf.d/ with the needed path. For example, /etc/ld.so.conf.d/foo with the following contents:
/usr/local/lib/foo/
If you have a very old Linux version, /etc/ld.so.conf.d/ might not be supported, in which case you might have to add the paths directly into the /etc/ld.so.conf file.
After you've done that, you need to update the linker's database by executing the "ldconfig" command.
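For example, using the path from above:
$ echo /usr/local/lib/foo/ | sudo tee /etc/ld.so.conf.d/foo.conf
$ sudo ldconfig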
I know this is old, but here's a better fix:
The root cause:
The problem actually happens when LD, invoked by GCC, starts resolving library dependencies. Both GCC and LD are aware of the sysroot containing the libraries, however LD may be missing one critical component: the /etc/ld.so.conf file. Here’s an example ld.so.conf file from a Raspberry Pi system:
include /etc/ld.so.conf.d/*.conf
The /etc/ld.so.conf.d directory contains the following files:
00-vmcs.conf:
/opt/vc/lib
arm-linux-gnueabihf.conf:
/lib/arm-linux-gnueabihf /usr/lib/arm-linux-gnueabihf
libc.conf:
/usr/local/lib
The universal solution
 
The problem can be easily solved by copying the LD configuration files to a location where the cross-toolchain’s LD can find them. There’s one pitfall however: if your cross-toolchain was built with MinGW (most are), it probably did not have access to the glob() function, so it won’t be able to parse a wildcard-enabled include statement like *.conf. The workaround here is to just manually combine the contents of all .conf files from /etc/ld.so.conf.d and paste them into /etc/ld.so.conf:
/opt/vc/lib
/lib/arm-linux-gnueabihf
/usr/lib/arm-linux-gnueabihf
/usr/local/lib
Once you create the ld.so.conf file in the correct folder, your toolchain will be able to resolve all shared library references automatically and you won’t see that error message again!
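For instance, if your cross-toolchain keeps its sysroot at a hypothetical ~/x-tools/sysroot and you have a copy of the target's root filesystem, the combined file could be produced in one step (both paths are assumptions, adjust to your layout):
$ cat /path/to/target-rootfs/etc/ld.so.conf.d/*.conf > ~/x-tools/sysroot/etc/ld.so.conf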
The only way to silence these warnings using command line options would be the -L flag, which curiously does not work for you (maybe you can post more details on this). Since the warning is generated by ld, we could try to use -Wl,option to disable a linker warning, but according to the documentation of GNU ld there is no option for (de)activating these warnings.
So this leaves us with writing a wrapper script that filters out this warning, or compiling a custom version of ld.
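A minimal sketch of the wrapper-script route, assuming you move the real linker to /usr/bin/ld.real and install this script as ld in its place (untested; adjust the grep pattern to your exact warning text):
#!/bin/bash
# Pass everything through to the real linker, filtering the "not found" warnings out of stderr.
/usr/bin/ld.real "$@" 2> >(grep -v 'warning: .*not found (try using -rpath' >&2)
exit $?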

Symbol not found when dynamic library is moved

I have an app that depends on a dynamic library that is not in a system location. If the library is located in the location from which the executable was linked and LD_LIBRARY_PATH is set to that directory, the application runs.
If the libraries are copied to another directory and LD_LIBRARY_PATH is reset, the application won't start and an undefined symbol error occurs, despite the fact that the symbol appears to be in the library.
Any ideas why this may happen?
Thanks,
Try ldd to show which library paths are actually used:
ldd yourprogram
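A slightly fuller check (the binary, library, and symbol names are placeholders) shows both which copy of the library is resolved and whether that copy actually exports the symbol the loader complains about:
$ ldd ./my_app                                      # which libdep.so is picked up?
$ nm -D /new/location/libdep.so | grep my_symbol    # is the symbol exported by that copy?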
