Is it possible to isolate user libraries from system libraries in Linux?

I'm creating an application that uses pre-compiled third-party shared libraries. To use them I have to set LD_LIBRARY_PATH or create a conf file such as /etc/ld.so.conf.d/application.conf. My problem is that there is already a system libcurl.so.4 under /usr/lib/, and the third-party library also ships its own libcurl.so.4. If I create the /etc/ld.so.conf.d/application.conf file, I can no longer use the yum installer.
I'm getting the error:
Pycurl error occurred,
compile-time version is higher than the linking version
I'm reluctant to remove the application's libcurl.so.4, as that may break features in the third-party library I depend on (making my application meaningless), and I can't ignore the system library either.
Is it possible to use these two libraries without the conflict described above?
PS: Setting LD_LIBRARY_PATH also causes the same problem.

Create a script that sets and exports $LD_LIBRARY_PATH before invoking the executable. The variable will disappear once the script exits.
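For example, a minimal wrapper sketch (the paths /opt/myapp/thirdparty/lib and /opt/myapp/myapp are made-up placeholders for your install layout):
#!/bin/sh
# Export the third-party lib dir only for this process and its children.
LD_LIBRARY_PATH=/opt/myapp/thirdparty/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
exec /opt/myapp/myapp "$@"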

If you have two conflicting libs and one belongs to the system while the other belongs to a user application, don't put application.conf into /etc/ld.so.conf.d/. Instead, keep it somewhere like ~your_user_name/custom_conf and put your application.conf file there (eventually you may need to edit it, adding the path to the proper version of libcurl.so.4). The application's libcurl.so.4 should likewise not live in system dirs, but rather in something like ~your_user_name/lib. You can then make a wrapper for your app that sets LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~your_user_name/lib, as Ignacio Vazquez-Abrams suggested, or compile your app and point the linker explicitly at that copy (use the -L linker flag with the full path of the directory containing your libcurl.so.4).
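If you go the linker route, a sketch of the link command could look like this (main.o, myapp, and ~/lib are illustrative names; the point is -L plus -rpath aimed at your private copy):
# Link against the copy in ~/lib and record that directory in the binary itself.
gcc main.o -L"$HOME"/lib -lcurl -Wl,-rpath,"$HOME"/lib -o myapp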

Related

Can I avoid exporting LD_LIBRARY_PATH by hardcoding library paths in the executable?

I'm zipping a pre-built (no source/object files) binary application for distribution. The binary application requires a couple of libraries not included by default. The only way I seem to be able to get the application to start on the end-user's machine is by including a run.sh that sets the library path to the current directory:
export LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH
./MyApp.out
However, I'd really like to allow the user to just unzip the zip and double-click MyApp.out (without the shell script). Can I edit MyApp.out to search the current directory for the library? I've done something similar on OS X using install_name_tool, but that tool isn't available here.
You want to set the rpath. See this answer. So link using
gcc yourobjects*.o -L/some/lib/dir/ -lsome -Wl,-rpath,.
But you might even want to use -Wl,-rpath,$PWD or perhaps -Wl,-rpath,'$ORIGIN'. See this.
You could also (and this should work for a pre-built executable) configure your /etc/ld.so.conf by adding a line there with an absolute path (of the directory containing the lib), then running ldconfig -v ... See ldconfig(8)
I would suggest adding /usr/local/lib to /etc/ld.so.conf and making a symlink from /usr/local/lib/libfoo.so to e.g. $HOME/libfoo.so etc. (then run ldconfig). I don't think adding a user-specific directory to /etc/ld.so.conf is reasonable.
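Roughly, assuming a hypothetical libfoo.so under /home/user, the steps would be (run as root):
echo /usr/local/lib >> /etc/ld.so.conf            # or drop a file into /etc/ld.so.conf.d/
ln -s /home/user/libfoo.so /usr/local/lib/libfoo.so
ldconfig -v                                       # rebuild the loader cache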
PS. What you really want is to package your application (e.g. as a *.deb package for Debian or Ubuntu, or an *.rpm for Fedora or Red Hat). Package management systems handle dependencies!

How to manage development and installed versions of a shared library?

In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course the executables in the development directory need to link with the latest version of the library, while the other programs, would want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, not to mention not completely correct since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so". Usually they also include their soname, such as libasdf.so.42.1. The .so file for development is typically a symlink to a fully-versioned filename. The linker will look for the .so file and resolve it to the full filename, and the loader will then load the fully-versioned library instead.
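A rough sketch of that scheme (the library name, source files, and version numbers here are made up):
# Build the library with an embedded soname.
gcc -shared -fPIC -Wl,-soname,libasdf.so.42 -o libasdf.so.42.1 asdf.c
ln -s libasdf.so.42.1 libasdf.so.42   # the name the loader looks for
ln -s libasdf.so.42 libasdf.so        # the name the linker (-lasdf) looks for
gcc use_asdf.c -L. -lasdf -o use_asdf # records "libasdf.so.42" as the runtime dependency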

gcc: linked libraries in /usr/local/lib are not found, but /etc/ld.so.conf.d/libc.conf lists it?

I've got a problem with shared libraries and gcc. At first I couldn't run my compiled program because I was getting the following error: gcc error while loading shared libraries.
I did some searching and found that this is because the shared library cannot be found. However I had already identified that the shared library is in /usr/local/lib, which AFAICT is a commonly used directory for shared libraries and should work from the get go.
I read that you can set LD_LIBRARY_PATH, which worked for me. However I do not wish to set this each time I want to run my program.
Further searching suggested editing ld.so.conf. When I looked in this it had the following:
include /etc/ld.so.conf.d/*.conf
Looking in the ld.so.conf.d directory shows me a range of files, including libc.conf. Inside this file is the following:
/usr/local/lib
So my question is, why do I need to manually set LD_LIBRARY_PATH when the ld.so.conf appears to use the libc.conf which includes /usr/local/lib?
Is there something that I'm missing here that must be configured first? Is there an option at compile time that I'm missing?
I should note that, to compile, I had to specify the path to the library; I don't know if this is a symptom of my problem or normal behaviour.
I should also note that this is a concern for me for when I deploy my software on other systems. I would have thought that I should be able to put the .so in the appropriate place and install my program without messing with ld.so.conf.
You should run ldconfig (as root) after every change of the directories configured via /etc/ld.so.conf or under /etc/ld.so.conf.d/, in particular in your case after every update inside /usr/local/lib (e.g. after every addition or update of some shared libraries there).
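For example (the library name is illustrative):
sudo cp libmylib.so.1 /usr/local/lib/
sudo ldconfig
ldconfig -p | grep libmylib   # confirm the loader cache now lists it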

Linux capabilities (setcap) seem to disable LD_LIBRARY_PATH

I use LD_LIBRARY_PATH to set the path of a certain user library for an application. But if I set capabilities on this application
sudo setcap CAP_NET_BIND_SERVICE=eip myapplication
then LD_LIBRARY_PATH seems to be ignored. When I launch the program, Linux complains that it cannot find a certain shared library.
I guess that there's some kind of protection kicking in, to prevent applications with extended rights from being hijacked. Is there a workaround?
As already stated in other answers, this behavior is intended. There is a kind of workaround if you can compile (or at least link) the application yourself: pass -Wl,-rpath,<yourDynamicLibraryPath> to gcc (or -rpath <yourDynamicLibraryPath> directly to ld) and you won't have to specify LD_LIBRARY_PATH at all at execution time.
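A sketch of that approach (the paths and names here are made up):
# Bake the search path into the binary so LD_LIBRARY_PATH is not needed,
# even after capabilities are set.
gcc myapplication.c -L/home/user/mylibs -lmylib -Wl,-rpath,/home/user/mylibs -o myapplication
sudo setcap CAP_NET_BIND_SERVICE=eip myapplication
./myapplication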
The solution to this problem on Linux is as follows:
Go to the directory:
$ cd /etc/ld.so.conf.d/
Create a new file:
$ touch xyz.conf
Open this file with any editor:
$ vi xyz.conf
Add your dynamic library paths to this file, one per line. For example, if your path is:
/home/xyz/libs1:/home/xyz/libs2/:/home/xyz/libs3/
then there should be three entries in the file:
/home/xyz/libs1/
/home/xyz/libs2/
/home/xyz/libs3/
Then save the file and run:
$ ldconfig
All of the above operations need to be performed as root.
The man page for sudo explains:
Note that the dynamic linker on most operating systems will remove variables that can control dynamic linking from the environment of setuid executables, including sudo. Depending on the operating system this may include _RLD*, DYLD_*, LD_*, LDR_*, LIBPATH, SHLIB_PATH, and others. These type of variables are removed from the environment before sudo even begins execution and, as such, it is not possible for sudo to preserve them.
As this link explains, the actual mechanism for doing this is in glibc. If the UID does not match the EUID (which is the case for any setuid program, including sudo), then all "unsecure environment variables", LD_LIBRARY_PATH among them, are removed. Thus, a program with elevated privileges runs without its dynamic linking being influenced by the caller's environment.
Yes, it's disabled for security reasons.
An alternative to consider is to "correct" a poorly compiled ELF shared library and/or executable using patchelf to set the rpath.
https://nixos.org/patchelf.html
ld.so.conf is not always the sure bet. It will work if whatever you are running was compiled properly. In my case, with a particular specially packaged vendor's Apache product, it was compiled so poorly that they did not even use unique .so filenames, so these conflicted with the .so filenames from RPMs in the base RHEL repositories that provide some pretty critical, commonly used libraries. So patchelf was the only option for isolating how those libraries were used: pointing ld.so.conf at the shared objects in the vendor's lib path would have blown away a lot of stuff, including yum, along with causing glibc shared-library failures system-wide.
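For what it's worth, a sketch of the patchelf usage (the vendor paths here are made up):
patchelf --set-rpath /opt/vendor/lib /opt/vendor/bin/httpd   # make the binary prefer the vendor's private libs
patchelf --print-rpath /opt/vendor/bin/httpd                 # verify the new rpath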

What's the accepted method for deploying a linux application that relies on shared libraries?

I have an application that relies on Qt, GDCM, and VTK, with the main build environment being Qt. All of these libraries are cross-platform and compile on Windows, Mac, and Linux. I need to deploy the application to Linux after deploying on Windows. The versions of VTK and GDCM I'm using are trunk versions from git (about a month old), more recent than what I can get via apt-get on Ubuntu 11.04, which is my current (and only) Linux deployment target.
What is the accepted method for deploying an application that relies on these kinds of libraries?
Should I be statically linking here, to avoid LD_LIBRARY_PATH? I see conflicting reports on LD_LIBRARY_PATH; tutorials like this one suggest that modifying the library path is the 'right way' to make shared libraries available across system reboots. Others suggest that I should never set LD_LIBRARY_PATH. In the default version of GDCM, the installation already puts libraries into the /usr/local/lib directory, so those libraries get seen when I run ldd <my program>. VTK, on the other hand, puts its libraries into /usr/local/lib/vtk-5.9, which is not part of the LD_LIBRARY_PATH on most users' machines, and so is not found unless some change is made to the system. Copying the VTK files into /usr/local/lib does not allow ldd to see the files.
So, how can I make my application see VTK to use the libraries?
On Windows, deploying the DLLs is very straightforward, because I can just include them in the installer and the application finds them because they are in the local directory. That approach does not work on Linux, so I was going to have the users install Qt, GDCM, and VTK from whatever appropriate source, use the default locations, and then have the application point to those default locations. However, since VTK is putting things into a non-standard location, should I also expect users to modify LD_LIBRARY_PATH? Should I include the specific versions of the libraries that I want and then figure out how to make the executable look in the local directory for those libraries and ignore the ones it finds in the library path?
Every "serious" commercial application I have ever seen uses LD_LIBRARY_PATH. They invariably include a shell script that looks something like this:
#!/bin/sh
here="${0%/*}" # or you can use `dirname "$0"`
LD_LIBRARY_PATH="$here"/lib:"$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
exec "$0".bin "$#"
They name this script something like .wrapper and create a directory tree that looks like this:
.wrapper
lib/ (directory full of .so files)
app1 -> .wrapper (symlink)
app1.bin (executable)
app2 -> .wrapper (symlink)
app2.bin (executable)
Now you can copy this whole tree to wherever you want, and you can run "/path/to/tree/app1" or "/path/to/tree/app2 --with --some --arguments" and it will work. So will putting /path/to/tree in your PATH.
Incidentally, this is also how Firefox and Chrome do it, more or less.
Whoever told you not to use LD_LIBRARY_PATH is full of it, IMHO.
Which system libraries you want to put in lib depends on which Linux versions you want to officially support.
Do not even think about static linking. The glibc developers do not like it, they do not care about supporting it, and they somehow manage to break it a little harder with every release.
Good luck.
In general, you're best off depending on the 'normal' versions of the libraries for whatever distribution you're targeting (and saying you don't support distributions that don't ship recent enough versions of the lib), but if you REALLY need to depend on a bleeding-edge version of some shared lib, you can link your app with -Wl,-rpath,'$ORIGIN' and then install a copy of the exact version you want in the same directory as your executable.
Note that if you use make, you'll need $$ in the makefile to get a single $ into the argument that is actually sent to the linker. The single quotes are needed so the shell doesn't munge things...
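For instance, a makefile rule might look like this (the target and library names are illustrative; note the doubled $$ and the single quotes around $$ORIGIN):
myapp: main.o
	$(CC) main.o -L./lib -lfoo -Wl,-rpath,'$$ORIGIN' -o myapp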
Well, there are two options for deploying a Linux application.
The correct way:
make a package for your app and for the libraries, if they are so special that they can't be installed from standard repositories
There are two major package formats. RPM and DEB.
The easy way:
make a self-extracting file that will install the "Windows way" into /opt.
You can have libraries in the same directory as the executable, it's just not the preferred way.

Resources