library and include paths, ~/lib and ~/include? - linux

What is the canonical path for custom library and include files? I thought of either /usr/local/lib + /usr/local/include or ~/lib + ~/include. To me the latter looks like the better option, since the former are managed by the distribution's package manager and it is best not to interfere. Though I cannot find any reference to people actually using ~/lib.
Thanks

Is this something that you've created yourself, or a third party installation?
Normally /usr/local/ is a good place to install packages that are not part of the original OS. I do this myself for anything I've built and installed from source. Another place to put things is /opt, which is often used by commercial third-party software.
If you're going to write something of your own then using your home directory "~" sounds fine. This is also good if you don't have root access or don't want it mixed in with the other OS packages.
When compiling and linking you will need to configure things to use those directories. Also, if you're using dynamic shared libraries, LD_LIBRARY_PATH must be set as well.
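For example, a minimal sketch (main.c, myprog, and libmylib are placeholder names):
# Compile against headers in ~/include and link against ~/lib:
gcc -I"$HOME/include" -L"$HOME/lib" -o myprog main.c -lmylib
# At run time, tell the dynamic linker where the shared library lives:
export LD_LIBRARY_PATH="$HOME/lib:$LD_LIBRARY_PATH"
./myprog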

Related

gperftools: modify makefile to install in a different folder?

I was installing gperftools:
https://code.google.com/p/gperftools/
Everything worked, and I see that the project links to /usr/local/lib
I'd like to put the library in a folder local to my project, instead.
The reasoning behind this is that I'm putting the project on different machines, and I just need to link against the libprofiler and libtcmalloc libraries, instead of the entire package, which also comes with pprof and such.
The machines also have different architectures, so I actually need to build into that directory, instead of copy-pasting it over.
Is this a trivial thing to do?
gperftools uses autoconf/automake, so you can do
./configure --prefix=/path/to/whereever
make
make install
This works for all autotools projects, unless they are severely broken.
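For the per-architecture builds described in the question, something along these lines should work (the third_party layout is just an example, not anything gperftools prescribes):
# Install into a project-local, per-architecture directory:
./configure --prefix="$PWD/third_party/$(uname -m)"
make
make install
# Then link against just the libraries you need, e.g. -lprofiler and -ltcmalloc.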
On that note, it is generally a good idea to read the INSTALL file in a source tree to find out about this sort of stuff. There is one in the gperftools sources, and this is documented there.

Installing libraries in non standard location and using them to install a software

I am trying to install a software on a cluster running Linux without root. However, the software requires some non-standard libraries before it could be installed. I installed the required libraries in my home directory. When I used ./configure to compile the software's source code, I got an error message saying that it couldn't find library files.
I tried using CPPFLAGS, LDFLAGS, and LD_LIBRARY_PATH to tell the compiler where to find the libraries, but it did not seem to work.
How can I install a non-standard library without administrative privileges and tell the compiler where to find that library? Should I do the same thing for other libraries too?
I'm afraid that the exact process for doing so entirely depends on the software's actual configure script, and/or Makefile, and/or code. There is no universal answer that works with every software package in existence. Each one's configuration script is unique and different.
It also depends, in part, on how the libraries were installed in the non-standard location. Quite often the library package includes one of several configuration mechanisms that applications using the library must employ to configure themselves against it; part of that is the mojo needed to put the correct RPATH into the application's executable, so that it can load the libraries from the right location. This typically involves the variables you mentioned. One thing you didn't mention is the -R linker flag, which sets the RPATH in the executable.
So, the only answer here is for you to keep digging into the library's and the application's configuration scripts and try to figure it out. There's just no other way to do this except by brute force. In many cases it's just not possible to do what you're trying to do "out of the box", and it becomes necessary to patch one or the other's configure script so that the right thing happens.
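That said, for well-behaved autoconf packages a typical starting point looks something like this (a sketch, assuming the libraries were installed under $HOME/mylibs; adjust the paths to your layout):
# Point the preprocessor and linker at the home-directory install:
export CPPFLAGS="-I$HOME/mylibs/include"
# -Wl,-rpath bakes the library location into the executable, so
# LD_LIBRARY_PATH isn't needed at run time:
export LDFLAGS="-L$HOME/mylibs/lib -Wl,-rpath,$HOME/mylibs/lib"
./configure --prefix="$HOME/software"
make
make install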
Set PKG_CONFIG_PATH while building binaries that link against previously installed libraries:
export PKG_CONFIG_PATH="/home/user/dir/install/lib/pkgconfig:$PKG_CONFIG_PATH"
When executing binaries compiled against those libraries, set LD_LIBRARY_PATH:
export LD_LIBRARY_PATH="/home/user/dir/install/lib:$LD_LIBRARY_PATH"
If you execute binaries installed in non-standard locations, set PATH too:
export PATH="/home/user/dir/install/sbin:/home/user/dir/install/bin:$PATH"
You might want to set the last two in your .bashrc for future use.
Putting the previous value of each variable at the end of the string gives higher precedence to the non-standard library and binary locations when files exist in both places. Consider switching them around if you prefer using programs installed through your package manager.
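To confirm that PKG_CONFIG_PATH is actually picking up the non-standard install, a quick check (libfoo is a placeholder for whatever library you installed; the real name comes from its .pc file):
# Show the flags the build will receive; an error here means the
# .pc file was not found on PKG_CONFIG_PATH:
pkg-config --cflags --libs libfoo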

Should we use absolute paths or relative paths for file names in configuration files?

I'm writing a set of programs in C++ which I want to be deployed across many machines and distributed to other developers for testing. How do I specify file paths in configuration files, and how do I specify the location of the config files in cron jobs, on the command line, in a sample API, etc.?
I mean, should I use ROOT_DIR for my application and always specify file paths relative to this directory? What is the standard practice?
Can I use autoconf's configure script to write ROOT_DIR into my application's configuration files, or should I stat the configuration file to find its location on the machine? Thanks.
For Autoconf, I believe you typically use the --prefix option to install an application to a non-default location. The default is system-wide.
Here is some Autoconf documentation.
That way, users can decide if they want it system-wide or user-specific (or something else).
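If you'd rather resolve paths at run time, as the question suggests, a minimal sketch of a relocatable wrapper (myapp, the --config flag, and the etc/ layout are all hypothetical):
#!/bin/sh
# Derive ROOT_DIR from this script's own location, so paths in the
# config file can stay relative to the install tree:
ROOT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
exec "$ROOT_DIR/bin/myapp" --config "$ROOT_DIR/etc/myapp.conf" "$@"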
You may want to use the package manager that your OS provides. For instance, on Debian you might want to make a deb package - other systems use different package managers by default. This will help with resolving dependencies of your application(s).
E.g: If your application requires version X.Y of library Foo, the package manager can make sure your users have that before installing your package.
You could also look into RPM Package Manager, which has been around for a while.
In short: there is no one answer. Common trope in Linux (:
What is the standard practice?
You might want to check out the Filesystem Hierarchy Standard to clarify your thoughts of where things should be installed. It sounds like you probably want to be installing something in /var/lib instead of HOME_DIR, but it's hard to tell without more details.

Linux vs Solaris - Compiling software

Background:
At work I'm used to working on Solaris 10. We have sysadmins who know what they're doing and can help out if required.
I've compiled things like apache, perl and mod_perl from source without any problems.
I've been given a redhat server to play with and am hitting problems. The sysadmins are out sick at the moment.
I keep hitting problems regarding LD_LIBRARY_PATH when building software. At the moment, for test purposes, I am compiling into my home directory, as I don't have root or permissions to install anywhere else.
I plan on having an area under /opt for us to install into, like we do on Solaris, but I'll need our sysadmin around to create that for us.
My .bashrc had nothing for LD_LIBRARY_PATH, so I've been appending things to it to get stuff built (e.g. ffmpeg from source). I've been reading about this and apparently this isn't the way to go; it's not reliable or something. I don't have access to ldconfig (permission denied).
Now the questions:
What is the best way to build applications under linux so that they won't break? Creating entries under /etc/ld.so.conf.d/ ?
Can anyone give a brief overview of what LD_LIBRARY_PATH actually does?
From the ld.so(8) man page:
LD_LIBRARY_PATH
A colon-separated list of directories in which to search for ELF
libraries at execution-time. Similar to the PATH environment
variable.
But honestly, find an admin. Become one if need be. Oh, and build packages.
LD_LIBRARY_PATH makes it possible for individual users or individual processes to add locations to the search path on a fine-grained basis. /etc/ld.so.conf should be used for system-wide library path settings, i.e. when deploying your application. (Better yet, you could package it as an rpm/deb and deploy it through your distribution's usual package channels.)
Typically a user might use LD_LIBRARY_PATH to force execution of their program to pick a different version of a library. Normally this is useful for favouring debugging or instrumented versions of libraries, but you can also use it to inject your own code into 3rd party code. (It is also possible to use this for malicious purposes sometimes, if you can alter someone's bash profile to trick them into executing your code without them realising it.)
Some applications also set LD_LIBRARY_PATH if they install "private" libraries in non-default locations, i.e. so they won't be used for normal dynamic linking but still exist. For scenarios like this though I'd be inclined to prefer dlopen() and friends.
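For example, to try one run of a program against a privately built library (myprog is a placeholder name):
# Override the library search path for a single process only:
LD_LIBRARY_PATH="$HOME/lib" ./myprog
# ldd shows which libraries the dynamic linker would actually pick:
LD_LIBRARY_PATH="$HOME/lib" ldd ./myprog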
Setting LD_LIBRARY_PATH is considered harmful because (amongst other reasons):
Your program is dynamically linked based on your LD_LIBRARY_PATH. This means that it could link against a particular version of a library which happened to be in your LD_LIBRARY_PATH, e.g. /home/user/lib/libtheora.so. This can cause lots of confusion if someone else tries to run it without your LD_LIBRARY_PATH and ends up linking against the default version, e.g. in /usr/lib/libtheora.so.
It is used in preference to any default system link path. This means that if you end up having a dodgy libc on your LD_LIBRARY_PATH, it could end up doing bad things like compromising your account.
As ignacio said, use packages wherever you can. This avoids library nightmares.
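For completeness, the system-wide route mentioned above looks like this (requires root; /opt/myapp/lib and myapp.conf are placeholder names):
# Add a search directory via /etc/ld.so.conf.d/ and rebuild the cache:
echo '/opt/myapp/lib' | sudo tee /etc/ld.so.conf.d/myapp.conf
sudo ldconfig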

How to make binary distribution of Qt application for Linux

I am developing cross-platform Qt application.
It is freeware though not open-source. Therefore I want to distribute it as a compiled binary.
On Windows there is no problem: I pack my compiled exe along with MinGW's and Qt's DLLs and everything goes great.
But on Linux there is a problem, because the user may have shared libraries on his/her system that are very different from mine.
Qt deployment guide suggests two methods: static linking and using shared libraries.
The first produces a huge executable and also requires static versions of many libraries which Qt depends on, i.e. I'll have to rebuild all of them from scratch. The second method is based on reconfiguring the dynamic linker right before the application starts and seems a bit tricky to me.
Can anyone share his/her experience in distributing Qt applications under Linux? What method should I use? What problems might I run into? Are there any other methods to get this job done?
Shared libraries are the way to go, but you can avoid using LD_LIBRARY_PATH (which involves running the application through a launcher shell script, etc.) by building your binary with the -rpath linker flag pointing to where you store your libraries.
For example, I store my libraries either next to my binary or in a directory called "mylib" next to my binary. To use this in my QMake project, I add this line to the .pro file:
QMAKE_LFLAGS += -Wl,-rpath,\\$\$ORIGIN/lib/:\\$\$ORIGIN/../mylib/
And I can run my binaries with my local libraries overriding any system library, and with no need for a launcher script.
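You can verify what actually got baked into the binary (myapp is a placeholder; newer toolchains may emit RUNPATH instead of RPATH):
readelf -d ./myapp | grep -E 'RPATH|RUNPATH'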
You can also distribute Qt shared libraries on Linux. Then, get your software to load those instead of the system default ones. Shared libraries can be overridden using the LD_LIBRARY_PATH environment variable. This is probably the simplest solution for you. You can always change this in a wrapper script for your executable.
Alternatively, just specify the minimum library version that your users need to have installed on the system.
When we distribute Qt apps on Linux (or really any apps that use shared libraries), we ship a directory tree which contains the actual executable and an associated wrapper script at the top, with sub-directories containing the shared libraries and any other necessary resources that you don't want to link in.
The advantage of doing this is that the wrapper script can set up everything needed to run the application, without the user having to set environment variables, install to a specific location, etc. If done correctly, this also means you don't have to worry about where the application is called from, because it can always find its resources.
We actually take this tree structure even further by placing the executable and shared libraries in platform/architecture sub-directories, so that the wrapper script can determine the local architecture, call the appropriate executable for that platform, and set the environment variables to find the appropriate shared libraries. We found this setup to be particularly helpful when distributing for multiple different Linux versions that share a common file system.
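A stripped-down sketch of such a wrapper (the per-architecture layout and myapp are assumptions, not a fixed convention):
#!/bin/sh
# Locate the distribution tree relative to this script:
HERE="$(cd "$(dirname "$0")" && pwd)"
# Pick the sub-directory matching the local architecture, e.g. x86_64:
ARCH="$(uname -m)"
# Shipped libraries live in per-architecture sub-directories:
export LD_LIBRARY_PATH="$HERE/$ARCH/lib:$LD_LIBRARY_PATH"
# Call the matching executable, forwarding all arguments:
exec "$HERE/$ARCH/myapp" "$@"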
All this being said, we do still prefer to build statically when possible; Qt apps are no exception. You can definitely build Qt statically, and you shouldn't have to build a lot of additional dependencies, as krbyrd noted in his response.
sybreon's answer is exactly what I have done. You can either always add your libraries to LD_LIBRARY_PATH or you can do something a bit more fancy:
Set up your shipped Qt libraries one per directory. Write a shell script that runs ldd on the executable and greps for 'not found'; for each of those libraries, add the appropriate directory to a list (let's call it $LDD). After you have them all, run the binary with LD_LIBRARY_PATH set to its previous value plus $LDD.
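A rough sketch of that script (myapp and the ./libs layout are assumptions; real-world use would need more error handling):
#!/bin/sh
# Collect a directory for every library ldd reports as missing,
# assuming each shipped library sits in its own directory under ./libs:
LDD=""
for missing in $(ldd ./myapp | grep 'not found' | awk '{print $1}'); do
    dir=$(dirname "$(find ./libs -name "$missing" | head -n 1)")
    LDD="$LDD:$dir"
done
# Run the binary with the collected directories appended:
LD_LIBRARY_PATH="$LD_LIBRARY_PATH$LDD" ./myapp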
Finally, a comment about "I'll have to rebuild all of them from scratch": no, you won't have to. If you have the dev packages for those libraries, you should have .a files that you can statically link against.
Not an answer as such (sybreon covered that), but please note that you are not allowed to distribute your binary if it is statically linked against Qt unless you have bought a commercial license; otherwise your entire binary falls under the GPL (or you're in violation of Qt's license).
If you have a commercial license, never mind.
If you don't have a commercial license, you have two options:
Link dynamically against Qt v4.5.0 or newer (the LGPL versions - you may not use the previous versions except in open source apps), or
Open your source code.
Probably the easiest way to create a Qt application package on Linux is linuxdeployqt. It collects all required files and lets you build an AppImage which runs on most Linux distributions.
Make sure you build the application on the oldest still-supported Ubuntu LTS release so your AppImage can be listed on AppImageHub.
You can look into the QtCreator folder and use it as an example. It has qt.conf and qtcreator.sh files in QtCreator/bin.
lib/qtcreator is the folder with all the needed Qt *.so libraries. The relative path is set inside qtcreator.sh, which should be renamed to your-app-name.sh.
imports, plugins, and qml are inside the bin directory. The path to them is set in the qt.conf file. This is needed for deploying QML applications.
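For reference, a minimal qt.conf along those lines could be generated like this (the paths are examples matching the layout described above, not something Qt mandates):
# Create bin/qt.conf pointing Qt at the shipped plugin/QML directories:
cat > qt.conf <<'EOF'
[Paths]
Prefix = .
Plugins = plugins
Imports = imports
Qml2Imports = qml
EOF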
This article has information on the topic. I will try it myself:
http://labs.trolltech.com/blogs/2009/06/02/deploying-a-browser-on-gnulinux/
In a few words:
Configure Qt with -platform linux-lsb-g++
Link with -lsb-use-default-linker
Package everything and deploy (will need a few tweaks here, but I haven't tried it yet, sorry)
