Should we use absolute paths or relative paths for file names in configuration files? - linux

I'm writing a set of programs in C++ which I want to deploy across many machines and distribute to other developers for testing. How do I specify file paths in configuration files, and how do I specify the location of the config files in cron jobs, on the command line, in a sample API, etc.?
I mean, should I use ROOT_DIR for my application and always specify file paths relative to this directory? What is the standard practice?
Can I use Autoconf's configure script to write the ROOT_DIR into my application's configuration files, or should I stat the configuration file to find its location on the machine? Thanks.

For Autoconf, I believe you typically use the --prefix option to install an application to a non-default location. The default is system-wide.
Here is some Autoconf documentation.
That way, users can decide if they want it system-wide or user-specific (or something else).
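For example, a typical flow looks like this (the install path is just an illustration):
./configure --prefix="$HOME/myapp"   # the default prefix is usually /usr/local
make
make install                         # installs under ~/myapp instead of system-wide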
You may want to use the package manager that your OS provides. For instance, on Debian you might want to build a .deb package; other systems use different package managers by default. This will help with resolving your application's dependencies.
E.g., if your application requires version X.Y of library Foo, the package manager can make sure your users have it before installing your package.
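As a sketch, that dependency would be declared in your package's debian/control file (the library name and version are placeholders mirroring the Foo example above):
Depends: libfoo (>= X.Y)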
You could also look into RPM Package Manager, which has been around for a while.
In short: there is no single answer. A common trope in Linux (:

What is the standard practice?
You might want to check out the Filesystem Hierarchy Standard to clarify your thinking about where things should be installed. It sounds like you probably want to be installing something in /var/lib instead of HOME_DIR, but it's hard to tell without more details.

Related

library and include paths, ~/lib and ~/include?

What is the canonical path for a custom library and its include files? I thought of either /usr/local/lib + /usr/local/include, or ~/lib + ~/include. To me the latter looks like the better option, since the former are managed by the distribution's package manager and it is best not to interfere. Though I cannot find any reference to people actually using ~/lib.
Thanks
Is this something that you've created yourself, or a third party installation?
Normally /usr/local/ is a good place to install packages that are not part of the original OS. I do this myself for anything I've built and installed from source. Another place to put things is /opt, which is often used by commercial third-party software.
If you're writing something of your own, then using your home directory "~" sounds fine. This is also good if you don't have root access or don't want it mixed in with the other OS packages.
When compiling and linking, you will need to configure things to use those directories. Also, if you're using dynamic shared libraries, LD_LIBRARY_PATH must be set as well.
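A minimal sketch, assuming a library "foo" installed under ~/lib and ~/include:
gcc -I"$HOME/include" -L"$HOME/lib" -o myprog myprog.c -lfoo
export LD_LIBRARY_PATH="$HOME/lib:$LD_LIBRARY_PATH"   # so the runtime linker finds libfoo.so
./myprog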

How to gather the full config of a NixOS system?

I read a bit about NixOS and tried it these days, because I got the impression that it would let me configure a Linux system with just one file.
When I used it, I installed a bunch of packages with nix-env, so they didn't end up in the configuration.nix, but I could simply uninstall them later and add them to the configuration.nix by hand. Is there something like npm i -g <package> that would install a package globally, so it ends up in the configuration.nix and can simply be copied to another machine?
Also, I installed stuff like zsh and atom, and they have entirely different approaches to configuration and customization (shell script, JavaScript, Less, etc.).
Is there a way for Nix/NixOS to track the package-specific config too?
Does it already happen and I just don't see it? E.g. does the Nix expression of a package know where that package will store its config, etc.?
I mean, it's nice that I can add these packages to the main config and get the same software installed when using it on another PC, but I still see myself writing quite a lot of configuration for the installed packages too.
If you want packages installed through configuration.nix, then the easiest way to accomplish that is to add them to the environment.systemPackages attribute. Packages listed in there will be available automatically to all users on the machine. As far as I know, there is no shell command available to automate the maintenance of that attribute, though. The only way to manage that list is by editing configuration.nix and manually adding the packages you'd like to have installed.
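For example, a configuration.nix fragment might look like this (the package names are just illustrations), applied with nixos-rebuild switch:
environment.systemPackages = with pkgs; [
  zsh
  atom
];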
Nix does not manage package-specific configuration files. As you probably know, NixOS provides such a mechanism for files in /etc, but a similar mechanism to manage config files in $HOME etc. does not exist. The PR https://github.com/NixOS/nixpkgs/pull/9250 on Github contains a concrete proposal to add this capability to Nix, but it hasn't been merged yet because it requires some changes that are controversial.
Nix does not currently offer ways of managing user-specific configuration or language-specific package managers. AFAICT that's because it is very complex and opinionated territory compared to generating configs for sshd etc.
There are, however, Nix-based projects providing solutions to at least some parts of your question. For managing user configuration (zsh etc.), have a look at Home Manager.
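As a minimal sketch of Home Manager's declarative style (the options shown are assumptions about what you might enable in home.nix):
programs.zsh.enable = true;      # installs zsh and manages its dotfiles
home.packages = [ pkgs.atom ];   # extra packages without NixOS-level config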

Best practice: deploying dependencies on Linux

What is the best practice for deploying dependencies on Linux when shipping an own application?
Some SO posts recommend including all dependencies in the package (utilizing LD_LIBRARY_PATH); other posts recommend shipping only the binary and using the "dependency" feature of DEB/RPM packages instead. I tried the second approach, but immediately ran into the problem that one dependency (libicu52) doesn't seem to be available in certain Linux distributions yet. For example, in my openSUSE test installation only "libicu51" is available in the package manager.
I initially thought that the whole idea of the packaging system is to avoid duplicate .so files in the system. But does it really work (see above), or should I rather ship all dependencies with my app to make sure that it runs on all distributions?
For a custom application which "does not care" about distribution-specific packaging, versioning, upgrades, etc., I would recommend redistributing the dependencies manually.
You can use the RPATH linker option; by setting its value to $ORIGIN you tell the linker to search for libraries in a directory relative to the binary, with no need to set LD_LIBRARY_PATH before execution:
gcc -Wl,-rpath,'$ORIGIN/../lib'
Example taken from here.
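A slightly fuller sketch of the same idea, assuming you ship a tree with bin/ and lib/ side by side (file names are made up):
gcc -o bin/myprog main.o -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
The single quotes keep the shell from expanding $ORIGIN; at run time, bin/myprog will look for libfoo.so in the sibling lib/ directory.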

RPM - Install time parameters

I have packaged my application into an RPM package, say, myapp.rpm. While installing this application, I would like to receive some input from the user (an example input could be the environment where the app is being installed: "dev", "qa", "uat", "prod"). Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application?
P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario, this is not a viable option since we have around 20 environments and we do not wish to have 20 different packages for the same application.
In general, RPM packages should not require user interaction. Time and time again, the RPM folks have stated that it is an explicit design goal of RPM not to have interactive installs. For packages that need some sort of input before first use, you typically ask for this information on first use, or you put it all in config files with macros or something and tell your users that they will have to configure the application before it is usable.
Even passing a parameter of some sort counts as end-user interaction. I think what you want is to have your pre- or post-install scripts auto-detect the environment somehow, maybe by having a file somewhere they can examine. I'll also point out that from an RPM user's perspective, having a package named *-qa.rpm is a lot more intuitive than passing some random parameter.
For your exact problem, if you are installing different content, you should create different packages. If you try to do things differently, you're going to end up fighting the RPM system more and more.
It isn't hard to create a build system that can spit out 20+ packages that are all mostly similar. I've done it with a template-ish spec file and some scripts run by make that create the various spec files and build the RPMs. Without knowing the specifics, it sounds like you might even have a core package that all 20+ environment packages depend on, with each environment-specific package installing whatever is specific to its target environment.
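A sketch of that kind of build script (the template name and @ENV@ placeholder are hypothetical):
for env in dev qa uat prod; do
    sed "s/@ENV@/$env/g" myapp.spec.template > "myapp-$env.spec"   # stamp the environment into the spec
    rpmbuild -bb "myapp-$env.spec"
done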
You could use the relocate option, e.g.
rpm -i --relocate /env=/uat somepkg.rpm
and have your script look up the variable data from a file located in the "env" directory
I think this is a very valid question, especially as soon as you move into the application development realm. There, the configuration of the application for different target systems is your daily bread: you need to configure for Development, Integration Test, Acceptance Test, Production, etc. I sure don't think building a separate package for each environment is the solution. Basically, it should be the same code running in different environments.
I know that this requirement is not supported by rpm. But what you can do as a workaround is to use a simple config file that the %pre script knows to look for. The config file could be a simple shell script that, for example, sets environment variables, and then the different %pre and %post scripts can use those.
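A sketch of that workaround (the file path and variable name are made up): pre-create a small shell fragment on the target machine and have the scriptlets source it:
# /etc/myapp/install.conf, created on the box beforehand, contains e.g.: APP_ENV=uat
%pre
[ -f /etc/myapp/install.conf ] && . /etc/myapp/install.conf
echo "Configuring for environment: ${APP_ENV:-prod}"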

How to make binary distribution of Qt application for Linux

I am developing a cross-platform Qt application.
It is freeware, though not open-source. Therefore I want to distribute it as a compiled binary.
On Windows there is no problem: I pack my compiled exe along with MinGW's and Qt's DLLs, and everything works great.
But on Linux there is a problem, because the user may have shared libraries on his/her system that are very different from mine.
The Qt deployment guide suggests two methods: static linking and using shared libraries.
The first produces a huge executable and also requires static versions of many libraries that Qt depends on, i.e. I'd have to rebuild all of them from scratch. The second method is based on reconfiguring the dynamic linker right before application startup and seems a bit tricky to me.
Can anyone share his/her experience in distributing Qt applications under Linux? Which method should I use? What problems might I run into? Are there any other ways to get this job done?
Shared libraries are the way to go, but you can avoid using LD_LIBRARY_PATH (which involves running the application via a launcher shell script, etc.) by building your binary with the -rpath linker flag pointing to where you store your libraries.
For example, I store my libraries either next to my binary or in a directory called "mylib" next to my binary. To use this with qmake, I add this line to the .pro file:
QMAKE_LFLAGS += -Wl,-rpath,\\$\$ORIGIN/lib/:\\$\$ORIGIN/../mylib/
And I can run my binaries with my local libraries overriding any system library, and with no need for a launcher script.
You can also distribute Qt's shared libraries on Linux and get your software to load those instead of the system default ones. Shared libraries can be overridden using the LD_LIBRARY_PATH environment variable. This is probably the simplest solution for you. You can always set this in a wrapper script for your executable.
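A minimal sketch of such a wrapper script, assuming the Qt libraries are shipped in a lib/ directory next to the real executable (names are illustrative):
#!/bin/sh
APPDIR=$(dirname "$(readlink -f "$0")")               # directory this script lives in
export LD_LIBRARY_PATH="$APPDIR/lib:$LD_LIBRARY_PATH"
exec "$APPDIR/myapp.bin" "$@"                         # the actual Qt executable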
Alternatively, just specify the minimum library version that your users need to have installed on the system.
When we distribute Qt apps on Linux (or really any apps that use shared libraries) we ship a directory tree which contains the actual executable and associated wrapper script at the top with sub-directories containing the shared libraries and any other necessary resources that you don't want to link in.
The advantage of doing this is that you can have the wrapper script set up everything you need for running the application, without having to worry about the user setting environment variables, installing to a specific location, etc. If done correctly, this also lets you not worry about where you are calling the application from, because it can always find its resources.
We actually take this tree structure even further by placing all the executables and shared libraries in platform/architecture sub-directories, so that the wrapper script can determine the local architecture, call the appropriate executable for that platform, and set the environment variables to find the appropriate shared libraries. We found this setup to be particularly helpful when distributing for multiple different Linux versions that share a common file system.
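The architecture dispatch only takes a few lines in the wrapper; a sketch, with a hypothetical per-architecture layout:
#!/bin/sh
APPDIR=$(dirname "$(readlink -f "$0")")
ARCH=$(uname -m)                                      # e.g. x86_64, aarch64
export LD_LIBRARY_PATH="$APPDIR/$ARCH/lib:$LD_LIBRARY_PATH"
exec "$APPDIR/$ARCH/bin/myapp" "$@"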
All this being said, we do still prefer to build statically when possible, and Qt apps are no exception. You can definitely build Qt statically, and you shouldn't have to build a lot of additional dependencies, as krbyrd noted in his response.
sybreon's answer is exactly what I have done. You can either always add your libraries to LD_LIBRARY_PATH or you can do something a bit fancier:
Ship your Qt libraries one per directory. Write a shell script that runs ldd on the executable and greps for 'not found'; for each of those libraries, add the appropriate directory to a list (let's call it $LDD). After you have them all, run the binary with LD_LIBRARY_PATH set to its previous value plus $LDD.
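A rough, untested sketch of that script (the qtlibs directory name is an assumption):
LDD=""
for lib in $(ldd ./myapp | awk '/not found/ {print $1}'); do
    dir=$(find ./qtlibs -name "$lib" -printf '%h\n' | head -n1)   # directory that ships this library
    [ -n "$dir" ] && LDD="$LDD:$dir"
done
LD_LIBRARY_PATH="$LD_LIBRARY_PATH$LDD" ./myapp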
Finally, a comment about "I'll have to rebuild all of them from scratch": no, you won't have to. If you have the dev packages for those libraries, you should have .a files that you can statically link against.
Not an answer as such (sybreon covered that), but please note that you are not allowed to distribute your binary if it is statically linked against Qt unless you have bought a commercial license; otherwise your entire binary falls under the GPL (or you're in violation of Qt's license).
If you have a commercial license, never mind.
If you don't have a commercial license, you have two options:
Link dynamically against Qt v4.5.0 or newer (the LGPL versions - you may not use the previous versions except in open source apps), or
Open your source code.
Probably the easiest way to create a Qt application package on Linux is linuxdeployqt. It collects all required files and lets you build an AppImage which runs on most Linux distributions.
Make sure you build the application on the oldest still-supported Ubuntu LTS release so your AppImage can be listed on AppImageHub.
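Typical usage is along these lines (check the project's README for the exact options):
linuxdeployqt path/to/myapp -appimage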
You can look into the QtCreator folder and use it as an example. It has qt.conf and qtcreator.sh files in QtCreator/bin.
lib/qtcreator is the folder with all the needed Qt *.so libraries. The relative path is set inside qtcreator.sh, which should be renamed to your-app-name.sh.
imports, plugins, and qml are inside the bin directory. The path to them is set in the qt.conf file. This is needed for deploying QML applications.
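A qt.conf along those lines might look like this (the entries are illustrative; paths are resolved relative to the prefix, here the directory containing qt.conf):
[Paths]
Prefix = .
Plugins = plugins
Imports = imports
Qml2Imports = qml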
This article has information on the topic. I will try it myself:
http://labs.trolltech.com/blogs/2009/06/02/deploying-a-browser-on-gnulinux/
In a few words:
Configure Qt with -platform linux-lsb-g++
Link with -lsb-use-default-linker
Package everything and deploy (will need a few tweaks here, but I haven't tried it yet, sorry)
