How to gather the full config of a NixOS system? - nixos

I read a bit about NixOS and tried it out recently, because I got the impression that it would let me configure a Linux system with just one file.
When I used it, I installed a bunch of packages with nix-env, so they didn't end up in the configuration.nix, though I could uninstall them later and add them to the configuration.nix by hand. Is there something like npm i -g <package> that would install a package globally so that it ends up in the configuration.nix and could simply be copied to another machine?
Also, I installed stuff like zsh and atom, and they have entirely different approaches to configuration and customization (shell scripts, JavaScript, Less, etc.).
Is there a way for Nix/NixOS to track the package-specific config too?
Does it already happen and I just don't see it? E.g., does the Nix expression of the package know where the package will store its config, etc.?
I mean, it's nice that I can add these packages to the main config and get the same software installed when using it on another PC, but I still see myself writing quite a lot of configuration for the installed packages by hand.

If you want packages installed through configuration.nix, then the easiest way to accomplish that is to add them to the environment.systemPackages attribute. Packages listed in there will be available automatically to all users on the machine. As far as I know, there is no shell command available to automate the maintenance of that attribute, though. The only way to manage that list is by editing configuration.nix and manually adding the packages you'd like to have installed.
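If it helps, here is a rough sketch of that manual workflow in shell (the package names below are just examples):

# List what was installed imperatively with nix-env, so the names can be
# moved into environment.systemPackages in /etc/nixos/configuration.nix.
nix-env -q

# After adding them to configuration.nix by hand, remove the imperatively
# installed copies (package names are just examples) ...
nix-env -e zsh atom

# ... and rebuild the system from the declarative configuration.
sudo nixos-rebuild switch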
Nix does not manage package-specific configuration files. As you probably know, NixOS provides such a mechanism for files in /etc, but a similar mechanism to manage config files in $HOME etc. does not exist. The PR https://github.com/NixOS/nixpkgs/pull/9250 on GitHub contains a concrete proposal to add this capability to Nix, but it hasn't been merged yet because it requires some changes that are controversial.

Nix does not currently offer a way of managing user-specific configuration or language-specific package managers. AFAICT that's because it is very complex and opinionated territory compared to generating configs for sshd etc.
There are, however, Nix-based projects that provide solutions to at least some parts of your question. For managing user configuration (zsh etc.), have a look at Home Manager.

Related

Creating a 'graphical' command line install for a Linux based OS package

Although I am relatively new to using Linux, I would like to know more about how deploying packages works. I have tried searching for this but have had no luck. I have seen countless packages and install scripts that use the same-looking 'graphical' command-line install for the user to select options for the package. Take the Debian net install, for example. [1]
As I have a lot to learn, I would only like a summary of how this is possible, and any resources that anyone has on how developers do this.
Thanks in advance.
[1] http://doudoulinux.org/blog/public/screenshots/install/install-selected-tasks.png
OK, now, I believe, I understand what you're after. I'll still insist that interactive configuration of packages is distro-specific and should not be the main form of package configuration.
It is preferred to ship a working default configuration and then document how the user can change the configuration files (normally in /etc) to reconfigure the package. It is often useful to ship several default configurations (example package: wpa_supplicant, which ships with several examples of network configuration, all disabled by default) and allow the user to choose by uncommenting lines.
Debian
The Debian-specific way to get packages configured is debconf. Its configuration is a simple shell script (or a Perl script, or whatever else can talk over STDIN/STDOUT) and a template file. The template file is what provides the options in the aptitude/apt-get interface, as in the screenshot in your question.
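As a rough, hypothetical sketch (the package name mypkg and the question name are made up), the config script is mostly just calls into the debconf shell library:

#!/bin/sh
# Hypothetical debconf "config" script for a package called mypkg.
# It assumes a debian/mypkg.templates entry roughly like:
#   Template: mypkg/frontend
#   Type: select
#   Choices: console, graphical
#   Description: Which frontend should mypkg use?
set -e
. /usr/share/debconf/confmodule

db_input medium mypkg/frontend || true   # queue the question at priority "medium"
db_go                                    # show any queued questions to the user
db_get mypkg/frontend                    # the chosen value is now in $RET

The maintainer scripts (postinst etc.) can then read the stored answer the same way with db_get and act on it.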
It is worth reading the Debian guidelines on package configuration to get an idea of which kind of configuration is too much.
They also have a thorough tutorial. Since you said you do not have experience with packaging, I also recommend reading the introduction to packaging, which will tell you where the files should be placed. Also, debhelper is a great tool for placing files in the correct place in the package directory.
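For completeness, a minimal sketch of the build step once a debian/ directory exists (the directory name is just an example; dh_make can scaffold the debian/ directory for you):

# Build binary packages from an unpacked source tree that already has debian/.
cd myapp-1.0
dpkg-buildpackage -us -uc -b    # -us/-uc: skip signing, -b: binary packages only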
On other distros
Each distro has its own way of adding configuration to packages. Debian is notable for debconf, as it is one of the most feature-rich tools for configuring packages (together with dpkg-reconfigure).
The developers of the software that is shipped in a distro package are often different people from the ones that do the packaging. Configuration options left by developers are often much more thorough than those in the package (e.g. inclusion or exclusion of certain libraries).
The fact that you're most familiar with the Debian distro (that is an assumption based on your question tags) might give you a misleading idea of package configuration. Only Debian-based distros have so many configurable packages; other distros often use package dependencies to install differently configured packages. For example:
Red Hat-based distros (.rpm packages) have no tool such as debconf (as far as I am aware); they use distinct (conflicting) package names to install differently configured packages.
Arch Linux-based distros leave the configuration to the user. In essence, Arch forces the user to configure their packages by going into the configuration files and changing the configuration themselves (that is a very good thing if you want to learn).
Funnily enough, both RPM- and Arch-based distros often ship po-debconf, an adapted debconf for those distros. Yet I cannot tell you much about it, since I have never tried it.

How to distribute open source package you built yourself?

I built ZeroMQ and Sodium from source and have them installed properly on my development machine, which is just a Pi2. I have one other machine that I want to make sure these get installed to properly. Is there a proper way to do this other than just copying .a and .so files around?
So, there are different ways of handling this particular issue.
If you're installing all your built-from-source packages into a dedicated tree (maybe /usr/local, or /opt/mypackages) then simply copying files around is a fine solution, using something like rsync. Particularly since you only have two machines, anything more complicated may not be worth the effort.
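For example, assuming the built files live under /usr/local and the second machine is reachable as pi2-b (both of those are just placeholders), something like this is usually enough:

# Mirror the locally built tree to the second machine; -a preserves
# permissions and symlinks, -v/-z are just verbosity and compression.
# You may need root on the target, e.g. via --rsync-path="sudo rsync".
rsync -avz /usr/local/ pi2-b:/usr/local/

# Refresh the linker cache on the target so the new .so files are found.
ssh pi2-b sudo ldconfig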
If you're trying to install ZeroMQ and Sodium alongside system-managed files (in, e.g., /usr/lib and /usr/bin)... don't do that. That is, don't try to mix "things installed by packages" with "things installed from source", because that way lies sadness and doom.
That said, a more manageable way of distributing these files would be to build custom packages and then set up a local apt repository, so that you can just apt install the packages on your systems. There are various guides out there for doing this if you want to go down this route. It's a good skill to have in general, especially if you ever want to share your tools with someone else (because it makes it easy for them to install any necessary dependencies).
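As a rough sketch of the simplest "flat" local repository layout (the paths, .deb file names and package names below are placeholders):

# Collect your custom .deb files in one directory and generate an index
# (dpkg-scanpackages comes from the dpkg-dev package).
mkdir -p /srv/local-repo
cp zeromq_4.3.4-1_armhf.deb libsodium_1.0.18-1_armhf.deb /srv/local-repo/
cd /srv/local-repo
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# On each client, point apt at the directory and install as usual
# (use whatever names you gave your packages).
echo 'deb [trusted=yes] file:/srv/local-repo ./' | sudo tee /etc/apt/sources.list.d/local.list
sudo apt update && sudo apt install zeromq libsodium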

Should we use absolute paths or relative paths for file names in configuration files?

I'm writing a set of programs in C++ which I want to be deployed across many machines and distributed to other developers for testing. How do I specify file paths in configuration files, and how do I specify the location of the config files in cron jobs, on the command line, in a sample API, etc.?
I mean, should I use a ROOT_DIR for my application and always specify file paths relative to this directory? What is the standard practice?
Can I use Autoconf's configure script to write the ROOT_DIR into my application configuration files, or should I stat the configuration file to find its location on the machine? Thanks.
For Autoconf, I believe you typically use the --prefix option to install an application to a non-default location. The default is system-wide.
Here is some Autoconf documentation.
That way, users can decide if they want it system-wide or user-specific (or something else).
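For example (the prefix below is just an illustration):

# Default: install system-wide (typically under /usr/local); the install step needs root.
./configure
make
sudo make install

# Alternative: a per-user prefix that needs no root; --sysconfdir controls
# where the package looks for its configuration files.
./configure --prefix="$HOME/.local" --sysconfdir="$HOME/.local/etc"
make
make install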
You may want to use the package manager that your OS provides. For instance, on Debian you might want to make a deb package; other systems use different package managers by default. This will help with resolving the dependencies of your application(s).
E.g., if your application requires version X.Y of library Foo, the package manager can make sure your users have that before installing your package.
You could also look into RPM Package Manager, which has been around for a while.
In short: there is no one answer. Common trope in Linux (:
What is the standard practice?
You might want to check out the Filesystem Hierarchy Standard to clarify your thoughts of where things should be installed. It sounds like you probably want to be installing something in /var/lib instead of HOME_DIR, but it's hard to tell without more details.

RPM - Install time parameters

I have packaged my application into an RPM package, say, myapp.rpm. While installing this application, I would like to receive some input from the user (an example input could be the environment where the app is getting installed: "dev", "qa", "uat", "prod"). Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application?
P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario, this is not a viable option since we have around 20 environments and we do not wish to have 20 different packages for the same application.
In general, RPM packages should not require user interaction. Time and time again, the RPM folks have stated that it is an explicit design goal of RPM to not have interactive installs. For packages that need some sort of input before first use, you typically ask for this information on first use, or you put it all in config files with macros or something and tell your users that they will have to configure the application before it is usable.
Even passing a parameter of some sort counts as end-user interaction. I think what you want is to have your pre- or post-install scripts auto-detect the environment somehow, maybe by having a file somewhere they can examine. I'll also point out that from an RPM user's perspective, having a package named *-qa.rpm is a lot more intuitive than passing some random parameter.
For your exact problem, if you are installing different content, you should create different packages. If you try to do things differently, you're going to end up fighting the RPM system more and more.
It isn't hard to create a build system that can spit out 20+ packages that are all mostly similar. I've done it with a template-ish spec file and some scripts run by make that will create the various spec files and build the RPMs. Without knowing the specifics, it sounds like you might even have a core package that all 20+ environment packages depend on, then the environment specific packages install whatever is specific to their target environment.
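As a hedged sketch of that template approach (the file names and the @ENVIRONMENT@ placeholder are made up, and a normal rpmbuild setup is assumed):

#!/bin/sh
# Generate one spec file per environment from a template containing the
# placeholder @ENVIRONMENT@, then build each RPM.
set -e
for env in dev qa uat prod; do
    sed "s/@ENVIRONMENT@/$env/g" myapp.spec.in > "myapp-$env.spec"
    rpmbuild -bb "myapp-$env.spec"
done

The same loop fits naturally into a Makefile target, which is essentially what the build system described above does.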
You could use the relocate option, e.g.
rpm -i --relocate /env=/uat somepkg.rpm
and have your script look up the variable data from a file located in the "env" directory.
I think this is a very valid question, especially as soon as you are moving into the application development realm. There, configuring the application for different target systems is your daily bread: you need to configure for Development, Integration Test, Acceptance Test, Production, etc. I surely don't think building a separate package for each environment is the solution. Basically, it should be the same code running in different environments.
I know that this requirement is not supported by rpm. But what you can do as a workaround is to use a simple config file that the %pre script knows to look for. The config file could be a simple shell script that, for example, sets environment variables, and then the different pre and post scripts can use those.
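A hypothetical sketch of what such a scriptlet might look like (the path /etc/myapp/install.conf, the variable name, and the /opt/myapp layout are all made up):

# Hypothetical sketch of an RPM scriptlet (e.g. %post) using that idea.
# /etc/myapp/install.conf is expected to contain a line like: TARGET_ENV=uat
if [ -r /etc/myapp/install.conf ]; then
    . /etc/myapp/install.conf
fi
TARGET_ENV="${TARGET_ENV:-dev}"    # fall back to a default if nothing is set

# Pick the environment-specific config that was shipped in the package.
cp "/opt/myapp/conf/app.conf.$TARGET_ENV" /opt/myapp/conf/app.conf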

Lightweight packaging tool

I am looking for a good way to install an application I developed, with all its dependencies, in a fancy way. Currently I have a big makefile that downloads, unpacks, compiles and installs all dependencies. This, however, is a little tedious, since there are quite a few dependencies and the makefile is getting larger and larger, which will eventually make it hard to maintain. Therefore I am looking for a packaging tool with the following features:
It should be a lightweight package manager which is very easy to install (or even installs itself and afterwards all my dependencies)
The destination of the installed binaries, libraries etc. should be customizable
The installation process of each dependency should be easily configurable
It should be possible to include self written scripts that get executed at a specific point during the installation process (in order to manipulate make files, flags etc)
No admin rights should be necessary since all clients that install my application will not have admin rights and are not able to use an already installed package manager
I do not know if this kind of software exists. I myself don't have much experience with packaging tools.
Thanks in advance for any link, hint, or suggestion!
opkg is something that's based on ipkg (now defunct) and originally on dpkg. It's used in embedded systems. Lightweight for sure.
Ports from CRUX Linux (www.crux.nu)?
A quick search returns InstallJammer. I would propose making debs and rpms and tarballs and sticking with the standard installation process (root privileges and such), but if you can't do that, then, well, you can't.
I'm sure you know how suspicious it would look to the user.
