NixOS: Setting options for nix-shell

Is it possible to set options (http://nixos.org/nixos/options.html) just for a single nix-shell, instead of defining them globally in /etc/nixos/configuration.nix?

The options you are referring to are only meant for NixOS; in the background they usually translate to configuring systemd unit files and creating configuration files in /etc.
On the other hand, the nix-shell tool is part of Nix (the package manager) which can be used on any Linux distribution (alongside any other package manager), and also on the latest macOS / OS X.
Nix (the package manager) only installs binary packages and does not configure them, unlike most other Linux package managers. In that respect it works a bit like Homebrew.
To recap:
NixOS (nixos-*) commands use Nix to install and to configure binaries of packages.
Nix (nix-*) commands only install binaries of packages; you have to configure them yourself.
If you are running NixOS or any systemd-based Linux distro, there is a way to create systemd containers using the same NixOS options. Documentation on containers is available here.
Now, before you start jumping into containers with Nix, please know that the nixos-container command is still a work in progress, and requires some knowledge of the Nix expression language. Nonetheless, any feedback is more than welcome, and Nix developers are actively working on improving it.
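To give a concrete feel for it, here is a minimal sketch run on a NixOS host; the container name and the nginx option are just examples:
# create a container whose NixOS configuration enables nginx
sudo nixos-container create webserver --config 'services.nginx.enable = true;'
# start it and get a root shell inside it
sudo nixos-container start webserver
sudo nixos-container root-login webserver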
If you are only looking to configure certain packages (e.g. Vim, WeeChat) to be used across your system, this is also possible for some of them, but it currently also requires some knowledge of the Nix expression language. Let me know which packages you are interested in configuring, and I can tell you how hard it would be.
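For a single nix-shell specifically, the closest you can get without touching configuration.nix is to pull packages (or pre-configured variants of them) into a throwaway shell. A small sketch; the attribute names (vim_configurable, weechat) may differ between nixpkgs releases:
# vim and weechat are available only inside this shell session
nix-shell -p vim weechat
# -p also accepts a Nix expression, so a customised build can be used for just this shell
nix-shell -p 'vim_configurable.customize { name = "vim"; vimrcConfig.customRC = "set number"; }'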
Hope this helps you a bit.

Related

How to downgrade Terraform to a previous version?

I have installed a version (0.12.24) of Terraform which is later than the required version (0.12.17) specified in our configuration. How can I downgrade to that earlier version? My system is Linux Ubuntu 18.04.
As long as you are on Linux, do the following in the terminal:
sudo rm "$(which terraform)"
Install the previous version:
wget https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_linux_amd64.zip
unzip terraform_1.3.4_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform
terraform --version
That's it, my friend.
EDIT: I've assumed people now use v1.3.5 so the previous version is v1.3.4.
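If you want to verify the zip before unpacking it, HashiCorp publishes a SHA256SUMS file alongside each release; a quick sketch for the 1.3.4 download above:
wget https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_SHA256SUMS
sha256sum --check --ignore-missing terraform_1.3.4_SHA256SUMS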
You could also check out Terraform Switcher - this will allow you to switch between different versions easily.
First, download the latest package information using:
sudo apt-get update
The simplest way to downgrade is to use apt-get to install the required version - this will automatically perform a downgrade:
Show a list of available versions - sudo apt list -a terraform
terraform/xenial 0.13.5 amd64
terraform/xenial 0.13.4-2 amd64
... etc
or use sudo apt policy terraform to list available versions
Install the desired version (using one of the version strings from the list, or any other available version):
sudo apt-get install terraform=0.14.5
Or, for a 'clean' approach, remove the existing version before installing the desired version:
sudo apt remove terraform
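After downgrading this way, it may be worth putting the package on hold so a routine apt upgrade doesn't immediately pull the newer version back in:
sudo apt-mark hold terraform
# later, when you are ready to move on again
sudo apt-mark unhold terraform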
There are other valid answers here. This may be useful if you have a situation, like I do, where you need multiple Terraform versions during a migration from an old version to a new version.
I use tfenv for that:
https://github.com/tfutils/tfenv
It provides a modified terraform script that does a lookup of the correct terraform executable based on a default or based on the closest .terraform-version file in the directory or parent directories. This allows us to use a version of Terraform 0.12 for our migrated stuff and keep Terraform 0.11 for our legacy stuff.
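A rough sketch of the workflow (the version numbers are just examples):
# install the versions you need and pick one globally
tfenv install 0.11.14
tfenv install 0.12.17
tfenv use 0.12.17
terraform --version
# or pin a project directory to a version; tfenv reads this file automatically
echo "0.12.17" > .terraform-version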
You shouldn't be installing Terraform directly on Ubuntu any more. Generally speaking, the industry has moved on to Docker now. You can install Docker like this:
sudo apt install -y curl
curl -LSs get.docker.com | sh
sudo groupadd docker
sudo usermod -aG docker $USER
Once installed you can run terraform like this:
docker run -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17 init
This assumes that your ~/.aws directory contains your AWS credentials. If not, you can leave that bind mount (-v ~/.aws:/root/.aws) out of the command and it will work with whatever credential scheme you choose to use. You can change the version of Terraform you are using with ease, without installing anything.
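If typing the full docker command gets tedious, one option is to hide it behind a shell alias so plain terraform keeps working; a hedged sketch (adjust the tag and the mounts to your setup):
alias terraform='docker run --rm -it -v "$PWD":/work -w /work -v "$HOME/.aws":/root/.aws hashicorp/terraform:0.12.17'
terraform init
terraform plan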
There are significant benefits to this approach over the accepted answer. The first is the ease of versioning. If you have installed Terraform using a package manager, you can either uninstall it and install the version you need, or play around with Linux alternatives (if your distro supports them -- and that assumes you are on Linux with a package manager at all; you could be on Windows and have downloaded and run an installer). Of course, this might be a one-off thing, in which case you do it once and you're OK forever, but in my experience that isn't often the case: most teams are required to update versions due to security controls, and those teams that aren't required to regularly update software probably should be.
If this isn't a one-off thing, or you'd rather not play around too much with versioning, you could just download the binary, as one comment on this post points out. It's pretty easy to come up with a scheme of directories for each version, or just delete the one you're using and replace it completely. This may suit your use-case pretty well. Go to the appropriate website (I've forgotten which one -- HashiCorp or the GitHub repo's releases page; you can always search for it, though that takes time too, which is my point), find the right version and download it.
Or, you can just type docker run hashicorp/terraform:0.12.17 and the right version will be automagically pulled for you from a preconfigured online trusted repo.
So installing new versions is easier, and of course Docker will run the checksum for you, and will also have scanned the image for vulnerabilities and reported the results back to the developers. Of course, you can do all of this yourself, because, as the comment on this answer states, it's just a statically compiled binary: no hassle, just install it and go.
Only it still isn't that easy. Another benefit is the ease with which you can incorporate the containerised version into docker-compose configurations, or run it in K8S. Again, you may not need this capability, but given that the industry is moving that way, you can learn to do it using the standardised tools now and apply that knowledge everywhere, or you can learn a different technique to install every single tool you use (get some from GitHub releases and copy the binary, install others with the package manager, download, unzip and install others, run a vendor installer for still others, etc. etc.). Or you can just learn how to do it with Docker and apply the same trick to everything. The vast majority of modern tools and software are now packaged in this 'standard' manner. That's the point of containers really -- standardisation. A single approach more-or-less fits everything.
So you get a standardised approach that fits most modern software, extra security, and easier versioning, and this all works almost exactly the same way no matter which operating system you're running on (almost -- it does cover Linux, Windows, macOS, Raspbian, etc.).
There are other security benefits beyond those specifically mentioned here that apply in an enterprise environment, but I don't have time to go into a lot of detail; if you are interested, you could look at things like Aqua and Prisma Cloud Compute. And of course you also have the possibility of extending the base hashicorp/terraform container and adding in your favourite defaults.
Personally, I have no choice at work but to run Windows (without WSL), but I am allowed to run Docker, so I have a 'swiss army knife' container with aliases to run other containers through the shared Docker socket. This means I get as close to a real Linux environment as possible while running Windows. I dispose of my work container regularly, and wouldn't want to rebuild it whenever I change the version of a tool I'm using, so I use an alias against the latest version of those tools, and new versions are automatically pulled into my workspace. If that breaks what I'm doing, I can specify a version in the alias and continue working until I'm ready to upgrade. If I need to downgrade a tool when I'm working on somebody else's code, I just change the alias again and everything works with the old version. It seems to me that this workflow is the easiest I've ever used, and I've been doing this for 35 years.
I think that docker and this approach to engineering is simpler, cleaner, and more secure than any that has come before it. I strongly recommend that everyone try it.

Recompile a package for debian

I want to recompile an Ubuntu package to use it on Debian:
https://launchpad.net/ubuntu/+source/hollywood
How can I do that, please? I'm a beginner.
You need the original (upstream) tarball for the package, and the "debian" tarball which contains the scripts and patches needed to build the package. Finding those on Ubuntu's website is really the hard part.
Given those, you can use dpkg-buildpackage (on a system comparable to your target, of course), which expects the latter unpacked in the current directory, along with the original tarball.
(I build my own packages with a script which stages things into this arrangement; Debian packagers use a different scheme).
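As a rough sketch of that flow on Debian (the file names are placeholders; substitute whatever you actually downloaded from the Launchpad page):
# build tooling
sudo apt-get install build-essential devscripts fakeroot
# unpack the .dsc, with the orig and debian tarballs sitting next to it
dpkg-source -x hollywood_<version>.dsc
cd hollywood-<version>/
# install the build dependencies declared in debian/control
# (on older apt, install them by hand instead)
sudo apt-get build-dep ./
# build unsigned binary packages
dpkg-buildpackage -us -uc -b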

How to automake installation process on Linux-like operating systems?

I wrote a simple C application, but it has some dependencies. Instead of giving my friend (who is a Linux noob) commands to run in a terminal to install the dependencies, I would like to give him a single file that installs everything my application needs.
Btw, is a makefile a good idea, or would a bash script be more appropriate? I would like to ask for the root password only once, save it somewhere (in a script/makefile variable) and then simply use it to install all the dependencies. Any ideas how to do this in the most professional way?
This is what packages are for.
Depending on what target OS/distribution, you will need to package a DEB or an RPM.
There are tools to simplify this process that allow declaring dependencies as well as running pre/post install/uninstall scripts.
The most professional way to distribute this is using a private repository.
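One such tool is fpm, which can turn a directory (or a single compiled binary) into a .deb or .rpm with declared dependencies and pre/post install scripts. A hedged sketch with made-up names ("myapp" is your binary, libfoo is a stand-in dependency):
# fpm is distributed as a Ruby gem
gem install fpm
fpm -s dir -t deb -n myapp -v 1.0.0 -d 'libfoo (>= 1.2)' --prefix /usr/local/bin myapp
# installing the resulting package (e.g. myapp_1.0.0_amd64.deb) pulls in the declared dependencies
sudo apt install ./myapp_1.0.0_amd64.deb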

What is the correct way to upgrade the versions of Haskell programs installed on /usr/bin?

I have the 3.0.1 version of Alex installed on my /usr/bin. I think the Haskell Platform originally put it there (although I'm not 100% sure...).
Unfortunately, version 3.0.1 is bugged, so I need to upgrade to 3.0.5. I tried using cabal to install the latest version of Alex, but cabal install alex-3.0.5 installed the executable in .cabal/bin in my home folder instead of in /usr/bin.
Do I just manually copy the executable to /usr/bin? (That sounds like a lot of trouble to do all the time.)
Do I change my PATH environment variable so that .cabal/bin comes before /usr/bin? (I'm afraid that an "ls" executable or similar over on the cabal folder might end up messing up my system)
Or is there a simpler way to go at it in general?
I want to first point out the layout that works well for me, and then suggest how you might proceed in your particular situation.
What works well for me
In general, I think that a better layout is to have the following search path:
directories with important non-Haskell related binaries
directory that cabal install installs to
directory that binaries from the Haskell platform are in
This way, you can use cabal install to update binaries from the Haskell platform, but they cannot accidentally shadow some non-Haskell related binary.
(On my Windows machine, this layout is easy to achieve, because the binaries from the Haskell platform are installed in a separate directory by default. So I just manually adapt the search path and that's it. I don't know how to achieve it on other platforms).
Suggestion for your particular situation
In your specific situation with the Haskell platform binaries already installed together with the non-Haskell related binaries, maybe you can use the following layout for the search path:
1. a directory containing links to some of the binaries in 3
2. the directory with important non-Haskell related binaries and Haskell platform binaries
3. the directory that cabal install installs to
This way, binaries from cabal install cannot accidentally shadow the important stuff in 2. But if you decide you want to shadow something from the Haskell platform, you can manually add a link in 1. If it's a soft link, I think you only have to do that once per program name, and then you can call cabal install for that program to update it. You could even look up which executables are bundled with the Haskell platform and do that once and for all.
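A small sketch of that layout with made-up directory names, using the alex binary from the question as the example:
mkdir -p ~/haskell-overrides
ln -s ~/.cabal/bin/alex ~/haskell-overrides/alex
# overrides first, the existing PATH (including /usr/bin) in the middle, cabal's bin last
export PATH="$HOME/haskell-overrides:$PATH:$HOME/.cabal/bin"
Because the link points into ~/.cabal/bin, a later cabal install of alex is picked up without touching the link again.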
On second thought, putting ~/.cabal/bin in front of /usr/bin in the PATH is simpler and is what most people do already.
It's also not a big deal, since only cabal will put files in ~/.cabal/bin, so it should be predictable, with little risk of overwriting stuff.
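A minimal sketch of that simpler route, assuming bash and cabal's default ~/.cabal/bin directory:
# in ~/.profile or ~/.bashrc
export PATH="$HOME/.cabal/bin:$PATH"
# then, for the problem at hand
cabal install alex-3.0.5
hash -r          # make the shell forget the cached /usr/bin/alex
alex --version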

Install CPAN Modules without messing up the system Perl installation

I have heard that it is best to not install modules from CPAN where your system's version of Perl is. I know how to install modules using the command line, I was just wondering if there is a way to keep CPAN separate from the system's core Perl.
Should I:
Download the source and make a directory specifically for these modules?
Anybody have any other ideas or implementations they have used successfully?
I am using Arch Linux with Perl 5.16.2.
Are you looking for something like local::lib?
local::lib - create and use a local lib/ for perl modules with PERL5LIB
Download and extract the latest version of local::lib:
curl -LO http://search.cpan.org/CPAN/authors/id/A/AP/APEIRON/local-lib-1.008004.tar.gz
tar xzf local-lib-1.008004.tar.gz
cd local-lib-1.008004/
Deploy it:
perl Makefile.PL --bootstrap=$HOME/perl5
make
make test
make install
Save persistent configuration:
cat << PROFILE >> $HOME/.profile
eval \$(perl -I\$HOME/perl5/lib/perl5/ -Mlocal::lib)
PROFILE
Now you can log off/log on your session, or simply source ~/.profile.
After these steps, CPAN modules will be installed locally.
You don't need to install the module manually. You just need to have somewhere to install it to, and your environment configured to install it there. Then you can use cpan/cpanp/cpanm/etc as normal. (cpan minus wins for me)
Setting up that environment manually is a bit of a pain, so most people use an application to set up the configuration for them.
The two main choices for this are:
local::lib — This sets up your environment variables so you can install modules away from the system perl, but continues to use the system perl.
Perlbrew — this installs a complete perl for you, letting you avoid your system perl entirely and use a more up-to-date version of perl itself than might come with your system. It also manages multiple perl installs side by side (so you can test your modules against different versions of perl).
Personally, I prefer Perlbrew, as it makes it easy to play with shiny new features like the yada yada operator and smart match (not that smart match is all that new now), but it takes longer to set up (as you have to compile perl).
I have heard that it is best to not install modules from CPAN where your system's version of Perl is.
The idea is to avoid breaking your distro's tools by upgrading a module they use.
Installing the modules to a fresh directory and telling Perl about it using PERL5LIB (which is what the aforementioned local::lib does) is not going to help at all in that case, since Perl sees exactly the same thing as if you had installed the module in the usual site directory.
(One would mainly use PERL5LIB to install modules when one doesn't have permission to install to the default directories.)
The other problem with using the system Perl is that you are prevented from upgrading it.
The solution to both is to install your own build of Perl. This is very easy to do using perlbrew.
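A minimal perlbrew sketch (the perl version is just an example; the installer URL and bashrc path are perlbrew's documented defaults):
curl -L https://install.perlbrew.pl | bash
source ~/perl5/perlbrew/etc/bashrc    # also add this line to your shell profile
perlbrew install perl-5.16.3
perlbrew switch perl-5.16.3
perl -v
# module installs now go into the perlbrew perl, not the system one
cpan Some::Module                     # Some::Module is a placeholder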
What about cpanminus (the CPAN minus module)?
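For what it's worth, a minimal sketch combining cpanminus with local::lib so everything lands under ~/perl5 (DBI is just an example module):
curl -L https://cpanmin.us | perl - -l ~/perl5 App::cpanminus local::lib
eval "$(perl -I ~/perl5/lib/perl5 -Mlocal::lib)"    # add this line to ~/.profile as well
cpanm DBI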
Why don't you pack the modules into real packages, rpm or deb style? That way you keep control over the installed software; you can remove and update the packages as required and as you are used to. So instead of bypassing the package management, which is rarely a good idea, you stay in control.
If you are using an rpm-based distribution, I really recommend OBS (the Open Build Service) for this task. You can create your own project, configure sources, test them, and have packages created for all sorts of distributions and architectures. And when you import your home project's repository into your software management, installing the packages comes down to a single click.
