How to downgrade Terraform to a previous version?

I have installed a version (0.12.24) of Terraform which is later than the required version (0.12.17) specified in our configuration. How can I downgrade to that earlier version? My system is Linux Ubuntu 18.04.

Since you are on Linux, do the following in a terminal. Remove the current binary:
sudo rm $(which terraform)
Install the previous version:
wget https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_linux_amd64.zip
unzip terraform_1.3.4_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform
terraform --version
That's it, my friend.
EDIT: I've assumed people are now on v1.3.5, so the previous version is v1.3.4; substitute whichever version you actually need.
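For the version asked about in the question (0.12.17) the same steps look like this (the URL just follows the releases.hashicorp.com pattern above; check that the file exists for your platform):
wget https://releases.hashicorp.com/terraform/0.12.17/terraform_0.12.17_linux_amd64.zip
unzip terraform_0.12.17_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform
terraform --version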

You could also check out Terraform Switcher - this will allow you to switch between different versions easily.
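A rough sketch of how it is used (check the tfswitch docs for the exact invocation your version supports):
# pick a version interactively from a list
tfswitch
# or switch straight to the version you need
tfswitch 0.12.17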

First, download the latest package information using:
sudo apt-get update
The simplest way to downgrade is to use apt-get to install the required version (this assumes a repository that provides Terraform, such as HashiCorp's apt repository, is configured); apt will automatically perform the downgrade:
Show a list of available versions - sudo apt list -a terraform
terraform/xenial 0.13.5 amd64
terraform/xenial 0.13.4-2 amd64
... etc
or use sudo apt policy terraform to list available versions
Install the desired version (the version string must exactly match one shown in the list):
sudo apt-get install terraform=0.14.5
Or, for a 'clean' approach, remove the existing version before installing the desired version:
sudo apt remove terraform

There are other valid answers here. This may be useful if you have a situation, like I do, where you need multiple Terraform versions during a migration from an old version to a new version.
I use tfenv for that:
https://github.com/tfutils/tfenv
It provides a terraform shim that looks up the correct terraform executable based on a default, or on the closest .terraform-version file in the current directory or its parents. This allows us to use Terraform 0.12 for our migrated stacks and keep Terraform 0.11 for our legacy stuff.
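A minimal sketch of the tfenv workflow (version numbers are just examples):
tfenv install 0.12.17
tfenv use 0.12.17
# or pin the version per project so the shim picks it up automatically
echo "0.12.17" > .terraform-version
terraform --version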

You shouldn't be installing Terraform directly on Ubuntu any more. Generally speaking, the industry has moved on to Docker now. You can install Docker like this:
sudo apt install -y curl
curl -LSs get.docker.com | sh
sudo groupadd docker
sudo usermod -aG docker $USER
Once Docker is installed (log out and back in so the group change takes effect), you can run Terraform like this:
docker run -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17 init
This assumes your ~/.aws directory contains your AWS credentials. If not, you can leave that bind mount (-v ~/.aws:/root/.aws) out of the command and it will work with whatever credential scheme you choose to use. You can change the version of Terraform you are using with ease, without installing anything.
There are significant benefits to this approach over the accepted answer. The first is the ease of versioning. If you installed Terraform with a package manager, you either have to uninstall it and install the version you need, or play around with Linux alternatives (if your distro supports them); on Windows you may have downloaded and run an installer instead. This might be a one-off, in which case you do it once and you're fine forever, but in my experience that isn't often the case: most teams are required to update versions due to security controls, and the teams that aren't required to update regularly probably should be.
If this isn't a one-off, or you'd rather not play around with versioning, you could just download the binary, as one comment on this post points out. It's easy enough to come up with a scheme of directories for each version, or just delete the one you're using and replace it completely, and that may suit your use case well. But you still have to go to the right website (releases.hashicorp.com or the GitHub project's releases page -- you can always search for it, though that takes time too, which is my point), find the right version, and download it.
Or, you can just type docker run hashicorp/terraform:0.12.17 and the right version will be automagically pulled for you from a preconfigured online trusted repo.
So installing new versions is easier, and of course Docker verifies the image checksum for you; the registry will also have scanned the image for vulnerabilities and reported the results back to the developers. Of course, you can do all of this yourself, because, as the comment on this answer states, it's just a statically compiled binary -- no hassle, just install it and go.
Only it still isn't that easy. Another benefit is how easily you can incorporate the containerised version into docker-compose configurations, or run it in K8S. You may not need this capability, but given that the industry is moving that way, you can learn to do it once with the standardised tools and apply that knowledge everywhere, or you can learn a different installation technique for every single tool you use (copy the binary from a GitHub release for some, use the package manager for others, download, unzip and install for others still, run the vendor's installer for the rest, and so on). Or you can just learn how to do it with Docker and apply the same trick to everything. The vast majority of modern tools and software are now packaged in this 'standard' manner. That's the point of containers really -- standardisation. A single approach more-or-less fits everything.
So you get a standardised approach that fits most modern software, extra security, and easier versioning, and it all works almost exactly the same way no matter which operating system you're running (almost -- it does cover Linux, Windows, macOS, Raspbian, etc.).
There are other security benefits beyond those specifically mentioned here that apply in an enterprise environment, but I don't have time to go into a lot of detail; if you are interested, look at things like Aqua and Prisma Cloud Compute. And of course you also have the possibility of extending the base hashicorp/terraform container and adding in your favourite defaults.
Personally, I have no choice at work but to run Windows (without WSL), but I am allowed to run Docker, so I have a 'swiss army knife' container with aliases that run other containers through the shared Docker socket. This gets me as close to a real Linux environment as possible while running Windows. I dispose of my work container regularly and wouldn't want to rebuild it whenever I change the version of a tool, so I alias against the latest version of each tool and new versions are automatically pulled into my workspace. If that breaks while I'm working, I can pin a version in the alias and continue until I'm ready to upgrade. If I need to downgrade a tool when working on somebody else's code, I just change the alias again and everything works with the old version. It's the easiest workflow I've ever used, and I've been doing this for 35 years.
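As a rough illustration of that alias trick (paths, versions and the alias itself are just placeholders to adapt to your own setup):
# pin a specific Terraform version behind a plain 'terraform' command
alias terraform='docker run --rm -it -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17'
# upgrading or downgrading is then just a matter of editing the image tag
terraform init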
I think that docker and this approach to engineering is simpler, cleaner, and more secure than any that has come before it. I strongly recommend that everyone try it.

Related

NixOS: Setting options for nix-shell

Is it possible to set Options (http://nixos.org/nixos/options.html) just for a single nix-shell instead of defining them globally at /etc/nixos/configuration.nix?
The options you are referring to are only meant for NixOS; behind the scenes they usually translate into configuring systemd unit files and creating configuration files in /etc.
On the other hand, the nix-shell tool is part of Nix (the package manager) which can be used on any Linux distribution (alongside any other package manager), and also on the latest macOS / OS X.
Nix (the package manager) only installs binary packages and does not configure them, unlike other Linux package managers; it works something like Homebrew.
To recap:
NixOS (nixos-*) commands use Nix to install and to configure binaries of packages.
Nix (nix-*) commands only install binaries of packages; you have to configure them yourself.
If you are running NixOS or any systemd-based Linux distro, there is a way to create systemd containers using the same NixOS options. Documentation on containers is available here.
Now, before you start jumping into containers with Nix, please know that the nixos-container command is still a work in progress, and requires some knowledge of the Nix expression language. Nonetheless, any feedback is more than welcome, and Nix developers are actively working on improving it.
If you are only looking to configure certain packages (e.g. Vim, weechat) for use across your system, this is also possible for some of them, but it currently also requires some knowledge of the Nix expression language. Let me know which packages you are interested in configuring, and I can tell you how hard it would be.
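Just to illustrate the split: getting the packages themselves into an ad-hoc shell is easy with plain Nix; it is the NixOS-style configuration that needs Nix expressions. A minimal sketch (package names are only examples):
# temporary shell with vim and weechat available, on any distro that has Nix
nix-shell -p vim weechat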
Hope this helps you a bit.

Installing dependencies in configure script

I'm writing a program that requires LLVM, and thinking of using autotools to ship it on Linux, so from the user's viewpoint the process would look like the well-known ./configure && make && sudo make install.
With autotools, one normally relies on the system package manager to install dependencies. The problem is that, for whatever reason, this doesn't work with LLVM; on Ubuntu 14.04, apt-get thinks the latest version is 3.4, whereas a more recent version would actually be needed. Thus, I need to supply a script to download and build LLVM first (a local copy thereof, not interfering with any older version that might be on the system), a process which takes a few hours.
The most obvious place to put this process is at the start of configure. Is this considered normal and reasonable? Or is there a convention that configure should only contain the things autotools normally puts in it, and installing dependencies should be another script that the user runs first and separately? In the latter case, is there a convention regarding what that separate script should be called?
Don't install anything during configure. The script's name is "configure", not "install-dependencies".
Write a configure check and, if LLVM is missing, give the user an explanation of how to install it. If necessary, provide a separate script to download LLVM.
It is good practice to run configure (and make) as a normal unprivileged user and not as root. So you may not even have permissions to install anything, and you would have to check whether "sudo" is installed, etc.
It may also happen that the system the user is installing has no network connectivity (firewall etc.), so your download will fail.
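A rough sketch of such a check in configure.ac (standard autoconf macros; the llvm-config name and the error text are assumptions to adapt to your project):
# look for llvm-config on the PATH
AC_PATH_PROG([LLVM_CONFIG], [llvm-config], [no])
AS_IF([test "x$LLVM_CONFIG" = "xno"],
      [AC_MSG_ERROR([llvm-config not found. Please install a recent LLVM, or build a local copy, then re-run configure.])])
LLVM_CXXFLAGS=`$LLVM_CONFIG --cxxflags`
LLVM_LDFLAGS=`$LLVM_CONFIG --ldflags`
AC_SUBST([LLVM_CXXFLAGS])
AC_SUBST([LLVM_LDFLAGS])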

Using RPMs for installation on embedded system images

I'm trying to use RPMs to install public and private software into disk images that are eventually written to the boot flash of Linux based embedded systems.
My current methodology is to mount the image (/mnt/foo) read/write on a CentOS 6.5 box and use the rpm --installroot=/mnt/foo option. There are two problems:
--installroot=/mnt/foo appears to chroot into /mnt/foo, meaning that when the post-install scripts run /bin/sh (etc.) they're actually using /mnt/foo/bin/sh (etc.). That's sort of workable if the target architecture is the same as the installation box's, but gets very messy if it's not. I'm interested to hear if someone has solved this before.
At a higher level it would be nice to use yum or apt-get or ??? to handle package dependencies and repositories. yum is the obvious choice on CentOS but it has a weak grasp of non-native architectures and would likely require some hacking. apt-get looks more promising in that department but in truth I've never used it and my attempts to install it on CentOS 6.5 have left me in dependency hell.
This seems like a problem someone would have hit before but unfortunately everything I can find about RPMs and embedded systems assumes identical processor architectures.
Bottom line, I need to use RPMs to install software to a Linux image that will be the boot disk for an embedded system. Other than doing the rpm install as part of the image installation on the embedded system itself (our installation time is already a big problem), I'm open to just about anything.
Any suggestions will be gratefully received.
Have you tried using some continuous build system like Jenkins? You can use that to easily set up build hosts on any architecture/platform you like, so long as that platform has some basic tools (like ssh).
You could use a combination of the --installroot flag mentioned by other commenters and some VMs set up as build hosts in Jenkins, in order to install your RPMs into a specific directory while avoiding any platform/architecture issues.
I'm not sure what your specific requirements are, but, depending on how far you are willing to go... RPMs are just compressed cpio archives with a header, so you could pipe rpm2cpio into cpio to extract the files in the RPM. You can then extract the post-install scripts using rpm -qp --scripts filename.rpm and run them yourself. The downside, of course, is that you lose a lot of the benefit of using RPM/yum in the first place, like the automatic installation of dependencies, and so on.
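A hedged sketch of that approach against a mounted image (the package path is a placeholder):
# extract the RPM payload straight into the image, no chroot involved
cd /mnt/foo
rpm2cpio /path/to/some-package.rpm | cpio -idmv
# dump the scriptlets so you can review them and run the relevant parts by hand
rpm -qp --scripts /path/to/some-package.rpm > /tmp/some-package-scripts.sh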

Make - Make Install and Linux update

I am new to Linux and trying my hand at it.
The following command is very useful:
sudo apt-get install <application>;
as it adds the application to the system's package list and automatically upgrades it when the update manager runs.
But I would also like to learn more about installing programs from .tar.gz archives.
So I do:
Extract the archive
./configure;
make;
make install;
I have two questions in this process:
1) I read in a forum that "make install" is not good if we are updating the binaries.
So should I just do "make" and then "install"?
2) Is there a way to add a program installed in this manner to the Linux Software Update list, so that I do not have to use the terminal for every new version that is released?
Installing programs from tarballs:
You really do not want to install packages from .tar.gz when they are in the repositories. It is much harder to update or remove them manually than it is with apt-get.
If you really have to compile the program yourself, use checkinstall instead of make install. This creates a package that you can install via the package manager and later remove using apt-get. This is much cleaner.
Also you may want to type
./configure && make && sudo checkinstall
instead of the commands you wrote. This way the program is only compiled if the configuration succeeded, and the package is only built if the compilation succeeded. With ; instead of &&, every step would be attempted regardless of whether the previous ones succeeded.
Graphical package managers
You can also install packages from GUI programs. Kubuntu, for example, uses Muon for this, but the programs vary between distributions.
make install is "not good" if you want to be able to easily remove the files associated with a package as there is no log of the work it does and often no easy way to reverse the process. That has little to nothing to do with updating the software though (though updates can certainly run into related issues).
No, you can't add manually compiled and installed software to your distribution's list of packaged software (other than through something like checkinstall, or by creating a package yourself), since that's exactly what you were avoiding in the first place.
That being said, if the package exists for your distribution and you want to build it from source yourself, you can often just build a more-or-less official version of it from the distribution's source package.
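On Debian/Ubuntu that looks roughly like this (assuming deb-src lines are enabled in your sources; "foo" is a placeholder package name):
sudo apt-get build-dep foo      # install the build dependencies
apt-get source foo              # fetch and unpack the distribution's source package
cd foo-*/
dpkg-buildpackage -us -uc       # build unsigned binary packages
sudo dpkg -i ../foo_*.deb       # install the result through the package manager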

Best way to Manage Packages Compiled from Source

I'm trying to find an easy way to manage packages compiled from source, so that when it comes time to upgrade I'm not in a huge mess trying to uninstall the old package and install the new one.
I found a utility called CheckInstall, but it seems to be quite old, and I was wondering whether it is a reliable solution before I begin using it.
http://www.asic-linux.com.mx/~izto/checkinstall/
I would also simply like to know about any other methods/utilities you use to handle these installations from source.
Whatever you do, make sure that you eventually go through your distribution's package management system (e.g. rpm for Fedora/Mandriva/RH/SuSE, dpkg for Debian/Ubuntu etc). Otherwise your package manager will not know anything about the packages you installed by hand and you will have unsatisfied dependencies at best, or the mother of all messes at worst.
If you don't have a package manager, then get one and stick with it!
I would suggest that you learn to make your own packages. You can start by having a look at the source packages of your distribution. In fact, if all you want to do is upgrade to version 1.2.3 of MyPackage, your distribution's source package for 1.2.2 can usually be adapted with a simple version change (unless there are patches, but that's another story...).
Unless you want distribution-quality packages (e.g. split library/application/debugging packages, multiple-architecture support etc) it is usually easy to convert your typical configure & make & make install scenario into a proper source package. If you can convince your package to install into a directory rather than /, you are usually done.
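The usual way to do that is a DESTDIR-staged install, roughly like this (not every build system honours DESTDIR, so treat it as a sketch):
./configure --prefix=/usr
make
make install DESTDIR=$PWD/pkgroot
# everything under ./pkgroot is what the binary package should contain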
As for checkinstall, I have used it in the past, and it worked for a couple of simple packages, but I did not like the fact that it actually let the package install itself onto my system before creating the rpm/deb package. It just tracked which files got installed so that it could package them, which did not protect against unwelcome changes. Oh, and it needed root privileges to work, which is another main sticking point for me. And let's not go into what happens with statically linked core utilities...
Most tools of this kind seem to work that way, so I simply learnt to build my own packages The Right Way (TM) and let checkinstall and friends mess around elsewhere. If you are still interested, however, there is a list of similar programs here:
http://www.dwheeler.com/essays/automating-destdir.html
PS: BTW checkinstall was updated at the end of 2009, which probably means that it's still adequately current.
EDIT:
In my opinion, the easiest way to upgrade to the latest version of a package that is not readily available in a repository is to alter the source package of the latest version in your distribution. E.g. for CentOS the source packages for the latest version are here:
http://mirror.centos.org/centos/5.5/os/SRPMS/
http://mirror.centos.org/centos/5.5/updates/SRPMS/
...
If you want to upgrade e.g. php, you get the latest SRPM for your distribution, e.g. php-5.1.6-27.el5.src.rpm. Then you do:
rpm -hiv php-5.1.6-27.el5.src.rpm
which installs the source package (just the sources - it does not compile anything). Then you go to the rpm build directory (on my Mandriva system it's /usr/src/rpm), copy the latest php source tarball to the SOURCES subdirectory, and make sure it's compressed in the same way as the tarball that just got installed there. Afterwards you edit the php.spec file in the SPECS directory to change the package version, and build the binary package with something like:
rpmbuild -ba php.spec
In many cases that's all it will take for a new package. In others things might get a bit more complicated - if there are patches or if there are some major changes in the package you might have to do more.
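Condensed into commands, the sequence described above looks roughly like this (the new tarball version is hypothetical, and /usr/src/rpm is the Mandriva build tree mentioned in this answer; on many systems it is ~/rpmbuild instead):
rpm -hiv php-5.1.6-27.el5.src.rpm
cp php-5.1.9.tar.gz /usr/src/rpm/SOURCES/     # the newer upstream tarball (hypothetical version)
vi /usr/src/rpm/SPECS/php.spec                # bump the Version: (and usually Release:) fields
rpmbuild -ba /usr/src/rpm/SPECS/php.spec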
I suggest you read up on the rpm and rpmbuild commands (their manpages are quite good, if a bit extensive) and check out the documentation on writing spec files. Even if you decide to rely on official backport repositories, it is useful to know how to build your own packages. See also:
http://www.rpm.org/wiki/Docs
EDIT 2:
If you are already installing packages from source, using rpm will actually simplify the building process in the long term, apart from maintaining the integrity of your system. The reason for this is that you won't have to remember the quirks of each package on your own ("oooh, right, now I remember, foo needs me to add -lbar to its CFLAGS"), as the build process will be in the .spec file, which you could imagine as a somewhat structured build script.
As far as upgrading goes, if you already have a .spec file for a previous version of the package, there are two main issues that you may encounter, but both exist whether you use rpm to build your package or not:
A patch that was applied to the previous version by the distribution does not apply any more. In many cases the patch has already been applied to the upstream package, so you can simply drop it. In others you may have to edit it - or I suppose if you deem it unimportant you can drop it too.
The package changed in some major way which affected e.g. the layout of the files it installs. You do read the release notes for each new version, don't you?
Other than these two issues, upgrading often boils down to just changing a version number in the spec file and running rpmbuild - even easier than installing from a tarball.
I would suggest that you have a look at the tutorials or at the source package for some simple piece of software such as:
http://mirror.centos.org/centos/5.5/os/SRPMS/ipv6calc-0.61-1.src.rpm
http://mirror.centos.org/centos/5.5/os/SRPMS/libevent-1.4.13-1.src.rpm
If you have experience building packages from a tarball, using rpm to build software is not much of a leap, really. It will never be as simple as installing a premade binary package, however.
I use checkinstall on Debian. It should not be so different on CentOS. I use it like this:
./configure
make
sudo checkinstall make install   # fakeroot in place of sudo usually works, for more security
# then install the generated package through the package manager if checkinstall did not install it for you
