How to store Terraform state in SVN?

Does Terraform allow storing the state file in SVN? If this is not directly supported by Terraform, do any third party/ open source options exist?

When using Terraform it's not typical to store the Terraform state in the same version control repository as the configuration that drives it, because the expected version-control workflow with Terraform is to review and commit the proposed changes first, and only then apply the changes to your real infrastructure from your main branch.
To understand why, it might help to think about the relationship between an application's main code and its database. We don't typically store the main database for a web application in the version control repository along with the code, because the way we interact with the two is different: many developers can concurrently work on and propose changes to the application source code, and our version control system is often able to merge those separate proposals to reduce collisions. For the application's central database, however, it's more common to use locks so that two writers don't try to change the same data (or interconnected data) at the same time.
In this way, Terraform state is roughly analogous to Terraform's "backend database". When using Terraform in a team setting, then, you'd typically select one of the backends that stores state remotely and supports locking. Anyone working with that particular Terraform configuration will find that Terraform takes out a lock before making any remote system modifications, holds that lock throughout its work, and then writes the newly-updated state to the backend before releasing the lock.
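For example, here is a minimal sketch of such a backend configuration using the S3 backend with DynamoDB-based locking (the bucket, key, and table names below are placeholders, not anything this answer prescribes):

    terraform {
      backend "s3" {
        bucket         = "example-terraform-state"  # placeholder bucket name
        key            = "prod/terraform.tfstate"   # where the state object lives in the bucket
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"          # placeholder DynamoDB table used only for locking
        encrypt        = true
      }
    }

With this in place, terraform apply acquires the lock before touching any remote objects and releases it only after the updated state has been written back.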
Although you specifically asked about Subversion, my suggestions here are intended to apply to all version control systems. Version control is a good place to keep the source code for your Terraform modules, but it's not a good place to keep your Terraform state.

Related

Terraform download current infrastructure into tf file

I am migrating some manually provisioned infrastructure over to Terraform.
Currently I am manually defining the Terraform resources in .tf files and importing the remote state with terraform import. I then run terraform plan multiple times, each time modifying the local .tf files until they match the existing infrastructure.
How can I speed this process up by downloading the remote state directly into a .tf resource file?
The mapping from the configuration as written in .tf files to real infrastructure that's indirectly represented in state snapshots is a lossy one.
A typical Terraform configuration has arguments of one resource derived from attributes of another, uses count or for_each to systematically declare multiple similar objects, and might use Terraform modules to decompose the problem and reuse certain components.
All of that context is lost in the mapping to real remote objects, and so there is no way to recover it and generate idiomatic .tf files that would be ready to use. Therefore you will always need to make some modifications by hand in order to produce a useful Terraform configuration.
While keeping that caveat in mind, you can review the settings for objects you've added using terraform import by running the terraform show command. Its output is intended to be read by humans rather than machines, but it does present the information using Terraform-language-like formatting, so what it produces can be a starting point for a Terraform configuration. Note that it won't always be totally valid: it will typically need at least some adjustments in order to be accepted by terraform plan, and to be useful for ongoing work.
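For example, one possible round trip for a hypothetical EC2 instance (the resource name and instance ID here are made up):

    # Stub block so the import has a resource address to bind to:
    resource "aws_instance" "web" {
      # arguments to be filled in after inspecting the imported state
    }

    terraform import aws_instance.web i-0123456789abcdef0
    terraform show

You would then copy the relevant attributes from the terraform show output into the stub block, re-running terraform plan until it reports no changes.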

Terraform backward compatibility between 0.13.x to 0.12.x

Hi Terraform techies,
I have a problem statement here. I used Terraform 0.13.5 to create my infrastructure. Due to some constraints I need to move back to 0.12.18. When I make changes to the infrastructure, I see that the state files generated with tf 0.13.5 don't work with 0.12.18. Is there a way I can backport the state files?
This is a process; as far as I know there is no shortcut. You will need to do a state migration, which can be tedious depending on the size of the state file.
Another option would be to import the infrastructure into the 0.12 state, or to use data sources instead of migrating, as sketched below.
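For example, rather than migrating a manually created VPC into the 0.12 state, you could reference it read-only with a data source (the resource types and ID below are hypothetical, and assume the AWS provider):

    # Look up an existing VPC without managing it:
    data "aws_vpc" "shared" {
      id = "vpc-0abc123de456f7890"  # created outside this configuration
    }

    # New resources can consume its attributes without the VPC ever
    # entering this configuration's state as a managed resource:
    resource "aws_subnet" "app" {
      vpc_id     = data.aws_vpc.shared.id
      cidr_block = "10.0.1.0/24"
    }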

why is it required to persist terraform state file remotely?

I am new to terraform. Can someone please explain
why we need to save the .tfstate file in local or remote storage,
when terraform apply always refreshes the state file with the new infrastructure?
Thanks in advance.
The state file tracks the resources that Terraform is managing, whether it created them or imported them. Terraform's refresh only detects drift in managed resources and won't detect if you have created new resources outside of the state file.
If you lose the state you will end up with orphaned resources that are not being managed by Terraform. If, for some reason, you are okay with that, or you have some other way of sharing state with other team members/CI and backing it up, then you're fine.
Of course, using Terraform's remote state neatly solves those things so you should use it if you care about any of those things or think you might need to in the future (you probably will).
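If you want to see exactly what your state is tracking, terraform state list prints the address of every managed resource; the addresses below are purely illustrative:

    terraform state list
    # aws_vpc.main
    # aws_subnet.public[0]
    # module.app.aws_instance.web

Anything not in that list is invisible to Terraform, which is why losing the file orphans the resources.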
I will add a more developer-oriented perspective to help with understanding.
Think about using yarn or npm for a NodeJS app: package.json is like your .tf files, while yarn.lock or package-lock.json is like your state file.
Don't take that too literally, though, as the Terraform state file has physical underlying implications.

Backing up of Terraform statefile

I usually run all my Terraform scripts through a Bastion server, and all my code including the tf statefile resides on the same server. There happened this incident where my machine accidentally went down (hard reboot) and somehow the root filesystem got corrupted. Now my statefile is gone but my resources still exist and are running. I don't want to run terraform apply again to recreate the whole environment with downtime. What's the best way to recover from this mess, and what can be done so that this doesn't get repeated in the future?
I have already taken a look at terraform refresh and terraform import. But are there any better ways to do this?
and all my code including the tf statefile resides on the same server.
As you don't have a .backup file, I'm not sure you can recover the statefile smoothly in the Terraform way; do let me know if you find a way :) . However, you can take a few steps that will help you get out of situations like this.
The best practice is to keep all your statefiles in some remote storage like S3 or Blob storage and configure your backend accordingly, so that each time you destroy or create a stack, Terraform will always contact the statefile remotely.
On top of that, you can take advantage of terraform workspace to avoid statefile mess in multi-environment scenarios. Also consider creating a plan for backtracking and versioning of previous deployments.
terraform plan -var-file "" -out "" -target=module.<blue/green>
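Workspace handling itself is just a couple of commands (the workspace names here are only examples):

    terraform workspace new staging     # create and switch to a fresh state namespace
    terraform workspace select default  # switch back to the default workspace
    terraform workspace list            # show all workspaces for this configuration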
what can be done so that this doesn't get repeated in future.
Terraform blue-green deployment is the answer to your question. We implemented this model quite a while ago and it's running smoothly. The whole idea is modularity and reusability: the same templates work for 5 different components with different architectures without any downtime (the core template remains the same and the variable files differ).
We are taking advantage of Terraform modules. We have two modules, called blue and green (you can name them anything). At any given point in time either blue or green is taking traffic. If we have changes to deploy, we bring up the alternative stack based on the state output (a targeted module based on the Terraform state), auto-validate it, then move the traffic to the new stack and destroy the old one, roughly as sketched below.
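A minimal sketch of that layout, with made-up module source and variable names:

    variable "blue_count"  { default = 2 }
    variable "green_count" { default = 0 }

    # Two instances of the same core template; only one serves traffic at a time.
    module "blue" {
      source         = "./modules/app-stack"  # hypothetical shared core template
      instance_count = var.blue_count
    }

    module "green" {
      source         = "./modules/app-stack"
      instance_count = var.green_count
    }

Bringing up only the idle side is then a targeted plan such as terraform plan -out=tfplan -target=module.green, followed by the traffic switch and a targeted destroy of the old side.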
Here is an article you can keep as a reference; it doesn't exactly reflect what we do, but it's nevertheless a good place to start.
Please see this blog post, which, unfortunately, illustrates that import is the only solution.
If you are still unable to recover the Terraform state, you can create a blueprint of the Terraform configuration as well as the state for specific AWS resources using the terraforming tool, though it requires some manual effort to edit the state before managing the resources again. Once you have this state file, run terraform plan and compare its output with your infrastructure. It is good to have remote state, especially in an object store like AWS S3 or a key-value store like Consul: these support locking the state when multiple transactions happen at the same time, and backing up is also quite simple.
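For example, with terraforming (verify the subcommands and flags against the version you have installed):

    terraforming s3 > s3.tf                                # emit .tf configuration for existing S3 buckets
    terraforming s3 --tfstate --merge=terraform.tfstate    # emit matching state, merged with an existing state file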

How to call a puppet provider method from puppet manifest?

I'm using the ibm_installation_manager module from the Puppet Forge and it is a bit basic, because IBM wrote Installation Manager in a time when idempotency wasn't done much.
ref: https://forge.puppet.com/puppetlabs/ibm_installation_manager
As such it does not cater nicely for upgrades: the module will not detect if an upgrade is needed, stop existing processes, do the upgrade, and then start the processes again. It will just detect whether an upgrade is needed and try to install the desired version; if that constitutes an upgrade that's great, but it will probably fail due to running instances.
So I need to implement some "stop processes" pre-upgrade functionality.
I need to mention at this point I'm new to ruby and fairly new to puppet.
The provider that the module uses (imcl.rb) has an exists method.
The ideal way for me to detect if an upgrade is going to happen (and stop the instances if it is) would be for my puppet manifest to be able to somehow call the exists method. Is this possible?
Or how would you approach this problem?
Something like imcl.exists(ibm_pkg["my_imcl_pkg_resource"])
The ideal way for me to detect if an upgrade is going to happen (and stop the instances if it is) would be for my puppet manifest to be able to somehow call the exists method. Is this possible?
No, it is not possible, at least not in any useful way. Your manifests describe how to build a catalog of resources describing the target state of the machine. In a master / agent setup, this happens on the master. The catalog is then used as input to a separate step, in which it is transferred to the target machine and applied there. It is in this second step that providers are engaged.
To the extent that you want the contents of your catalogs to be influenced by the current state of the target machine, the Puppet mechanism for that is to convey the needed state details to the catalog builder in the form of facts. It is relatively straightforward to add your own facts. Indeed, there are at least two distinct, non-exclusive mechanisms, going under the names "external facts" and "custom facts".