I am using orchestration tools to automate Terraform deployments using the open-source version. I would like to know more about the workspace options that are available.
More specifically, what happens when two developers execute a Terraform deployment in two different workspaces at the same time using the same executable? The scope of the templates is within the directory; however, what is the scope of the workspaces? Is it tied to a single Terraform executable, or is it also directory driven?
Any help is much appreciated!
By default, it's also directory driven: with the local backend, each workspace's state lives under the working directory. But you can configure different backend types (AWS S3, PostgreSQL, etc.).
If you use AWS S3, each workspace is stored as a different object inside the bucket. If you use PostgreSQL, each workspace is a different row in the states table.
If two developers execute terraform apply for two different workspaces, you won't have any problem, but you'll need state locking to prevent two executions against the same workspace at the same time, because that can corrupt your state.
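As an illustration, here's a minimal sketch of that separation with the default local backend (the workspace names are hypothetical); each workspace gets its own state file under the configuration directory:

    terraform workspace new dev-a   # developer A's workspace
    terraform apply                 # state: terraform.tfstate.d/dev-a/terraform.tfstate

    terraform workspace new dev-b   # developer B's workspace
    terraform apply                 # state: terraform.tfstate.d/dev-b/terraform.tfstate

With a remote backend such as S3, the same workspace names map to separate objects in the bucket instead.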
I am migrating some manually provisioned infrastructure over to Terraform.
Currently I am manually defining the Terraform resources in .tf files and importing the remote state with terraform import. I then run terraform plan multiple times, each time modifying the local .tf files until they match the existing infrastructure.
How can I speed this process up by downloading the remote state directly into a .tf resource file?
The mapping from the configuration as written in .tf files to real infrastructure that's indirectly represented in state snapshots is a lossy one.
A typical Terraform configuration has arguments of one resource derived from attributes of another, uses count or for_each to systematically declare multiple similar objects, and might use Terraform modules to decompose the problem and reuse certain components.
All of that context is lost in the mapping to real remote objects, and so there is no way to recover it and generate idiomatic .tf files that would be ready to use. Therefore you would always need to make some modifications in order to produce a useful Terraform configuration.
While keeping that caveat in mind, you can review the settings for objects you've added with terraform import by running the terraform show command. Its output is intended to be read by humans rather than machines, but it presents the information using a Terraform-language-like formatting, so what it produces can be a starting point for a Terraform configuration. It won't always be entirely valid, though, and will typically need at least some adjustments before terraform plan accepts it and before it's useful for ongoing use.
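For example, a rough sketch of that workflow for a single S3 bucket (the resource address and bucket name here are hypothetical):

    terraform import aws_s3_bucket.deployments my-existing-bucket   # record the bucket in state
    terraform show                                                  # print its attributes in Terraform-like syntax

You can then paste the relevant attributes into a .tf file, prune the read-only ones, and iterate with terraform plan until it reports no changes.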
Does Terraform allow storing the state file in SVN? If this is not directly supported by Terraform, do any third-party or open-source options exist?
When using Terraform, it's not typical to store the Terraform state in the same version control repository as the configuration that drives it: the expected workflow is to review and commit proposed changes first, and only then apply them to your real infrastructure from your main branch.
To understand why, it might help to think about the relationship between an application's main code and its database. We don't typically store a web application's main database in the version control repository along with the code, because the way we interact with the two is different: many developers can concurrently work on and propose changes to the application source code, and our version control system can often merge those separate proposals to reduce collisions; but for the application's central database it's more common to use locks, so that two writers don't try to change the same data (or interconnected data) at the same time.
In this analogy, the Terraform state is Terraform's "database". When using Terraform in a team setting, you'd typically select one of the backends that stores state remotely and supports locking; anyone working with that particular Terraform configuration will then find that Terraform takes out a lock before making any remote system modifications, holds it throughout its work, and writes the newly-updated state to the backend before releasing the lock.
Although you specifically asked about Subversion, my suggestions here are intended to apply to all version control systems. Version control is a good place to keep the source code for your Terraform modules, but it's not a good place to keep your Terraform state.
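For illustration, a minimal remote backend configuration with locking might look like the following; the bucket, key, and table names are hypothetical, and for the S3 backend it's the DynamoDB table that provides the locking:

    terraform {
      backend "s3" {
        bucket         = "example-terraform-state"   # hypothetical state bucket
        key            = "app/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"           # enables state locking
      }
    }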
Background:
I have a shared module called "releases". It contains the following resources:
aws_s3_bucket.my_deployment_bucket
aws_iam_role.my_role
aws_iam_role_policy.my_role_policy
aws_iam_instance_profile.my_instance_profile
These resources are used by EC2 instances belonging to an ASG to pull code deployments when they provision themselves. The release resources are created once and will rarely, if ever, change. This module is one of a handful used inside an environment-specific project (qa-static) that has its own tfstate file in AWS.
Fast forward: it's now time to create a "prd-static" project. This project wants to reuse the environment-agnostic AWS resources defined in the releases module. prd-static is basically a copy of qa-static with beefed-up configuration for the database, cache server, etc.
The Problem:
prd-static sees the environment-agnostic AWS resources defined in the "releases" module as new resources that don't exist in AWS yet. An init and plan call shows that it wants to create them from scratch. That makes sense to me: prd-static has its own tfstate, and tfstate is essentially the system of record, so Terraform doesn't know that no changes should be applied. But ideally Terraform would use AWS as the source of truth for existing resources and their configuration.
If we try to apply the plan as is, the prd-static project just bombs out with an Entity Already Exists error. Leading me to this post:
what is the best way to solve EntityAlreadyExists error in terraform?
^-- logically I could import these resources into the tfstate file for prd-static and be on my merry way. Then, both projects know about the resources and in theory would only apply updates if the configuration had changed. I was able to import the bucket and the role and then re-run the plan.
Now terraform wants to delete the s3 bucket and recreate the role. That's weird - and not at all what I wanted to do.
TLDR: It appears that while modules like to be shared, modules that create single reusable resources (like an S3 bucket) really don't want to be shared. It looks like I need to pull the environment-agnostic static-resources module into its own project with its own tfstate that can be used independently, rather than try to share the releases module across environments. Environment-specific stuff that depends on the release resources can reference them via their outputs in my build process.
Should I be able to define a resource in a module, like an S3 bucket, where the same instance is used across Terraform projects that each have their own tfstate file (remote state in S3)? Because I cannot.
If I really shouldn't be able to do this, is the correct approach to extract the single-instance stuff into its own project and depend on its outputs?
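For what it's worth, a minimal sketch of that extraction, assuming the shared releases project stores its state in S3 (all names here are hypothetical):

    # In the shared releases project, expose the bucket as an output:
    output "deployment_bucket_name" {
      value = aws_s3_bucket.my_deployment_bucket.bucket
    }

    # In qa-static and prd-static, read the releases project's outputs
    # instead of re-declaring the resources:
    data "terraform_remote_state" "releases" {
      backend = "s3"
      config = {
        bucket = "example-terraform-state"    # hypothetical state bucket
        key    = "releases/terraform.tfstate"
        region = "us-east-1"
      }
    }

    # Reference it as:
    #   data.terraform_remote_state.releases.outputs.deployment_bucket_name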
I want to use Terraform to run a script inside an existing instance on any cloud, where the instance was pre-created manually. Is there any way to push my script to this instance and run it using Terraform?
If yes, how can I connect to the instance using Terraform, push my script, and run it?
I believe Ansible is a better option to achieve this easily.
Refer to the example given here:
https://docs.ansible.com/ansible/latest/modules/script_module.html
1. Create a .tf file and describe your already-existing resource (e.g. a VM) there.
2. Import the existing object using terraform import.
3. If this is a VM, add your script to the remote machine using the file provisioner and run it using remote-exec; both steps are described in the Terraform file, so no manual changes are needed (see the sketch after this list).
4. Run terraform plan to see if the expected changes are OK, then terraform apply if the plan was fine.
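For step 3, here's a minimal sketch assuming SSH access to the machine; the address, user, and script paths are hypothetical placeholders. Because provisioners only run when a resource is created, a null_resource carries them here so they still run against the already-imported machine:

    resource "null_resource" "run_script" {
      triggers = {
        script_hash = filemd5("scripts/setup.sh")   # re-run if the script changes
      }

      connection {
        type        = "ssh"
        host        = "203.0.113.10"            # hypothetical instance address
        user        = "ubuntu"                  # assumed login user
        private_key = file("~/.ssh/id_rsa")     # assumed key path
      }

      # Push the script to the instance...
      provisioner "file" {
        source      = "scripts/setup.sh"
        destination = "/tmp/setup.sh"
      }

      # ...then execute it remotely.
      provisioner "remote-exec" {
        inline = [
          "chmod +x /tmp/setup.sh",
          "/tmp/setup.sh",
        ]
      }
    }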
Terraform's core mission is to create, update, and destroy long-lived infrastructure objects. It is not generally concerned with the software running in the compute instances it deploys. Instead, it generally expects each object it is deploying to behave as a sort of specialized "appliance", either by being a managed service provided by your cloud vendor or because you've prepared your own machine image outside of Terraform that is designed to launch the relevant workload immediately when the system boots. Terraform then just provides the system with any configuration information required to find and interact with the surrounding infrastructure.
A less-ideal way to work with Terraform is to use its provisioners feature to do late customization of an image just after it's created, but that's considered to be a last resort because Terraform's lifecycle is not designed to include strong support for such a workflow, and it will tend to require a lot more coupling between your main system and its orchestration layer.
Terraform has no mechanism intended for pushing arbitrary files into existing virtual machines. If your virtual machines need ongoing configuration maintenance after they've been created (by Terraform or otherwise), then that's a use case for traditional configuration management software such as Ansible, Chef, or Puppet, rather than for Terraform.
I usually run all my Terraform scripts through a bastion server, and all my code, including the tfstate file, resides on that same server. There was an incident where my machine accidentally went down (hard reboot) and the root filesystem got corrupted. Now my state file is gone, but my resources still exist and are running. I don't want to run terraform apply again and recreate the whole environment with downtime. What's the best way to recover from this mess, and what can be done so that this doesn't get repeated in the future?
I have already taken a look at terraform refresh and terraform import, but are there any better ways to do this?
"...and all my code, including the tfstate file, resides on that same server."
Since you don't have a .backup file, I'm not sure you can recover the state file cleanly in a Terraform-native way; do let me know if you find a way :). However, you can take a few steps that will help you come out of a situation like this.
The best practice is to keep all your state files in remote storage such as S3 or Blob Storage and configure your backend accordingly, so that each time you destroy or create a stack, Terraform always reads and writes the state file remotely.
On top of that, you can take advantage of terraform workspace to avoid state file confusion in multi-environment scenarios. Also consider creating a plan for backtracking and versioning of previous deployments.
terraform plan -var-file "" -out "" -target=module.<blue/green>
"what can be done so that this doesn't get repeated in the future?"
Terraform blue-green deployment is the answer to your question. We implemented this model quite a while ago and it's running smoothly. The whole idea is modularity and reusability: the same templates serve five different components with different architectures, without any downtime (the core template stays the same; only the variable files differ).
We take advantage of Terraform modules. We have two modules, called blue and green (you can name them anything). At any given point in time, either blue or green is taking traffic. When we have changes to deploy, we bring up the alternative stack based on the state output (the targeted module, based on the Terraform state), auto-validate it, then move the traffic to the new stack and destroy the old one.
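A minimal sketch of that layout, with a hypothetical shared core template and variable names:

    module "blue" {
      source   = "./modules/stack"   # the shared core template
      stack_id = "blue"
      # environment-specific settings come from the -var-file
    }

    module "green" {
      source   = "./modules/stack"
      stack_id = "green"
    }

Bringing up the inactive stack is then a targeted plan and apply against module.blue or module.green, as in the command shown earlier, followed by the traffic switch and destruction of the old stack.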
Here is an article you can keep as a reference; it doesn't exactly reflect what we do, but it's a good place to start.
Please see this blog post, which, unfortunately, illustrates that import is the only solution.
If you are still unable to recover the Terraform state, you can create a blueprint of the Terraform configuration, as well as the state, for specific AWS resources using terraforming. It requires some manual effort to edit the generated state in order to bring the resources back under management. With that state file in place, you can run terraform plan and compare its output with your infrastructure. It is good to have remote state, especially in an object store like AWS S3 or a key-value store like Consul; these support locking the state when multiple operations happen at the same time, and backing up is also quite simple.
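For example, a rough sketch with terraforming, assuming the resources to recover are S3 buckets (it's a third-party Ruby gem, and the subcommand varies per resource type, so check its documentation):

    gem install terraforming
    terraforming s3 > s3.tf                         # generate .tf for existing buckets
    terraforming s3 --tfstate > terraform.tfstate   # regenerate a matching state file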