I am migrating some manually provisioned infrastructure over to Terraform.
Currently I am manually defining the Terraform resources in .tf files and importing the remote state with terraform import. I then run terraform plan multiple times, each time modifying the local .tf files until they match the existing infrastructure.
How can I speed this process up by downloading the remote state directly into a .tf resource file?
The mapping from the configuration as written in .tf files to real infrastructure that's indirectly represented in state snapshots is a lossy one.
A typical Terraform configuration has arguments of one resource derived from attributes of another, uses count or for_each to systematically declare multiple similar objects, and might use Terraform modules to decompose the problem and reuse certain components.
All of that context is lost in the mapping to real remote objects, and so there is no way to recover it and generate idiomatic .tf files that would be ready to use. Therefore you would always need to make some modifications to the configuration in order to produce a useful Terraform configuration.
While keeping that caveat in mind, you can review the settings of objects you've added with terraform import by running the terraform show command. Its output is intended to be read by humans rather than machines, but it does present the information in a Terraform-language-like format, so what it produces can be a starting point for a Terraform configuration. It won't always be entirely valid, though, and will typically need at least some adjustments before terraform plan accepts it and before it's useful for ongoing use.
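For example, after importing an object you can dump Terraform's human-oriented rendering of it and paste the relevant block into a .tf file as a rough starting point (the resource address and instance ID below are hypothetical placeholders for your own):

terraform import aws_instance.example i-0abcd1234efgh5678
terraform show

The aws_instance.example block that terraform show prints will include read-only attributes such as id and arn, which you'll need to remove before terraform plan will accept the configuration.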
I have a local directory with files. How can I copy that directory, with the objects it contains, to another one using Terraform?
And I need to do that every time I apply the module, not only the first time.
I tried to use provisioner "local-exec" but it isn't suitable for me because I'll run this on both Linux and Windows and I don't want to change the command and interpreter every time.
Terraform is not designed for managing local files; it is primarily intended for working with remote APIs over the network, so that the results can persist in the remote system between runs.
However, the hashicorp/local provider does have some resource types which try to treat the local file system as if it were a remote API. Its documentation contains a caution about the likely limitations of doing that:
Terraform primarily deals with remote resources which are able to outlive a single Terraform run, and so local resources can sometimes violate its assumptions. The resources here are best used with care, since depending on local state can make it hard to apply the same Terraform configuration on many different local systems where the local resources may not be universally available. See specific notes in each resource for more information.
If this tradeoff is acceptable to you then it should be possible to implement a Terraform module that declares the effect you're describing, using the local_file resource type to declare the destination files. The provider does not have a data source for reading the contents of a directory, though, so on the source side the set of files will need to be fixed on disk before Terraform runs, so that it's feasible to use the fileset function. (Functions are evaluated as part of initial configuration evaluation, before Terraform takes any other actions, so this function cannot react to changes made to the filesystem while Terraform is running.)
variable "source_dir" {
type = string
}
variable "destination_dir" {
type = string
}
locals {
source_files = fileset(var.source_dir, "**")
}
resource "local_file" "dest" {
for_each = local.source_files
filename = "${var.source_dir}/${each.value}"
content_base64 = filebase64("${var.destination_dir}/${each.value}")
}
This uses local_file to declare that each destination file should exist.
An important caveat here is that the filebase64 function reads the full content of the given file into memory, and Terraform then copies that value to the provider's content_base64 argument as part of the request to the provider plugin. That means this technique is only feasible if you know that all of the files you will be copying are relatively small. If you have any particularly large files then you are likely to hit either RAM limits when loading the file into memory or provider plugin protocol message size limits when sending the content to the hashicorp/local provider.
If there is any way to solve this part of your problem using software outside of Terraform, such as using a traditional configuration management tool, then I would recommend considering that approach instead. Although this might work, this is not what Terraform is designed for.
I use terraform to initialize some OpenStack cloud resources.
I have a scenario where I need to initialize/prepare a volume disk using a temporary compute resource. Once the volume is fully initialized, I no longer need the temporary compute resource, but I need to attach the volume to another compute resource (a different network configuration and other settings make reuse of the first one impossible). As you might have guessed, I cannot reach the expected long-term goal directly without the intermediary step.
I know I could drive a state machine or some sort of processing queue from outside terraform to achieve this, but I wonder if it was possible to do it nicely in one single run of terraform.
The best I could think of is that a main terraform script would trigger creation/destruction of the intermediate compute resource by launching another terraform instance responsible just for the intermediate resources (using terraform apply followed by terraform destroy). However, it requires extra care, such as ensuring a unique folder to deal with concurrent "main" resource initialization, and it makes the whole thing a bit messy I think.
I wonder if it was possible to do it nicely in one single run of terraform.
Sadly, no. Any "solution" you could possibly implement for that (e.g. running custom scripts through local-exec, etc.) in a single TF run will only be a convoluted mess, and will lead to more issues than it solves in the long term.
The proper way, as you wrote, is to use a dedicated CI/CD pipeline for a multi-stage deployment. Alternatively, don't use TF at all and use another IaC tool.
I have recently started working with Terraform, and have a question on terraform state mv and terraform import. As per the documentation, terraform state mv can be used when a resource name changes and the updated name has to be reflected in the state file. And terraform import can be used to import resources created outside of Terraform into a state file. My question is: even when a resource name changes or the code structure changes (using modules), we can still use terraform import to update the state file, correct? Could anyone tell me what the real benefit of the terraform state mv command is?
So the question really is this particular case:
I have renamed the TF resource / changed the structure of the resource in IaC. Can I just re-import it into the new structure, instead of moving it?
Yes you can, but what will happen to the state? You'll be importing a resource you're already managing according to the TF state. The old resource address that you've modified would still be managed, so you might run into issues where Terraform attempts to recreate it or even delete it. It all depends on which state matches the reality in your cloud provider.
If you'd still like to import the updated resource, I'd go for terraform state rm and terraform import afterwards. This is sometimes required / an easy hack after big changes to a particular module / resource. It's also a good debugging exercise when you're not exactly sure how the cloud resource matches the TF code, as you'll see state differences only for this newly imported resource.
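For example (the resource address and cloud-side ID below are hypothetical placeholders):

terraform state rm aws_s3_bucket.assets
terraform import module.storage.aws_s3_bucket.assets my-assets-bucket

The first command forgets the object under its old address without touching the real bucket; the second re-adopts it under the new address.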
One case where terraform state mv is useful is when you need to refactor your code into or out of modules. I've used it quite a bit. I recommend backing up your state before making any changes. If you are using a remote state, you can always take a copy of it, disable your use of the remote state temporarily and then use the copy locally.
You can see the names of your state objects by using terraform state list.
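A typical refactoring sequence might look like this (the addresses are hypothetical; take yours from terraform state list):

terraform state list
terraform state mv aws_instance.web module.web.aws_instance.this

After the move, terraform plan should report no changes for that object, which confirms the state now matches the refactored configuration.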
The usage of terraform import is to add an existing thing to your state file, so it's tracked.
Terraform Import - Terraform is able to import existing infrastructure. This allows you to take resources you've created by some other means and bring them under Terraform management.
Terraform State MV - This covers the less common situation where you wish to retain an existing remote object but track it under a different resource instance address in Terraform, such as when you have renamed a resource block or moved it into a different module in your configuration.
Use terraform import for all resources created outside Terraform.
Use terraform state mv in the case where you want to restructure an already existing Terraform resource.
I use terraform state mv as soon as my project needs to be restructured, e.g. it becomes more complex, I want to move to modules, etc.
Sometimes (even for older Terraform projects), it can also be a good practice to import the resource again (with another name) and then do a terraform state rm on the old address.
Using Terraform 0.12.6
I have a directory with multiple *.tf files, e.g., product1.tf, product2.tf etc. How can I execute terraform plan and subsequently terraform apply for a certain *.tf file? I was hoping it would be the -target option but I read the docs and didn't see this mentioned.
You can't. Terraform concatenates all the .tf files in a directory and works on them all at once.
You can use -target to target specific resources but it has no idea what file they're all in.
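For example, to plan and apply just one resource that happens to live in product1.tf (the address is hypothetical):

terraform plan -target=aws_instance.product1
terraform apply -target=aws_instance.product1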
-target in general should be used sparingly as an escape hatch, if you need to run separate bits of Terraform at a time then split your Terraform code up into separate directories and state files.
This is also discussed in the docs:
This targeting capability is provided for exceptional circumstances, such as recovering from mistakes or working around Terraform limitations. It is not recommended to use -target for routine operations, since this can lead to undetected configuration drift and confusion about how the true state of resources relates to configuration.
Instead of using -target as a means to operate on isolated portions of very large configurations, prefer instead to break large configurations into several smaller configurations that can each be independently applied. Data sources can be used to access information about resources created in other configurations, allowing a complex system architecture to be broken down into more manageable parts that can be updated independently.
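As a rough sketch of that pattern, a smaller configuration can read the outputs of another one through the terraform_remote_state data source (the backend settings and output name here are hypothetical):

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-states"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Any value the other configuration exposes as an output is available as
# data.terraform_remote_state.network.outputs.<name>, e.g. ...outputs.vpc_id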
I usually run all my Terraform scripts through a Bastion server, and all my code, including the tf statefile, resides on the same server. There was an incident where my machine accidentally went down (hard reboot) and somehow the root filesystem got corrupted. Now my statefile is gone but my resources still exist and are running. I don't want to run terraform apply again and recreate the whole environment with downtime. What's the best way to recover from this mess, and what can be done so that this doesn't get repeated in future?
I have already taken a look at terraform refresh and terraform import. But are there any better ways to do this ?
and all my code including the tf statefile resides on the same server.
As you don't have a .backup file, I'm not sure you can recover the statefile smoothly the Terraform way; do let me know if you find a way :) . However, you can take a few steps that will help you get out of situations like this.
The best practice is to keep all your statefiles in remote storage like S3 or Blob Storage and configure your backend accordingly, so that each time you destroy or create a stack it always reads and writes the statefile remotely.
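A minimal sketch of such a backend, assuming an S3 bucket (the bucket, key and region are hypothetical, and the bucket must already exist):

terraform {
  backend "s3" {
    bucket = "my-terraform-states"      # hypothetical, pre-existing bucket
    key    = "prod/terraform.tfstate"   # path of this stack's statefile
    region = "us-east-1"
    # dynamodb_table = "terraform-locks"  # optional: enables state locking
  }
}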
On top of that, you can take advantage of terraform workspaces to avoid statefile mess in a multi-environment scenario. Also consider creating a plan for backtracking and versioning of previous deployments.
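Workspaces are managed with a few subcommands, e.g.:

terraform workspace new staging
terraform workspace select staging
terraform workspace list

Each workspace keeps its own statefile under the same backend, so one set of .tf files can serve several environments.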
terraform plan -var-file "" -out "" -target=module.<blue/green>
what can be done so that this doesn't get repeated in future.
Terraform blue-green deployment is the answer to your question. We implemented this model quite a while ago and it's running smoothly. The whole idea is modularity and reusability: the same templates work for 5 different components with different architectures, without any downtime (the core template remains the same and only the variable files differ).
We are taking advantage of Terraform modules. We have two modules called blue and green; you can name them anything. At any given point in time either blue or green will be taking traffic. If we have changes to deploy, we bring up the alternative stack based on the state output (the targeted module, based on the terraform state), auto-validate it, then move the traffic to the new stack and destroy the old one.
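As a rough sketch of that layout (the module source, variable names and counts here are hypothetical, not our actual templates):

variable "blue_instance_count" {
  type    = number
  default = 1   # blue currently takes traffic
}

variable "green_instance_count" {
  type    = number
  default = 0   # green is retired until the next deployment
}

module "blue" {
  source         = "./modules/app-stack"    # shared, reusable core template
  instance_count = var.blue_instance_count
}

module "green" {
  source         = "./modules/app-stack"
  instance_count = var.green_instance_count
}

A targeted run such as terraform plan -var-file "prod.tfvars" -out "green.plan" -target=module.green then builds or refreshes only the stack that is about to receive traffic.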
Here is an article you can keep as a reference; it doesn't reflect exactly what we do, but it's nevertheless good to start with.
Please see this blog post, which, unfortunately, illustrates that import is the only solution.
If you are still unable to recover the Terraform state, you can create a blueprint of the Terraform configuration as well as the state for specific AWS resources using terraforming, but it requires some manual effort to edit the state in order to bring the resources back under management. Once you have that state file, run terraform plan and compare its output with your infrastructure. It is good to have remote state, especially using an object store like AWS S3 or a key-value store like Consul; these support locking the state when multiple transactions happen at the same time, and the backup process is also quite simple.
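As a rough illustration of that workflow, assuming the terraforming gem is installed and still supports per-resource-type subcommands and a --tfstate flag (check its current documentation):

terraforming ec2 > ec2.tf
terraforming ec2 --tfstate > terraform.tfstate

You would then review both files, run terraform plan, and reconcile any differences before trusting the regenerated state.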