I am a bit confused about how a complex Terraform folder structure would be managed in a single Terraform state file.
Assuming I have the following structure:
tf-structure
|- modules
|- backend-app
|- frontend-app
The modules folder is reusable code.
backend-app is not a module, but actual resources that describe my backend "stuff".
frontend-app is not a module, but actual resources that describe my frontend "stuff".
root-infra - let's assume I have an additional folder called "root-infra" which runs all my VPC/gateway/network and some common infra stuff.
I can't understand how everything would be triggered to run against a single state file.
For example, if I add some resources in backend-app, I would run plan/apply from the backend-app folder, but this would result in all my common infra + frontend being deleted.
So I'm assuming that even if I make a change in my backend-app folder, I still need to run plan/apply from my root-infra folder, assuming that the main.tf there includes the backend-app (and also the frontend, etc.).
Am I right?
If so, how would I import my backend/frontend folders into my root-infra main.tf? And why are backend/frontend any different from a regular module?
From the structure you provided, it seems you should actually have multiple state files: one for root-infra, one for backend-app, and one for frontend-app.
Each directory's state file will then manage the resources defined in it. Using one single state file as you mentioned (assuming you're using remote state here, since local state files would already solve this problem) means that when you run it in root-infra, Terraform 'thinks' those are the only resources you're deploying.
Next, when you move to backend-app and try to deploy from there with the same state file used in root-infra, Terraform no longer sees the root-infra resources in this directory, but instead sees new backend-app resources. It will attempt to delete the root-infra resources and replace them with the backend-app ones. The same thing will happen later when you deploy frontend-app.
The only solution here is to have different state files, each managing a unique stack of resources. root-infra, backend-app and frontend-app are each one stack which should be managed individually.
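Concretely, with an S3 remote backend (bucket and key names here are illustrative), each stack would point at its own state key:

```hcl
# root-infra/backend.tf -- each stack keeps its own state key (names illustrative)
terraform {
  backend "s3" {
    bucket = "my-terraform-states"            # hypothetical shared bucket
    key    = "root-infra/terraform.tfstate"   # unique key per stack
    region = "us-east-1"
  }
}

# backend-app/backend.tf would use key = "backend-app/terraform.tfstate",
# and frontend-app/backend.tf key = "frontend-app/terraform.tfstate".
```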
If you wanted to manage all of them from one single state file, your structure would have to change, and the entire thing should be one or two stacks max: one for infra, one for applications. It would be as if you were deploying all resources from one single directory, and you could identify the different apps by having different .tf files in the same directory. E.g.:
tf/modules/network/dns.tf
tf/modules/network/output.tf
tf/modules/network/variables.tf
tf/infra/main_infra.tf
tf/infra/vars_infra.tf
tf/infra/infra_remote_state.tf
tf/apps/main_frontend.tf
tf/apps/main_backend.tf
tf/apps/apps_remote_state.tf
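This also answers the "why are backend/frontend any different from a regular module" question: they aren't. Any directory containing .tf files can be called as a child module, so a single-state root configuration would look roughly like this (paths and the vpc_id output are illustrative):

```hcl
# tf/main.tf -- single-state layout: everything composed from one root module
module "network" {
  source = "./modules/network"
}

module "backend_app" {
  source = "../backend-app"           # any directory of .tf files can be a module
  vpc_id = module.network.vpc_id      # hypothetical output wired into the app
}

module "frontend_app" {
  source = "../frontend-app"
  vpc_id = module.network.vpc_id
}
```

Running plan/apply from this root then covers infra and both apps in one state, at the cost of every change touching everything.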
I am migrating some manually provisioned infrastructure over to Terraform.
Currently I am manually defining the Terraform resources in .tf files and importing the existing infrastructure into state with terraform import. I then run terraform plan multiple times, each time modifying the local .tf files until they match the existing infrastructure.
How can I speed this process up by downloading the remote state directly into a .tf resource file?
The mapping from the configuration as written in .tf files to real infrastructure that's indirectly represented in state snapshots is a lossy one.
A typical Terraform configuration has arguments of one resource derived from attributes of another, uses count or for_each to systematically declare multiple similar objects, and might use Terraform modules to decompose the problem and reuse certain components.
All of that context is lost in the mapping to real remote objects, and so there is no way to recover it and generate idiomatic .tf files that would be ready to use. Therefore you would always need to make some modifications to the configuration in order to produce a useful Terraform configuration.
While keeping that caveat in mind, you can review the settings for objects you've added with terraform import by running the terraform show command. Its output is intended to be read by humans rather than machines, but it presents the information in a Terraform-language-like format, so what it produces can be a starting point for a Terraform configuration. The caveat is that it won't always be valid as-is: it will typically need at least some adjustments before terraform plan accepts it and before it is useful for ongoing use.
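For instance, the output for an imported bucket might look roughly like the sketch below (resource names and attributes are illustrative); computed, read-only attributes have to be stripped before terraform plan will accept the configuration:

```hcl
# Rough shape of `terraform show` output after importing a bucket (illustrative):
#
#   resource "aws_s3_bucket" "deployments" {
#     arn    = "arn:aws:s3:::my-deployments"   # computed -- must be removed
#     id     = "my-deployments"                # computed -- must be removed
#     bucket = "my-deployments"
#   }

# What you'd actually keep in your .tf file:
resource "aws_s3_bucket" "deployments" {
  bucket = "my-deployments"
}
```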
I recreated a state file for existing infrastructure using a third-party tool, i.e. terraformer. Now I want to move the .tfstate to an azurerm backend and manage it from there.
If I just copy the file, i.e. mystate.tfstate, from local into the storage account container with the same file name/key as in the backend configuration, will it work, or do I need to do something else to achieve it?
I don't want to risk the state file or infrastructure by trying to do something that isn't sure to work as I expect.
Terraform has some automatic migration behavior built in to terraform init.
Based on your description, it sounds like so far you've been using local state storage, and so the latest state snapshot is in a .tfstate file on your local system and you probably don't have a backend block in your configuration yet, since local storage is the default.
Before beginning this process, I suggest first making a copy of your state file in a safe place so that you can experiment more confidently. This process should not risk your existing state file, but it can't hurt to be careful if you've invested significant work in constructing that state file already.
Next, add a backend "azurerm" block to tell Terraform it should use that backend. Refer to the documentation to see which settings you'll need to set and what other preparation steps you may need to make first, such as establishing a new storage container.
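A minimal block might look like the following; all values are placeholders, so check the azurerm backend documentation for the full list of required settings:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "my-rg"           # placeholder values throughout
    storage_account_name = "mystatesa"
    container_name       = "tfstate"
    key                  = "mystate.tfstate" # the blob name the state will live under
  }
}
```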
If you've been using local state then you will presumably have a terraform.tfstate file in your current working directory, which Terraform will check for in the next step. If you've renamed that file at any point so far, you'll need to rename it back to terraform.tfstate to match the expectations of Terraform's local state storage implementation.
If you now run terraform init, Terraform should notice the following two things:
You have a backend block but the current working directory doesn't currently have an initialized backend connection.
You have an existing terraform.tfstate file in your working directory.
With those two things being true, Terraform will propose to migrate your state from the local backend to the azurerm backend. You can follow the steps it proposes and answer the prompts that appear, after which you should find the same state snapshot stored in your configured Azure storage container.
Once you've confirmed that the object is present in Azure storage, you can delete the terraform.tfstate file, since Terraform will no longer refer to it.
I don't want to risk the state file or infrastructure by trying to do something that isn't sure to work as I expect.
Make a backup of the state file first, and then you won't be risking the state file.
As long as you aren't running an apply command, you won't be risking the infrastructure. And even if you are running an apply command, you will be able to review the plan before proceeding.
So just (always) backup your state file, and always review a plan before applying it, and there is no risk.
Background:
I have a shared module called "releases". releases contains the following resources:
aws_s3_bucket.my_deployment_bucket
aws_iam_role.my_role
aws_iam_role_policy.my_role_policy
aws_iam_instance_profile.my_instance_profile
These resources are used by EC2 instances belonging to an ASG to pull code deployments when they provision themselves. The release resources are created once and will rarely or never change. This module is one of a handful used inside an environment-specific project (qa-static) that has its own tfstate file in AWS.
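For context, qa-static consumes the module roughly like this (paths and names are illustrative):

```hcl
# qa-static/main.tf (sketch)
module "releases" {
  source = "../modules/releases"
}
```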
Fast Forward: It's now time to create a "prd-static" project. This project wants to re-use the environment agnostic AWS resources defined in the releases module. prd-static is basically a copy of qa with beefed up configuration for the database and cache server, etc.
The Problem:
prd-static sees the environment-agnostic AWS resources defined in the "releases" module as new resources that don't exist in AWS yet. An init and plan call shows that it wants to create these from scratch. It makes sense to me that Terraform doesn't know no changes should be applied, since prd-static has its own tfstate and tfstate is essentially the system of record. But ideally Terraform would use AWS as the source of truth for existing resources and their configuration.
If we try to apply the plan as is, the prd-static project just bombs out with an Entity Already Exists error. Leading me to this post:
what is the best way to solve EntityAlreadyExists error in terraform?
^-- logically I could import these resources into the tfstate file for prd-static and be on my merry way. Then both projects would know about the resources and, in theory, would only apply updates if the configuration had changed. I was able to import the bucket and the role and then re-run the plan.
Now terraform wants to delete the s3 bucket and recreate the role. That's weird - and not at all what I wanted to do.
TLDR: It appears that while modules like to be shared, modules that create single re-usable resources (like an S3 bucket) really don't want to be shared. It looks like I need to pull the environment-agnostic static resources into their own project with its own tfstate that can be used independently, rather than try to share the releases module across environments. Environment-specific stuff that depends on the release resources can reference them via their outputs in my build process.
Should I be able to define a resource in a module, like an S3 bucket, where the same instance is used across Terraform projects that each have their own tfstate file (remote state in S3)? Because I cannot.
If I really shouldn't be able to do this is the correct approach to extract the single instance stuff into its own project and depend on the outputs?
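In other words, the environment projects would consume the extracted project's outputs via something like this (bucket/key and the instance_profile_name output are illustrative assumptions):

```hcl
# In qa-static or prd-static: read outputs from the standalone releases project
data "terraform_remote_state" "releases" {
  backend = "s3"
  config = {
    bucket = "my-terraform-states"          # illustrative
    key    = "releases/terraform.tfstate"
    region = "us-east-1"
  }
}

# e.g. attach the shared instance profile by name
# (assumes the releases project exports an `instance_profile_name` output)
resource "aws_launch_configuration" "app" {
  name_prefix          = "app-"
  image_id             = "ami-12345678"     # illustrative
  instance_type        = "t3.micro"
  iam_instance_profile = data.terraform_remote_state.releases.outputs.instance_profile_name
}
```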
Right now I have a Terraform project consisting of multiple directories:
root
|- stack
|- applications
|- app1
|- app2
|- app3
Stack contains all of the common dependencies for the apps.
app1, app2, and app3 use remote state to refer to stack resources.
I currently have to run terraform apply in each of the four directories separately with separate .tfvars. (This was originally by design.)
I would like to refactor this project with a single .tf file in the root directory and the stack/app* directories as modules. I know how to do that just fine from a TF standpoint, but since this project is already deployed in two different environments, I'm trying to figure out the best way to migrate my existing stack/app* resources into a new combined state file or at least automate all the terraform import commands I'm going to need to run.
Any ideas?
You're probably going to want to use terraform state mv to move the state from each of the app states to the stack state. The exact usage will depend on the type of backend you use for your state. You can see an example of that from the documentation here. Be sure to backup all of your state files before doing this so that you can easily recover in the event that something goes wrong.
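As a sketch, with the older -state/-state-out flags (newer versions steer you toward terraform state pull/push when using remote backends), moving one resource from an app state into the stack state and re-addressing it under a module looks like this; all paths and resource addresses are illustrative:

```shell
# run from the app1 directory; paths and addresses are illustrative
terraform state pull > app1.tfstate            # snapshot the remote state locally first

terraform state mv \
  -state=app1.tfstate \
  -state-out=../stack/terraform.tfstate \
  aws_instance.app1 \
  module.app1.aws_instance.app1                # new address under the combined root
```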
Vincent is right, but terraform state mv alone won't do this; the idea here is to merge two states together.
My first suggestion is to combine the entire project into a single main.tf, variables.tf, etc., or to make each part a module and call them one after another from a single main.tf.
Then run terraform init, which will print something like:
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "gcs" backend to the
newly configured "gcs" backend. An existing non-empty state already exists in
the new backend. The two states have been saved to temporary files that will be
removed after responding to this query.
Previous (type "stateA"): C:\Users\xx\xx\xx\Temp\terraform447677407\xx.tfstate
New (type "stateB"): C:\Users\xx\xx\xx\Temp\terraform447677407\yy.tfstate
Do you want to overwrite the state in the new backend with the previous state?
Enter "yes" to copy and "no" to start with the existing state in the newly
configured "stateB" backend.
Enter a value:
So you can decide whether to merge the states or start with a new one. I recommend backing up your states before doing this.
More information can be seen here: https://www.terraform.io/docs/cli/commands/state/mv.html#example-move-a-module-to-another-state
Wow, it's been a long time.
I ended up writing a script to "harvest" the resource IDs from my old state using terraform state show and then another script to terraform import them into my new project's state. It wasn't particularly pleasant, but in the end it did what I needed it to do. That was Terraform v0.11.x stuff, too. State has become a lot easier to manage since then.
Terraform addresses re-use of components via modules. So I might put the definition for an AWS Auto Scaling group in a module and then have multiple top-level resource files use that ASG. Fine so far.
My question is: how to use Terraform to group and organize multiple top level resource files? In other words, what is the next level of organization?
We have a system that has multiple applications...each application would correspond to a TF resource file and those resource files would use the modules. We have different customers that use different sets of the applications so we need to keep them in their own resource files.
We're asking if there is a TF concept for deploying multiple top level resource files (applications to us).
At some point you can't abstract any further, or it doesn't make sense to. You will always have a top-level resource file (i.e. main.tf) describing which modules to use. You can organize these top-level resource files via:
Use Terraform Workspaces
You can use workspaces - in your case, maybe one per client name. Workspaces each have their own backing Terraform state. Then you can use the terraform.workspace variable in your Terraform code. Workspaces can also be used to target different environments.
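For example, terraform.workspace can be interpolated into resource names so a single configuration serves each client (naming scheme here is illustrative):

```hcl
# Select or create a workspace per client first, e.g.:
#   terraform workspace new clienta
#   terraform workspace select clienta

resource "aws_s3_bucket" "assets" {
  bucket = "assets-${terraform.workspace}"  # yields assets-clienta, assets-clientb, ...
}
```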
Use Separate Statefiles
Have one top-level statefile for each of your clients, e.g. via clienta.main.tf, clientb.main.tf, etc. You could have them all in the same repository and use a script to run them individually, or in whatever pattern you prefer; or you could have one repository per client.
You can also combine workspaces with separate statefiles to target individual environments, e.g. staging and production, for each client. The Terraform docs go into more detail about workspaces and some of their downsides.