I have an application that runs terraform apply in a directory and can also run terraform destroy. While testing the application, I accidentally interrupted the process during an apply.
Now it seems to be stuck with a partially created instance: Terraform recognizes the name of the instance I was creating/destroying, and when I try to apply it says an instance with that name already exists, but destroy says there is nothing to destroy. So I can't do either. Is there any way to unsnarl this?
I'm afraid the only option is the following:
Run terraform state rm RESOURCE, for example: terraform state rm aws_ebs_volume.volume.
Then manually remove the resource from your cloud provider.
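A hedged sketch of those two steps, assuming the half-created resource did make it into state as an EC2 instance at the address aws_instance.app (the address and instance id below are placeholders):
terraform state list                                           # confirm what Terraform still tracks
terraform state rm aws_instance.app                            # drop the half-created resource from state
aws ec2 terminate-instances --instance-ids i-0abc1234567890def # then delete the real resource manually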
From the project directory, you can run the following to view all resources still tracked in state:
$ terraform state list
To destroy each resource, run the following on each individual resource:
$ terraform destroy --target=resource.name
You could write a script to loop through the terraform state list output if there are a lot of them.
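A minimal sketch of such a loop (it assumes every address in the state really should be destroyed, and that no address contains spaces):
for resource in $(terraform state list); do
  terraform destroy -auto-approve -target="$resource"
done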
I was able to get out of this state by removing the trailing comma from the cloud provider resource definition (on AWS), then refreshing the state with terraform refresh. After that I was able to plan and apply again.
I am working on upgrading templates from Terraform 0.12.31 to 0.13.7, and we need an automated way of dealing with deployments that were created under the older version.
One issue I am working through is that I removed all use of the null provider in the move. When I attempt to plan or apply against a state file created on 0.12 while using Terraform 0.13, I receive the following errors:
$ terraform plan --var-file MY_VAR_FILE.json
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
Error: Provider configuration not present
To work with
module.gcp_volt_site.module.ce_config.data.null_data_source.hosts_localhost
its original provider configuration at
provider["registry.terraform.io/-/null"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy
module.gcp_volt_site.module.ce_config.data.null_data_source.hosts_localhost,
after which you can remove the provider configuration again.
Error: Provider configuration not present
To work with
module.gcp_volt_site.module.ce_config.data.null_data_source.cloud_init_master
its original provider configuration at
provider["registry.terraform.io/-/null"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy
module.gcp_volt_site.module.ce_config.data.null_data_source.cloud_init_master,
after which you can remove the provider configuration again.
Error: Provider configuration not present
To work with
module.gcp_volt_site.module.ce_config.data.null_data_source.vpm_config its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.gcp_volt_site.module.ce_config.data.null_data_source.vpm_config, after
which you can remove the provider configuration again.
My manual solution is to run terraform state rm on all the modules listed:
terraform state rm module.gcp_volt_site.module.ce_config.data.null_data_source.vpm_config
terraform state rm module.gcp_volt_site.module.ce_config.data.null_data_source.hosts_localhost
terraform state rm module.gcp_volt_site.module.ce_config.data.null_data_source.cloud_init_master
I would like to know how to do this automatically to enable a script to make these changes.
Is there some kind of terraform command I can use to list these removed resources without the extra text, so I can loop through runs of terraform state rm to remove them from the state file?
Or is there some kind of terraform command that can automatically do this in a generic manner like terraform state rm -all-not-present?
This gives me a list I can iterate through using terraform state rm $MODULE_NAME:
$ terraform state list | grep 'null_data_source'
module.gcp_volt_site.module.ce_config.data.null_data_source.cloud_init_master
module.gcp_volt_site.module.ce_config.data.null_data_source.hosts_localhost
module.gcp_volt_site.module.ce_config.data.null_data_source.vpm_config
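A minimal sketch of that loop (it assumes every address matching the grep really should be removed from state):
for module_name in $(terraform state list | grep 'null_data_source'); do
  terraform state rm "$module_name"
done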
There are a few possibilities. Without the source code of the module it's difficult to say, so providing that might be helpful.
A couple of suggestions:
Cleaning Cache
Remove the .terraform directory (normally in the directory you run init, plan, and apply from). An older version of the module that still contains the null references could be cached there.
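A minimal sketch, assuming you run it from the working directory (this only deletes Terraform's local cache of modules and providers, not your code or state):
rm -rf .terraform
terraform init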
State Refresh
Using terraform refresh you should be able to scan your infrastructure and bring the state into alignment.
This can be dangerous and is not recommended by HashiCorp.
Manual removals
The terraform state rm command, as you've suggested, could help here and is fairly safe: it has a --dry-run option and you point it at specific resources.
Use it to manually remove those resources within the modules from state. Again, check that the module reference isn't pointing to (or caching) an old version of the module, or they will just be recreated.
No, there's no rm --all-missing, but if you know it's only the null data sources that will be missing, you could use terraform state list to list those resources and then iterate over them, removing each one in a loop (as in the grep example above).
We use Azure blob storage as our Terraform remote state backend, and I'm trying to move state info about specific existing resources to a different container in that Storage Account. The new container (terraforminfra-v2) already exists, and the existing Terraform code points to the old container (terraforminfra). I've tried the following steps:
Use "terraform state pull > migrate.tfstate" to create a local copy of the state data in terraforminfra. When I look at this file, it seems to have all the proper state info.
Update the Terraform code to now refer to container terraforminfra-v2.
Use "terraform init" which recognizes that the backend config has changed and asks to migrate all the workspaces. I enter 'no' because I only want specific resources to change, not everything from all workspaces.
Use the command "terraform state push migrate.tfstate".
The last command seems to run for a bit like it's doing something, but when it completes (with no hint of an error), there still is no state info in the new container.
Is it because I answer 'no' in step 3? Does that mean it doesn't actually change which remote state it "points" to? Related to that, is there any way with the "terraform state" command to tell where your state is stored?
Am I missing a step here? Thanks in advance.
OK, I think I figured out how to do this (or at least, these steps seemed to work):
rename the current folder with the .tf files to something else (like folder.old)
use "terraform state pull" to get a local copy of the state for the current workspace (you need to repeat these steps for each workspace you want to migrate)
create a new folder with the original name and copy your code to it.
create a new workspace with the same name as the original.
modify the code for the remote backend to point to the new container (or whatever else you're changing about the name/location of the remote state).
run "terraform init" so it's pointing to the new remote backend.
use "terraform state push local state file" to push the exported state to the new backend.
I then used "terraform state list" and "terraform plan" in the new folder to sanity check that everything seemed to be there.
I am a beginner with Terraform, working in a (dangerous) live environment.
I ran a script to create 3 new accounts in AWS Organizations. Two got created, and due to a service limit error I couldn't create the third.
To add to it, there was a mistake in the parent-id in the script. I rectified the accounts on the console by moving them to the right parent ID.
That leaves me with one account to be created.
After making the necessary changes to the service limit, I tried running the script again. The plan shows 3 accounts to be added and 2 to be destroyed. There's no way these accounts can be deleted and re-added. (Since the script is now version controlled, I can't run it just for this one account.)
Here's what I did: I modified the Terraform state (the parent id) in the S3 bucket and ensured that terraform show reflects the new changes. But terraform plan still shows 3 accounts to add and 2 to destroy.
How do I get this fixed? Any help is deeply appreciated.
Thanks.
The code is the source of truth when working with Infrastructure as Code; even if you change the state file, you need to update the code as well.
There is no way Terraform can update the source code when it detects drift on your resources.
So you need to:
1- Write the manual changes you made in AWS into the Terraform code (see the sketch below).
2- Run terraform plan. It will refresh the state and show you whether there is still a difference.
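A hedged sketch of step 1 for this case: make parent_id in the account resources match the OU the accounts were manually moved to (the resource name, email, and IDs below are placeholders):
resource "aws_organizations_account" "dev" {
  name  = "dev-account"
  email = "aws-dev@example.com"
  # Point at the OU the account was manually moved to in the console,
  # so the code matches reality and plan stops proposing a destroy/create.
  parent_id = "ou-abcd-12345678"
}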
If you are modifying the state file like I did, do it at your own risk. I followed "how to clean your terraform state" and performed the surgery!
Ensure that the code reflects the changes properly so they get picked up.
I have a huge Terraform module setup to launch an entire infrastructure. Post provisioning, many changes were applied to the setup manually. I updated the state file to be aware of these changes using the terraform refresh command.
Now I've added new components to my Terraform. When I execute terraform plan, it tries to reset the old, updated resources to their initial state (because that is what is defined in my Terraform code). Is there any way for Terraform to ignore the changes in the old resources and create only the newly added components?
I found a solution myself. There is an option called ignore_changes under the lifecycle block that should be defined for all the resources you expect to be changed by external methods.
Reference Link: https://www.terraform.io/docs/configuration/resources.html#ignore_changes
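For example, a minimal sketch on a hypothetical EC2 instance, using the 0.12+ attribute-reference syntax (older versions take quoted attribute names instead); the resource and attribute names are placeholders:
resource "aws_instance" "app" {
  # placeholder values for illustration only
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  lifecycle {
    # Drift on these attributes (e.g. manual edits in the console) is ignored by plan/apply
    ignore_changes = [instance_type, tags]
  }
}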
I sometimes see that terraform apply has a different "plan" than terraform plan.
For instance, today one of the TF files I tried to terraform apply resulted in only 1 "change" and 1 "add", while it showed "3 to add", "1 to change" and "3 to destroy" when using terraform plan.
I have been using Terraform for just two months. Is this intended behavior in Terraform?
Could anyone give an explanation for this behavior? Thanks!
Terraform version: 0.11.13
This is unexpected behaviour, but the best practice is to run:
terraform plan -out deploy.tfplan
This will save the plan in the deploy.tfplan file.
Then run terraform apply deploy.tfplan.
This ensures that exactly the plan you reviewed is executed, every time.
This is not intended behaviour of Terraform unless something is wrong somewhere; I have never seen this kind of issue until now. Did you edit or delete your .tfstate state file after you ran the terraform plan command? If you keep observing this issue, you could open an issue with the product owner, but I don't think this is a bug and you will probably not face it again.
Try to follow these steps when performing a terraform apply:
First, make sure the changes to the Terraform file have been saved.
Run terraform plan before running terraform apply.
It sounds like some of the files that changes were made to are not saved in the current Terraform run.
Can you explain the full scenario? Normally, in my experience, they are the same.
The only differences I can see: either you are using a variable file with plan and apply and some variables affect which resources are created, or you are using a remote location for state and some other job/person is also updating that state.
If you are running everything locally, it should not happen like this.
Terraform builds a graph of all the resources and then creates the non-dependent resources in parallel to make resource creation slightly more efficient. If any resource creation fails, it leaves Terraform in a partially applied state, which gets recorded in the tfstate file. After fixing the issue with the resource, when you re-apply the .tf files it shows you only the new resources to be changed. In your case, I think it has more to do with the fact that some resources have a destroy-before-create replacement policy, which shows up in the result: when you apply a change to one such resource, it ends up showing 1 resource destroyed and 1 created. When this occurs alongside some non destroy-before-create resources, you end up with output like what you mentioned above.
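For reference, that replacement ordering is a per-resource lifecycle setting; a minimal sketch on a hypothetical resource that flips the default destroy-then-create ordering (all names and values are placeholders):
resource "aws_launch_configuration" "example" {
  # placeholder values for illustration only
  image_id      = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  lifecycle {
    # Create the replacement first, then destroy the old object,
    # instead of the default destroy-before-create ordering.
    create_before_destroy = true
  }
}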
Did you comment out any of the resources in the Terraform file before triggering terraform apply?
If yes, please check that, because commenting out resources in an existing Terraform file will result in Terraform destroying those resources.
I have been using Terraform for quite a long time and this is not intended behaviour. It looks like something changed in between plan and apply.
What you can do is save the plan to a file using
terraform plan -out plan.tfplan
and then deploy using the same file:
terraform apply plan.tfplan