Terraform Apply has different "plan" than Terraform Plan

I sometimes see that terraform apply produces a different "plan" than terraform plan.
For instance, today one of the TF files that I tried to apply resulted in only 1 "change" and 1 "add", while terraform plan had reported "3 add", "1 change" and "3 destroy".
I have been using Terraform for just two months. Is this intended behavior in Terraform?
Could anyone give an explanation for this behavior? Thanks!
Terraform version: 0.11.13

This is unexpected behaviour, but the best practice is to run:
terraform plan -out deploy.tfplan
This saves the plan to the deploy.tfplan file.
Then run terraform apply deploy.tfplan.
This ensures that the plan you reviewed is exactly what gets executed, every time.

This is not intended behaviour of Terraform unless something is wrong somewhere. I have never seen this kind of issue until now. Did you edit or delete your .tfstate file after running the terraform plan command? If you are still observing this issue, you could open an issue with the product owner. But I don't think this is a bug, and you will probably never face this kind of issue again.

Try to follow these steps when performing a terraform apply:
First, make sure the changes to the Terraform files have been saved.
Run terraform plan before running terraform apply.
It sounds like some of the files you changed were not saved when the current plan was produced.

Can you explain the full scenario? Normally, in my experience, they are the same.
The only differences I can think of: either you are using a variables file with plan and apply and some variables affect which resources are created, or you are using a remote location for state and another job or person is also updating that state.
If you are running everything locally, this should not happen.
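If a variables file is involved, one way to rule that out is to bake the variables into a saved plan so apply cannot pick up different values. A minimal sketch, assuming a hypothetical prod.tfvars file:
terraform plan -var-file=prod.tfvars -out=tfplan
terraform apply tfplan
Applying the saved tfplan file uses the variable values that were captured at plan time.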

Terraform builds a graph of all the resources and then creates the non-dependent resources in parallel to make resource creation more efficient. If any resource creation fails, Terraform is left in a partially applied state, which gets recorded in the tfstate file. After fixing the issue with the resource, when you reapply the .tf files it shows only the remaining resources to be changed. In your case, I think it also has to do with the fact that some resources are replaced with a "destroy before create" policy, which shows up in the result: when you apply a change to one such resource, it ends up showing 1 resource destroyed and 1 created. When this is mixed with resources that are not replaced "destroy before create", you end up with output like the one you described above.
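As an illustration of that replacement behaviour, the order can be flipped per resource with the lifecycle meta-argument. This is only a hedged sketch (the resource and AMI below are hypothetical, not from the original question):
resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  lifecycle {
    # By default Terraform destroys the old object before creating its
    # replacement; this asks it to create the new one first where the
    # provider supports it.
    create_before_destroy = true
  }
}
With create_before_destroy set, a replacement is executed as a create followed by a destroy instead of the other way around, which changes how the counts appear in the plan summary.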

Did you comment out any of the resources in the Terraform file before triggering terraform apply?
If yes, please check that, because commenting out resources in an existing Terraform file will result in those resources being destroyed.
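For illustration (the resource below is hypothetical), if a bucket that already exists in the state file is commented out of the configuration:
# resource "aws_s3_bucket" "logs" {
#   bucket = "my-logs-bucket"
# }
then the next terraform plan will show that bucket as 1 to destroy, because Terraform treats the missing block as a request to remove the resource.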

I have been using Terraform for quite a long time, and this is not intended behaviour. It looks like something changed between plan and apply.
What you can do is save the plan to a file using
terraform plan -out plan.tfplan
and then apply using the same file:
terraform apply plan.tfplan

Related

How to destroy resources created using terraform in azure devops pipeline by using PowerShell

I have a project where I'm using Terraform in an Azure DevOps pipeline to create infrastructure, but I want to destroy the infrastructure from a PowerShell script running locally.
So the PowerShell command that I want to run is this:
$TerraCMD = "terraform destroy -var-file C:/Users/Documents/Terraform/config.json"
Invoke-Expression -Command $TerraCMD
But I get the following output:
No changes. No objects need to be destroyed.

Either you have not created any objects yet or the existing objects were
already deleted outside of Terraform.

Warning: Value for undeclared variable

The root module does not declare a variable named "config" but a value was
found in file
"C:/Users/mahera.erum.baloch/source/repos/PCFA-CloudMigration/On-Prem-Env/IaC/Terraform/config.json".
If you meant to use this value, add a "variable" block to the
configuration.

To silence these warnings, use TF_VAR_... environment variables to provide
certain "global" settings to all configurations in your organization. To
reduce the verbosity of these warnings, use the -compact-warnings option.

Destroy complete! Resources: 0 destroyed.
I know this is probably because I created the resources through the pipeline and not from the local repository, but is there a way to do this?
Any help would be appreciated.
P.S. The State file is saved in the Azure Storage.
I'm going to assume that your code is kept in a repo that you have access to, since you mentioned that it's being deployed from Terraform running in an Azure DevOps Pipeline.
As others mentioned, the state file AND your Terraform code are your source of truth. Hence, you'd need both the PowerShell script and the pipeline to refer to the same state file and code to achieve what you're trying to do.
For the terraform destroy to run, it would need access to both your Terraform code and the state file so that it can compare what needs to be destroyed.
Unless your setup is very different from this, you could have your PowerShell script just git clone or git pull the repo, depending on your requirements, and then execute a terraform destroy on that version of the code. Your state file will then be updated accordingly.
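A hedged sketch of what that could look like (the repository URL and paths are hypothetical, and it assumes the backend block in the repo already points at the same Azure Storage state, with credentials available locally, e.g. via az login):
git clone https://dev.azure.com/your-org/your-project/_git/your-repo
cd your-repo/On-Prem-Env/IaC/Terraform
terraform init
terraform destroy -var-file="C:/Users/Documents/Terraform/config.json"
Because init attaches to the same remote state the pipeline wrote, the destroy then sees the resources that were created from the pipeline.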
I've just run into the problem of keeping Terraform state from an Azure Pipeline build. Repeated builds of the pipeline fail because the resource group already exists, but the Terraform state is not kept by the build pipeline. And I can find no way to execute terraform destroy on the pipeline even if I had the state.
One approach I found in chapter 2 of this book is storing terraform.tfstate in a remote backend. This looks like it will keep the .tfstate across multiple builds of the pipeline, and make it reachable from elsewhere too.
I don't know yet if it will allow a terraform destroy.
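For reference, a remote backend like that is just a backend block in the configuration. A hedged sketch for Azure Storage (all the names below are hypothetical placeholders):
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
Any machine or pipeline that runs terraform init against this configuration then reads and writes the same state, which is what a later terraform destroy needs.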

Terraform: Stop printing plan to console when apply or destroy are called

I recently upgraded from Terraform 14.9 to 15.4.
In 14.9, I could use the -auto-approve option on either the apply or destroy command, and this would also stop Terraform printing its plan to the console window. However, v15.4 no longer does this and instead outputs the whole plan. This unnecessary printing slows down the process when deploying a lot of resources, and for what I am doing I don't care what it is telling me.
Therefore, is there a command/option to revert to the 14.9 behaviour so I don't see the plan when apply or destroy are called?
EDIT: So the calls I make are as follows:
>terraform plan -lock=false -out=tfplan -input=false
then
>terraform apply -lock=false -input=false tfplan
In v15.4, calling the apply command results in the plan being printed to the console window, which v14.9 did not do.
Note 1: I have also had to add -lock=false to stop a locking error that occurs in v15.4. Although not ideal, this is not on a network shared by other users, so it is fine for my situation.
Note 2: I did previously have -auto-approve, but the new -input=false option overrides it. I was following content from the apply page and the associated automation page, but these still result in the plan being printed to the console.
I raised this issue as a bug with Hashicorp here.
Their response was that this is an intentional change, because the apply action in isolation didn't explain what it was doing. The rationale is that -auto-approve should only skip the approval prompt, not the plan output entirely.
Nonetheless, they are taking my issue into consideration.
Can I also say how pleasant the experience of dealing with Hashicorp has been (they have been very responsive).
All the best

Terraform show and plan not matching

I am a beginner with Terraform in a (dangerous) live environment.
I ran a script to create 3 new accounts in AWS Organizations. Two got created, and due to a service limit error I couldn't create the third.
To add to it, there was a mistake in the parent ID in the script. I rectified the accounts in the console by moving them to the right parent ID.
That leaves me with one account to be created.
After making the necessary changes to the service limit, I tried running the script again. The plan shows 3 accounts to be added and 2 to be destroyed. There's no way these accounts can be deleted and re-added. (Since the script is now version controlled, I can't run it just for this one account.)
Here's what I did: I modified the Terraform state (the parent ID) in the S3 bucket and ensured that terraform show reflects the new changes. However, terraform plan still shows 3 accounts to add and 2 to destroy.
How do I get this fixed? Any help is deeply appreciated.
Thanks.
The code is the source of truth when working with Infrastructure as Code; even if you change the state file, you need to update the code as well.
There is no way Terraform can update the source code when it detects drift on your resources.
So you need to:
1- Write the manual changes you made in AWS into the Terraform code (see the sketch below).
2- Do a terraform plan. It will refresh the state and show you whether there is still a difference.
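As a hedged sketch of step 1 (the account name, email and OU ID are hypothetical), the parent ID that was fixed in the console needs to appear in the resource definition too, so plan stops seeing a difference:
resource "aws_organizations_account" "example" {
  name      = "dev-account"
  email     = "dev@example.com"
  # Must match the OU the account was actually moved to in the console
  parent_id = "ou-abcd-12345678"
}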
If you are modifying the state file like me, do it at your own risk. I followed "how to clean your terraform state" and performed the surgery!
Ensure that the code reflects the changes properly so that Terraform picks them up.
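For anyone attempting the same kind of manual surgery, a hedged sketch of a somewhat safer workflow than editing the object in S3 directly (and not necessarily what was done above) is to pull the state, edit it locally, and push it back:
terraform state pull > current.tfstate
# edit current.tfstate, then increment its "serial" field before pushing
terraform state push current.tfstate
Terraform performs lineage and serial checks on push, which adds a small safety net compared to overwriting the state object by hand.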

Can't create multiple Cosmos MongoDB collections at the same time

Trying to create two Collections at the same time throws me this error:
The specified type system value 'TypedJsonBson' is invalid.
Judging by the response log, and the fact that the error is occurring at the apply phase, I suspect it is something with the API.
Samples:
Configuration to simulate the problem: main.tf
Terraform logs: run-pAXmLixNWWimiHNs-apply-log.txt
Workaround
It is possible to avoid this problem by creating one collection at a time, forcing serial creation with an explicit depends_on:
depends_on = [
azurerm_cosmosdb_mongo_collection.example
]
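A hedged, more complete sketch of that workaround (the resource names are hypothetical, and the Cosmos DB account and Mongo database are assumed to be defined elsewhere in the configuration):
resource "azurerm_cosmosdb_mongo_collection" "example" {
  name                = "collection-one"
  resource_group_name = azurerm_cosmosdb_account.example.resource_group_name
  account_name        = azurerm_cosmosdb_account.example.name
  database_name       = azurerm_cosmosdb_mongo_database.example.name
  shard_key           = "_id"
}

resource "azurerm_cosmosdb_mongo_collection" "example2" {
  name                = "collection-two"
  resource_group_name = azurerm_cosmosdb_account.example.resource_group_name
  account_name        = azurerm_cosmosdb_account.example.name
  database_name       = azurerm_cosmosdb_mongo_database.example.name
  shard_key           = "_id"

  # The explicit dependency makes Terraform create this collection only
  # after the first one exists, avoiding the concurrent-creation error.
  depends_on = [
    azurerm_cosmosdb_mongo_collection.example
  ]
}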
I tried your Terraform main.tf files in my local PowerShell and they work fine, so the Terraform configuration file should be correct.
I would suggest running terraform apply in the Azure Cloud Shell. You could also remove the old terraform.tfstate file and the .terraform folder, re-run terraform init locally, or look for other causes in your working environment.
Yes, if Terraform has a means to specify that parent resources need to exist before child resources can be created, then you should use it, because ARM requires this for any resource to be created.

Interrupted terraform apply, now cannot destroy or apply

So I have an application that runs terraform apply in a directory and can then also run terraform destroy. I was testing the application, and I accidentally interrupted the process while running apply.
Now it seems to be stuck with a partially created instance: Terraform recognizes the name of the instance I was creating/destroying, and when I try to apply it says that an instance with that name already exists. But destroy says there is nothing to destroy, so I can't do either. Is there any way to unsnarl this?
I'm afraid that the only option is to:
Execute terraform state rm RESOURCE, for example: terraform state rm aws_ebs_volume.volume.
Manually remove the resource from your cloud provider.
You can run the command below from the project directory to view all resources still tracked in the state:
$ terraform state list
To destroy each resource, run the following on each individual resource:
$ terraform destroy --target=resource.name
You could write a script to loop through the terraform state list output if there are a lot of them; a rough sketch is below.
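A hedged sketch of such a loop (plain bash, destroying every resource the state still tracks, so use with care):
for r in $(terraform state list); do
  terraform destroy -target="$r" -auto-approve
done
Each iteration targets a single address from the state listing and skips the interactive approval prompt.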
I was able to get out of this state by making sure the trailing comma was removed from the cloud provider resource definition (on AWS). Then I refreshed the state with terraform refresh. After that I was able to plan and apply again.
