Trying to create two Collections at the same time throws me this error:
The specified type system value 'TypedJsonBson' is invalid.
Judging by the response log, and the fact that the error occurs during the apply phase, I suspect the problem lies with the API.
Samples:
Configuration to simulate the problem: main.tf
Terraform logs: run-pAXmLixNWWimiHNs-apply-log.txt
Workaround
It is possible to avoid this problem by creating one Collection at a time.
depends_on = [
azurerm_cosmosdb_mongo_collection.example
]
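For example, a minimal sketch of chaining two collections so they are created sequentially (the resource names and the parent account/database references are illustrative, and any other arguments from the sample main.tf are omitted):

resource "azurerm_cosmosdb_mongo_collection" "example" {
  name                = "collection-one"
  resource_group_name = azurerm_cosmosdb_account.example.resource_group_name
  account_name        = azurerm_cosmosdb_account.example.name
  database_name       = azurerm_cosmosdb_mongo_database.example.name
}

resource "azurerm_cosmosdb_mongo_collection" "example_two" {
  name                = "collection-two"
  resource_group_name = azurerm_cosmosdb_account.example.resource_group_name
  account_name        = azurerm_cosmosdb_account.example.name
  database_name       = azurerm_cosmosdb_mongo_database.example.name

  # Wait for the first collection so the two are not created in parallel.
  depends_on = [
    azurerm_cosmosdb_mongo_collection.example
  ]
}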
I tried your Terraform main.tf file in my local PowerShell and it works fine, so the Terraform configuration itself should be correct.
I would suggest running terraform apply in the Azure Cloud Shell. Alternatively, you could remove the old terraform.tfstate file and the .terraform folder, re-run terraform init locally, or look for other causes in your working environment.
Yes. If Terraform has a means to specify that parent resources need to exist before child resources can be created, then you should use it, because ARM requires the parent to exist before any child resource can be created.
Related
I am working on an open source project with Terraform that will allow me to set up ad-hoc environments through GitHub Actions. Each ad-hoc environment will correspond to a terraform workspace. I'm setting the workspace by exporting TF_WORKSPACE before running terraform init, plan and apply. This works the first time around. For example, I'm able to create an ad-hoc environment called alpha. In my S3 backend I can see that the state file is saved under the alpha folder. The issue is that when I run the same pipeline to create another ad-hoc environment called beta, I get the following message:
Initializing the backend...
╷
│ Error: Currently selected workspace "beta" does not exist
│
╵
Error: Process completed with exit code 1.
Here is the section of my GitHub action that is failing: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/ad_hoc_env_create_update.yml#L110-L142
I have been over this article: https://support.hashicorp.com/hc/en-us/articles/360043550953-Selecting-a-workspace-when-running-Terraform-in-automation but I'm still not sure what I'm doing wrong in my automation pipeline.
The alpha workspace did not exist beforehand either, yet the first run was able to create it and use it as the workspace. I'm not sure why other workspaces cannot be created using the same pipeline.
I got some help from @apparentlymart on the Terraform community forum. Here's the reply: https://discuss.hashicorp.com/t/help-using-terraform-workspaces-in-an-automation-pipeline-with-tf-workspace-currently-selected-workspace-x-does-not-exist/40676/2
In order to make the pipeline work I had to use terraform commands in the following order:
terraform init ...
terraform workspace new ${WORKSPACE} || echo "Workspace ${WORKSPACE} already exists or cannot be created"
export TF_WORKSPACE=$WORKSPACE
terraform apply ...
terraform output ...
This allows me to create multiple ad hoc environments without any errors. The code in my example project has also been updated with this fix: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/ad_hoc_env_create_update.yml#L110-L146
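Roughly, the relevant part of the job looks like the following shell sketch (the WORKSPACE value, backend settings and extra flags are placeholders, not the exact workflow contents):

#!/bin/bash
WORKSPACE="alpha"   # placeholder; one per ad hoc environment

# Initialize against the S3 backend first, without TF_WORKSPACE set,
# so init does not fail on a workspace that does not exist yet.
terraform init

# Create the workspace if it does not exist; ignore the error if it does.
terraform workspace new "${WORKSPACE}" || echo "Workspace ${WORKSPACE} already exists or cannot be created"

# Only now select the workspace for the remaining commands.
export TF_WORKSPACE="${WORKSPACE}"

terraform apply -auto-approve
terraform output -json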
I have a project where I'm using Terraform in an Azure DevOps pipeline to create infrastructure, but I want to destroy that infrastructure from a PowerShell script running locally.
The PowerShell command I want to run is this:
$TerraCMD = "terraform destroy -var-file C:/Users/Documents/Terraform/config.json"
Invoke-Expression -Command $TerraCMD
But I get the following output:
No changes. No objects need to be destroyed.

Either you have not created any objects yet or the existing objects were
already deleted outside of Terraform.

╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "config" but a value was
│ found in file
│ "C:/Users/mahera.erum.baloch/source/repos/PCFA-CloudMigration/On-Prem-Env/IaC/Terraform/config.json".
│ If you meant to use this value, add a "variable" block to the
│ configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide
│ certain "global" settings to all configurations in your organization. To
│ reduce the verbosity of these warnings, use the -compact-warnings option.
╵

Destroy complete! Resources: 0 destroyed.
I know this is probably because I created the resources through the pipeline and not from the local repository, but is there a way to do this?
Any help would be appreciated.
P.S. The State file is saved in the Azure Storage.
I'm going to assume that your code is kept in a repo that you have access to, since you mentioned that it's being deployed from Terraform running in an Azure DevOps Pipeline.
As others mentioned, the state file AND your Terraform code are your source of truth. Hence, both the PowerShell script and the pipeline need to refer to the same state file and the same code to achieve what you're trying to do.
For terraform destroy to run, it needs access to both your Terraform code and the state file so that it can work out what needs to be destroyed.
Unless your setup is very different from this, you could have your PowerShell script just git clone or git pull the repo, depending on your requirements, and then execute a terraform destroy on that version of the code. Your state file will then be updated accordingly.
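As a rough PowerShell sketch of that approach (the repository URL, local paths and variable file are placeholders, and it assumes you are authenticated to Azure, e.g. via az login or ARM_* environment variables):

# Placeholder repository and paths; adjust to your own setup.
git clone https://dev.azure.com/your-org/your-project/_git/your-repo C:\Temp\infra
Set-Location C:\Temp\infra\Terraform

# Re-initialize against the same azurerm backend the pipeline uses,
# so terraform destroy sees the state stored in Azure Storage.
terraform init

# Destroy using the same variable file the pipeline used.
$TerraCMD = "terraform destroy -var-file C:/Users/Documents/Terraform/config.json"
Invoke-Expression -Command $TerraCMD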
I've just run into the problem of keeping Terraform state from an Azure Pipeline build. Repeated builds of the pipeline fail because the resource group already exists, but the Terraform state is not kept by the build pipeline. And I can find no way to execute terraform destroy on the pipeline even if I had the state.
One approach I found in chapter 2 of this book is storing terraform.tfstate in a remote backend. This looks like it will keep .tfstate across multiple builds of the pipeline, and make it reachable from elsewhere too.
I don't know yet if it will allow a terraform destroy.
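For reference, a minimal azurerm backend block looks roughly like this (the resource group, storage account, container and key names are placeholders):

terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"  # placeholder
    storage_account_name = "tfstateaccount"      # placeholder
    container_name       = "tfstate"             # placeholder
    key                  = "infra.terraform.tfstate"
  }
}

With the state stored remotely like this, a terraform destroy run from another machine works against the same state, provided it uses the same code and backend configuration.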
Terraform v0.13.5
AzureRM v2.44.0
I use the AzureRM backend to store the tfstate file. My initial Terraform project had a master main.tf with some modules, and I used workspaces to separate the different environments (dev/qa). This created the tfstate files in a single container, appending the environment name to the name of the tfstate file. I recently changed the file structure of my Terraform project so that each environment has its own folder instead, where I change directory to that environment folder and run terraform apply.
But now Terraform wants to create all of the resources as if they don't exist, even though my main.tf file is the same. It seems like I'm missing something here: I don't really need workspaces anymore, but I need to use my existing tfstate files in Azure so Terraform knows about the resources that already exist.
What am I missing here?
Okay, I feel silly. I didn't realize that I never ran terraform workspace select when I went to the new folder. I guess what I should do is figure out how to do away with workspaces.
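For anyone hitting the same thing, the commands in the new per-environment folder are roughly the following (the folder layout and workspace name are hypothetical; use whatever workspaces you created originally):

cd environments/dev        # hypothetical per-environment folder
terraform init
terraform workspace list   # shows the workspaces recorded in the backend
terraform workspace select dev
terraform plan             # should now show no unexpected creations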
I have a huge Terraform module set up to launch an entire infrastructure. Post-provisioning, many changes were applied to the setup manually. I updated the state file to be aware of these changes using the terraform refresh command.
Now I've added new components to my Terraform code. When I execute terraform plan, it tries to reset the old, manually updated resources to their initial state (because that is what is defined in my Terraform code). Is there any way for Terraform to ignore the changes to the old resources and create only the newly added components?
I found a solution myself. There is an option called ignore_changes under the lifecycle block that should be set on every resource you expect to be changed by external means.
Reference Link: https://www.terraform.io/docs/configuration/resources.html#ignore_changes
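A minimal sketch of that lifecycle block, assuming a hypothetical resource whose tags were changed by hand (the resource type, AMI and attribute list are placeholders):

resource "aws_instance" "legacy" {
  ami           = "ami-0123456789abcdef0"  # placeholder
  instance_type = "t3.micro"

  lifecycle {
    # Ignore drift on attributes that were changed outside of Terraform.
    ignore_changes = [
      tags,
    ]
  }
}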
I sometimes see that terraform apply produces a different plan than terraform plan.
For instance, today one of the TF files I ran terraform apply on resulted in only 1 change and 1 add, while terraform plan showed 3 to add, 1 to change and 3 to destroy.
I have been using Terraform for just two months. Is this intended behavior in Terraform?
Could anyone give an explanation for this behavior? Thanks!
Terraform version: 0.11.13
This is unexpected behaviour, but the best practice is to run:
terraform plan -out deploy.tfplan
This saves the plan in the deploy.tfplan file.
Then run terraform apply deploy.tfplan.
This ensures that exactly the plan you reviewed is the one that gets executed, every time.
This is not intended behaviour of Terraform unless something is wrong somewhere; I have never seen this kind of issue until now. Did you edit or delete your .tfstate state file after running terraform plan? If you keep observing this issue, you could open an issue with the product owner, but I don't think this is a real issue and you will probably not face it again.
Try to follow these steps when performing a terraform apply:
First, make sure the changes to the Terraform files have been saved.
Run terraform plan before running terraform apply.
It sounds like some of the files that changes were made to were not saved before running the current Terraform commands.
Can you explain the full scenario? Normally, in my experience, the plan and the apply are the same.
The only differences I can see: either you are using a variable file with plan and apply and some variables cause different resources, or you are using a remote location for state and another job/person is also updating that state.
If you are running everything locally, it should not happen like this.
Terraform builds a graph of all the resources. It then creates the non-dependent resources in parallel to make resource creation slightly more efficient. If any resource creation fails, Terraform is left in a partially applied state, which gets recorded in the tfstate file. After fixing the issue with the resource, when you reapply the .tf files it shows only the new resources to be changed. In your case, I think it has more to do with the fact that some resources have a destroy-before-create replacement policy, which shows up in the result: when you apply a change to one such resource, the plan ends up showing 1 resource destroyed and 1 created. When this is mixed with resources that are not replaced destroy-before-create, you end up with output like what you mentioned above.
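For context, Terraform's default replacement order is destroy-then-create; a resource can opt into the opposite order with the lifecycle block, which changes how replacements appear in the plan. A hedged sketch with a hypothetical resource:

resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = "ami-0123456789abcdef0"  # placeholder
  instance_type = "t3.micro"

  lifecycle {
    # Create the replacement before destroying the old resource,
    # so a replacement shows up in the plan as create-then-destroy.
    create_before_destroy = true
  }
}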
Did you comment out any of the resources in the Terraform file before running terraform apply?
If so, please check that, because commenting out resources in an existing Terraform file will result in Terraform destroying those resources.
I have been using Terraform for quite a long time, and this is not intended behaviour. It looks like something changed between the plan and the apply.
What you can do is save the plan to a file using
terraform plan -out plan.tfplan
and then apply using the same file:
terraform apply plan.tfplan