I have created resources in different workspaces in Terraform, but I am not able to destroy the resources of one specific workspace. Is there any way to destroy resources by specifying the workspace?
I have tried switching to that specific workspace before destroying, but it is still pointing to the other workspace's state file.
I am using the below commands:
terraform workspace new test
terraform apply -var-file terraform.test.tfvars
terraform destroy
You have to first select the workspace with the following command:
terraform workspace select <workspace_name>
Then you can destroy the resources in that workspace with:
terraform destroy -refresh=false
If you want to list the workspaces that have been created, use:
terraform workspace list
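Putting it together for the example in the question, a minimal sequence would look like this (assuming the workspace is named test and the same tfvars file used for apply is still available):

# see which workspaces exist and which one is currently selected
terraform workspace list
# switch to the workspace whose resources should be destroyed
terraform workspace select test
# destroy only the resources tracked in that workspace's state
terraform destroy -var-file terraform.test.tfvars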
I have 'development', 'staging' and 'production' workspaces within the Terraform Cloud organisation.
I'm attempting to interact with them as per the documentation here.
In particular, this:
If you associate the directory with multiple workspaces (using
workspace tags), you can use the terraform workspace commands to
select which remote workspace to use.
Locally, I also have the exact same three terraform workspaces created.
Screenshots showed the three local workspaces and the corresponding remote workspaces with their tags.
It is 100% the same organisation: I was able to interact with the workspaces when I hardcoded the workspace name instead of using tags.
My terraform backend cloud definitions:
terraform {
  cloud {
    organization = "<<myorgname>>"
    workspaces {
      tags = ["development", "staging", "production"]
    }
  }
}
When I run a simple terraform init I get greeted by:
No workspaces found.
There are no workspaces with the configured tags (development, production, staging)
in your Terraform Cloud organization. To finish initializing, Terraform needs at
least one workspace available.
Terraform can create a properly tagged workspace for you now. Please enter a
name to create a new Terraform Cloud workspace.
I've been going through the documentation here on CLI-driven runs in this context, but I can't figure out the right way to do this.
What I want is:
to run a terraform plan or terraform apply while in the locally selected development workspace,
and then:
have Terraform Cloud perform the run on the remote development workspace.
If I just go ahead and write 'development' as a name, it will then apply all 3 tags in the static definition to the remote 'development' workspace, thus defeating the whole purpose of using tags instead of a name.
What's the right way of doing this?
That is true; however, there is also this part of the documentation [1]:
tags - (Optional) A set of Terraform Cloud workspace tags. You will be able to use this working directory with any workspaces that have all of the specified tags, and can use the terraform workspace commands to switch between them or create new workspaces. New workspaces will automatically have the specified tags. This option conflicts with name.
EDIT: As mentioned in the comments, in order for local workspaces to be usable in Terraform Cloud as well (i.e., to be able to apply the code in Terraform Cloud), there has to be a "common" or "main" tag shared across all the workspaces created in Terraform Cloud.
[1] https://www.terraform.io/cli/cloud/settings#arguments
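For example, a minimal sketch of the approach described in the edit, assuming a shared tag (here called "app", which is just an illustrative name) has been added to all three Terraform Cloud workspaces:

terraform {
  cloud {
    organization = "<<myorgname>>"
    workspaces {
      # a single tag shared by the development, staging and production workspaces
      tags = ["app"]
    }
  }
}

After terraform init picks up the three remote workspaces, terraform workspace select development should map local runs to the remote development workspace.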
We are building a temporary review app in Terraform. Currently, when the review app is finished, the resources are destroyed with terraform apply -destroy. What I also need to do is remove the Terraform state file for this infrastructure from the Azure container. Could I use terraform apply -destroy to also remove the state file, and how can I do this?
One workaround you can follow:
When you run terraform destroy, the resource details are removed from terraform.tfstate as the resources themselves are removed from the portal.
So, to remove any particular resource from the .tfstate file, you can try something like the below.
First, I would suggest that after the destroy you list what is left in the state file and then remove those entries.
The command below lists the resources currently tracked in the state file:
terraform state list
After listing them, remove an entry from the .tfstate file with the command below (as mentioned by @Ansuman Bal; I have also tried this and it works fine):
terraform state rm "azurerm_resource_group.example"
NOTE: The aforementioned commands remove the instances/resources from the .tfstate file only, not from the portal. Only terraform destroy can do that.
For more information, please refer to this SO thread: Terraform - Removing a resource from local state file.
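To also remove the state blob from the Azure storage container, as asked in the original question, something like the sequence below could work; the storage account, container and blob names are placeholders, and the blob should only be deleted once the destroy has finished and the state is empty:

# destroy everything tracked by this configuration
terraform apply -destroy
# confirm the state is now empty (should print nothing)
terraform state list
# delete the state blob from the Azure storage container (names are placeholders)
az storage blob delete --account-name mystorageaccount --container-name tfstate --name review-app.terraform.tfstate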
I am using Terraform to deploy to multiple AWS accounts, each account with its own set of environments. I'm using Terraform workspaces and S3 remote state. When I switch between these accounts, my terraform workspace list is now empty for one of the accounts. Is there a way to sync the workspaces from the S3 remote state?
Please advise.
Thanks,
I have tried to create the workspace, but when I run terraform plan it wants to create all the resources even though they already exist in the remote state.
I managed to fix it using the following:
I created the new workspaces manually using the terraform workspace command:
terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
I went to S3, where I keep the remote state, and under the dev environment I now had duplicate states.
I copied the state from the old folder key to the new folder key (using copy/paste in the S3 console window).
In the DynamoDB lock table I had duplicate LockID entries for my environment, with different digests. I had to copy the digest of the old entry and use it to replace the digest of the new entry. After that, terraform plan ran smoothly, and I repeated the same process for all the environments.
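For reference, the copy step above can also be done with the AWS CLI instead of the console; a rough sketch, assuming the default env: workspace prefix and placeholder bucket and key names:

# copy the existing state object to the key the new dev workspace expects (paths are placeholders)
aws s3 cp \
  s3://my-tf-state-bucket/old-path/terraform.tfstate \
  s3://my-tf-state-bucket/env:/dev/terraform.tfstate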
I hope this helps anyone else having the same use case.
Thanks,
Can we destroy a particular resource?
For example: an Azure SQL database only, without affecting the SQL server or any firewalls.
Will the below work, and what is the resource address?
terraform destroy -target xxx
Yes, Terraform has the functionality to destroy selected resources, but first you have to detach the dependent resources from the target resource and then try this command: terraform destroy -target=RESOURCE_TYPE.NAME
Yes, you can destroy specific resources, one at a time.
Following the Terraform Azure SQL example: https://www.terraform.io/docs/providers/azurerm/r/sql_database.html
When the resources are created, they are registered in the Terraform state file.
You can list the resources in the state file:
$ terraform state list
azurerm_resource_group.test
azurerm_sql_database.test
azurerm_sql_server.test
You can then destroy the SQL database only with this command:
$ terraform destroy -target=azurerm_sql_database.test
Is there a way to specify a workspace for a remote state provider in HCL? How do I ensure collaborators use the proper workspace? I'd expect to see something like
terraform {
  backend "s3" {
    workspace = "someworkspace"
    ...
  }
}
Terraform's documentation describes how to use workspace_key_prefix but that's not what I'm looking for.
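For reference, this is roughly what that looks like (bucket, key and region below are placeholders); it only changes where each workspace's state is stored, not which workspace gets selected:

terraform {
  backend "s3" {
    bucket               = "my-state-bucket"
    key                  = "app/terraform.tfstate"
    region               = "us-east-1"
    # non-default workspaces are stored under <workspace_key_prefix>/<workspace name>/<key>
    workspace_key_prefix = "workspaces"
  }
}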
For example, if one team member runs terraform workspace select dev and then terraform apply, and then a different team member runs terraform apply without first running terraform workspace select, Terraform will redeploy the resources defined (because the proper workspace wasn't selected).
Found a workaround. You can commit the environment file (.terraform/environment) to VCS. Others running terraform apply will then target the workspace specified by that file.
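A rough sketch of that workaround (the workspace name is just an example, and .terraform/ is usually ignored by Git, so the file has to be force-added):

# select the workspace; Terraform records the choice in .terraform/environment
terraform workspace select someworkspace
# the file simply contains the selected workspace name
cat .terraform/environment
# .terraform/ is typically in .gitignore, so force-add just this one file
git add -f .terraform/environment
git commit -m "Pin Terraform workspace to someworkspace"

Note that committing anything else under .terraform/ (provider binaries, modules) is generally not desirable, so only this one file should be added.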