I have a backend on an AWS S3 bucket, where I keep all my *.tfstate files.
When I do
cd terraform/project.foo
terraform destroy
I would like it to also remove the foo.tfstate file from my backend S3 bucket, but it does not.
Is there any option to remove the corresponding tfstate file from the backend via Terraform?
Thank you!
This is totally possible if you are using Terraform workspaces.
I had two workspaces, default and prod.
I switched to the prod workspace and ran terraform destroy.
This is the S3 state file content after terraform destroy.
Once destroyed, switch back to the default workspace: terraform workspace select default
From the default workspace, run terraform workspace delete prod
Poof, your state file is completely cleaned up.
Note: I'm using the fish shell with a Terraform plugin, so the current terraform workspace gets printed in the prompt.
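Putting those steps together, a minimal sketch of the full sequence (workspace names match the example above):
terraform workspace select prod     # switch to the workspace whose resources should go away
terraform destroy                   # empties the prod state
terraform workspace select default  # a workspace cannot be deleted while it is selected
terraform workspace delete prod     # removes the prod state file from the backend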
I am using Gitlab CI/CD to deploy to AWS with Terraform.
I would like to use the Gitlab REST API to store and lock/unlock my state.
To add some security and prevent any loss of my state file, I want also to backup my state file to an S3 bucket.
My question is: how do I sync/update the state file in my S3 bucket when my pipeline runs terraform apply and makes changes to my AWS resources?
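For illustration, a minimal sketch of a backup step that could run right after the apply job; it assumes the AWS CLI is available in the runner, and the bucket name and object key are hypothetical:
# after terraform apply has updated the GitLab-managed (http backend) state:
terraform state pull > backup.tfstate
aws s3 cp backup.tfstate s3://my-terraform-state-backup/project/backup.tfstate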
I am trying to migrate a project's CLI workspaces to Terraform Cloud. I am using Terraform version 0.14.8 and following the official guide here.
$ terraform0.14.8 workspace list
default
* development
production
staging
Currently, the project uses the S3 remote state backend configuration:
terraform {
  backend "s3" {
    profile              = "..."
    key                  = "..."
    workspace_key_prefix = "environments"
    region               = "us-east-1"
    bucket               = "terraform-state-bucketA"
    dynamodb_table       = "terraform-state-bucketA"
    encrypt              = true
  }
}
I changed the backend configuration to:
backend "remote" {
hostname = "app.terraform.io"
organization = "orgA"
workspaces {
prefix = "happyproject-"
}
}
and executed terraform0.14.8 init to begin the state migration process. The expected behaviour would be to create 3 workspaces in Terraform Cloud:
happyproject-development
happyproject-staging
happyproject-production
However, I get the following error:
$ terraform0.14.8 init
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Terraform detected that the backend type changed from "s3" to "remote".
Error: Error looking up workspace
Workspace read failed: invalid value for workspace
I also enabled TRACE level logs and just before it throws the error I can see this: 2021/03/23 10:08:03 [TRACE] backend/remote: looking up workspace for orgA/.
Notice the empty string after orgA/ and the omission of the prefix! I am guessing that TF tries to query Terraform Cloud for the default workspace, which is an empty string, and it fails to do so.
I have not been using the default workspace at all; it just appears when I execute terraform0.14.8 init. The guide mentions:
Some backends, including the default local backend, allow a special default workspace that doesn't have a specific name. If you previously used a combination of named workspaces and the special default workspace, the prompt will next ask you to choose a new name for the default workspace, since Terraform Cloud doesn't support unnamed workspaces:
However, it never prompts me to choose a name for the default workspace. Any help would be much appreciated!
I had a similar issue, and what helped me was to create the empty workspace with the expected name in advance and then run terraform init.
I also copied the .tfstate file from the remote location to the root directory of the project before running init. Hope this helps you as well.
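As a rough sketch of that copy step, with a hypothetical object key (the real key was elided in the question's backend configuration):
# pull the existing state object down next to the configuration before running init
aws s3 cp s3://terraform-state-bucketA/environments/development/terraform.tfstate ./terraform.tfstate
terraform init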
What I ended up doing was:
Created the empty workspaces in Terraform Cloud.
For every CLI workspace, I pointed the backend to the respective TFC workspace and executed terraform init (see the sketch after this list). That way, the Terraform state was automatically migrated from the S3 backend to TFC.
Finally, after all CLI workspaces were migrated, I used the prefix argument of the workspaces block instead of the name argument to manage the different TFC workspaces.
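For illustration, the per-workspace backend block used in the second step might look like this, with happyproject-development standing in for whichever TFC workspace is being migrated at that point:
backend "remote" {
  hostname     = "app.terraform.io"
  organization = "orgA"

  workspaces {
    # one named TFC workspace per migration run; replaced by the
    # prefix = "happyproject-" form once every workspace has been migrated
    name = "happyproject-development"
  }
}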
I am using Terraform to deploy to multiple AWS accounts, each account with its own set of environments. I'm using Terraform workspaces and S3 remote state. When I switch between these accounts, terraform workspace list is now empty for one of the accounts. Is there a way to sync the workspace state from the S3 remote state?
Please advise.
Thanks,
I have tried to create the workspace, but when I run terraform plan it creates all the resources even though they already exist in the remote state.
I managed to fix it using the following:
I created the new workspaces manually using the terraform workspace command:
terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
I went to S3, where I have the remote state, and under the environment dev I now had duplicate states.
I copied the state from the old folder key to the new folder key (using copy/paste in the S3 console window).
In the DynamoDB lock table I had duplicate LockID entries for my environment with different digests. I had to copy the digest of the old entry and use it to replace the digest of the new entry (roughly the CLI equivalent of these two steps is sketched below). After that, terraform plan ran smoothly, and I repeated the same process for all the environments.
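A hedged sketch of those two manual steps using the AWS CLI; the bucket, object keys, lock table name and digest value are all hypothetical, so adjust them to your own backend:
# copy the state object from the old key to the new workspace key (bucket and keys are hypothetical)
aws s3 cp s3://my-tf-state-bucket/dev/terraform.tfstate s3://my-tf-state-bucket/env:/dev/terraform.tfstate

# overwrite the new lock entry's Digest with the old entry's value (table, LockID and digest are hypothetical)
aws dynamodb update-item \
  --table-name my-tf-lock-table \
  --key '{"LockID": {"S": "my-tf-state-bucket/env:/dev/terraform.tfstate-md5"}}' \
  --update-expression 'SET Digest = :d' \
  --expression-attribute-values '{":d": {"S": "<digest-of-old-entry>"}}'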
I hope this helps anyone else having the same use case.
Thanks,
I have created resources in different workspaces in Terraform, but I am not able to destroy the resources from one specific workspace. Is there any way to destroy resources by specifying the workspace?
I have tried switching to that specific workspace before destroying the resources, but it is still pointing to the other workspace's state file.
I am using the below commands:
terraform workspace new test
terraform apply -var-file terraform.test.tfvars
terraform destroy
You first have to select the workspace with the following command:
terraform workspace select <workspace_name>
Then you can destroy the resources in that workspace with:
terraform destroy -refresh=false
If you want to list the workspaces you have created, use:
terraform workspace list
I used Terraform code to deploy AWS infrastructure, with the terraform.tfstate stored locally on my machine. I now want to centralise my code in GitHub and also start using Terraform workspaces, so that I can use the same code with a separate state file stored in a separate S3 bucket for each region/workspace.
When I run my new code pointing it at the terraform.tfstate file from the old deployment, I get Plan: 26 to add, 0 to change, 25 to destroy. I was expecting Terraform not to show any additions or destroys, since nothing has changed in the infrastructure other than using a bash script to create the workspace and store/read the state file remotely.
I notice I get the same Plan: 26 to add, 0 to change, 25 to destroy. message even when I copy the old terraform.tfstate into the new code directory locally (not from remote S3); I did this to troubleshoot whether it has anything to do with the remote terraform.tfstate file. What could I be doing wrong here? How can I get my existing Terraform state to work with workspaces?
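For context, a named workspace reads its state from a workspace-specific key rather than the default terraform.tfstate, so an existing state usually has to be loaded into the new workspace explicitly. A minimal sketch, assuming a hypothetical workspace name and path to the old state file:
# select (or create) the workspace that should own the existing resources
terraform workspace new eu-west-1                          # hypothetical workspace name

# push the old local state into this workspace's state
terraform state push ../old-deployment/terraform.tfstate   # hypothetical path

# the plan should now show no changes if the configuration matches the pushed state
terraform plan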