terraform.tfstate not working with workspaces

I used Terraform code to deploy AWS infrastructure, with the terraform.tfstate stored locally on my machine. I now want to centralise my code in GitHub and also start using Terraform workspaces, so that I can use the same code with a separate state file stored in a separate S3 bucket for each region/workspace.
When I run the new code pointing at the terraform.tfstate from the old deployment, I get Plan: 26 to add, 0 to change, 25 to destroy. I expected Terraform to show no additions or destructions, since nothing about the infrastructure has changed other than a bash script that creates the workspace and stores/reads the state file remotely.
I notice I get the same Plan: 26 to add, 0 to change, 25 to destroy. even when I copy the old terraform.tfstate into the new code directory locally (not from remote S3); I did this to rule out the remote state file as the cause. What could I be doing wrong here? How can I make my existing Terraform state work with workspaces?
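One way to adopt an existing local state into a workspace is to create the workspace first and then upload the old state into it with terraform state push. A minimal sketch, assuming a hypothetical workspace name and path to the old state file:

```shell
# Run from the new code directory, with the S3 backend already configured.
# Workspace name and state path are hypothetical.
terraform workspace new eu-west-1   # creates and selects the workspace

# Upload the old local state into the currently selected workspace.
terraform state push /path/to/old/terraform.tfstate

# The plan should now show no changes if the resource addresses match.
terraform plan
```

If the plan still wants to add and destroy everything after this, the resource addresses in the new code probably differ from those in the old state (for example, resources moved into modules); terraform state mv can realign them.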

Related

Terraform state file in multiple backends

I am using Gitlab CI/CD to deploy to AWS with Terraform.
I would like to use the Gitlab REST API to store and lock/unlock my state.
To add some safety and prevent any loss of my state file, I also want to back up the state to an S3 bucket.
My question is: how do I sync/update the state file in my S3 bucket when my pipeline runs terraform apply and makes changes to my AWS resources?
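One common pattern is to add a pipeline step after terraform apply that pulls the freshly written state and copies it to the backup bucket. A sketch, assuming a hypothetical bucket name and key:

```shell
# Run after `terraform apply` in the pipeline job.
# terraform state pull reads the state from whatever backend is configured,
# so it works with the GitLab HTTP backend too. Bucket name is hypothetical.
terraform state pull > terraform.tfstate.backup
aws s3 cp terraform.tfstate.backup "s3://my-state-backups/foo/terraform.tfstate"
```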

How to delete a Terraform state file when the Azure resources are removed using Terraform?

We are building a temporary review app in Terraform. Currently, when the review app is finished with, its resources are destroyed using terraform apply -destroy. What I also need to do is remove the Terraform state file for this infrastructure from the Azure storage container. Can terraform apply -destroy also remove the state file, and if not, how can I do this?
One workaround you can follow:
When you run terraform destroy, the resource details are removed both from the cloud (the portal) and from terraform.tfstate.
If you only want to remove a particular resource from the .tfstate file, first list what the state file still contains:
terraform state list
Then remove the unwanted entries, as suggested by @Ansuman Bal; I have also tried this and it works fine:
terraform state rm "azurerm_resource_group.example"
NOTE: the aforementioned commands remove the instance/resources from the .tfstate file only, not from the portal. Only terraform destroy can do that.
For more information, please refer to this SO thread: Terraform - Removing a resource from local state file.
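To answer the original question (actually deleting the state blob from the Azure container), Terraform itself won't do it, but the Azure CLI can as a cleanup step after terraform apply -destroy. A sketch with hypothetical storage account, container, and blob names:

```shell
# Hypothetical names; adjust to match your backend configuration.
az storage blob delete \
  --account-name mystorageaccount \
  --container-name tfstate \
  --name review-app.terraform.tfstate
```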

How to run only delta changes in Terraform for new workspace

I'm using terraform in a CI/CD pipeline.
When I open a pull request, I create a workspace named after the feature branch.
Terraform creates all the resources with the workspace name attached to them so as to be unique.
Once this is deployed to prod there is a cleanup step that destroys everything created by that workspace.
This works fine, but recreating all the resources for every pull request will soon become infeasible. Is there a way in Terraform to defer to the prod tfstate file, so as to plan only for the delta from it and its dependencies?
A simple example:
my prod tfstate has these resources
1 database (dbA)
2 schemas (dbA.schemaA, dbA.schemaB)
1 table (dbA.schemaA.tableA)
If I add one table (dbA.schemaB.tableB) to dbA.schemaB, I'd want Terraform to plan for
dbA
dbA.schemaB
dbA.schemaB.tableB
and not for
dbA.schemaA
dbA.schemaA.tableA
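There is no built-in "diff against another workspace's state", but terraform plan -target limits the plan to the given resources plus whatever they depend on, which matches the shape of the example above. A sketch with a hypothetical resource address:

```shell
# -target plans only the listed resource plus its dependencies
# (the schema and database in the example). The address is hypothetical.
terraform plan -target='snowflake_table.table_b'
```

Note that Terraform warns that -target is intended for exceptional situations; a cleaner long-term approach is to split the shared prod infrastructure into its own configuration and read it from the per-branch one via the terraform_remote_state data source.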

How to delete *.tfstate file from backend S3 bucket

I have a backend in an AWS S3 bucket, where I keep all my *.tfstate files.
When I do
cd terraform/project.foo
terraform destroy
I would like it to also remove the foo.tfstate file from my backend S3 bucket, but it doesn't.
Is there any option to remove the relevant tfstate file from the backend via Terraform?
Thank you!
This is entirely possible if you are using Terraform workspaces.
I had two workspaces, default and prod.
I switched to the prod workspace and ran terraform destroy.
After the destroy, the S3 state file for prod still exists but no longer contains any resources.
Once destroyed, switch back to the default workspace: terraform workspace select default
From the default workspace, run terraform workspace delete prod
Your state file is now completely cleaned up; deleting the workspace removes its state object from the bucket.
Note: I'm using the fish shell with the Terraform plugin, so the current workspace is printed in my prompt.
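The steps above can be collected into a single sequence, using the same workspace names as in the answer:

```shell
terraform workspace select prod
terraform destroy                  # empties the prod state

terraform workspace select default
terraform workspace delete prod    # removes the prod state object from the S3 bucket
```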

Import terraform workspaces from S3 remote state

I am using Terraform to deploy to multiple AWS accounts, each account with its own set of environments. I'm using Terraform workspaces and S3 remote state. When I switch between these accounts, my terraform workspace list is now empty for one of the accounts. Is there a way to sync the workspace list from the S3 remote state?
Please advise.
Thanks,
I have tried creating the workspace, but when I run terraform plan it plans to create all the resources even though they already exist in the remote state.
I managed to fix it as follows.
I created the new workspaces manually using the terraform workspace command:
terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
I went to S3, where I keep the remote state, and under the dev environment prefix I now had duplicate states.
I copied the state object from the old key to the new key (using copy/paste in the S3 console).
In the DynamoDB lock table I had duplicate LockID entries for my environment with different digests. I had to copy the Digest of the old entry onto the new entry. After that, terraform plan ran smoothly, and I repeated the same process for all the environments.
I hope this helps anyone else having the same use case.
Thanks,
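For reference, with the S3 backend's default layout, non-default workspace states are stored under the env:/<workspace>/ prefix, so the console copy/paste described above amounts to something like the following (bucket and key names are hypothetical):

```shell
# Copy the old state object to the key the new dev workspace expects.
aws s3 cp \
  "s3://my-tf-state/project/terraform.tfstate" \
  "s3://my-tf-state/env:/dev/project/terraform.tfstate"
```

If workspace_key_prefix is set in your backend block, the env: prefix will be something else, so check your configuration first.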
