I am using Terraform to deploy to multiple AWS accounts, each with its own set of environments. I'm using Terraform workspaces and S3 remote state. When I switch between these accounts, terraform workspace list is now empty for one of the accounts. Is there a way to sync the workspace state from the S3 remote state?
Please advise.
Thanks,
I have tried creating the workspace, but when I run terraform plan it plans to create all the resources even though they already exist in the remote state.
I managed to fix it using the following:
I created the new workspaces manually using the terraform workspace command:
terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
I went to S3, where I have the remote state, and under the dev environment there were now duplicate states.
I copied the state from the old folder key to the new folder key (using copy/paste in the S3 console window).
In the DynamoDB lock table I had duplicate LockID entries for my environment with different digests. I had to copy the Digest of the old entry and use it to replace the digest of the new entry. After that, terraform plan ran smoothly, and I repeated the same process for all the environments.
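For anyone who prefers the CLI over the console, the copy step could look roughly like the sketch below. The bucket name and key paths are placeholders for your own old and new locations; with the S3 backend's default workspace_key_prefix, workspace states live under env:/<workspace>/<key>.

# illustrative only: copy the old state object to the location the new workspace expects
aws s3 cp \
  s3://my-tf-state-bucket/my-project/terraform.tfstate \
  s3://my-tf-state-bucket/env:/dev/my-project/terraform.tfstate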
I hope this helps anyone else having the same use case.
Thanks,
I have 'development', 'staging', and 'production' workspaces within the Terraform Cloud organisation.
I'm attempting to interact with them as per documentation here.
Particularly this:
If you associate the directory with multiple workspaces (using
workspace tags), you can use the terraform workspace commands to
select which remote workspace to use.
Locally, I also have the exact same three terraform workspaces created.
Screenshots (omitted): local workspaces; remote workspaces and tags.
It is 100% the same organisation; I was able to interact with the workspaces when I hardcoded the workspace name instead of using tags.
My terraform backend cloud definitions:
terraform {
  cloud {
    organization = "<<myorgname>>"

    workspaces {
      tags = ["development", "staging", "production"]
    }
  }
}
When I run a simple terraform init I get greeted by:
No workspaces found.
There are no workspaces with the configured tags (development, production, staging)
in your Terraform Cloud organization. To finish initializing, Terraform needs at
least one workspace available.
Terraform can create a properly tagged workspace for you now. Please enter a
name to create a new Terraform Cloud workspace.
I've been going through the documentation here that covers CLI-driven runs in this context, but I can't figure out the right way to do this.
What I want is:
run a terraform plan or terraform apply whilst in the locally-selected development workspace
and then:
have Terraform Cloud perform a run on the remote development workspace.
If I just go ahead and write 'development' as a name, it will then apply all 3 tags in the static definition to the remote 'development' workspace, thus defeating the whole purpose of using tags instead of a name.
What's the right way of doing this?
That is true; however, there is also this part of the documentation [1]:
tags - (Optional) A set of Terraform Cloud workspace tags. You will be able to use this working directory with any workspaces that have all of the specified tags, and can use the terraform workspace commands to switch between them or create new workspaces. New workspaces will automatically have the specified tags. This option conflicts with name.
EDIT: As mentioned in the comments, in order for local workspaces to be usable in Terraform Cloud as well (i.e., to be able to apply the code in Terraform Cloud), there has to be a "common" or "main" tag across all workspaces created in Terraform Cloud.
[1] https://www.terraform.io/cli/cloud/settings#arguments
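For example, a minimal sketch of that approach, assuming a hypothetical shared tag named "app" that is applied to all three Terraform Cloud workspaces (development, staging, and production), so the tag no longer encodes the environment name:

terraform {
  cloud {
    organization = "<<myorgname>>"

    workspaces {
      # one tag common to all workspaces, instead of one tag per environment
      tags = ["app"]
    }
  }
}

With that in place, terraform init lists every workspace carrying the tag, terraform workspace select development maps to the remote development workspace, and terraform plan / terraform apply run against it.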
I am using Terraform scripts to create Azure services. I have some doubts regarding Terraform:
1) If I have one environment, let's say dev, in Azure with some Azure resources, how can I copy all the resources to a new environment, let's say prod, using a Terraform script?
2) What is the impact of re-running the Terraform files with additional Azure resources? What will it do?
3) What if I want to create an app service from a Terraform script with the same name as one that already exists in Azure? Will it update the resource, or do nothing, once the Terraform execution completes?
Please feel free to answer the questions; it will be a great help.
To answer your questions:
1) You could create a new workspace with terraform workspace new and copy all configuration files (.tf) to the new environment, then run terraform init, plan, and apply (see the sketch after this list).
2) Terraform will compare the content of your current state file with your configuration files, then update changed attributes or create new resources rather than re-creating the existing ones.
3) You could run terraform import to import existing infrastructure into Terraform. For referencing existing resources in the portal, you can use data sources.
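A rough sketch of the first point, assuming a hypothetical prod.tfvars that holds the prod-specific values:

terraform workspace new prod
terraform init
terraform plan -var-file=prod.tfvars
terraform apply -var-file=prod.tfvars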
I have a backend on an AWS S3 bucket, where I have all my *.tfstate files.
When I do
cd terraform/project.foo
terraform destroy
I would like it to also remove the foo.tfstate file from my backend S3 bucket, but it does not do so.
Is there any option to remove the relevant tfstate file from the backend via Terraform?
Thank you!
This is totally possible if you are using Terraform workspaces.
I had two workspaces, default and prod.
I switched to the prod workspace and ran terraform destroy.
This is the S3 state file content, post terraform destroy
Once destroyed, switch to the default workspace: terraform workspace select default
From the default workspace, run terraform workspace delete prod
Poof, your state file is completely cleared up
Note: I'm using the fish shell with the Terraform plugin, so the current terraform workspace gets printed in the prompt (represented by the arrow).
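Put together, the sequence described above looks roughly like this (prod being the workspace from the example):

terraform workspace select prod
terraform destroy
terraform workspace select default
# deleting the workspace also removes its (now empty) state object from the S3 backend
terraform workspace delete prod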
I have created resources in different Terraform workspaces, but I am not able to destroy the resources from one specific workspace. Is there any way to destroy resources by specifying the workspace?
I have tried switching to that specific workspace while destroying resources, but it still points to the other workspace's state file.
I am using the below commands:
terraform workspace new test
terraform apply -var-file terraform.test.tfvars
terraform destroy
You have to first select the workspace with the following command:
terraform workspace select <workspace_name>
Then you can destroy the resources in that workspace with:
terraform destroy -refresh=false
If you want to list the workspaces you have created, use:
terraform workspace list
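As a hedged end-to-end sketch, reusing the test workspace and var file from the question (if your configuration declares required input variables, destroy needs them as well):

terraform workspace select test
terraform destroy -var-file terraform.test.tfvars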
Correct me if I'm wrong: when you run terraform init, you are asked to name a storage account and container for the Terraform state.
Can these also automatically be made with terraform?
Edit: I'm using Azure.
I usually split my Terraform configuration into two parts.
One creates a storage account with a container, tagged in a specific way (tf=backend, for example). The second creates all other resources. I share a backend.tfvars between the two, and in the second one I get the storage account key using the Azure CLI and the previously set tag (that way I don't have to fetch the key and pass it manually to my second script).
You could even migrate the state of the first Terraform configuration once it is deployed, if you don't want to rely on local state.
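A hedged sketch of that key lookup, assuming the tf=backend tag from above (the variable names and query expressions are illustrative):

# find the storage account tagged tf=backend, and its resource group
ACCOUNT_NAME=$(az storage account list --query "[?tags.tf=='backend'].name | [0]" -o tsv)
RESOURCE_GROUP=$(az storage account list --query "[?tags.tf=='backend'].resourceGroup | [0]" -o tsv)

# fetch an access key and pass it to terraform init alongside the shared backend.tfvars
ACCOUNT_KEY=$(az storage account keys list \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --query "[0].value" -o tsv)

terraform init \
  -backend-config=backend.tfvars \
  -backend-config="access_key=$ACCOUNT_KEY"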
Yes, absolutely. You would in general want an S3 bucket for each of your environments, although it's also possible to have a bucket shared across all environments and then set up access controls using bucket policies. Don't create this bucket as part of provisioning other resources, as their lifecycles will likely be different (you would want to retain the bucket for a long time and would be unlikely to want to destroy it).
What you do is define this bucket in Terraform using local state first. After it is created, you add a remote backend pointing to this bucket:
terraform {
  required_version = ">= 0.11.7"

  backend "s3" {
    bucket  = "my-state-bucket"
    key     = "s3_state_bucket"
    region  = "us-west-2"
    encrypt = "true"
  }
}
After you run terraform init, Terraform will ask if you want to migrate the local state file to S3. Answer yes, and after this completes you can delete the local state file, as it's no longer used.
This approach allows you to break out of this chicken-and-egg situation and still manage all of your infrastructure as code, rather than creating it manually using the web console or bash scripts.
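For reference, the bucket created in the first (local-state) step could look something like the sketch below; the bucket name is the same placeholder used in the backend block above, and the syntax matches the 0.11-era AWS provider referenced there:

provider "aws" {
  region = "us-west-2"
}

# the state bucket itself, created with plain local state before any backend exists
resource "aws_s3_bucket" "state" {
  bucket = "my-state-bucket"
  acl    = "private"

  # versioning makes it possible to recover earlier state versions if something goes wrong
  versioning {
    enabled = true
  }
}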