Locally I:
Created main.tf
Initialized with terraform init
Imported the GCP project and the Cloud Run service
Updated main.tf so that terraform plan was no longer trying to change anything
Checked main.tf into GitHub
Then I set up GitHub Actions to:
Checkout
Set up gcloud
Initialize with ‘terraform init’
Plan with ‘terraform plan’
Terraform plan is trying to recreate everything.
How do I make it detect existing resources?
By default Terraform initialises a local state. The problem with this state is that it is only available to you, on your PC; if you execute a plan somewhere else, that state is not there. To solve this, you need to set up a remote backend for Terraform so that the state file is stored in a centralised location.
If you are using Google Cloud, you can use a Cloud Storage bucket for storing the state file. Terraform offers the gcs backend for this. You have to create a bucket and provide the bucket name to the gcs backend configuration:
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}
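After adding the backend block, a typical flow (just a sketch, reusing the example bucket name above) is to create the bucket and then re-run terraform init, which detects the backend change and offers to copy the existing local state into the bucket:
# Create the bucket that will hold the state (example name from above)
gsutil mb gs://tf-state-prod
# Optional but recommended: keep old state versions around
gsutil versioning set on gs://tf-state-prod
# Re-initialise; answer "yes" when Terraform asks to copy the local state
terraform init
Once the state lives in the bucket, the terraform plan step in GitHub Actions reads the same state and stops trying to recreate the resources you imported locally.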
I am trying to migrate a project's CLI workspaces to Terraform Cloud. I am using Terraform version 0.14.8 and following the official guide here.
$ terraform0.14.8 workspace list
default
* development
production
staging
Currently, the project uses the following S3 remote state backend configuration:
terraform {
  backend "s3" {
    profile              = "..."
    key                  = "..."
    workspace_key_prefix = "environments"
    region               = "us-east-1"
    bucket               = "terraform-state-bucketA"
    dynamodb_table       = "terraform-state-bucketA"
    encrypt              = true
  }
}
I changed the backend configuration to:
backend "remote" {
hostname = "app.terraform.io"
organization = "orgA"
workspaces {
prefix = "happyproject-"
}
}
and executed terraform0.14.8 init in order to begin the state migration process. The expected behaviour would be for Terraform to create 3 workspaces in Terraform Cloud:
happyproject-development
happyproject-staging
happyproject-production
However, I get the following error:
$ terraform0.14.8 init
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Terraform detected that the backend type changed from "s3" to "remote".
Error: Error looking up workspace
Workspace read failed: invalid value for workspace
I also enabled TRACE level logs and just before it throws the error I can see this: 2021/03/23 10:08:03 [TRACE] backend/remote: looking up workspace for orgA/.
Notice the empty string after orgA/ and the omission of the prefix! I am guessing that TF tries to query Terraform Cloud for the default workspace, which is an empty string, and it fails to do so.
I have not been using the default workspace at all and it just appears when I am executing terraform0.14.8 init. The guide mentions:
Some backends, including the default local backend, allow a special default workspace that doesn't have a specific name. If you previously used a combination of named workspaces and the special default workspace, the prompt will next ask you to choose a new name for the default workspace, since Terraform Cloud doesn't support unnamed workspaces:
However, it never prompts me to choose a name for the default workspace. Any help would be much appreciated!
I had a similar issue, and what helped me was to create the empty workspace with the expected name in advance and then run terraform init.
I also copied the .tfstate file from the remote location to the root directory of the project before running init. Hope this helps you as well.
What I ended up doing was:
Created the empty workspaces in Terraform Cloud
For every CLI workspace, I pointed the backend to the respective TFC workspace and executed terraform init; that way, the Terraform state was automatically migrated from the S3 backend to TFC (see the sketch after this list)
Finally, after all CLI workspaces were migrated, I used the prefix argument of the workspaces block instead of the name argument to manage the different TFC workspaces
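A rough sketch of that second step, using the names from the question (purely illustrative): point the backend at one named TFC workspace, run terraform init from the matching CLI workspace so its state is uploaded, and repeat for each workspace before switching back to the prefix form shown above:
# Temporary backend used while migrating the "development" CLI workspace;
# repeat with the other workspace names, then replace "name" with
# prefix = "happyproject-" once every state has been migrated
backend "remote" {
  hostname     = "app.terraform.io"
  organization = "orgA"

  workspaces {
    name = "happyproject-development"
  }
}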
I am using Terraform Cloud to manage the state of the infrastructure provisioned in AWS.
I am trying to use terraform import to import an existing resource that is currently not managed by Terraform.
I understand terraform import is a local-only command. I have set up a workspace reference as follows:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }
}
The AWS credentials are configured in the remote cloud workspace, but terraform does not appear to reference the AWS credentials from the workspace; instead it falls back to using the local credentials, which point to a different AWS account. I would like Terraform to use the credentials from the workspace variables when I run terraform import.
When I comment out the locally configured credentials, I get the error:
Error: No valid credential sources found for AWS Provider.
I would have expected terraform to use the credentials configured in the workspace.
Note that terraform is able to use the credentials correctly, when I run the plan/apply command directly from the cloud console.
Per the backends section of the import docs, plan and apply run in Terraform Cloud whereas import runs locally. Therefore, the import command will not have access to workspace credentials set in Terraform Cloud. From the docs:
In order to use Terraform import with a remote state backend, you may need to set local variables equivalent to the remote workspace variables.
So instead of running the following locally (assuming you've provided access keys to Terraform Cloud):
terraform import aws_instance.myserver i-12345
you should run, for example:
export AWS_ACCESS_KEY_ID=abc
export AWS_SECRET_ACCESS_KEY=1234
terraform import aws_instance.myserver i-12345
where the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have the same permissions as those configured in Terraform Cloud.
Note for AWS SSO users
If you are using AWS SSO and CLI v2, support for Terraform using the SSO credential cache was added per this AWS provider issue. The steps for importing with an SSO profile are:
Ensure you've performed a login and have an active session with e.g. aws sso login --profile my-profile
Make the profile name available to terraform as an environment variable with e.g. AWS_PROFILE=my-profile terraform import aws_instance.myserver i-12345
If the following error is displayed, ensure you are using a version of the AWS CLI newer than 2.1.23:
Error: SSOProviderInvalidToken: the SSO session has expired or is invalid
│ caused by: expected RFC3339 timestamp: parsing time "2021-07-18T23:10:46UTC" as "2006-01-02T15:04:05Z07:00": cannot parse "UTC" as "Z07:00"
Use the terraform_remote_state data source, for example:
data "terraform_remote_state" "test" {
backend = "s3"
config = {
bucket = "BUCKET_NAME"
key = "BUCKET_KEY WHERE YOUR TERRAFORM.TFSTATE FILE IS PRESENT"
region = "CLOUD REGION"
}
}
Now you can reference your provisioned resources.
Example: to get the VPC ID:
data.terraform_remote_state.test.outputs.vpc_id
Just make sure the cloud resource property you want to reference is exported as an output and therefore stored in the terraform.tfstate file.
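As a small sketch (assuming the producing configuration defines a vpc_id output from a VPC resource named aws_vpc.main, both names illustrative), the two sides look roughly like this:
# In the configuration that owns the VPC: export the value as an output
output "vpc_id" {
  value = aws_vpc.main.id
}

# In the consuming configuration: read it through the data source above
resource "aws_subnet" "example" {
  vpc_id     = data.terraform_remote_state.test.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}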
I am using Terraform to deploy to multiple AWS accounts, each account with its own set of environments. I'm using Terraform workspaces and S3 remote state. When I switch between these accounts, terraform workspace list now comes back empty for one of the accounts. Is there a way to sync the workspace state from the S3 remote state?
Please advise.
Thanks,
I have tried to create the workspace, but when I run terraform plan it plans to create all the resources even though they already exist in the remote state.
I managed to fix it using the following:
I created the new workspaces manually using the terraform workspace command
terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
I went to S3 where I keep the remote state, and under the dev environment there were now duplicate states.
I copied the state from the old folder key to the new folder key (using copy/paste) in the S3 console window.
In the DynamoDB lock table I had duplicate LockID entries for my environment, with different digests. I had to copy the Digest of the old entry and use it to replace the digest of the new entry. After that, terraform plan ran smoothly, and I repeated the same process for all the environments.
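For reference, the same S3 copy can also be done with the AWS CLI instead of the console; the bucket name and key layout below are hypothetical and depend on your key, workspace_key_prefix and workspace names:
# Hypothetical paths: adjust the bucket and keys to match your backend config
aws s3 cp \
  s3://my-terraform-state/environments/dev/terraform.tfstate \
  s3://my-terraform-state/env:/dev/terraform.tfstate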
I hope this helps anyone else having the same use case.
Thanks,
Correct me if I'm wrong: when you run terraform init you are asked to name a storage account and container for the Terraform state.
Can these also automatically be made with terraform?
Edit: I'm using Azure.
I usually split my terraform configurations into two parts.
One that creates a storage account with a container, tagged with a specific tag (tf=backend, for example). The second one creates all the other resources. I share a backend.tfvars between the two, and in the second one I fetch the storage account key using the Azure CLI and the previously set tag (that way I don't have to look up the key and pass it manually to my second script).
You could even migrate the state of the first Terraform configuration once deployed, if you don't want to rely on a local state.
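A minimal sketch of that setup, with illustrative names (rg-terraform, tfstateaccount, tfstate, and the tf=backend tag). The second configuration declares a partial azurerm backend and the concrete values live in the shared backend.tfvars:
# backend.tf of the second configuration
terraform {
  backend "azurerm" {}
}
# backend.tfvars shared between the two configurations
resource_group_name  = "rg-terraform"
storage_account_name = "tfstateaccount"
container_name       = "tfstate"
key                  = "infra.terraform.tfstate"
The storage account key is then looked up by tag and handed to the backend through the environment:
# Find the storage account by its tag and expose its key to the azurerm backend
ACCOUNT=$(az storage account list --query "[?tags.tf=='backend'].name" -o tsv)
export ARM_ACCESS_KEY=$(az storage account keys list -g rg-terraform -n "$ACCOUNT" --query "[0].value" -o tsv)
terraform init -backend-config=backend.tfvars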
Yes, absolutely. You would in general want an S3 bucket for each of your environments, although it's also possible to have a bucket shared across all environments and then set up access controls using bucket policies. Don't create this bucket as part of provisioning other resources, as their lifecycles will likely be different (you would want to retain the bucket for a long time and would be unlikely to want to destroy it).
What you do is you define this bucket in Terraform using local state first. After it is created, you add a remote backend pointing to this bucket.
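For example, the state bucket itself might look something like this (a sketch only; the name matches the backend block below, and on recent AWS provider versions the versioning block is replaced by a separate aws_s3_bucket_versioning resource):
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-state-bucket"

  # Keep old state versions so a bad write can be rolled back
  versioning {
    enabled = true
  }

  # The state bucket should outlive everything else; refuse to destroy it
  lifecycle {
    prevent_destroy = true
  }
}
The backend configuration pointing at this bucket then looks like: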
terraform {
  required_version = ">= 0.11.7"

  backend "s3" {
    bucket  = "my-state-bucket"
    key     = "s3_state_bucket"
    region  = "us-west-2"
    encrypt = "true"
  }
}
After you run terraform init, Terraform will ask if you want to migrate the local state file to S3. Answer yes, and after this completes you can delete the local state file, as it's no longer used.
This approach allows you to break out of this chicken-and-egg situation and still manage all of your infrastructure as code, rather than creating it manually using the web console or bash scripts.
I want to create a new provider for Terraform. It is supposed to read the state files produced by the Azure provider and create an Ansible inventory file out of them. I am using this guide as a base: https://www.terraform.io/docs/extend/writing-custom-providers.html
These are my approaches so far:
Reading the state JSON files with Go.
Using depends_on = ["Azure.example"] in my Ansible provider and getting access to variables the way it is done here: https://stackoverflow.com/a/45492093/4244999
How can I read the state files into a provider as variables with terraform functions?