Terraform Cloud: Import existing resource

I am using terraform cloud to manage the state of the infrastructure provisioned in AWS.
I am trying to use terraform import to import an existing resource that is currently not managed by terraform.
I understand terraform import is a local-only command. I have set up a workspace reference as follows:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }
}
The AWS credentials are configured in the remote cloud workspace, but terraform does not appear to reference the AWS credentials from the workspace; instead it falls back to using the local credentials, which point to a different AWS account. I would like Terraform to use the credentials from the workspace variables when I run terraform import.
When I comment out the locally configured credentials, I get the error:
Error: No valid credential sources found for AWS Provider.
I would have expected terraform to use the credentials configured in the workspace.
Note that terraform is able to use the credentials correctly when I run the plan/apply commands directly from the cloud console.

Per the backends section of the import docs, plan and apply run in Terraform Cloud whereas import runs locally. Therefore, the import command will not have access to workspace credentials set in Terraform Cloud. From the docs:
In order to use Terraform import with a remote state backend, you may need to set local variables equivalent to the remote workspace variables.
So instead of running the following locally (assuming you've provided access keys to Terraform Cloud):
terraform import aws_instance.myserver i-12345
we should run, for example:
export AWS_ACCESS_KEY_ID=abc
export AWS_SECRET_ACCESS_KEY=1234
terraform import aws_instance.myserver i-12345
where the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have the same permissions as those configured in Terraform Cloud.
Note for AWS SSO users
If you are using AWS SSO and AWS CLI v2, support for terraform reading the SSO credential cache was added per this AWS provider issue. The steps for importing with an SSO profile are:
Ensure you've performed a login and have an active session, e.g. aws sso login --profile my-profile
Make the profile name available to terraform as an environment variable, e.g. AWS_PROFILE=my-profile terraform import aws_instance.myserver i-12345 (both steps are shown together below)
If the following error is displayed, ensure you are using an AWS CLI version newer than 2.1.23:
Error: SSOProviderInvalidToken: the SSO session has expired or is invalid
│ caused by: expected RFC3339 timestamp: parsing time "2021-07-18T23:10:46UTC" as "2006-01-02T15:04:05Z07:00": cannot parse "UTC" as "Z07:00"
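Putting the two steps together (a sketch; the profile name my-profile is illustrative):

# Start an SSO session for the profile, then run the import with the
# profile exposed to Terraform via AWS_PROFILE:
aws sso login --profile my-profile
AWS_PROFILE=my-profile terraform import aws_instance.myserver i-12345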

Use the terraform_remote_state data source, for example:
data "terraform_remote_state" "test" {
backend = "s3"
config = {
bucket = "BUCKET_NAME"
key = "BUCKET_KEY WHERE YOUR TERRAFORM.TFSTATE FILE IS PRESENT"
region = "CLOUD REGION"
}
}
Now you can reference your provisioned resources.
For example, to get the VPC ID:
data.terraform_remote_state.test.outputs.vpc_id
Just make sure the resource attribute you want to reference is exported as an output and stored in the terraform.tfstate file.
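To make that work end to end, the workspace that owns the state has to export the value, and the consuming configuration reads it through the data source above. A minimal sketch (the VPC resource name main and the subnet are illustrative):

# In the configuration that owns the remote state: export the value.
output "vpc_id" {
  value = aws_vpc.main.id # assumes a VPC resource named "main"
}

# In the consuming configuration: read it via the data source above.
resource "aws_subnet" "example" {
  vpc_id     = data.terraform_remote_state.test.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}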

Related

Manage multiple workspaces from a single configuration file in terraform

I want to create two different workspaces on Terraform Cloud: One for DEV environment, the other for PROD environment.
I am trying to create them just using a single configuration file. The infrastructure will be the same, just in two different Azure subscriptions with different credentials.
Here the code I am trying:
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.40.0"
    }
  }

  cloud {
    organization = "mycompany"

    workspaces {
      tags = ["dev", "prod"]
    }
  }
}
I am reading the documentation. It seems that inside the cloud -> workspaces block I can use either the name or the tags attribute, and at least one of them is required in my configuration.
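For reference, the single-workspace variant of that block uses name instead of tags (a sketch with an illustrative workspace name):

cloud {
  organization = "mycompany"

  workspaces {
    name = "mycompany-infrastructure-dev"
  }
}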
Now in my Terraform Cloud account, I have two workspaces: 1 with the tag prod and one with the tag dev.
I set the environment variable:
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"
And I try to initialize Terraform Cloud:
terraform init
But I get this error:
Error: Invalid workspace selection
Terraform failed to find workspace "mycompany-infrastructure-dev" with the tags specified in your configuration: [dev, prod]
How can I create one configuration that I can use with different environment/workspaces?
Thank you
First, I ran similar code to yours in my environment and received an error: it prompted me to use terraform login to generate a token for accessing the organization on Terraform Cloud.
The login was successful, and the browser generated an API token.
I entered the token and was logged into Terraform Cloud.
In Terraform Cloud -> Organizations, I created a new organization.
Script for creating different workspaces from a single configuration file:
cloud {
  organization = "mycompanyone"

  workspaces {
    tags = ["dev", "prod"]
  }
}
I took your script and made a few changes. Terraform prompts for some basic details while initializing.
Now run terraform init or terraform init -upgrade.
terraform initialized successfully.
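The key point is that the workspace named in TF_WORKSPACE must already exist in the organization and carry one of the tags listed in the configuration. A sketch, assuming a workspace mycompany-infrastructure-dev tagged dev in Terraform Cloud:

# Select an existing workspace that carries the "dev" tag, then init:
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"
terraform init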

Running 'terragrunt apply' on an EC2 Instance housed in a No Internet Environment

I have been trying to set up my Terragrunt EC2 environment in a no/very limited internet setting.
Current Setup:
An AWS Network Firewall that whitelists domains to allow traffic; most internet traffic is blocked except for a few domains
An EC2 instance where I run the terragrunt code; it has an instance profile that can assume the role in the providers block
VPC endpoints set up for sts, s3, dynamodb, codeartifact, etc.
All credentials (assumed role etc) work and have been verified
Remote State and Providers File
remote_state {
  backend = "s3"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket         = "***"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "ap-southeast-1"
    encrypt        = true
    dynamodb_table = "***"
  }
}
# Dynamically changes the role depending on which account is being modified
generate "providers" {
  path      = "providers.tf"
  if_exists = "overwrite"

  contents = <<EOF
provider "aws" {
  region = "${local.env_vars.locals.aws_region}"

  assume_role {
    role_arn = "arn:aws:iam::$***"
  }

  endpoints {
    sts      = "https://sts.ap-southeast-1.amazonaws.com"
    s3       = "https://s3.ap-southeast-1.amazonaws.com"
    dynamodb = "https://dynamodb.ap-southeast-1.amazonaws.com"
  }
}
EOF
}
With Internet (Turning off the firewall):
I am able to run all the terragrunt commands
Without Internet
I only allow "registry.terraform.io" to pass the firewall
I am able to assume the role listed in providers via aws sts assume-role, and I can list the tables in dynamodb and files in the s3 bucket
I am able to run terragrunt init on my EC2 instance with the instance profile; I assume terragrunt uses the correct sts endpoint
However, when I run terragrunt apply, it hangs at the stage DEBU[0022] Running command: terraform plan prefix=[***]
In my CloudTrail I do see that Terragrunt has assumed the username aws-go-sdk-1660077597688447480 for the event GetCallerIdentity, so I think the provider is able to assume the role that was declared in the providers block
I tried adding custom endpoints for sts, s3, and dynamodb, but it still hangs.
I suspect that terraform is still trying to use the internet when making the AWS SDK calls, which leads to terragrunt apply being stuck.
Is there a comprehensive list of endpoints I need to custom add, or a list of domains I should whitelist to be able to run terragrunt apply?
I set the environment variable TF_LOG to debug, and besides the registry.terraform.io domain, I was able to gather these ones:
github.com
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/go-tfe v1.0.0
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/hcl/v2 v2.12.0
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734
sts.region.amazonaws.com
resource.region.amazonaws.com
You'll want to add those domains to your whitelist in the firewall settings. Something like *.region.amazonaws.com should do the trick; of course, you can be more restrictive and, rather than using a wildcard, specify the exact service endpoints.
For reference: https://docs.aws.amazon.com/general/latest/gr/rande.html
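For the setup described above (region ap-southeast-1), a sketch of a minimal allowlist; extend it to match whichever services your configuration actually calls:

# Illustrative domain allowlist for the firewall:
registry.terraform.io
github.com
sts.ap-southeast-1.amazonaws.com
s3.ap-southeast-1.amazonaws.com
dynamodb.ap-southeast-1.amazonaws.com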

Terraform wants to recreate imported resources

Locally I:
Created main.tf
Initialized with terraform init
Imported the GCP project and Google Cloud Run service
Updated main.tf so terraform plan was not trying to do anything
Checked main.tf into GitHub
I set up GitHub Actions to:
Checkout
Setup Gcloud
Initialize with terraform init
Plan with terraform plan
Terraform plan is trying to recreate everything.
How do I make it detect existing resources?
By default Terraform will initialise local state. The problem with this state is that it is available only to you, on your PC; if you execute a plan somewhere else, that state is not available there. To solve this, you need to set up a remote backend for Terraform so the state file is stored in a centralised location.
If you are using Google Cloud, you can use a Cloud Storage bucket for storing the state file. Terraform offers the gcs backend for this. You have to create a bucket and provide the bucket name to the gcs backend configuration:
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}
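A sketch of the workflow, assuming the bucket name above (the location is illustrative):

# Create the bucket first:
gsutil mb -l us-central1 gs://tf-state-prod

# Re-initialize; Terraform detects the backend change and offers to
# copy the existing local state into the bucket:
terraform init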

Migrate Terraform CLI workspaces to Terraform Cloud error

I am trying to migrate a project's CLI workspaces to Terraform Cloud. I am using Terraform version 0.14.8 and following the official guide here.
$ terraform0.14.8 workspace list
default
* development
production
staging
Currently, the project uses the S3 remote state backend configuration:
terraform {
  backend "s3" {
    profile              = "..."
    key                  = "..."
    workspace_key_prefix = "environments"
    region               = "us-east-1"
    bucket               = "terraform-state-bucketA"
    dynamodb_table       = "terraform-state-bucketA"
    encrypt              = true
  }
}
I changed the backend configuration to:
backend "remote" {
hostname = "app.terraform.io"
organization = "orgA"
workspaces {
prefix = "happyproject-"
}
}
and execute terraform0.14.8 init in order to begin the state migration process. Expected behaviour would be to create 3 workspaces in Terraform Cloud:
happyproject-development
happyproject-staging
happyproject-production
However, I get the following error:
$ terraform0.14.8 init
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Terraform detected that the backend type changed from "s3" to "remote".
Error: Error looking up workspace
Workspace read failed: invalid value for workspace
I also enabled TRACE level logs and just before it throws the error I can see this: 2021/03/23 10:08:03 [TRACE] backend/remote: looking up workspace for orgA/.
Notice the empty string after orgA/ and the omission of the prefix! I am guessing that TF tries to query Terraform Cloud for the default workspace, which is an empty string, and it fails to do so.
I have not been using the default workspace at all and it just appears when I am executing terraform0.14.8 init. The guide mentions:
Some backends, including the default local backend, allow a special default workspace that doesn't have a specific name. If you previously used a combination of named workspaces and the special default workspace, the prompt will next ask you to choose a new name for the default workspace, since Terraform Cloud doesn't support unnamed workspaces:
However, it never prompts me to choose a name for the default workspace. Any help would be much appreciated!
I had a similar issue, and what helped me was to create the empty workspace with the expected name in advance and then run terraform init.
I also copied the .tfstate file from the remote location to the root directory of the project before doing init. Hope this helps you as well.
What I ended up doing was:
Created the empty workspaces in Terraform Cloud
For every CLI workspace, pointed the backend to the respective TFC workspace and executed terraform init. That way, the Terraform state was automatically migrated from the S3 backend to TFC
Finally, after all CLI workspaces were migrated, used the prefix argument of the workspaces block instead of the name argument to manage the different TFC workspaces (see the sketch below)
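A sketch of the two phases, using the workspace names from the question:

# Phase 1 (repeated per CLI workspace): point at a single TFC workspace
# by name and run terraform init to migrate its state.
backend "remote" {
  hostname     = "app.terraform.io"
  organization = "orgA"

  workspaces {
    name = "happyproject-development"
  }
}

# Phase 2 (after all workspaces are migrated): switch to the prefix form
# so one configuration can address all of the workspaces.
backend "remote" {
  hostname     = "app.terraform.io"
  organization = "orgA"

  workspaces {
    prefix = "happyproject-"
  }
}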

Terraform profile field usage in AWS provider

I have a $HOME/.aws/credentials file like this:
[config1]
aws_access_key_id=accessKeyId1
aws_secret_access_key=secretAccesskey1
[config2]
aws_access_key_id=accessKeyId2
aws_secret_access_key=secretAccesskey2
So I was expecting that with this configuration, terraform would choose the second set of credentials:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
But when I try terraform init it says it hasn't found any valid credentials:
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
As a workaround, I renamed config2 to default in my credentials file and removed the profile field from the provider block, and that works, but I really need to use something like the first approach. What am I missing here?
Unfortunately, you also need to provide the IAM credential configuration to the backend configuration, not just to your AWS provider configuration.
The S3 backend configuration takes the same parameters here as the AWS provider so you can specify the backend configuration like this:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config2"
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
There are a few reasons this needs to be done separately. One is that you can independently use different IAM credentials, accounts, and regions for the S3 bucket and for the resources you will be managing with the AWS provider. You might also want to use S3 as a backend even if you are creating resources in another cloud provider, or not using a cloud provider at all; Terraform can manage resources in a lot of places that don't have a way to store Terraform state. The main reason, though, is that backends are managed by the core Terraform binary rather than by the provider binaries, and backend initialisation happens before pretty much anything else.
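As an illustration of that independence, a sketch where the state bucket and the managed resources use different profiles (both profile names come from the question's credentials file):

terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config1" # credentials used only to read/write state
  }
}

provider "aws" {
  profile = "config2" # credentials used for the managed resources
  region  = "eu-central-1"
}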
