Terraform profile field usage in AWS provider

I have a $HOME/.aws/credentials file like this:
[config1]
aws_access_key_id=accessKeyId1
aws_secret_access_key=secretAccesskey1
[config2]
aws_access_key_id=accessKeyId2
aws_secret_access_key=secretAccesskey2
So I was expecting that, with this configuration, Terraform would pick the second set of credentials:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
But when I run terraform init, it says it cannot find any valid credentials:
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
As a workaround, I renamed config2 to default in my credentials file and removed the profile field from the provider block, and that works. But I really need to use something like the first approach. What am I missing here?

Unfortunately, you need to provide the IAM credential configuration in the backend configuration as well as in your AWS provider configuration.
The S3 backend configuration takes the same parameters here as the AWS provider so you can specify the backend configuration like this:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config2"
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
There are a few reasons this needs to be done separately. One is that you can independently use different IAM credentials, accounts, and regions for the S3 bucket and for the resources you will be managing with the AWS provider. You might also want to use S3 as a backend even if you are creating resources in another cloud provider, or not using a cloud provider at all; Terraform can manage resources in a lot of places that don't have a way to store Terraform state. The main reason, though, is that backends are managed by the core Terraform binary rather than by the provider binaries, and backend initialisation happens before pretty much anything else.
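As an illustration of that first point, here is a minimal sketch where the state bucket lives under a different profile and region than the resources being managed (the bucket name and the shared-state profile are hypothetical):

terraform {
  backend "s3" {
    bucket  = "state-bucket-in-shared-account" # hypothetical bucket
    key     = "path/to/terraform.tfstate"
    region  = "us-east-1"     # region of the state bucket
    profile = "shared-state"  # hypothetical profile used only for state access
    encrypt = true
  }
}

provider "aws" {
  profile = "config2"      # credentials for the resources being managed
  region  = "eu-central-1"
}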

Related

Running 'terragrunt apply' on an EC2 Instance housed in a No Internet Environment

I have been trying to set up my Terragrunt EC2 environment in a no/very limited internet setting.
Current Setup:
An AWS Network Firewall that whitelists domains to allow traffic; most internet traffic is blocked except for a few domains.
An EC2 instance where I run the Terragrunt code; it has an instance profile that can assume the role in the providers block.
VPC endpoints set up for STS, S3, DynamoDB, CodeArtifact, etc.
All credentials (assumed role etc.) work and have been verified.
Remote State and Providers File
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "***"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "ap-southeast-1"
    encrypt        = true
    dynamodb_table = "***"
  }
}
# Dynamically changes the role depending on which account is being modified
generate "providers" {
  path      = "providers.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region = "${local.env_vars.locals.aws_region}"

  assume_role {
    role_arn = "arn:aws:iam::$***"
  }

  endpoints {
    sts      = "https://sts.ap-southeast-1.amazonaws.com"
    s3       = "https://s3.ap-southeast-1.amazonaws.com"
    dynamodb = "https://dynamodb.ap-southeast-1.amazonaws.com"
  }
}
EOF
}
With Internet (Turning off the firewall):
I am able to run all the terragrunt commands
Without Internet
I only allow "registry.terraform.io" to pass the firewall
I am able to assume the role listed in providers via aws sts assume-role, and I can list the tables in dynamodb and files in the s3 bucket
I am able to run terragrunt init on my EC2 instance with the instance profile; I assume Terragrunt uses the correct STS endpoint.
However, when I run terragrunt apply, it hangs at the stage `DEBU[0022] Running command: terraform plan prefix=[***]`.
In my CloudTrail I do see that Terragrunt assumed the username aws-go-sdk-1660077597688447480 for the GetCallerIdentity event, so I think the provider is able to assume the role declared in the providers block.
I tried adding custom endpoints for sts, s3, and dynamodb, but it still hangs.
I suspect that Terraform is still trying to reach the internet when making AWS SDK calls, which leads to terragrunt apply getting stuck.
Is there a comprehensive list of endpoints I need to custom add, or a list of domains I should whitelist to be able to run terragrunt apply?
I set the environment variable TF_LOG to debug, and besides the registry.terraform.io domain, I was able to gather these:
github.com
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/go-tfe v1.0.0
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/hcl/v2 v2.12.0
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734
sts.region.amazonaws.com
resource.region.amazonaws.com
You'll want to add those domains to your firewall whitelist; something like *.region.amazonaws.com should do the trick. Of course, you can be more restrictive and, rather than using a wildcard, specify the exact service endpoints.
For reference: https://docs.aws.amazon.com/general/latest/gr/rande.html
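If you prefer to pin the provider to your VPC endpoints rather than widen the firewall whitelist, a sketch of the endpoints block for ap-southeast-1 follows (the role ARN is a placeholder, and you need one endpoints entry per AWS service your configuration actually calls):

provider "aws" {
  region = "ap-southeast-1"

  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/deploy" # placeholder ARN
  }

  # One entry per service the configuration calls:
  endpoints {
    sts      = "https://sts.ap-southeast-1.amazonaws.com"
    s3       = "https://s3.ap-southeast-1.amazonaws.com"
    dynamodb = "https://dynamodb.ap-southeast-1.amazonaws.com"
  }
}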

How can I upload terraform state to s3 bucket?

I have a project whose infra is managed by Terraform. I'd like to push the state to an S3 bucket so other teams can use it. Below is the backend configuration:
terraform {
  backend "s3" {
    bucket = "MY_BUCKET"
    key    = "tfstate"
    region = "ap-southeast-2"
  }
}
When I run terraform init, I get the error below:
AccessDenied: Access Denied
status code: 403, request id: 107E6007C9C64805, host id: kWASxeq1msxvGPZIKdi+7htg3yncMFxW9PQuXdC8ouwsEHMhx8ZPu6dKGUGWzDtblC6WRg1P1ew=
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
It seems that Terraform tries to load state from the S3 bucket rather than push to it. How can I configure Terraform to push state to S3?
I have configured an AWS profile in a .tf file:
provider "aws" {
region = "ap-southeast-2"
profile = "me"
}
The credential for the current user has admin permission on the bucket.
I was facing the same issue and found that the bucket mentioned in the backend.tf file had not been created in my AWS console. I created the bucket with the same name mentioned in backend.tf and it worked for me.
For further readers:
AWS credentials can be provided as @Thiago Arrais mentioned.
Another way to provide credentials in the backend block is to define a profile:
terraform {
  backend "s3" {
    profile = "me" # AWS profile
    bucket  = "MY_BUCKET"
    key     = "tfstate"
    region  = "ap-southeast-2"
  }
}
And your ~/.aws/credentials file has a profile me with aws_access_key_id and aws_secret_access_key defined in it as follows:
[me]
aws_access_key_id = access_key_value
aws_secret_access_key = secret_key_value
I had the exact same problem. When terraform { backend "s3" {} } is defined, that block is evaluated before the provider "aws" {} block. That's why the backend cannot find the credential info defined in the provider block.
You're not providing the S3 credentials in the backend block. You'll need to set them there (access_key and secret_key parameters) or via environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
You'll also need to make sure that the bucket exists and that these credentials do have access to it.
By the way, you don't need an AWS provider block. The S3 backend is usable even if you don't manage AWS resources in your Terraform config.
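For example, this backend-only configuration stores state in S3 with credentials coming purely from the environment (a minimal sketch reusing the asker's placeholder bucket name):

terraform {
  backend "s3" {
    bucket = "MY_BUCKET"
    key    = "tfstate"
    region = "ap-southeast-2"
  }
}

# No provider "aws" block is needed just to store state:
#   export AWS_ACCESS_KEY_ID=...
#   export AWS_SECRET_ACCESS_KEY=...
#   terraform init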
In my case, the region in backend.tf was different from the region my bucket was actually in.

How to reload the terraform provider at runtime to use the different AWS profile

How can I reload the Terraform provider at runtime to use a different AWS profile?
I create a new user:
resource "aws_iam_user" "user_lake_admin" {
name = var.lake_admin_user_name
path = "/"
tags = {
tag-key = "data-test"
}
}
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
This lake_admin user is created in the same file.
Then I try to use:
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
resource "aws_glue_catalog_database" "myDB" {
name = "my-db"
provider = aws.lake-admin-profile
}
As far as I know, Terraform providers are evaluated first, across all Terraform files.
But is there any way to reload a provider's configuration in the middle of a Terraform run?
You can't do this directly.
You can apply the creation of the user in one root module and state, and then use its credentials in a provider for a second root module.
For the purposes of deploying infrastructure, you are likely better off with IAM Roles and assume role providers to handle this kind of situation.
Generally, you don't need to create infrastructure with a specific user. There's rarely an advantage to doing that; I can't think of a case where the principal that creates infrastructure gets any special implied access to it.
You can use a deployment IAM role or IAM user to deploy everything in the account and then use resource-based and IAM policies to apply restrictions within the deployment.
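For completeness, a minimal sketch of the assume-role provider mentioned above (the role ARN and the alias are hypothetical; the role is assumed to exist already, avoiding the chicken-and-egg problem of creating credentials mid-run):

provider "aws" {
  region = "us-west-2"
  alias  = "lake-admin"

  # Assume a pre-existing deployment role instead of creating IAM user keys:
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/lake-admin-deploy" # placeholder ARN
  }
}

resource "aws_glue_catalog_database" "myDB" {
  name     = "my-db"
  provider = aws.lake-admin
}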

How to encrypt password or sensitive data in terraform?

I want to create an RDS instance and the entire infrastructure required for it in AWS. I don't fully understand the security side of Terraform. I want to encrypt the sensitive data in .tfstate at least, e.g. the password/username for the RDS instance. What is the best way to store sensitive data in .tfstate? If that's not supported, please suggest other ways to do it. Thank you.
Using the AWS provider as an example:
I recommend saving the .tfstate file to an S3 bucket and setting a policy on it so that only nominated roles have permission to access the bucket and the related KMS key.
terraform {
  required_version = "~> 0.10"

  backend "s3" {
    bucket     = "<global_unique_bucket_name>"
    key        = "development/vpc.tfstate"
    region     = "ap-southeast-2"
    kms_key_id = "alias/terraform"
    encrypt    = true
  }
}
Always enable versioning on that S3 bucket.
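A sketch of what that looks like with the inline versioning block used by AWS provider versions from that era (the bucket name is the placeholder from above; AWS provider v4+ moved this to a separate aws_s3_bucket_versioning resource):

resource "aws_s3_bucket" "tf_state" {
  bucket = "<global_unique_bucket_name>"

  # Keep every prior version of the state file so it can be recovered.
  versioning {
    enabled = true
  }
}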

Restrict creation of resources to a particular AWS Provider Profile in Terraform

I am trying to implement logic that restricts creation of AWS resources to one particular AWS profile, so that no one can accidentally create AWS resources under a different AWS profile.
E.g., the AWS resources should be created only if the AWS variables are set for the profile below:
provider "aws" {
profile = "AWS_Horizontal_Dev"
region = "us-east-1"
}
If the user accidentally sets the AWS variables for a different profile, then the AWS resources should not be created.
What's the best way to achieve this logic?
You could add the allowed_account_ids argument here as well to restrict to the exact AWS account, assuming your AWS profiles map to AWS accounts:
provider "aws" {
profile = "AWS_Horizontal_Dev"
region = "us-east-1"
allowed_account_ids = ["${var.allowed_account_id}"]
}
Or you could use forbidden_account_ids to exclude the accounts not allowed:
provider "aws" {
profile = "AWS_Horizontal_Dev"
region = "us-east-1"
forbidden_account_ids = ["${var.excluded_account_id}"]
}
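For completeness, a sketch of the variable declaration these examples assume (the default is a placeholder account ID):

variable "allowed_account_id" {
  description = "The only AWS account ID these resources may be created in"
  type        = string
  default     = "123456789012" # placeholder
}

With allowed_account_ids set, Terraform refuses to plan or apply when the authenticated account doesn't match, which is exactly the accidental-profile guard the question asks for.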
