loop over aws provider to create resources in every aws account - terraform

I have a list of objects in Terraform called accounts with the following structure:
variable "accounts" {
type = list(object({
id = string #used in assume_role
email = string
is_enabled = bool
}))
}
What I am trying to achieve now is to create a simple S3 bucket in each of those AWS accounts (where is_enabled is true). I was able to do it for a single account, but I am not sure whether there is a way to loop over a provider?
Code for a single account - main.tf
provider "aws" {
alias = "new_account"
region = "eu-west-3"
assume_role {
role_arn = "arn:aws:iam::${aws_organizations_account.this.id}:role/OrganizationAccountAccessRole"
session_name = "new_account_creation"
}
}
resource "aws_s3_bucket" "bucket" {
provider = aws.new_account
bucket = "new-account-bucket-${aws_organizations_account.this.id}"
acl = "private"
}

You need to define one provider for each AWS account you want to use:
Create a module (i.e. a directory) where your bucket configuration lives:
├── main.tf
└── module
    └── bucket.tf
bucket.tf should contain the resource definition: resource "aws_s3_bucket" "bucket" {...} (see the sketch after the main.tf example below).
In main.tf, define multiple aws providers and call the module with each of them:
provider "aws" {
alias = "account1"
region = "eu-west-1"
...
}
provider "aws" {
alias = "account2"
region = "us-west-1"
...
}
module "my_module" {
source = "./module"
providers = {
aws.account1 = aws.account1
aws.account2 = aws.account2
}
}
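For the providers map above to work, the module has to declare that it expects those two aliased configurations. A sketch of module/bucket.tf, assuming Terraform 0.15+ where configuration_aliases is available (the bucket names are illustrative):

# module/bucket.tf
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.account1, aws.account2]
    }
  }
}

resource "aws_s3_bucket" "bucket_account1" {
  provider = aws.account1
  bucket   = "my-bucket-account1" # illustrative name
}

resource "aws_s3_bucket" "bucket_account2" {
  provider = aws.account2
  bucket   = "my-bucket-account2" # illustrative name
}

On older Terraform versions the same declaration is made with empty proxy provider blocks, as shown in the submodule question further down.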
You could also get fancier by filtering the accounts list on the is_enabled flag to decide which module instances to create. Note, however, that provider configurations themselves cannot be stored in variables or passed around as values.
More details about providers: https://www.terraform.io/docs/language/modules/develop/providers.html
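The is_enabled filter is straightforward with a for expression. A short sketch against the accounts variable from the question:

locals {
  # Keep only the accounts that should receive a bucket
  enabled_accounts = [
    for account in var.accounts : account
    if account.is_enabled
  ]
}

output "enabled_account_ids" {
  value = [for account in local.enabled_accounts : account.id]
}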

Found what I was looking for here: https://github.com/hashicorp/terraform/issues/19932
Thanks Bryan Karaffa. Note that for_each on a provider block is the syntax proposed in that issue; Terraform does not support it at the time of writing, so the snippet below illustrates the idea rather than working configuration.
## Just some data... a list(map())
locals {
  aws_accounts = [
    { "aws_account_id": "123456789012", "foo_value": "foo", "bar_value": "bar" },
    { "aws_account_id": "987654321098", "foo_value": "foofoo", "bar_value": "barbar" },
  ]
}

## Here's the proposed magic... `provider.for_each`
provider "aws" {
  for_each = local.aws_accounts
  alias    = each.value.aws_account_id

  assume_role {
    role_arn = "arn:aws:iam::${each.value.aws_account_id}:role/TerraformAccessRole"
  }
}

## Modules reference the provider dynamically using `each.value.aws_account_id`
module "foo" {
  source   = "./foo"
  for_each = local.aws_accounts

  providers = {
    aws = "aws.${each.value.aws_account_id}"
  }

  foo = each.value.foo_value
}

module "bar" {
  source   = "./bar"
  for_each = local.aws_accounts

  providers = {
    aws = "aws.${each.value.aws_account_id}"
  }

  bar = each.value.bar_value
}

Related

Terraform plan not working for a long time with AWS S3

I am using Terraform to deploy the backend code to AWS. terraform init works fine, but the next command, terraform plan, hangs for a long time without printing any output to the CLI. I would appreciate any help from you developers.
Here is my main.tf code.
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
default_tags {
tags = {
Owner = "Example Owner"
Project = "Example"
}
}
}
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "react-app/build"
template_vars = {
vpc_id = "vpc-abc123123123"
}
}
resource "aws_s3_bucket" "test_tf_bucket" {
bucket = local.test_tf_creds.bucket
website {
index_document = "index.html"
}
tags = {
Bucket = "Example Terraform Bucket"
}
}
resource "aws_s3_bucket_object" "build_test_tf" {
for_each = module.template_files.files
bucket = local.test_tf_creds.bucket
key = each.key
content_type = each.value.content_type
source = each.value.source_path
content = each.value.content
etag = each.value.digests.md5
tags = {
Bucket-Object = "Example Bucket Object"
}
}

Apply a resource for each provider - Terraform

I'm trying to create an aws_ssm_parameter in 2 regions from within 1 resource block, as I need the writer endpoint available from SSM in both eu-central-1 and eu-west-1.
Currently I'm doing it like this:
resource "aws_ssm_parameter" "primary_writer_endpoint" {
name = "/aurora/${var.environment}-${var.service_name}-primary-cluster/writer-endpoint"
type = "String"
value = aws_rds_cluster.primary.endpoint
overwrite = true
}
resource "aws_ssm_parameter" "primary_writer_endpoint_replica_region" {
name = "/aurora/${var.environment}-${var.service_name}-primary-cluster/writer-endpoint"
type = "String"
value = aws_rds_cluster.primary.endpoint
overwrite = true
provider = aws.replica
}
as I declared 2 providers:
provider "aws" {
region = "eu-central-1"
}
provider "aws" {
alias = "replica"
region = "eu-west-1"
}
Is there a cleaner way to do this, e.g. iterating over the providers or creating a map of the providers? It would be great to do this more cleanly.
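There is no built-in way to iterate over providers, but one option that at least removes the duplicated arguments is to move the parameter into a tiny module and instantiate it once per region, overriding the default provider for the replica. A sketch only; the module path and names are hypothetical:

# modules/ssm-parameter/main.tf (hypothetical module)
variable "name" {
  type = string
}

variable "value" {
  type = string
}

resource "aws_ssm_parameter" "this" {
  name      = var.name
  type      = "String"
  value     = var.value
  overwrite = true
}

# Root module: one call per region
module "writer_endpoint_primary" {
  source = "./modules/ssm-parameter"
  name   = "/aurora/${var.environment}-${var.service_name}-primary-cluster/writer-endpoint"
  value  = aws_rds_cluster.primary.endpoint
}

module "writer_endpoint_replica" {
  source = "./modules/ssm-parameter"
  name   = "/aurora/${var.environment}-${var.service_name}-primary-cluster/writer-endpoint"
  value  = aws_rds_cluster.primary.endpoint

  providers = {
    aws = aws.replica
  }
}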

submodule not inheriting providers

Issue summary:
Providers not being passed to submodule
Issue description:
Hello,
I'm trying to pass providers to a submodule from the root, as recommended by HashiCorp, especially now that I need to loop over the module using for_each. However, I'm getting an error indicating that the submodule isn't getting the provider passed down to it.
Does anyone have any guidance on what I'm doing wrong?
Thank you for your time
error:
Error: missing provider
module.vpc_peering.provider["registry.terraform.io/hashicorp/aws"].requester
code:
main.tf
# Requester's credentials
provider "aws" {
  alias  = "requester"
  region = var.aws_region

  assume_role {
    role_arn = local.workspace_role_arn_requester
  }
}

# Accepter's credentials
provider "aws" {
  alias  = "accepter"
  region = var.aws_region

  assume_role {
    role_arn = local.workspace_role_arn_accepter
  }
}
#################################################
# VPC peer from Admin to Current
#################################################
module "vpc_peering" {
  for_each = toset(local.accepter_ids)
  source   = "./modules/peer"

  providers = {
    aws.requester = aws.requester
    aws.accepter  = aws.accepter
  }
  # ...
}
modules/peer/admin-peer.tf
module "vpc_peering_cross_account" {
source = "git::https://github.com/YouLend/terraform-aws-vpc-peering-multi-account?ref=aws_profile_accepter_version_0.13"
providers = {
aws.requester = aws.requester
aws.accepter = aws.accepter
}
I got it working. For those experiencing the same issue, this comment on GitHub explains what needs to be done:
https://github.com/hashicorp/terraform/issues/17399#issuecomment-367342717
Essentially, you need an empty (proxy) provider block in each module that intends to pass providers on, so in my example above this code needs to go into modules/peer/admin-peer.tf:
provider "aws" {
}
provider "aws" {
alias = "requester"
}
provider "aws" {
alias = "accepter"
}
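For reference, these empty proxy provider blocks are the legacy mechanism. On newer Terraform versions (0.15 and later) the same declaration is made with configuration_aliases in the module's required_providers block, which replaces all three empty blocks. A sketch:

# modules/peer/versions.tf
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.requester, aws.accepter]
    }
  }
}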

Migrate terraform modules to updated provider format

I inherited a codebase with all providers stored inside modules and am having a lot of trouble moving the providers out so that I can remove the resources created by those modules.
The current design violates the rules outlined here: https://www.terraform.io/docs/configuration/providers.html and makes removing modules impossible.
My understanding of the migration steps is:
1. Create a provider for use at the top level
2. Update module resources to use the provider stored outside of the module
3. Remove the module (with the top-level provider persisting)
Example module
An example /route53-alias-record/main.tf is:
variable "evaluate_target_health" {
default = true
}
data "terraform_remote_state" "env" {
backend = "s3"
config = {
bucket = "<bucket>"
key = "infra-${var.environment}-${var.target}.tfstate"
region = "<region>"
}
}
provider "aws" {
region = data.terraform_remote_state.env.outputs.region
allowed_account_ids = data.terraform_remote_state.env.outputs.allowed_accounts
assume_role {
role_arn = data.terraform_remote_state.env.outputs.aws_account_role
}
}
resource "aws_route53_record" "alias" {
zone_id = data.terraform_remote_state.env.outputs.public_zone_id
name = var.fqdn
type = "A"
alias {
name = var.alias_name
zone_id = var.zone_id
evaluate_target_health = var.evaluate_target_health
}
}
Starting usage
module "api-dns-alias" {
source = "../environment/infra/modules/route53-alias-record"
environment = "${var.environment}"
zone_id = "${module.service.lb_zone_id}"
alias_name = "${module.service.lb_dns_name}"
fqdn = "${var.environment}.example.com"
}
Provider overriding
## Same as inside module
provider "aws" {
  region              = data.terraform_remote_state.env.outputs.region
  allowed_account_ids = data.terraform_remote_state.env.outputs.allowed_accounts

  assume_role {
    role_arn = data.terraform_remote_state.env.outputs.aws_account_role
  }
}

module "api-dns-alias" {
  source      = "../environment/infra/modules/route53-alias-record"
  environment = "${var.environment}"
  zone_id     = "${module.service.lb_zone_id}"
  alias_name  = "${module.service.lb_dns_name}"
  fqdn        = "${var.environment}.example.com"

  providers = {
    aws = aws ## <-- pass in explicitly
  }
}
I was able to deploy safely with the providers set, but I do not believe they are being used inside the module, which means the handshake still fails when I remove the module and the resources cannot be deleted.
I am looking for the steps needed to migrate to an outside provider so that I can safely remove resources.
I am currently working with Terraform 0.12.24.
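A sketch of steps 1 and 2 applied to the example module: delete the provider block from /route53-alias-record/main.tf so the resources bind to the aws provider the caller passes in, apply that change, and only then remove the module in a separate change while the top-level provider persists to handle the destroy. The file below is simply the example module minus its provider block:

# /route53-alias-record/main.tf after migration (provider block removed)
variable "evaluate_target_health" {
  default = true
}

data "terraform_remote_state" "env" {
  backend = "s3"

  config = {
    bucket = "<bucket>"
    key    = "infra-${var.environment}-${var.target}.tfstate"
    region = "<region>"
  }
}

resource "aws_route53_record" "alias" {
  zone_id = data.terraform_remote_state.env.outputs.public_zone_id
  name    = var.fqdn
  type    = "A"

  alias {
    name                   = var.alias_name
    zone_id                = var.zone_id
    evaluate_target_health = var.evaluate_target_health
  }
}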

Terraform optional provider for optional resource

I have a module where I want to conditionally create an s3 bucket in another region. I tried something like this:
resource "aws_s3_bucket" "backup" {
count = local.has_backup ? 1 : 0
provider = "aws.backup"
bucket = "${var.bucket_name}-backup"
versioning {
enabled = true
}
}
but it appears that I need to provide the aws.backup provider even if count is 0. Is there any way around this?
NOTE: this wouldn't be a problem if I could use a single provider to create buckets in multiple regions, see https://github.com/terraform-providers/terraform-provider-aws/issues/8853
Based on your description, I understand that you want to create resources using the same "profile", but in a different region.
For that case I would take the following approach:
Create a module file for your s3_bucket_backup; in that file you will build your "backup provider" from variables.
# Module file for s3_bucket_backup
provider "aws" {
  region  = var.region
  profile = var.profile
  alias   = "backup"
}

variable "profile" {
  type        = string
  description = "AWS profile"
}

variable "region" {
  type        = string
  description = "AWS region"
}

variable "has_backup" {
  type        = bool
  description = "Whether to create the backup bucket"
}

variable "bucket_name" {
  type        = string
  description = "Bucket name"
}

resource "aws_s3_bucket" "backup" {
  count    = var.has_backup ? 1 : 0
  provider = aws.backup
  bucket   = "${var.bucket_name}-backup"
}
In your main tf file, declare your provider profile using local variables, and call the module passing the profile and a different region:
# Main tf file
provider "aws" {
  region  = "us-east-1"
  profile = local.profile
}

locals {
  profile    = "default"
  has_backup = true
}

module "s3_backup" {
  source      = "./module"
  profile     = local.profile
  region      = "us-east-2"
  has_backup  = local.has_backup
  bucket_name = "my-bucket-name"
}
And there you have it: you can now build your s3_bucket_backup using the same "profile" with different regions.
In the example above, the region used by the main file is us-east-1 and the bucket is created in us-east-2.
If you set has_backup to false, it won't create anything.
Since the "backup provider" is built inside the module, your code won't look "dirty" from having multiple providers in the main tf file. Be aware, though, that defining providers inside modules is the legacy pattern discussed in the migration question above, and it prevents using count or for_each on the module block.
