Create an S3 bucket in each AWS account created with Terraform

I am using Terraform to create multiple AWS accounts with aws_organizations_account. What I am now trying to do is create an aws_s3_bucket in each newly created account.
resource "aws_organizations_account" "this" {
for_each = local.all_user_ids
name = "Dev Sandbox ${each.value}"
email = "${var.manager}+sbx_${each.value}#example.com"
role_name = "Administrator"
parent_id = var.sandbox_organizational_unit_id
}
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
}
Everything works as expected for aws_organizations_account during terraform apply, but the S3 bucket is created in my current AWS account, while I am trying to create an S3 bucket in every newly created account.

Step 1: Create_terraform_s3_buckets.tf
# First configure the AWS Provider
provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}
// Then use the resource block to create all the buckets in the variable array.
// Set up your bucket names in a variable, e.g. My_Accounts_s3_buckets
variable "My_Accounts_s3_buckets" {
  type    = list(string)
  default = ["Testbucket1.app", "Testbucket2.app", "Testbucket3.app"]
}
Look up the aws_s3_bucket resource in the Terraform reference for more options.
// resource "aws_s3_bucket" "rugged_buckets" { ... } <- you can set different options on your buckets
resource "aws_s3_bucket" "b" {
count = length(var.My_Accounts_s3_buckets) // here are you 3 accounts
bucket = var.My_Accounts_s3_buckets[count.index]
acl = "private"
region = "us-east-1"
force_destroy = true
tags = {
Name = "My Test bucket"
Environment = "Dev"
}
}
Step 2: You can now automate this with the variables file.
# Make sure you keep this order
variable "My_Accounts_s3_buckets" {
  type    = list(string)
  default = [
    "mybucket1.app",
    "mybucket2.app" // you can add more as needed
  ]
}
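If you prefer for_each over count (so each bucket is tracked by its name rather than by its position in the list), a minimal equivalent sketch would be:
resource "aws_s3_bucket" "b" {
  for_each      = toset(var.My_Accounts_s3_buckets)
  bucket        = each.value
  acl           = "private"
  force_destroy = true
  tags = {
    Name        = "My Test bucket"
    Environment = "Dev"
  }
}
With for_each, removing a name from the list destroys only that bucket instead of shifting the indexes of the remaining ones.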

Related

How to append a string and a variable in Terraform

How can I concatenate a string and a variable in Terraform? I am using Terraform version 1.7.
Name = "Test (Environment_Name)" where Environment_Name will be test, stage, or prod.
resource "fusionauth_tenant" "tanant" {
name = "Test (Environment_name)"
email_configuration {
default_from_name = "FusionAuth [Environment_name]"
verification_email_template_id = fusionauth_email.verification_template.id
}
Examples of how to append a string and a variable.
settings.tf:
locals {
  bucket_prefix = "test-bucket"
}
And then you want to create three S3 buckets.
s3.tf:
resource "aws_s3_bucket" "a" {
bucket = "${local.bucket_prefix}-app"
}
//name = test-bucket-app
resource "aws_s3_bucket" "b" {
bucket = local.bucket_prefix
}
//name = test-bucket
resource "aws_s3_bucket" "c" {
bucket = "my-bucket"
}
//name = my-bucket
If you want to append a variable from var, or a value exported by a resource, it follows the same pattern. It will always be:
"${var.name}-my-string" (or "${resource_type.name.attribute}-my-string" for a resource)

Is there a way to retrieve data from another Account in Terraform?

I want to use a data source in Terraform, for example for an SNS topic, but I don't want it to look for the resource in the AWS account I'm deploying my other resources to. It should look it up in my other AWS account (in the same organization) and find the resource there. Is there a way to make this happen?
data "aws_sns_topic" "topic_alarms_data" {
name = "topic_alarms"
}
Define an aws provider with credentials for the remote account:
# Default provider that you use:
provider "aws" {
  region = var.context.aws_region
  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.account_id)
  }
}
# Aliased provider for the remote account:
provider "aws" {
  alias  = "remote"
  region = var.context.aws_region
  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.remote_account_id)
  }
}
data "aws_sns_topic" "topic_alarms_data" {
provider = aws.remote
name = "topic_alarms"
}
The topic is now looked up in the remote account through the aliased provider.
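The looked-up ARN can then be consumed by resources created with the default provider; for example (the alarm settings below are purely illustrative):
resource "aws_cloudwatch_metric_alarm" "example" {
  alarm_name          = "example-alarm"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 300
  statistic           = "Average"
  threshold           = 80
  # Reference the topic looked up in the remote account
  alarm_actions       = [data.aws_sns_topic.topic_alarms_data.arn]
}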

Loop over AWS providers to create resources in every AWS account

I have a list of objects in Terraform called accounts, with the following structure:
variable "accounts" {
type = list(object({
id = string #used in assume_role
email = string
is_enabled = bool
}))
}
What I am trying to achieve now is to create a simple S3 bucket in each of those AWS accounts (if is_enabled is true). I was able to do it for a single account, but I am not sure whether there is a way to loop over a provider.
Code for a single account - main.tf
provider "aws" {
alias = "new_account"
region = "eu-west-3"
assume_role {
role_arn = "arn:aws:iam::${aws_organizations_account.this.id}:role/OrganizationAccountAccessRole"
session_name = "new_account_creation"
}
}
resource "aws_s3_bucket" "bucket" {
provider = aws.new_account
bucket = "new-account-bucket-${aws_organizations_account.this.id}"
acl = "private"
}
You need to define one provider for each aws account you want to use:
Create a module (i.e. a directory), where your bucket configuration lives:
├── main.tf
└── module
└── bucket.tf
bucket.tf should contain the resource definition: resource "aws_s3_bucket" "bucket" {...}
In main.tf, define multiple aws providers and call the module with each of them:
provider "aws" {
alias = "account1"
region = "eu-west-1"
...
}
provider "aws" {
alias = "account2"
region = "us-west-1"
...
}
module "my_module" {
source = "./module"
providers = {
aws.account1 = aws.account1
aws.account2 = aws.account2
}
}
I guess you could also get fancy by creating a variable containing the providers and passing it to the module invocation (you could probably also filter the list to take the is_enabled flag into account).
More details about providers: https://www.terraform.io/docs/language/modules/develop/providers.html
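Note that for the providers block above to work, the module itself must declare which provider aliases it expects. A minimal sketch of module/bucket.tf (the bucket names here are made up):
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.account1, aws.account2]
    }
  }
}
resource "aws_s3_bucket" "bucket_account1" {
  provider = aws.account1
  bucket   = "my-bucket-account1"
}
resource "aws_s3_bucket" "bucket_account2" {
  provider = aws.account2
  bucket   = "my-bucket-account2"
}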
Found what I was looking for here: https://github.com/hashicorp/terraform/issues/19932
Thanks Bryan Karaffa
## Just some data... a list(map())
locals {
  aws_accounts = [
    { "aws_account_id" : "123456789012", "foo_value" : "foo", "bar_value" : "bar" },
    { "aws_account_id" : "987654321098", "foo_value" : "foofoo", "bar_value" : "barbar" },
  ]
}
## Here's the proposed magic... `provider.for_each`
## (this is a feature request from that issue; Terraform does not currently support for_each on provider blocks)
provider "aws" {
  for_each = local.aws_accounts
  alias    = each.value.aws_account_id
  assume_role {
    role_arn = "arn:aws:iam::${each.value.aws_account_id}:role/TerraformAccessRole"
  }
}
## Modules reference the provider dynamically using `each.value.aws_account_id`
module "foo" {
  source   = "./foo"
  for_each = local.aws_accounts
  providers = {
    aws = "aws.${each.value.aws_account_id}"
  }
  foo = each.value.foo_value
}
module "bar" {
  source   = "./bar"
  for_each = local.aws_accounts
  providers = {
    aws = "aws.${each.value.aws_account_id}"
  }
  bar = each.value.bar_value
}

Terraform optional provider for optional resource

I have a module where I want to conditionally create an s3 bucket in another region. I tried something like this:
resource "aws_s3_bucket" "backup" {
count = local.has_backup ? 1 : 0
provider = "aws.backup"
bucket = "${var.bucket_name}-backup"
versioning {
enabled = true
}
}
but it appears that I need to provide the aws.backup provider even if count is 0. Is there any way around this?
NOTE: this wouldn't be a problem if I could use a single provider to create buckets in multiple regions, see https://github.com/terraform-providers/terraform-provider-aws/issues/8853
Based on your description, I understand that you want to create resources using the same "profile", but in a different region.
For that case I would take the following approach:
Create a module file for your s3_bucket_backup; in that file you will build your "backup provider" from variables.
# Module file for s3_bucket_backup
provider "aws" {
  region  = var.region
  profile = var.profile
  alias   = "backup"
}
variable "profile" {
  type        = string
  description = "AWS profile"
}
variable "region" {
  type        = string
  description = "AWS region for the backup bucket"
}
variable "has_backup" {
  type        = bool
  description = "Whether to create the backup bucket"
}
variable "bucket_name" {
  type        = string
  description = "Base bucket name"
}
resource "aws_s3_bucket" "backup" {
  count    = var.has_backup ? 1 : 0
  provider = aws.backup
  bucket   = "${var.bucket_name}-backup"
}
In your main tf file, declare your provider profile using local variables, then call the module, passing the profile and a different region:
# Main tf file
provider "aws" {
  region  = "us-east-1"
  profile = local.profile
}
locals {
  profile    = "default"
  has_backup = true
}
module "s3_backup" {
  source      = "./module"
  profile     = local.profile
  region      = "us-east-2"
  has_backup  = local.has_backup
  bucket_name = "my-bucket-name"
}
And there you have it: you can now build your s3_bucket_backup using the same "profile" with different regions.
In the example above, the region used by the main file is us-east-1, while the bucket is created in us-east-2.
If you set has_backup to false, nothing is created.
Since the "backup provider" is built inside the module, your main tf file doesn't get cluttered with multiple providers.

Terraform provision to multiple accounts

I use Terraform to deploy a Lambda to one AWS account and the S3 trigger for that Lambda in another. Because of that, I created two separate folders, each holding the state for a specific account.
However, I'd like to merge everything into one template. Is that possible? Example:
provider "aws" {
profile = "${var.aws_profile}"
region = "eu-west-1"
}
provider "aws" {
alias = "bucket-trigger-account"
region = "eu-west-1"
profile = "${var.aws_bucket_trigger_profile}
}
I want the following resource to be provisioned through the bucket-trigger-account provider. How can I do that?
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${var.notifications_bucket}"
lambda_function {
lambda_function_arn = "arn:aws:lambda:eu-west-1-xxx"
events = ["s3:ObjectCreated:*"]
filter_suffix = ".test"
}
}
Found out that simply using the provider argument on the resource lets you use a different provider for that resource: provider = aws.bucket-trigger-account (the quoted form "aws.bucket-trigger-account" is the legacy pre-0.12 syntax).
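Applied to the resource above, a minimal sketch would be:
resource "aws_s3_bucket_notification" "bucket_notification" {
  provider = aws.bucket-trigger-account # provisioned in the trigger account
  bucket   = var.notifications_bucket
  lambda_function {
    lambda_function_arn = "arn:aws:lambda:eu-west-1-xxx"
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".test"
  }
}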
