Reference to other module resource in Terraform

I have a hierarchy like this in my Terraform Cloud git project:
├── aws
│   ├── flavors
│   │   └── main.tf
│   ├── main.tf
│   ├── security-rules
│   │   ├── main.tf
│   │   └── sec-rule1
│   │       └── main.tf
│   └── vms
│       ├── main.tf
│       └── vm1
│           └── main.tf
└── main.tf
All the main.tf files contain module definitions referencing their child folders:
/main.tf:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }

  required_providers {
    openstack = "~> 1.24.0"
  }
}

module "aws" {
  source = "./aws"
}
/aws/main.tf:
module "security-rules" {
  source = "./security-rules"
}

module "flavors" {
  source = "./flavors"
}

module "vms" {
  source = "./vms"
}

/aws/security-rules/main.tf:
module "sec-rule1" {
  source = "./sec-rule1"
}

/aws/vms/main.tf:
module "vm1" {
  source = "./vm1"
}
Then I have this security rule defined.
/aws/security-rules/sec-rule1/main.tf:
resource "openstack_compute_secgroup_v2" "sec-rule1" {
name = "sec-rule1"
description = "Allow web port"
rule {
from_port = 80
to_port = 80
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
lifecycle {
prevent_destroy = false
}
}
And I want to reference it from one or more VMs, but I don't know how to reference it by resource ID (or name), so I use plain names instead of references.
/aws/vms/vm1/main.tf:
resource "openstack_blockstorage_volume_v3" "vm1_volume" {
name = "vm1_volume"
size = 30
image_id = "foo-bar"
}
resource "openstack_compute_instance_v2" "vm1_instance" {
name = "vm1_instance"
flavor_name = "foo-bar"
key_pair = "foo-bar keypair"
image_name = "Ubuntu Server 18.04 LTS Bionic"
block_device {
uuid = "${openstack_blockstorage_volume_v3.vm1_volume.id}"
source_type = "volume"
destination_type = "volume"
boot_index = 0
delete_on_termination = false
}
network {
name = "SEG-tenant-net"
}
security_groups = ["default", "sec-rule1"]
config_drive = true
}
resource "openstack_networking_floatingip_v2" "vm1_fip" {
pool = "foo-bar"
}
resource "openstack_compute_floatingip_associate_v2" "vm1_fip" {
floating_ip = "${openstack_networking_floatingip_v2.vm1_fip.address}"
instance_id = "${openstack_compute_instance_v2.vm1_instance.id}"
}
I want to reference security rules (and other things) by name or ID because it would be more consistent. Also, when I create a new security rule and a VM at the same time, the Terraform OpenStack provider plans it without error, but the apply fails because the VM is created first and cannot find the not-yet-created security rule.
How can I do this?

You should make an output for sec_rule_allow_web_name in the sec-rule1 and security-rules/ modules, then set the output of the security-rules/ module as an input of the vm1 and vms modules. This way you keep a dependency of the vm1 module on the output of security_rules, which is called Dependency Inversion.
# ./security-rules/<example>/outputs.tf
output "sec_rule_allow_web_name" {
  value = "<some-resource-to-output>"
}

# ./vms/variables.tf
variable "security_rule_name" {}
Provided the outputs and inputs are defined in the correct modules.
# /aws/main.tf
# Best practice is to use underscores instead of dashes in names,
# so the security-rules module is called security_rules here.
module "security_rules" {
  source = "./security-rules"
}

module "flavors" {
  source = "./flavors"
}

module "vms" {
  source = "./vms"

  security_rule_name = module.security_rules.sec_rule_allow_web_name
}
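To make the whole chain concrete, here is a minimal sketch of how the value travels from the resource up through the security-rules module and down through vms into vm1. File placement follows the question's tree; the exact output value (the secgroup's name) is an assumption:
# /aws/security-rules/sec-rule1/outputs.tf
output "sec_rule_allow_web_name" {
  value = openstack_compute_secgroup_v2.sec-rule1.name
}

# /aws/security-rules/outputs.tf -- re-export the child module's output
output "sec_rule_allow_web_name" {
  value = module.sec-rule1.sec_rule_allow_web_name
}

# /aws/vms/main.tf -- accept the value and pass it through to vm1
variable "security_rule_name" {}

module "vm1" {
  source = "./vm1"
  security_rule_name = var.security_rule_name
}

# /aws/vms/vm1/main.tf -- consume the variable instead of a plain string
variable "security_rule_name" {}

resource "openstack_compute_instance_v2" "vm1_instance" {
  # ... other arguments as in the question ...
  security_groups = ["default", var.security_rule_name]
}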


Terraform output not found in Module

I'm getting the following error running Terraform apply when trying to reference an output from another Module:
│ Error: Unsupported attribute
│
│ on ../app-db-modules/rds.tf line 14, in resource "aws_db_subnet_group" "ccDBSubnetGroup":
│ 14: subnet_ids = ["${data.terraform_remote_state.remote.outputs.ccPrivateSubnetId}"]
│ ├────────────────
│ │ data.terraform_remote_state.remote.outputs is object with no attributes
│
│ This object does not have an attribute named "ccPrivateSubnetId".
╵
Tree
.
├── app
│   ├── main.tf
│   ├── terraform.tfstate
│   └── terraform.tfstate.backup
├── app-db-modules
│   ├── main.tf
│   ├── rds.tf
│   └── variables.tf
├── app-network-modules
│   ├── main.tf
│   ├── outputs.tf
│   ├── variables.tf
│   └── vpc.tf
└── app-tf-state-infra-modules
    ├── main.tf
    ├── tf-state-infra.tf
    └── variables.tf
main.tf (app dir)
terraform {
  backend "s3" {
    bucket = "MY_BUCKET_NAME"
    key = "tf-infra/terraform.tfstate"
    region = "us-east-1"
    dynamodb_table = "terraform-state-locking"
    encrypt = true
  }

  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~>3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
...
module "vpc-infra" {
  source = "../app-network-modules"

  # VPC Input Vars
  vpc_cidr = "10.0.0.0/16"
  public_subnet_cidr = "10.0.0.0/24"
  private_subnet_cidr = "10.0.1.0/24"
}

module "rds-infra" {
  source = "../app-db-modules"

  # RDS Input Vars
  db_az = "us-east-1a"
  db_name = "ccDatabaseInstance"
  db_user_name = var.db_user_name
  db_user_password = var.db_user_password
}
vpc.tf (app-network-modules)
...
resource "aws_subnet" "ccPrivateSubnet" {
  vpc_id = aws_vpc.ccVPC.id
  cidr_block = var.private_subnet_cidr
}
...
outputs.tf (app-network-modules)
output "ccPrivateSubnetId" {
  description = "Will be used by rds Module to set subnet_ids"
  value = aws_subnet.ccPrivateSubnet.id
}
The following expression, data.terraform_remote_state.remote.outputs.ccPrivateSubnetId, is causing the error:
rds.tf (app-db-modules)
data "terraform_remote_state" "remote" {
  backend = "s3"

  config = {
    bucket = "MY_BUCKET_NAME"
    key = "tf-infra/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_db_subnet_group" "ccDBSubnetGroup" {
  subnet_ids = ["${data.terraform_remote_state.remote.outputs.ccPrivateSubnetId}"]
}

resource "aws_db_instance" "ccDatabaseInstance" {
  db_subnet_group_name = "ccDBSubnetGroup"
  availability_zone = var.db_az
  allocated_storage = 20
  storage_type = "standard"
  engine = "postgres"
  engine_version = "12.5"
  instance_class = "db.t2.micro"
  name = var.db_name
  username = var.db_user_name
  password = var.db_user_password
  skip_final_snapshot = true
}

output "all_outputs" {
  value = data.terraform_remote_state.remote.outputs
}
Any thoughts on why data.terraform_remote_state.remote.outputs is an object with no attributes, and/or why I'm unable to reference ccPrivateSubnetId in rds.tf even though it was provided as an output from another module (vpc.tf), would be appreciated!
EDIT: providing the solution based on the comments below.
main.tf
...
module "vpc-infra" {
  source = "../app-network-modules"

  # VPC Input Vars
  vpc_cidr = "10.0.0.0/16"
  public_subnet_cidr = "10.0.0.0/24"
  private_subnet_cidr = "10.0.1.0/24"
}

module "rds-infra" {
  source = "../app-db-modules"

  # RDS Input Vars
  ccPrivateSubnetId = "${module.vpc-infra.ccPrivateSubnetId}"
  db_az = "us-east-1a"
  db_name = "ccDatabaseInstance"
  db_user_name = var.db_user_name
  db_user_password = var.db_user_password
}
...
outputs.tf
output "ccPrivateSubnetId" {
  description = "Will be used by RDS Module to set subnet_ids"
  value = "${aws_subnet.ccPrivateSubnet.id}"
}

vpc.tf
...
resource "aws_subnet" "ccPrivateSubnet" {
  vpc_id = aws_vpc.ccVPC.id
  cidr_block = var.private_subnet_cidr
}
...
rds.tf
resource "aws_db_subnet_group" "ccDBSubnetGroup" {
  subnet_ids = ["${var.ccPrivateSubnetId}"]
}
...
Using data.terraform_remote_state in general is bad practice. I would call it a very advanced feature that should only be used in extreme edge cases. If you are referencing the same state that the current Terraform template is using, then it is an absolute anti-pattern.
Instead, make the values you need to reference part of the outputs of the network module, then pass those values as inputs to the DB module.
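For completeness, the child module must also declare the variable it receives; a minimal sketch (file placement follows the tree above):
# app-db-modules/variables.tf
variable "ccPrivateSubnetId" {
  description = "ID of the private subnet, passed in from the network module"
  type = string
}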

Missing Terraform provider? What am I doing wrong? (Terraform v0.13.5)

As you can see below, I'm trying to pass a specific provider to a module, which then passes it as the main provider (aws = aws.some_profile) to a second, nested module.
on terraform plan I get the following error:
Error: missing provider module.module_a.provider["registry.terraform.io/hashicorp/aws"].some_profile
I must be doing something basic wrong, or assuming the language works in a way that it doesn't. Ideas?
File structure:
├── main.tf
├── module_a
│   ├── main.tf
│   └── module_b
│       └── main.tf
└── providers.tf
main.tf (top level):
module "module_a" {
  source = "./module_a"

  providers = {
    aws.some_profile = aws.some_profile
  }
}

main.tf (inside module_a):
module "module_b" {
  source = "./module_b"

  providers = {
    aws = aws.some_profile
  }
}

main.tf (inside module_b):
resource "null_resource" "null" {}

providers.tf:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = ">= 3.22.0"
    }
  }
}

provider "aws" {
  profile = "default"
  region = "us-west-2"
}

provider "aws" {
  alias = "some_profile"
  profile = "some_profile"
  region = "us-west-2"
}
OK, so after getting some answers on Reddit, it looks like although you are passing providers down to submodules, you still need to declare said providers in each submodule, like so:
provider "aws" { alias = "some_provider" }
And it seems like the terraform required_providers block is only required at the very top level. However, if it isn't working, you can try adding it to each submodule as well.
Hope this helps someone out.
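As a concrete sketch, this is roughly what main.tf inside module_a ends up containing once the empty "proxy" provider block is added (Terraform 0.13-era behavior; newer versions express this with configuration_aliases in required_providers):
# module_a/main.tf
# Empty proxy block: declares that this module expects an aliased
# provider configuration to be passed in by its caller.
provider "aws" {
  alias = "some_profile"
}

module "module_b" {
  source = "./module_b"

  providers = {
    aws = aws.some_profile
  }
}
module_b receives the configuration as its default aws provider, so it needs no proxy block of its own unless it also refers to the aliased name.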

Create multiple objects from one Terraform module using Terragrunt

I am using terraform via terragrunt.
I have a folder with a single terragrunt.hcl file in it. The purpose of this file is to create multiple subnetworks in GCP.
To create a subnetwork, I have a module that takes several inputs.
I want to be able to create several subnetworks in my terragrunt.hcl file.
I think the best way would be to create a list of dictionaries (or maps, as Terraform calls them) and then iterate over them.
Here is some non-working code that at least shows the intent:
# terragrunt.hcl
include {
  path = find_in_parent_folders()
}

inputs = {
  # Common tags to be assigned to all resources
  subnetworks = [
    {
      "subnetName": "subnet1-euw"
      "subNetwork": "10.2.0.0/16"
      "region": "europe-west1"
    },
    {
      "subnetName": "subnet1-usc1"
      "subNetwork": "10.3.0.0/16"
      "region": "us-central1"
    }
  ]
}

terraform {
  module "subnetworks" {
    source = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
    vpc_name = "MyVPC"
    vpc_subnetwork_name = [for network in subnetworks : network.subnetName]
    vpc_subnetwork_cidr = [for network in subnetworks : network.subNetwork]
    vpc_subnetwork_region = [for network in subnetworks : network.region]
  }
}
It seems I cannot use "module" inside the "terraform" block. Hopefully the code at least shows what I want to achieve.
For reference, the module I am calling looks like this:
# main.tf
terraform {
  # Intentionally empty. Will be filled by Terragrunt.
  backend "gcs" {}
}

resource "google_compute_subnetwork" "vpc_subnetwork" {
  name = var.vpc_subnetwork_name
  ip_cidr_range = var.vpc_subnetwork_cidr
  region = var.vpc_subnetwork_region
  network = var.vpc_name
}

# variables.tf
variable "vpc_name" {
  description = "Name of VPC"
  type = string
}

variable "vpc_subnetwork_name" {
  description = "Name of subnetwork"
  type = string
}

variable "vpc_subnetwork_cidr" {
  description = "Subnetwork CIDR"
  type = string
}

variable "vpc_subnetwork_region" {
  description = "Subnetwork region"
  type = string
}
Terragrunt does not have a loop construct. In Terragrunt, you'd use a directory hierarchy to do what you want here. For example, to achieve your goals above, something like this:
└── live
    ├── empty.yaml
    ├── euw
    │   ├── region.yaml
    │   └── vpc
    │       └── terragrunt.hcl
    ├── terragrunt.hcl
    └── usc1
        ├── region.yaml
        └── vpc
            └── terragrunt.hcl
Within live/terragrunt.hcl, you make the other yaml files available within the terragrunt configuration:
# live/terragrunt.hcl
inputs = merge(
  # Configure Terragrunt to use common vars encoded as yaml to help you keep often-repeated variables (e.g., account ID)
  # DRY. We use yamldecode to merge the maps into the inputs, as opposed to using varfiles, due to a restriction in
  # Terraform >=0.12 that all vars must be defined as variable blocks in modules. Terragrunt inputs are not affected by
  # this restriction.
  yamldecode(
    file("${get_terragrunt_dir()}/${find_in_parent_folders("region.yaml", "${path_relative_from_include()}/empty.yaml")}"),
  ),
)
In the region.yaml within each region, you simply state the region:
# live/euw/region.yaml
# These variables apply to this entire region. They are automatically pulled in using the extra_arguments
# setting in the root terraform.tfvars file's Terragrunt configuration.
region: "europe-west1"

# live/usc1/region.yaml
region: "us-central1"
Now you can refer to the region in your per-region terragrunt.hcl files as a variable:
# live/euw/vpc/terragrunt.hcl
terraform {
  source = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  vpc_subnetwork_name = "subnet1-${region}"
  vpc_subnetwork_cidr = "10.2.0.0/16"
  vpc_subnetwork_region = region
  vpc_name = "MyVPC"
}
Also:
# live/usc1/vpc/terragrunt.hcl
terraform {
  source = "github.com/MyProject/infrastructure-modules.git//vpc/subnetwork"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  vpc_subnetwork_name = "subnet1-${region}"
  vpc_subnetwork_cidr = "10.3.0.0/16"
  vpc_subnetwork_region = region
  vpc_name = "MyVPC"
}
You might find the example terragrunt repository from Gruntwork helpful.
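As an aside: inside plain Terraform (0.12.6+ for resource-level for_each), the module itself could accept the whole list and iterate over it, which is the shape the question sketches. A minimal sketch reusing the question's field names (the object type and keying choice are assumptions):
# variables.tf (sketch)
variable "subnetworks" {
  description = "List of subnetwork definitions"
  type = list(object({
    subnetName = string
    subNetwork = string
    region = string
  }))
}

# main.tf (sketch): one subnetwork per list entry, keyed by name
resource "google_compute_subnetwork" "vpc_subnetwork" {
  for_each = { for net in var.subnetworks : net.subnetName => net }

  name = each.value.subnetName
  ip_cidr_range = each.value.subNetwork
  region = each.value.region
  network = var.vpc_name
}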

Attach security group created in another app

I am creating the following for two separate applications using the same modules in Terragrunt:
LB
Instances
Security Groups
My question is: how do I reference a security group created for app1 in app2?
E.g. in app1 I can reference it as
security_groups = ["${aws_security_group.sec_group_A.id}"]
How can I refer to the same security group in app2?
resource "aws_security_group" "sec_group_A" {
name = "sec_group_A"
...
...
}
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
security_groups = ["${aws_security_group.sec_group_A.id}"]
...
...
}
In app2, you can:
data "aws_security_group" "other" {
name = "sec_group_A"
}
and then use the ID:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
security_groups = ["${data.aws_security_group.other.id}"]
...
...
}
(The caveat for using data is that you are running two separate terraform applies: one configuration creates the group, and the other configuration references it.)
I have no experience of using terragrunt, but normally I would be calling my modules from a "main.tf" file in the root of the project. An example folder structure is below
.
├── main.tf
└── modules
    ├── app1
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── app2
        ├── main.tf
        ├── outputs.tf
        └── variables.tf
My app1 outputs.tf declares a security group A output:
output "sec_group_A" {
  value = "${aws_security_group.sec_group_A}"
}
I can then use this output in my main.tf file in the root of the project. This would look something like the below:
module "app1" {
source = "./modules/app1"
...
// Pass in my variables
}
module "app2" {
source = "./modules/app2"
sec_group_A = "${module.app1.sec_group_A}"
...
//Pass in the rest of my variables
}
Finally, inside the app2 module you can use this as you would any other variable.
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
security_groups = ["${var.sec_group_A.id}"]
...
...
}
I'd read up on modules here https://www.terraform.io/docs/modules/index.html to get a better understanding of how they fit together.
Alternatively, you can grab the data from your remote state (if you have one configured), as long as sec_group_A is declared as an output in app1. See https://www.terraform.io/docs/providers/terraform/d/remote_state.html
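For the remote-state route, a minimal sketch of what app2 could contain, assuming app1 keeps its state in S3 and declares a sec_group_A output holding the group's ID (the bucket and key here are hypothetical):
data "terraform_remote_state" "app1" {
  backend = "s3"

  config = {
    bucket = "my-state-bucket" # hypothetical
    key = "app1/terraform.tfstate" # hypothetical
    region = "us-east-1"
  }
}

resource "aws_elb" "bar" {
  name = "foobar-terraform-elb"
  security_groups = ["${data.terraform_remote_state.app1.outputs.sec_group_A}"]
}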

Terraform s3 backend not used by module

I've begun working with Terraform and have totally bought into it - amazing! Having created my entire dev environment (AWS VPC, subnets, NACLs, SGs, route tables, etc.), I decided I had better turn it into reusable modules.
So now I have turned it into modules, with variables etc. My dev template now simply takes variables and uses them as inputs to the module. I end up with this:
terraform {
  backend "s3" {
    bucket = "redacted"
    key = "dev/vpc/terraform.tfstate"
    region = "eu-west-1"
    encrypt = true
    dynamodb_table = "terraform_statelock_redacted"
  }
}

provider "aws" {
  access_key = ""
  secret_key = ""
  region = "eu-west-1"
}

module "base_vpc" {
  source = "git@github.com:closed/terraform-modules.git//vpc"

  vpc_cidr = "${var.vpc_cidr}"
  vpc_region = "${var.vpc_region}"
  Environment = "${var.Environment}"
  Public-subnet-1a = "${var.Public-subnet-1a}"
  Public-subnet-1b = "${var.Public-subnet-1b}"
  Private-subnet-1a = "${var.Private-subnet-1a}"
  Private-subnet-1b = "${var.Private-subnet-1b}"
  Private-db-subnet-1a = "${var.Private-db-subnet-1a}"
  Private-db-subnet-1b = "${var.Private-db-subnet-1b}"
  Onsite-computers = "${var.Onsite-computers}"
  browse_access = "${var.browse_access}"
}
Now I have all state managed in an s3 backend, as you can see in the above configuration. I also have other state files for services/instances that are running. My problem is that now that I have turned this into a module and referenced it as above, it wants to blow away my state! I was under the impression that it would import the module and run it whilst respecting other configuration. The actual module code was copied from the original template, so nothing has changed there.
Is there a reason it is trying to blow everything away and start again? How does one manage separate states per environment in the case of using modules? I get no other errors. I have devs working on some of the servers at the moment so I'm paralysed now ha!
I guess I've misunderstood something, any help much appreciated :)
Thanks.
Edit - Using Terraform 0.9.8
OK, so I think the bit I misunderstood was the way that using modules changes the resource paths in the state file. I realised I was on the right lines while reading the Terraform docs around state migration.
I found this great blog entry to assist me with getting my head around it:
https://ryaneschinger.com/blog/terraform-state-move/
There's no comment section for me to thank that guy! Anyway, after seeing how easy it was, I output the terraform state list command to a text file for the main configuration, used PowerShell to quickly iterate over the entries, and wrote up the terraform state mv commands for the module I was moving them into. When I tried to execute the lines with this script I got the "Terraform has crashed!!!!" error, so I just cut and pasted them into my shell one by one. Proper nooby, but there were only 50 or so resources, so it was not that time consuming. SO glad I'm doing this at the dev stage rather than deciding to do it retrospectively in prod.
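A sketch of the kind of commands involved (the resource addresses below are hypothetical; the real list comes from terraform state list):
terraform state list > resources.txt

# For each listed resource, move it under the new module path, e.g.:
terraform state mv aws_vpc.main module.base_vpc.aws_vpc.main
terraform state mv aws_subnet.public_1a module.base_vpc.aws_subnet.public_1a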
So I'm sorted. Thanks for your input though JBirdVegas.
We ran into this issue and decided each environment needed a code representation we could view alongside the others when needed, i.e. to compare dev configs to QA.
So now we have a folder for dev and one for QA, and we launch Terraform from there. Each is basically a list of variables that calls modules for each component.
Here is my tree for a visual representation
$ tree terraform/
terraform/
├── api_gateway
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── database
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── dev
│   └── main.tf
├── ec2
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── kms
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── network
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── qa
│   └── main.tf
└── sns
    ├── main.tf
    ├── output.tf
    └── variables.tf
dev/main.tf and qa/main.tf import the modules provided by the other folders, supplying environment-specific configurations for each module.
EDIT: here is a sanitized version of my dev/main.tf
provider "aws" {
region = "us-east-1"
profile = "blah-dev"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
}
terraform {
backend "s3" {
bucket = "sweet-dev-bucket"
key = "sweet/dev.terraform.tfstate"
region = "us-east-1"
profile = "blah-dev"
}
}
variable "aws_account" {
default = "000000000000"
}
variable "env" {
default = "dev"
}
variable "aws_region" {
default = "us-east-1"
}
variable "tag_product" {
default = "sweet"
}
variable "tag_business_region" {
default = "east"
}
variable "tag_business_unit" {
default = "my-department"
}
variable "tag_client" {
default = "some-client"
}
module build_env {
source = "../datasources"
}
module "kms" {
source = "../kms"
tag_client = "${var.tag_client}"
tag_business_region = "${var.tag_business_region}"
tag_business_unit = "${var.tag_business_unit}"
tag_product = "${var.tag_product}"
}
module "network" {
source = "../network"
vpc_id = "vpc-000a0000"
subnet_external_1B = "subnet-000a0000"
subnet_external_1D = "subnet-000a0001"
subnet_db_1A = "subnet-000a0002"
subnet_db_1B = "subnet-000a0003"
}
module "database" {
source = "../database"
env = "dev"
vpc_id = "${module.network.vpc_id}"
subnet_external_1B = "${module.network.subnet_external_1B}"
subnet_external_1D = "${module.network.subnet_external_1D}"
subnet_db_1A = "${module.network.subnet_db_1A}"
subnet_db_1B = "${module.network.subnet_db_1B}"
database_instance_size = "db.t2.small"
database_name = "my-${var.tag_product}-db"
database_user_name = "${var.tag_product}"
database_passwd = "${module.kms.passwd_plaintext}"
database_identifier = "${var.tag_product}-rds-database"
database_max_connections = "150"
}
module sns {
source = "../sns"
aws_account = "${var.aws_account}"
}
module "api_gateway" {
source = "../api_gateway"
env = "${var.env}"
vpc_id = "${module.network.vpc_id}"
domain_name = "${var.tag_product}-dev.example.com"
dev_certificate_arn = "arn:aws:acm:${var.aws_region}:${var.aws_account}:certificate/abcd0000-a000-a000-a000-1234567890ab"
aws_account = "${var.aws_account}"
aws_region = "${var.aws_region}"
tag_client = "${var.tag_client}"
tag_business_unit = "${var.tag_business_unit}"
tag_product = "${var.tag_product}"
tag_business_region = "${var.tag_business_region}"
autoscaling_events_sns_topic_arn = "${module.sns.sns_topic_arn}"
db_subnet_id_1 = "${module.network.subnet_db_1A}"
db_subnet_id_2 = "${module.network.subnet_db_1B}"
ec2_role = "${var.tag_product}-assume-iam-role"
kms_key_arn = "${module.kms.kms_arn}"
passwd_cypher_text = "${module.kms.passwd_cyphertext}"
}
module "ec2" {
source = "../ec2"
s3_bucket = "${var.tag_product}_dev_bucket"
aws_region = "${var.aws_region}"
env = "${var.env}"
ec2_key_name = "my-${var.tag_product}-key"
ec2_instance_type = "t2.micro"
aws_account = "${var.aws_account}"
vpc_id = "${module.network.vpc_id}"
binary_path = "${module.build_env.binary_path}"
binary_hash = "${module.build_env.binary_hash}"
git_hash_short = "${module.build_env.git_hash_short}"
private_key = "${format("%s/keys/%s-%s.pem", path.root, var.tag_product, var.env)}"
cloudfront_domain = "${module.api_gateway.cloudfront_domain}"
api_gateway_domain = "${module.api_gateway.api_gateway_cname}"
tag_client = "${var.tag_client}"
tag_business_region = "${var.tag_business_region}"
tag_product = "${var.tag_product}"
tag_business_unit = "${var.tag_business_unit}"
auto_scale_desired_capacity = "1"
auto_scale_max = "2"
auto_scale_min = "1"
autoscaling_events_sns_topic = "${module.sns.sns_topic_arn}"
subnet_external_b = "${module.network.subnet_external_b}"
subnet_external_a = "${module.network.subnet_external_a}"
kms_key_arn = "${module.kms.kms_arn}"
passwd_cypher_text = "${module.kms.passwd_cyphertext}"
}
Then my QA is basically the same (modify a couple of vars at the top); the most important difference is at the very top of qa/main.tf, mostly these vars:
provider "aws" {
region = "us-east-1"
profile = "blah-qa"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
}
terraform {
backend "s3" {
bucket = "sweet-qa-bucket"
key = "sweet/qa.terraform.tfstate"
region = "us-east-1"
profile = "blah-qa"
}
}
variable "aws_account" {
default = "000000000001"
}
variable "env" {
default = "qa"
}
Using this, our dev and qa backends have different state files in different buckets in different AWS accounts. I don't know what your requirements are, but this has satisfied most projects I've worked on; in fact, we are expanding our usage of this model across my org.
