Terraform depends_on with modules - azure

I'm new to Terraform and I created custom Azure policies in a module structure.
Each policy is its own module.
One of the modules I created enables diagnostics logs for any new Azure resource.
But I need a storage account for that (before enabling the diagnostics settings). How can I implement depends_on, or is there another method?
I want to create the storage account first and then the diagnostics settings module.
Should it go in the main.tf (where all the other modules are called) or inside the resource (module)?
Thanks for the help!! :)
The code below represents the main.tf file:
//calling the create storage account name
module "createstorageaccount" {
  source = "./modules/module_create_storage_account"

  depends_on = [
    "module_enable_diagnostics_logs"
  ]
}
This one represents the create storage account module:
resource "azurerm_resource_group" "management" {
name = "management-rg"
location = "West Europe"
}
resource "azurerm_storage_account" "test" {
name = "diagnostics${azurerm_resource_group.management.name}"
resource_group_name = "${azurerm_resource_group.management.name}"
location = "${azurerm_resource_group.management.location}"
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "diagnostics"
}
}
depends_on = [
"module_enable_diagnostics_logs"
]

In most cases, the necessary dependencies just occur automatically as a result of your references. If the configuration for one resource refers directly or indirectly to another, Terraform automatically infers the dependency between them without the need for explicit depends_on.
This works because module variables and outputs are also nodes in the dependency graph: if a child module resource refers to var.foo then it indirectly depends on anything that the value of that variable depends on.
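For example, applied to your case: if the diagnostics module accepted the storage account as an input, a reference like the one below would create the ordering implicitly. This is only a sketch; the storage_account_id output and variable names are hypothetical, assuming your storage account module exports the account's id and your diagnostics module accepts it.

module "createstorageaccount" {
  source = "./modules/module_create_storage_account"
}

module "enable_diagnostics_logs" {
  source = "./modules/module_enable_diagnostics_logs"

  # Referring to the other module's output is enough for
  # Terraform to order the two modules correctly.
  storage_account_id = module.createstorageaccount.storage_account_id
}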
For the rare situation where automatic dependency detection is insufficient, you can still exploit the fact that module variables and outputs are nodes in the dependency graph to create indirect explicit dependencies, like this:
variable "storage_account_depends_on" {
# the value doesn't matter; we're just using this variable
# to propagate dependencies.
type = any
default = []
}
resource "azurerm_storage_account" "test" {
name = "diagnostics${azurerm_resource_group.management.name}"
resource_group_name = "${azurerm_resource_group.management.name}"
location = "${azurerm_resource_group.management.location}"
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "diagnostics"
}
# This resource depends on whatever the variable
# depends on, indirectly. This is the same
# as using var.storage_account_depends_on in
# an expression above, but for situations where
# we don't actually need the value.
depends_on = [var.storage_account_depends_on]
}
When you call this module, you can set storage_account_depends_on to any expression that includes the objects you want to ensure are created before the storage account:
module "diagnostic_logs" {
source = "./modules/diagnostic_logs"
}
module "storage_account" {
source = "./modules/storage_account"
storage_account_depends_on = [module.diagnostic_logs.logging]
}
Then in your diagnostic_logs module you can configure indirect dependencies for the logging output to complete the dependency links between the modules:
output "logging" {
# Again, the value is not important because we're just
# using this for its dependencies.
value = {}
# Anything that refers to this output must wait until
# the actions for azurerm_monitor_diagnostic_setting.example
# to have completed first.
depends_on = [azurerm_monitor_diagnostic_setting.example]
}
If your relationships can be expressed by passing actual values around, such as by having an output that includes the id, I'd recommend preferring that approach because it leads to a configuration that is easier to follow. But in rare situations where there are relationships between resources that cannot be modeled as data flow, you can use outputs and variables to propagate explicit dependencies between modules too.
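For example (a sketch with illustrative names, not taken from your configuration), the storage account module could export the account's id and the diagnostics module could consume it as an input variable; the reference then both carries the value you need and implies the ordering:

# In the storage account module:
output "storage_account_id" {
  value = azurerm_storage_account.test.id
}

# In the diagnostics module, which declares variable "storage_account_id"
# and variable "target_resource_id" (both hypothetical here):
resource "azurerm_monitor_diagnostic_setting" "example" {
  name               = "diagnostics"
  target_resource_id = var.target_resource_id
  storage_account_id = var.storage_account_id

  # log/metric blocks omitted for brevity
}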

Module dependencies are now supported in Terraform 0.13, which is currently at the release candidate stage.
resource "aws_iam_policy_attachment" "example" {
name = "example"
roles = [aws_iam_role.example.name]
policy_arn = aws_iam_policy.example.arn
}
module "uses-role" {
# ...
depends_on = [aws_iam_policy_attachment.example]
}
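Applied to the original question, a sketch of the same pattern (module names adapted from the question; requires Terraform 0.13 or later) would look like this:

module "createstorageaccount" {
  source = "./modules/module_create_storage_account"
}

module "enable_diagnostics_logs" {
  source = "./modules/module_enable_diagnostics_logs"

  # Everything in this module waits for the whole
  # storage account module to be created first.
  depends_on = [module.createstorageaccount]
}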

Using depends_on at the resource level is different from using depends_on at the inter-module level. I found a very simple way to do it at the module level:
module "eks" {
source = "../modules/eks"
vpc_id = module.vpc.vpc_id
vpc_cidr = [module.vpc.vpc_cidr_block]
public_subnets = flatten([module.vpc.public_subnets])
private_subnets_id = flatten([module.vpc.private_subnets])
depends_on = [module.vpc]
}
I created the dependency directly on the module; as simple as that, no complex relationships required.

Related

Using terraform to create multiple resources based on a set of variables

I have a set of variables in terraform.tfvars:
resource_groups = {
  cow = {
    name     = "Cow"
    location = "eastus"
  },
  horse = {
    name     = "Horse"
    location = "eastus"
  },
  chicken = {
    name     = "Chicken"
    location = "westus2"
  },
}
my main.tf looks like this:
...
module "myapp" {
source = "./modules/myapp"
resource_groups = var.resource_groups
}
variable "resource_groups" {}
...
./modules/myapp.main.tf looks like this:
module "resource_group" {
source = "../myapp.resource_group"
resource_groups = var.resource_groups
for_each = {
for key, value in try(var.resource_groups, {}) : key => value
if try(value.reuse, false) == false
}
}
variable "resource_groups" {}
and ../myapp.resource_group looks like this:
resource "azurerm_resource_group" "resource_group" {
name = var.resource_groups.cow.name
location = var.resource_groups.cow.location
}
variable "resource_groups" {}
My hope is that after terraform plan I would see three new RGs set for addition. In fact I do get three new ones, but they all use the name and location of the cow RG, because I specified var.resource_groups.cow.name. The problem is I have tried all kinds of different iterators in place of .cow. and I can't get Terraform to use the other variables in the terraform.tfvars file. I have tried square brackets, asterisks, and other wildcards. I am stuck.
I am looking to define a resource in one place and then use that to create multiple instances of that resource per the map of variables.
Guidance would be much appreciated.
Thanks.
Bill
For this situation you'll need to decide whether your module represents one resource group or whether it represents multiple resource groups. For a module that only contains one resource anyway that decision is not particularly important, but I assume you're factoring this out into a separate module because there is something more to this than just the single resource group resource, and so you can decide between these two based on what else this module represents: do you want to repeat everything in this module, or just the resource group resource?
If you need the module to represent a single resource group then you should change its input variables to take the data about only a single resource group, and then pass the current resource group's data in your calling module block.
Inside the module:
variable "resource_group" {
type = object({
name = string
location = string
})
}
resource "azurerm_resource_group" "resource_group" {
name = var.resource_group.name
location = var.resource_group.location
}
When calling the module:
variable "resource_groups" {
type = map(
object({
name = string
location = string
})
)
}
module "resource_group" {
source = "../myapp.resource_group"
for_each = var.resource_groups
# each.value is the value of the current
# element of var.resource_groups, and
# so it's just a single resource group.
resource_group = each.value
}
With this strategy, you will declare resource instances with the following addresses, showing that the repetition is happening at the level of the whole module rather than the individual resources inside it:
module.resource_group["cow"].azurerm_resource_group.resource_group
module.resource_group["horse"].azurerm_resource_group.resource_group
module.resource_group["chicken"].azurerm_resource_group.resource_group
If you need the module to represent the full set of resource groups then the module would take the full map of resource groups as an input variable instead of using for_each on the module block. The for_each argument will then belong to the nested resource instead.
Inside the module:
variable "resource_groups" {
type = map(
object({
name = string
location = string
})
)
}
resource "azurerm_resource_group" "resource_group" {
for_each = var.resource_groups
name = each.value.name
location = each.value.location
}
When calling the module:
variable "resource_groups" {
type = map(
object({
name = string
location = string
})
)
}
module "resource_group" {
source = "../myapp.resource_group"
# NOTE: No for_each here, because we need only
# one instance of this module which will itself
# then contain multiple instances of the resource.
resource_group = var.resource_groups
}
With this strategy, you will declare resource instances with the following addresses, showing that there's only one instance of the module but multiple instances of the resource inside it:
module.resource_group.azurerm_resource_group.resource_group["cow"]
module.resource_group.azurerm_resource_group.resource_group["horse"]
module.resource_group.azurerm_resource_group.resource_group["chicken"]
It's not clear from the information you shared which of these strategies would be more appropriate in your case, because you've described this module as if it is just a thin wrapper around the azurerm_resource_group resource type and therefore it isn't really clear what this module represents, and why it's helpful in comparison to just writing an inline resource "azurerm_resource_group" block in the root module.
When thinking about which of the above designs is most appropriate for your use-case, I'd suggest considering the advice in When to Write a Module in the Terraform documentation. It can be okay to write a module that contains only a single resource block, but that's typically for more complicated resource types where the module hard-codes some local conventions so that they don't need to be re-specified throughout an organization's Terraform configurations.
If you are just passing the values through directly to the resource arguments with no additional transformation and no additional hard-coded settings then that would suggest that this module is not useful, and that it would be simpler to write the resource block inline instead.
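For completeness, a minimal sketch of that inline alternative in the root module (no wrapper module at all), reusing the same resource_groups variable:

resource "azurerm_resource_group" "resource_group" {
  for_each = var.resource_groups

  name     = each.value.name
  location = each.value.location
}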

Terraform Invalid count argument that depends on another resource

I'm getting the following error when trying to do a plan or an apply on a terraform script.
Error: Invalid count argument
on main.tf line 157, in resource "azurerm_sql_firewall_rule" "sqldatabase_onetimeaccess_firewall_rule":
157: count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
I understand this is falling over because it doesn't know the count for the number of firewall rules to create until the app_service is created. I can just run the apply with an argument of -target=azurerm_app_service.app_service then run another apply after the app_service is created.
However, this isn't great for our CI process, if we want to create a whole new environment from our terraform scripts we'd like to just tell terraform to just go build it without having to tell it each target to build in order.
Is there a way in terraform to just say go build everything that is needed in order without having to add targets?
Also below is an example terraform script that gives the above error:
provider "azurerm" {
version = "=1.38.0"
}
resource "azurerm_resource_group" "resourcegroup" {
name = "rg-stackoverflow60187000"
location = "West Europe"
}
resource "azurerm_app_service_plan" "service_plan" {
name = "plan-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
kind = "Linux"
reserved = true
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_app_service" "app_service" {
name = "app-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
app_service_plan_id = azurerm_app_service_plan.service_plan.id
site_config {
always_on = true
app_command_line = ""
linux_fx_version = "DOCKER|nginxdemos/hello"
}
app_settings = {
"WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
}
}
resource "azurerm_sql_server" "sql_server" {
name = "mysqlserver-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_sql_database" "sqldatabase" {
name = "sqldatabase-stackoverflow60187000"
resource_group_name = azurerm_sql_server.sql_server.resource_group_name
location = azurerm_sql_server.sql_server.location
server_name = azurerm_sql_server.sql_server.name
edition = "Standard"
requested_service_objective_name = "S1"
}
resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
name = "App Service Access (${count.index})"
resource_group_name = azurerm_sql_database.sqldatabase.resource_group_name
server_name = azurerm_sql_database.sqldatabase.name
start_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
end_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
}
To make this work without the -target workaround described in the error message requires reframing the problem in terms of values that Terraform can know only from the configuration, rather than values that are generated by the providers at apply time.
The trick then would be to figure out what values in your configuration the Azure API is using to decide how many IP addresses to return, and to rely on those instead. I don't know Azure well enough to give you a specific answer, but I see on Inbound/Outbound IP addresses that this seems to be an operational detail of Azure App Services rather than something you can control yourself, and so unfortunately this problem may not be solvable.
If there really is no way to predict from configuration how many addresses will be in possible_outbound_ip_addresses, the alternative is to split your configuration into two parts where one depends on the other. The first would configure your App Service and anything else that makes sense to manage along with it, and then the second might use the azurerm_app_service data source to retrieve the data about the assumed-already-existing app service and make firewall rules based on it.
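A rough sketch of what that second configuration could look like, assuming the App Service from the first configuration already exists (resource names copied from the example above; not tested):

data "azurerm_app_service" "app_service" {
  name                = "app-stackoverflow60187000"
  resource_group_name = "rg-stackoverflow60187000"
}

resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
  # The data source is read during plan, so the count is known up front.
  count = length(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses))

  name                = "App Service Access (${count.index})"
  resource_group_name = "rg-stackoverflow60187000"
  server_name         = "mysqlserver-stackoverflow60187000"
  start_ip_address    = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
  end_ip_address      = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
}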
Either way you'll need to run Terraform twice to make the necessary data available. An advantage of using -target is that you only need to do a funny workflow once during initial bootstrapping, and so you could potentially do the initial create outside of CI to get the objects initially created and then use CI for ongoing changes. As long as the app service object is never replaced, subsequent Terraform plans will already know how many IP addresses are set and so should be able to complete as normal.

Terraform using variables and other resources in modules

I am refactoring my Terraform code to use modules, but I am getting a lot of "variable/resource not defined/not found" errors.
I found that I need to move my variable [name] {} blocks into the module. Is it impossible for modules to reference a parent's or another module's variables? E.g. sometimes I may have some reused variables like NODE_ENV.
Next, after this, I find that it says missing required argument. I am just running terraform init because Terraform says I need to. I tried adding -var-file but it does not seem to work for modules? How should I resolve this?
There are also a lot of
resource 'aws_ecs_service.xxx-ecs-service' config: unknown module referenced: ecs
errors ... it appears I cannot reference resources the usual way anymore?
# ecs/ecs.tf
resource "aws_ecs_task_definition" "xxx-ecs-task" {
family = "${var.family}"
container_definitions = "${data.template_file.container_defination.rendered}"
task_role_arn = "${var.role_arn}"
execution_role_arn = "${var.role_arn}"
network_mode = "awsvpc"
cpu = "${var.cpu}"
memory = "${var.memory}"
requires_compatibilities = ["FARGATE"]
tags = "${var.tags}"
}
resource "aws_ecs_service" "xxx-ecs-service" {
name = "${var.service_name}"
cluster = "${var.ecs_cluster}"
task_definition = "${module.ecs.aws_ecs_task_definition.pinfare-ecs-task.arn}"
}
For the task_definition, I tried adding module.ecs since ecs is the name of my module:
module "ecs" {
source = "./ecs"
name = "ecs"
}
To avoid confusion, I would recommend complete directory separation.
module/ecs/ecs.tf
env/dev/myecs.tf
The below is for illustration purposes and I have not tested it myself.
Configurations
module/ecs/ecs.tf
resource "aws_ecs_task_definition" "xxx-ecs-task" {
family = "${var.family}"
container_definitions = "${data.template_file.container_defination.rendered}"
task_role_arn = "${var.role_arn}"
execution_role_arn = "${var.role_arn}"
network_mode = "awsvpc"
cpu = "${var.cpu}"
memory = "${var.memory}"
requires_compatibilities = ["FARGATE"]
tags = "${var.tags}"
}
resource "aws_ecs_service" "xxx-ecs-service" {
name = "${var.service_name}"
cluster = "${var.ecs_cluster}"
task_definition = "${aws_ecs_task_definition.xxx-ecs-task.arn}"
}
module/ecs/variables.tf
Define all the variables to be passed from the module user side (env/dev/myecs.tf).
variable "family" {
}
variable "role_arn" {
}
...
env/dev/myecs.tf
module "ecs" {
source = "../../module/ecs"
family = "value of family here"
role_arn = "value of IAM role arn"
...
}
module path
Please be aware of ${path.module}, as described in Paths and Embedded Files, when specifying a path to a file within a module. Confusingly, both env/dev/ and module/ecs are modules in Terraform, where env/dev/ is the root module.
Creating Modules
Modules in Terraform are folders with Terraform files. In fact, when you run terraform apply, the current working directory holding the Terraform files you're applying comprises what is called the root module. That is itself a valid module.
When you specify a path, it is basically a path within the root module. Therefore, within a called module (module/ecs), prefix paths with ${path.module}/ so that the called module will not try to load files inside env/dev.
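For example, a hedged sketch of how that applies to the template referenced in the ECS task definition above (the template file name here is illustrative):

# module/ecs/ecs.tf
data "template_file" "container_defination" {
  # ${path.module} resolves to module/ecs, not to the calling
  # directory env/dev, so the file is found inside the module.
  template = "${file("${path.module}/container-definition.json.tpl")}"

  vars = {
    service_name = "${var.service_name}"
  }
}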
Terraform Registry
I would recommend having a look at Terraform Registry, for instance Security Group module to get used to how to use Terraform module. Links to GitHub are there too.

How to create provider modules that support multiple AWS regions?

We are trying to create Terraform modules for the below activities in AWS, so that we can use them wherever required.
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules we have to define the provider in all of the above modules. So we decided to create one more module for the provider so that we can call that provider module in the other modules (VPC, Subnet, etc.).
The issue with the above approach is that it is not taking the provider value, and instead asks for user input for the region.
Terraform configuration is as follow:
$HOME/modules/providers/main.tf
provider "aws" {
region = "${var.region}"
}
$HOME/modules/providers/variables.tf
variable "region" {}
$HOME/modules/vpc/main.tf
module "provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
}
$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}
$HOME/main.tf
module "dev_vpc" {
source = "modules/vpc"
vpc_cidr = "${var.vpc_cidr}"
environment = "${var.environment}"
region = "${var.region}"
}
$HOME/variables.tf
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
variable "environment" {
default = "dev"
}
variable "region" {
default = "ap-south-1"
}
Then, when running the terraform plan command at the $HOME/ location, it is not taking the provider value and instead asks for user input for the region.
I need help from the Terraform experts on what approach we should follow to address the below concerns:
Wrap provider in a Terraform module
Handle multiple region use case using provider module or any other way.
I knew a long time back that it wasn't possible to do this because Terraform built a graph that required a provider for any resource before it included any dependencies, and it wasn't previously possible to force a dependency on a module.
However since Terraform 0.8 it is now possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
And running terraform graph | dot -Tpng > graph.png against it, it looks like the graph doesn't change at all from when the explicit dependency isn't there.
This seems like it might be a potential bug in the graph building stage in Terraform that should probably be raised as an issue but I don't know the core code base well enough to spot where the change needs to be.
For our usage we use symlinks heavily in our Terraform code base, some of which is historic from before Terraform supported other ways of doing things but could work for you here.
We simply define the provider in a single .tf file (such as environment.tf) along with any other generic config needed for every place you would ever run Terraform (ie not at a module level) and then symlink this into each location. That allows us to define the provider in a single place with overridable variables if necessary.
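As a sketch (file name and contents illustrative), such a shared environment.tf might contain nothing more than the provider and its region variable:

# environment.tf (symlinked into every directory you run Terraform from)
provider "aws" {
  region = "${var.region}"
}

variable "region" {
  default = "ap-south-1"
}

Each stack directory then gets the same file via something like ln -s ../environment.tf environment.tf, so the provider is defined once but present everywhere Terraform is run.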
Step 1
Add provider aliases in the main.tf file where you are going to execute terraform plan:
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define "aws" as providers inside your module. ( source = ../../../terraform_modules/lambda")
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  .
  .
  .
  .
  .
  .
  .
}
Note: Terraform version v1.0.5 as of now.
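If the module itself needs to distinguish between several regions (rather than just receiving a single provider from the caller), newer Terraform versions (0.15+) also let the module declare the aliases it expects via configuration_aliases. A hedged sketch:

terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.useast1]
    }
  }
}

resource "aws_lambda_function" "lambda" {
  # Explicitly use the aliased (us-east-1) provider passed in by the caller.
  provider      = aws.useast1
  function_name = "blablabla"
  # ...
}

The calling module block then maps its own aliased provider onto that name, e.g. providers = { aws.useast1 = aws.useast1 }.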

Terraform use case to create multiple almost identical copies of infrastructure

I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example you have multiple business units inside a big organization, and you want to build out the same basic networks. Or you want an easy way for a developer to spin up the stack that he's working on. The only difference between "tf apply" invocations is the variable BUSINESS_UNIT, for example, which is passed in as an environment variable.
Is anyone else using a system like this, and if so, how do you manage the state files ?
You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it.
Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
To create the infrastructure for multiple business units, you could use the same module multiple times:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
}
If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf):
variable "business_unit_name" {
description = "The name of the business unit"
}
Now you can set this variable to a different value each time you use the module:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
business_unit_name = "a"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
business_unit_name = "b"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
business_unit_name = "c"
}
For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.
There are two ways of doing this that jump to mind.
Firstly, you could go down the route of using the same Terraform configuration folder that you apply and simply pass in a variable when running Terraform (either via the command line or through environment variables). You'd also want to have the same wrapper script that calls Terraform to configure your state settings to make them differ.
This might end up with something like this:
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
Which creates an EC2 instance and an RDS instance. You would then call that with something like this:
#!/bin/bash

if [ "$#" -ne 1 ]; then
  echo "Illegal number of parameters - specify business unit as positional parameter"
  exit 1
fi

business_unit=$1

terraform remote config -backend="s3" \
                        -backend-config="bucket=${business_unit}" \
                        -backend-config="key=state"

terraform remote pull
terraform apply -var "BUSINESS_UNIT=${business_unit}"
terraform remote push
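Note that the terraform remote command was removed in later Terraform versions in favour of backend blocks. A rough present-day sketch of the same idea would declare a partial backend and pass the per-unit settings at init time (terraform init -backend-config="bucket=${business_unit}" -backend-config="key=state"):

terraform {
  backend "s3" {
    # bucket and key are supplied per business unit via
    # -backend-config arguments to terraform init.
  }
}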
As an alternative route you might want to consider using modules to wrap your Terraform configuration.
So instead you might have something that now looks like:
web-instance/main.tf
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
db-instance/main.tf
variable "BUSINESS_UNIT" {}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
And then you might have different folders that call these modules per business unit:
business-unit-1/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-1" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
and
business-unit-2/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-2" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
You still need a wrapper script to manage state configuration as before but going this route enables you to provide a rough template in your modules and then hard code certain extra configuration by business unit such as the instance size or the number of instances that are built for them.
This is a rather popular use case. To achieve this you can let developers pass a variable from the command line or from a tfvars file into the resource to make the different resources unique:
main.tf:
resource "aws_db_instance" "db" {
identifier = "${var.BUSINESS_UNIT}"
# ... read more in docs
}
$ terraform apply -var 'BUSINESS_UNIT=unit_name'
PS: We do this often to provision infrastructure for a specific git branch name, and since all resources are identifiable and located in separate tfstate files, we can safely destroy them when we no longer need them.
