Conditional creation of parent/child resources - terraform

I have a Terraform parent resource that gets created conditionally, using the count meta-argument. This works fine. However, if the parent resource doesn't get created because count is set to 0, and it has dependent child resources, Terraform will fail. Is there a practical way to tell Terraform to skip the child resources if the parent doesn't get created? The only way I can think of is to put a count condition on each resource, which seems cumbersome.
Something like this:
create_dev_compartment = 0
create_dev_subnet *skip creation*
create_dev_instance *skip creation*
create_mgt_compartment = 1
create_mgt_subnet *create resource*
create_mgt_instance *create resource*

The Terraform documentation has a section Chaining for_each between resources which describes declaring chains of resources that have the same (or derived) for_each expressions so that they can all repeat based on the same source information.
The documentation doesn't include an explicit example of the equivalent pattern for count, but it follows a similar principle: the count expression for the downstream resource will derive from the value representing the upstream resource.
Since you didn't include any Terraform code I can only show a contrived example, but here's the general idea:
variable "manage_network" {
type = bool
}
resource "compartment" "example" {
count = var.manage_network ? 1 : 0
}
resource "subnet" "example" {
count = length(compartment.example)
compartment_id = compartment.example[count.index].id
}
resource "instance" "example" {
count = length(subnet.example)
subnet_id = subnet.example[count.index].id
}
In the case of chained for_each, the full object representing the corresponding upstream resource is temporarily available as each.value inside the downstream resource block. count can't carry values along with it in the same way, so the equivalent is to refer to the upstream resource directly and then index it with count.index, which exploits the fact that these resources all have the same count value and will thus all have the same indices. Currently the only possible index will be zero, because you have a maximum count of 1, but if you change count in future to specify two or more instances then the downstream resources will all grow in the same way, creating several correlated instances all at once.
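For example (a rough sketch with a made-up variable name, not from your configuration), driving the whole chain from a single number would look like this:
variable "compartment_count" {
  type    = number
  default = 2
}

resource "compartment" "example" {
  count = var.compartment_count
}

resource "subnet" "example" {
  # One subnet per compartment, correlated by index.
  count          = length(compartment.example)
  compartment_id = compartment.example[count.index].id
}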

Related

Is there a condition in terraform same as CloudFormation?

I see people using count to block resource creation in terraform. I want to create some resources if a condition is set to true. Is there such a thing same as in CloudFormation?
You answered it yourself: the most similar thing is count.
You can combine it with a conditional expression, like:
resource "x" "y" {
  count = var.tag == "to_deploy" ? 1 : 0
}
This is just a trivial example; you can put any expression there, including function calls:
count = max(var.array...) >= 3 ? 1 : 0
If you need a condition on something more complex, consider a locals block where you do all the evaluation you need, and then use the resulting boolean (or whatever value you want) in the conditional expression.
I would like to help you more, but I would need to know your specific case and which conditions you have in mind.
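For the locals approach mentioned above, a rough sketch (the names here are just illustrative) might be:
locals {
  # Centralise the decision so several resources can share it.
  deploy_it = var.tag == "to_deploy" && max(var.array...) >= 3
}

resource "x" "y" {
  count = local.deploy_it ? 1 : 0
}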
In CloudFormation a "condition" is a top-level object type alongside resources, outputs, mappings, etc.
The Terraform language takes a slightly more general approach of just having values of various data types, combining and transforming them using expressions. Therefore there isn't a concept exactly equivalent to CloudFormation's "conditions", but you can achieve a similar effect in other ways using Terraform.
For example, if you want to encode the decision rule in only a single place and then refer to it many times then you can define a Local Value of boolean type and then refer to that from multiple resource blocks. A local value of boolean type is essentially equivalent to a condition object in CloudFormation. The CloudFormation documentation page you linked to has, at the time of writing, an example titled "Simple condition" and the following is a roughly-equivalent version of that example in the Terraform language:
variable "environment_type" {
type = string
validation {
condition = contains(["prod", "test"], var.environment_type)
error_message = "Must be either 'prod' or 'test'."
}
}
locals {
create_prod_resources = (var.environment_type == "prod")
}
resource "aws_instance" "example" {
ami = "ami-0ff8a91507f77f867"
instance_type = "..."
}
resource "aws_ebs_volume" "example" {
count = local.create_prod_resources ? 1 : 0
availability_zone = aws_instance.example.availability_zone
}
resource "aws_volume_attachment" "example" {
count = local.create_prod_resources ? 1 : 0
volume_id = aws_ebs_volume.example[count.index].id
instance_id = aws_instance.example.id
device = "/dev/sdh"
}
Two different resource blocks can both refer to local.create_prod_resources, in the same way that the two resources MountPoint and NewVolume can refer to the shared condition CreateProdResources in the CloudFormation example.

How to solve for_each + "Terraform cannot predict how many instances will be created" issue?

I am trying to create a GCP project with this:
module "project-factory" {
source = "terraform-google-modules/project-factory/google"
version = "11.2.3"
name = var.project_name
random_project_id = "true"
org_id = var.organization_id
folder_id = var.folder_id
billing_account = var.billing_account
activate_apis = [
"iam.googleapis.com",
"run.googleapis.com"
]
}
After that, I am trying to create a service account, like so:
module "service_accounts" {
source = "terraform-google-modules/service-accounts/google"
version = "4.0.3"
project_id = module.project-factory.project_id
generate_keys = "true"
names = ["backend-runner"]
project_roles = [
"${module.project-factory.project_id}=>roles/cloudsql.client",
"${module.project-factory.project_id}=>roles/pubsub.publisher"
]
}
To be honest, I am fairly new to Terraform. I have read a few answers on the topic (this and this) but I am unable to understand how that would apply here.
I am getting the error:
│ Error: Invalid for_each argument
│
│ on .terraform/modules/pubsub-exporter-service-account/main.tf line 47, in resource "google_project_iam_member" "project-roles":
│ 47: for_each = local.project_roles_map_data
│ ├────────────────
│ │ local.project_roles_map_data will be known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the
│ -target argument to first apply only the resources that the for_each depends on.
Looking forward to learning more about Terraform through this challenge.
With only parts of the configuration visible here I'm guessing a little bit, but let's see. You mentioned that you'd like to learn more about Terraform as part of this exercise, so I'm going to go into a lot of detail about the chain here to explain why I'm recommending what I'm going to recommend, though you can skip to the end if you find this extra detail uninteresting.
We'll start with that first module's definition of its project_id output value:
output "project_id" {
value = module.project-factory.project_id
}
module.project-factory here is referring to a nested module call, so we need to look one level deeper in the nested module terraform-google-modules/project-factory/google//modules/core_project_factory:
output "project_id" {
value = module.project_services.project_id
depends_on = [
module.project_services,
google_project.main,
google_compute_shared_vpc_service_project.shared_vpc_attachment,
google_compute_shared_vpc_host_project.shared_vpc_host,
]
}
Another nested module call! 😬 That one declares its project_id like this:
output "project_id" {
description = "The GCP project you want to enable APIs on"
value = element(concat([for v in google_project_service.project_services : v.project], [var.project_id]), 0)
}
Phew! 😅 Finally an actual resource. This expression in this case seems to be taking the project attribute of a google_project_service resource instance, or potentially taking it from var.project_id if that resource was disabled in this instance of the module. Let's have a look at the google_project_service.project_services definition:
resource "google_project_service" "project_services" {
for_each = local.services
project = var.project_id
service = each.value
disable_on_destroy = var.disable_services_on_destroy
disable_dependent_services = var.disable_dependent_services
}
project here is set to var.project_id, so it seems like either way this innermost project_id output just reflects back the value of the project_id input variable, so we need to jump back up one level and look at the module call to this module to see what that was set to:
module "project_services" {
source = "../project_services"
project_id = google_project.main.project_id
activate_apis = local.activate_apis
activate_api_identities = var.activate_api_identities
disable_services_on_destroy = var.disable_services_on_destroy
disable_dependent_services = var.disable_dependent_services
}
project_id is set to the project_id attribute of google_project.main:
resource "google_project" "main" {
name = var.name
project_id = local.temp_project_id
org_id = local.project_org_id
folder_id = local.project_folder_id
billing_account = var.billing_account
auto_create_network = var.auto_create_network
labels = var.labels
}
project_id here is set to local.temp_project_id, which is declared further up in the same file:
temp_project_id = var.random_project_id ? format(
  "%s-%s",
  local.base_project_id,
  random_id.random_project_id_suffix.hex,
) : local.base_project_id
This expression includes a reference to random_id.random_project_id_suffix.hex, and .hex is a result attribute from random_id, and so its value won't be known until apply time due to how that random_id resource type is implemented. (It generates a random value during the apply step and saves it in the state so it'll stay consistent on future runs.)
This means that (after all of this indirection) module.project-factory.project_id in your module is not a value defined statically in the configuration, and might instead be decided dynamically during the apply step. That means it's not an appropriate value to use as part of the instance key of a resource, and thus not appropriate to use as a key in a for_each map.
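As a minimal illustration of that rule (not your configuration), the following fails during planning with the same "Invalid for_each argument" error, because the key depends on an apply-time value:
resource "random_id" "example" {
  byte_length = 4
}

resource "null_resource" "example" {
  # random_id.example.hex is only decided during apply, so Terraform
  # cannot predict the keys of this map while planning.
  for_each = {
    "project-${random_id.example.hex}" = true
  }
}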
Unfortunately the use of for_each here is hidden inside this other module terraform-google-modules/service-accounts/google, and so we'll need to have a look at that one too and see how it's making use of the project_roles input variable. First, let's look at the specific resource block the error message was talking about:
resource "google_project_iam_member" "project-roles" {
for_each = local.project_roles_map_data
project = element(
split(
"=>",
each.value.role
),
0,
)
role = element(
split(
"=>",
each.value.role
),
1,
)
member = "serviceAccount:${google_service_account.service_accounts[each.value.name].email}"
}
There are a couple of somewhat complex things going on here, but the most relevant one is that this resource configuration creates multiple instances based on the content of local.project_roles_map_data. Let's look at local.project_roles_map_data now:
project_roles_map_data = zipmap(
  [for pair in local.name_role_pairs : "${pair[0]}-${pair[1]}"],
  [for pair in local.name_role_pairs : {
    name = pair[0]
    role = pair[1]
  }]
)
A little more complexity here that isn't super important to what we're looking for; the main thing to consider here is that this is constructing a map whose keys are built from element zero and element one of local.name_role_pairs, which is declared directly above, along with local.names that it refers to:
names = toset(var.names)
name_role_pairs = setproduct(local.names, toset(var.project_roles))
So what we've learned here is that the values in var.names and the values in var.project_roles both contribute to the keys of the for_each on that resource, which means that neither of those variable values should contain anything decided dynamically during the apply step.
However, we've also learned (above) that the project and role arguments of google_project_iam_member.project-roles are derived from the prefixes of elements in the two lists you provided as names and project_roles in your own module call.
Let's return back to where we started then, with all of this extra information in mind:
module "service_accounts" {
source = "terraform-google-modules/service-accounts/google"
version = "4.0.3"
project_id = module.project-factory.project_id
generate_keys = "true"
names = ["backend-runner"]
project_roles = [
"${module.project-factory.project_id}=>roles/cloudsql.client",
"${module.project-factory.project_id}=>roles/pubsub.publisher"
]
}
We've learned that names and project_roles must both contain only static values decided in the configuration, and so it isn't appropriate to use module.project-factory.project_id because that won't be known until the random project ID has been generated during the apply step.
However, we also know that this module is expecting the prefix of each item in project_roles (the part before the =>) to be a valid project ID, so there isn't any other value that would be reasonable to use there.
Therefore we're at a bit of an impasse: this second module has a rather awkward design in that it's trying to derive both a local instance key and a reference to a real remote object from the same value, and those two uses have conflicting requirements. But this isn't a module you created, so you can't easily modify it to address that design quirk.
Given that, I see two possible approaches to move forward, neither ideal but both workable with some caveats:
You could take the approach the error message offered as a workaround, asking Terraform to plan and apply the resources in the first module alone first, and then plan and apply the rest on a subsequent run once the project ID is already decided and recorded in the state:
terraform apply -target=module.project-factory
terraform apply
Although it's annoying to have to do this initial create in two steps, it does at least only matter for the initial creation of this infrastructure. If you update it later then you won't need to repeat this two-step process unless you've changed the configuration in a way that requires generating a new project ID.
While working through the above we saw that this approach of generating and returning a random project ID was optional based on that first module's var.random_project_id, which you set to "true" in your configuration. Without that, the project_id output would be just a copy of your given name argument, which seems to be statically defined by reference to a root module variable.
Unless you particularly need that random suffix on your project ID, you could leave random_project_id unset and thus just get the project ID set to the same static value as your var.project_name, which should then be an acceptable value to use as a for_each key.
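A rough sketch of that second approach, assuming the project ID then matches var.project_name exactly (check the module documentation for your versions, since I'm guessing at the defaults here):
module "project-factory" {
  source  = "terraform-google-modules/project-factory/google"
  version = "11.2.3"

  # random_project_id left unset, so no random suffix is added.
  name            = var.project_name
  org_id          = var.organization_id
  folder_id       = var.folder_id
  billing_account = var.billing_account
  activate_apis   = ["iam.googleapis.com", "run.googleapis.com"]
}

module "service_accounts" {
  source  = "terraform-google-modules/service-accounts/google"
  version = "4.0.3"

  project_id    = module.project-factory.project_id
  generate_keys = "true"
  names         = ["backend-runner"]

  project_roles = [
    # Statically-known prefixes, so the for_each keys can be planned.
    "${var.project_name}=>roles/cloudsql.client",
    "${var.project_name}=>roles/pubsub.publisher",
  ]
}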
Ideally this second module would be designed to separate the values it's using for instance keys from the values it's using to refer to real remote objects, and thus it would be possible to use the random-suffixed name for the remote object but a statically-defined name for the local object. If this were a module under your control then I would've suggested a design change like that, but I assume the current unusual design of that third-party module (packing multiple values into a single string with a delimiter) is a compromise resulting from wanting to retain backward compatibility with an earlier iteration of the module.

How to add a resource using the same module?

Terraform newbie here. I've a module which creates an instance in GCP. I'm using variables and terraform.tfvars to initialize them. I created one instance successfully - say instance-1. But when I modify the .tfvars file to create a second instance (in addition to the first), it says it has to destroy the first instance. How can I run the module to 'add' an instance, instead of 'replacing the instance'? I know the first instance which was created is in terraform.tfstate. But that doesn't explain the inability to 'add' an instance.
Maybe I'm wrong, but I'm looking at 'modules' (and their config files) as functions, such that I can call them any time with different parameters. That does not appear to be the case.
Terraform will try to keep the deployed resources matching your resource definitions.
If you want two instances at the same time, then you should describe them both in your .tf file.
For example, for identical instances, add a count to your definition:
resource "some_resource" "example" {
count = 2
name = "example-${count.index}"
}
Or, for example, two different resources with specific values:
resource "some_resource" "example-1" {
name = "example-1"
size = "small"
}
resource "some_resource" "example-2" {
name = "example-2"
size = "big"
}
Better still, you can set the specific values in tfvars for each resource:
resource "some_resource" "example" {
  count = 2
  name  = "example-${count.index}"
  size  = var.mysize[count.index]
}

variable "mysize" {
  type = list(string)
}
with a tfvars file containing:
mysize = ["small", "big"]

terraform route53 resolver setup

I've just been trying to use the new Terraform aws_route53_resolver_endpoint resource. It takes the subnet IDs as a block-type list. Unfortunately there appears to be no way to populate this from a list of subnets read from an output variable from the previous step.
Basically I have a set of subnets created using count on the subnet resources in a previous step. I'm trying to use these and set up an aws_route53_resolver_endpoint in each of these subnets:
resource "null_resource" "management_subnet_list" {
count = "${length(var.subnet_ids)}"
triggers {
subnet_id = "${element(data.terraform_remote_state.app_network.management_subnet_ids, count.index)}"
}
}
resource "aws_route53_resolver_endpoint" "dns_endpoint" {
name = "${var.environment_name}-${var.network_env}-dns"
direction = "OUTBOUND"
security_group_ids = ["${var.security_groups}"]
ip_address = "${null_resource.management_subnet_list.*.triggers}"
}
The above when run, results in an error: ip_address: should be a list
If I modify the code as follow:
ip_address = ["${null_resource.management_subnet_list.*.triggers}"]
I get the error: ip_address: attribute supports 2 item as a minimum, config has 1 declared
I can't seem to figure out any other way to create the resource list dynamically from a list of subnets.
Any help will be appreciated.
Per the resource reference for aws_route53_resolver_endpoint, the subnet_id in the ip_address block is a single string value.
To specify multiple subnets, you need to have multiple ip_address blocks.
Since you state that you're creating subnets with a count argument, you could potentially reference each individually by index, like aws_subnet.main[0].id, aws_subnet.main[1].id and so on, each in its own ip_address block. (Or for Terraform 0.11, I think it was "${aws_subnet.main.0.id}".)
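For example, with hard-coded indexes (aws_subnet.main here is an assumed resource name, standing in for however your subnets are actually declared), that would look roughly like this in 0.12+ syntax:
resource "aws_route53_resolver_endpoint" "dns_endpoint" {
  name               = "${var.environment_name}-${var.network_env}-dns"
  direction          = "OUTBOUND"
  security_group_ids = var.security_groups

  # The resource requires at least two ip_address blocks.
  ip_address {
    subnet_id = aws_subnet.main[0].id
  }

  ip_address {
    subnet_id = aws_subnet.main[1].id
  }
}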
However, a better way would be to use dynamic blocks, available in Terraform 0.12+.
Dynamic blocks allow you to create repeatable nested blocks within top-level blocks (resource, data, provider, and provisioner blocks currently support dynamic blocks).
A dynamic ip_address block within the aws_route53_resolver_endpoint resource could look like:
dynamic "ip_address" {
for_each = aws_subnet.main[*].id
iterator = subnet
content {
subnet_id = subnet.value
}
}
Which would result in a separate ip_address nested block for each subnet created in the aws_subnet.main resource.
The for_each argument is the complex value to iterate over. It accepts any collection or structural value, typically a list or map with one element per desired nested block.
For complete info on the dynamic nested block expression, see the Terraform documentation at: https://www.terraform.io/docs/language/expressions/dynamic-blocks.html

creation order of subnet with terraform

I need to create 6 subnets with the CIDR values below, but their order changes when Terraform creates them.
private_subnets = {
  "10.1.80.0/27"   = "x"
  "10.1.80.32/27"  = "x"
  "10.1.80.64/28"  = "y"
  "10.1.80.80/28"  = "y"
  "10.1.80.96/27"  = "z"
  "10.1.80.128/27" = "z"
}
Terraform is creating them in the order 10.1.80.0/27, 10.1.80.128/27, 10.1.80.32/27, 10.1.80.64/28, 10.1.80.80/28, 10.1.80.96/27.
The Terraform module:
resource "aws_subnet" "private" {
vpc_id = "${var.vpc_id}"
cidr_block = "${element(keys(var.private_subnets), count.index)}"
availability_zone = "${element(var.availability_zones, count.index)}"
count = "${length(var.private_subnets)}"
tags {
Name = "${lookup(var.private_subnets, element(keys(var.private_subnets), count.index))}
}
}
Updated Answer:
Thanks to the discussion in the comments, I have revised my answer:
You are assuming an order within a map. This is not intended behaviour. As your example shows, Terraform orders the keys alphabetically internally, i.e., you can "think" of your variable as
private_subnets = {
  "10.1.80.0/27"   = "x"
  "10.1.80.128/27" = "z"
  "10.1.80.32/27"  = "x"
  "10.1.80.64/28"  = "y"
  "10.1.80.80/28"  = "y"
  "10.1.80.96/27"  = "z"
}
You are running into problems because you have mismatches with your other variable var.availability_zones, where you assume the index is sorted the same way as for var.private_subnets.
Relying on the above (alphabetical) ordering is not a good solution, since it may change with any version of Terraform (the order of keys is not guaranteed).
Hence, I propose to use a list of maps:
private_subnets = [
  {
    "cidr"              = "10.1.80.0/27"
    "name"              = "x"
    "availability_zone" = 1
  },
  {
    "cidr"              = "10.1.80.32/27"
    "name"              = "x"
    "availability_zone" = 2
  },
  …
]
I encoded the availability zone as an index into your var.availability_zones list. However, you could also consider using the availability zone directly.
The adaptation of your code is straightforward: get (element(…)) the list element to obtain the map, and then lookup(…) the desired key.
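In current Terraform syntax, that adaptation might look roughly like this (treating availability_zone as an index into var.availability_zones, as in the sketch above):
resource "aws_subnet" "private" {
  count = length(var.private_subnets)

  vpc_id     = var.vpc_id
  cidr_block = lookup(element(var.private_subnets, count.index), "cidr")

  # Resolve the zone via the stored index; alternatively, store the zone name directly.
  availability_zone = element(var.availability_zones, lookup(element(var.private_subnets, count.index), "availability_zone"))

  tags = {
    Name = lookup(element(var.private_subnets, count.index), "name")
  }
}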
Old Answer (not applicable here):
Before Terraform creates any resources, it builds a graph structure to represent all the objects it wants to track (create, update, delete) and their dependencies on one another.
In your example, 6 different aws_subnet objects are created in the graph which do not depend on each other (there is no variable in one subnet dependent on another subnet).
When Terraform then creates the resources, it does so concurrently in (potentially) multiple threads, creating resources simultaneously if they do not depend on each other.
This is why you might see very different orders of execution across multiple runs of Terraform.
Note that this is a feature: if you have many resources to be created that have no dependency on each other, they are all created simultaneously, saving a lot of time on long-running creation operations.
A solution to your problem is to explicitly model the dependencies you have in mind. Why should one subnet be created before another? And if so, how can you make them dependent (e.g. via the depends_on argument)?
Answering these questions should point you in the right direction for modelling your code according to your required layout.
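For illustration only (explicit ordering is rarely needed for subnets), an explicit dependency can be declared with depends_on:
resource "aws_subnet" "first" {
  vpc_id     = var.vpc_id
  cidr_block = "10.1.80.0/27"
}

resource "aws_subnet" "second" {
  vpc_id     = var.vpc_id
  cidr_block = "10.1.80.32/27"

  # Forces creation ordering even though no attribute of "first" is referenced.
  depends_on = [aws_subnet.first]
}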
