Combine Variable Values and Explicitly Defined Variable Values in Terraform Tags for AWS

Currently, I'm working on a requirement to make Terraform tags for AWS resources more modular. In this case, there will be one tag, 'Function', that is unique to each resource, while the rest of the tags to be attached apply to all resources. What I'm trying to do is combine the unique 'Function' value with the common tags for each resource.
Here's what I've got so far:
tags = {
  Resource = "Example",
  "${var.tags}
This tags value is defined as a map in the variables.tf file like so:
variable "tags" {
  type        = map
  description = "Tags for infrastructure resources."
}
and populated in the tfvars file with:
tags = {
  "Product"     = "Name",
  "Application" = "App",
  "Owner"       = "Email"
}
When I run terraform plan, however, I'm getting an error:
Expected an attribute value, introduced by an equals sign ("=").
How can variables be combined like this in Terraform? Thanks in advance for your help.

Figured this one out after further testing. Here you go:
tags = "${merge(var.tags,
  map("Product", "Product Name",
      "App", "${var.environment}")
)}"
So, to reiterate: this code merges a map variable of tags that (in my case) are applicable to many resources with the tags (Product and App) that are unique to each infrastructure resource. Hope this helps someone in the future. Happy Terraforming.
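For reference, on Terraform 0.12 and later the same merge can be written without the interpolation wrapper and without the map() function (which was deprecated in 0.12 and later removed). A minimal sketch, assuming the same var.tags and var.environment:
tags = merge(var.tags, {
  Product = "Product Name"
  App     = var.environment
})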

I tried to use the map() function, but it does not work with newer Terraform versions.
The line below works for me:
tags = "${merge(var.resource_tags, {a="bb"})}"

Creating values in my tfvars file did not work for me, so here is my approach.
I created a separate variable in my variables.tf file to use during the tagging process. My default tags variable is imported/passed from a parent module, so it doesn't need to specify any default data. The extra tagging in the child module is done in the sub_tags variable.
Imported/passed from the parent/root module:
variable "tags" {
  type = "map"
}
Tags in the child module:
variable "sub_tags" {
  type = "map"
  default = {
    Extra_Tags_key = "extra tagging value"
  }
}
In the resource that needs the extra tagging, I call it like this:
tags = "${merge(var.tags, var.sub_tags)}"
This worked great for me.
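For context, here is a minimal sketch of how the parent/root module might pass the common tags down into this child module. The module name and source path here are hypothetical, not from the original answer:
module "child" {
  source = "./modules/child"

  tags = {
    "Product"     = "Name"
    "Application" = "App"
    "Owner"       = "Email"
  }
}
The child module then only adds its own sub_tags on top of whatever the parent provides.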

Related

How to solve for_each + "Terraform cannot predict how many instances will be created" issue?

I am trying to create a GCP project with this:
module "project-factory" {
source = "terraform-google-modules/project-factory/google"
version = "11.2.3"
name = var.project_name
random_project_id = "true"
org_id = var.organization_id
folder_id = var.folder_id
billing_account = var.billing_account
activate_apis = [
"iam.googleapis.com",
"run.googleapis.com"
]
}
After that, I am trying to create a service account, like so:
module "service_accounts" {
  source  = "terraform-google-modules/service-accounts/google"
  version = "4.0.3"

  project_id    = module.project-factory.project_id
  generate_keys = "true"
  names         = ["backend-runner"]
  project_roles = [
    "${module.project-factory.project_id}=>roles/cloudsql.client",
    "${module.project-factory.project_id}=>roles/pubsub.publisher"
  ]
}
To be honest, I am fairly new to Terraform. I have read a few answers on the topic (this and this) but I am unable to understand how that would apply here.
I am getting the error:
│ Error: Invalid for_each argument
│
│ on .terraform/modules/pubsub-exporter-service-account/main.tf line 47, in resource "google_project_iam_member" "project-roles":
│ 47: for_each = local.project_roles_map_data
│ ├────────────────
│ │ local.project_roles_map_data will be known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the
│ -target argument to first apply only the resources that the for_each depends on.
Looking forward to learning more about Terraform through this challenge.
With only parts of the configuration visible here I'm guessing a little bit, but let's see. You mentioned that you'd like to learn more about Terraform as part of this exercise, so I'm going to go into a lot of detail about the chain here to explain why I'm recommending what I'm going to recommend, though you can skip to the end if you find this extra detail uninteresting.
We'll start with that first module's definition of its project_id output value:
output "project_id" {
  value = module.project-factory.project_id
}
module.project-factory here is referring to a nested module call, so we need to look one level deeper in the nested module terraform-google-modules/project-factory/google//modules/core_project_factory:
output "project_id" {
  value = module.project_services.project_id

  depends_on = [
    module.project_services,
    google_project.main,
    google_compute_shared_vpc_service_project.shared_vpc_attachment,
    google_compute_shared_vpc_host_project.shared_vpc_host,
  ]
}
Another nested module call! 😬 That one declares its project_id like this:
output "project_id" {
  description = "The GCP project you want to enable APIs on"
  value       = element(concat([for v in google_project_service.project_services : v.project], [var.project_id]), 0)
}
Phew! 😅 Finally an actual resource. This expression in this case seems to be taking the project attribute of a google_project_service resource instance, or potentially taking it from var.project_id if that resource was disabled in this instance of the module. Let's have a look at the google_project_service.project_services definition:
resource "google_project_service" "project_services" {
  for_each = local.services

  project                    = var.project_id
  service                    = each.value
  disable_on_destroy         = var.disable_services_on_destroy
  disable_dependent_services = var.disable_dependent_services
}
project here is set to var.project_id, so it seems like either way this innermost project_id output just reflects back the value of the project_id input variable, so we need to jump back up one level and look at the module call to this module to see what that was set to:
module "project_services" {
  source = "../project_services"

  project_id                  = google_project.main.project_id
  activate_apis               = local.activate_apis
  activate_api_identities     = var.activate_api_identities
  disable_services_on_destroy = var.disable_services_on_destroy
  disable_dependent_services  = var.disable_dependent_services
}
project_id is set to the project_id attribute of google_project.main:
resource "google_project" "main" {
  name                = var.name
  project_id          = local.temp_project_id
  org_id              = local.project_org_id
  folder_id           = local.project_folder_id
  billing_account     = var.billing_account
  auto_create_network = var.auto_create_network
  labels              = var.labels
}
project_id here is set to local.temp_project_id, which is declared further up in the same file:
temp_project_id = var.random_project_id ? format(
  "%s-%s",
  local.base_project_id,
  random_id.random_project_id_suffix.hex,
) : local.base_project_id
This expression includes a reference to random_id.random_project_id_suffix.hex, and .hex is a result attribute from random_id, and so its value won't be known until apply time due to how that random_id resource type is implemented. (It generates a random value during the apply step and saves it in the state so it'll stay consistent on future runs.)
This means that (after all of this indirection) module.project-factory.project_id in your module is not a value defined statically in the configuration, and might instead be decided dynamically during the apply step. That means it's not an appropriate value to use as part of the instance key of a resource, and thus not appropriate to use as a key in a for_each map.
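To make that rule concrete, here is a tiny hypothetical illustration (not part of either module): a for_each key built from a literal is fine, while one built from an apply-time attribute such as a random suffix triggers exactly this error:
locals {
  # OK as for_each keys: the keys are known while planning
  ok_roles = {
    "roles/pubsub.publisher" = "roles/pubsub.publisher"
  }

  # NOT OK as for_each keys: random_id.suffix.hex is decided only during apply
  # bad_roles = {
  #   "${random_id.suffix.hex}" = "roles/pubsub.publisher"
  # }
}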
Unfortunately the use of for_each here is hidden inside this other module terraform-google-modules/service-accounts/google, and so we'll need to have a look at that one too and see how it's making use of the project_roles input variable. First, let's look at the specific resource block the error message was talking about:
resource "google_project_iam_member" "project-roles" {
  for_each = local.project_roles_map_data

  project = element(
    split(
      "=>",
      each.value.role
    ),
    0,
  )
  role = element(
    split(
      "=>",
      each.value.role
    ),
    1,
  )
  member = "serviceAccount:${google_service_account.service_accounts[each.value.name].email}"
}
There are a couple of somewhat-complex things going on here, but the most relevant one for what we're looking at is that this resource configuration creates multiple instances based on the content of local.project_roles_map_data. Let's look at local.project_roles_map_data now:
project_roles_map_data = zipmap(
  [for pair in local.name_role_pairs : "${pair[0]}-${pair[1]}"],
  [for pair in local.name_role_pairs : {
    name = pair[0]
    role = pair[1]
  }]
)
A little more complexity here that isn't super important to what we're looking for; the main thing to consider here is that this is constructing a map whose keys are built from element zero and element one of local.name_role_pairs, which is declared directly above, along with local.names that it refers to:
names           = toset(var.names)
name_role_pairs = setproduct(local.names, toset(var.project_roles))
So what we've learned here is that the values in var.names and the values in var.project_roles both contribute to the keys of the for_each on that resource, which means that neither of those variable values should contain anything decided dynamically during the apply step.
However, we've also learned (above) that the project and role arguments of google_project_iam_member.project-roles are derived from the prefix and suffix of each element in the project_roles list you provided in your own module call.
Let's return back to where we started then, with all of this extra information in mind:
module "service_accounts" {
  source  = "terraform-google-modules/service-accounts/google"
  version = "4.0.3"

  project_id    = module.project-factory.project_id
  generate_keys = "true"
  names         = ["backend-runner"]
  project_roles = [
    "${module.project-factory.project_id}=>roles/cloudsql.client",
    "${module.project-factory.project_id}=>roles/pubsub.publisher"
  ]
}
We've learned that names and project_roles must both contain only static values decided in the configuration, and so it isn't appropriate to use module.project-factory.project_id because that won't be known until the random project ID has been generated during the apply step.
However, we also know that this module is expecting the prefix of each item in project_roles (the part before the =>) to be a valid project ID, so there isn't any other value that would be reasonable to use there.
Therefore we're at a bit of an impasse: this second module has a rather awkward design decision in that it's trying to derive both a local instance key and a reference to a real remote object from the same value, and those two situations have conflicting requirements. But this isn't a module you created, so you can't easily modify it to address that design quirk.
Given that, I see two possible approaches to move forward, neither ideal but both workable with some caveats:
You could take the approach the error message offered as a workaround, asking Terraform to plan and apply the resources in the first module alone first, and then plan and apply the rest on a subsequent run once the project ID is already decided and recorded in the state:
terraform apply -target=module.project-factory
terraform apply
Although it's annoying to have to do this initial create in two steps, it does at least only matter for the initial creation of this infrastructure. If you update it later then you won't need to repeat this two-step process unless you've changed the configuration in a way that requires generating a new project ID.
While working through the above we saw that this approach of generating and returning a random project ID was optional based on that first module's var.random_project_id, which you set to "true" in your configuration. Without that, the project_id output would be just a copy of your given name argument, which seems to be statically defined by reference to a root module variable.
Unless you particularly need that random suffix on your project ID, you could leave random_project_id unset and thus just get the project ID set to the same static value as your var.project_name, which should then be an acceptable value to use as a for_each key.
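A sketch of that second approach, assuming nothing else about your configuration changes (this is just your original module call with the random_project_id line removed):
module "project-factory" {
  source  = "terraform-google-modules/project-factory/google"
  version = "11.2.3"

  name            = var.project_name
  org_id          = var.organization_id
  folder_id       = var.folder_id
  billing_account = var.billing_account
  activate_apis = [
    "iam.googleapis.com",
    "run.googleapis.com"
  ]
}
With random_project_id left unset, module.project-factory.project_id is derived from var.project_name and is therefore known during planning, which satisfies the for_each requirement in the service-accounts module.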
Ideally this second module would be designed to separate the values it's using for instance keys from the values it's using to refer to real remote objects, and thus it would be possible to use the random-suffixed name for the remote object but a statically-defined name for the local object. If this were a module under your control then I would've suggested a design change like that, but I assume the current unusual design of that third-party module (packing multiple values into a single string with a delimiter) is a compromise resulting from wanting to retain backward compatibility with an earlier iteration of the module.

Terraform: How to pass multiple values on the command line using list(string) in the variable.tf file?

I have simple main and variable files for deploying a webapp for containers in Azure.
But I would like terraform plan to use variables from the command line to choose names, as follows:
terraform plan -var resource_group_name=my-rg
This worked perfectly after commenting out the default value for the resource group, like this:
main.tf
data "azurerm_resource_group" "my-rg" {
  name = var.resource_group_name
}
variable.tf
variable "resource_group_name" {
  # default = "Search-API"
}
But if I want to do the same for a list of strings, I don't know how to do it. I want it so that if I pass 2 names, 2 webapps are created; if I pass 3, then 3 webapps; and so on.
I tried this (also commenting out the default value):
main.tf
resource "azurerm_app_service" "azure-webapp" {
  count = length(var.webapp_server_name)
  name  = var.webapp_server_name[count.index]
variable.tf
variable "webapp_server_name" {
  description = "Create Webapp with following names"
  type        = list(string)
  #default = ["webapp-a", "webapp-b", "webapp-c"]
}
But I'm getting:
terraform plan -var webapp_server_name=webapp-a
Error: Variables not allowed
on <value for var.webapp_server_name> line 1:
(source code not available)
Variables may not be used here.
I also tried with an empty default list, like:
variable "webapp_server_name" {
  description = "Create Webapp with following names"
  type        = list(string)
  default     = []
}
Is there a way to do such a thing with Terraform: define an empty list and pass values (one, two, or more) from the command line?
Thanks
UPDATE
I tried it like this, following this post, but now it is asking me to enter the value even though I'm passing it through the command line:
terraform plan -var 'listvar=["webapp-a"]'
var.webapp_server_name
Create Webapp with following names
Enter a value:
If there is a variable declaration:
variable "webapp_server_name" {
  description = "Create Webapp with following names"
  type        = list(string)
  #default = ["webapp-a", "webapp-b", "webapp-c"]
}
You could use it like this, with \ to escape the quotes:
terraform plan -var 'webapp_server_name=[\"webapp-a\", \"webapp-b\", \"webapp-c\"]'
For example, it worked for me with Terraform v0.13.4.
When you pass in the variable from the command line with -var webapp_server_name=webapp-a you are passing it in as a string.
But you've defined the variable as a list. So based on the docs you'll want the command line to look something like:
terraform plan -var='webapp_server_name=["webapp-a"]'
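And, to extend that example slightly (not from the original answer), passing more names creates one webapp per element, because of the count = length(var.webapp_server_name) expression in the resource:
terraform plan -var='webapp_server_name=["webapp-a", "webapp-b", "webapp-c"]'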

Terraform resource as a module input variable

When developing a Terraform module, I sometimes find myself needing to define different input variables for the same resource. For example, right now I need the NAME and ARN of the same AWS/ECS cluster for my module, so I defined two variables in my module: ecs_cluster_arn and ecs_cluster_name.
For the sake of DRY, it would be very nice if I could just define one input variable ecs_cluster of type aws_ecs_cluster and then use whatever I need inside my module.
I can't seem to find a way to do this. Does anyone know if it's possible?
You can define an input variable whose type constraint is compatible with the schema of the aws_ecs_cluster resource type. Typically you'd write a subset type constraint that contains only the attributes the module actually needs. For example:
variable "ecs_cluster" {
  type = object({
    name = string
    arn  = string
  })
}
Elsewhere in the module, you can use var.ecs_cluster.name and var.ecs_cluster.arn to refer to those attributes. The caller of the module can pass in anything that's compatible with that type constraint, which includes a whole instance of the aws_ecs_cluster resource type, but would also include a literal object containing just those two attributes:
module "example" {
  # ...
  ecs_cluster = aws_ecs_cluster.example
}

module "example" {
  # ...
  ecs_cluster = {
    name = "blah"
    arn  = "arn:aws:yada-yada:blah"
  }
}
In many cases this would also allow passing the result of the corresponding data source instead of the managed resource type. Unfortunately for this pairing in particular the data source for some reason uses the different attribute name cluster_name and therefore isn't compatible. That's unfortunate, and not the typical design convention for pairs of managed resource type and data source with the same name; I assume it was a design oversight.
module "example" {
  # ...

  # This doesn't actually work for the aws_ecs_cluster
  # data source because of a design quirk, but this would
  # be possible for most other pairings such as
  # the aws_subnet managed resource type and data source.
  ecs_cluster = data.aws_ecs_cluster.example
}
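If you do need to go through the data source, one workaround (my own suggestion, not from the answer above) is to construct a compatible object explicitly from the data source's attributes:
module "example" {
  # ...

  # Build the object by hand so the attribute names match the
  # type constraint, even though the data source calls it cluster_name.
  ecs_cluster = {
    name = data.aws_ecs_cluster.example.cluster_name
    arn  = data.aws_ecs_cluster.example.arn
  }
}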

How do I pick elements from a terraform list

I am creating a series of resources in Terraform (in this case, DynamoDB tables). I want to apply IAM policies to subgroups of them. E.g.
resource "aws_dynamodb_table" "foo" {
  count = "${length(var.tables)}"
  name  = "foo-${element(var.tables,count.index)}"

  tags {
    Name        = "foo-${element(var.tables,count.index)}"
    Environment = "<unsure how to get this>"
    Source      = "<unsure how to get this>"
  }
}
All of these share some common elements; e.g. var.tables is a list composed of the Cartesian product of var.environments and var.sources:
environments = ["dev","qa","prod"]
sources = ["a","b","c"]
So:
tables = ["a:dev","a:qa","a:prod","b:dev","b:qa","b:prod","c:dev","c:qa","c:prod"]
I want to get the ARNs of the created Dynamo tables that have, e.g., c (i.e. those with the names ["c:dev","c:qa","c:prod"]) or prod (i.e. those with the names ["a:prod","b:prod","c:prod"]).
Is there any sane way to do this with terraform 0.11 (or even 0.12 for that matter)?
I am looking to:
group the dynamo db table resources by some of the inputs (environment or source) so I can apply some policy to each group
Extract the input for each created one so I can apply the correct tags
I was thinking of, potentially, instead of creating the cross-product list, to create maps for each input:
{
  "a": ["dev","qa","prod"],
  "b": ["dev","qa","prod"],
  "c": ["dev","qa","prod"]
}
or
{
  "dev": ["a","b","c"],
  "qa": ["a","b","c"],
  "prod": ["a","b","c"]
}
That would make it easy to find the target names for each one, since I can look up by the input, but it only gives me the names; it does not make it easy to get the actual resources (and hence the ARNs).
Thanks!
A Terraform 0.12 solution would be to derive the cartesian product automatically (using setproduct) and use a for expression to shape it into a form that's convenient for what you need. For example:
locals {
  environments = ["dev", "qa", "prod"]
  sources      = ["a", "b", "c"]

  tables = [for pair in setproduct(local.environments, local.sources) : {
    environment = pair[0]
    source      = pair[1]
    name        = "${pair[1]}:${pair[0]}"
  }]
}
resource "aws_dynamodb_table" "foo" {
  count = length(local.tables)
  name  = "foo-${local.tables[count.index].name}"

  tags = {
    Name        = "foo-${local.tables[count.index].name}"
    Environment = local.tables[count.index].environment
    Source      = local.tables[count.index].source
  }
}
At the time I write this the resource for_each feature is still in development, but in a near-future Terraform v0.12 minor release it should be possible to improve this further by making these table instances each be identified by their names, rather than by their positions in the local.tables list:
# (with the same "locals" block as in the above example)
resource "aws_dynamodb_table" "foo" {
  for_each = { for t in local.tables : t.name => t }
  name     = "foo-${each.key}"

  tags = {
    Name        = "foo-${each.key}"
    Environment = each.value.environment
    Source      = each.value.source
  }
}
As well as cleaning up some redundancy in the syntax, this new for_each form will cause Terraform to identify these instances with addresses like aws_dynamodb_table.foo["a:dev"] instead of aws_dynamodb_table.foo[0], which means that you'll be able to freely add and remove members of the two initial lists without causing churn and replacement of other instances just because the list indices changed.
This sort of thing would be much harder to achieve in Terraform 0.11. There are some general patterns that can help translate certain 0.12-only constructs to 0.11-compatible features, which might work here:
A for expression returning a sequence (one with square brackets around it, rather than braces) can be simulated with a data "null_data_source" block with count set, if the result would've been a map of string values only.
A Terraform 0.12 object in a named local value can in principle be replaced with a separate simple map of local value for each object attribute, using a common set of keys in each map.
Terraform 0.11 does not have the setproduct function, but for sequences this small it's not a huge problem to just write out the cartesian product yourself as you did in the question here.
The result will certainly be very inelegant, but I expect it's possible to get something working on Terraform 0.11 if you apply the above ideas and make some compromises.
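As a rough illustration of the second idea above (separate simple maps of local values sharing a common set of keys), hand-written for Terraform 0.11 (this is my own sketch, not from the original answer, and only shows a few of the pairs; the rest follow the same pattern):
locals {
  # One map per attribute, all keyed by the same table names
  table_environments = {
    "a:dev"  = "dev"
    "a:qa"   = "qa"
    "a:prod" = "prod"
  }

  table_sources = {
    "a:dev"  = "a"
    "a:qa"   = "a"
    "a:prod" = "a"
  }
}
In 0.11 each map can then be read with an expression like "${lookup(local.table_environments, element(var.tables, count.index))}" where the 0.12 example used local.tables[count.index].environment.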

How to add a resource using the same module?

Terraform newbie here. I have a module which creates an instance in GCP. I'm using variables and terraform.tfvars to initialize them. I created one instance successfully, say instance-1. But when I modify the .tfvars file to create a second instance (in addition to the first), it says it has to destroy the first instance. How can I run the module to 'add' an instance instead of 'replacing' the instance? I know the first instance which was created is in terraform.tfstate, but that doesn't explain the inability to 'add' an instance.
Maybe I'm wrong, but I'm looking at 'modules' (and their config files) as functions, such that I can call them anytime with different parameters. That does not appear to be the case.
Terraform will try to keep the deployed resources matching your resource definitions.
If you want two instances at the same time, then you should describe them both in your .tf file.
Example: identical instances; add a count to your definition:
resource "some_resource" "example" {
  count = 2
  name  = "example-${count.index}"
}
Example: two different resources with specific values:
resource "some_resource" "example-1" {
  name = "example-1"
  size = "small"
}

resource "some_resource" "example-2" {
  name = "example-2"
  size = "big"
}
Better yet, you can set the specific values in tfvars for each resource:
resource "some_resource" "example" {
  count = 2
  name  = "example-${count.index}"
  size  = var.mysize[count.index]
}

variable "mysize" {
  type = list(string)
}
with a tfvars file:
mysize = ["small", "big"]
