terraform for_each implementation with values from .tfvars

I have a common.tfvars file with the definition of a variable as:
bqtable_date_partition = [
  { dataset = "d1", table_name = "d1-t1", part_col = "partition_date",
    part_type = "DAY", schema_file = "data_tables/d1-t1.json" },
  { dataset = "d1", table_name = "d1-t2", part_col = "tran_dt",
    part_type = "DAY", schema_file = "data_tables/d1-t2.json" },
  { dataset = "d2", table_name = "d2-t1", part_col = "tran_dt",
    part_type = "DAY", schema_file = "data_tables/d2-t1.json" },
]
and I am referencing this var in my main.tf file with the following resource definition:
resource "google_bigquery_table" "bq_tables_dt_pt" {
  count      = length(var.bqtable_date_partition)
  project    = var.project_id
  dataset_id = "${var.bqtable_date_partition[count.index].dataset}_${var.env}"
  table_id   = var.bqtable_date_partition[count.index].table_name
  time_partitioning {
    type  = var.bqtable_date_partition[count.index].part_type
    field = var.bqtable_date_partition[count.index].part_col
  }
  schema     = file("${path.module}/tables/${var.bqtable_date_partition[count.index].schema_file}")
  depends_on = [google_bigquery_dataset.crte_bq_dataset]
  labels = {
    env = var.env
    ind = "corp"
  }
}
I want to change the resource definition to use "for_each" instead of "count" to loop through the list:
My motive for changing from count to for_each is to eliminate the dependency on the order in which I have written the elements of the variable "bqtable_date_partition".
I did this:
resource "google_bigquery_table" "bq_tables_dt_pt" {
  for_each   = var.bqtable_date_partition
  project    = var.project_id
  dataset_id = "${each.value.dataset}_${var.env}"
  table_id   = each.value.table_name
  time_partitioning {
    type  = each.value.part_type
    field = each.value.part_col
  }
  schema     = file("${path.module}/tables/${each.value.schema_file}")
  depends_on = [google_bigquery_dataset.crte_bq_dataset]
  labels = {
    env = var.env
    ind = "corp"
  }
}
I got the following error as expected:
The given "for_each" argument value is unsuitable: the "for_each"
argument must be a map or set of strings, and you have provided a
value of type list of map of string.
Can anyone help me with what changes I need do to make in the resource definition to use "for_each"?
Terraform version - 0.14.x

The error says that for_each only accepts a map or a set of strings, so we have to convert the input variable to one of those types.
https://www.terraform.io/docs/language/expressions/for.html
resource "google_bigquery_table" "bq_tables_dt_pt" {
  for_each   = { for index, data_partition in var.bqtable_date_partition : index => data_partition }
  project    = var.project_id
  dataset_id = "${each.value.dataset}_${var.env}"
  table_id   = each.value.table_name
  time_partitioning {
    type  = each.value.part_type
    field = each.value.part_col
  }
  schema     = file("${path.module}/tables/${each.value.schema_file}")
  depends_on = [google_bigquery_dataset.crte_bq_dataset]
  labels = {
    env = var.env
    ind = "corp"
  }
}
So basically, here we are converting the for_each input into the following format, and then referencing values from the newly created map.
{
  "0" = {
    "dataset"     = "d1"
    "part_col"    = "partition_date"
    "part_type"   = "DAY"
    "schema_file" = "data_tables/d1-t1.json"
    "table_name"  = "d1-t1"
  }
  "1" = {
    "dataset"     = "d1"
    "part_col"    = "tran_dt"
    "part_type"   = "DAY"
    "schema_file" = "data_tables/d1-t2.json"
    "table_name"  = "d1-t2"
  }
  "2" = {
    "dataset"     = "d2"
    "part_col"    = "tran_dt"
    "part_type"   = "DAY"
    "schema_file" = "data_tables/d2-t1.json"
    "table_name"  = "d2-t1"
  }
}

There are two main requirements for using for_each:
You must have a collection that has one element for each resource instance you want to declare.
There must be some way to derive a unique identifier from each element of that collection which Terraform will then use as the unique instance key.
It seems like your collection meets both of these criteria, assuming that table_name is a unique string across all of those values. All that remains, then, is to project the collection into a map so that Terraform can see from the keys that you intend to use table_name as the unique tracking key:
resource "google_bigquery_table" "bq_tables_dt_pt" {
  for_each = {
    for o in var.bqtable_date_partition : o.table_name => o
  }
  # ...
}
Here I've used a for expression to project from a sequence to a mapping, where each element is identified by the value in its table_name attribute.
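Applied to the example data from the question, this projection would produce a map shaped roughly like the following, with keys taken from table_name rather than from list positions:

```hcl
{
  "d1-t1" = {
    dataset     = "d1"
    part_col    = "partition_date"
    part_type   = "DAY"
    schema_file = "data_tables/d1-t1.json"
    table_name  = "d1-t1"
  }
  "d1-t2" = {
    dataset     = "d1"
    part_col    = "tran_dt"
    part_type   = "DAY"
    schema_file = "data_tables/d1-t2.json"
    table_name  = "d1-t2"
  }
  "d2-t1" = {
    dataset     = "d2"
    part_col    = "tran_dt"
    part_type   = "DAY"
    schema_file = "data_tables/d2-t1.json"
    table_name  = "d2-t1"
  }
}
```

Terraform then tracks the instances at addresses like google_bigquery_table.bq_tables_dt_pt["d1-t1"], so reordering or removing list elements no longer affects unrelated tables.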
If you are in a situation where you're able to change the interface to this module, you could simplify things by changing the variable's declaration to expect a map instead of a list. That avoids the need for the projection and makes it explicit to the module caller that the table IDs must be unique:
variable "bqtable_date_partition" {
  type = map(object({
    dataset     = string
    part_col    = string
    part_type   = string
    schema_file = string
  }))
}
Then you could just assign var.bqtable_date_partition directly to for_each as you tried before, because it will already be of a suitable type. However, this also requires changing your calling module to pass a map value instead of a list value, so it might not be practical if your module has many callers that would all need to be updated to remain compatible.
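For illustration, here is a sketch of how the common.tfvars value from the question might look after switching to a map-typed variable; note that table_name moves into the map key, so the resource would use each.key for table_id instead of each.value.table_name:

```hcl
bqtable_date_partition = {
  "d1-t1" = {
    dataset     = "d1"
    part_col    = "partition_date"
    part_type   = "DAY"
    schema_file = "data_tables/d1-t1.json"
  }
  "d1-t2" = {
    dataset     = "d1"
    part_col    = "tran_dt"
    part_type   = "DAY"
    schema_file = "data_tables/d1-t2.json"
  }
  "d2-t1" = {
    dataset     = "d2"
    part_col    = "tran_dt"
    part_type   = "DAY"
    schema_file = "data_tables/d2-t1.json"
  }
}
```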

Related

How to merge a default list of Object with values from tfvars (Terraform)

I'm trying to achieve something with Terraform and I'm having trouble finding a solution.
I define a variable like this:
variable "node_pools" {
  type = list(object({
    name               = string
    location           = string
    cluster            = string
    initial_node_count = number
    tag                = string
  }))
  default = [
    {
      name               = "default-pool"
      cluster            = "cluster_name"
      location           = "usa"
      initial_node_count = 3
      tag                = "default"
    },
  ]
}
Then, in my tfvars file, I define my node_pools like this:
node_pools = [
  {
    name = "pool-01"
    tag  = "first-pool"
  },
  {
    name = "pool-highmemory"
    tag  = "high-memory"
  }
]
Then, in my main.tf I try to use my node_pools variable, but I need the objects filled with the default values that are not specified in the tfvars file, like location, cluster, etc.
I think I need to merge, but I can't find any way to achieve that.
Thanks for any help.
The default value for a whole variable is only for situations where the variable isn't assigned any value at all.
If you want to specify default values for attributes within a nested object then you can use optional object type attributes and specify the default values inline inside the type constraint, like this:
variable "node_pools" {
  type = list(object({
    name               = string
    location           = optional(string, "usa")
    cluster            = optional(string, "cluster_name")
    initial_node_count = optional(number, 3)
    tag                = optional(string, "default")
  }))
  default = [
    {
      name = "default-pool"
    },
  ]
}
Because these default values are defined directly for the nested attributes they apply separately to each element of the list of objects.
If the caller doesn't set a value for this variable at all then the single object in the default value will be expanded using the default values to produce this effective default value:
tolist([
  {
    name               = "default-pool"
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "default"
  },
])
With the tfvars file you showed the effective value would instead be the following:
tolist([
  {
    name               = "pool-01"
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "first-pool"
  },
  {
    name               = "pool-highmemory"
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "high-memory"
  },
])
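Note that optional object type attributes with inline defaults require a relatively recent Terraform version (the defaults syntax became stable in Terraform v1.3). On older versions, a similar effect can be sketched with the merge() function the question mentions, assuming the variable's type is relaxed so that callers may pass partial objects:

```hcl
variable "node_pools" {
  # "any" so that callers may omit attributes; optional() is not available here
  type    = any
  default = [{ name = "default-pool" }]
}

locals {
  node_pool_defaults = {
    location           = "usa"
    cluster            = "cluster_name"
    initial_node_count = 3
    tag                = "default"
  }

  # Apply the defaults to every element; caller-supplied values win
  # because later arguments to merge() override earlier ones.
  node_pools = [
    for p in var.node_pools : merge(local.node_pool_defaults, p)
  ]
}
```

The rest of the configuration would then reference local.node_pools instead of var.node_pools. The trade-off is that the "any" type gives up the type checking that the object type constraint provides.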

Terraform Invalid for_each argument local will be known only after apply

I would like to create an AWS account with SSO Account Assignments in the same first Terraform run, without hitting the for_each limitation on dynamic values that cannot be predicted during plan.
I've tried to separate the aws_organizations_account resource from aws_ssoadmin_account_assignment into a completely separate TF module, and I also tried to use depends_on between those resources and modules.
What is the simplest and correct way to fix this issue?
Terraform v1.2.4
AWS SSO Account Assignments Module
Closed Pull Request that did not fix this issue
main.tf file (aws module)
resource "aws_organizations_account" "account" {
  name  = var.aws_account_name
  email = "${var.aws_account_name}#gmail.com"
  tags = {
    Name = var.aws_account_name
  }
  parent_id = var.aws_org_folder_id
}

data "aws_identitystore_group" "this" {
  for_each          = local.group_list
  identity_store_id = local.identity_store_id
  filter {
    attribute_path  = "DisplayName"
    attribute_value = each.key
  }
}

data "aws_identitystore_user" "this" {
  for_each          = local.user_list
  identity_store_id = local.identity_store_id
  filter {
    attribute_path  = "UserName"
    attribute_value = each.key
  }
}

data "aws_ssoadmin_instances" "this" {}

locals {
  assignment_map = {
    for a in var.account_assignments :
    format("%v-%v-%v-%v", aws_organizations_account.account.id, substr(a.principal_type, 0, 1), a.principal_name, a.permission_set_name) => a
  }
  identity_store_id = tolist(data.aws_ssoadmin_instances.this.identity_store_ids)[0]
  sso_instance_arn  = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  group_list        = toset([for mapping in var.account_assignments : mapping.principal_name if mapping.principal_type == "GROUP"])
  user_list         = toset([for mapping in var.account_assignments : mapping.principal_name if mapping.principal_type == "USER"])
}

resource "aws_ssoadmin_account_assignment" "this" {
  for_each           = local.assignment_map
  instance_arn       = local.sso_instance_arn
  permission_set_arn = each.value.permission_set_arn
  principal_id       = each.value.principal_type == "GROUP" ? data.aws_identitystore_group.this[each.value.principal_name].id : data.aws_identitystore_user.this[each.value.principal_name].id
  principal_type     = each.value.principal_type
  target_id          = aws_organizations_account.account.id
  target_type        = "AWS_ACCOUNT"
}
main.tf (root)
module "sso_account_assignments" {
  source = "./modules/aws"
  account_assignments = [
    {
      permission_set_arn  = "arn:aws:sso:::permissionSet/ssoins-0000000000000000/ps-31d20e5987f0ce66",
      permission_set_name = "ReadOnlyAccess",
      principal_type      = "GROUP",
      principal_name      = "Administrators"
    },
    {
      permission_set_arn  = "arn:aws:sso:::permissionSet/ssoins-0000000000000000/ps-955c264e8f20fea3",
      permission_set_name = "ReadOnlyAccess",
      principal_type      = "GROUP",
      principal_name      = "Developers"
    },
    {
      permission_set_arn  = "arn:aws:sso:::permissionSet/ssoins-0000000000000000/ps-31d20e5987f0ce66",
      permission_set_name = "ReadOnlyAccess",
      principal_type      = "GROUP",
      principal_name      = "Developers"
    },
  ]
}
The important thing about a map for for_each is that all of the keys must be made only of values that Terraform can "see" during the planning step.
You defined local.assignment_map this way in your example:
assignment_map = {
  for a in var.account_assignments :
  format("%v-%v-%v-%v", aws_organizations_account.account.id, substr(a.principal_type, 0, 1), a.principal_name, a.permission_set_name) => a
}
I'm not personally familiar with the aws_organizations_account resource type, but I'm guessing that aws_organizations_account.account.id is an attribute whose value gets decided by the remote system during the apply step (once the object is created) and so this isn't a suitable value to use as part of a for_each map key.
If so, I think the best path forward here is to use a different attribute of the resource that is defined statically in your configuration. If var.aws_account_name has a static value defined in your configuration (that is, it isn't derived from an apply-time attribute of another resource) then it might work to use the name attribute instead of the id attribute:
assignment_map = {
  for a in var.account_assignments :
  format("%v-%v-%v-%v", aws_organizations_account.account.name, substr(a.principal_type, 0, 1), a.principal_name, a.permission_set_name) => a
}
Another option would be to remove the organization reference from the key altogether. From what you've shared it seems like there is only one account and so all of these keys would end up starting with exactly the same account name anyway, and so that string isn't contributing to the uniqueness of those keys. If that's true then you could drop that part of the key and just keep the other parts as the unique key:
assignment_map = {
  for a in var.account_assignments :
  format(
    "%v-%v-%v",
    substr(a.principal_type, 0, 1),
    a.principal_name,
    a.permission_set_name,
  ) => a
}

Iterate over list of maps

I'm trying to iterate over a simple list of maps. Here's a segment of what my module code looks like:
resource "helm_release" "nginx-external" {
  count      = var.install_ingress_nginx_chart ? 1 : 0
  name       = "nginx-external"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = var.nginx_external_version
  namespace  = "default"
  lint       = true
  values = [
    "${file("chart_values/nginx-external.yaml")}"
  ]
  dynamic "set" {
    for_each = { for o in var.nginx_external_overrides : o.name => o }
    content {
      name  = each.value.name
      value = each.value.value
    }
  }
}
variable "nginx_external_overrides" {
  description = "A map of maps to override customizations from the default chart/values file."
  type        = any
}
And here's a snippet of how I'm trying to call it from terragrunt:
nginx_external_overrides = [
  { name = "controller.metrics.enabled", value = "false" }
]
When trying to use this in a dynamic block, I'm getting:
Error: each.value cannot be used in this context
A reference to "each.value" has been used in a context in which it
unavailable, such as when the configuration no longer contains the value in
its "for_each" expression. Remove this reference to each.value in your
configuration to work around this error.
Ideally, I would be able to pass any number of maps in nginx_external_overrides to override the settings in the yaml being passed, but I am struggling to do so. Thanks for the help.
If you are using for_each in a dynamic block, you can't use each. Instead, the iterator takes the name of the block label, which in your case is set:
dynamic "set" {
  for_each = { for o in var.nginx_external_overrides : o.name => o }
  content {
    name  = set.value.name
    value = set.value.value
  }
}
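If reusing the block label as the iterator name feels unclear, dynamic blocks also accept an iterator argument that lets you pick a different symbol; a sketch of the same block with a custom iterator name:

```hcl
dynamic "set" {
  for_each = { for o in var.nginx_external_overrides : o.name => o }
  iterator = override # refer to override.value instead of set.value
  content {
    name  = override.value.name
    value = override.value.value
  }
}
```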

How to loop through locals and list at the same time to generate resources

I have the following tf file:
locals {
  schemas = {
    "ODS" = {
      usage_roles = ["TRANSFORMER"]
    }
    "EXT" = {
      usage_roles = []
    }
    "INT" = {
      usage_roles = ["REPORTER"]
    }
    "DW" = {
      usage_roles = ["LOADER"]
    }
  }
}

resource "snowflake_schema" "schema" {
  for_each    = local.schemas
  name        = each.key
  database    = ???????
  usage_roles = each.value.usage_roles
}
I want to maintain the locals as they are (different usage_roles for each schema, hardcoded here) while using several values as the database for each schema. In pseudo-code it would be:
for database in ['db_1', 'db_2', 'db_3']:
  resource "snowflake_schema" "schema" {
    for_each    = local.schemas
    name        = each.key
    database    = database
    usage_roles = each.value.usage_roles
  }
So that we have the same schema resources in the three different databases. I have read some articles suggesting this kind of loop is possible by pre-assigning all the values, meaning I would have to put usage_roles in a list or something instead of hardcoding it here in locals, which I think is less readable. For instance:
Terraform - how to use for_each loop on a list of objects to create resources
Is what I am asking for even possible? If so, how? Thank you very much in advance.
The main requirement for for_each is that the map you provide must have one element per instance of the resource you want to create. In your case, I think that means you need a map with one element per every combination of database and schema.
The operation of finding every combination of values in two sets is formally known as the cartesian product, and Terraform has the setproduct function to perform that operation. In your case, the two sets to apply it to are the set of database names and the set of keys in your schemas map, like this:
locals {
  databases = toset(["db_1", "db_2", "db_3"])
  database_schemas = [
    for pair in setproduct(local.databases, keys(local.schemas)) : {
      database_name = pair[0]
      schema_name   = pair[1]
      usage_roles   = local.schemas[pair[1]].usage_roles
    }
  ]
}
The local.database_schemas value would then contain an object for each combination, like this:
[
  {
    database_name = "db_1"
    schema_name   = "ODS"
    usage_roles   = ["TRANSFORMER"]
  },
  {
    database_name = "db_1"
    schema_name   = "EXT"
    usage_roles   = []
  },
  # ...
  {
    database_name = "db_2"
    schema_name   = "ODS"
    usage_roles   = ["TRANSFORMER"]
  },
  {
    database_name = "db_2"
    schema_name   = "EXT"
    usage_roles   = []
  },
  # ...
  {
    database_name = "db_3"
    schema_name   = "ODS"
    usage_roles   = ["TRANSFORMER"]
  },
  {
    database_name = "db_3"
    schema_name   = "EXT"
    usage_roles   = []
  },
  # ...
]
This meets the requirement of having one element per instance you want to create, but we still need to convert it to a map with a unique key per element to give Terraform a unique tracking key for each instance. We can do that with one more for projection in the for_each argument:
resource "snowflake_schema" "schema" {
  for_each = {
    for s in local.database_schemas :
    "${s.database_name}:${s.schema_name}" => s
  }
  name        = each.value.schema_name
  database    = each.value.database_name
  usage_roles = each.value.usage_roles
}
Terraform will track these instances with addresses like this:
snowflake_schema.schema["db_1:ODS"]
snowflake_schema.schema["db_1:EXT"]
...
snowflake_schema.schema["db_2:ODS"]
snowflake_schema.schema["db_2:EXT"]
...
snowflake_schema.schema["db_3:ODS"]
snowflake_schema.schema["db_3:EXT"]
...

map list of maps to a list of selected field values in terraform

If resources use a count parameter to create multiple instances, Terraform has a simple syntax for getting a list of a given attribute across all of the instances,
for example
aws_subnet.foo.*.id
For quite a few versions now it has been possible to declare variables with a complex structure, for example lists of maps.
variable "data" {
  type = "list"
  default = [
    {
      id = "1"
      ...
    },
    {
      id = "10"
      ...
    }
  ]
}
I'm looking for a way to do for variables what I can do for multi resources: a projection of an array to an array of field values of the array elements.
Unfortunately
var.data.*.id
does not work the way it does for resources. Is there any way to do this?
UPDATE
Many powerful features have been added to Terraform since 0.12 was released, e.g. for expressions (list comprehensions), which make the solution super easy.
locals {
  ids = [for d in var.data : d.id]
  #ids = [for d in var.data : d["id"]] # same
}
# Then you could get the elements this way:
# local.ids[0]
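Terraform 0.12 also generalized the splat operator, so for a list of objects the bracketed splat form works directly on variables as well, not just on resources:

```hcl
locals {
  ids = var.data[*].id # equivalent to the for expression above
}
```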
Solution before terraform 0.12
template_file can help you out.
data "template_file" "data_id" {
  count    = "${length(var.data)}"
  template = "${lookup(var.data[count.index], "id")}"
}
Then you get a list, "${data.template_file.data_id.*.rendered}", whose elements are the values of "id".
You can get its element by index like this
"${data.template_file.data_id.*.rendered[0]}"
or through function element()
"${element(data.template_file.data_id.*.rendered, 0)}"
NOTE: This answer and its associated question are very old at this point, and this answer is now totally stale. I'm leaving it here for historical reference, but nothing here is true of modern Terraform.
At the time of writing, Terraform doesn't have a generalized projection feature in its interpolation language. The "splat syntax" is implemented as a special case for resources.
While deep structure is possible, it is not yet convenient to use, so it's recommended to still keep things relatively flat. In future it is likely that new language features will be added to make this sort of thing more usable.
I have found a working solution using template rendering to bypass the list-of-maps issue:
resource "aws_instance" "k8s_master" {
  count                       = "${var.master_count}"
  ami                         = "${var.ami}"
  instance_type               = "${var.instance_type}"
  vpc_security_group_ids      = ["${aws_security_group.k8s_sg.id}"]
  associate_public_ip_address = false
  subnet_id                   = "${element(var.subnet_ids, count.index % length(var.subnet_ids))}"
  user_data                   = "${file("${path.root}/files/user_data.sh")}"
  iam_instance_profile        = "${aws_iam_instance_profile.master_profile.name}"
  tags = "${merge(
    local.k8s_tags,
    map(
      "Name", "k8s-master-${count.index}",
      "Environment", "${var.environment}"
    )
  )}"
}

data "template_file" "k8s_master_names" {
  count    = "${var.master_count}"
  template = "${lookup(aws_instance.k8s_master.*.tags[count.index], "Name")}"
}

output "k8s_master_name" {
  value = [
    "${data.template_file.k8s_master_names.*.rendered}",
  ]
}
This will result in the following output:
k8s_master_name = [
  k8s-master-0,
  k8s-master-1,
  k8s-master-2
]
A potentially simpler answer is to use the zipmap function.
Starting with an environment variable map compatible with ECS template definitions:
locals {
  shared_env = [
    {
      name  = "DB_CHECK_NAME"
      value = "postgres"
    },
    {
      name  = "DB_CONNECT_TIMEOUT"
      value = "5"
    },
    {
      name  = "DB_DOCKER_HOST_PORT"
      value = "35432"
    },
    {
      name  = "DB_DOCKER_HOST"
      value = "localhost"
    },
    {
      name  = "DB_HOST"
      value = "my-db-host"
    },
    {
      name  = "DB_NAME"
      value = "my-db-name"
    },
    {
      name  = "DB_PASSWORD"
      value = "XXXXXXXX"
    },
    {
      name  = "DB_PORT"
      value = "5432"
    },
    {
      name  = "DB_QUERY_TIMEOUT"
      value = "30"
    },
    {
      name  = "DB_UPGRADE_TIMEOUT"
      value = "300"
    },
    {
      name  = "DB_USER"
      value = "root"
    },
    {
      name  = "REDIS_DOCKER_HOST_PORT"
      value = "6380"
    },
    {
      name  = "REDIS_HOST"
      value = "my-redis"
    },
    {
      name  = "REDIS_PORT"
      value = "6379"
    },
    {
      name  = "SCHEMA_SCRIPTS_PATH"
      value = "db-scripts"
    },
    {
      name  = "USE_LOCAL"
      value = "false"
    }
  ]
}
In the same folder, launch terraform console to test built-in functions. You may need to run terraform init first if you haven't already.
terraform console
Inside the console type:
zipmap([for m in local.shared_env: m.name], [for m in local.shared_env: m.value])
Observe the output: each list-item map becomes a name/value pair in a single map:
{
  "DB_CHECK_NAME"          = "postgres"
  "DB_CONNECT_TIMEOUT"     = "5"
  "DB_DOCKER_HOST"         = "localhost"
  "DB_DOCKER_HOST_PORT"    = "35432"
  "DB_HOST"                = "my-db-host"
  "DB_NAME"                = "my-db-name"
  "DB_PASSWORD"            = "XXXXXXXX"
  "DB_PORT"                = "5432"
  "DB_QUERY_TIMEOUT"       = "30"
  "DB_UPGRADE_TIMEOUT"     = "300"
  "DB_USER"                = "root"
  "REDIS_DOCKER_HOST_PORT" = "6380"
  "REDIS_HOST"             = "my-redis"
  "REDIS_PORT"             = "6379"
  "SCHEMA_SCRIPTS_PATH"    = "db-scripts"
  "USE_LOCAL"              = "false"
}