Retrieving elements from a list of objects based on criteria - terraform

I have a data source that returns a list of objects containing id, name, and type.
data "data_source" "some_source" {
filter = ["env:a"]
...
}
I have another resource that requires a set of IDs:
resource "another_resource" "bar" {
...
set_of_ids = [for i in data.data_source.some_source.objects : i.id]
...
}
Now what I require is to take only the IDs of the objects whose type is, e.g., live or pending. Is there a way I can incorporate this requirement inside [for i in data.data_source.some_source.objects : i.id]?
I am using Terraform v1.2.3.

I would say the answer would be different depending on the number of values you want to check for. If it is only live and pending, you could use the solution suggested in the comments (h/t: Matt Schuchard):
set_of_ids = [ for i in data.data_source.some_source.objects :
i.id if (i.type == "live" || i.type == "pending")
]
Alternatively, if there will be more than two values, you could create a local variable of type list(string) and assign all the values that are acceptable:
locals {
acceptable_types = ["live", "pending"]
}
Then, in the for loop, you could do the following:
set_of_ids = [ for i in data.data_source.some_source.objects :
i.id if contains(local.acceptable_types, i.type)
]
Here you would use the contains built-in function [1] to check if the type is inside of the acceptable/allowed types list.
[1] https://www.terraform.io/language/functions/contains
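For a quick sanity check of the contains() filter in terraform console (with hypothetical objects standing in for the data source result):
> [for i in [{ id = "1", type = "live" }, { id = "2", type = "retired" }] : i.id if contains(["live", "pending"], i.type)]
[
  "1",
]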

How to combine and sort key-value pairs in Terraform

Since the last update of the LogicMonitor provider in Terraform we're struggling with a sorting issue.
In LogicMonitor the properties of a device are name-value pairs, and they are presented alphabetically by name. Also in API requests the result is alphabetical. So far nothing fancy.
But... We build our Cloud devices using a module. Calling the module, we provide some LogicMonitor properties specifically for this device, and a lot more are provided in the module itself.
In the module this looks like this:
custom_properties = concat([
{
name = "host_fqdn"
value = "${var.name}.${var.dns_domain}"
},
{
name = "ocid"
value = oci_core_instance.server.id
},
{
name = "private_ip"
value = oci_core_instance.server.private_ip
},
{
name = "snmp.version"
value = "v2c"
}
],
var.logicmonitor_properties)
The first 4 properties are from the module and are combined with anything that is in var.logicmonitor_properties. On creation of the device in LogicMonitor, all properties are set in the order they are given, and there is no problem.
The issue arises when there is any update on a Terraform file in this environment. Because the properties are presented in alphabetical order, Terraform shows a lot of changes (which are in fact just reordering due to sorting).
The big question is: how can I sort the complete list of properties based on the "name"?
I tried to work with maps, sort, and several other functions and examples, but got nothing working on key-value pairs. Merging single keys works fine in a map, but how do I deal with name/value pairs?
I think you were on the right track with maps and sorting. Terraform maps do not preserve any explicit ordering themselves, and so whenever Terraform needs to iterate over the elements of a map in some explicit sequence it always does so by sorting the keys lexically (by Unicode codepoints) first.
Therefore one answer is to project this into a map and then project it back into a list of objects again. The projection back into a list of objects will implicitly sort the map elements by their keys, which I think will get the effect you wanted.
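You can see that implicit key ordering in terraform console; the map literal here is just a stand-in:
> keys({ b = "2", a = "1", "snmp.version" = "3" })
[
  "a",
  "b",
  "snmp.version",
]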
variable "logicmonitor_properties" {
type = list(object({
name = string
value = string
}))
}
locals {
base_properties = tomap({
host_fqdn = "${var.name}.${var.dns_domain}"
ocid = oci_core_instance.server.id
private_ip = oci_core_instance.server.private_ip
"snmp.version" = "v2c"
})
extra_properties = tomap({
for prop in var.logicmonitor_properties : prop.name => prop.value
})
final_properties = merge(local.base_properties, local.extra_properties)
# This final step will implicitly sort the final_properties
# map elements by their keys.
final_properties_list = tolist([
for k, v in local.final_properties : {
name = k
value = v
}
])
}
With all of the above, local.final_properties_list should be similar to the custom_properties structure you showed in your question except that the elements of the list will be sorted by their names.
This solution assumes that the property names will be unique across both base_properties and extra_properties. If there are any colliding keys between both of those maps then the merge function will prefer the value from extra_properties, overriding the element of the same key from base_properties.
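As a quick illustration of that merge precedence in terraform console:
> merge({ a = "base", b = "base" }, { a = "override" })
{
  "a" = "override"
  "b" = "base"
}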
First, use the sort() function to sort the keys in alphabetical order:
sorted_keys = sort(keys(var.my_map))
Next, use a for expression to create a new map with the sorted keys and corresponding values (the legacy map() function is no longer available in recent Terraform versions):
sorted_map = { for key in sorted_keys : key => var.my_map[key] }
Finally, you can use the jsonencode() function to print the sorted map in JSON format:
jsonencode(sorted_map)

How to concatenate strings in Terraform output with for loop?

I have multiple aws_glue_catalog_table resources and I want to create a single output that loops over all resources to show the S3 bucket location of each one. The purpose of this is to test in Terratest whether I am using the correct location (because it is a concatenation of variables) for each resource. I cannot use aws_glue_catalog_table.* or aws_glue_catalog_table.[] because Terraform does not allow referencing a resource without specifying its name.
So I created a variable "table_names" with r1, r2, rx. Then, I can loop over the names. I want to create the string aws_glue_catalog_table.r1.storage_descriptor[0].location dynamically, so I can check if the location is correct.
resource "aws_glue_catalog_table" "r1" {
name = "r1"
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/r1"
}
...
}
resource "aws_glue_catalog_table" "rX" {
name = "rX"
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/rX"
}
}
variable "table_names" {
description = "The list of Athena table names"
type = list(string)
default = ["r1", "r2", "r3", "rx"]
}
output "athena_tables" {
description = "Athena tables"
value = [for n in var.table_names : n]
}
First attempt: I tried to create an output "athena_tables_location" with the syntax aws_glue_catalog_table.${table}, but that does not work.
output "athena_tables_location" {
// HOW DO I ITERATE OVER ALL TABLES?
value = [for t in var.table_names : aws_glue_catalog_table.${t}.storage_descriptor[0].location]
}
Second attempt: I tried to create a variable "table_name_locations", but IntelliJ already shows an error for ${t} in the for loop [for t in var.table_names : "aws_glue_catalog_table.${t}.storage_descriptor[0].location"].
variable "table_name_locations" {
description = "The list of Athena table locations"
type = list(string)
// THIS ALSO DOES NOT WORK
default = [for t in var.table_names : "aws_glue_catalog_table.${t}.storage_descriptor[0].location"]
}
How can I list all table locations in the output and then test it with Terratest?
Once I can iterate over the tables and collect the S3 location I can do the following test using Terratest:
athenaTablesLocation := terraform.Output(t, terraformOpts, "athena_tables_location")
assert.Contains(t, athenaTablesLocation, "s3://rX/test-config/rX")
It seems like you have an unusual mix of static and dynamic here: you've statically defined a fixed number of aws_glue_catalog_table resources but you want to use them dynamically based on the value of an input variable.
Terraform doesn't allow dynamic references to resources because its execution model requires building a dependency graph between all of the objects, and so it needs to know which exact resources are involved in a particular expression. However, you can in principle build your own single value that includes all of these objects and then dynamically choose from it:
locals {
tables = {
r1 = aws_glue_catalog_table.r1
r2 = aws_glue_catalog_table.r2
r3 = aws_glue_catalog_table.r3
# etc
}
}
output "table_locations" {
value = {
for t in var.table_names : t => local.tables[t].storage_descriptor[0].location
}
}
With this structure Terraform can see that output "table_locations" depends on local.tables and local.tables depends on all of the relevant resources, and so the evaluation order will be correct.
However, it also seems like your table definitions are systematic based on var.table_names and so could potentially benefit from being dynamic themselves. You could achieve that using the resource for_each feature to declare multiple instances of a single resource:
variable "table_names" {
description = "Athena table names to create"
type = set(string)
default = ["r1", "r2", "r3", "rx"]
}
resource "aws_glue_catalog_table" "all" {
for_each = var.table_names
name = each.key
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/${each.key}"
}
...
}
output "table_locations" {
value = {
for k, t in aws_glue_catalog_table.all : k => t.storage_descriptor[0].location
}
}
In this case aws_glue_catalog_table.all represents all of the tables together as a single resource with multiple instances, each one identified by the table name. for_each resources appear in expressions as maps, so this will declare resource instances with addresses like this:
aws_glue_catalog_table.all["r1"]
aws_glue_catalog_table.all["r2"]
aws_glue_catalog_table.all["r3"]
...
Because this is already a map, this time we don't need the extra step of constructing the map in a local value, and can instead just access this map directly to build the output value, which will be a map from table name to storage location:
{
r1 = "s3://BUCKETNAME/ENVNAME-config/r1"
r2 = "s3://BUCKETNAME/ENVNAME-config/r2"
r3 = "s3://BUCKETNAME/ENVNAME-config/r3"
# ...
}
In this example I've assumed that all of the tables are identical aside from their names, which I expect isn't true in practice, but I was going only by what you included in the question. If the tables do need different settings then you can change var.table_names into a variable "tables" whose type is a map of object type, where the values describe the differences between the tables, but that's a different topic somewhat beyond the scope of this question, so I won't get into the details of it here.
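As a rough, hedged sketch of that map-of-objects shape (the s3_prefix attribute is purely illustrative, not taken from the question):
variable "tables" {
  type = map(object({
    s3_prefix = string # hypothetical per-table setting
  }))
  default = {
    r1 = { s3_prefix = "r1" }
    r2 = { s3_prefix = "r2" }
  }
}
resource "aws_glue_catalog_table" "all" {
  for_each      = var.tables
  name          = each.key
  database_name = var.db_name
  storage_descriptor {
    location = "s3://${var.bucket_name}/${var.environment}-config/${each.value.s3_prefix}"
  }
}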

How to extract data as a list from a set with dynamic keys in Terraform

I have the below set in Terraform (which is extracted from a data module):
{
key1 {
id = "5"
name = "A"
},
key2 {
id = "6"
name = "A"
}
}
The keys are dynamic, and there will be n number of them, with any value.
How can I get the below result? Please notice it's a list of strings:
[
"5",
"6"
]
I tried the below, but it says Unsupported attribute:
output "email_channels_keys" {
value = var.emails.*.id
}
We can use a for expression to iterate over the map of objects and extract the value for the id key in the object within each map entry:
output "email_channels_keys" {
value = [ for key, value in data.data_name.block_name.attribute : value.id ]
}
We use a list constructor to instantiate the type as a list. We then iterate over the map and store the string key in the temporary scope variable key, and the object value in the temporary scope variable value. We then access the value of the id key within the object with the normal usage of .id (["id"] is also valid syntax, but conventionally map values are accessed with ["<key>"] and object values with .<key> syntax). The returned value is assigned to email_channels_keys in your outputs.
Note that for your specific use case, you will need to update the data namespace to match the data source you referenced at the beginning of the question, and you may want to rename the key and value variables to something more specific.
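For a self-contained check, here is a minimal sketch with a hypothetical local standing in for the data source:
locals {
  emails = {
    key1 = { id = "5", name = "A" }
    key2 = { id = "6", name = "A" }
  }
}
output "email_channels_keys" {
  # Yields ["5", "6"]; map elements iterate in lexical key order.
  value = [for key, value in local.emails : value.id]
}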

Any way to conditionalize variable in jsonencoded data?

Say I have the following simplified snippet to create a task definition as JSON.
...
task_container_definitions = jsonencode([{
name : var.name,
image : "${var.image}:${var.tag}",
cpu : var.cpu,
memory : var.memory,
}])
...
Say I want to add a variable to optionally create an additional definition so it looks something like this:
variable "another_definition" {
type = any
default = {}
}
...
task_container_definitions = jsonencode([{
name : var.name,
image : "${var.image}:${var.tag}",
cpu : var.cpu,
memory : var.memory,
},
var.another_definition
])
And define it as follows.
another_definition = {
name = "another_container"
image = "another_container"
cpu = 10
memory = 512
essential = true
}
I am able to get this to output as expected as long as the variable is defined.
...
+ {
+ cpu = 10
+ essential = true
+ image = "another_container"
+ memory = 512
+ name = "another_container"
},
But if the variable is not defined, I see an empty {} added to the output when I do a terraform plan, which is not what I expect. I have also tried using null as the default, but I get an error.
...
+ {},
Is there a way to toggle this variable off so that if it is not defined then it doesn't show up in the outputted json definition? Is there a better approach than what I am attempting?
I was a little confused at first as to what you were asking, thinking you were asking for the functionality of the merge function; I mention that only in case I was right the first time. But I now understand your problem to be that you want task_container_definitions to have either one or two elements, depending on whether var.another_definition is set.
There's no single function for that particular situation, but I think we can combine some language features together to get that result.
First, let's decide that the variable being set means that it has a non-null value, and thus its default value should be null to represent the "unset" case:
variable "another_definition" {
type = any
default = null
validation {
# The type constraint above is looser than we really
# want, so this validation rule also enforces that
# the caller can't set this to something inappropriate,
# like a single string or a list.
condition = (
var.another_definition != null ?
can(keys(var.another_definition)) :
true
)
error_message = "Additional task container definition must be an object."
}
}
In Terraform it's a pretty common situation to need to convert between a value that might be null and a list that might have zero or one elements, or vice-versa, and so Terraform has some language features to help with that. In this case we can use a splat expression to concisely represent that. Let's see how that looks in terraform console first just to give a sense of what we're achieving with this:
$ terraform console
> null[*]
[]
> "hello"[*]
[
"hello",
]
> { object = "example" }[*]
[
{
"object" = "example"
},
]
Notice that when I applied the [*] operator to null it returned an empty list, but when I applied it to these other values it converted them to a single-element list. This is how the [*] operator behaves when you apply it to something that isn't a list; see the splat operator docs if you want to learn about the different behavior for lists, which isn't really relevant here because of the validation rule I added above which prevents the var.another_definition value from being a list.
Another tool we have in our Terraform toolbox here is the concat function, which takes one or more lists and returns a single list with the input elements all concatenated together in the given order. We can use this to combine your predefined list that's populated from var.name, var.cpu, etc with the zero-or-one element list created by [*], in order to create a list with their one or two elements:
locals {
  task_container_definitions = concat(
    [
      {
        name   = var.name
        image  = "${var.image}:${var.tag}"
        cpu    = var.cpu
        memory = var.memory
      },
    ],
    var.another_definition[*],
  )
  task_container_definitions_json = jsonencode(local.task_container_definitions)
}
If any of the arguments to concat are empty lists then they are effectively ignored altogether, because they contribute no elements to the result, and so this achieves (what I hope is) the desired result, by making the "other definition" appear in the result only when it's set to something other than null.
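You can see both cases in terraform console (a minimal sketch with placeholder objects):
> concat([{ name = "main" }], null[*])
[
  {
    "name" = "main"
  },
]
> concat([{ name = "main" }], { name = "another_container" }[*])
[
  {
    "name" = "main"
  },
  {
    "name" = "another_container"
  },
]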

Preserve list ordering when creating maps in Terraform 0.12?

I have the following snippets in my configuration - the idea is to change current logic/syntax from 0.11 to 0.12. First, I am creating a map from lists,
my_vars = zipmap(
var.foo_vars,
flatten(data.terraform_remote_state.foo.*.outputs.some_id)
)
Then I iterate over it to produce some key-value pairs.
...
"var": [for key in keys(local.my_vars) :
{
name = key
value = lookup(local.my_vars, key)
}
],
...
And here is the relevant tfvars configuration.
foo_vars = [
"A",
"B",
"C"
]
The problem is that this logic doesn't seem to preserve order and I can't figure out a good way to make this happen. From what I understand, once you turn the lists into a map with zipmap, the order is recalculated. Is there anything that can be done to have the original order preserved?
I'm not tied to the current solution, so maybe there is a way to generate the key/values that doesn't require a map to be created first and can be done instead with only the two lists?
~ foo = [
{
name = "A"
value = "1"
},
- {
- name = "B"
- value = "2"
},
{
name = "C"
value = "3"
},
+ {
+ name = "B"
+ valueFrom = "2"
},
]
The important thing here is that, as you've noticed, Terraform's map type is an unordered map which identifies elements only by their keys, not by position. Therefore if you have a situation where you need to preserve the order of a sequence then a map is not a suitable data structure to use.
I have a suspicion that keeping things ordered may not actually be necessary to solve your underlying problem here, but I can't tell from the information you've shared what the real-world meaning of all of these values is, so I'm going to answer on the assumption that you do need to preserve the order. If you are working with ordered sequences only because you are creating multiple instances of a resource using count, I'd suggest that you consider using resource for_each instead, which may allow you to solve your underlying problem in a way that is not sensitive to the order of items in var.foo_vars.
Given two lists of the same length, you can produce a new list that combines the corresponding elements from each list by writing a for expression like this:
locals {
my_vars = [
for i, some_id in data.terraform_remote_state.foo.*.outputs.some_id : {
name = var.foo_vars[i]
value = some_id
}
]
}
The above relies on the fact that i index values from one list are correlated with the element of the same index in the other list, and so we can use the i from the data source instances to access the corresponding element of var.foo_vars.
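A quick illustration in terraform console, with literal lists standing in for var.foo_vars and the remote state outputs:
> [for i, id in ["1", "2", "3"] : { name = ["A", "B", "C"][i], value = id }]
[
  {
    "name" = "A"
    "value" = "1"
  },
  {
    "name" = "B"
    "value" = "2"
  },
  {
    "name" = "C"
    "value" = "3"
  },
]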
