Preserve list ordering when creating maps in Terraform 0.12? - terraform

I have the following snippets in my configuration - the idea is to change current logic/syntax from 0.11 to 0.12. First, I am creating a map from lists,
my_vars = zipmap(
  var.foo_vars,
  flatten(data.terraform_remote_state.foo.*.outputs.some_id)
)
Then I iterate over it to produce some key/value pairs.
...
"var": [for key in keys(local.my_vars) :
{
name = key
value = lookup(local.my_vars, key)
}
],
...
And here is the relevant tfvars configuration.
foo_vars = [
  "A",
  "B",
  "C"
]
The problem is that this logic doesn't seem to preserve order, and I can't figure out a good way to make that happen. From what I understand, once you turn the lists into a map with zipmap, the order is recalculated. Is there anything that can be done to preserve the original order?
I'm not tied to the current solution, so maybe there is a way to generate the key/values that doesn't require a map to be created first and can be done instead with only the two lists?
~ foo = [
      {
        name  = "A"
        value = "1"
      },
    - {
    -   name  = "B"
    -   value = "2"
    - },
      {
        name  = "C"
        value = "3"
      },
    + {
    +   name      = "B"
    +   valueFrom = "2"
    + },
  ]

The important thing here is that, as you've noticed, Terraform's map type is an unordered map which identifies elements only by their keys, not by position. Therefore if you have a situation where you need to preserve the order of a sequence then a map is not a suitable data structure to use.
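You can see this in terraform console: whenever Terraform renders a map it sorts the elements by key, regardless of the order in which you supplied them (illustrative values):
$ terraform console
> zipmap(["C", "A", "B"], [3, 1, 2])
{
  "A" = 1
  "B" = 2
  "C" = 3
}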
I have a suspicion that keeping things ordered may not actually be necessary to solve your underlying problem here, but I can't tell from the information you've shared what the real-world meaning of all of these values is, so I'm going to answer on the assumption that you do need to preserve the order. If you are working with ordered sequences only because you are creating multiple instances of a resource using count, I'd suggest that you consider using resource for_each instead, which may allow you to solve your underlying problem in a way that is not sensitive to the order of items in var.foo_vars.
Given two lists of the same length, you can produce a new list that combines the corresponding elements from each list by writing a for expression like this:
locals {
  my_vars = [
    for i, some_id in data.terraform_remote_state.foo.*.outputs.some_id : {
      name  = var.foo_vars[i]
      value = some_id
    }
  ]
}
The above relies on the fact that the index i of each element in one list corresponds to the element at the same index in the other list, and so we can use the i from the data source instances to access the corresponding element of var.foo_vars.
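For example, if the remote state objects hypothetically produced the ids ["1", "2", "3"], local.my_vars would be the following list, whose element order follows the two input lists:
my_vars = [
  { name = "A", value = "1" },
  { name = "B", value = "2" },
  { name = "C", value = "3" },
]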


How to combine and sort key-value pairs in Terraform

Since the last update of the LogicMonitor provider in Terraform we've been struggling with a sorting issue.
In LogicMonitor the properties of a device are name/value pairs, and they are presented alphabetically by name. API requests also return them in alphabetical order. So far nothing fancy.
But... We build our Cloud devices using a module. When calling the module we provide some LogicMonitor properties specific to this device, and a lot more are provided in the module itself.
In the module this looks like this:
custom_properties = concat([
  {
    name  = "host_fqdn"
    value = "${var.name}.${var.dns_domain}"
  },
  {
    name  = "ocid"
    value = oci_core_instance.server.id
  },
  {
    name  = "private_ip"
    value = oci_core_instance.server.private_ip
  },
  {
    name  = "snmp.version"
    value = "v2c"
  }
],
var.logicmonitor_properties)
The first 4 properties come from the module and are combined with anything in var.logicmonitor_properties. On creation of the device in LogicMonitor, all properties are set in the order they are given, and there is no problem.
The issue arises when there is any update to a Terraform file in this environment. Because the properties are presented in alphabetical order, Terraform shows a lot of changes it finds (which are in fact just reordering due to sorting).
The big question is: how can I sort the complete list of properties based on the "name"?
I tried to work with maps, sort and several other functions and examples, but got nothing working on key/value pairs. Merging single keys works fine in a map, but how do I deal with name/value pairs?
I think you were on the right track with maps and sorting. Terraform maps do not preserve any explicit ordering themselves, and so whenever Terraform needs to iterate over the elements of a map in some explicit sequence it always does so by sorting the keys lexically (by Unicode codepoints) first.
Therefore one answer is to project this into a map and then project it back into a list of objects again. The projection back into list of objects will implicitly sort the map elements by their keys, which I think will get the effect you wanted.
variable "logicmonitor_properties" {
type = list(object({
name = string
value = string
}))
}
locals {
base_properties = tomap({
host_fqdn = "${var.name}.${var.dns_domain}"
ocid = oci_core_instance.server.id
private_ip = oci_core_instance.server.private_ip
"snmp.version" = "v2c"
})
extra_properties = tomap({
for prop in var.logicmonitor_properties : prop.name => prop.value
})
final_properties = merge(local.base_properties, local.extra_properties)
# This final step will implicitly sort the final_properties
# map elements by their keys.
final_properties_list = tolist([
for k, v in local.final_properties : {
name = k
value = v
}
])
}
With all of the above, local.final_properties_list should be similar to the custom_properties structure you showed in your question except that the elements of the list will be sorted by their names.
This solution assumes that the property names will be unique across both base_properties and extra_properties. If there are any colliding keys between both of those maps then the merge function will prefer the value from extra_properties, overriding the element of the same key from base_properties.
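You can see that precedence in terraform console with illustrative values; the key present in both maps takes its value from the second argument:
> merge({ host_fqdn = "a.example.com", "snmp.version" = "v2c" }, { "snmp.version" = "v3" })
{
  "host_fqdn" = "a.example.com"
  "snmp.version" = "v3"
}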
First, use the sort() function to sort the keys in alphabetical order:
sorted_keys = sort(keys(var.my_map))
Next, build a new map from the sorted keys and their corresponding values with a for expression (Terraform has no map() function that accepts a key => value lambda; a for expression is the idiomatic way to do this):
sorted_map = { for key in local.sorted_keys : key => var.my_map[key] }
Finally, you can use the jsonencode() function to print the sorted map in JSON format:
jsonencode(local.sorted_map)
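With a hypothetical my_map = { b = "2", a = "1" }, those steps evaluate as follows:
local.sorted_keys            # => ["a", "b"]
local.sorted_map             # => { a = "1", b = "2" }
jsonencode(local.sorted_map) # => {"a":"1","b":"2"}
Note that jsonencode serializes map keys in this same lexical order, so the encoded result is stable between runs.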

Retrieving elements from a list of objects based on a criteria

I have a data source that returns a list of objects containing id, name, type.
data " data_source" "some_source" {
filter = ["env:a"]
...
}
I have another resource that requires a set of ids.
resource "another_rerouce" "bar" {
...
set_of_ids = [for i in data.data_source.some_source.objects : i.id]
...
}
Now what I require is to take only the ids of the objects whose type is, e.g., live or pending. Is there a way I can incorporate this requirement inside [for i in data.data_source.some_source.objects : i.id]?
I am using Terraform v1.2.3.
The answer depends on the number of values you want to check for. If it is only live and pending, you could use the solution suggested in the comments (h/t: Matt Schuchard):
set_of_ids = [
  for i in data.data_source.some_source.objects :
  i.id if i.type == "live" || i.type == "pending"
]
Alternatively, if there will be more than two values, you could create a local variable of type list(string) and assign all the values that are acceptable:
locals {
  acceptable_types = ["live", "pending"]
}
Then, in the for loop, you could do the following:
set_of_ids = [
  for i in data.data_source.some_source.objects :
  i.id if contains(local.acceptable_types, i.type)
]
Here you would use the contains built-in function [1] to check if the type is inside of the acceptable/allowed types list.
[1] https://www.terraform.io/language/functions/contains
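For reference, contains just reports list membership, as a quick terraform console check shows:
> contains(["live", "pending"], "live")
true
> contains(["live", "pending"], "archived")
false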

Terraform: Creating maps with matching key fails with "duplicate object keys"

I am trying to create a map of secondary ranges for the GCP VPC module here and have the following defined in my locals:
secondary_ranges = {
  for name, config in var.subnet_config : config.subnet_name => [
    {
      range_name    = local.ip_range_pods
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.0.0/17"
    },
    {
      range_name    = local.ip_range_services
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.128.0/17"
    }
  ]
}
subnet_config is defined as follows:
subnet_config = {
  cluster1 = {
    region      = "us-east1"
    subnet_name = "default"
  },
  cluster2 = {
    region      = "us-west1"
    subnet_name = "default"
  }
}
This creates the secondary subnets just fine if the subnet names are unique but fails with the error below if the subnet names (which end up being the key values) are not unique:
Two different items produced the key "default" in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.
I'm trying to figure out if I can use grouping mode if the value is a list and if so, how?
Any help would be greatly appreciated.
If you use the grouping mode in this case then it would be to group the outermost for expression, which is producing a map, because that's the one whose keys you'd be grouping by.
We can start by adding the grouping mode modifier to that and see what happens:
secondary_ranges_pairs = {
  for name, config in var.subnet_config : config.subnet_name => [
    {
      range_name    = local.ip_range_pods
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.0.0/17"
    },
    {
      range_name    = local.ip_range_services
      ip_cidr_range = "10.${index(keys(var.subnet_config), name)}.128.0/17"
    }
  ]...
}
The effect of the expression above would be to create a map of lists of lists of objects, where the deepest lists are each pairs of objects because of how your inner for expression is written.
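With the example subnet_config above, where both subnets are named "default", that intermediate value would look roughly like this sketch:
secondary_ranges_pairs = {
  default = [
    [
      { range_name = local.ip_range_pods,     ip_cidr_range = "10.0.0.0/17" },
      { range_name = local.ip_range_services, ip_cidr_range = "10.0.128.0/17" },
    ],
    [
      { range_name = local.ip_range_pods,     ip_cidr_range = "10.1.0.0/17" },
      { range_name = local.ip_range_services, ip_cidr_range = "10.1.128.0/17" },
    ],
  ]
}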
To turn that into the map of lists of objects which I think you're hoping for, you can then use flatten in a separate step:
secondary_ranges = {
  for k, pairs in local.secondary_ranges_pairs : k => flatten(pairs)
}
flatten recursively walks a data structure where there are lists of lists and concatenates all of the nested lists together into a single flat list.
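For example, in terraform console:
> flatten([["a", "b"], ["c"]])
[
  "a",
  "b",
  "c",
]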
A word of caution: you seem to be using a lexical sort of the subnet_config keys in order to derive network numbering. That means that if you add new elements to your var.subnet_config whose keys sort earlier than any existing ones (for example, if you were to add in a cluster0 into what you showed in your question) then you'll implicitly renumber all of the subsequent networks, which is likely to cause a lot of churn recreating objects, and the change might not even be possible if those networks contain other objects.
I'd typically recommend instead being explicit about what number you've assigned to each network, by including them as part of the var.subnet_config objects. You can then clearly see which numbers you've assigned and make sure that any new networks will always be assigned a later number without disturbing any existing assignments.
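A rough sketch of that approach, assuming a hypothetical net_number attribute in each subnet_config object:
subnet_config = {
  cluster1 = {
    region      = "us-east1"
    subnet_name = "default"
    net_number  = 0
  },
  cluster2 = {
    region      = "us-west1"
    subnet_name = "default"
    net_number  = 1
  }
}
The for expression would then build "10.${config.net_number}.0.0/17" and "10.${config.net_number}.128.0/17" instead of deriving the number from index(keys(var.subnet_config), name), so adding a cluster0 later can never renumber existing networks.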
There's also an official Terraform module hashicorp/subnets/cidr which aims to encapsulate subnet numbering calculations. The design of that module means that it wouldn't be completely straightforward to adopt it for your use-case (since you're allocating two levels of subnet at once) but it might be useful to study to see whether any of the design tradeoffs made there are relevant to your module.

Any way to conditionalize variable in jsonencoded data?

Say I have the simplified following snippet to create a task definition as json.
...
task_container_definitions = jsonencode([{
  name : var.name,
  image : "${var.image}:${var.tag}",
  cpu : var.cpu,
  memory : var.memory,
}])
...
Say I want to add a variable to optionally create an additional definition so it looks something like this:
variable "another_definition" {
type = any
default = {}
}
...
task_container_definitions = jsonencode([
  {
    name : var.name,
    image : "${var.image}:${var.tag}",
    cpu : var.cpu,
    memory : var.memory,
  },
  var.another_definition
])
And define it as follows.
another_definition = {
  name      = "another_container"
  image     = "another_container"
  cpu       = 10
  memory    = 512
  essential = true
}
I am able to get this to output as expected as long as the variable is defined.
...
+ {
    + cpu       = 10
    + essential = true
    + image     = "another_container"
    + memory    = 512
    + name      = "another_container"
  },
But if the variable is not defined, I see an empty {} added to the output when I do a terraform plan, which is not what I expect. I have also tried using null as the default, but I get an error.
...
+ {},
Is there a way to toggle this variable off so that if it is not defined then it doesn't show up in the outputted json definition? Is there a better approach than what I am attempting?
I was a little confused at first as to what you were asking, thinking that you were asking for the functionality of the merge function; I mention that only in case I was right the first time. But I now understand your problem to be that you want task_container_definitions to have either one or two elements, depending on whether var.another_definition is set.
There's no single function for that particular situation, but I think we can combine some language features together to get that result.
First, let's decide that the variable being set means that it has a non-null value, and thus its default value should be null to represent the "unset" case:
variable "another_definition" {
type = any
default = null
validation {
# The time constraint above is looser than we really
# want, so this validation rule also enforces that
# the caller can't set this to something inappropriate,
# like a single string or a list.
condition = (
var.another_definition != null ?
can(keys(var.another_definition)) :
true
)
error_message = "Additional task container definition must be an object."
}
}
In Terraform it's a pretty common situation to need to convert between a value that might be null and a list that might have zero or one elements, or vice-versa, and so Terraform has some language features to help with that. In this case we can use a splat expression to concisely represent that. Let's see how that looks in terraform console first just to give a sense of what we're achieving with this:
$ terraform console
> null[*]
[]
> "hello"[*]
[
"hello",
]
> { object = "example" }[*]
[
{
"object" = "example"
},
]
Notice that when I applied the [*] operator to null it returned an empty list, but when I applied it to these other values it converted them to a single-element list. This is how the [*] operator behaves when you apply it to something that isn't a list; see the splat operator docs if you want to learn about the different behavior for lists, which isn't really relevant here because of the validation rule I added above which prevents the var.another_definition value from being a list.
Another tool we have in our Terraform toolbox here is the concat function, which takes one or more lists and returns a single list with the input elements all concatenated together in the given order. We can use this to combine your predefined list that's populated from var.name, var.cpu, etc. with the zero-or-one-element list created by [*], producing a final list with either one or two elements:
locals {
  task_container_definitions = concat(
    [
      {
        name   = var.name
        image  = "${var.image}:${var.tag}"
        cpu    = var.cpu
        memory = var.memory
      },
    ],
    var.another_definition[*],
  )
  task_container_definitions_json = jsonencode(local.task_container_definitions)
}
If any of the arguments to concat are empty lists then they are effectively ignored altogether, because they contribute no elements to the result, and so this achieves (what I hope is) the desired result, by making the "other definition" appear in the result only when it's set to something other than null.
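Here is a quick terraform console sketch of why this works, with short strings standing in for the container objects:
> concat(["first"], null[*])
[
  "first",
]
> concat(["first"], "second"[*])
[
  "first",
  "second",
]
When var.another_definition is null, the splat contributes an empty list and no {} placeholder appears in the encoded JSON.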

How do I pick elements from a terraform list

I am creating a series of resources in Terraform (in this case, DynamoDB tables). I want to apply IAM policies to subgroups of them. E.g.
resource "aws_dynamodb_table" "foo" {
count = "${length(var.tables)}"
name = "foo-${element(var.tables,count.index)}"
tags {
Name = "foo-${element(var.tables,count.index)}"
Environment = "<unsure how to get this>"
Source = "<unsure how to get this>"
}
}
All of these share some common elements, e.g. var.tables is a list composed of the Cartesian product of var.environments and var.sources:
environments = ["dev","qa","prod"]
sources = ["a","b","c"]
So:
tables = ["a:dev","a:qa","a:prod","b:dev","b:qa","b:prod","c:dev","c:qa","c:prod"]
I want to get the ARNs of the created DynamoDB tables that have, e.g., c (i.e. those with the names ["c:dev","c:qa","c:prod"]) or prod (i.e. those with the names ["a:prod","b:prod","c:prod"]).
Is there any sane way to do this with Terraform 0.11 (or even 0.12 for that matter)?
I am looking to:
group the dynamo db table resources by some of the inputs (environment or source) so I can apply some policy to each group
Extract the input for each created one so I can apply the correct tags
I was thinking that, instead of creating the cross-product list, I could potentially create maps for each input:
{
  "a": ["dev","qa","prod"],
  "b": ["dev","qa","prod"],
  "c": ["dev","qa","prod"]
}
or
{
  "dev": ["a","b","c"],
  "qa": ["a","b","c"],
  "prod": ["a","b","c"]
}
It would make it easy to find the target names for each one, since I can look up by the input, but that only gives me the names and does not make it easy to get the actual resources (and hence the ARNs).
Thanks!
A Terraform 0.12 solution would be to derive the cartesian product automatically (using setproduct) and use a for expression to shape it into a form that's convenient for what you need. For example:
locals {
  environments = ["dev", "qa", "prod"]
  sources      = ["a", "b", "c"]

  tables = [for pair in setproduct(local.environments, local.sources) : {
    environment = pair[0]
    source      = pair[1]
    name        = "${pair[1]}:${pair[0]}"
  }]
}
resource "aws_dynamodb_table" "foo" {
count = length(local.tables)
name = "foo-${local.tables[count.index].name}"
tags {
Name = "foo-${local.tables[count.index].name}"
Environment = local.tables[count.index].environment
Source = local.tables[count.index].source
}
}
At the time I write this the resource for_each feature is still in development, but in a near-future Terraform v0.12 minor release it should be possible to improve this further by making these table instances each be identified by their names, rather than by their positions in the local.tables list:
# (with the same "locals" block as in the above example)
resource "aws_dynamodb_table" "foo" {
  for_each = { for t in local.tables : t.name => t }

  name = "foo-${each.key}"
  tags = {
    Name        = "foo-${each.key}"
    Environment = each.value.environment
    Source      = each.value.source
  }
}
As well as cleaning up some redundancy in the syntax, this new for_each form will cause Terraform to identify these instances with addresses like aws_dynamodb_table.foo["a:dev"] instead of aws_dynamodb_table.foo[0], which means that you'll be able to freely add and remove members of the two initial lists without causing churn and replacement of other instances because the list indices changed.
This sort of thing would be much harder to achieve in Terraform 0.11. There are some general patterns that can help translate certain 0.12-only constructs to 0.11-compatible features, which might work here:
A for expression returning a sequence (one with square brackets around it, rather than braces) can be simulated with a data "null_data_source" block with count set, if the result would've been a map of string values only (see the rough sketch after this list).
A Terraform 0.12 object in a named local value can in principle be replaced with a separate simple map of local value for each object attribute, using a common set of keys in each map.
Terraform 0.11 does not have the setproduct function, but for sequences this small it's not a huge problem to just write out the cartesian product yourself as you did in the question here.
The result will certainly be very inelegant, but I expect it's possible to get something working on Terraform 0.11 if you apply the above ideas and make some compromises.
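For instance, here is an untested 0.11-style sketch of the first pattern above applied to this question; the data source name "tables" and the split-based parsing are illustrative:
# One null_data_source instance per table. Each instance can only
# carry a flat map of strings, which is sufficient here.
data "null_data_source" "tables" {
  count = "${length(var.tables)}"

  inputs = {
    name        = "foo-${element(var.tables, count.index)}"
    source      = "${element(split(":", element(var.tables, count.index)), 0)}"
    environment = "${element(split(":", element(var.tables, count.index)), 1)}"
  }
}
Other expressions can then read these back through references like "${lookup(data.null_data_source.tables.0.outputs, "environment")}", though 0.11's limited splat support makes this clumsier than the 0.12 equivalent.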
