Convert string or array to map in Terraform?

Terraform v0.10.7
AWS provider version = "~> 1.54.0"
Are there any examples of how to translate a string or list into a map in Terraform?
We are setting up Consul key/value store like this:
consul kv put common/rules/alb/service1 name=service1,port=80,hcproto=http,hcport=80
I can access keys and values properly, and now I am trying to use values as a map in Terraform:
data "consul_key_prefix" "common" {
  path_prefix = "common/rules"
}
output "common" {
  value = "${jsonencode(lookup(var.CommonRules, element(keys(var.CommonRules), 1)))}"
}
$ terraform output
common = "{name=service1,port=80,hcproto=http,hcport=80}"
But when I try to access it as a map, it doesn't work:
output "common" {
  value = "${lookup(jsonencode(lookup(var.CommonRules, element(keys(var.CommonRules), 1))), "name")}"
}
$ terraform output
(no response)
I tried a few things here, e.g. splitting these values and joining them again into a list, and then running the map function, but that doesn't work either:
$ terraform output
common = [
name,
service1,
port,
80,
hcproto,
http,
hcport,
80
]
and then trying to create map of that list:
output "common2" {
value = "${map(split(",",join(",",split("=",lookup(var.CommonRules,element(keys(var.CommonRules),1))))))}"
}
but it doesn't work either.
So my question is: does anyone have a working example of translating a string (or a list) into a map?
Thanks in advance.

The jsondecode function in the upcoming Terraform v0.12 will be the tool to solve this problem.
jsondecode function github issue
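Until then, note that the "key=value,key=value" string can be decomposed without JSON at all: once v0.12's for expressions are available, splitting on "," and "=" builds a real map directly. A minimal sketch, with a hypothetical local standing in for the value read from Consul:

```hcl
locals {
  # Stand-in for the value fetched from the consul_key_prefix data source
  raw = "name=service1,port=80,hcproto=http,hcport=80"

  # Split the "k=v" pairs and build a map from them
  rules = { for pair in split(",", local.raw) : split("=", pair)[0] => split("=", pair)[1] }
}

output "service_name" {
  value = local.rules["name"] # "service1"
}
```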

Related

print terraform output from list of list to a list of strings

I am working with VM deployments on AWS using Terraform (v1.0.9) as infrastructure as code. I have a Terraform output.tf that prints two lan_a IPs, and the code prints a list of lists like [["ip_a",],["ip_b",]], but I want a flat list like ["ip_a", "ip_b"].
output.tf code
output "foo" {
  value = {
    name    = "xyz"
    all_ips = tolist(aws_network_interface.vm_a_eni_lan_a.*.private_ips)
  }
}
printing -->
"name" = "xyz" "lan_a_ips" = tolist([ toset([ "10.0.27.116", ]), toset([ "10.0.28.201", ]), ])
but i want "lan_a_ips" = ["10.0.27.116", "10.0.28.201"]
I believe tweaking output.tf can help. Any help is appreciated.
In your case, you have just put the splat expression [1] in the wrong place, i.e., instead of aws_network_interface.vm_a_eni_lan_a.private_ips[*] you wrote aws_network_interface.vm_a_eni_lan_a.*.private_ips. So you only need to change the output value:
output "foo" {
  value = {
    name    = "xyz"
    all_ips = aws_network_interface.vm_a_eni_lan_a.private_ips[*]
  }
}
EDIT: The above applies when only a single instance of the aws_network_interface resource is created. For situations where multiple instances of this resource are created with the count meta-argument, the following can be used to get a list of IPs:
output "foo" {
  value = {
    name    = "xyz"
    all_ips = flatten([for i in aws_network_interface.test[*] : i.private_ips[*]])
  }
}
Here, the for [2] expression is used to iterate over all the instances of the resource, hence the splat expression when referencing them: aws_network_interface.test[*]. Additionally, since this creates a list of lists (private_ips[*] returns a list), the flatten [3] built-in function can be used to produce a single list of IP addresses.
[1] https://www.terraform.io/language/expressions/splat
[2] https://www.terraform.io/language/expressions/for
[3] https://www.terraform.io/language/functions/flatten

Terraform: loop over directory to create a single resource

I am trying to create a single GCP Workflows using Terraform (Terraform Workflows documentation here). To create a workflow, I have defined the desired steps and order of execution using the Workflows syntax in YAML (can also be JSON).
I have around 20 different jobs, and each of these jobs is in a different .yaml file under the same folder, workflows/. I just want to loop over the workflows/ folder and use each .yaml file to create my resource. What would be the best way to achieve this with Terraform? I read about for_each, but it is primarily used to loop over something to create multiple resources rather than a single resource.
workflows/job-1.yaml
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: currentDateTime
workflows/job-2.yaml
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${currentDateTime.body.dayOfTheWeek}
    result: wikiResult
main.tf
resource "google_workflows_workflow" "example" {
  name            = "workflow"
  region          = "us-central1"
  description     = "Magic"
  service_account = google_service_account.test_account.id
  source_contents = YAML FILE HERE
}
Terraform has a function fileset which allows a configuration to react to files available on disk alongside its definition. You can use this as a starting point for constructing a suitable expression for for_each:
locals {
  workflow_files = fileset("${path.module}/workflows", "*.yaml")
}
It looks like you'd also need to specify a separate name for each workflow, due to the design of the remote system, and so perhaps you'd decide to set the name to be the same as the filename but with the .yaml suffix removed, like this:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    substr(fn, 0, length(fn) - 5) => "${path.module}/workflows/${fn}"
  })
}
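On Terraform v0.12.17 and later, trimsuffix is an arguably clearer alternative to the substr/length arithmetic for stripping the extension; a sketch of the same map:

```hcl
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    trimsuffix(fn, ".yaml") => "${path.module}/workflows/${fn}"
  })
}
```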
This uses a for expression to project the set of filenames into a map from workflow name (trimmed filename) to the path to the specific file. The result then would look something like this:
{
  job-1 = "./module/workflows/job-1.yaml"
  job-2 = "./module/workflows/job-2.yaml"
}
This now meets the requirements for for_each, so you can refer to it directly as the for_each expression:
resource "google_workflows_workflow" "example" {
  for_each        = local.workflows
  name            = each.key
  region          = "us-central1"
  description     = "Magic"
  service_account = google_service_account.test_account.id
  source_contents = file(each.value)
}
Your question didn't include any definition for how to populate the description argument, so I've left it set to hard-coded "Magic" as in your example. In order to populate that with something reasonable you'd need to have an additional data source for that, since what I wrote above is already making full use of the information we get just from scanning the content of the directory.
resource "google_workflows_workflow" "example" {
  # count for total iterations
  count           = 20
  # each workflow needs a distinct name, so suffix it with the index
  name            = "workflow-${count.index}"
  region          = "us-central1"
  description     = "Magic"
  service_account = google_service_account.test_account.id
  # refer to the file using count.index; index starts from 0
  source_contents = file("${path.module}/workflows/job-${count.index}.yaml")
}

Extract IP from a range with Terraform

I'm trying to extract IP addresses from a range with Terraform.
For example, I defined this range 192.168.1.10-192.168.1.20 as a string and I would like to get a list like this: [192.168.1.10,192.168.1.11,…,192.168.1.20].
I checked for Terraform functions but didn’t find a way to do that.
Is this possible?
For further context, I am deploying MetalLB in a Kubernetes cluster and need to define the VIP range as a string like this 192.168.1.10-192.168.1.20.
The Kubernetes cluster is deployed on OpenStack and I need to configure Neutron OpenStack port to accept all IP addresses from this range:
resource "openstack_networking_port_v2" "k8s_worker_mgmt_port" {
  name           = "k8s_worker_mgmt_port"
  network_id     = data.openstack_networking_network_v2.k8s_openstack_mgmt_network_name.id
  admin_state_up = "true"
  allowed_address_pairs {
    ip_address = "192.168.1.10"
  }
  allowed_address_pairs {
    ip_address = "192.168.1.11"
  }
  # ... one allowed_address_pairs block per address in the range
}
If you can rely on the first three octets of the IP range being the same, then you can get away with a combination of the split, slice, join, range and formatlist functions to do this natively inside Terraform, with something like the following:
variable "ip_range" {
  default = "192.168.1.10-192.168.1.20"
}

locals {
  ip_range_start = split("-", var.ip_range)[0]
  ip_range_end   = split("-", var.ip_range)[1]

  # Note that this naively only works for IP ranges using the same first three octets
  ip_range_first_three_octets = join(".", slice(split(".", local.ip_range_start), 0, 3))

  ip_range_start_fourth_octet = split(".", local.ip_range_start)[3]
  ip_range_end_fourth_octet   = split(".", local.ip_range_end)[3]

  # range() excludes its end value, so this yields .10 through .19; add 1 to the end octet to include .20
  list_of_final_octet  = range(local.ip_range_start_fourth_octet, local.ip_range_end_fourth_octet)
  list_of_ips_in_range = formatlist("${local.ip_range_first_three_octets}.%s", local.list_of_final_octet)
}

output "list_of_ips_in_range" {
  value = local.list_of_ips_in_range
}
This outputs the following:
list_of_ips_in_range = [
"192.168.1.10",
"192.168.1.11",
"192.168.1.12",
"192.168.1.13",
"192.168.1.14",
"192.168.1.15",
"192.168.1.16",
"192.168.1.17",
"192.168.1.18",
"192.168.1.19",
]
If you need to offset that range so you end up with IP addresses from .11 to .20 from the same input then you can do that by changing the local.list_of_final_octet like so:
list_of_final_octet = range(local.ip_range_start_fourth_octet + 1, local.ip_range_end_fourth_octet + 1)
Unfortunately, Terraform doesn't have any built-in functions for more elaborate CIDR math beyond the cidrhost, cidrnetmask, cidrsubnet and cidrsubnets functions, so if you have more complex requirements you may need to delegate this to an external script that can calculate it, called via the external data source.
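That said, if the range can be assumed to sit inside a known CIDR prefix (an assumption; the question only gives a dash-separated range), cidrhost avoids the octet string handling entirely. A sketch:

```hcl
# Assumes the VIPs are host numbers 10 through 20 inside 192.168.1.0/24
variable "subnet" {
  default = "192.168.1.0/24"
}

locals {
  # range() excludes its end value, hence 21
  vip_ips = [for i in range(10, 21) : cidrhost(var.subnet, i)]
}

output "vip_ips" {
  value = local.vip_ips # 192.168.1.10 through 192.168.1.20
}
```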

How do I get a list of s3 objects with the aws_s3_bucket_object data source?

My specific problem is the same as this guy's, but I found his answer not detailed enough, and Terraform has new features now that might solve this better.
The problem is that I'm using aws_elastic_beanstalk_application_version to register a Beanstalk version, but Terraform removes the old version before registering the new one. This is because aws_elastic_beanstalk_application_version is replaced each time; what I need to do is generate a new one.
I'm attempting to do this with "count" and the aws_s3_bucket_object data source but I can't figure out how to get s3 objects as a list. I tried wildcards but that doesn't work:
data "aws_s3_bucket_object" "eb-bucket-data" {
  bucket = "mybucket"
  key    = "*"
}

resource "aws_elastic_beanstalk_application_version" "default" {
  count       = "${length(data.aws_s3_bucket_object.eb-bucket-data.id)}"
  name        = "${element(data.aws_s3_bucket_object.eb-bucket-data.key, count.index)}"
  application = "myapp"
  bucket      = "mybucket"
  key         = "${element(data.aws_s3_bucket_object.eb-bucket-data.key, count.index)}"
}
The aws_s3_bucket_object data source currently only returns a single item. You could iterate through a list of items but that puts you back to your initial problem of needing to find the list of items in the first place.
Short of creating a pull request for an aws_s3_bucket_objects data source that returns a list of objects (as with things like aws_availability_zone and aws_availability_zones), you can maybe achieve this by shelling out to the AWS CLI via the external data source.
An (untested) example for this might look something like this:
data "external" "bucket_objects" {
  # program runs a single executable without a shell, so the pipeline must be wrapped in bash -c
  program = ["bash", "-c", "aws s3 ls mybucket | awk '{print $4}' | jq -R -s -c 'split(\"\\n\")' | jq '.[:-1]'"]
}
This runs
aws s3 ls mybucket | awk '{print $4}' | jq -R -s -c 'split("\n")' | jq '.[:-1]'
which lists the objects in the bucket, takes just the filename elements, splits them into a JSON array using jq and then removes the trailing newline element from the JSON array because the external data source expects a valid JSON object to be returned.
You should then be able to access that with something like:
resource "aws_elastic_beanstalk_application_version" "default" {
  count       = "${length(data.external.bucket_objects.result)}"
  name        = "${data.external.bucket_objects.result[count.index]}"
  application = "myapp"
  bucket      = "mybucket"
  key         = "${data.external.bucket_objects.result[count.index]}"
}
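One caveat with the external approach: the external data source requires the program to print a JSON object whose values are all strings, so a bare JSON array will be rejected at runtime. A sketch of a workaround (the keys key name is arbitrary) is to join the object keys into a single comma-separated string and split it back apart in Terraform:

```hcl
data "external" "bucket_objects" {
  # bash -c is needed because program runs one executable without a shell
  program = ["bash", "-c", "aws s3 ls mybucket | awk '{print $4}' | jq -R -s -c '{keys: (split(\"\\n\")[:-1] | join(\",\"))}'"]
}

locals {
  object_keys = split(",", data.external.bucket_objects.result["keys"])
}
```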
There is a Pull Request for this data source, aws_s3_bucket_objects:
https://github.com/terraform-providers/terraform-provider-aws/pull/6968

What is the terraform syntax to create an AWS Route53 TXT record that has a map as JSON as payload?

My intention is to create an AWS Route53 TXT record, that contains a JSON representation of a terraform map as payload.
I would expect the following to do the trick:
variable "payload" {
  type = "map"
  default = {
    foo = "bar"
    baz = "qux"
  }
}

resource "aws_route53_record" "TXT-json" {
  zone_id = "${module.domain.I-zone_id}"
  name    = "test.${module.domain.I-fqdn}"
  type    = "TXT"
  ttl     = "${var.ttl}"
  records = "${list(jsonencode(var.payload))}"
}
terraform validate and terraform plan are ok with that. terraform apply starts happily, but AWS reports an error:
* aws_route53_record.TXT-json: [ERR]: Error building changeset: InvalidChangeBatch: Invalid Resource Record: FATAL problem: InvalidCharacterString (Value should be enclosed in quotation marks) encountered with '"{"baz":"qux","foo":"bar"}"'
status code: 400, request id: 062d4536-3ad3-11e7-af24-0fbcd067fb9e
Terraform version is
Terraform v0.9.4
String handling is very difficult in HCL. I found many references to this issue on the net, but I can't seem to find an actual solution. A solution based on the workaround noted in terraform#10048 doesn't work: "${list(substr(jsonencode(var.payload), 1, -1))}" removes the starting curly brace {, not the first quote (that seems to be added later).
Adding quotes (as the error message from AWS suggests) doesn't help; it just adds more quotes, and there already are some (the AWS error message is misleading).
The message you're getting is not generated by Terraform. It is a validation error raised by Route53. You'd get the same error if you added e.g. {"a":2,"foo":"bar"} as a value via the AWS console.
On the other hand, escaping the JSON works, i.e. I was able to add "{\"a\":2,\"foo\":\"bar\"}" as a TXT value through the AWS console.
If you're OK with that, you can perform a double jsonencode, meaning that you can jsonencode the JSON string generated by jsonencode such as:
variable "payload" {
  type = "map"
  default = {
    foo = "bar"
    baz = "qux"
  }
}

output "test" {
  value = "${jsonencode(jsonencode(var.payload))}"
}
which resolves to:
➜ ~ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
test = "{\"baz\":\"qux\",\"foo\":\"bar\"}"
(you would of course have to use the aws_route53_record resource instead of output)
so basically this works:
resource "aws_route53_record" "record_txt" {
  zone_id = "${data.aws_route53_zone.primary.zone_id}"
  name    = "${var.my_domain}"
  type    = "TXT"
  ttl     = "300"
  records = ["{\\\"my_value\\\": \\\"${var.my_value}\\\"}"]
}
You're welcome.
