I am trying to get a value from a key in a YAML file after decoding it in locals:
document.yaml
name: RandomName
emailContact: email@domain.com
tags:
  - key: "BusinessUnit"
    value: "BUnit"
  - key: "Criticality"
    value: "Criticality-Eng"
  - key: "OpsCommitment"
    value: "OpsCommitment-Eng"
  - key: "OpsTeam"
    value: "OpsTeam-Eng"
  - key: "BudgetAmount"
    value: "100"
Then I have locals in main.tf:
locals {
  file = yamldecode(file("document.yaml"))
}
And I have a budget.tf file where I need to retrieve the BudgetAmount of 100 dollars based on the tag key BudgetAmount:
resource "azurerm_consumption_budget_subscription" "budget" {
name = format("%s_%s", lower(var.subscription_name), "budget")
subscription_id = data.azurerm_subscription.current.id
amount = local.landing_zone.tags[5].value
time_grain = "Monthly"
time_period {
start_date = formatdate("YYYY-MM-01'T'00:00:00'Z'", timestamp())
end_date = local.future_10_years
}
notification {
enabled = true
threshold = 80.0
operator = "EqualTo"
contact_emails = [
]
contact_roles = [
"Owner"
]
}
}
This local.file.tags[4].value works, but it's not a good idea if I have multiple YAML files and the position changes.
Q: how do I get the BudgetAmount value of 100 from the YAML file without specifying its position inside the file, but by referring to the tag's key instead?
I did try this:
matchkeys([local.file.tags[*].key], [local.file.tags[*].value], ["BudgetAmount"])
but it keeps telling me the value needs to be a number (it is obviously getting a value, but it's text, from one of the many key/value pairs I have in the YAML file).
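For what it's worth, matchkeys takes the values list first and returns a list even when only one element matches, so a corrected call would still need to be indexed and converted before it could feed a numeric argument, roughly like this (a sketch against the same local.file, not from the original post):

amount = tonumber(matchkeys(local.file.tags[*].value, local.file.tags[*].key, ["BudgetAmount"])[0])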
I managed to get the budget by converting the list of maps into a single map, with each tag becoming a key/value pair.
The way you were doing it would result in the following data structure under local.file.tags:
[
{
"key" = "BusinessUnit"
"value" = "BUnit"
},
{
"key" = "Criticality"
"value" = "Criticality-Eng"
},
{
"key" = "OpsCommitment"
"value" = "OpsCommitment-Eng"
},
{
"key" = "OpsTeam"
"value" = "OpsTeam-Eng"
},
{
"key" = "BudgetAmount"
"value" = "100"
},
]
That was hard to work with, and I couldn't think of any functions to help at the time, so I went with reshaping it via the following locals:
locals {
  file = yamldecode(file("document.yaml"))

  tags = {
    for tag in local.file.tags :
    tag.key => tag.value
  }
}
which got the tags to a structure of:
> local.tags
{
"BudgetAmount" = "100"
"BusinessUnit" = "BUnit"
"Criticality" = "Criticality-Eng"
"OpsCommitment" = "OpsCommitment-Eng"
"OpsTeam" = "OpsTeam-Eng"
}
You can reference each of the tags in this state by using something like:
budget = local.tags["BudgetAmount"]
This was tested on Terraform v1.0.10 via terraform console.
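If you only need that single value and don't want the intermediate map, a filtered for expression is another option (a small sketch against the same local.file; one() requires Terraform 0.15 or later):

locals {
  budget_amount = one([for tag in local.file.tags : tag.value if tag.key == "BudgetAmount"])
}

The amount argument can then reference tonumber(local.budget_amount), since the YAML value is a string.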
I will be providing values with Terragrunt. The variable push_subscriptions is a list of maps, and I want to modify the values of the maps, for example appending a prefix to the push subscription name in place, like so (within main.tf):
push_subscriptions[index]['name'] = "$pbsb-push-${var.product_environment_code}-push_subscriptions[index]['name']"
main.tf
module "pubsub" {
push_subscriptions = var.push_subscriptions
}
terragrunt.hcl
include "product_vars" {
path = find_in_parent_folders("_terragrunt.hcl")
}
inputs = {
push_subscriptions = [
{
name = "push-sub-1"
ack_deadline_seconds = 20
push_endpoint = "https://example.com"
},
{
name = "push-sub-2"
ack_deadline_seconds = 20
push_endpoint = "https://example.com"
}
]
}
Shouldn't be a problem. Just create a local, in the place where you'll be using it, that iterates over the list and returns another list of objects with the updated values.
In this example, local.subs is used in lieu of your variable, but you would just replace local.subs with var.push_subscriptions in your case.
locals {
  subs = [
    { name = "foo" },
    { name = "bar" },
  ]

  updated = [for sub in local.subs : { name = "some-prefix-${sub.name}" }]
}

output "updated" {
  value = local.updated
}
Which gives:
Changes to Outputs:
+ updated = [
+ {
+ name = "some-prefix-foo"
},
+ {
+ name = "some-prefix-bar"
},
]
So that is a new value you can use with prefixes.
Or you could do this entirely in line, with something like:
module "pubsub" {
push_subscriptions = [for sub in var.push_subscriptions : merge(sub, {
name = "pbsb-push-${var.product_environment_code}-some-prefix-${sub.name}"
})]
}
Using merge here allows you to maintain all the other values.
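For illustration, merge only overrides the keys you pass and keeps everything else, roughly like this in terraform console (hypothetical values, with product_environment_code assumed to be "dev"):

> merge({ name = "push-sub-1", ack_deadline_seconds = 20 }, { name = "pbsb-push-dev-push-sub-1" })
{
  "ack_deadline_seconds" = 20
  "name" = "pbsb-push-dev-push-sub-1"
}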
I am trying to provision an AWS Service Catalog product using the Terraform resource:
resource "aws_servicecatalog_provisioned_product" "example" {}
Terraform resource output description
One of the exported values of the resource is outputs, which is in the form of a set, and I am collecting it into an output using the below:
output "Provisioned_Product_Outputs" {
value = aws_servicecatalog_provisioned_product.example.outputs
}
Output Looks Like
Provisioned_Product_Outputs = toset([
{
"description" = "Backup plan"
"key" = "BackupPlan"
"value" = "light"
},
{
"description" = "Current user zone to run"
"key" = "CurrentAZ"
"value" = "primary"
},
{
"description" = "InstanceID of Vm"
"key" = "EC2InstanceID"
"value" = "i-04*******"
},
{
"description" = "InstanceHostName"
"key" = "InstanceHostName"
"value" = "{\"fqdn\":\"foo.domain.com\"}"
},
{
"description" = "The ARN of the launched Cloudformation Stack"
"key" = "CloudformationStackARN"
"value" = "arn:aws:cloudformation:{region}:{AccountID}:stack/SC-{AccountID}-pp-iy******"
},
])
I would like to have only selected output values rather than the entire set, like below.
output "EC2InstanceID" {
value = "i-04*******"
}
output "InstanceHostName" {
value = ""{\"fqdn\":\"foo.domain.com\"}""
}
output "CloudformationStackARN" {
value = "arn:aws:cloudformation:{region}:{AccountID}:stack/SC-{AccountID}-pp-iy******"
}
Is there a way to apply some condition that lets me pick out the right values by their key/value pairs and use them in the outputs?
Since you know that your output is a set, you can create a filter on the objects inside the set using contains:
output "outputs" {
value = {
for output in aws_servicecatalog_provisioned_product.example.outputs : output.key =>
output.value if contains(["EC2InstanceID", "InstanceHostName", "CloudformationStackARN"], output.key)
}
}
The output will be similar to this:
outputs = {
"CloudformationStackARN" = "arn:aws:cloudformation:{region}:{AccountID}:stack/SC-{AccountID}-pp-iy******"
"EC2InstanceID" = "i-04*******"
"InstanceHostName" = "{\"fqdn\":\"foo.domain.com\"}"
}
If you want to have separate outputs, you have to type out each output manually:
output "EC2InstanceID" {
value = [for output in aws_servicecatalog_provisioned_product.example.outputs : output.value if output.key == "EC2InstanceID"][0]
}
output "InstanceHostName" {
value = [for output in aws_servicecatalog_provisioned_product.example.outputs : output.value if output.key == "InstanceHostName"][0]
}
output "CloudformationStackARN" {
value = [for output in aws_servicecatalog_provisioned_product.example.outputs : output.value if output.key == "CloudformationStackARN"][0]
}
You cannot use for_each on output blocks; currently only resource, data, and module blocks support the for_each argument.
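If you would rather avoid the [0] indexing, another option (a sketch, not part of the original answer) is to convert the set into a map once in a local and look keys up from there:

locals {
  sc_outputs = { for output in aws_servicecatalog_provisioned_product.example.outputs : output.key => output.value }
}

output "EC2InstanceID" {
  value = local.sc_outputs["EC2InstanceID"]
}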
My goal is to have something like a common.tfvars file, e.g.:
users = {
  "daniel.meier" = {
    path          = "/"
    force_destroy = true
    tag_email     = "foo@example.com"
    github        = "dme86"
  }
  "linus.torvalds" = {
    path          = "/"
    force_destroy = true
    tag_email     = "bar@example.com"
    github        = "torvalds"
  }
}
Via a data source you'll be able to retrieve information about the GitHub accounts:
data "github_user" "this" {
for_each = var.users
username = each.value["github"]
}
Output of ssh keys is also possible:
output "current_github_ssh_key" {
value = values(data.github_user.this).*.ssh_keys
}
But how can I get the SSH keys from that output into a resource like:
resource "aws_key_pair" "deployer" {
for_each = var.users
key_name = each.value["github"]
public_key = values(data.github_user.this).*.ssh_keys
}
If I try it like in this example, Terraform errors with:
Inappropriate value for attribute "public_key": string required.
which makes sense, because the keys are a list AFAIK, but how do I convert this correctly?
Output looks like this:
Changes to Outputs:
+ current_github_ssh_key = [
+ [
+ "ssh-rsa AAAAB3NzaC1yc2EAAAAD(...)ElQ==",
],
+ [
+ "ssh-rsa AAAAB3NzaC1yc2EAAGVD(...)TXxrF",
],
]
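One way this could be wired up (a sketch, untested here; it assumes each GitHub user has at least one key and simply takes the first one) is to index the data source with the same for_each key instead of splatting over all users:

resource "aws_key_pair" "deployer" {
  for_each   = var.users
  key_name   = each.value["github"]
  public_key = data.github_user.this[each.key].ssh_keys[0]
}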
If you want to test this code you have to configure a GitHub token for your provider, like:
provider "github" {
token = "123456"
}
Consider I have a variable that is a list of lists of maps.
Example:
processes = [
  [
    { start_cmd: "a-server-start", attribute2: "type_a" },
    { start_cmd: "a-worker-start", attribute2: "type_b" },
    { start_cmd: "a--different-worker-start", attribute2: "type_c" }
  ],
  [
    { start_cmd: "b-server-start", attribute2: "type_a" },
    { start_cmd: "b-worker-start", attribute2: "type_b" }
  ]
]
In each iteration, I need to take out the inner array of maps, then iterate over that array and take out the values of each map. How do I achieve this in Terraform?
I have considered having two counts and doing some arithmetic to trick Terraform into performing a lookalike nested iteration (check the reference here). But in our case the number of maps in the inner array can vary.
Also, we are currently using the 0.11 Terraform version, but we don't mind using the alpha 0.12 version of Terraform if it is possible to achieve this in that version.
Edit:
Added how I would use this variable:
resource "create_application" "applications" {
  // Create a resource for every array in the variable processes. 2 in this case
  name              = ""
  migration_command = ""

  proc {
    // For every map create this attribute for the resource.
    name         = ""
    init_command = "a-server-start"
    type         = "server"
  }
}
Not sure if this clears up the requirement. Please do ask if it is still not clear.
Using Terraform 0.12.x:
locals {
processes = [
[
{ start_cmd: "a-server-start", type: "type_a", name: "inglorious bastards" },
{ start_cmd: "a-worker-start", type: "type_b", name: "kill bill" },
{ start_cmd: "a--different-worker-start", type: "type_c", name: "pulp fiction" },
],
[
{ start_cmd: "b-server-start", type: "type_a", name: "inglorious bastards" },
{ start_cmd: "b-worker-start", type: "type_b", name: "kill bill" },
]
]
}
# just an example
data "archive_file" "applications" {
count = length(local.processes)
type = "zip"
output_path = "applications.zip"
dynamic "source" {
for_each = local.processes[count.index]
content {
content = source.value.type
filename = source.value.name
}
}
}
$ terraform apply
data.archive_file.applications[0]: Refreshing state...
data.archive_file.applications[1]: Refreshing state...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
If a create_application resource existed, it can be modeled like so
resource "create_application" "applications" {
count = length(local.processes)
name = ""
migration_command = ""
dynamic "proc" {
for_each = local.processes[count.index]
content {
name = proc.value.name
init_command = proc.value.start_cmd
type = proc.value.type
}
}
}
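A possible variation (a sketch, not part of the original answer): if you wanted one resource instance per inner map rather than a dynamic block, you could flatten the nested list first and drive count (or for_each) from the flattened result:

locals {
  flat_processes = flatten([
    for procs in local.processes : [
      for proc in procs : {
        name      = proc.name
        start_cmd = proc.start_cmd
        type      = proc.type
      }
    ]
  ])
}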
Here is my solution, which works like a charm. Just note the trick google_service_account.purpose[each.value["name"]].name, where I retrieve the named map element by using its name.
variable "my_envs" {
type = map(object({
name = string
bucket = string
}))
default = {
"dev" = {
name = "dev"
bucket = "my-bucket-fezfezfez"
}
"prod" = {
name = "prod"
bucket = "my-bucket-ezaeazeaz"
}
}
}
resource "google_service_account" "purpose" {
for_each = var.my_envs
display_name = "blablabla (terraform)"
project = each.value["name"]
account_id = "purpose-${each.value["name"]}"
}
resource "google_service_account_iam_binding" "purpose_workload_identity_binding" {
for_each = var.my_envs
service_account_id = google_service_account.purpose[each.value["name"]].name
role = "roles/iam.whatever"
members = [
"serviceAccount:${each.value["name"]}.svc.id.goog[purpose/purpose]",
]
}
resource "google_storage_bucket_iam_member" "purpose_artifacts" {
for_each = var.my_envs
bucket = each.value["bucket"]
role = "roles/storage.whatever"
member = "serviceAccount:${google_service_account.purpose[each.value["name"]].email}"
}
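A small note on that lookup: because for_each iterates the same var.my_envs map everywhere, the instance keys are the map keys ("dev", "prod"), which in this setup happen to equal each value's name, so the two references below resolve to the same instance:

# Equivalent here, since the map keys match each value's name:
#   google_service_account.purpose[each.key].name
#   google_service_account.purpose[each.value["name"]].name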
If resources use a count parameter to create multiple resource instances in Terraform, there is a simple syntax for getting a list of a particular attribute across all of those instances, for example:
aws_subnet.foo.*.id
For quite a number of versions now it has been possible to declare variables with a complex structure, for example lists of maps:
variable "data" {
type = "list"
default = [
{
id = "1"
...
},
{
id = "10"
...
}
]
}
I'm looking for a way to do the same for variables that I can do for multi resources: a projection of a list to a list of the field values of its elements.
Unfortunately
var.data.*.id
does not work as it does for resources. Is there any possibility to do this?
UPDATE
Many fancy features have been added to Terraform since 0.12 was released, e.g. for expressions (list comprehensions), with which the solution is super easy.
locals {
ids = [for d in var.data: d.id]
#ids = [for d in var.data: d["id"]] #same
}
# Then you could get the elements this way,
# local.ids[0]
Solution before Terraform 0.12
template_file can help you out.
data "template_file" "data_id" {
count = "${length(var.data)}"
template = "${lookup(var.data[count.index], "id")}"
}
Then you get a list, "${data.template_file.data_id.*.rendered}", whose elements are the values of "id".
You can get an element by index like this:
"${data.template_file.data_id.*.rendered[0]}"
or through the element() function:
"${element(data.template_file.data_id.*.rendered, 0)}"
NOTE: This answer and its associated question are very old at this point, and this answer is now totally stale. I'm leaving it here for historical reference, but nothing here is true of modern Terraform.
At the time of writing, Terraform doesn't have a generalized projection feature in its interpolation language. The "splat syntax" is implemented as a special case for resources.
While deep structure is possible, it is not yet convenient to use, so it's recommended to still keep things relatively flat. In the future it is likely that new language features will be added to make this sort of thing more usable.
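For completeness, in Terraform 0.12 and later the full splat operator also works directly on lists of objects, so the expression the question asked about can be written essentially as-is (a quick sketch):

locals {
  ids = var.data[*].id # equivalent to [for d in var.data : d.id]
}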
I have found a working solution using template rendering to bypass the list-of-maps issue:
resource "aws_instance" "k8s_master" {
count = "${var.master_count}"
ami = "${var.ami}"
instance_type = "${var.instance_type}"
vpc_security_group_ids = ["${aws_security_group.k8s_sg.id}"]
associate_public_ip_address = false
subnet_id = "${element(var.subnet_ids,count.index % length(var.subnet_ids))}"
user_data = "${file("${path.root}/files/user_data.sh")}"
iam_instance_profile = "${aws_iam_instance_profile.master_profile.name}"
tags = "${merge(
local.k8s_tags,
map(
"Name", "k8s-master-${count.index}",
"Environment", "${var.environment}"
)
)}"
}
data "template_file" "k8s_master_names" {
count = "${var.master_count}"
template = "${lookup(aws_instance.k8s_master.*.tags[count.index], "Name")}"
}
output "k8s_master_name" {
value = [
"${data.template_file.k8s_master_names.*.rendered}",
]
}
This will result in the following output:
k8s_master_name = [
k8s-master-0,
k8s-master-1,
k8s-master-2
]
A potentially simpler answer is to use the zipmap function.
Starting with a list of environment variable maps compatible with ECS task definitions:
locals {
shared_env = [
{
name = "DB_CHECK_NAME"
value = "postgres"
},
{
name = "DB_CONNECT_TIMEOUT"
value = "5"
},
{
name = "DB_DOCKER_HOST_PORT"
value = "35432"
},
{
name = "DB_DOCKER_HOST"
value = "localhost"
},
{
name = "DB_HOST"
value = "my-db-host"
},
{
name = "DB_NAME"
value = "my-db-name"
},
{
name = "DB_PASSWORD"
value = "XXXXXXXX"
},
{
name = "DB_PORT"
value = "5432"
},
{
name = "DB_QUERY_TIMEOUT"
value = "30"
},
{
name = "DB_UPGRADE_TIMEOUT"
value = "300"
},
{
name = "DB_USER"
value = "root"
},
{
name = "REDIS_DOCKER_HOST_PORT"
value = "6380"
},
{
name = "REDIS_HOST"
value = "my-redis"
},
{
name = "REDIS_PORT"
value = "6379"
},
{
name = "SCHEMA_SCRIPTS_PATH"
value = "db-scripts"
},
{
name = "USE_LOCAL"
value = "false"
}
]
}
In the same folder, launch terraform console to test built-in functions. You may need to run terraform init first if you haven't already.
terraform console
Inside the console type:
zipmap([for m in local.shared_env: m.name], [for m in local.shared_env: m.value])
Observe that each list item map becomes a name/value pair in a single map:
{
"DB_CHECK_NAME" = "postgres"
"DB_CONNECT_TIMEOUT" = "5"
"DB_DOCKER_HOST" = "localhost"
"DB_DOCKER_HOST_PORT" = "35432"
"DB_HOST" = "my-db-host"
"DB_NAME" = "my-db-name"
"DB_PASSWORD" = "XXXXXXXX"
"DB_PORT" = "5432"
"DB_QUERY_TIMEOUT" = "30"
"DB_UPGRADE_TIMEOUT" = "300"
"DB_USER" = "root"
"REDIS_DOCKER_HOST_PORT" = "6380"
"REDIS_HOST" = "my-redis"
"REDIS_PORT" = "6379"
"SCHEMA_SCRIPTS_PATH" = "db-scripts"
"USE_LOCAL" = "false"
}
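Outside the console, the same expression can be captured in a local (a small sketch) so other parts of the configuration can look values up by name:

locals {
  shared_env_map = zipmap(
    [for m in local.shared_env : m.name],
    [for m in local.shared_env : m.value]
  )
}

# e.g. local.shared_env_map["DB_HOST"] yields "my-db-host"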