Terraform iterate over nested data

I'm trying to create New Relic service levels based on a YAML config file that provides the relevant configuration.
My YAML configuration:
slo:
  targets:
    - first_slo:
      name: "My First SLO"
      endpoints:
        - path: /api/method1
          method: GET
        - path: /api/method2
          method: PUT
      objectives:
        availability:
          threshold: 99.9
    - second_slo:
      name: "My Second SLO"
      endpoints:
        - path: /api/method12
          method: GET
        - path: /api/method23
          method: PUT
      objectives:
        availability:
          threshold: 99.99
I want to iterate over this example configuration to build the objects, but I'm struggling to form the right NRQL query using a nested iteration.
My Terraform file:
resource "newrelic_service_level" "availability" {
for_each = var.config.slo.targets
guid = var.guid
name = "${each.value.name} - Availability"
description = "Proportion of requests that are served successfully."
events {
account_id = var.account_id
valid_events {
from = "Transaction"
where = "transactionType='Web' AND entityGuid = '${var.guid}' AND (OR_CONDITION_BETWEEN_ALL_THE_METHODS_AND_URIS)"
}
bad_events {
from = "Transaction"
where = "transactionType= 'Web' AND entityGuid = '${var.guid}' AND numeric(response.status) >= 500 AND (OR_CONDITION_BETWEEN_ALL_THE_METHODS_AND_URIS)"
}
}
objective {
target = each.value.objectives.availability.threshold
time_window {
rolling {
count = 7
unit = "DAY"
}
}
}
}
So basically what I'm trying to do here is create a service level with an NRQL query that filters only for the specific combinations of URI and method that are relevant for that specific target: the URIs and methods I have in my config file.
So for the first SLO, OR_CONDITION_BETWEEN_ALL_THE_METHODS_AND_URIS should translate to something like this:
(request.uri = '/api/method1' AND request.method = 'GET') OR (request.uri = '/api/method2' AND request.method = 'PUT')
My current solution would be to build the query manually and add it to the configuration of each SLO, but that is neither readable nor easy to maintain.
I would highly appreciate any suggestions on how to build the query dynamically.

You can certainly build that query with Terraform. Here's a wee .tf file that shows how you could do it:
locals {
  config = yamldecode(file("${path.root}/vars.yaml"))
  parsed = [for d in local.config.slo.targets : {
    name      : d["name"]
    condition : join(" OR ", [for e in d["endpoints"] : "(request.uri = '${e["path"]}' AND request.method = '${e["method"]}')"])
  }]
}

output "parsed" {
  value = local.parsed
}
This expects your YAML file to be sitting next to it, named vars.yaml, and produces:
$ terraform plan

Changes to Outputs:
  + parsed = [
      + {
          + condition = "(request.uri = '/api/method1' AND request.method = 'GET') OR (request.uri = '/api/method2' AND request.method = 'PUT')"
          + name      = "My First SLO"
        },
      + {
          + condition = "(request.uri = '/api/method12' AND request.method = 'GET') OR (request.uri = '/api/method23' AND request.method = 'PUT')"
          + name      = "My Second SLO"
        },
    ]
For your module, you can just use the join(...) expression in place of OR_CONDITION_BETWEEN_ALL_THE_METHODS_AND_URIS. Having it repeated in both where clauses should be fine (as long as you document it, naturally), but if you don't like the big long line, you can create a sub-module to encapsulate it. Or you could build the query string in a pre-processing locals block, possibly using the merge function to add the query string alongside the rest of each target's config.
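For example, here is a minimal sketch of that merge-based pre-processing (the targets_with_query name is mine; note that for_each requires a map or set, so the targets are keyed by name here):

locals {
  # Attach the rendered OR-condition to each target with merge(), keyed by
  # name so the result can feed for_each directly (hypothetical local name).
  targets_with_query = {
    for d in local.config.slo.targets : d.name => merge(d, {
      condition = join(" OR ", [
        for e in d.endpoints : "(request.uri = '${e.path}' AND request.method = '${e.method}')"
      ])
    })
  }
}

The resource would then use for_each = local.targets_with_query and interpolate ${each.value.condition} into both where clauses.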

Related

Terraform - ordered generation of resources which are related based on a list variable

I am currently trying to automate nested SumoLogic folder creation as part of my custom module. I have to use this resource. I need to create a folder path similar to:
parent_folder_path = "SRE/Test/Troubleshooting"
Because this variable will change between environments, I cannot hardcode the creation of the underlying resources. The problematic part is that all the shown folders (SRE, Test, Troubleshooting) need to be created in sequence, because each one needs the id of the one before it (e.g. the Test folder needs the id of the already created SRE folder) to be created.
The end result I am aiming for is automatically generated code as below:
resource "sumologic_folder" "SRE" {
provider = sumologic
name = "SRE"
description = ""
parent_id = "0000000000XXXXX"
}
resource "sumologic_folder" "Test" {
provider = sumologic
name = "Test"
description = ""
parent_id = sumologic_folder.SRE.id
}
resource "sumologic_folder" "Troubleshooting" {
provider = sumologic
name = "Troubleshooting"
description = ""
parent_id = sumologic_folder.Test.id
}
I tried an approach which uses templatefile() and local_file:
parent_directories.tftpl
%{~ for index, path_part in parent_folder_path ~}
%{~ if index == 0 ~}
resource "sumologic_folder" "${replace(path_part, " ", "_")}" {
  provider    = sumologic
  name        = "${path_part}"
  description = ""
  parent_id   = "${root_folder_id}"
}
%{~ else }
resource "sumologic_folder" "${replace(path_part, " ", "_")}" {
  provider    = sumologic
  name        = "${path_part}"
  description = ""
  parent_id   = sumologic_folder.${replace(parent_folder_path[index - 1], " ", "_")}.id
}
%{~ endif ~}
%{~ endfor ~}
main.tf
resource "local_file" "parent_directories" {
content = templatefile("${path.module}/parent_directories.tftpl", { parent_folder_path = split("/", var.parent_folder_path), root_folder_id = var.root_folder_id })
filename = "${path.module}/parent_directories.tf"
}
The file was correctly generated during the terraform apply run, but I was not able to include it in the scope of the run dynamically.
Does anyone know how to handle such a use case?
Thanks in advance for all the help.
Best Regards,
Rafal.
I understand what you are trying to achieve: you want to create multiple resources of the same type, each relying on the one created before it (the previous one on the list), while not knowing in advance how many there will be (more folders in the path). I am afraid that is not how Terraform works; you would create a cycle within the list or map of the same resource.
That said, I can offer you an ugly solution. If you can limit yourself to some number of subdirectories, let's say up to five or ten levels, you can write code that creates three folders if there are three dirs in the path, four if there are four, and so on. You simply stop creating resources once a level is empty.
Let's say you have a sumo module:
variable "parent_path" {}
variable "name" {}
data "sumologic_folder" "parent" {
path = var.parent_path
}
resource "sumologic_folder" "folder" {
provider = sumologic
name = var.name
description = ""
parent_id = data.sumologic_folder.parent.id
}
output "path" {
value = "${var.path}/${var.name}"
}
You can then split the path into a list of folders and create as many resources as there are folders in the path, for example: AA/BB/CC/DD = 4 sumo folders.
locals {
  desired_path = "SRE/Test/Troubleshooting" # example - 3 folders
  regex        = regexall("[^/]+", local.desired_path) # => ["SRE", "Test", "Troubleshooting"]
  path0        = "/"
}

module "sumo" {
  source      = "./sumo"
  name        = local.regex[0]
  parent_path = local.path0 # var.parent_path
}

module "sumo_child_1" {
  source      = "./sumo"
  count       = try(local.regex[1], null) == null ? 0 : 1
  name        = try(local.regex[1], "none")
  parent_path = module.sumo.path
}

module "sumo_child_2" {
  source      = "./sumo"
  count       = try(local.regex[2], null) == null ? 0 : 1
  name        = try(local.regex[2], "none")
  parent_path = module.sumo_child_1[0].path # the module uses count, so index its single instance
}

module "sumo_child_3" { # this one is NOT even going to be created in our example
  source      = "./sumo"
  count       = try(local.regex[3], null) == null ? 0 : 1
  name        = try(local.regex[3], "none")
  parent_path = module.sumo_child_2[0].path
}

# and so on... if there are no more folders in the path, the resources won't be created anyway.
Now let me say that again, this is a very ugly solution... but it works. Cheers.

adding multiple destinations to new relic workflows using terraform

I am trying to create a New Relic workflow using Terraform modules. I am fine with creating a workflow with a single destination, but I am trying to create a workflow with more than one destination.
Slack channel IDs:

variable "channel_ids" {
  type    = set(string)
  default = ["XXXXXXXXXX", "YYYYYYYYY"]
}
Creating notification channels using the Slack channel IDs:

resource "newrelic_notification_channel" "notification_channel" {
  for_each       = var.channel_ids
  name           = "test"  # will modify if required
  type           = "SLACK" # will parameterize this
  destination_id = "aaaaaaaaa-bbbbb-cccc-ddddd-eeeeeeeeee"
  product        = "IINT"

  property {
    key   = "channelId"
    value = each.value
  }
}
Now I want to create something like below (two destinations):

resource "newrelic_workflow" "newrelic_workflow" {
  name                  = "my-workflow"
  muting_rules_handling = "NOTIFY_ALL_ISSUES"

  issues_filter {
    name = "Filter-name"
    type = "FILTER"

    predicate {
      attribute = "accumulations.policyName"
      operator  = "EXACTLY_MATCHES"
      values    = ["policy_name"]
    }
  }

  destination {
    channel_id = newrelic_notification_channel.notification_channel.id
  }

  destination {
    channel_id = newrelic_notification_channel.notification_channel.id
  }
}
I tried using for_each and a for loop, but no luck. Any idea how to get my desired output?
Is it possible to loop through and create multiple destination blocks within the same resource, i.e. attach multiple destinations to a single workflow?
I was able to achieve this by using a dynamic block, which produces a dynamic number of destination blocks based on the number of elements of newrelic_notification_channel.notification_channel. Because that resource uses for_each, referencing it yields a map of channel objects, so destination.value.id resolves to each channel's ID.
resource "newrelic_workflow" "newrelic_workflow" {
name = "my-workflow"
muting_rules_handling = "NOTIFY_ALL_ISSUES"
issues_filter {
name = "Filter-name"
type = "FILTER"
predicate {
attribute = "accumulations.policyName"
operator = "EXACTLY_MATCHES"
values = [ "policy_name" ]
}
}
dynamic "destination" {
for_each = newrelic_notification_channel.notification_channel
content {
channel_id = destination.value.id
}
}
}

creating a list of list objects terraform

I'm setting up a Terraform repo for my Snowflake instance and bringing in a list of users to start managing.
I have a module called users with the following variable defined:
variable "users" {
type = list(object(
{
name = string
comment = string
default_role = string
disabled = bool
must_change_password = bool
display_name = string
email = string
first_name = string
last_name = string
default_warehouse = string
}
)
)
}
Now, inside users.tf, I want to hold a list of all my users based on the above variable. I thought I could define it as follows:
users {
  user_1 = {
    name = 'x'
  },
  user_2 = {
    name = 'y'
  }
}
However, when I run terraform validate on this, it gives me the error that a users block is not expected here.
Can someone tell me my error and give me some guidance on whether I'm doing this correctly?
My intention is to have a file that holds all my users, which I then consume with a dynamic block inside my main.tf file within this module.
I can then reference the dynamic block inside outputs.tf, which will give me access to the users inside said module from the global project namespace.
Looks to me like you are attempting to configure your users as an object:
users {
  user_1 = {
    name = "x"
  },
  user_2 = {
    name = "y"
  }
}
but you actually set your variable's type constraint to a list of objects, so it should be:
users = [
  {
    name = "user_1"
    # other fields
  },
  {
    name = "user_2"
    # other fields
  }
]
Here is a full working example:
modules/users/variables.tf

variable "users" {
  type = list(object({
    name = string
  }))
}

modules/users/outputs.tf

output "users" {
  value = var.users
}

main.tf

module "users" {
  source = "./modules/users"

  users = [
    { name = "user_1" },
    { name = "user_2" }
  ]
}

output "users" {
  value = module.users.users
}
plan output

Changes to Outputs:
  + users = [
      + {
          + name = "user_1"
        },
      + {
          + name = "user_2"
        },
    ]
Your config syntax and usage are completely correct; your config file organization is the issue here. users.tf is being used as a Terraform variables file, and therefore it should have the .tfvars extension. If you rename the file from users.tf to e.g. users.tfvars, then you can specify it as an input with the -var-file=users.tfvars CLI argument, or otherwise as per standard usage. You can see more information in the documentation.
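For instance, using the simplified variable from the full example above, a users.tfvars would contain the same expression written as a variable assignment (a sketch):

users = [
  { name = "user_1" },
  { name = "user_2" }
]

which you could then pass with terraform plan -var-file=users.tfvars.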
On a side note: it is not really best practice to maintain an entire module just for managing the set of users of a specific service. If you follow this design pattern in the future, your codebase will not scale very well and could easily become unmanageably large.

Terraform reduce the amount of loops at the moment of generating a JSON

I have this Terraform code that is generating this JSON for me:
{
  host = {
    path = "/xxxx/yyyy"
  }
  name = "NAME"
}
Currently it's working, but I have 3 loops, which I consider inefficient; I'm wondering if I can reduce it to 2 or maybe even 1 loop, or whether that isn't possible.
My first loop validates that container_mounts isn't empty: I don't want to generate anything if it comes in empty. The second and the third are for extracting the information, since container_mounts is a map of strings.
variable "container_mounts" {
type = map(string)
default = { "app/data" = "/xxxx/yyyyy" }
}
json = jsonencode(
[
for i in range(length(var.container_mounts)) :
{
name = [for sourceVolume in keys(var.container_mounts) :
replace(substr(sourceVolume, 1, length(sourceVolume)), "/", "-")][0]
host = {
sourcePath = [for key, value in var.container_mounts : value][0]
}
}
]
)
Is there a way to improve it? I assume there is, but I keep running into different scenarios where it's not working.
So after talking with a friend, it looks like I complicated my life with what I did.
json = jsonencode(
  [
    for key, value in var.container_mounts : {
      host = { "sourcePath" = value }
      name = replace(substr(key, 1, length(key)), "/", "-")
    }
  ]
)
It can be done with only one loop.
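For the default value above, this renders as follows (note that substr(key, 1, length(key)) drops the key's first character, so "app/data" becomes "pp-data", which presumably assumes real keys carry a leading slash):

[{"host":{"sourcePath":"/xxxx/yyyyy"},"name":"pp-data"}]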

Look for the resource to match a key word

Suppose I have two Kinesis streams; I'd like to get the one whose name contains the keyword _consumer.
variable "kinesis" {
default = ["kinesis_publisher", "kinesis_consumer"]
}
resource "aws_kinesis_stream" "test_stream" {
count = "${length(var.kinesis)}"
name = "${var.kinesis[count.index]}"
shard_count = 1
retention_period = 48
shard_level_metrics = [
"IncomingBytes",
"OutgoingBytes",
]
tags = {
Environment = "test"
}
}
How do I get the consumer ARN only?
output "kinesis_consumer_arn" {
value = "??? lookup or matchkeys with _consumer ???"
}
The order is not always the same, and there will be many Kinesis streams, so I can't use index 0 or 1 directly.
You could create a module for each Kinesis stream, thereby gaining more control over the variables passed into and derived from the resources.
module "kinesis_publisher" {
source = "../modules/test_stream"
stream_name = "kinesis_publisher"
}
module "kinesis_consumer" {
source = "../modules/test_stream"
stream_name = "kinesis_consumer"
}
The output can then be filtered on the basis of the modules (assuming the module exposes an arn output):
output "kinesis_consumer_arn" {
value = "{module.kinesis_consumer.arn}"
}
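Alternatively, keeping the original count-based resource instead of modules, a for expression can filter the streams by name. A sketch, assuming a Terraform version new enough for one() (0.15+):

output "kinesis_consumer_arn" {
  # one() asserts that exactly one stream name ends in _consumer
  value = one([
    for s in aws_kinesis_stream.test_stream : s.arn
    if length(regexall("_consumer$", s.name)) > 0
  ])
}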
