How to write a multi-line string in Terraform

I am trying the following, from an example:
resource "google_logging_metric" "my_metric" {
  description = "Check for logs of some cron job\t"
  name        = "mycj-logs"
  filter      = "resource.type=\"k8s_container\" AND resource.labels.cluster_name=\"${local.k8s_name}\" AND resource.labels.namespace_name=\"workable\" AND resource.labels.container_name=\"mycontainer-cronjob\" \nresource.labels.pod_name:\"my-pod\""
  project     = "${data.terraform_remote_state.gke_k8s_env.project_id}"

  metric_descriptor {
    metric_kind = "DELTA"
    value_type  = "INT64"
  }
}
Is there a way to make the filter field multiline?

A quick search in Google sends you to the documentation: https://www.terraform.io/docs/language/expressions/strings.html#heredoc-strings
You just need to write something like
<<EOT
hello
world
EOT
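Applied to the question's resource, the filter could be written as a heredoc (a sketch reusing the question's filter expression; inside a heredoc the double quotes no longer need escaping, interpolation still works, and each line ends with a real newline):

```hcl
resource "google_logging_metric" "my_metric" {
  # <<-EOT (with the leading dash) strips the common leading
  # indentation from each line of the heredoc body.
  filter = <<-EOT
    resource.type="k8s_container" AND
    resource.labels.cluster_name="${local.k8s_name}" AND
    resource.labels.namespace_name="workable" AND
    resource.labels.container_name="mycontainer-cronjob"
    resource.labels.pod_name:"my-pod"
  EOT
}
```

Note that a heredoc preserves its embedded newlines, which matches the original filter since it already contained a \n before the pod_name clause.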

If it's a matter of formatting the output, this answer covers it.
If you want to make your code more readable, i.e. split the long string over a few lines but keep it as a single line in the output, using the join() function might be worth considering:
resource "google_logging_metric" "my_metric" {
  description = "Check for logs of some cron job\t"
  name        = "mycj-logs"
  project     = "${data.terraform_remote_state.gke_k8s_env.project_id}"
  filter = join(" AND ",
    [
      "resource.type=\"k8s_container\"",
      "resource.labels.cluster_name=\"${local.k8s_name}\"",
      "resource.labels.namespace_name=\"workable\"",
      "resource.labels.container_name=\"mycontainer-cronjob\"",
      "resource.labels.pod_name:\"my-pod\""
    ]
  )

  metric_descriptor {
    metric_kind = "DELTA"
    value_type  = "INT64"
  }
}
Note that setting the code out this way reveals that the OP's filter expression is missing an AND. Because the original expression is one long line, that is very hard to see and slow to read and maintain, and such mistakes are easy to make.

Related

Conditional outputs on data sources in terraform

I have a SQL Server Terraform module that outputs the name of a SQL server for the databases to be created in. However, some environments should instead use an external server outside of the Terraform project. Most of our datacenters do not have this external server; just a few do.
I've set up the external server using a data source as usual, and made the output, the normal server, and the data source all conditional on a variable that's passed in, like this:
variable "use_external_sql_server" {
  type = bool
}

resource "azurerm_mssql_server" "sqlserver" {
  count = var.use_external_sql_server ? 0 : 1
  name  = "sql-interal-sql_server"
  ....
}

data "azurerm_mssql_server" "external_sql_server" {
  count               = var.use_external_sql_server ? 1 : 0
  name                = "sql-${var.env}-${var.location}"
  resource_group_name = "rg-${var.env}-${var.location}"
}

output "sql_server_name" {
  value = var.use_external_sql_server ? data.azurerm_mssql_server.external_sql_server.name : azurerm_mssql_server.sqlserver[0].name
  depends_on = [
    azurerm_mssql_server.sqlserver,
    data.azurerm_mssql_server.external_sql_server
  ]
}
However, I'm running into issues with the output. It requires data.azurerm_mssql_server.external_sql_server to exist in order to evaluate the condition, even if use_external_sql_server is false. This is not ideal, as I would have to manually create dummy servers just so the conditional can evaluate.
Is there a way to write this conditional without data.azurerm_mssql_server.external_sql_server actually existing?
You could get rid of the conditional in the output and just use a try.
try evaluates all of its argument expressions in turn and returns the result of the first one that does not produce any errors.
This is a special function that is able to catch errors produced when evaluating its arguments, which is particularly useful when working with complex data structures whose shape is not well-known at implementation time.
You could then possibly write something like
output "sql_server_name" {
  value = try(data.azurerm_mssql_server.external_sql_server[0].name, azurerm_mssql_server.sqlserver[0].name, "")
  depends_on = [
    azurerm_mssql_server.sqlserver,
    data.azurerm_mssql_server.external_sql_server
  ]
}
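As a side note not in the original answer: on Terraform 0.15 and later, the one() function offers another way to handle the zero-or-one lists that count produces, returning the single element of a one-element list or null for an empty one. Combined with coalesce(), which returns its first non-null argument, the output could be sketched as:

```hcl
output "sql_server_name" {
  # one() yields the element, or null when count = 0 made the list empty;
  # coalesce() then picks whichever of the two is actually present.
  # Note: coalesce() errors if every argument is null.
  value = coalesce(
    one(data.azurerm_mssql_server.external_sql_server[*].name),
    one(azurerm_mssql_server.sqlserver[*].name),
  )
}
```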

Terraform output from several resources generated with count

I have these outputs:
output "ec2_id" {
  value = aws_instance.ec2instance[*].id
}

output "ec2_name" {
  value = aws_instance.ec2instance[*].tags["Name"]
}

output "ec2_mgmt_eip" {
  value = aws_eip.eip_mgmt_ec2instance[*].public_ip
}
I want to make an output like:
"<instanceName>: <instanceID> -> <publicIP>"
(all data in same line for same ec2 instance).
In any imperative language I could use something like for (var i = 0; i < length(myarray); i++) and use i as an index into each list, concatenating into a new string at every index, but I can't find how to do this in Terraform.
Thanks!
Even though you got the answer in the comments, I will add an example. The thing you want does exist in Terraform, as it also has for expressions [1]. A for expression with the right syntax will give you the desired output, which in Terraform is a map:
output "ec2_map" {
  value = { for i in aws_instance.ec2instance : i.tags.Name => "${i.id}:${i.public_ip}" }
}
This is quite similar to the output you described. Note, though, that there is no concept of a "line" in a Terraform output. Since this is a map, the keys will be the instance names and each value will be a single string combining the instance ID and the public IP.
[1] https://www.terraform.io/language/expressions/for
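To get closer to the exact "<instanceName>: <instanceID> -> <publicIP>" format from the question, including the EIP from the separate aws_eip resource, a for expression over the indices could be used (a sketch, assuming the EIPs were created with the same count and ordering as the instances):

```hcl
output "ec2_lines" {
  # Zip the parallel lists by index into a list of formatted strings.
  value = [
    for idx, inst in aws_instance.ec2instance :
    "${inst.tags["Name"]}: ${inst.id} -> ${aws_eip.eip_mgmt_ec2instance[idx].public_ip}"
  ]
}
```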

Terraform dynamically generate strings using data sources output

Is there any way I can feed a Terraform data source's output to another Terraform file as input?
The scenario is: I have Terraform code that fetches the private IP addresses (here three IPs: 10.1.1.1, 10.1.1.2, 10.1.1.3) for a particular tag (here jenkins) using a data source:
data "aws_instances" "mytag" {
  filter {
    name   = "tag:Application"
    values = ["jenkins"]
  }
}

output "output_from_aws" {
  value = data.aws_instances.mytag.private_ips
}
Whenever I run terraform apply, the pattern argument in the metric-filter code below should fetch the resulting IPs from the code above and make them available in the live state (AWS console):
resource "aws_cloudwatch_log_metric_filter" "test" {
  name           = "test-metric-filter"
  pattern        = "[w1,w2,w3,w4!=\"*<IP1>*\"&&w4!=\"*<IP2>*\"&&w4!=\"*<IP3>*\",w5=\"*admin*\"]"
  log_group_name = var.test_log_group_name

  metric_transformation {
    name      = "test-metric-filter"
    namespace = "General"
  }
}
So, the final result of metric pattern in the aws console should be like below
[w1,w2,w3,w4!="*10.1.1.1*"&&w4!="*10.1.1.2*"&&w4!="*10.1.1.3*",w5="*admin*"]
The end goal is that whenever new IPs are generated, they get populated into the pattern (in the AWS console) without changing the metric-filter code.
Any help is appreciated, as I could not find any precise documentation on how Terraform lets us dynamically generate strings from data source output.
Not sure why you need two files for something this simple...
Here is what I would do:
provider "aws" {
  region = "us-east-1"
}

data "aws_instances" "test" {
  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}

resource "aws_cloudwatch_log_metric_filter" "test" {
  name           = "test-metric-filter"
  pattern        = "[w1,w2,w3,w4!=\"*${data.aws_instances.test.private_ips[0]}*\",w5=\"*admin*\"]"
  log_group_name = "test_log_group_name"

  metric_transformation {
    name      = "test-metric-filter"
    namespace = "General"
    value     = 1
  }
}
And a terraform plan will show
Terraform will perform the following actions:

  # aws_cloudwatch_log_metric_filter.test will be created
  + resource "aws_cloudwatch_log_metric_filter" "test" {
      + id             = (known after apply)
      + log_group_name = "test_log_group_name"
      + name           = "test-metric-filter"
      + pattern        = "[w1,w2,w3,w4!=\"*172.31.70.170*\",w5=\"*admin*\"]"

      + metric_transformation {
          + name      = "test-metric-filter"
          + namespace = "General"
          + unit      = "None"
          + value     = "1"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
Concatenating strings is easy: "foo ${var.bar} 123"
In this case private_ips is a list, so we need the [x] index.
For more complex concatenations look into the format function:
https://www.terraform.io/docs/language/functions/format.html
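For instance, the interpolation above could equivalently be written with format() (a sketch of the same pattern):

```hcl
# %s is replaced by each argument in turn, just like printf-style formatting.
pattern = format(
  "[w1,w2,w3,w4!=\"*%s*\",w5=\"*admin*\"]",
  data.aws_instances.test.private_ips[0],
)
```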
I changed the filter so I could test in my environment, and used a shorter pattern than yours, but that is the basis for what you need; just add more clauses or make changes to suit your needs.
What you are looking for is string interpolation in Terraform.
I believe you would want to do the following:
pattern = "[w1,w2,w3,w4!=\"*${data.aws_instances.mytag.private_ips[0]}*\"&&w4!=\"*${data.aws_instances.mytag.private_ips[1]}*\"&&w4!=\"*${data.aws_instances.mytag.private_ips[2]}*\",w5=\"*admin*\"]"
I suggest being careful with this statement, because it will fail if you don't have at least 3 instances. You would want to have something dynamic instead.
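One dynamic possibility (a sketch, not part of the original answer) is to build the repeated w4 clauses with a for expression and join(), so the pattern adapts to however many IPs the data source returns:

```hcl
resource "aws_cloudwatch_log_metric_filter" "test" {
  name           = "test-metric-filter"
  # Build one w4!="*<ip>*" clause per private IP and join them with &&.
  pattern        = "[w1,w2,w3,${join("&&", [for ip in data.aws_instances.mytag.private_ips : "w4!=\"*${ip}*\""])},w5=\"*admin*\"]"
  log_group_name = var.test_log_group_name

  metric_transformation {
    name      = "test-metric-filter"
    namespace = "General"
    value     = 1
  }
}
```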

How to import Datadog JSON template in terraform DSL?

{
  "title": "xxxx",
  "description": "xxx",
  "widgets": [
    {
      "id": 0,
      "definition": {
        "type": "timeseries",
        "requests": [
          {
            "q": "xxxxxx{xxxx:xx}",
            "display_type": "bars",
            "style": {
              "palette": "cool",
              "line_type": "solid",
              "line_width": "normal"
            }
          }
        ]
      }
    }
  ]
}
I have the above Datadog JSON template, which I want to simply import into Terraform instead of recreating it in the Terraform DSL.
I'm not particularly familiar with this Datadog JSON format, but the general pattern I would propose here has multiple steps:
1. Decode the serialized data into a normal Terraform value. In this case that means jsondecode, because the data is JSON-serialized.
2. Transform and normalize that raw data into a consistent shape that is more convenient to use in a declarative Terraform configuration. This will usually involve at least one named local value containing an expression that uses for expressions and the try function, along with the type conversion functions, to force the raw data into a more consistent shape.
3. Use the transformed/normalized result with Terraform's resource and block repetition constructs (resource for_each and dynamic blocks) to describe how the data maps onto physical resource types.
Here's a basic example of that to show the general principle. It will need more work to capture all of the details you included in your initial example.
variable "datadog_json" {
  type = string
}

locals {
  raw = jsondecode(var.datadog_json)

  screenboard = {
    title       = local.raw.title
    description = try(local.raw.description, tostring(null))
    widgets = [
      for w in local.raw.widgets : {
        type        = w.definition.type
        title       = w.definition.title
        title_size  = try(w.definition.title_size, 16)
        title_align = try(w.definition.title_align, "center")
        x           = try(w.definition.x, tonumber(null))
        y           = try(w.definition.y, tonumber(null))
        width       = try(w.definition.width, tonumber(null))
        height      = try(w.definition.height, tonumber(null))
        requests = [
          for r in w.definition.requests : {
            q            = r.q
            display_type = r.display_type
            style        = tomap(try(r.style, {}))
          }
        ]
      }
    ]
  }
}
resource "datadog_screenboard" "acceptance_test" {
  title       = local.screenboard.title
  description = local.screenboard.description
  read_only   = true

  dynamic "widget" {
    for_each = local.screenboard.widgets
    content {
      type        = widget.value.type
      title       = widget.value.title
      title_size  = widget.value.title_size
      title_align = widget.value.title_align
      x           = widget.value.x
      y           = widget.value.y
      width       = widget.value.width
      height      = widget.value.height

      tile_def {
        viz = widget.value.type

        dynamic "request" {
          for_each = widget.value.requests
          content {
            q            = request.value.q
            display_type = request.value.display_type
            style        = request.value.style
          }
        }
      }
    }
  }
}
The separate normalization step to build local.screenboard here isn't strictly necessary: you could instead put the same sort of normalization expressions (using try to set defaults for things that aren't set) directly inside the resource "datadog_screenboard" block arguments if you wanted. I prefer to treat normalization as a separate step because then this leaves a clear definition in the configuration for what we're expecting to find in the JSON and what default values we'll use for optional items, separate from defining how that result is then mapped onto the physical datadog_screenboard resource.
I wasn't able to test the example above because I don't have a Datadog account. I'm sorry if there are minor typos/mistakes in it that lead to errors. My hope was to show the general principle of mapping from a serialized data file to a resource rather than to give a ready-to-use solution, so I hope the above includes enough examples of different situations that you can see how to extend it for the remaining Datadog JSON features you want to support in this module.
If this JSON format is an interchange format formally documented by Datadog, it could make sense for Terraform's Datadog provider to have the option of accepting a single JSON string in this format as configuration, for easier exporting. That may require changes to the Datadog provider itself, which is beyond what I can answer here but might be worth raising in that provider's GitHub issues to streamline this use-case.

How can I output a data source that uses count?

I want to output each VM created and their UUID e.g
data "vsphere_virtual_machine" "vms" {
  count         = "${length(var.vm_names)}"
  name          = "${var.vm_names[count.index]}"
  datacenter_id = "12345"
}

output "vm_to_uuid" {
  # value = "${data.vsphere_virtual_machine.newvms[count.index].name}"
  value = "${data.vsphere_virtual_machine.newvms[count.index].id}"
}
Example output I'm looking for:
"vm_to_uuids": [
  {
    "name": "node1",
    "id": "123456"
  },
  {
    "name": "node2",
    "id": "987654"
  }
]
Use a splat expression in the output value to get the list of IDs for the created VMs, e.g.:
output "vm_to_uuids" {
  value = "${data.vsphere_virtual_machine.vms.*.id}"
}
The exact output format required in your question is one case where function should be preferred over form: writing a Terraform configuration that produces precisely that output isn't straightforward. Instead, I suggest adopting one of these simpler ways to output the same information.
Names mapped to IDs can be output:
output "vm_to_uuids" {
  value = "${zipmap(
    data.vsphere_virtual_machine.vms.*.name,
    data.vsphere_virtual_machine.vms.*.id)}"
}
A map of names and IDs can be output in a columnar manner:
output "vm_to_uuids" {
  value = "${map("name",
    data.vsphere_virtual_machine.vms.*.name,
    "id",
    data.vsphere_virtual_machine.vms.*.id)}"
}
A list of names and IDs can be output in a columnar manner:
output "vm_to_uuids" {
  value = "${list(
    data.vsphere_virtual_machine.vms.*.name,
    data.vsphere_virtual_machine.vms.*.id)}"
}
One thing you could do (if you wanted exactly that output), is use formatlist(format, args, ...)
data "vsphere_virtual_machine" "vms" {
  count         = "${length(var.vm_names)}"
  name          = "${var.vm_names[count.index]}"
  datacenter_id = "12345"
}

output "vm_to_uuid" {
  value = "${join(",", formatlist("{\"name\": \"%s\", \"id\": \"%s\"}", data.vsphere_virtual_machine.vms.*.name, data.vsphere_virtual_machine.vms.*.id))}"
}
I haven't tested the code, but you get the idea. The quote escaping especially is just a guess, but that's easy to figure out from here.
What happens is you take two lists (names and IDs), format a dict-like string from each pair of entries, and then join them together with commas.
