How to import Datadog JSON template in terraform DSL?

{
  "title": "xxxx",
  "description": "xxx",
  "widgets": [
    {
      "id": 0,
      "definition": {
        "type": "timeseries",
        "requests": [
          {
            "q": "xxxxxx{xxxx:xx}",
            "display_type": "bars",
            "style": {
              "palette": "cool",
              "line_type": "solid",
              "line_width": "normal"
            }
          }
        ]
      }
    }
  ]
}
I have the above Datadog JSON template, which I'd like to import into Terraform directly instead of recreating it in Terraform DSL.

I'm not particularly familiar with this Datadog JSON format but the general pattern I would propose here has multiple steps:
1. Decode the serialized data into a normal Terraform value. In this case that means using jsondecode, because the data is JSON-serialized.
2. Transform and normalize that raw data into a consistent shape that is more convenient to use in a declarative Terraform configuration. This will usually involve at least one named local value containing an expression that uses for expressions and the try function, along with the type conversion functions, to force the raw data into a more consistent shape.
3. Use the transformed/normalized result with Terraform's resource and block repetition constructs (resource for_each and dynamic blocks) to describe how the data maps onto physical resource types.
Here's a basic example of that to show the general principle. It will need more work to capture all of the details you included in your initial example.
variable "datadog_json" {
type = string
}
locals {
raw = jsondecode(var.datadog_json)
screenboard = {
title = local.raw.title
description = try(local.raw.description, tostring(null))
widgets = [
for w in local.raw.widgets : {
type = w.definition.type
title = w.definition.title
title_size = try(w.definition.title_size, 16)
title_align = try(w.definition.title_align, "center")
x = try(w.definition.x, tonumber(null))
y = try(w.definition.y, tonumber(null))
width = try(w.definition.x, tonumber(null))
height = try(w.definition.y, tonumber(null))
requests = [
for r in w.definition.requests : {
q = r.q
display_type = r.display_type
style = tomap(try(r.style, {}))
}
]
}
]
}
}
resource "datadog_screenboard" "acceptance_test" {
title = local.screenboard.title
description = local.screenboard.description
read_only = true
dynamic "widget" {
for_each = local.screenboard.widgets
content {
type = widget.value.type
title = widget.value.title
title_size = widget.value.title_size
title_align = widget.value.title_align
x = widget.value.x
y = widget.value.y
width = widget.value.width
height = widget.value.height
tile_def {
viz = widget.value.type
dynamic "request" {
for_each = widget.value.requests
content {
q = request.value.q
display_type = request.value.display_type
style = request.value.style
}
}
}
}
}
}
The separate normalization step to build local.screenboard here isn't strictly necessary: you could instead put the same sort of normalization expressions (using try to set defaults for things that aren't set) directly inside the resource "datadog_screenboard" block arguments if you wanted. I prefer to treat normalization as a separate step because it leaves a clear definition in the configuration of what we expect to find in the JSON and what default values we'll use for optional items, separate from the definition of how that result maps onto the physical datadog_screenboard resource.
I wasn't able to test the example above because I don't have a Datadog account. I'm sorry if there are minor typos/mistakes in it that lead to errors. My hope was to show the general principle of mapping from a serialized data file to a resource rather than to give a ready-to-use solution, so I hope the above includes enough examples of different situations that you can see how to extend it for the remaining Datadog JSON features you want to support in this module.
If this JSON format is an interchange format formally documented by Datadog, it could make sense for Terraform's Datadog provider to have an option to accept a single JSON string in this format as configuration, to streamline importing exported dashboards. That would require changes to the Datadog provider itself, which is beyond what I can answer here, but might be worth raising in that provider's GitHub issues.

Related

How to concatenate strings in Terraform output with for loop?

I have multiple aws_glue_catalog_table resources and I want to create a single output that loops over all resources to show the S3 bucket location of each one. The purpose of this is to test in Terratest whether I am using the correct location (because it is a concatenation of variables) for each resource. I cannot use aws_glue_catalog_table.* or aws_glue_catalog_table.[] because Terraform does not allow referencing a resource without specifying its name.
So I created a variable "table_names" with r1, r2, rx. Then, I can loop over the names. I want to create the string aws_glue_catalog_table.r1.storage_descriptor[0].location dynamically, so I can check if the location is correct.
resource "aws_glue_catalog_table" "r1" {
name = "r1"
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/r1"
}
...
}
resource "aws_glue_catalog_table" "rX" {
name = "rX"
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/rX"
}
}
variable "table_names" {
description = "The list of Athena table names"
type = list(string)
default = ["r1", "r2", "r3", "rx"]
}
output "athena_tables" {
description = "Athena tables"
value = [for n in var.table_names : n]
}
First attempt: I tried to create an output "athena_tables_location" with the syntax aws_glue_catalog_table.${table}, but that does not work.
output "athena_tables_location" {
// HOW DO I ITERATE OVER ALL TABLES?
value = [for t in var.table_names : aws_glue_catalog_table.${t}.storage_descriptor[0].location"]
}
Second attempt: I tried to create a variable "table_name_locations", but IntelliJ already flags ${t} in the for loop [for t in var.table_names : "aws_glue_catalog_table.${t}.storage_descriptor[0].location"] as an error.
variable "table_name_locations" {
description = "The list of Athena table locations"
type = list(string)
// THIS ALSO DOES NOT WORK
default = [for t in var.table_names : "aws_glue_catalog_table.${t}.storage_descriptor[0].location"]
}
How can I list all table locations in the output and then test it with Terratest?
Once I can iterate over the tables and collect the S3 location I can do the following test using Terratest:
athenaTablesLocation := terraform.Output(t, terraformOpts, "athena_tables_location")
assert.Contains(t, athenaTablesLocation, "s3://rX/test-config/rX")
It seems like you have an unusual mix of static and dynamic here: you've statically defined a fixed number of aws_glue_catalog_table resources but you want to use them dynamically based on the value of an input variable.
Terraform doesn't allow dynamic references to resources because its execution model requires building a dependency graph between all of the objects, and so it needs to know which exact resources are involved in a particular expression. However, you can in principle build your own single value that includes all of these objects and then dynamically choose from it:
locals {
  tables = {
    r1 = aws_glue_catalog_table.r1
    r2 = aws_glue_catalog_table.r2
    r3 = aws_glue_catalog_table.r3
    # etc
  }
}

output "table_locations" {
  value = {
    for t in var.table_names : t => local.tables[t].storage_descriptor[0].location
  }
}
With this structure Terraform can see that output "table_locations" depends on local.tables and local.tables depends on all of the relevant resources, and so the evaluation order will be correct.
However, it also seems like your table definitions are systematic based on var.table_names and so could potentially benefit from being dynamic themselves. You could achieve that using the resource for_each feature to declare multiple instances of a single resource:
variable "table_names" {
description = "Athena table names to create"
type = set(string)
default = ["r1", "r2", "r3", "rx"]
}
resource "aws_glue_catalog_table" "all" {
for_each = var.table_names
name = each.key
database_name = var.db_name
storage_descriptor {
location = "s3://${var.bucket_name}/${var.environment}-config/${each.key}"
}
...
}
output "table_locations" {
value = {
for k, t in aws_glue_catalog_table.all : k => t.storage_descriptor[0].location
}
}
In this case aws_glue_catalog_table.all represents all of the tables together as a single resource with multiple instances, each one identified by the table name. for_each resources appear in expressions as maps, so this will declare resource instances with addresses like this:
aws_glue_catalog_table.all["r1"]
aws_glue_catalog_table.all["r2"]
aws_glue_catalog_table.all["r3"]
...
Because this is already a map, this time we don't need the extra step of constructing the map in a local value, and can instead just access this map directly to build the output value, which will be a map from table name to storage location:
{
  r1 = "s3://BUCKETNAME/ENVNAME-config/r1"
  r2 = "s3://BUCKETNAME/ENVNAME-config/r2"
  r3 = "s3://BUCKETNAME/ENVNAME-config/r3"
  # ...
}
In this example I've assumed that all of the tables are identical aside from their names, which I expect isn't true in practice, but I was going only by what you included in the question. If the tables do need different settings, then you can change var.table_names into a variable "tables" whose type is a map of object type, where the values describe the differences between the tables. That's somewhat beyond the scope of this question, so I won't go into full detail here beyond the rough sketch below.
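The sketch (untested) uses a made-up per-table attribute, location_suffix, standing in for whatever actually differs between your tables:

variable "tables" {
  description = "Athena tables to create, keyed by table name"
  type = map(object({
    location_suffix = string # hypothetical per-table setting
  }))
}

resource "aws_glue_catalog_table" "all" {
  for_each = var.tables

  name          = each.key
  database_name = var.db_name

  storage_descriptor {
    location = "s3://${var.bucket_name}/${var.environment}-config/${each.value.location_suffix}"
  }
}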

Any way to conditionalize variable in jsonencoded data?

Say I have the following simplified snippet that creates a task definition as JSON:
...
task_container_definitions = jsonencode([{
  name : var.name,
  image : "${var.image}:${var.tag}",
  cpu : var.cpu,
  memory : var.memory,
}])
...
Say I want to add a variable to optionally create an additional definition so it looks something like this:
variable "another_definition" {
type = any
default = {}
}
...
task_container_definitions = jsonencode([
  {
    name : var.name,
    image : "${var.image}:${var.tag}",
    cpu : var.cpu,
    memory : var.memory,
  },
  var.another_definition,
])
And define it as follows.
another_definition = {
  name      = "another_container"
  image     = "another_container"
  cpu       = 10
  memory    = 512
  essential = true
}
I am able to get this to output as expected as long as the variable is defined.
...
+ {
    + cpu       = 10
    + essential = true
    + image     = "another_container"
    + memory    = 512
    + name      = "another_container"
  },
But if the variable is not defined, I see an empty {} added to the output when I run terraform plan, which is not what I expect. I have tried using null as the default as well, but I get an error.
...
+ {},
Is there a way to toggle this variable off so that if it is not defined then it doesn't show up in the outputted json definition? Is there a better approach than what I am attempting?
At first I thought you were asking for the functionality of the merge function, and I mention that in case I was right the first time. But I now understand your problem to be that you want task_container_definitions to have either one or two elements, depending on whether var.another_definition is set.
There's no single function for that particular situation, but I think we can combine some language features together to get that result.
First, let's decide that the variable being set means that it has a non-null value, and thus its default value should be null to represent the "unset" case:
variable "another_definition" {
type = any
default = null
validation {
# The time constraint above is looser than we really
# want, so this validation rule also enforces that
# the caller can't set this to something inappropriate,
# like a single string or a list.
condition = (
var.another_definition != null ?
can(keys(var.another_definition)) :
true
)
error_message = "Additional task container definition must be an object."
}
}
In Terraform it's a pretty common situation to need to convert between a value that might be null and a list that might have zero or one elements, or vice-versa, and so Terraform has some language features to help with that. In this case we can use a splat expression to concisely represent that. Let's see how that looks in terraform console first just to give a sense of what we're achieving with this:
$ terraform console
> null[*]
[]
> "hello"[*]
[
  "hello",
]
> { object = "example" }[*]
[
  {
    "object" = "example"
  },
]
Notice that when I applied the [*] operator to null it returned an empty list, but when I applied it to these other values it converted them to a single-element list. This is how the [*] operator behaves when you apply it to something that isn't a list; see the splat operator docs if you want to learn about the different behavior for lists, which isn't really relevant here because of the validation rule I added above which prevents the var.another_definition value from being a list.
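For reference, applying [*] to a value that is already a list returns it unchanged; continuing the same console session:

> tolist(["a", "b"])[*]
[
  "a",
  "b",
]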
Another tool we have in our Terraform toolbox here is the concat function, which takes one or more lists and returns a single list with all of the input elements concatenated together in the given order. We can use it to combine your predefined list (populated from var.name, var.cpu, etc.) with the zero-or-one-element list created by [*], producing a final list of one or two elements:
locals {
  task_container_definitions = concat(
    [
      {
        name   = var.name
        image  = "${var.image}:${var.tag}"
        cpu    = var.cpu
        memory = var.memory
      },
    ],
    var.another_definition[*],
  )

  task_container_definitions_json = jsonencode(local.task_container_definitions)
}
If any of the arguments to concat are empty lists then they are effectively ignored altogether, because they contribute no elements to the result, and so this achieves (what I hope is) the desired result, by making the "other definition" appear in the result only when it's set to something other than null.
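To illustrate the overall effect, here's a quick terraform console sketch using simplified stand-in values:

$ terraform console
> concat([{ name = "main" }], null[*])
[
  {
    "name" = "main"
  },
]
> concat([{ name = "main" }], { name = "another_container" }[*])
[
  {
    "name" = "main"
  },
  {
    "name" = "another_container"
  },
]

The null case contributes nothing to the result, so the spurious {} element from the original configuration no longer appears.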

Get type of a variable in Terraform

Is there a way to detect the type of a variable in Terraform? Say, I have a module input variable of type any, can I do some kind of switch, depending on the type?
variable "details" {
type = any
}
local {
name = var.details.type == map ? var.details["name"] : var.details
}
What I want to achieve is to be able to pass either a string as shorthand or a complex object with additional keys.
module "foo" {
details = "my-name"
}
or
module "foo" {
details = {
name = "my-name"
age = "40"
}
}
I know this example doesn't make much sense, and you might suggest using two input variables with defaults instead. This example is just reduced to a minimal (non)working example. The end goal is to have a list of IAM policy statements, so it is going to be a list of lists of objects.
Terraform v0.12.20 introduced a new function try which can be used to concisely select between different ways of retrieving a value, taking the first one that wouldn't produce an error.
variable "person" {
type = any
# Optional: add a validation rule to catch invalid types,
# though this feature remains experimental in Terraform v0.12.20.
# (Since this is experimental at the time of writing, it might
# see breaking changes before final release.)
validation {
# If var.person.name succeeds then var.person is an object
# which has at least the "name" attribute.
condition = can(var.person.name) || can(tostring(var.person))
error_message = "The \"person\" argument must either be a person object or a string giving a person's name."
}
}
locals {
  person = try(
    # The value of the first successful expression is taken.
    { name = tostring(var.person) }, # if the value is just a string
    var.person,                      # if the value is not a string (directly an object)
  )
}
Elsewhere in the configuration you can then write local.person.name to obtain the name, regardless of whether the caller passed an object or a string.
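For example (a made-up output, just to show the access pattern):

output "person_name" {
  # Works whether the caller passed a bare string or an object.
  value = local.person.name
}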
The remainder of this answer is an earlier response that now applies only to Terraform versions between v0.12.0 and v0.12.20.
There is no mechanism for switching behavior based on types in Terraform. Generally Terraform favors selecting specific types so that module callers are always consistent and Terraform can fully validate the given values, even if that means a little extra verbosity in simpler cases.
I would recommend just defining details as an object and having the caller explicitly write out the object with the name attribute, in order to be more explicit and consistent:
variable "details" {
type = object({
name = string
})
}
module "example" {
source = "./modules/example"
details = { name = "example" }
}
If you need to support two different types, the closest thing in the Terraform language would be to define two variables and detect which one is null:
variable "details" {
type = object({
name = string
})
default = null
}
variable "name" {
type = string
default = null
}
local {
name = var.name != null ? var.name : var.details.name
}
However, since there is not currently a way to express that exactly one of those two must be specified, the module configuration you write must be ready to deal with the possibility that both will be set (in the above example, var.name takes priority) or that neither will be set (in the above example, the expression would produce an error, but not a very caller-friendly one).
Terraform v1.0+ introduced a new function, type(), for this purpose. See https://www.terraform.io/language/functions/type
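For example, in terraform console (the exact rendering of the result may vary slightly between versions):

$ terraform console
> type("my-name")
string
> type({ name = "my-name", age = "40" })
object({
    age: string,
    name: string,
})

Note that type() is intended primarily for inspection and debugging; for branching behavior inside a configuration, the try/can approach above is usually the more practical tool.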

How to create unstructured data in terraform

I'm trying to create configurations in terraform that I can later pass to modules (I'm doing this to work around the lack of "count" in modules).
The closest thing I found was null_data_source, but the problem is that it only supports a single level of properties in inputs:
data "null_data_source" "my_data" {
count = var.my_data_count
inputs = {
settings = { ... } //this doesn't work
}
}
Then I looked at the docs for how to create a custom provider, but I couldn't work around the types Terraform supports: TypeMap is automatically turned into map[string]string unless I pass the Elem property, which in turn only accepts Terraform-defined types (it doesn't accept standard Go types, e.g. map[string]interface{} or interface{}).
Does anyone know a way to get unstructured data as config like this?
There is no such thing as "unstructured data" in Terraform: every value has an associated type. However, Terraform 0.12 introduced two structural types that allow different element/attribute types to be mixed together inside a single value, which is not possible with the collection types.
You can use Local Values if you need to factor out the expressions for these structural values for use in multiple locations:
locals {
  your_data = {
    settings = {
      foo = "bar"
      baz = []
    }
  }
}
Although the details of this often don't matter, Terraform will see the above as being of the following type:
object({
  settings = object({
    foo = string
    baz = tuple([])
  })
})
As the author of a module, you can associate with each variable a type constraint that both checks that the given value has the appropriate type and gives Terraform some hints to interpret the value differently. For example, if baz in the above example were a list of strings whose length isn't fixed by the module (often the case), then you can specify it as such in your type constraint:
variable "example" {
type = object({
settings = object({
foo = string
baz = list(string)
})
})
}
Then the caller can pass in the local value we constructed earlier:
module "example" {
source = "./modules/example"
example = local.your_data
}
Terraform will then take the tuple([]) value from the local value and convert it automatically to list(string), in this case creating an empty list of strings.
For Terraform 0.11 your options are more limited, because it does not have structural types. In that case, the usual approach is to flatten the structure into many separate variables and set them separately, but then it's not possible to conveniently construct them all in one place and pass them as a single value.
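For instance, a rough 0.11-style sketch of that flattening for the settings example above (note that 0.11 type names are quoted strings, and the nested structure is lost):

variable "settings_foo" {
  type = "string"
}

variable "settings_baz" {
  type = "list"
}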

How do I pick elements from a terraform list

I am creating a series of resources in Terraform (in this case, DynamoDB tables) and I want to apply IAM policies to subgroups of them. E.g.:
resource "aws_dynamodb_table" "foo" {
count = "${length(var.tables)}"
name = "foo-${element(var.tables,count.index)}"
tags {
Name = "foo-${element(var.tables,count.index)}"
Environment = "<unsure how to get this>"
Source = "<unsure how to get this>"
}
}
All of these share some common elements: e.g. var.tables is a list composed of the Cartesian product of var.environments and var.sources:
environments = ["dev", "qa", "prod"]
sources      = ["a", "b", "c"]
So:
tables = ["a:dev", "a:qa", "a:prod", "b:dev", "b:qa", "b:prod", "c:dev", "c:qa", "c:prod"]
I want to get the ARNs of the created DynamoDB tables that have, e.g., c (i.e. those with the names ["c:dev","c:qa","c:prod"]) or prod (i.e. those with the names ["a:prod","b:prod","c:prod"]).
Is there any sane way to do this with terraform 0.11 (or even 0.12 for that matter)?
I am looking to:
- group the DynamoDB table resources by some of the inputs (environment or source) so I can apply some policy to each group
- extract the input for each created one so I can apply the correct tags
I was thinking of, potentially, creating maps for each input instead of the cross-product list:
{
  "a": ["dev", "qa", "prod"],
  "b": ["dev", "qa", "prod"],
  "c": ["dev", "qa", "prod"]
}
or
{
  "dev": ["a", "b", "c"],
  "qa": ["a", "b", "c"],
  "prod": ["a", "b", "c"]
}
It would make it easy to find the target names for each one, since I can look up by the input, but that only gives me the names; it does not make it easy to get the actual resources (and hence the ARNs).
Thanks!
A Terraform 0.12 solution would be to derive the cartesian product automatically (using setproduct) and use a for expression to shape it into a form that's convenient for what you need. For example:
locals {
  environments = ["dev", "qa", "prod"]
  sources      = ["a", "b", "c"]

  tables = [for pair in setproduct(local.environments, local.sources) : {
    environment = pair[0]
    source      = pair[1]
    name        = "${pair[1]}:${pair[0]}"
  }]
}
resource "aws_dynamodb_table" "foo" {
count = length(local.tables)
name = "foo-${local.tables[count.index].name}"
tags {
Name = "foo-${local.tables[count.index].name}"
Environment = local.tables[count.index].environment
Source = local.tables[count.index].source
}
}
At the time I write this the resource for_each feature is still in development, but in a near-future Terraform v0.12 minor release it should be possible to improve this further by making these table instances each be identified by their names, rather than by their positions in the local.tables list:
# (with the same "locals" block as in the above example)
resource "aws_dynamodb_table" "foo" {
  for_each = { for t in local.tables : t.name => t }

  name = "foo-${each.key}"

  tags = {
    Name        = "foo-${each.key}"
    Environment = each.value.environment
    Source      = each.value.source
  }
}
As well as cleaning up some redundancy in the syntax, this new for_each form will cause Terraform to identify these instances with addresses like aws_dynamodb_table.foo["a:dev"] instead of aws_dynamodb_table.foo[0], which means you'll be able to freely add and remove members of the two initial lists without causing churn and replacement of other instances just because the list indices changed.
This sort of thing would be much harder to achieve in Terraform 0.11. There are some general patterns that can help translate certain 0.12-only constructs to 0.11-compatible features, which might work here:
- A for expression returning a sequence (one with square brackets around it, rather than braces) can be simulated with a data "null_data_source" block with count set, as long as the result would have been a map of string values only.
- A Terraform 0.12 object in a named local value can in principle be replaced with a separate simple map local value for each object attribute, using a common set of keys in each map.
- Terraform 0.11 does not have the setproduct function, but for sequences this small it's not a huge problem to write out the Cartesian product yourself, as you did in the question.
The result will certainly be very inelegant, but I expect it's possible to get something working on Terraform 0.11 if you apply the above ideas and make some compromises.
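As a rough illustration of the second idea, here's an untested 0.11-style sketch that pairs each table name with its attributes via parallel maps sharing a common set of keys (the map contents are abbreviated by hand):

locals {
  table_environments = {
    "a:dev" = "dev"
    "a:qa"  = "qa"
    # etc
  }

  table_sources = {
    "a:dev" = "a"
    "a:qa"  = "a"
    # etc
  }
}

resource "aws_dynamodb_table" "foo" {
  count = "${length(var.tables)}"
  name  = "foo-${element(var.tables, count.index)}"

  tags {
    Name        = "foo-${element(var.tables, count.index)}"
    Environment = "${lookup(local.table_environments, element(var.tables, count.index))}"
    Source      = "${lookup(local.table_sources, element(var.tables, count.index))}"
  }
}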
