Determining whether a value is known at plan-time - terraform

Terraform allows values to be marked as "unknown" during the plan step, since many values can only be known after certain resources have been applied.
Is there any way, during the plan step, to check if a value is known or unknown?
Specifically, I'd like to be able to do something like this:
locals {
  foo = "hello world"
  bar = uuid()
}

output "foo_known" {
  value = knownatplan(local.foo)
}

output "bar_known" {
  value = knownatplan(local.bar)
}
Outputs:
foo_known = true
bar_known = false
Where knownatplan would be a function, or some sort of other mechanism, to determine if the value is known at plan time.

There is no mechanism to do this, because providing one would break an assumption that Terraform relies on to do its work: replacing unknown values with known values during the final apply step can only add information, never change it. That guarantee is fundamental to the concept of planning a change before taking any side effects.
Other systems whose language doesn't have this mechanism can only provide a hypothetical "dry run" of changes that may not be complete or accurate, whereas Terraform aims to make the additional promise that it will either make the changes as shown in the plan or return an error explaining why it cannot. Any situation where applying the plan succeeds but generates a result other than what the plan reported is always considered to be a bug, either in Terraform Core itself or in the relevant provider. Unknown values are a big part of how Terraform keeps that promise.
I wrote about this in more detail in a blog article Unknown Values: The Secret to Terraform Plan.
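To make the underlying rule concrete: any value derived from an unknown value is itself unknown, so during the plan both locals in this minimal illustration would show as "(known after apply)" wherever they are used, and they only get filled in (never changed) at apply time.

locals {
  bar       = uuid()           # not known until apply
  bar_upper = upper(local.bar) # derived from an unknown value, so also unknown at plan time
}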

I had a little bit of time now, and could implement a cool "feature" the inventors of Terraform would probably not like much. ^-^
I am the author of terraform-provider-value (github) (terraform registry) and there I have two resources called value_is_fully_known (link) and value_is_known (link). Sounds good? Just look in the documentation or look here for some examples.
Fundamentally they allow you to have a true or false for a value that might be "(known after apply)" before you actually apply! Here is an example:
terraform {
  required_providers {
    value = {
      source  = "pseudo-dynamic/value"
      version = "0.5.1"
    }
  }
}

locals {
  foo = "hello world"
  bar = uuid()
}

resource "value_unknown_proposer" "default" {}

resource "value_is_known" "foo" {
  value            = local.foo
  guid_seed        = "foo"
  proposed_unknown = value_unknown_proposer.default.value
}

resource "value_is_known" "bar" {
  value            = local.bar
  guid_seed        = "bar"
  proposed_unknown = value_unknown_proposer.default.value
}

output "foo_known" {
  value = value_is_known.foo.result
}

output "bar_known" {
  value = value_is_known.bar.result
}
which results in:
Outputs:
bar_known = false
foo_known = true
Before the actual apply?
Concretely, you run through two plan-phases: one I call the plan-phase, and the other is part of what I call the apply-phase, which includes another, independent and implicit plan-phase.
You can recognize the plan-phase when you see the plan, the potential results (most of the time with a lot of "known after apply" values) and the message "Terraform will perform the following actions:".
The apply-phase, on the other hand, is when all values are calculated, whether from the perspective of the provider author, Terraform, or you, and it is what you see completing with "Apply complete!".
It was very challenging to implement an acceptable solution because during the plan-phase the provider has no chance to save anything, as nothing is persistent. Only changes made in the implicit plan-phase are stored, and only when the apply-phase was successful.
My thoughts
Knowing before you apply whether a value is known or unknown can enable some cool workflows in Terraform, but they should be very limited. I can only encourage you to use this mechanism as rarely as possible. But if you have some cool uses for it, would you mind sharing them? Feel free to open a pull request to add an example or start a discussion.
I appreciate any form of feedback (constructive criticism, bugs and ideas) a lot. =)

Related

Terraform v0.13 - Check if password or secret provided, use a randomly generated one if not

I'm working to fine-tune some of my Terraform modules, specifically around the google_compute_vpn_tunnel, google_compute_router_interface, and google_compute_router_peer resources. I'd like to make things similar to AWS, where pre-shared keys and tunnel interface IP addresses are randomized by default, but can be overridden by the user (provided they are within a certain range).
The random option is working fine. For example, to create a 20-character random password, I do this:
resource "random_password" "RANDOM_PSK" {
length = 20
special = false
}
But, I only want to use this value if an input variable called vpn_shared_secret was not defined. Seems like this should work:
variable "vpn_shared_secret" {
type = string
default = null
}
locals {
vpn_shared_secret = try(var.vpn_shared_secret, random_password.RANDOM_PSK.result)
}
resource "google_compute_vpn_tunnel" "VPN_TUNNEL" {
shared_secret = local.vpn_shared_secret
}
Instead, it seems to ignore the vpn_shared_secret input variable and just go with the randomly generated one each time.
Is try() the correct way to be doing this? I'm just now learning Terraform if/else and map statements.
How about the coalesce() function?
The coalesce function takes any number of arguments, and returns the first argument that isn't null or an empty string.
locals {
  vpn_shared_secret = coalesce(var.vpn_shared_secret, random_password.RANDOM_PSK.result)
}
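Note that try() only falls back when evaluating an argument produces an error, and a variable that is simply null does not raise one, so try() isn't the right tool for "use a default when unset". If you'd rather treat only null (and not the empty string) as "not provided", an explicit conditional is an equivalent sketch:

locals {
  vpn_shared_secret = var.vpn_shared_secret != null ? var.vpn_shared_secret : random_password.RANDOM_PSK.result
}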

Is it possible to report an error on a condition with Terraform 0.12?

Original reference - Quit condition on Terraform blueprint
Is it still possible to make a conditional check like in the question above?
resource "null_resource" "condition_checker" {
count = "${var.variable == 1 ? 0 : 1}"
"Insert your custom error message" = true
}
A similar format does not work in Terraform 0.12 and 0.13, and I could not find any reference to this feature being removed. Is it possible to make a check like this in 0.12 or 0.13?
Currently it is still not possible to validate inputs that require access to more than a single variable. (The validation block only allows access to the variable being validated.)
A hacky validation is still possible using the external data source:
data "external" "check_valid" {
count = var.to_test == true && some_other_condition ? 1 : 0
program = ["sh", "-c", ">&2 echo Condition must be satisfied when to_test is true; exit 1"]
}
This condition is checked before terraform asks for approval of a plan.
On the output it looks like this:
Error: failed to execute "sh": Condition must be satisfied when to_test is true
on variables.tf line 1, in data "external" "check_valid":
1: data "external" "check_valid" {
What you're referring to here was never an actual Terraform feature, but rather an example of exploiting a bug in an earlier version of Terraform to get a result that Terraform had no explicit support for.
With that said, modern versions of Terraform have support for custom variable validation rules which allow you to write out variable validation checks directly inside the corresponding variable block. For example:
variable "variable" {
type = number
validation {
condition = var.variable == 1
error_message = "Variable value must always be 1."
}
}
With that said, I just copied your contrived example from the question here, so this would require some adaptation for a real example. Note also that variable validation rules can only depend on the variable value and other constants, so you can't use this for more complicated checks such as those which involve two different variables. For that sort of situation, I'd recommend refactoring so that the related values arrive in a single variable of an object type, and then the validation can check whether that object as a whole is valid.
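For example, here is a hedged sketch of that refactoring (the variable name and attributes are hypothetical); because both values live in the same object, the validation can compare them:

variable "database" {
  # Hypothetical object variable grouping two related values.
  type = object({
    username = string
    password = string
  })

  validation {
    condition     = var.database.password != var.database.username
    error_message = "The database password must not be the same as the username."
  }
}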

Pattern for templating arguments for existing Terraform resources

I'm using the Terraform GitHub provider to define GitHub repositories for an internal GitHub Enterprise instance (although the question isn't provider-specific).
The existing github_repository resource works fine, but I'd like to be able to set non-standard defaults for some of its arguments, and easily group other arguments under a single new argument.
e.g.
github_repository's private value defaults to false but I'd like to default to true
Many repos will only want to allow squash merges, so having a squash_merge_only parameter which sets allow_squash_merge = true, allow_rebase_merge = false, allow_merge_commit = false
There are more cases but these illustrate the point. The intention is to make it simple for people to configure new repos correctly and to avoid having large amounts of config repeated across every repo.
I can achieve this by passing variables into a custom module, e.g. something along the lines of:
Foo/custom_repo/main.tf:
resource "github_repository" "custom_repo" {
name = ${var.repo_name}
private = true
allow_squash_merge = true
allow_merge_commit = ${var.squash_merge_only ? false : true}
allow_rebase_merge = ${var.squash_merge_only ? false : true}
}
Foo/main.tf:
provider "github" {
...
}
module "MyRepo_module" {
source = "./custom_repo"
repo_name = "MyRepo"
squash_merge_only = true
}
This is a bit rubbish though, as I have to add a variable for every other argument on github_repository that people using the custom_repo module might want to set (which is basically all of them - I'm not trying to restrict what people are allowed to do) - see name and repo_name on the example. This all then needs documenting separately, which is also a shame given that there are good docs for the existing provider.
Is there a better pattern for reusing existing resources like this but having some control over how arguments get passed to them?
We created an opinionated module (terraform 0.12+) for this at https://github.com/mineiros-io/terraform-github-repository
We set all the defaults to values we think fit best, but basically you can create a set of local defaults and reuse them when calling the module multiple times.
Fun fact: your desired defaults are already the module's defaults right away, but to be clear about how to set those explicitly, here is an example (untested):
locals {
  my_defaults = {
    # actually already the module's default to create private repositories
    private = true

    # also the module's default already and
    # all other strategies are disabled by default
    allow_squash_merge = true
  }
}

module "repository" {
  source  = "mineiros-io/repository/github"
  version = "0.4.0"

  name     = "my_new_repository"
  defaults = local.my_defaults
}
Not all arguments are supported as defaults yet, but most are: https://github.com/mineiros-io/terraform-github-repository#defaults-object-attributes
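To reuse those shared defaults while still tweaking individual repositories, one option (a sketch, untested against the module; the repository name and the overridden attribute are just illustrative) is to layer per-repo overrides on top of the shared local with merge():

module "tooling_repo" {
  source  = "mineiros-io/repository/github"
  version = "0.4.0"

  name     = "tooling"
  # merge() keeps everything from local.my_defaults and overrides one attribute
  defaults = merge(local.my_defaults, { allow_rebase_merge = true })
}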

How do I pick elements from a terraform list

I am creating a series of resources in terraform (in this case, dynamo DB table). I want to apply IAM policies to subgroups of them. E.g.
resource "aws_dynamodb_table" "foo" {
count = "${length(var.tables)}"
name = "foo-${element(var.tables,count.index)}"
tags {
Name = "foo-${element(var.tables,count.index)}"
Environment = "<unsure how to get this>"
Source = "<unsure how to get this>"
}
}
All of these share some common elements, e.g. var.tables is a list composed of the Cartesian product of var.environments and var.sources:
environments = ["dev","qa","prod"]
sources = ["a","b","c"]
So:
tables = ["a:dev","a:qa","a:prod","b:dev","b:qa","b:prod","c:dev","c:qa","c:prod"]
I want to get the ARNs of the created Dynamo tables that have, e.g., c (i.e. those with the names ["c:dev","c:qa","c:prod"]) or prod (i.e. those with the names ["a:prod","b:prod","c:prod"]).
Is there any sane way to do this with terraform 0.11 (or even 0.12 for that matter)?
I am looking to:
group the dynamo db table resources by some of the inputs (environment or source) so I can apply some policy to each group
Extract the input for each created one so I can apply the correct tags
I was thinking of, potentially, instead of creating the cross-product list, to create maps for each input:
{
  "a": ["dev","qa","prod"],
  "b": ["dev","qa","prod"],
  "c": ["dev","qa","prod"]
}
or
{
  "dev": ["a","b","c"],
  "qa": ["a","b","c"],
  "prod": ["a","b","c"]
}
It would make it easy to find the target names for each one, since I can look up by the input, but that only gives me the names; it does not make it easy to get the actual resources (and hence the ARNs).
Thanks!
A Terraform 0.12 solution would be to derive the cartesian product automatically (using setproduct) and use a for expression to shape it into a form that's convenient for what you need. For example:
locals {
  environments = ["dev", "qa", "prod"]
  sources      = ["a", "b", "c"]

  tables = [for pair in setproduct(local.environments, local.sources) : {
    environment = pair[0]
    source      = pair[1]
    name        = "${pair[1]}:${pair[0]}"
  }]
}

resource "aws_dynamodb_table" "foo" {
  count = length(local.tables)

  name = "foo-${local.tables[count.index].name}"

  tags = {
    Name        = "foo-${local.tables[count.index].name}"
    Environment = local.tables[count.index].environment
    Source      = local.tables[count.index].source
  }
}
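To then group the tables for IAM policies (the original goal), a for expression can filter the count-based instances by one of the inputs. A hedged sketch, collecting just the prod ARNs into a hypothetical local:

locals {
  # Indexes into aws_dynamodb_table.foo line up with local.tables because
  # both use the same list, so we can filter by the environment attribute.
  prod_table_arns = [
    for i, t in local.tables : aws_dynamodb_table.foo[i].arn
    if t.environment == "prod"
  ]
}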
At the time I write this the resource for_each feature is still in development, but in a near-future Terraform v0.12 minor release it should be possible to improve this further by making these table instances each be identified by their names, rather than by their positions in the local.tables list:
# (with the same "locals" block as in the above example)
resource "aws_dynamodb_table" "foo" {
for_each = { for t in local.tables : t.name => t }
name = "foo-${each.key}"
tags {
Name = "foo-${each.key}"
Environment = each.value.environment
Source = each.value.source
}
}
As well as cleaning up some redundancy in the syntax, this new for_each form will cause Terraform to identify these instances with addresses like aws_dynamodb_table.foo["a:dev"] instead of aws_dynamodb_table.foo[0], which means that you'll be able to freely add and remove members of the two initial lists without causing churn and replacement of other instances because the list indices changed.
This sort of thing would be much harder to achieve in Terraform 0.11. There are some general patterns that can help translate certain 0.12-only constructs to 0.11-compatible features, which might work here:
A for expression returning a sequence (one with square brackets around it, rather than braces) can be simulated with a data "null_data_source" block with count set, if the result would've been a map of string values only.
A Terraform 0.12 object in a named local value can in principle be replaced with a separate simple map of local value for each object attribute, using a common set of keys in each map.
Terraform 0.11 does not have the setproduct function, but for sequences this small it's not a huge problem to just write out the cartesian product yourself as you did in the question here.
The result will certainly be very inelegant, but I expect it's possible to get something working on Terraform 0.11 if you apply the above ideas and make some compromises.

Handling a Terraform AMI lookup returning an empty list

Is there a better way than the following to handle a Terraform data resource aws_ami_ids returning an empty list?
I always want the module to return the latest AMI's ID if one is found.
If the list was empty I was getting a "list "data.aws_ami_ids.full_unencrypted_ami.ids" does not have any elements so cannot determine type." error, so this was the workaround.
data "aws_ami_ids" "full_unencrypted_ami" {
name_regex = "${var.ami_unencrypted_regex}"
owners = ["123456789","self"]
}
locals {
notfound = "${list("AMI Not Found")}"
unencrypted_ami = "${concat(data.aws_ami_ids.full_unencrypted_ami.ids,local.notfound)}"
}
output "full_ami_unencrypted_id" {
description = "Full Unencrypted AMI ID"
value = "${local.full_unencrypted_ami[0]}"
}
1) Use the singular aws_ami data source instead of aws_ami_ids so that terraform apply fails if the AMI is gone, forcing you to update your Terraform solution.
OR
2) Create two aws_ami_ids data sources (the second being a fallback), concat the results and take the first item. But, as ydaetskcoR hinted at, why would you want this implicit (possibly undetected) fallback?
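A minimal sketch of option 1 (untested, reusing the variables from the question): the singular aws_ami data source fails the run outright when nothing matches instead of silently returning an empty list.

data "aws_ami" "full_unencrypted_ami" {
  most_recent = true
  name_regex  = "${var.ami_unencrypted_regex}"
  owners      = ["123456789", "self"]
}

output "full_ami_unencrypted_id" {
  description = "Full Unencrypted AMI ID"
  value       = "${data.aws_ami.full_unencrypted_ami.id}"
}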
