Does Terraform support conditional attributes? I only want to use an attribute depending on a variable's value.
Example:
resource "aws_ebs_volume" "my_volume" {
availability_zone = "xyz"
size = 30
if ${var.staging_mode} == true:
snapshot_id = "a_specific_snapshot_id"
endif
}
The above if statement enclosing the snapshot_id attribute is what I'm looking for. Does Terraform support such attribute inclusion based on a variable's value?
Terraform 0.12 (yet to be released at the time of writing) will also bring support for HCL2, which allows you to use nullable arguments with something like this:
resource "aws_ebs_volume" "my_volume" {
availability_zone = "xyz"
size = 30
snapshot_id = var.staging_mode ? local.a_specific_snapshot_id : null
}
Nullable arguments are covered in this 0.12 preview guide.
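For completeness, here is a minimal sketch of the variable and local that the example above assumes (the names and the snapshot ID are illustrative placeholders, not part of the original answer):

variable "staging_mode" {
  type    = bool
  default = false
}

locals {
  # Hypothetical snapshot ID used only for illustration.
  a_specific_snapshot_id = "snap-0123456789abcdef0"
}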
For versions of Terraform before 0.12, Markus's answer is probably your best bet, although I'd be more explicit with the count, with something like this:
resource "aws_ebs_volume" "staging_volume" {
count = "${var.staging_mode ? 1 : 0}"
availability_zone = "xyz"
size = 30
snapshot_id = "a_specific_snapshot_id"
}
resource "aws_ebs_volume" "non_staging_volume" {
count = "${var.staging_mode ? 0 : 1}"
availability_zone = "xyz"
size = 30
}
Note that the resource names must be unique or Terraform will complain. This then causes issues if you need to refer to the EBS volume, such as with an aws_volume_attachment, because in pre-0.12 Terraform the ternary expression is not lazy, so something like this doesn't work:
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdh"
volume_id = "${var.staging_mode ? aws_ebs_volume.staging_volume.id : aws_ebs_volume.non_staging_volume.id}"
instance_id = "${aws_instance.web.id}"
}
This fails because Terraform attempts to evaluate both sides of the ternary, while only one of them can be valid at any point. In Terraform 0.12 this will no longer be the case, but obviously you could solve it more easily with the nullable arguments.
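A commonly used pre-0.12 workaround (sketched here, not taken from the answer above) is to splat both resources and take the first element, since only one of the two splat lists will be non-empty:

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdh"
  # concat() merges the two splat lists; exactly one of them contains an ID.
  volume_id   = "${element(concat(aws_ebs_volume.staging_volume.*.id, aws_ebs_volume.non_staging_volume.*.id), 0)}"
  instance_id = "${aws_instance.web.id}"
}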
I'm not aware of such a feature; however, you can model around this if your cases are not too complicated. Since the Boolean values true and false are treated as 1 and 0, you can use them within a count. So you may use:
provider "null" {}
resource "null_resource" "test1" {
count = "${var.condition ? 1 : 0}"
}
resource "null_resource" "test2" {
count = "${var.condition ? 0 : 1}"
}
output "out" {
value = "${var.condition ? join(",",null_resource.test1.*.id) : join(",",null_resource.test2.*.id) }"
}
Only one of the two resources is created due to the count attribute.
You have to use join for the values, because this seems to handle the nonexistence of one of the two values gracefully.
Thanks to ydaetskcor for pointing out the improvements to variable handling in their answer.
Now that Terraform v0.12 and the accompanying HCL2 have been released, you can achieve this by simply setting the variable's default value to null. Look at this example from the Terraform website:
variable "override_private_ip" {
type = string
default = null
}
resource "aws_instance" "example" {
# ... (other aws_instance arguments) ...
private_ip = var.override_private_ip
}
More info here:
https://www.hashicorp.com/blog/terraform-0-12-conditional-operator-improvements
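Applied to the original question, a sketch might look like this (the variable name is illustrative):

variable "staging_snapshot_id" {
  type    = string
  default = null
}

resource "aws_ebs_volume" "my_volume" {
  availability_zone = "xyz"
  size              = 30
  # When the variable is left at its null default, the argument is omitted entirely.
  snapshot_id       = var.staging_snapshot_id
}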
Terraform 0.15 introduces a new experimental feature: the defaults function, which works together with optional attributes.
The defaults function is a specialized function intended for use with input variables whose type constraints are object types or collections of object types that include optional attributes.
From the documentation:
terraform {
# Optional attributes and the defaults function are
# both experimental, so we must opt in to the experiment.
experiments = [module_variable_optional_attrs]
}
variable "storage" {
type = object({
name = string
enabled = optional(bool)
website = object({
index_document = optional(string)
error_document = optional(string)
})
documents = map(
object({
source_file = string
content_type = optional(string)
})
)
})
}
locals {
storage = defaults(var.storage, {
# If "enabled" isn't set then it will default
# to true.
enabled = true
# The "website" attribute is required, but
# it's here to provide defaults for the
# optional attributes inside.
website = {
index_document = "index.html"
error_document = "error.html"
}
# The "documents" attribute has a map type,
# so the default value represents defaults
# to be applied to all of the elements in
# the map, not for the map itself. Therefore
# it's a single object matching the map
# element type, not a map itself.
documents = {
# If _any_ of the map elements omit
# content_type then this default will be
# used instead.
content_type = "application/octet-stream"
}
})
}
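The merged value is then read through the local rather than the raw variable; for example (a minimal sketch, not part of the documentation excerpt):

output "index_document" {
  # Yields "index.html" unless the caller explicitly set website.index_document.
  value = local.storage.website.index_document
}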
Just to help, here is a more complex example:
data "aws_subnet" "private_subnet" {
count = var.sk_count
vpc_id = data.aws_vpc.vpc.id
availability_zone = element(sort(data.aws_availability_zones.available.names), count.index)
tags = {
Name = var.old_cluster_fqdn != "" ? "${var.old_cluster_fqdn}-prv-subnet-${count.index}" : "${var.cluster_fqdn}-prv-subnet-${count.index}"
}
}
If I want to define a lambda function with a VPC config, I can do it like this:
resource "aws_lambda_function" "lambda" {
function_name = "..."
...
vpc_config {
subnet_ids = ["..."]
security_group_ids = ["..."]
}
}
I would like to create the lambda in a Terraform module and define the vpc_config in the module definition. I can define the module like this:
resource "aws_lambda_function" "lambda" {
function_name = "..."
...
dynamic "vpc_config" {
for_each = var.vpc_configs
content {
subnet_ids = vpc_config.value["subnet_ids"]
security_group_ids = vpc_config.value["security_group_ids"]
}
}
}
variable "vpc_configs" {
type = list(object({
subnet_ids = list(string)
security_group_ids = list(string)
}))
default = []
}
And then use it:
module "my_lambda" {
source = "./lambda"
...
vpc_configs = [
{
subnet_ids = ["..."]
security_group_ids = ["..."]
}
]
}
However, since only one vpc_config block is allowed, there is no point in defining the variable as a list. I would prefer the following syntax:
module "my_lambda" {
source = "./lambda"
...
vpc_config = {
subnet_ids = ["..."]
security_group_ids = ["..."]
}
# or:
#vpc_config {
# subnet_ids = ["..."]
# security_group_ids = ["..."]
#}
}
However, I can't figure out if it is possible to define a variable like this and then use it in a dynamic block. I defined it as a list in the first place because I don't always need a VPC config and this way I can simply leave the list empty and no VPC config will be created. Is there a way to create an optional vpc_config block through a simple map or object definition?
dynamic blocks work by generating one block for each element in a collection, if any, whereas you want to define a variable that is an optional non-collection value. Therefore the key to this problem is to translate from a single value that might be null (representing absence) into a list of zero or one elements.
Due to how commonly this arises, Terraform has a concise way to represent that conversion using the splat operator, [*]. If you apply it to a value that isn't a list, then it will implicitly convert it into a list of zero or one elements, depending on whether the value is null.
The documentation I just linked to shows a practical example of this pattern. The following is essentially the same approach, but adapted to the resource type you are using in your question:
variable "vpc_config" {
type = object({
subnet_ids = list(string)
security_group_ids = list(string)
})
default = null
}
resource "aws_lambda_function" "lambda" {
function_name = "..."
...
dynamic "vpc_config" {
for_each = var.vpc_config[*]
content {
subnet_ids = vpc_config.value.subnet_ids
security_group_ids = vpc_config.value.security_group_ids
}
}
}
The default value of var.vpc_config is null, so if the caller doesn't set it then that is the value it will take.
var.vpc_config[*] will either return an empty list or a list containing one vpc_config object, and so this dynamic block will generate either zero or one vpc_config blocks depending on the "null-ness" of var.vpc_config.
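A caller can then either omit the variable entirely or pass a single object (a hypothetical usage sketch; the IDs are placeholders):

module "my_lambda" {
  source = "./lambda"

  # Omit vpc_config to generate no vpc_config block at all,
  # or set it to generate exactly one:
  vpc_config = {
    subnet_ids         = ["subnet-0123456789abcdef0"]
    security_group_ids = ["sg-0123456789abcdef0"]
  }
}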
So you want a conditional dynamic block.
You could possibly get away with it by doing a check similar to the one on the object below:
dynamic "vpc_config"{
for_each = length(var.vpc_config) > 0 ? {config=var.vpc_config}: {}
content{
...
}
}
If no vpc_config is passed to the module, the input variable should default to something like an empty object {}; that way the dynamic conditional check will still work when no config is passed.
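A sketch of a matching variable declaration under that assumption (a loosely typed map is used so an empty default passes type checking; a strictly typed object would need optional attributes or a null default instead):

variable "vpc_config" {
  # Both expected values are lists of strings, so map(any) accepts either
  # a full config or the empty-object default.
  type    = map(any)
  default = {}
}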
It turns out that what I want to do (building an optional, type-safe configuration through an object definition without having to nest it in a list) doesn't seem to be possible.
Instead I now use the lambda module provided by Terraform:
module "email_lambda" {
source = "terraform-aws-modules/lambda/aws"
version = "3.3.1"
function_name = "${var.stack_name}-email"
handler = "pkg.email.App::handleRequest"
runtime = "java11"
architectures = ["x86_64"]
memory_size = 512
timeout = 30
layers = [aws_lambda_layer_version.lambda_layer.arn]
create_package = false
local_existing_package = "../email/target/email.jar"
environment_variables = {
# https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/
JAVA_TOOL_OPTIONS = "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
}
vpc_subnet_ids = module.vpc.private_subnets
vpc_security_group_ids = [aws_security_group.lambda_security_group.id]
attach_policies = true
policies = [
"arn:aws:iam::aws:policy/service-role/AWSLambdaSQSQueueExecutionRole",
]
number_of_policies = 1
attach_policy_json = true
policy_json = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "SESBulkTemplatedPolicy"
Effect = "Allow"
Resource = [...]
Action = [
"ses:SendEmail",
"ses:SendRawEmail",
"ses:SendTemplatedEmail",
"ses:SendBulkTemplatedEmail",
]
}
]
})
}
As one can see in this configuration, I had to set the VPC parameters individually, and in the case of the policy I had to specify a boolean parameter to tell Terraform that the configuration was set (I even had to specify the length of the provided list). Looking at the source code of the module reveals that there may not be a better way to achieve this in the most up-to-date version of Terraform.
I am learning Terraform and trying to understand for_each iteration.
I am iterating through a loop to create resource groups (RGs) in Azure, and what I want to understand is the difference between accessing an instance's value using . or [""].
So, for example, below is my tfvars file:
resource_groups = {
resource_group_1 = {
name = "terraform-apply-1"
location = "eastus2"
tags = {
created_by = "vivek89#test.com"
}
},
resource_group_2 = {
name = "terraform-apply-2"
location = "eastus2"
tags = {
created_by = "vivek89#test.com"
}
},
resource_group_3 = {
name = "terraform-apply-3"
location = "eastus2"
tags = {
created_by = "vivek89#test.com"
contact_dl = "vivek89#test.com"
}
}
}
and below is my Terraform main.tf file:
resource "azurerm_resource_group" "terraformRG" {
for_each = var.resource_groups
name = each.value.name
location = each.value.location
tags = each.value.tags
}
I am confused by the expression in for_each in the RG creation block. Both of the snippets below work and create RGs:
name = each.value.name
name = each.value["name"]
I want to understand the difference between the two and which one is correct.
They are equivalent, as explained in the docs:
Map/object attributes with names that are valid identifiers can also be accessed using the dot-separated attribute notation, like local.object.attrname. In cases where a map might contain arbitrary user-specified keys, we recommend using only the square-bracket index notation (local.map["keyname"]).
The main difference is that dot notation requires the keys to be valid identifiers, whereas square-bracket notation works with any key.
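For example, a key that is not a valid identifier (say, one containing a dash) can only be read with square brackets (a minimal sketch):

locals {
  settings = {
    location     = "eastus2"
    "created-by" = "vivek89"
  }

  loc     = local.settings.location        # dot notation: "location" is a valid identifier
  creator = local.settings["created-by"]   # brackets required: the key contains a dash
}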
I'd like to use something like the below to create/manage common tags for all resources in my projects. For the common_var_tags, I'd like them to be applied only when there are other changes, so the resources are tagged with who last modified them and when.
Is there any way to do it?
Thanks in advance!
locals {
common_var_tags = {
ChangedBy = data.aws_caller_identity.current.arn
ChangedAt = timestamp()
}
common_fix_tags = {
Project = "Project"
Owner = "Tiger Peng"
Team = "DevOps"
CreatedAt = "2021-06-08"
}
}
For example, right now I have to comment out local.common_var_tags, because each time I run terraform plan or terraform apply without changing any attribute, the nginx resource is marked as changed due to ChangedAt = timestamp(). I'd like to find a way to apply this tag change only when other attributes have changed.
resource "aws_instance" "nginx" {
count = 1
ami = var.nginx-ami
instance_type = var.nginx-instance-type
subnet_id = var.frontend-subnets[count.index]
key_name = aws_key_pair.key-pair.key_name
vpc_security_group_ids = [aws_security_group.nginx-sg.id]
root_block_device {
delete_on_termination = false
encrypted = true
volume_size = var.nginx-root-volume-size
volume_type = var.default-ebs-type
tags = merge(
local.common_fix_tags,
#local.common_var_tags,
map(
"Name", "${var.project}-${var.env}-nginx-${var.zones[count.index]}"
)
)
}
tags = merge(
local.common_fix_tags,
#local.common_var_tags,
map(
"Name", "${var.project}-${var.env}-nginx-${var.zones[count.index]}",
"Role", "Nginx"
)
)
}
I had the same problem and found a workaround. It is not a clean solution, but it works in a way.
First of all, you have to create a lifecycle block on your resource and ignore changes on your "ChangedAt" tag:
resource "aws_instance" "nginx" {
...
lifecycle {
ignore_changes = [tags["ChangedAt"]]
}
}
Then create a local variable. Its value must be an md5 hash of the values of all the resource attributes whose changes should trigger an update of the "ChangedAt" tag:
locals{
hash = md5(join(",",[var.nginx-ami,var.nginx-instance-type, etc]))
}
Finally, create a null resource that triggers on a change of that local variable, with a local-exec provisioner that updates the "ChangedAt" tag:
resource "null_resource" "nginx_tags" {
triggers = {
instance = local.hash
}
provisioner "local-exec" {
command = "aws resourcegroupstaggingapi tag-resources --resource-arn-list ${aws_instance.nginx.arn} --tags ChangedAt=${timestamp()}"
}
}
With that configuration, any change in the variables included in the md5 hash will update your tag.
That defeats the purpose of immutable infrastructure a bit: you shouldn't have any changes between two successive terraform apply runs. BUT because this is quite a common pattern when you work with K8s clusters, the Terraform AWS provider 2.6 allows you to globally ignore changes on tags:
provider "aws" {
# ... potentially other configuration ...
ignore_tags {
# specific tag
keys = ["ChangedAt"]
# or by prefix to ignore ChangedBy too
key_prefixes = ["Changed"]
}
}
I'm confused about how to get this working. I have a subdomain (module.foo.dev) and an alternate domain name *.foo.dev, but it has to use the same zone_id as my root_domain.
I'm trying to use a local map, something like:
all_domains = {
["module.foo.dev","*.foo.dev"] = "foo.dev"
["bar.com"] = "bar.com"
}
My variables are as follows:
primary_domain = "module.foo.dev"
sub_alternate_domain = ["*.foo.dev","bar.com"]
Eventually I would be using that locals value in the module below:
module:
resource "aws_route53_record" "record" {
count = var.validation_method == "DNS" ? local.all_domains : 0
name = aws_acm_certificate.certificate.domain_validation_options.0.resource_record_name
type = aws_acm_certificate.certificate.domain_validation_options.0.resource_record_type
zone_id = data.aws_route53_zone.selected[count.index].zone_id
ttl = "300"
records = [aws_acm_certificate.certificate.domain_validation_options.0.resource_record_value]
}
Can someone please help me with this?
In Terraform a map can only have strings as keys (unquoted keys are still strings), so you need to swap your keys and values:
locals{
all_domains = {
"foo.dev" = ["module.foo.dev","*.foo.dev"]
"bar.com" = ["bar.com"]
}
}
Also, as above, your local variables need to be declared and assigned in a locals block.
The count argument on resources expects a whole non-negative number (0 or more) and will not accept a map as a value. You'll need to use for_each instead:
resource "aws_route53_record" "record" {
for_each = var.validation_method == "DNS" ? local.all_domains : {}
name = aws_acm_certificate.certificate.domain_validation_options.0.resource_record_name
type = aws_acm_certificate.certificate.domain_validation_options.0.resource_record_type
zone_id = data.aws_route53_zone.selected[each.key].zone_id # each.key replaces count.index, which is unavailable with for_each; assumes the zone data source is keyed by the same map (see the sketch below)
ttl = "300"
records = [aws_acm_certificate.certificate.domain_validation_options.0.resource_record_value]
}
The map type in the Expression Language doc provides some minimal additional guidance.
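For the zone lookup in that resource to line up with for_each, the data source would also need to be keyed by the same map; a hypothetical sketch, assuming the map keys are the zone names as in local.all_domains above:

data "aws_route53_zone" "selected" {
  for_each = local.all_domains
  name     = each.key # "foo.dev" or "bar.com"
}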
I have an ecs_cluster module which defines an ECS cluster. I want the module to be re-usable so I can create various clusters with different configurations. Hence I want to be able to optionally specify whether to create and attach an EBS volume in the launch configuration of the ECS hosts.
I initially tried using count in the ebs_block_device inside the launch configuration e.g.
variable "ebs_volume_device_name" { type = "string", default = "" }
variable "ebs_volume_type" { type = "string", default = "" }
variable "ebs_volume_size" { type = "string", default = "" }
resource "aws_launch_configuration" "launch_configuration" {
name_prefix = "foo"
image_id = "bar"
# Irrelevant stuff removed for brevity...
ebs_block_device {
count = "${length(var.ebs_volume_device_name) > 0 ? 1 : 0}"
device_name = "${var.ebs_volume_device_name }"
volume_type = "${var.ebs_volume_type}"
volume_size = "${var.ebs_volume_size}"
}
}
However this results in the following error:
module.ecs_cluster.aws_launch_configuration.launch_configuration: ebs_block_device.0: invalid or unknown key: count
I then tried specifying the launch_configuration resource twice, once with and once without the ebs block device e.g.
variable "ebs_volume_device_name" { type = "string", default = "" }
variable "ebs_volume_type" { type = "string", default = "" }
variable "ebs_volume_size" { type = "string", default = "" }
resource "aws_launch_configuration" "launch_configuration" {
count = "${length(var.ebs_volume_device_name) == 0 ? 1 : 0}"
name_prefix = "foo"
image_id = "bar"
# Irrelevant stuff removed for brevity...
# No specification of ebs_block_device
}
resource "aws_launch_configuration" "launch_configuration" {
count = "${length(var.ebs_volume_device_name) > 0 ? 1 : 0}"
name_prefix = "foo"
image_id = "bar"
# Irrelevant stuff removed for brevity...
ebs_block_device {
device_name = "${var.ebs_volume_device_name }"
volume_type = "${var.ebs_volume_type}"
volume_size = "${var.ebs_volume_size}"
}
}
However Terraform then complains because the resource is defined twice.
I can't change the identifier of either resource, as I have an auto scaling group that depends on the name of the launch configuration, e.g.:
resource "aws_autoscaling_group" "autoscaling_group" {
name = "foo"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
}
I guess I could conditionally define two autoscaling groups and map one to each launch configuration, but this feels really messy. Also, these resources themselves have dependent resources such as CloudWatch metric alarms, etc. It feels very un-DRY to repeat all of this code twice with two separate conditions. Am I missing a trick here?
Grateful for any relevant Terraform wisdom!
The count meta-attribute unfortunately works only at the resource level. Having a conditional block within a resource (such as your ebs_block_device, or logging, for example) is a problem commonly mentioned in Terraform issues on GitHub, and as far as I can tell there isn't a solution yet.
In your case a 'trick' could be to have your autoscaling_group's launch_configuration property also use a ternary operator, i.e.:
resource "aws_autoscaling_group" "autoscaling_group" {
name = "foo"
launch_configuration = "${length(var.ebs_volume_device_name) == 0 ? aws_launch_configuration.launch_configuration.name : aws_launch_configuration.launch_configuration2.name}"
}
Or, better yet, extract that logic into a launch_configuration module with a name output, and then the above can look like:
resource "aws_autoscaling_group" "autoscaling_group" {
name = "foo"
launch_configuration = "${module.launch_config.name}"
}
Not saying it isn't ugly, but that's Terraform's conditionals for you.
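For reference, a sketch of what that module's name output could look like in pre-0.12 syntax (the resource names inside the module are illustrative), using the same join-over-splat trick as in the earlier answer so the empty-count resource doesn't break evaluation:

# modules/launch_config/outputs.tf (hypothetical module layout)
output "name" {
  # Exactly one of the two splat lists is non-empty, so one join() returns "".
  value = "${length(var.ebs_volume_device_name) == 0 ? join("", aws_launch_configuration.without_ebs.*.name) : join("", aws_launch_configuration.with_ebs.*.name)}"
}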
It seems you don't actually need a condition here in the aws_launch_configuration resource.
If you are using the AWS ECS-optimized AMI, which is based on Amazon Linux, it will automatically attach a volume at /dev/xvdcz with a default volume_size of 22 GB.
You can set the size variable (${var.ebs_volume_size}) to something else (say, 50 GB) if you want to increase or decrease the size of that particular volume, depending on your needs.
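If all you need is a bigger Docker storage volume on that AMI, a sketch of overriding it directly in the launch configuration (the 50 GB value is illustrative):

resource "aws_launch_configuration" "launch_configuration" {
  name_prefix = "foo"
  image_id    = "bar" # ECS-optimized AMI

  ebs_block_device {
    device_name = "/dev/xvdcz"
    volume_type = "gp2"
    volume_size = 50 # overrides the 22 GB default mentioned above
  }
}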