I need my resources to be created in specific environments only. For example, if I have an AWS Lambda function that is not ready for production, I need it to exist only in the development environment. Is there a nice way to do this? I know that it's possible to set count to 0, but I'm not sure how to cascade this decision to other resources.
For example, I have a resource for an AWS Lambda function whose count is set to 0 in production.
resource "aws_lambda_function" "example_lambda" {
count ? local.is_production ? 0 : 1
}
How do I cascade this decision to other resources that depend on the AWS Lambda above?
For example, let's say I have an S3 bucket which will invoke the Lambda function.
resource "aws_s3_bucket" "example_bucket" {
bucket = "bucket_name"
}
resource "aws_lambda_permission" "example_bucket_etl" {
statement_id = "AllowExecutionFromS3Bucket"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.example_lambda.arn
principal = "s3.amazonaws.com"
source_arn = aws_s3_bucket.example_bucket.arn
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = aws_s3_bucket.example_bucket.id
lambda_function {
lambda_function_arn = aws_lambda_function.example_lambda.arn
events = ["s3:ObjectCreated:*"]
filter_prefix = "example_bucket/"
filter_suffix = ".txt"
lambda_function {
lambda_function_arn = aws_lambda_function.another_lambda_function.arn
events = ["s3:ObjectCreated:*"]
filter_prefix = "another_example_bucket/"
filter_suffix = ".txt"
}
}
You can use the same count variable on multiple resources. A nicer and clearer way would be to put all of the related resources into a module, if that is possible in your code. https://www.terraform.io/docs/language/meta-arguments/count.html
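For instance, a minimal sketch of the module approach (the module name and path here are hypothetical, and note that count on module blocks requires Terraform 0.13 or later):

module "lambda_stack" {
  # Hypothetical module that contains the Lambda and everything
  # that depends on it; one count decision gates all of it.
  source = "./modules/lambda_stack"
  count  = local.is_production ? 0 : 1
}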
When you use count in a resource block, that makes Terraform treat references to that resource elsewhere as producing a list of objects representing each of the instances of that resource.
Since that value is just a normal list value, you can take its length in order to concisely write down what is essentially the statement "there should be one Y for each X", or in your specific case "there should be one lambda permission for each lambda function".
For example:
resource "aws_lambda_function" "example" {
count = local.is_production ? 0 : 1
# ...
}
resource "aws_lambda_permission" "example_bucket_etl" {
count = length(aws_lambda_function.example)
function_name = aws_lambda_function.example[count.index].name
# ...
}
Inside the aws_lambda_permission configuration we first set the count to be whatever is the count of the aws_lambda_function.example, which tells Terraform that we intend for the counts of these to always match. That connection helps Terraform understand how to resolve situations where you increase or reduce the count, by hinting that the resulting create/destroy actions will need to happen in a particular order in order to be valid. We then use count.index to refer to indices of the other resource, which in this case will only ever be zero but again helps Terraform understand our intent during validation.
The lambda_function nested block inside aws_s3_bucket_notification requires a slightly different strategy, since in that case we're not creating a separate resource instance per lambda function but instead just generating some dynamic configuration blocks inside a single resource instance. For that situation, we can use dynamic blocks which serve as a sort of macro for generating multiple blocks based on elements of a collection:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = aws_s3_bucket.example_bucket.id
dynamic "lambda_function" {
for_each = aws_lambda_function.example
content {
# "lambda_function" in this block is the iterator
# symbol, so lambda_function.value refers to the
# current element of aws_lambda_function.example.
lambda_function_arn = lambda_function.value.arn
# ...
}
}
}
Again this is relying on the fact that aws_lambda_function.example is a list of objects, but in a different way: we ask Terraform to generate a lambda_function block for each element of aws_lambda_function.example, setting lambda_function.value to the whole aws_lambda_function object corresponding to each block. We can therefore access the .arn attribute from that object to get the corresponding ARN that we need to populate the lambda_function_arn argument inside the block.
Again, for this case there will only ever be zero or one lambda function objects and therefore only zero or one lambda_function blocks, but in both cases this pattern generalizes to other values of count, ensuring that all of these will stay aligned as your configuration evolves.
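Putting those pieces together for the resources in the original question, a consolidated sketch might look like this (argument values carried over from the question; the second, unconditional Lambda keeps a static block):

resource "aws_lambda_function" "example_lambda" {
  count = local.is_production ? 0 : 1
  # ...
}

resource "aws_lambda_permission" "example_bucket_etl" {
  count         = length(aws_lambda_function.example_lambda)
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.example_lambda[count.index].function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.example_bucket.arn
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.example_bucket.id

  # One block per (conditionally created) example_lambda instance.
  dynamic "lambda_function" {
    for_each = aws_lambda_function.example_lambda
    content {
      lambda_function_arn = lambda_function.value.arn
      events              = ["s3:ObjectCreated:*"]
      filter_prefix       = "example_bucket/"
      filter_suffix       = ".txt"
    }
  }

  # The Lambda that always exists keeps an ordinary static block.
  lambda_function {
    lambda_function_arn = aws_lambda_function.another_lambda_function.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "another_example_bucket/"
    filter_suffix       = ".txt"
  }
}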
I am trying to create a single GCP Workflows resource using Terraform (Terraform Workflows documentation here). To create a workflow, I have defined the desired steps and order of execution using the Workflows syntax in YAML (it can also be JSON).
I have around 20 different jobs, and each of these jobs is in a different .yaml file under the same folder, workflows/. I just want to loop over the workflows/ folder and use each .yaml file to create my resource. What would be the best way to achieve this using Terraform? I read about for_each, but it was primarily used to loop over something to create multiple resources rather than a single resource.
workflows/job-1.yaml
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: currentDateTime
workflows/job-2.yaml
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${currentDateTime.body.dayOfTheWeek}
    result: wikiResult
main.tf
resource "google_workflows_workflow" "example" {
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = YAML FILE HERE
Terraform has a function fileset which allows a configuration to react to files available on disk alongside its definition. You can use this as a starting point for constructing a suitable expression for for_each:
locals {
  workflow_files = fileset("${path.module}/workflows", "*.yaml")
}
It looks like you'd also need to specify a separate name for each workflow, due to the design of the remote system, and so perhaps you'd decide to set the name to be the same as the filename but with the .yaml suffix removed, like this:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    substr(fn, 0, length(fn) - 5) => "${path.module}/workflows/${fn}"
  })
}
This uses a for expression to project the set of filenames into a map from workflow name (trimmed filename) to the path to the specific file. The result then would look something like this:
{
  job-1 = "./module/workflows/job-1.yaml"
  job-2 = "./module/workflows/job-2.yaml"
}
This now meets the requirements for for_each, so you can refer to it directly as the for_each expression:
resource "google_workflows_workflow" "example" {
for_each = local.workflows
name = each.key
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = file(each.value)
}
Your question didn't include any definition for how to populate the description argument, so I've left it set to hard-coded "Magic" as in your example. In order to populate that with something reasonable you'd need to have an additional data source for that, since what I wrote above is already making full use of the information we get just from scanning the content of the directory.
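For example (purely illustrative, not part of the original answer), one simple option is a local map of descriptions keyed by workflow name, falling back to a default:

locals {
  # Hypothetical descriptions; these could instead come from a
  # metadata file or a data source.
  workflow_descriptions = {
    job-1 = "Fetches the current date and time"
    job-2 = "Searches Wikipedia for the day of the week"
  }
}

resource "google_workflows_workflow" "example" {
  for_each        = local.workflows
  name            = each.key
  region          = "us-central1"
  description     = lookup(local.workflow_descriptions, each.key, "Magic")
  service_account = google_service_account.test_account.id
  source_contents = file(each.value)
}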
resource "google_workflows_workflow" "example" {
# count for total iterations
count = 20
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
# refer to file using index, index starts from 0
source_contents = file("${path.module}/workflows/job-${each.index}.yaml")
}
I have been trying to conditionally use a module from the root module, so that for certain environments this module is not created. Many people claim that by setting the count in the module to either 0 or 1 using a conditional does the trick.
module "conditionally_used_module" {
source = "./modules/my_module"
count = (var.create == true) ? 1 : 0
}
However, this changes the type of conditionally_used_module: instead of an object (or map) we will have a list (or tuple) containing a single object. Is there another way to achieve this, that does not imply changing the type of the module?
To conditionally create a module, you can use a variable; let's say it's called create_module and is declared in the variables.tf file of the conditionally_used_module module.
Then, for every resource inside the conditionally_used_module module, you use count to conditionally create (or not create) that specific resource.
The following example should work and provide you with the desired effect.
# Set a variable to know if the resources inside the module should be created
module "conditionally_used_module" {
  source        = "./modules/my_module"
  create_module = var.create
}

# Inside the conditionally_used_module file
# ( ./modules/my_module/main.tf ) most likely,
# for every resource inside, use count to create or not create each resource
resource "resource_type" "resource_name" {
  count = var.create_module ? 1 : 0
  # ... other resource properties
}
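For this to work, the module also needs to declare the create_module variable; a minimal sketch of what that declaration might look like:

# ./modules/my_module/variables.tf
variable "create_module" {
  description = "Whether the resources inside this module should be created"
  type        = bool
  default     = true
}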
I used this in conjunction with workspaces to build a resource only for certain envs. The advantage for me is that I get a single terraform.tfvars file to control the structure of all the environments for a project.
Inside main.tf:
locals {
  workspace = terraform.workspace
}
#....
module "gcp-internal-lb" {
source = "../../modules/gcp-internal-lb"
# Deploy conditionally based on deploy_internal_lb variable
count = var.deploy_internal_lb[local.workspace] == true ? 1 : 0
# module attributes here
}
Then in variables.tf
variable "deploy_internal_lb" {
description = "Set to true if you want to create an internal LB"
type = map(string)
}
And in terraform.tfvars:
deploy_internal_lb = {
  # DEV
  myproject-dev = false
  # QA
  myproject-qa = false
  # PROD
  myproject-prod = true
}
I hope it helps.
terraform version 0.11.13
Error: Error refreshing state: 1 error(s) occurred:
data.aws_subnet.private_subnet: data.aws_subnet.private_subnet: value of 'count' cannot be computed
The VPC code that generated the error above:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
vpc_id = "${module.vpc.vpc_id}"
}
data "aws_subnet" "private_subnet" {
count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
#count = "${length(var.private-subnet-mapping)}"
id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
After changing the above code to use count = "${length(var.private-subnet-mapping)}", I successfully provisioned the VPC. But the output of vpc_private_subnets_ids is empty.
vpc_private_subnets_ids = []
The code that provisioned the VPC but produced an empty list of vpc_private_subnets_ids:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
vpc_id = "${module.vpc.vpc_id}"
}
data "aws_subnet" "private_subnet" {
#count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
count = "${length(var.private-subnet-mapping)}"
id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
outputs.tf
output "vpc_private_subnets_ids" {
value = ["${data.aws_subnet.private_subnet.*.id}"]
}
The output of vpc_private_subnets_ids:
vpc_private_subnets_ids = []
I need the values of vpc_private_subnets_ids. After the VPC was successfully provisioned using the line count = "${length(var.private-subnet-mapping)}", I changed the code back to count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}". After terraform apply, I got the values of the list vpc_private_subnets_ids without the above error.
vpc_private_subnets_ids = [
  subnet-03199b39c60111111,
  subnet-068a3a3e76de66666,
  subnet-04b86aa9dbf333333,
  subnet-02e1d8baa8c222222
  ......
]
I cannot use count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}" when I provision the VPC, but I can use it after the VPC is provisioned. Any clue?
The problem here seems to be that your VPC isn't created yet and so the data "aws_subnet_ids" "private_subnet_ids" data source read must wait until the apply step, which in turn means that the number of subnets isn't known, and thus the number of data "aws_subnet" "private_subnet" instances isn't predictable and Terraform returns this error.
If this configuration is also the one responsible for creating those subnets then the better design would be to refer to the subnet objects directly. If your module.vpc is also the module creating the subnets then I would suggest exporting the subnet ids as an output from that module. For example:
output "subnet_ids" {
value = "${aws_subnet.example.*.id}"
}
Your calling module can then just get those ids directly from module.vpc.subnet_ids, without the need for a redundant extra API call to look them up:
output "vpc_private_subnets_ids" {
value = ["${module.vpc.subnet_ids}"]
}
Aside from the error about count, the configuration you showed also has a race condition, because the data "aws_subnet_ids" "private_subnet_ids" block depends only on the VPC itself, and not on the individual subnets, and so Terraform can potentially read that data source before the subnets have been created. Exporting the subnet ids through a module output means that any reference to module.vpc.subnet_ids indirectly depends on all of the subnets, and so those downstream actions will wait until all of the subnets have been created.
As a general rule, a particular Terraform configuration should either be managing an object or reading that object via a data source, and not both together. If you do both together then it may sometimes work but it's easy to inadvertently introduce race conditions like this, where Terraform can't tell that the data resource is attempting to consume the result of another resource block that's participating in the same plan.
I have a case where I have to create an aws_vpc resource if the user does not provide a VPC id. After that, I am supposed to create resources in that VPC.
Now, I am applying conditionals while creating the aws_vpc resource. For example, only create the VPC if existing_vpc is false:
count = "${var.existing_vpc ? 0 : 1}"
Next, for example, I have to create nodes in the VPC. If existing_vpc is true, use var.vpc_id; else, use the computed VPC ID from the aws_vpc resource.
But the issue is: if existing_vpc is true, aws_vpc will not create a new resource, yet the ternary condition still tries to reference the aws_vpc resource. If it doesn't get created, Terraform errors out.
An example of the error when using conditional operator on aws_subnet:
Resource 'aws_subnet.xyz-subnet' not found for variable 'aws_subnet.xyz-subnet.id'
The code resulting in the error is:
subnet_id = "${var.existing_vpc ? var.subnet_id : aws_subnet.xyz-subnet.id}"
If both things are dependent on each other, how can we create conditional resources and assign values to other configuration based on them?
You can access dynamically created modules and resources as follows:
output "vpc_id" {
value = length(module.vpc) > 0 ? module.vpc[*].id : null
}
If count = 0, the output is null.
If count > 0, the output is a list of VPC ids.
If count = 1 and you want to receive a single VPC id, you can specify:
output "vpc_id" {
value = length(module.vpc) > 0 ? one(module.vpc).id : null
}
The following example shows how to optionally specify whether a resource is created (using the conditional operator), and shows how to handle returning output when a resource is not created. This happens to be done using a module, and uses an object variable's element as a flag to indicate whether the resource should be created or not.
But to specifically answer your question, you can use the conditional operator as follows:
output "module_id" {
value = var.module_config.skip == true ? null : format("%v",null_resource.null.*.id)
}
And access the output in the calling main.tf:
module "use_conditionals" {
source = "../../scratch/conditionals-modules/m2" # << Change to your directory
a = module.skipped_module.module_id # Doesn't exist, so might need to handle that.
b = module.notskipped_module.module_id
c = module.default_module.module_id
}
Full example follows. NOTE: this is using terraform v0.14.2
# root/main.tf
provider "null" {}

module "skipped_module" {
  source = "../../scratch/conditionals-modules/m1" # << Change to your directory

  module_config = {
    skip = true # explicitly skip this module.
    name = "skipped"
  }
}

module "notskipped_module" {
  source = "../../scratch/conditionals-modules/m1" # << Change to your directory

  module_config = {
    skip = false # explicitly don't skip this module.
    name = "notskipped"
  }
}

module "default_module" {
  source = "../../scratch/conditionals-modules/m1" # << Change to your directory
  # The default position is, don't skip. see m1/variables.tf
}

module "use_conditionals" {
  source = "../../scratch/conditionals-modules/m2" # << Change to your directory

  a = module.skipped_module.module_id
  b = module.notskipped_module.module_id
  c = module.default_module.module_id
}
# root/outputs.tf
output "skipped_module_name_and_id" {
  value = module.skipped_module.module_name_and_id
}

output "notskipped_module_name_and_id" {
  value = module.notskipped_module.module_name_and_id
}

output "default_module_name_and_id" {
  value = module.default_module.module_name_and_id
}
The module:
# m1/main.tf
resource "null_resource" "null" {
  count = var.module_config.skip ? 0 : 1 # If skip == true, then don't create the resource.

  provisioner "local-exec" {
    command = <<-EOT
      #!/usr/bin/env bash
      echo "null resource, var.module_config.name: ${var.module_config.name}"
    EOT
  }
}
# m1/variables.tf
variable "module_config" {
  type = object({
    skip = bool,
    name = string
  })
  default = {
    skip = false
    name = "<NAME>"
  }
}
# m1/outputs.tf
output "module_name_and_id" {
  value = var.module_config.skip == true ? "SKIPPED" : format(
    "%s id:%v",
    var.module_config.name,
    null_resource.null.*.id
  )
}

output "module_id" {
  value = var.module_config.skip == true ? null : format("%v", null_resource.null.*.id)
}
The current answers here are helpful when you are working with more modern versions of Terraform, but as noted by the OP they do not work with Terraform < 0.12. (If you're like me and still dealing with these older versions, I am sorry; I feel your pain.)
See the relevant issue from the Terraform project for more info on why the approach below is necessary with the older versions.
To avoid link rot, I'll reproduce the answer from the GitHub issue using the OP's example subnet_id argument:
subnet_id = "${element(compact(concat(aws_subnet.xyz-subnet.*.id, list(var.subnet_id))),0)}"
From the inside out:
concat will join the splat output list with list(var.subnet_id) -- per the background link, 'When count = 0, the "splat syntax" expands to an empty list'
compact will remove the empty item
element will return your var.subnet_id only when compact receives the empty splat output.
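To make that concrete, here is how the expression evaluates in both cases (a sketch with made-up subnet ids):

# existing_vpc = true, so count = 0 and var.subnet_id = "subnet-11111111":
#   aws_subnet.xyz-subnet.*.id          => []
#   concat([], list("subnet-11111111")) => ["subnet-11111111"]
#   compact(...)                        => ["subnet-11111111"]
#   element(..., 0)                     => "subnet-11111111"  (the existing subnet)
#
# existing_vpc = false, so count = 1 and var.subnet_id = "":
#   aws_subnet.xyz-subnet.*.id          => ["subnet-22222222"]
#   concat(["subnet-22222222"], [""])   => ["subnet-22222222", ""]
#   compact(...)                        => ["subnet-22222222"]
#   element(..., 0)                     => "subnet-22222222"  (the new subnet)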
I am not able to target a single aws_volume_attachment with its corresponding aws_instance via -target.
The problem is that the aws_instance is taken from a list by using count.index, which forces Terraform to refresh all aws_instance resources in that list.
In my concrete case I am trying to manage a consul cluster with terraform.
The goal is to be able to reinit a single aws_instance resource via the -target flag, so I can upgrade/change the whole cluster node by node without downtime.
I have the following tf code:
### IP suffixes
variable "subnet_cidr" { "10.10.0.0/16" }
// I want nodes with addresses 10.10.1.100, 10.10.1.101, 10.10.1.102
variable "consul_private_ips_suffix" {
default = {
"0" = "100"
"1" = "101"
"2" = "102"
}
}
###########
# EBS
#
// Get existing data EBS via Name Tag
data "aws_ebs_volume" "consul-data" {
count = "${length(keys(var.consul_private_ips_suffix))}"
filter {
name = "volume-type"
values = ["gp2"]
}
filter {
name = "tag:Name"
values = ["${var.platform_type}.${var.platform_id}.consul.data.${count.index}"]
}
}
#########
# EC2
#
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
}
resource "aws_volume_attachment" "consul-data" {
count = "${length(keys(var.consul_private_ips_suffix))}"
device_name = "/dev/sdh"
volume_id = "${element(data.aws_ebs_volume.consul-data.*.id, count.index)}"
instance_id = "${element(aws_instance.consul.*.id, count.index)}"
}
This works perfectly fine for initializing the cluster.
Now I make a change in my user_data init script of the consul nodes and want to roll it out node by node.
I run terraform plan -target=aws_volume_attachment.consul-data[0] to reinit node 0.
This is when I run into the above-mentioned problem: Terraform refreshes all aws_instance resources because of instance_id = "${element(aws_instance.consul.*.id, count.index)}".
Is there a way to "force" tf to target a single aws_volume_attachment with only its corresponding aws_instance resource?
At the time of writing this sort of usage is not possible due to the fact that, as you've seen, an expression like aws_instance.consul.*.id creates a dependency on all the instances, before the element function is applied.
The -target option is not intended for routine use and is instead provided only for exceptional circumstances such as recovering carefully from an unintended change.
For this specific situation it may work better to use the ignore_changes lifecycle setting to prevent automatic replacement of the instances when user_data changes, like this:
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
lifecycle {
ignore_changes = ["user_data"]
}
}
With this set, Terraform will detect but ignore changes to the user_data attribute. You can then get the gradual replacement behavior you want by manually tainting the resources one at a time:
$ terraform taint aws_instance.consul[0]
On the next plan, Terraform will then see that this resource instance is tainted and produce a plan to replace it. This gives you direct control over when the resources are replaced, so you can ensure that e.g. the consul leave step gets a chance to run first, or do whatever other cleanup you need.
This workflow is recommended over -target because it makes the replacement step explicit. -target can be confusing in a collaborative environment because there is no evidence of its use, and thus no clear explanation of how the current state was reached. taint, on the other hand, explicitly marks your intention in the state where other team members can see it, and then replaces the resource via the normal plan/apply steps.