I was using count to create multiple AWS task definitions that should be executed by an AWS Step Function.
Each task definition required a data "template_file" "task_definition" block to fill in the template data.
Then I needed to render the template data for multiple definitions at a time, and I was blocked by an error looking like this:
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
Here's the initial code:
data "template_file" "task_definition" {
count = length(var.task_container_command)
template = file("./configs/file.json")
vars = {
task = module.ecs[count.index].task_definition
}
}
module "step_function" {
count = length(var.task_container_command)
source = "path"
region = var.region
name = "${var.step_function_name}-${count.index}"
definition_file = data.template_file.task_definition.rendered
}
The point here is that the task_definition data can't be rendered because those values aren't known to Terraform before the apply. I couldn't use the -target argument either, because I wanted to fix this in code rather than in my deployment pipeline. So as soon as terraform plan evaluates definition_file, the error above pops up.
Solution is below.
What worked was to decouple the use of the count from the .rendered argument by doing this:
data "template_file" "task_definition" {
count = length(var.task_container_command)
template = file("./configs/file.json")
vars = {
task = module.ecs[count.index].task_definition
}
}
resource "local_file" "foo" {
count = length(var.task_container_command)
content = element(data.template_file.task_definition.*.rendered, count.index)
filename = "task-definition-${count.index}"
}
module "step_function" {
count = length(var.task_container_command)
source = "path"
region = var.region
name = "${var.step_function_name}-${count.index}"
definition_file = local_file.foo[count.index].filename
}
Now the template data is rendered into the local_file resource called "foo" and only the resulting filename is passed to the step_function module, so terraform plan already knows what it is dealing with. Because foo uses the same count, its content argument effectively loops over each task_definition I've created, writing every one to a different filename to avoid duplicates.
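One small note: the local_file resource comes from the hashicorp/local provider, so on Terraform 0.13 or later, if it isn't already part of your configuration, you would declare it along these lines (the version constraint is just an illustration):

terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 1.4"
    }
  }
}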
Hope this helped :)
Related
I have to provision many instances using modules with count. The issue is that the required tags for some particular instances have to be different. Can anyone suggest a better way to add an extra tag on top of the default tags applied to all instances? I'm currently using a map with all 30 occurrences, which is long.
Default tags: tag1, tag2
Example case:
instance1 ... instance15 - tag1, tag2
instance16 - tag1, tag2, tag_xxx
instance17 ... instance29 - tag1, tag2
instance30 - tag1, tag2, tag_yyy
Terraform Code:
module "compute-vm" {
count = length(var.names)
source = "../modules/compute-vm"
project_id = var.project_id
name = var.name[count.index]
tags = var.tags[var.names[count.index]]
variable "tags" {
type = map(list(string))
}
tags.tfvars:
tags = {
  "instance1"  = ["tag1", "tag2"],
  "instance2"  = ["tag1", "tag2"],
  "instance3"  = ["tag1", "tag2"],
  .
  .
  .
  "instance16" = ["tag1", "tag2", "tag_xxx"]
}
You can concatenate two lists with concat.
Something along the lines of:
variable "default_tags" {
type = list(string)
}
module "compute-vm" {
...
tags = concat(var.default_tags, var.tags[var.names[count.index]])
}
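With concat in place, the per-instance map only needs to hold the extra tags. A sketch of how tags.tfvars could then look, using the instance names from the question (instances without extra tags keep an empty list):

default_tags = ["tag1", "tag2"]

tags = {
  "instance1"  = []
  "instance2"  = []
  ...
  "instance16" = ["tag_xxx"]
  ...
  "instance30" = ["tag_yyy"]
}

If you'd rather not list every instance, lookup(var.tags, var.names[count.index], []) would fall back to an empty list for instances that have no extra tags.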
Additionally, I would use for_each instead of count, which makes the block easier to read. You can then also refer to an instance via module.compute-vm["instance1"] instead of by index, like module.compute-vm[0]. Because who knows if the first VM is really "instance1" (Terraform might change the order inside the map).
module "compute-vm" {
for_each = var.tags
source = "../modules/compute-vm"
project_id = var.project_id
name = each.key
tags = each.value // or concat(var.default_tags, each.value)
}
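To illustrate the addressing benefit, a reference by key could then look like this (the instance_id output name here is just an assumption about what the module exposes):

output "instance16_id" {
  value = module.compute-vm["instance16"].instance_id
}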
I am trying to build a Galera cluster using Terraform. To do that I need to render the Galera config with the nodes' IPs, so I use a file template.
When applying, Terraform fires an error:
Error: Cycle: data.template_file.galera_node_config, hcloud_server.galera_node
It seems there is a circular reference when applying, because the servers are not created before the data template is used.
How can I circumvent this?
Thanks
galera_node.tf
data "template_file" "galera_node_config" {
template = file("sys/etc/mysql/mariadb.conf/galera.cnf")
vars = {
galera_node0 = hcloud_server.galera_node[0].ipv4_address
galera_node1 = hcloud_server.galera_node[1].ipv4_address
galera_node2 = hcloud_server.galera_node[2].ipv4_address
curnode_ip = hcloud_server.galera_node[count.index].ipv4_address
curnode = hcloud_server.galera_node[count.index].id
}
}
resource "hcloud_server" "galera_node" {
count = var.galera_nodes
name = "galera-${count.index}"
image = var.os_type
server_type = var.server_type
location = var.location
ssh_keys = [hcloud_ssh_key.default.id]
labels = {
type = "cluster"
}
user_data = file("galera_cluster.sh")
provisioner "file" {
content = data.template_file.galera_node_config.rendered
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = self.ipv4_address
private_key = file("~/.ssh/id_rsa")
}
}
}
The problem here is that you have multiple nodes that all depend on each other, and so there is no valid order for Terraform to create them: they must all be created before any other one can be created.
To address this will require a different approach. There are a few different options for this, but the one that seems closest to what you were already trying is to use the special resource type null_resource to factor out the provisioning into a separate resource that Terraform can work on only after all of the hcloud_server instances are ready.
Note also that the template_file data source is deprecated in favor of the templatefile function, so this is a good opportunity to simplify the configuration by using the function instead.
Both of those changes together lead to this:
resource "hcloud_server" "galera_node" {
count = var.galera_nodes
name = "galera-${count.index}"
image = var.os_type
server_type = var.server_type
location = var.location
ssh_keys = [hcloud_ssh_key.default.id]
labels = {
type = "cluster"
}
user_data = file("galera_cluster.sh")
}
resource "null_resource" "galera_config" {
count = length(hcloud_server.galera_node)
triggers = {
config_file = templatefile("${path.module}/sys/etc/mysql/mariadb.conf/galera.cnf", {
all_addresses = hcloud_server.galera_node[*].ipv4_address
this_address = hcloud_server.galera_node[count.index].ipv4_address
this_id = hcloud_server.galera_node[count.index].id
})
}
provisioner "file" {
content = self.triggers.config_file
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = hcloud_server.galera_node[count.index].ipv4_address
private_key = file("~/.ssh/id_rsa")
}
}
}
The triggers argument above serves to tell Terraform that it must re-run the provisioner each time the configuration file changes in any way, which could for example be because you've added a new node: all of the existing nodes would then be reprovisioned to include that additional node in their configurations.
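For reference, inside the galera.cnf template those variables would typically feed the cluster address settings, roughly like this (an illustrative excerpt, not the full config):

# sys/etc/mysql/mariadb.conf/galera.cnf (excerpt, illustrative)
wsrep_cluster_address = "gcomm://${join(",", all_addresses)}"
wsrep_node_address    = "${this_address}"
wsrep_node_name       = "galera-node-${this_id}"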
Provisioners are considered a last resort in the Terraform documentation, but in this particular case the alternatives would likely be considerably more complicated. A typical non-provisioner answer to this would be to use a service discovery system where each node can register itself on startup and then discover the other nodes, for example with HashiCorp Consul's service catalog. But unless you have lots of similar use-cases in your infrastructure which could all share the Consul cluster, having to run another service is likely an unreasonable cost in comparison to just using a provisioner.
You are trying to use data.template_file.galera_node_config inside your resource "hcloud_server" "galera_node" while also referencing hcloud_server.galera_node inside that data.template_file, which creates the cycle.
To avoid this problem:
Remove provisioner "file" from your hcloud_server.galera_node
Move this provisioner "file" to a new null_resource, e.g. like this:
resource "null_resource" template_upload {
count = var.galera_nodes
provisioner "file" {
content = data.template_file.galera_node_config.rendered
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = hcloud_server.galera_nodes[count.index].ipv4_address
private_key = file("~/.ssh/id_rsa")
}
depends_on = [hcloud_server.galera_node]
}
Is it possible to create multiple CloudFormation stacks from one aws_cloudformation_stack resource definition in Terraform, based on a parameterized name?
I have the following resources defined, and I would like to have a stack per app_name, app_env, build_name combination:
resource "aws_s3_bucket_object" "sam_deploy_object" {
bucket = var.sam_bucket
key = "${var.app_env}/${var.build_name}/sam_template_${timestamp()}.yaml"
source = "../.aws-sam/sam_template_output.yaml"
etag = filemd5("../.aws-sam/sam_template_output.yaml")
}
resource "aws_cloudformation_stack" "subscriptions_sam_stack" {
name = "${var.app_name}---${var.app_env}--${var.build_name}"
capabilities = ["CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"]
template_url = "https://${var.sam_bucket}.s3-${data.aws_region.current.name}.amazonaws.com/${aws_s3_bucket_object.sam_deploy_object.id}"
}
When I run terraform apply after build_name changes, the old stack gets deleted and a new one created; however, I would like to keep the old stack and create a new one.
One way would be to define your variable build_name as a list. Then, when you create a new build, you just append it to the list and create the stacks with the help of for_each, iterating over the build names.
For example, if you have the following:
variable "app_name" {
default = "test1"
}
variable "app_env" {
default = "test2"
}
variable "build_name" {
default = ["test3"]
}
resource "aws_cloudformation_stack" "subscriptions_sam_stack" {
for_each = toset(var.build_name)
name = "${var.app_name}---${var.app_env}--${each.value}"
capabilities = ["CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"]
template_url = "https://${var.sam_bucket}.s3-${data.aws_region.current.name}.amazonaws.com/${aws_s3_bucket_object.sam_deploy_object.id}"
}
Then if you want a second build for the stack, you just extend variable "build_name":
variable "build_name" {
  default = ["test3", "new_build"]
}
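Each stack is then addressable by its build name; for example, an output like this (illustrative) would expose the original build's stack ID:

output "test3_stack_id" {
  value = aws_cloudformation_stack.subscriptions_sam_stack["test3"].id
}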
I am trying to create a Terraform module that creates a compute instance. I want the resource to have an attached disk if and only if a variable attached_disk_enabled is set to true during module invocation. I have this:
resource "google_compute_disk" "my-disk" {
name = "data"
type = "pd-ssd"
size = 20
count = var.attached_disks_enabled ? 1 : 0
}
resource "google_compute_instance" "computer" {
name = "computer"
boot_disk {
...
}
// How do I make this disappear if attached_disk_enabled == false?
attached_disk {
source = "${google_compute_disk.my-disk.self_link}"
device_name = "computer-disk"
mode = "READ_WRITE"
}
}
Variables have been declared for the module in vars.tf. Module invocation is like this:
module "main" {
source = "../modules/computer"
attached_disk_enabled = false
...
}
I know about dynamic blocks and how to use for_each to iterate over a list and set multiple blocks, but I'm not sure how to exclude a block from a resource using this method:
dynamic "attached-disk" {
for_each in var.disk_list
content {
source = "${google_compute_disk.my-disk.*.self_link}"
device_name = "computer-disk-${count.index}"
mode = "READ_WRITE"
}
}
I want if in place of for_each. Is there a way to do this?
$ terraform version
Terraform v0.12.0
Because your disk resource already has the conditional attached to it, you can use the result of that resource as your iterator and thus avoid specifying the conditional again:
dynamic "attached_disk" {
for_each = google_compute_disk.my-disk
content {
source = attached_disk.value.self_link
device_name = "computer-disk-${attached_disk.key}"
mode = "READ_WRITE"
}
}
To answer the general question: if you do need a conditional block, the answer is to write a conditional expression that returns either a single-item list or an empty list:
dynamic "attached_disk" {
for_each = var.attached_disk_enabled ? [google_compute_disk.my-disk[0].self_link] : []
content {
source = attached_disk.value
device_name = "computer-disk-${attached_disk.key}"
mode = "READ_WRITE"
}
}
However, in your specific situation I'd prefer the former because it describes the intent ("attach each of the disks") more directly.
I'm creating subnets as part of a separate Terraform template and exporting the IDs as follows.
output "subnet-aza-dev" {
value = "${aws_subnet.subnet-aza-dev.id}"
}
output "subnet-azb-dev" {
value = "${aws_subnet.subnet-azb-dev.id}"
}
output "subnet-aza-test" {
value = "${aws_subnet.subnet-aza-test.id}"
}
output "subnet-azb-test" {
value = "${aws_subnet.subnet-azb-test.id}"
}
...
I'm then intending to look up these IDs in another template which is reused to provision multiple environments. The example below shows my second template calling a module to provision an EC2 instance and passing through the subnet_id.
variable "environment" {
description = "Environment name"
default = "dev"
}
module "sql-1-ec2" {
source = "../modules/ec2winserver_sql"
...
subnet_id = "${data.terraform_remote_state.env-shared.subnet-aza-dev}"
}
What I'd like to do is pass the environment variable as part of the lookup for the subnet_id, e.g.
subnet_id = "${data.terraform_remote_state.env-shared.subnet-aza-${var.environment}}"
However I'm aware that variable interpolation isn't supported. I've tried using a map inside of the first terraform template to export them all to a 'subnet' which I could then use to lookup from the second template. This didn't work as I was unable to output variables inside of the map.
This sort of design pattern is something I've used previously with CloudFormation, however I'm much newer to terraform. Am I missing something obvious here?
Worked out a way to do this using data sources
variable "environment" {
description = "Environment name"
default = "dev"
}
module "sql-1-ec2" {
source = "../modules/ec2winserver_sql"
...
subnet_id = "${data.aws_subnet.subnet-aza.id}"
}
data "aws_subnet" "subnet-aza" {
filter {
name = "tag:Name"
values = ["${var.product}-${var.environment}-${var.environmentno}-subnet-aza"]
}
}
data "aws_subnet" "subnet-azb" {
filter {
name = "tag:Name"
values = ["${var.product}-${var.environment}-${var.environmentno}-subnet-azb"]
}
}
Whilst this works and fulfils my original need, I'd like to improve on it by moving the data blocks into the module, so that there's less repetition. Still working on that one though...
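A rough sketch of that idea, assuming the module already receives product, environment and environmentno as variables and can therefore do the lookup itself (untested):

# ../modules/ec2winserver_sql/data.tf (sketch)
data "aws_subnet" "subnet-aza" {
  filter {
    name   = "tag:Name"
    values = ["${var.product}-${var.environment}-${var.environmentno}-subnet-aza"]
  }
}

The module would then use data.aws_subnet.subnet-aza.id internally, and callers would no longer need to pass subnet_id at all.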