Terraform fires Cycle error when applying

I am trying to build a Galera cluster using Terraform. To do that I need to render the Galera config with the nodes' IP addresses, so I use a file template.
When applying, Terraform fires this error:
Error: Cycle: data.template_file.galera_node_config, hcloud_server.galera_node
It seems there is a circular reference: the servers have not been created yet when the data template is evaluated.
How can I work around this?
Thanks
galera_node.tf
data "template_file" "galera_node_config" {
template = file("sys/etc/mysql/mariadb.conf/galera.cnf")
vars = {
galera_node0 = hcloud_server.galera_node[0].ipv4_address
galera_node1 = hcloud_server.galera_node[1].ipv4_address
galera_node2 = hcloud_server.galera_node[2].ipv4_address
curnode_ip = hcloud_server.galera_node[count.index].ipv4_address
curnode = hcloud_server.galera_node[count.index].id
}
}
resource "hcloud_server" "galera_node" {
count = var.galera_nodes
name = "galera-${count.index}"
image = var.os_type
server_type = var.server_type
location = var.location
ssh_keys = [hcloud_ssh_key.default.id]
labels = {
type = "cluster"
}
user_data = file("galera_cluster.sh")
provisioner "file" {
content = data.template_file.galera_node_config.rendered
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = self.ipv4_address
private_key = file("~/.ssh/id_rsa")
}
}
}

The problem here is that these objects all depend on each other, so there is no valid order in which Terraform can create them: each one would have to exist before any of the others could be created.
To address this will require a different approach. There are a few different options for this, but the one that seems closest to what you were already trying is to use the special resource type null_resource to factor out the provisioning into a separate resource that Terraform can work on only after all of the hcloud_server instances are ready.
Note also that the template_file data source is deprecated in favor of the templatefile function, so this is a good opportunity to simplify the configuration by using the function instead.
Both of those changes together lead to this:
resource "hcloud_server" "galera_node" {
count = var.galera_nodes
name = "galera-${count.index}"
image = var.os_type
server_type = var.server_type
location = var.location
ssh_keys = [hcloud_ssh_key.default.id]
labels = {
type = "cluster"
}
user_data = file("galera_cluster.sh")
}
resource "null_resource" "galera_config" {
count = length(hcloud_server.galera_node)
triggers = {
config_file = templatefile("${path.module}/sys/etc/mysql/mariadb.conf/galera.cnf", {
all_addresses = hcloud_server.galera_node[*].ipv4_address
this_address = hcloud_server.galera_node[count.index].ipv4_address
this_id = hcloud_server.galera_node[count.index].id
})
}
provisioner "file" {
content = self.triggers.config_file
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = hcloud_server.galera_node[count.index].ipv4_address
private_key = file("~/.ssh/id_rsa")
}
}
}
The triggers argument above serves to tell Terraform that it must re-run the provisioner each time the configuration file changes in any way, which could for example be because you've added a new node: all of the existing nodes would then be reprovisioned to include that additional node in their configurations.
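For reference, a minimal sketch of what the galera.cnf template could contain with these new variable names; the wsrep settings shown are illustrative, not a complete Galera configuration:

[galera]
# this_address, this_id and all_addresses are the vars passed to templatefile above
wsrep_on              = ON
wsrep_node_address    = ${this_address}
wsrep_node_name       = galera-${this_id}
wsrep_cluster_address = gcomm://${join(",", all_addresses)}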
Provisioners are considered a last resort in the Terraform documentation, but in this particular case the alternatives would likely be considerably more complicated. A typical non-provisioner answer to this would be to use a service discovery system where each node can register itself on startup and then discover the other nodes, for example with HashiCorp Consul's service catalog. But unless you have lots of similar use-cases in your infrastructure which could all share the Consul cluster, having to run another service is likely an unreasonable cost in comparison to just using a provisioner.

You are trying to use data.template_file.galera_node_config inside your resource "hcloud_server" "galera_node" while at the same time referencing hcloud_server.galera_node inside your data.template_file; that is the cycle.
To avoid this problem:
Remove provisioner "file" from your hcloud_server.galera_node
Move that provisioner "file" to a new null_resource, e.g. like this:
resource "null_resource" template_upload {
count = var.galera_nodes
provisioner "file" {
content = data.template_file.galera_node_config.rendered
destination = "/tmp/galera_cnf"
connection {
type = "ssh"
user = "root"
host = hcloud_server.galera_nodes[count.index].ipv4_address
private_key = file("~/.ssh/id_rsa")
}
depends_on = [hcloud_server.galera_node]
}

Related

How to render terraform data when using a count

I was using count to create multiple AWS task definitions that should be executed by an AWS Step Function.
The task definition required a data "template_file" "task_definition" block to fill in the template data.
I then needed to render the template data for multiple definitions at a time, and I was blocked by an error like this:
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.
Here's the initial code:
data "template_file" "task_definition" {
count = length(var.task_container_command)
template = file("./configs/file.json")
vars = {
task = module.ecs[count.index].task_definition
}
}
module "step_function" {
count = length(var.task_container_command)
source = "path"
region = var.region
name = "${var.step_function_name}-${count.index}"
definition_file = data.template_file.task_definition.rendered
}
The point here is that I can't render task_definition because those values are not known to Terraform before the apply. I wasn't able to use the -target argument either, because I wanted to make the change in code and not in my deployment pipeline. So when you run terraform plan, the error pops up on definition_file.
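(For completeness, the -target workaround the error suggests would have looked roughly like this, applying the ecs module first; it was ruled out here because the fix needed to live in code, not in the pipeline:)

terraform apply -target=module.ecs
terraform apply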
Solution is below.
What worked was to decouple the use of the count from the .rendered argument by doing this:
data "template_file" "task_definition" {
count = length(var.task_container_command)
template = file("./configs/file.json")
vars = {
task = module.ecs[count.index].task_definition
}
}
resource "local_file" "foo" {
count = length(var.task_container_command)
content = element(data.template_file.task_definition.*.rendered, count.index)
filename = "task-definition-${count.index}"
}
module "step_function" {
count = length(var.task_container_command)
source = "path"
region = var.region
name = "${var.step_function_name}-${count.index}"
definition_file = local_file.foo[count.index].filename
}
Now the data is rendered in the resource called "foo" and then passed to the step_function module, so terraform plan already knows what's inside the variable. The content argument of "foo", combined with count, renders each task_definition in turn, each written to a different filename to avoid duplicates.
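As an aside, on Terraform 0.12+ the templatefile function can render the template inline, which would avoid both the deprecated template_file data source and the intermediate local_file. This is only a sketch, and it assumes the module's definition_file argument accepts rendered content rather than a file path:

module "step_function" {
  count  = length(var.task_container_command)
  source = "path"
  region = var.region
  name   = "${var.step_function_name}-${count.index}"

  # Assumes definition_file may be given the rendered JSON directly
  definition_file = templatefile("${path.module}/configs/file.json", {
    task = module.ecs[count.index].task_definition
  })
}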
Hope this helped :)

How do I mark a LogDNA or SysDig instance as the default destination for Platform Logs or Metrics?

I'm using the Terraform provider for IBM Cloud to create a LogDNA instance. I'd like to mark this instance as the destination for Platform Logs.
Here is my Terraform:
resource "ibm_resource_instance" "logdna_us_south" {
  name              = "logging-us-south"
  location          = "us-south"
  service           = "logdna"
  plan              = "7-day"
  resource_group_id = ibm_resource_group.dev.id
}
Is it possible?
You need to set the default_receiver parameter when creating the instance as described in https://cloud.ibm.com/docs/Log-Analysis-with-LogDNA?topic=Log-Analysis-with-LogDNA-config_svc_logs#platform_logs_enabling_cli
Your Terraform should look like this:
resource "ibm_resource_instance" "logdna_us_south" {
  name              = "logging-us-south"
  location          = "us-south"
  service           = "logdna"
  plan              = "7-day"
  resource_group_id = ibm_resource_group.dev.id

  parameters = {
    "default_receiver" = true
  }
}
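The title also mentions Sysdig; for Platform Metrics the same default_receiver parameter applies to the monitoring instance. A hedged sketch follows; the service and plan names used here are assumptions to verify against the IBM Cloud catalog:

resource "ibm_resource_instance" "sysdig_us_south" {
  name              = "monitoring-us-south"
  location          = "us-south"
  service           = "sysdig-monitor"  # assumed catalog service name
  plan              = "graduated-tier"  # assumed plan name
  resource_group_id = ibm_resource_group.dev.id

  parameters = {
    "default_receiver" = true
  }
}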

Create multiple aws_cloudformation_stack based on parametrized name with Terraform

Is it possible to create multiple CloudFormation stacks with one aws_cloudformation_stack resource definition in Terraform, based on a parametrized name?
I have the following resources defined, and I would like to have one stack per app_name, app_env, build_name combination:
resource "aws_s3_bucket_object" "sam_deploy_object" {
bucket = var.sam_bucket
key = "${var.app_env}/${var.build_name}/sam_template_${timestamp()}.yaml"
source = "../.aws-sam/sam_template_output.yaml"
etag = filemd5("../.aws-sam/sam_template_output.yaml")
}
resource "aws_cloudformation_stack" "subscriptions_sam_stack" {
name = "${var.app_name}---${var.app_env}--${var.build_name}"
capabilities = ["CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"]
template_url = "https://${var.sam_bucket}.s3-${data.aws_region.current.name}.amazonaws.com/${aws_s3_bucket_object.sam_deploy_object.id}"
}
When I run terraform apply after build_name changes, the old stack gets deleted and a new one is created. However, I would like to keep the old stack and create a new one alongside it.
One way would be to define your variable build_name as a list. Then, when you create a new build, you just append its name to the list and create the stacks with the help of for_each, iterating over the build names.
For example, if you have the following:
variable "app_name" {
default = "test1"
}
variable "app_env" {
default = "test2"
}
variable "build_name" {
default = ["test3"]
}
resource "aws_cloudformation_stack" "subscriptions_sam_stack" {
for_each = toset(var.build_name)
name = "${var.app_name}---${var.app_env}--${each.value}"
capabilities = ["CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"]
template_url = "https://${var.sam_bucket}.s3-${data.aws_region.current.name}.amazonaws.com/${aws_s3_bucket_object.sam_deploy_object.id}"
}
Then, if you want a second stack for a new build, you just extend variable "build_name":
variable "build_name" {
default = ["test3", "new_build"]
}
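Note that aws_s3_bucket_object.sam_deploy_object still interpolates var.build_name directly in its key, which no longer works once build_name is a list. One option, sketched here, is to give the object the same for_each so each build keeps its own uploaded template and each stack references the matching object:

resource "aws_s3_bucket_object" "sam_deploy_object" {
  for_each = toset(var.build_name)

  bucket = var.sam_bucket
  key    = "${var.app_env}/${each.value}/sam_template_${timestamp()}.yaml"
  source = "../.aws-sam/sam_template_output.yaml"
  etag   = filemd5("../.aws-sam/sam_template_output.yaml")
}

# Then, inside aws_cloudformation_stack.subscriptions_sam_stack:
# template_url = "https://${var.sam_bucket}.s3-${data.aws_region.current.name}.amazonaws.com/${aws_s3_bucket_object.sam_deploy_object[each.value].id}"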

How to have conditional resources inside a module with 0.12 for_each

I'm passing my module a list, and it creates EC2 instances and EIPs and attaches the EIPs to the instances.
I'm using for_each so users can reorder the list and Terraform won't try to destroy anything.
But how do I use conditional resources now? Do I still use count? If so, how, given that you can't combine count with for_each?
This is my module now:
variable "mylist" {
type = set(string)
description = "Name used for tagging, AD, and chef"
}
variable "createip" {
type = bool
default = true
}
resource "aws_instance" "sdfsdfsdfsdf" {
for_each = var.mylist
user_data = data.template_file.user_data[each.key].rendered
tags = each.value
...
#conditional for EIP
resource "aws_eip" "public-ip" {
for_each = var.mylist
// I can't use this anymore!
// how can I say if true create else don't create
#count = var.createip ? 0 : length(tolist(var.mylist))
instance = aws_instance.aws-vm[each.key].id
vpc = true
tags = each.value
}
I also need the value of the mylist item for the EIP, because I use it to tag the EIP. So I think I need to index into the for_each somehow, and also be able to use count or another list to determine whether it gets created. Is that correct?
I think I got it, but I don't want to accept this until it's confirmed it is not the wrong way (not as a matter of opinion, but as improper usage that will cause actual problems).
variable "mylist" {
type = set(string)
description = "Name used for tagging, AD, and chef"
}
variable "createip" {
type = bool
default = true
}
locals {
// set will-create-public-ip to empty array if false
// otherwise use same mylist which module uses for creating instances
will-create-public-ip = var.createip ? var.mylist : []
}
resource "aws_instance" "sdfsdfsdfsdf" {
for_each = var.mylist
user_data = data.template_file.user_data[each.key].rendered
tags = each.value
...
resource "aws_eip" "public-ip" {
// will-create-public-ip set to mylist or empty to skip this resource creatation
for_each = will-create-public-ip
instance = aws_instance.aws-vm[each.key].id
vpc = true
tags = each.value
}
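For completeness, a caller could then turn the EIPs off like this (the module path and names are illustrative):

module "app_servers" {
  source = "../modules/ec2-with-eip"  # hypothetical module path

  mylist   = ["app-01", "app-02"]
  createip = false  # no aws_eip resources will be created
}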

Create nested resource parameter blocks based on conditional in terraform

I am trying to create a terraform module that creates a compute instance. I want the resource to have an attached disk if and only if I have a variable attached_disk_enabled set to true during module invocation. I have this:
resource "google_compute_disk" "my-disk" {
name = "data"
type = "pd-ssd"
size = 20
count = var.attached_disks_enabled ? 1 : 0
}
resource "google_compute_instance" "computer" {
name = "computer"
boot_disk {
...
}
// How do I make this disappear if attached_disk_enabled == false?
attached_disk {
source = "${google_compute_disk.my-disk.self_link}"
device_name = "computer-disk"
mode = "READ_WRITE"
}
}
Variables have been declared for the module in vars.tf. Module invocation is like this:
module "main" {
source = "../modules/computer"
attached_disk_enabled = false
...
}
I know about dynamic blocks and how to use for loop to iterate over a list and set multiple blocks, but I'm not sure how to exclude a block from a resource using this method:
dynamic "attached-disk" {
for_each in var.disk_list
content {
source = "${google_compute_disk.my-disk.*.self_link}"
device_name = "computer-disk-${count.index}"
mode = "READ_WRITE"
}
}
I want if in place of for_each. Is there a way to do this?
$ terraform version
Terraform v0.12.0
Because your disk resource already has the conditional attached to it, you can use the result of that resource as your iterator and thus avoid specifying the conditional again:
dynamic "attached_disk" {
for_each = google_compute_disk.my-disk
content {
source = attached_disk.value.self_link
device_name = "computer-disk-${attached_disk.key}"
mode = "READ_WRITE"
}
}
To answer the general question: if you do need a conditional block, the answer is to write a conditional expression that returns either a single-item list or an empty list:
dynamic "attached_disk" {
for_each = var.attached_disk_enabled ? [google_compute_disk.my-disk[0].self_link] : []
content {
source = attached_disk.value
device_name = "computer-disk-${attached_disk.key}"
mode = "READ_WRITE"
}
}
However, in your specific situation I'd prefer the former because it describes the intent ("attach each of the disks") more directly.
