terraform destroy produces cycle error when no cycles present

Terraform Version
Terraform v0.12.1
Terraform Configuration Files
main.tf in my root provider:
provider "google" {}
module "organisation_info" {
source = "../../modules/organisation-info"
top_level_domain = "smoothteam.fi"
region = "us-central1"
}
module "stack_info" {
source = "../../modules/stack-info"
organisation_info = "${module.organisation_info}"
}
Here's module 'organisation-info':
variable "top_level_domain" {}
variable "region" {}
data "google_organization" "organization" {
domain = "${var.top_level_domain}"
}
locals {
organization_id = "${data.google_organization.organization.id}"
ns = "${replace("${var.top_level_domain}", ".", "-")}-"
}
output "organization_id" {
value = "${local.organization_id}"
}
output "ns" {
value = "${local.ns}"
}
Then module 'stack-info':
variable "organisation_info" {
type = any
description = "The organisation-scope this environment exists in."
}
module "project_info" {
source = "../project-info"
organisation_info = "${var.organisation_info}"
name = "${local.project}"
}
locals {
# Use the 'default' workspace for the 'staging' stack.
name = "${terraform.workspace == "default" ? "staging" : terraform.workspace}"
# In the 'production' stack, target the production project. Otherwise, target the staging project.
project = "${local.name == "production" ? "production" : "staging"}"
}
output "project" {
value = "${module.project_info}" # COMMENTING THIS OUTPUT REMOVES THE CYCLE.
}
And finally, the 'project-info' module:
variable "organisation_info" {
type = any
}
variable "name" {}
data "google_project" "project" {
project_id = "${local.project_id}"
}
locals {
project_id = "${var.organisation_info.ns}${var.name}"
}
output "org" {
value = "${var.organisation_info}"
}
Debug Output
After doing terraform destroy -auto-approve, I get:
Error: Cycle: module.stack_info.module.project_info.local.project_id, module.stack_info.output.project, module.stack_info.module.project_info.data.google_project.project (destroy), module.organisation_info.data.google_organization.organization (destroy), module.stack_info.var.organisation_info, module.stack_info.module.project_info.var.organisation_info, module.stack_info.module.project_info.output.org
And terraform graph -verbose -draw-cycles -type=plan-destroy gives me this graph:
Source:
digraph {
  compound = "true"
  newrank = "true"
  subgraph "root" {
    "[root] module.organisation_info.data.google_organization.organization" [label = "module.organisation_info.data.google_organization.organization", shape = "box"]
    "[root] module.stack_info.module.project_info.data.google_project.project" [label = "module.stack_info.module.project_info.data.google_project.project", shape = "box"]
    "[root] provider.google" [label = "provider.google", shape = "diamond"]
    "[root] module.organisation_info.data.google_organization.organization" -> "[root] module.stack_info.module.project_info.data.google_project.project"
    "[root] module.organisation_info.data.google_organization.organization" -> "[root] provider.google"
    "[root] module.stack_info.module.project_info.data.google_project.project" -> "[root] provider.google"
  }
}
Expected Behavior
The idea is to use modules at the org, project, and stack levels to set up naming conventions that can be re-used across all resources: organisation-info loads organisation details, project-info loads project details, and stack-info determines which project to target based on the current workspace.
I have omitted a bunch of other logic in the modules in order to keep them clean for this issue.
According to terraform graph there are no cycles, and destroy should work fine.
Actual Behavior
We get the cycle I posted above, even though terraform shows no cycles.
Steps to Reproduce
Set up the three modules, organisation-info, project-info, and stack-info as shown above.
Set up a root provider as shown above.
terraform init
terraform destroy (it doesn't seem to matter if you've applied first)
Additional Context
The weird thing is that if I comment out this output in stack-info, the cycle stops:
output "project" {
value = "${module.project_info}" # IT'S THIS... COMMENTING THIS OUT REMOVES THE CYCLE.
}
This seems really weird... I neither understand why outputting a variable should make a difference, nor why I'm getting a cycle error when there's no cycle.
Oddly, terraform plan -destroy does not reveal the cycle, only terraform destroy.
My spidey sense tells me evil is afoot.
Appreciate anyone who can tell me what's going on, whether this is a bug, and perhaps how to work around it.
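For reference, the workaround I'm leaning towards (a sketch only; the variable names here are hypothetical) is to pass the individual attributes stack-info actually needs instead of the whole module object, so no single output carries a dependency on everything:

module "stack_info" {
  source = "../../modules/stack-info"
  # Hypothetical scalar variables replacing the whole-module 'organisation_info':
  organisation_ns = "${module.organisation_info.ns}"
  organisation_id = "${module.organisation_info.organization_id}"
}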

In my case, the same cycle error occurred because, of the three key/value pairs expected by a map-type variable in a module's variables.tf, one was not declared in main.tf or anywhere else.
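A minimal sketch of what I mean (names are made up for illustration):

# modules/example/variables.tf -- the module expects three keys;
# code inside the module reads name, env and region:
variable "settings" {
  type = map(string)
}

# main.tf -- but the caller only supplies two of them; "region" is
# never declared here nor anywhere else:
module "example" {
  source = "./modules/example"
  settings = {
    name = "demo"
    env  = "dev"
  }
}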

Related

Why am I getting 'Unsupported argument errors' in my main.tf file?

I have a main.tf file with the following code block:
module "login_service" {
source = "/path/to/module"
name = var.name
image = "python:${var.image_version}"
port = var.port
command = var.command
}
# Other stuff below
I've defined a variables.tf file as follows:
variable "name" {
type = string
default = "login-service"
description = "Name of the login service module"
}
variable "command" {
type = list(string)
default = ["python", "-m", "LoginService"]
description = "Command to run the LoginService module"
}
variable "port" {
type = number
default = 8000
description = "Port number used by the LoginService module"
}
variable "image" {
type = string
default = "python:3.10-alpine"
description = "Image used to run the LoginService module"
}
Unfortunately, I keep getting this error when running terraform plan.
Error: Unsupported argument
│
│ on main.tf line 4, in module "login_service":
│ 4: name = var.name
│
│ An argument named "name" is not expected here.
This error repeats for the other variables. I've done a bit of research, read the Terraform documentation on variables, and read other Stack Overflow answers, but I haven't really found a good answer to the problem.
Any help appreciated.
A Terraform module block is only for referring to a Terraform module. It doesn't support any other kind of module. Terraform modules are a means for reusing Terraform declarations across many configurations, but the source of a module block must itself be a directory containing Terraform (.tf) configuration files.
Therefore, for this to be valid, there must be at least one .tf file in /path/to/module that declares the variables you are trying to pass into the module.
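For example (a minimal sketch, mirroring the arguments you are passing in), /path/to/module/variables.tf would need to declare each of them:

# /path/to/module/variables.tf
variable "name" {
  type = string
}

variable "image" {
  type = string
}

variable "port" {
  type = number
}

variable "command" {
  type = list(string)
}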
From what you've said it seems like there's a missing step in your design: you are trying to declare something in Kubernetes using Terraform, but the configuration you've shown here doesn't include anything which would tell Terraform to interact with Kubernetes.
A typical way to manage Kubernetes objects with Terraform is using the hashicorp/kubernetes provider. A Terraform configuration using that provider would include a declaration of the dependency on that provider, the configuration for that provider, and at least one resource block declaring something that should exist in your Kubernetes cluster:
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "kubernetes" {
  host = "https://example.com/" # URL of your Kubernetes API
  # ...
}

# For example only, a kubernetes_deployment resource
# that declares one Kubernetes deployment.
# In practice you can use any resource type from this
# provider, depending on what you want to declare.
resource "kubernetes_deployment" "example" {
  metadata {
    name = "terraform-example"
    labels = {
      test = "MyExampleApp"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        test = "MyExampleApp"
      }
    }

    template {
      metadata {
        labels = {
          test = "MyExampleApp"
        }
      }

      spec {
        container {
          image = "nginx:1.21.6"
          name  = "example"

          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80

              http_header {
                name  = "X-Custom-Header"
                value = "Awesome"
              }
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
      }
    }
  }
}
Although you can arrange resources into separate modules in Terraform if you wish, I would suggest focusing on learning to directly describe resources in Terraform first and then once you are confident with that you can learn about techniques for code reuse using Terraform modules.

Terraform passing list/set from root module to child module issue

I have this root module which calls the child module to create a GCP project and create IAM role bindings.
module "test_project" {
source = "terraform.dev.mydomain.com/Dev/sbxprjmodule/google"
version = "1.0.3"
short_name = "looker-nwtest"
owner_bindings = ["group:npe-cloud-platformeng-contractors#c.mydomain.com", "group:npe-sbox-rw-tfetraining#c.mydomain.com"]
}
variable "owner_bindings" {
type = list(string)
default = null
}
This is the child module which does the assignments
resource "google_project_iam_binding" "g-sbox-iam-owner" {
count = var.owner_bindings == null ? 0 : length(var.owner_bindings)
project = "${var.project_id}-${var.short_name}"
role = "roles/owner"
members = [var.owner_bindings[count.index]]
}
variable "owner_bindings" {
type = list(string)
default = null
}
When I do a terraform plan and apply, it creates both the bindings properly, looping through twice. Then when I run a terraform plan again and apply, it shows this change below.
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[0] will be updated in-place
~ resource "google_project_iam_binding" "g-sbox-iam-owner" {
      id      = "g-prj-npe-sbox-looker-nwtest/roles/owner"
    ~ members = [
        + "group:npe-cloud-platformeng-contractors#c.mydomain.com",
        - "group:npe-sbox-rw-tfetraining#c.mydomain.com",
      ]
      # (3 unchanged attributes hidden)
  }
Next time I do a terraform plan and apply, it shows the below. It then alternates between the two of the groups on each subsequent plan and apply.
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[1] will be updated in-place
~ resource "google_project_iam_binding" "g-sbox-iam-owner" {
      id      = "g-prj-npe-sbox-looker-nwtest/roles/owner"
    ~ members = [
        - "group:npe-cloud-platformeng-contractors#c.mydomain.com",
        + "group:npe-sbox-rw-tfetraining#c.mydomain.com",
      ]
      # (3 unchanged attributes hidden)
  }
I tried changing the data structure from list to set and had the same issue.
The groups are not inherited and are applied only at the project level. So I'm not sure what I'm doing wrong here.
Instead of count you can use for_each; the change is simple.
The resource in your child module will look something like this:
resource "google_project_iam_binding" "g-sbox-iam-owner" {
for_each = var.owner_bindings == null ? toset([]) : toset(var.owner_bindings)
project = "${var.project_id}-${var.short_name}"
role = "roles/owner"
members = [each.value]
}
count becomes for_each, and in members we use each.value.
With for_each the state addresses change: you will no longer see the numeric index:
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[0]
...
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner[1]
instead the addresses will use the member values as keys, something like:
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner["abc"]
...
# module.lookernwtest_project.google_project_iam_binding.g-sbox-iam-owner["def"]
To loop or not to loop
After looking at this for a while, I'm questioning why we need individual iam_binding resources at all. google_project_iam_binding is authoritative for its role: each instance tries to own the full member list for "roles/owner", so two instances managing the same role keep overwriting each other, which is exactly the alternation you're seeing. If all members get the same "roles/owner", we could just do:
resource "google_project_iam_binding" "g-sbox-iam-owner" {
project = "${var.project_id}-${var.short_name}"
role = "roles/owner"
members = [var.owner_bindings]
}

Error "An argument named ... is not expected here" for terraform resource

I'm trying to replicate the nested use of for_each at https://blog.boltops.com/2020/10/06/terraform-hcl-nested-loops/, and am getting an error An argument named "name" is not expected here. right after the first for_each. So I've narrowed it down to the following code:
# Datadog Performance Dashboard
locals {
  dashboard_title = "Title here"
  hosts           = toset(["host1", "host2"])
  params = {
    "CPU" = [
      {
        title    = "System Load - 1 min avg"
        dd_param = "avg:system.load.norm.1"
      }
    ]
    "RAM" = [
      {
        title    = "Memory Commit limit"
        dd_param = "system.mem.commit_limit"
      }
    ]
  }
}

resource "datadog_dashboard" "ordered_dashboard" {
  # for_each = local.params
  # name = each.key
  name         = "asdadas"
  title        = local.dashboard_title
  description  = "Created using the Datadog provider in Terraform"
  layout_type  = "ordered"
  is_read_only = true
}
Since name = each.key doesn't work (i.e. it gives the same error, An argument named "name" is not expected here), one can see I tried commenting out the for_each, the each.key assignment, and the rest of the nested loop, and it's still complaining about the variable.
It would make sense that maybe I need to declare it before using it, but the code in the link I mentioned earlier doesn't do that. Neither does https://www.terraform.io/language/resources/syntax nor https://www.terraform.io/language/expressions/dynamic-blocks, both from HashiCorp's Terraform documentation site.
This is the version of terraform I'm using:
$ terraform -v
Terraform v1.1.3
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v3.73.0
Anyone have thoughts on what I'm missing here? Why is terraform complaining on the variable assignment?
It seems to me that the datadog_dashboard resource doesn't have an argument named name, so it gives that error.
For more information, take a careful look at:
https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/dashboard
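A minimal sketch of the fix under that assumption: drop name and restore the for_each, folding each.key (the "CPU"/"RAM" map key from local.params) into title instead:

resource "datadog_dashboard" "ordered_dashboard" {
  for_each     = local.params
  title        = "${local.dashboard_title} - ${each.key}" # e.g. "Title here - CPU"
  description  = "Created using the Datadog provider in Terraform"
  layout_type  = "ordered"
  is_read_only = true
}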

terraform nested interpolation with count

Using terraform I wish to refer to the content of a list of files (ultimately I want to zip them up using the archive_file provider, but in the context of this post that isn't important). These files all live within the same directory so I have two variables:
variable "source_root_dir" {
type = "string"
description = "Directory containing all the files"
}
variable "source_files" {
type = "list"
description = "List of files to be added to the cloud function. Locations are relative to source_root_dir"
}
I want to use the template data provider to refer to the content of the files. Given the number of files in source_files can vary I need to use a count to carry out the same operation on all of them.
Thanks to the information provided at https://stackoverflow.com/a/43195932/201657 I know that I can refer to the content of a single file like so:
provider "template" {
version = "1.0.0"
}
variable "source_root_dir" {
type = "string"
}
variable "source_file" {
type = "string"
}
data "template_file" "t_file" {
template = "${file("${var.source_root_dir}/${var.source_file}")}"
}
output "myoutput" {
value = "${data.template_file.t_file.rendered}"
}
Notice that it contains nested string interpolations. If I run:
terraform init && terraform apply -var source_file="foo" -var source_root_dir="./mydir"
after creating file mydir/foo of course then this is the output:
Success!
Now I want to combine that nested string interpolation syntax with my count. Hence my terraform project now looks like this:
provider "template" {
version = "1.0.0"
}
variable "source_root_dir" {
type = "string"
description = "Directory containing all the files"
}
variable "source_files" {
type = "list"
description = "List of files to be added to the cloud function. Locations are relative to source_root_dir"
}
data "template_file" "t_file" {
count = "${length(var.source_files)}"
template = "${file("${"${var.source_root_dir}"/"${element("${var.source_files}", count.index)}"}")}"
}
output "myoutput" {
value = "${data.template_file.t_file.*.rendered}"
}
Yes, it looks complicated, but syntactically it's correct (at least I think it is). However, if I run init and apply:
terraform init && terraform apply -var source_files='["foo", "bar"]' -var source_root_dir='mydir'
I get errors:
Error: data.template_file.t_file: 2 error(s) occurred:
* data.template_file.t_file[0]: __builtin_StringToInt: strconv.ParseInt: parsing "mydir": invalid syntax in:
${file("${"${var.source_root_dir}"/"${element("${var.source_files}", count.index)}"}")}
* data.template_file.t_file[1]: __builtin_StringToInt: strconv.ParseInt: parsing "mydir": invalid syntax in:
${file("${"${var.source_root_dir}"/"${element("${var.source_files}", count.index)}"}")}
My best guess is that it's interpreting the / as a division operation, hence it's attempting to parse the value mydir in source_root_dir as an int.
I've played around with this for ages now and can't figure it out. Can someone figure out how to use nested string interpolations together with a count in order to refer to the content of multiple files using the template provider?
OK, I think I figured it out: formatlist to the rescue.
provider "template" {
version = "1.0.0"
}
variable "source_root_dir" {
type = "string"
description = "Directory containing all the files"
}
variable "source_files" {
type = "list"
description = "List of files to be added to the cloud function. Locations are relative to source_root_dir"
}
locals {
fully_qualified_source_files = "${formatlist("%s/%s", var.source_root_dir, var.source_files)}"
}
data "template_file" "t_file" {
count = "${length(var.source_files)}"
template = "${file(element("${local.fully_qualified_source_files}", count.index))}"
}
output "myoutput" {
value = "${data.template_file.t_file.*.rendered}"
}
when applied:
terraform init && terraform apply -var source_files='["foo", "bar"]' -var source_root_dir='mydir'
outputs:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
myoutput = [
This is the content of foo
,
This is the content of bar
]
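(For what it's worth, on Terraform 0.12 and later the same result needs neither template_file nor count; a for expression over file() does it directly. An untested sketch:)

output "myoutput" {
  value = [for f in var.source_files : file("${var.source_root_dir}/${f}")]
}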

Terraform use case to create multiple almost identical copies of infrastructure

I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example, you have multiple business units inside a big organization, and you want to build out the same basic networks for each. Or you want an easy way for a developer to spin up the stack he's working on. The only difference between "tf apply" invocations is the variable BUSINESS_UNIT, for example, which is passed in as an environment variable.
Is anyone else using a system like this, and if so, how do you manage the state files ?
You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it.
Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
To create the infrastructure for multiple business units, you could use the same module multiple times:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
}
If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf):
variable "business_unit_name" {
description = "The name of the business unit"
}
Now you can set this variable to a different value each time you use the module:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
business_unit_name = "a"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
business_unit_name = "b"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
business_unit_name = "c"
}
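(A side note beyond the original scope of this answer: on Terraform 0.13 and later, module blocks also accept for_each, so the three near-identical blocks above could collapse into one. A sketch only:)

module "business_unit" {
  for_each           = toset(["a", "b", "c"])
  source             = "/terraform/modules/common-infra"
  business_unit_name = each.key
}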
For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.
There are two ways of doing this that jump to mind.
Firstly, you could go down the route of using the same Terraform configuration folder that you apply and simply pass in a variable when running Terraform (either via the command line or through environment variables). You'd also want a wrapper script that calls Terraform and configures your state settings so they differ per business unit.
This might end up with something like this:
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
Which creates an EC2 instance and an RDS instance. You would then call that with something like this:
#!/bin/bash
if [ "$#" -ne 1 ]; then
  echo "Illegal number of parameters - specify business unit as positional parameter"
  exit 1
fi

business_unit=$1

terraform remote config -backend="s3" \
  -backend-config="bucket=${business_unit}" \
  -backend-config="key=state"
terraform remote pull
# Double quotes so the shell actually expands ${business_unit}.
terraform apply -var "BUSINESS_UNIT=${business_unit}"
terraform remote push
As an alternative route you might want to consider using modules to wrap your Terraform configuration.
So instead you might have something that now looks like:
web-instance/main.tf
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }

resource "aws_instance" "web" {
  ami           = "${var.ami}"
  instance_type = "t2.micro"

  tags {
    Name          = "web"
    Business_Unit = "${var.BUSINESS_UNIT}"
  }
}
db-instance/main.tf
variable "BUSINESS_UNIT" {}

resource "aws_db_instance" "default" {
  allocated_storage    = 10
  engine               = "mysql"
  engine_version       = "5.6.17"
  instance_class       = "db.t2.micro"
  name                 = "${var.BUSINESS_UNIT}"
  username             = "foo"
  password             = "bar"
  db_subnet_group_name = "db_subnet_group"
  parameter_group_name = "default.mysql5.6"
}
And then you might have different folders that call these modules per business unit:
business-unit-1/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-1" }

module "web_instance" {
  source        = "../web-instance"
  BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}

module "db_instance" {
  source        = "../db-instance"
  BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
and
business-unit-2/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-2" }

module "web_instance" {
  source        = "../web-instance"
  BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}

module "db_instance" {
  source        = "../db-instance"
  BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
You still need a wrapper script to manage state configuration as before, but going this route enables you to provide a rough template in your modules and then hard-code certain extra configuration per business unit, such as the instance size or the number of instances built for them.
This is a rather popular use case. To achieve this, you can let developers pass a variable from the command line or from a tfvars file into the resource to make resources unique:
main.tf:
resource "aws_db_instance" "db" {
  identifier = "${var.BUSINESS_UNIT}"
  # ... read more in docs
}

$ terraform apply -var 'BUSINESS_UNIT=unit_name'
PS: We often do this to provision infrastructure for a specific git branch name, and since all resources are identifiable and live in separate tfstate files, we can safely destroy them when we no longer need them.
