Condition on a Terraform module

Trying to run modules conditionally.
Expectation: run the module only when env is not equal to "prd".
module "database_diagnostic_eventhub_setting" {
count = var.env != "prd" ? 1 : 0 // run block if condition is satisfied
source = "git::https://git_url//modules/...."
target_ids = [
"${data.terraform_remote_state.database.outputs.server_id}"
]
environment = "${var.environment}-database-eventhub"
destination = data.azurerm_eventhub_namespace_authorization_rule.event_hub.id
eventhub_name = var.eventhub_name
logs = [
"PostgreSQLLogs",
"QueryStoreWaitStatistics"
]
}
Error:
The name "count" is reserved for use in a future version of Terraform.

You need to use Terraform v0.13 or later in order to use count or for_each inside a module block.
If you can't upgrade from Terraform v0.12 then the old approach, prior to support for module repetition, was to add a variable to your module to specify the object count:
variable "instance_count" {
type = number
}
...and then inside your module add count to each of the resources:
resource "example" "example" {
count = var.instance_count
}
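The calling module then passes the result of the condition as a number. A minimal sketch reusing the question's condition (the remaining module arguments stay as in the question):
module "database_diagnostic_eventhub_setting" {
  source         = "git::https://git_url//modules/...."
  instance_count = var.env != "prd" ? 1 : 0
  # ... remaining arguments unchanged
}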
However, if you are able to upgrade to Terraform v0.13 now then I would strongly suggest doing so rather than using the above workaround, because upgrading to use module-level count later, with objects already created, is quite a fiddly process involving running terraform state mv for each of the resources in that module.
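For illustration, migrating existing objects into a module that has gained count = 1 involves one move per resource, along these lines (the resource address here is hypothetical):
terraform state mv 'module.database_diagnostic_eventhub_setting.azurerm_monitor_diagnostic_setting.this' 'module.database_diagnostic_eventhub_setting[0].azurerm_monitor_diagnostic_setting.this'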

Why does declaring different modules download as many registry modules locally, plus "Error: Duplicate required providers configuration"?

I just tried to create, say, two sets of resources using the same registry module, which creates Oracle Cloud compartments (multi-level).
see Module link
I needed two sub-compartments because set #2 is a child of set #1.
Example (Terraform v1.0.3):
module "main_compartment" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
tenancy_ocid = var.tenancy_ocid
compartment_id = var.tenancy_ocid # define the parent compartment. Creation at tenancy root if omitted
compartment_name = "mycomp"
compartment_description = "main compartment at root level"
compartment_create = true
enable_delete = true
}
}
module "level_1_sub_compartments" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
for_each = local.compartments.l1_subcomp
compartment_id = module.iam_compartment_main_compartment.compartment_id # define the parent compartment. Here we make reference to the previous module
compartment_name = each.value.compartment_name
compartment_description = each.value.description
compartment_create = true # if false, a data source with a matching name is created instead
enable_delete = true # if false, on `terraform destroy`, compartment is deleted from the terraform state but not from oci
}
...}
module "level_2_sub_compartments" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
for_each = local.compartments.l2_subcomp
compartment_id = data.oci_identity_compartments.compx.id # define the parent compartment. Here we make reference to one of the l1 subcomp created in the previous module
compartment_name = each.value.compartment_name
compartment_description = each.value.description
compartment_create = true # if false, a data source with a matching name is created instead
enable_delete = true # if false, on `terraform destroy`, compartment is deleted from the terraform state but not from oci
depends_on = [module.level_1_sub_compartments,]
....}
When I run terraform init I get as many folders as there are module blocks. Why would I call the module this way?
Why not download a single module manually and then reference it three times as a local module?
Or would I be better off writing dynamic blocks in main.tf using the regular compartment resource?
Initializing modules...
Downloading oracle-terraform-modules/iam/oci 2.0.2 for main_compartment...
- main_compartment in .terraform/modules/main_compartment/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_1_sub_compartments...
- level_1_sub_compartments in .terraform/modules/level_1_sub_compartments/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_2_sub_compartments...
- level_2_sub_compartments in .terraform/modules/level_2_sub_compartments/modules/iam-compartment
There are some problems with the configuration, described below.
(repeated for each module) => Error: Duplicate required providers configuration
A module may have only one required providers configuration. The required providers were previously configured at .terraform/modules/level_1_sub_compartments/modules/iam-compartment/main.tf:5,3-21.
What I wanted was to reuse one registry module through a URL source but have only one physical folder in my working directory.
I just expected it to work, but it seems local modules are the only working option for this goal. If I'm doing anything wrong please let me know, as the provider error also comes from the fact that I have multiple directories containing the same module configuration. Thank you.
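For reference, the local-module variant alluded to above would point all three blocks at one manually downloaded copy (the ./modules/iam-compartment path here is hypothetical):
module "main_compartment" {
  source = "./modules/iam-compartment" # same local folder reused by every module block
  # arguments as above
}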

Terraform: using outputs from multiple occurrences of a module created using for_each

I have a problem that appeared after I changed my code so that the number of "instances" of a sub-module went from one to a dynamic number (using for_each). The sub-module is not of my authorship; I use ready-made code from the registry. Its initialization looks, among other things, like this:
module "container_definition_sidecar" {
source = "cloudposse/ecs-container-definition/aws"
version = "v0.46.0"
for_each = var.sidecars
container_name = each.value.container_name
container_image = each.value.container_image
...
Why do I write "sub-module"? Because I already use the above fragment in my own module, called simply "ECS", which is initialized like this:
module "ecs-service" {
source = "./ecs-service"
environment = "test"
awslogs_group = "/ecs/fargate-task-definition"
awslogs_stream_prefix = "ecs"
container_name = "my_container"
container_image = "nginx:latest"
...
sidecars = {
first_sidecar = {
container_name = "logzio-log-router"
container_image = "12345.dkr.ecr.us-east-2.amazonaws.com/aws-for-fluent-bit:latest"
}
second_sidecar = {...}
}
Now, where is the problem?
Using jsonencode, I need to collect the output (called json_map_object, according to the documentation) from each instance of module.container_definition_sidecar brought to life by for_each:
resource "aws_ecs_task_definition" "task_definition" {
family = var.family_name
network_mode = "awsvpc"
requires_compatibilities = [ "FARGATE" ]
container_definitions = jsonencode([module.container_definition_sidecar[*].json_map_object])
When I try to use [*] I receive this error:
Error: Unsupported attribute
│
│ on ecs-service/main.tf line 111, in resource "aws_ecs_task_definition" "task_definition":
│ 111: container_definitions = jsonencode([module.container_definition_sidecar.*.json_map_object])
│
│ This object does not have an attribute named "json_map_object".
And the only situation in which the code passes is when I manually type e.g.:
container_definitions = jsonencode([module.container_definition_sidecar["first_sidecar"].json_map_object, module.container_definition_sidecar["second_sidecar"].json_map_object])
However, I of course don't want to provide the keys ["first_sidecar"], ["second_sidecar"], etc. manually, but I don't know how to handle that dynamically.
I'll just add that the place where jsonencode is executed has no access to the inputs of the ecs-service module, so I can't go through them and extract the sidecar keys.
OK, I solved my own issue by writing the following code; posting it because maybe someone will find it useful:
container_definitions = jsonencode([for key in range(length(var.sidecars)): module.container_definition_sidecar[keys(var.sidecars)[key]].json_map_object])
That is, I loop as many times as there are keys in the map. Then I use the built-in keys() function, pointing at the map and at the numeric position of the key I want (not the key's name, but its index). Thanks to the for expression, the list is built dynamically, once per node in the map.
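For reference, the same result can be written more directly: a module with for_each is a map of objects, so iterating its values avoids re-deriving the keys entirely. An equivalent sketch:
container_definitions = jsonencode([for sidecar in module.container_definition_sidecar : sidecar.json_map_object])
This also explains the original error: the [*] splat only works on lists, while a for_each module produces a map.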

Conditionally create a single module in Terraform

I have been trying to conditionally use a module from the root module, so that for certain environments this module is not created. Many people claim that setting count in the module block to either 0 or 1 using a conditional does the trick.
module "conditionally_used_module" {
source = "./modules/my_module"
count = (var.create == true) ? 1 : 0
}
However, this changes the type of conditionally_used_module: instead of an object (or map) we now have a list (or tuple) containing a single object. Is there another way to achieve this that does not change the type of the module?
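For reference, if you do keep count on the module, Terraform v0.15 and later offer the one() function to collapse the single-element list back to a plain value when consuming the module's outputs (it returns null when count is 0). A minimal sketch, where some_output is a hypothetical output name:
locals {
  module_result = one(module.conditionally_used_module[*].some_output)
}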
To conditionally create a module you can use a variable, let's say it's called create_module, declared in the variables.tf file of the conditionally_used_module module.
Then, for every resource inside the conditionally_used_module module, use count to conditionally create or skip that specific resource.
The following example should work and provide you with the desired effect.
# Set a variable to know if the resources inside the module should be created
module "conditionally_used_module" {
  source        = "./modules/my_module"
  create_module = var.create
}

# Inside the conditionally_used_module module
# (most likely ./modules/my_module/main.tf),
# use count on every resource to create it or not
resource "resource_type" "resource_name" {
  count = var.create_module ? 1 : 0
  # ... other resource properties
}
I used this in conjunction with workspaces to build a resource only for certain environments. The advantage for me is that I get a single terraform.tfvars file to control the structure of all environments for a project.
Inside main.tf:
locals {
  workspace = terraform.workspace
}
# ...
module "gcp-internal-lb" {
  source = "../../modules/gcp-internal-lb"

  # Deploy conditionally based on the deploy_internal_lb variable
  count = var.deploy_internal_lb[local.workspace] == true ? 1 : 0
  # module attributes here
}
Then in variables.tf:
variable "deploy_internal_lb" {
  description = "Set to true if you want to create an internal LB"
  type        = map(bool)
}
And in terraform.tfvars:
deploy_internal_lb = {
  # DEV
  myproject-dev = false
  # QA
  myproject-qa = false
  # PROD
  myproject-prod = true
}
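For context, the matching workspace just needs to be selected before planning, using the standard CLI commands:
terraform workspace select myproject-qa
terraform plan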
I hope it helps.

terraform: data.aws_subnet, value of 'count' cannot be computed

terraform version 0.11.13
Error: Error refreshing state: 1 error(s) occurred:
data.aws_subnet.private_subnet: data.aws_subnet.private_subnet: value of 'count' cannot be computed
The following VPC code generated the error above:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
vpc_id = "${module.vpc.vpc_id}"
}
data "aws_subnet" "private_subnet" {
count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
#count = "${length(var.private-subnet-mapping)}"
id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
After changing the above code to use count = "${length(var.private-subnet-mapping)}", I successfully provisioned the VPC. But the output of vpc_private_subnets_ids is empty:
vpc_private_subnets_ids = []
Code that provisioned the VPC but returned an empty vpc_private_subnets_ids list:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
vpc_id = "${module.vpc.vpc_id}"
}
data "aws_subnet" "private_subnet" {
#count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
count = "${length(var.private-subnet-mapping)}"
id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
outputs.tf
output "vpc_private_subnets_ids" {
value = ["${data.aws_subnet.private_subnet.*.id}"]
}
The output of vpc_private_subnets_ids:
vpc_private_subnets_ids = []
I need the values of vpc_private_subnets_ids. After successfully provisioning the VPC with the line count = "${length(var.private-subnet-mapping)}", I changed the code back to count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}". On terraform apply, I then got the values of the vpc_private_subnets_ids list without the above error:
vpc_private_subnets_ids = [
  subnet-03199b39c60111111,
  subnet-068a3a3e76de66666,
  subnet-04b86aa9dbf333333,
  subnet-02e1d8baa8c222222
  ......
]
I cannot use count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}" when I provision the VPC, but I can use it after the VPC is provisioned. Any clue?
The problem here seems to be that your VPC isn't created yet and so the data "aws_subnet_ids" "private_subnet_ids" data source read must wait until the apply step, which in turn means that the number of subnets isn't known, and thus the number of data "aws_subnet" "private_subnet" instances isn't predictable and Terraform returns this error.
If this configuration is also the one responsible for creating those subnets then the better design would be to refer to the subnet objects directly. If module.vpc is also the module creating the subnets, I would suggest exporting the subnet ids as an output from that module. For example:
output "subnet_ids" {
value = "${aws_subnet.example.*.id}"
}
Your calling module can then just get those ids directly from module.vpc.subnet_ids, without the need for a redundant extra API call to look them up:
output "vpc_private_subnets_ids" {
value = ["${module.vpc.subnet_ids}"]
}
Aside from the error about count, the configuration you showed also has a race condition, because the data "aws_subnet_ids" "private_subnet_ids" block depends only on the VPC itself, and not on the individual subnets, so Terraform can potentially read that data source before the subnets have been created. Exporting the subnet ids through a module output means that any reference to module.vpc.subnet_ids indirectly depends on all of the subnets, and so those downstream actions will wait until all of the subnets have been created.
As a general rule, a particular Terraform configuration should either be managing an object or reading that object via a data source, and not both together. If you do both together then it may sometimes work but it's easy to inadvertently introduce race conditions like this, where Terraform can't tell that the data resource is attempting to consume the result of another resource block that's participating in the same plan.

Terraform target aws_volume_attachment with only its corresponding aws_instance resource from a list

I am not able to target a single aws_volume_attachment with its corresponding aws_instance via -target.
The problem is that the aws_instance is taken from a list by using count.index, which forces terraform to refresh all aws_instance resources from that list.
In my concrete case I am trying to manage a consul cluster with terraform.
The goal is to be able to reinit a single aws_instance resource via the -target flag, so I can upgrade/change the whole cluster node by node without downtime.
I have the following tf code:
### IP suffixes
variable "subnet_cidr" {
  default = "10.10.0.0/16"
}

// I want nodes with addresses 10.10.1.100, 10.10.1.101, 10.10.1.102
variable "consul_private_ips_suffix" {
  default = {
    "0" = "100"
    "1" = "101"
    "2" = "102"
  }
}
###########
# EBS
#
// Get existing data EBS volumes via their Name tag
data "aws_ebs_volume" "consul-data" {
  count = "${length(keys(var.consul_private_ips_suffix))}"

  filter {
    name   = "volume-type"
    values = ["gp2"]
  }
  filter {
    name   = "tag:Name"
    values = ["${var.platform_type}.${var.platform_id}.consul.data.${count.index}"]
  }
}
#########
# EC2
#
resource "aws_instance" "consul" {
  count = "${length(keys(var.consul_private_ips_suffix))}"
  ...
  private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
}

resource "aws_volume_attachment" "consul-data" {
  count       = "${length(keys(var.consul_private_ips_suffix))}"
  device_name = "/dev/sdh"
  volume_id   = "${element(data.aws_ebs_volume.consul-data.*.id, count.index)}"
  instance_id = "${element(aws_instance.consul.*.id, count.index)}"
}
This works perfectly fine for initializing the cluster.
Now I make a change in my user_data init script for the consul nodes and want to roll it out node by node.
I run terraform plan -target=aws_volume_attachment.consul-data[0] to reinit node 0.
This is when I run into the above-mentioned problem: terraform refreshes all aws_instance resources because of instance_id = "${element(aws_instance.consul.*.id, count.index)}".
Is there a way to "force" tf to target a single aws_volume_attachment with only its corresponding aws_instance resource?
At the time of writing this sort of usage is not possible due to the fact that, as you've seen, an expression like aws_instance.consul.*.id creates a dependency on all the instances, before the element function is applied.
The -target option is not intended for routine use and is instead provided only for exceptional circumstances such as recovering carefully from an unintended change.
For this specific situation it may work better to use the ignore_changes lifecycle setting to prevent automatic replacement of the instances when user_data changes, like this:
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
lifecycle {
ignore_changes = ["user_data"]
}
}
With this set, Terraform will detect but ignore changes to the user_data attribute. You can then get the gradual replacement behavior you want by manually tainting the resources one at a time:
$ terraform taint aws_instance.consul[0]
On the next plan, Terraform will then see that this resource instance is tainted and produce a plan to replace it. This gives you direct control over when the resources are replaced, so you can ensure that e.g. the consul leave step gets a chance to run first, or do whatever other cleanup you need.
This workflow is recommended over -target because it makes the replacement step explicit. -target can be confusing in a collaborative environment because there is no evidence of its use, and thus no clear explanation of how the current state was reached. taint, on the other hand, explicitly marks your intention in the state where other team members can see it, and then replaces the resource via the normal plan/apply steps.
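As a side note, Terraform v0.15.2 and later offer the -replace planning option, which expresses the same intention as taint in a single command and shows up directly in the plan:
terraform apply -replace='aws_instance.consul[0]'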
