Why does declaring multiple module blocks download as many copies of the registry module locally, plus "Error: Duplicate required providers configuration"? - terraform

I just tried to create, say, 2 sets of resources using the same registry module, which manages Oracle Cloud compartments (multi-level).
see Module link
I needed 2 sets of sub-compartments because set #2 is a child of set #1.
Example (Terraform v1.0.3):
module "main_compartment" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
tenancy_ocid = var.tenancy_ocid
compartment_id = var.tenancy_ocid # define the parent compartment. Creation at tenancy root if omitted
compartment_name = "mycomp"
compartment_description = "main compartment at root level"
compartment_create = true
enable_delete = true
}
}
module "level_1_sub_compartments" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
for_each = local.compartments.l1_subcomp
compartment_id = module.iam_compartment_main_compartment.compartment_id # define the parent compartment. Here we make reference to the previous module
compartment_name = each.value.compartment_name
compartment_description = each.value.description
compartment_create = true # if false, a data source with a matching name is created instead
enable_delete = true # if false, on `terraform destroy`, compartment is deleted from the terraform state but not from oci
}
...}
module "level_2_sub_compartments" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
for_each = local.compartments.l2_subcomp
compartment_id = data.oci_identity_compartments.compx.id # define the parent compartment. Here we make reference to one of the l1 subcomp created in the previous module
compartment_name = each.value.compartment_name
compartment_description = each.value.description
compartment_create = true # if false, a data source with a matching name is created instead
enable_delete = true # if false, on `terraform destroy`, compartment is deleted from the terraform state but not from oci
depends_on = [module.level_1_sub_compartments,]
....}
When I run terraform init I get as many downloaded module folders as there are module blocks. Why would I call them this way?
Why not download the module once manually and then reference it 3 times as a local module?
Or would I be better off writing dynamic blocks in main.tf using the regular compartment resource?
Initializing modules...
Downloading oracle-terraform-modules/iam/oci 2.0.2 for main_compartment...
- main_compartment in .terraform/modules/main_compartment/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_1_sub_compartments...
- level_1_sub_compartments in .terraform/modules/level_1_sub_compartments/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_2_sub_compartments...
- level_2_sub_compartments in .terraform/modules/level_2_sub_compartments/modules/iam-compartment
There are some problems with the configuration, described below.
... (for each module) => Error: Duplicate required providers configuration
A module may have only one required providers configuration. The required providers were previously configured at .terraform/modules/level_1_sub_compartments/modules/iam-compartment/main.tf:5,3-21.
What I wanted was to reuse one registry module through its URL source but have only one physical folder in my working directory.
I just expected it to work, but it seems local modules are the only working option for this goal. If there is anything I'm doing wrong please let me know, as the provider error also seems to come from having multiple directories containing the same module configuration. Thank you.
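For reference, a minimal sketch of the local-module alternative mentioned above, assuming the registry module has been vendored once into ./modules/iam-compartment (the local path is illustrative):
module "main_compartment" {
  source                  = "./modules/iam-compartment" # single local copy, referenced in place
  tenancy_ocid            = var.tenancy_ocid
  compartment_id          = var.tenancy_ocid
  compartment_name        = "mycomp"
  compartment_description = "main compartment at root level"
  compartment_create      = true
  enable_delete           = true
}
module "level_1_sub_compartments" {
  source                  = "./modules/iam-compartment" # same folder, nothing re-downloaded per block
  for_each                = local.compartments.l1_subcomp
  compartment_id          = module.main_compartment.compartment_id
  compartment_name        = each.value.compartment_name
  compartment_description = each.value.description
  compartment_create      = true
  enable_delete           = true
}
Local path sources are referenced in place rather than copied into .terraform/modules, so only the one vendored folder exists on disk.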

Related

Two workspaces, tfe_output

I'm trying to use output values from a 2nd workspace in my current one.
For example:
data "tfe_outputs" "EKS" {
organization = "EKS_Deploy"
workspace = "EKS_Deploy"
}
Then I need the EKS cluster ID from that 2nd workspace in one of my modules (I already set up the outputs):
2nd workspace
output "eks_cluster_id" {
description = "EKS Cluster ID"
value = module.eks-ssp.eks_cluster_id
}
1st workspace
eks_cluster_id = data.tfe_outputs.EKS.eks_cluster_id
But, running a terraform apply in the second workspace throws this:
Error: Unsupported attribute
on main.tf line 22, in data "aws_eks_cluster" "cluster":
name = data.tfe_outputs.EKS.eks_cluster_id
This object has no argument, nested block, or exported attribute named "eks_cluster_id".
This is strange to me, since I can see the correct output value in my 2nd workspace, i.e. it shows a proper output. So I'm guessing I'm calling it wrong somehow. What could it be?
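One thing worth checking: the tfe_outputs data source exposes the remote workspace's outputs under a values attribute rather than at the top level, and it marks them as sensitive, so a reference along these lines may be what is needed (sketch based on the names in the question):
data "aws_eks_cluster" "cluster" {
  # outputs live under .values; nonsensitive() unwraps the sensitivity marking
  # when the specific value is known not to be secret
  name = nonsensitive(data.tfe_outputs.EKS.values.eks_cluster_id)
}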

condition on terraform module

Trying to run a module conditionally.
Expectation: run the module only when env is not equal to prd.
module "database_diagnostic_eventhub_setting" {
count = var.env != "prd" ? 1 : 0 // run block if condition is satisfied
source = "git::https://git_url//modules/...."
target_ids = [
"${data.terraform_remote_state.database.outputs.server_id}"
]
environment = "${var.environment}-database-eventhub"
destination = data.azurerm_eventhub_namespace_authorization_rule.event_hub.id
eventhub_name = var.eventhub_name
logs = [
"PostgreSQLLogs",
"QueryStoreWaitStatistics"
]
}
Error:
The name "count" is reserved for use in a future version of Terraform.
You need to use Terraform v0.13 or later in order to use count or for_each inside a module block.
If you can't upgrade from Terraform v0.12 then the old approach, prior to support for module repetition, was to add a variable to your module to specify the object count:
variable "instance_count" {
type = number
}
...and then inside your module add count to each of the resources:
resource "example" "example" {
count = var.instance_count
}
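The calling module block from the question would then pass the condition through that variable instead of setting count, along these lines (sketch; most of the question's other arguments omitted):
module "database_diagnostic_eventhub_setting" {
  source         = "git::https://git_url//modules/...."
  # 0 or 1 copies of each resource, decided inside the module
  instance_count = var.env != "prd" ? 1 : 0
  environment    = "${var.environment}-database-eventhub"
  eventhub_name  = var.eventhub_name
}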
However, if you are able to upgrade to Terraform v0.13 now then I would strongly suggest doing so rather than using the above workaround, because upgrading to use module-level count later, with objects already created, is quite a fiddly process involving running terraform state mv for each of the resources in that module.

Terraform: loop over directory to create a single resource

I am trying to create a single GCP Workflows workflow using Terraform (Terraform Workflows documentation here). To create a workflow, I have defined the desired steps and order of execution using the Workflows syntax in YAML (it can also be JSON).
I have around 20 different jobs, and each of these jobs is in a different .yaml file under the same folder, workflows/. I just want to loop over the workflows/ folder and have a single .yaml file to be able to create my resource. What would be the best way to achieve this using Terraform? I read about for_each, but it is primarily used to loop over something to create multiple resources rather than a single resource.
workflows/job-1.yaml
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: currentDateTime
workflows/job-2.yaml
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${currentDateTime.body.dayOfTheWeek}
    result: wikiResult
main.tf
resource "google_workflows_workflow" "example" {
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = YAML FILE HERE
Terraform has a function fileset which allows a configuration to react to files available on disk alongside its definition. You can use this as a starting point for constructing a suitable expression for for_each:
locals {
  workflow_files = fileset("${path.module}/workflows", "*.yaml")
}
It looks like you'd also need to specify a separate name for each workflow, due to the design of the remote system, and so perhaps you'd decide to set the name to be the same as the filename but with the .yaml suffix removed, like this:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    substr(fn, 0, length(fn) - 5) => "${path.module}/workflows/${fn}"
  })
}
This uses a for expression to project the set of filenames into a map from workflow name (trimmed filename) to the path to the specific file. The result then would look something like this:
{
  job-1 = "./module/workflows/job-1.yaml"
  job-2 = "./module/workflows/job-2.yaml"
}
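As an aside, trimsuffix expresses the same trimming a little more directly than the substr/length arithmetic; a sketch of the equivalent map:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    trimsuffix(fn, ".yaml") => "${path.module}/workflows/${fn}"
  })
}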
That local.workflows map meets the requirements for for_each, so you can refer to it directly as the for_each expression:
resource "google_workflows_workflow" "example" {
for_each = local.workflows
name = each.key
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = file(each.value)
}
Your question didn't include any definition for how to populate the description argument, so I've left it set to hard-coded "Magic" as in your example. In order to populate that with something reasonable you'd need to have an additional data source for that, since what I wrote above is already making full use of the information we get just from scanning the content of the directory.
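For example, one possible shape for that would be a small lookup map decoded from a separate file (sketch; the workflow-descriptions.yaml file and its keys are hypothetical, and it lives outside workflows/ so fileset does not pick it up as a workflow):
locals {
  # hypothetical file mapping workflow names to human-readable descriptions
  workflow_descriptions = yamldecode(file("${path.module}/workflow-descriptions.yaml"))
}

resource "google_workflows_workflow" "example" {
  for_each        = local.workflows
  name            = each.key
  region          = "us-central1"
  description     = lookup(local.workflow_descriptions, each.key, "Managed by Terraform")
  service_account = google_service_account.test_account.id
  source_contents = file(each.value)
}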
resource "google_workflows_workflow" "example" {
# count for total iterations
count = 20
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
# refer to file using index, index starts from 0
source_contents = file("${path.module}/workflows/job-${each.index}.yaml")
}

Conditionally create a single module in Terraform

I have been trying to conditionally use a module from the root module, so that for certain environments this module is not created. Many people claim that setting count in the module block to either 0 or 1 with a conditional does the trick.
module "conditionally_used_module" {
source = "./modules/my_module"
count = (var.create == true) ? 1 : 0
}
However, this changes the type of conditionally_used_module: instead of an object (or map) we will have a list (or tuple) containing a single object. Is there another way to achieve this, that does not imply changing the type of the module?
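One small mitigation for that type change, assuming Terraform v0.15 or later for the one() function: flatten the zero-or-one-element result back to a single object, or null when it was not created (sketch):
locals {
  # an object when var.create is true, null otherwise
  conditionally_used = one(module.conditionally_used_module)
}
References can then use local.conditionally_used instead of indexing into the list.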
To conditionally create a module you can use a variable, let's say it is called create_module, in the variables.tf file of the module conditionally_used_module.
Then for every resource inside the conditionally_used_module module you use count to conditionally create or skip that specific resource.
The following example should work and provide the desired effect.
# Set a variable to know if the resources inside the module should be created
module "conditionally_used_module" {
  source        = "./modules/my_module"
  create_module = var.create
}

# Inside the conditionally_used_module module
# ( ./modules/my_module/main.tf ) most likely,
# for every resource inside use count to create or not create each resource
resource "resource_type" "resource_name" {
  count = var.create_module ? 1 : 0
  # ... other resource properties
}
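One wrinkle with this pattern: any output of the module that refers to the counted resource must also tolerate the zero-count case, for example (sketch; the output and attribute names are illustrative):
output "resource_id" {
  # null when the resource was not created
  value = var.create_module ? resource_type.resource_name[0].id : null
}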
I used this in conjunction with workspaces to build a resource only for certain envs. The advantage for me is that I get a single terraform.tfvars file to control the structure of all the environments for a project.
Inside main.tf:
locals {
  workspace = terraform.workspace
}
#....
module "gcp-internal-lb" {
  source = "../../modules/gcp-internal-lb"
  # Deploy conditionally based on the deploy_internal_lb variable
  count  = var.deploy_internal_lb[local.workspace] == true ? 1 : 0
  # module attributes here
}
Then in variables.tf
variable "deploy_internal_lb" {
description = "Set to true if you want to create an internal LB"
type = map(string)
}
And in terraform.tfvars:
deploy_internal_lb = {
  # DEV
  myproject-dev  = false
  # QA
  myproject-qa   = false
  # PROD
  myproject-prod = true
}
I hope it helps.

Terraform target aws_volume_attachment with only its corresponding aws_instance resource from a list

I am not able to target a single aws_volume_attachment with its corresponding aws_instance via -target.
The problem is that the aws_instance is taken from a list by using count.index, which forces terraform to refresh all aws_instance resources from that list.
In my concrete case I am trying to manage a consul cluster with terraform.
The goal is to be able to reinit a single aws_instance resource via the -target flag, so I can upgrade/change the whole cluster node by node without downtime.
I have the following tf code:
### IP suffixes
variable "subnet_cidr" {
  default = "10.10.0.0/16"
}
// I want nodes with addresses 10.10.1.100, 10.10.1.101, 10.10.1.102
variable "consul_private_ips_suffix" {
  default = {
    "0" = "100"
    "1" = "101"
    "2" = "102"
  }
}
###########
# EBS
#
// Get existing data EBS via Name tag
data "aws_ebs_volume" "consul-data" {
  count = "${length(keys(var.consul_private_ips_suffix))}"
  filter {
    name   = "volume-type"
    values = ["gp2"]
  }
  filter {
    name   = "tag:Name"
    values = ["${var.platform_type}.${var.platform_id}.consul.data.${count.index}"]
  }
}
#########
# EC2
#
resource "aws_instance" "consul" {
  count = "${length(keys(var.consul_private_ips_suffix))}"
  ...
  private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
}

resource "aws_volume_attachment" "consul-data" {
  count       = "${length(keys(var.consul_private_ips_suffix))}"
  device_name = "/dev/sdh"
  volume_id   = "${element(data.aws_ebs_volume.consul-data.*.id, count.index)}"
  instance_id = "${element(aws_instance.consul.*.id, count.index)}"
}
This works perfectly fine for initializing the cluster.
Now I make a change to the user_data init script of the consul nodes and want to roll it out node by node.
I run terraform plan -target=aws_volume_attachment.consul-data[0] to reinit node 0.
This is when I run into the above-mentioned problem: terraform refreshes all aws_instance resources because of instance_id = "${element(aws_instance.consul.*.id, count.index)}".
Is there a way to "force" tf to target a single aws_volume_attachment with only its corresponding aws_instance resource?
At the time of writing this sort of usage is not possible due to the fact that, as you've seen, an expression like aws_instance.consul.*.id creates a dependency on all the instances, before the element function is applied.
The -target option is not intended for routine use and is instead provided only for exceptional circumstances such as recovering carefully from an unintended change.
For this specific situation it may work better to use the ignore_changes lifecycle setting to prevent automatic replacement of the instances when user_data changes, like this:
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
lifecycle {
ignore_changes = ["user_data"]
}
}
With this set, Terraform will detect but ignore changes to the user_data attribute. You can then get the gradual replacement behavior you want by manually tainting the resources one at a time:
$ terraform taint aws_instance.consul[0]
On the next plan, Terraform will then see that this resource instance is tainted and produce a plan to replace it. This gives you direct control over when the resources are replaced, so you can therefore ensure that e.g. the consul leave step gets a chance to run first, or whatever other cleanup you need to do.
This workflow is recommended over -target because it makes the replacement step explicit. -target can be confusing in a collaborative environment because there is no evidence of its use, and thus no clear explanation of how the current state was reached. taint, on the other hand, explicitly marks your intention in the state where other team members can see it, and then replaces the resource via the normal plan/apply steps.
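On more recent Terraform versions (v0.15.2 and later), the -replace planning option expresses the same intention as taint in a single plan/apply step, for example:
$ terraform apply -replace="aws_instance.consul[0]"
Like taint, it keeps the replacement explicit and visible in the plan output for other team members to review.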
