Two workspaces, tfe_outputs - terraform

I'm trying to use output values from a 2nd workspace in my current one.
For example:
data "tfe_outputs" "EKS" {
organization = "EKS_Deploy"
workspace = "EKS_Deploy"
}
Then I need the EKS cluster ID in one of my modules from that 2nd workspace (I already set up the outputs):
2nd workspace
output "eks_cluster_id" {
description = "EKS Cluster ID"
value = module.eks-ssp.eks_cluster_id
}
1st workspace
eks_cluster_id = data.tfe_outputs.EKS.eks_cluster_id
But running terraform apply in the first workspace throws this:
Error: Unsupported attribute
on main.tf line 22, in data "aws_eks_cluster" "cluster":
name = data.tfe_outputs.EKS.eks_cluster_id
This object has no argument, nested block, or exported attribute named "eks_cluster_id".
This is strange to me, since I can see the correct output value in my 2nd workspace, i.e. it shows a proper output. So I'm guessing I'm calling it wrong somehow. What could it be?
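A likely cause, assuming a recent hashicorp/tfe provider: the tfe_outputs data source does not expose each output as a top-level attribute, but publishes them all under a single values attribute, so the reference in the first workspace would look like this:
eks_cluster_id = data.tfe_outputs.EKS.values.eks_cluster_id
The provider also marks values as sensitive, so wrapping the expression in nonsensitive() may be needed where Terraform requires a non-sensitive value.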

Related

Why does declaring different modules download as many registry modules locally, plus Error: Duplicate required providers configuration?

I just tried to create, say, 2 sets of resources using the same registry module, which contains Oracle Cloud compartments (multi-level).
see Module link
I needed 2 subcompartments because set #2 is a child of set #1
Example:
Terraform v1.0.3
module "main_compartment" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
tenancy_ocid = var.tenancy_ocid
compartment_id = var.tenancy_ocid # define the parent compartment. Creation at tenancy root if omitted
compartment_name = "mycomp"
compartment_description = "main compartment at root level"
compartment_create = true
enable_delete = true
}
}
module "level_1_sub_compartments" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
for_each = local.compartments.l1_subcomp
compartment_id = module.iam_compartment_main_compartment.compartment_id # define the parent compartment. Here we make reference to the previous module
compartment_name = each.value.compartment_name
compartment_description = each.value.description
compartment_create = true # if false, a data source with a matching name is created instead
enable_delete = true # if false, on `terraform destroy`, compartment is deleted from the terraform state but not from oci
}
...}
module "level_2_sub_compartments" {
source = "oracle-terraform-modules/iam/oci//modules/iam-compartment"
for_each = local.compartments.l2_subcomp
compartment_id = data.oci_identity_compartments.compx.id # define the parent compartment. Here we make reference to one of the l1 subcomp created in the previous module
compartment_name = each.value.compartment_name
compartment_description = each.value.description
compartment_create = true # if false, a data source with a matching name is created instead
enable_delete = true # if false, on `terraform destroy`, compartment is deleted from the terraform state but not from oci
depends_on = [module.level_1_sub_compartments,]
....}
When I run terraform init I get as many folders as there are module blocks. Why would I call them this way?
Why not download a single module manually and then reference it three times as a local module?
Or would I be better off writing dynamic blocks in the main.tf using the regular compartment resource?
Initializing modules...
Downloading oracle-terraform-modules/iam/oci 2.0.2 for main_compartment...
- main_compartment in .terraform/modules/main_compartment/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_1_sub_compartments...
- level_1_sub_compartments in .terraform/modules/level_1_sub_compartments/modules/iam-compartment
Downloading oracle-terraform-modules/iam/oci 2.0.2 for level_2_sub_compartments...
- level_2_sub_compartments in .terraform/modules/level_2_sub_compartments/modules/iam-compartment
There are some problems with the configuration, described below.
... (for each module) => Error: Duplicate required providers configuration
A module may have only one required providers configuration. The required providers were previously configured at .terraform/modules/level_1_sub_compartments/modules/iam-compartment/main.tf:5,3-21.
What I wanted was to reuse one registry module through a URL source but have only one physical folder in my working directory.
I just expected it to work, but it seems local modules are the only working option for this goal. If there is anything I'm doing wrong please let me know, as the provider error is also coming from the fact that I have multiple directories containing the same module config. Thank you.
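For reference, the local-module approach mentioned above might look like the sketch below; the ./modules/iam-compartment path is an assumption about where the single downloaded copy would live. With a local path source, terraform init resolves every reference to the same directory instead of materializing one copy per module block.
module "level_1_sub_compartments" {
  source   = "./modules/iam-compartment" # one local copy shared by all module blocks
  for_each = local.compartments.l1_subcomp
  # ... same attributes as the registry version
}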

Terraform: loop over directory to create a single resource

I am trying to create a single GCP Workflows workflow using Terraform (Terraform Workflows documentation here). To create a workflow, I have defined the desired steps and order of execution using the Workflows syntax in YAML (it can also be JSON).
I have around 20 different jobs and each of these jobs is in a different .yaml file under the same folder, workflows/. I just want to loop over the workflows/ folder and have a single .yaml file per workflow to be able to create my resource. What would be the best way to achieve this using Terraform? I read about for_each but it was primarily used to loop over something to create multiple resources rather than a single resource.
workflows/job-1.yaml
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: currentDateTime
workflows/job-2.yaml
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${currentDateTime.body.dayOfTheWeek}
    result: wikiResult
main.tf
resource "google_workflows_workflow" "example" {
  name            = "workflow"
  region          = "us-central1"
  description     = "Magic"
  service_account = google_service_account.test_account.id
  source_contents = YAML FILE HERE
}
Terraform has a function fileset which allows a configuration to react to files available on disk alongside its definition. You can use this as a starting point for constructing a suitable expression for for_each:
locals {
  workflow_files = fileset("${path.module}/workflows", "*.yaml")
}
It looks like you'd also need to specify a separate name for each workflow, due to the design of the remote system, and so perhaps you'd decide to set the name to be the same as the filename but with the .yaml suffix removed, like this:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    substr(fn, 0, length(fn) - 5) => "${path.module}/workflows/${fn}"
  })
}
This uses a for expression to project the set of filenames into a map from workflow name (trimmed filename) to the path to the specific file. The result then would look something like this:
{
  job-1 = "./module/workflows/job-1.yaml"
  job-2 = "./module/workflows/job-2.yaml"
}
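Incidentally, the filename trimming can be written more self-documentingly with trimsuffix (available since Terraform 0.12.8, an assumption about the version in use), producing the same map:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    trimsuffix(fn, ".yaml") => "${path.module}/workflows/${fn}"
  })
}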
This now meets the requirements for for_each, so you can refer to it directly as the for_each expression:
resource "google_workflows_workflow" "example" {
for_each = local.workflows
name = each.key
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = file(each.value)
}
Your question didn't include any definition for how to populate the description argument, so I've left it set to hard-coded "Magic" as in your example. In order to populate that with something reasonable you'd need to have an additional data source for that, since what I wrote above is already making full use of the information we get just from scanning the content of the directory.
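If maintaining such a lookup is acceptable, one hedged option (the workflow_descriptions variable is illustrative, not from the question) is a map from workflow name to description, with a fallback default:
variable "workflow_descriptions" {
  type    = map(string)
  default = {}
}

resource "google_workflows_workflow" "example" {
  for_each        = local.workflows
  name            = each.key
  region          = "us-central1"
  description     = lookup(var.workflow_descriptions, each.key, "Magic")
  service_account = google_service_account.test_account.id
  source_contents = file(each.value)
}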
resource "google_workflows_workflow" "example" {
# count for total iterations
count = 20
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
# refer to file using index, index starts from 0
source_contents = file("${path.module}/workflows/job-${each.index}.yaml")
}

Conditionally create a single module in Terraform

I have been trying to conditionally use a module from the root module, so that for certain environments this module is not created. Many people claim that setting count on the module to either 0 or 1 using a conditional does the trick.
module "conditionally_used_module" {
source = "./modules/my_module"
count = (var.create == true) ? 1 : 0
}
However, this changes the type of conditionally_used_module: instead of an object (or map) we will have a list (or tuple) containing a single object. Is there another way to achieve this, that does not imply changing the type of the module?
To conditionally create a module you can use a variable, let's say it's called create_module, in the variables.tf file of the conditionally_used_module module.
Then for every resource inside the conditionally_used_module module, use count to conditionally create (or not) that specific resource.
The following example should work and provide you with the desired effect.
# Set a variable to know if the resources inside the module should be created
module "conditionally_used_module" {
  source        = "./modules/my_module"
  create_module = var.create
}

# Inside the conditionally_used_module module
# ( ./modules/my_module/main.tf ) most likely
# for every resource inside, use count to create or not each resource
resource "resource_type" "resource_name" {
  count = var.create_module ? 1 : 0
  # ... other resource properties
}
I used this in conjunction with workspaces to build a resource only for certain envs. The advantage for me is that I get a single terraform.tfvars file to control the whole environment structure for a project.
Inside main.tf:
locals {
  workspace = terraform.workspace
}
# ...
module "gcp-internal-lb" {
  source = "../../modules/gcp-internal-lb"
  # Deploy conditionally based on the deploy_internal_lb variable
  count  = var.deploy_internal_lb[local.workspace] == true ? 1 : 0
  # module attributes here
}
Then in variables.tf
variable "deploy_internal_lb" {
description = "Set to true if you want to create an internal LB"
type = map(string)
}
And in terraform.tfvars:
deploy_internal_lb = {
  # DEV
  myproject-dev  = false
  # QA
  myproject-qa   = false
  # PROD
  myproject-prod = true
}
I hope it helps.

terraform: data.aws_subnet, value of 'count' cannot be computed

terraform version 0.11.13
Error: Error refreshing state: 1 error(s) occurred:
data.aws_subnet.private_subnet: data.aws_subnet.private_subnet: value of 'count' cannot be computed
The VPC code that generated the error above:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
vpc_id = "${module.vpc.vpc_id}"
}
data "aws_subnet" "private_subnet" {
count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
#count = "${length(var.private-subnet-mapping)}"
id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
After changing the above code to use count = "${length(var.private-subnet-mapping)}", I successfully provisioned the VPC. But the output of vpc_private_subnets_ids is empty.
vpc_private_subnets_ids = []
The code that provisioned the VPC but produced an empty vpc_private_subnets_ids list:
resources.tf
data "aws_subnet_ids" "private_subnet_ids" {
vpc_id = "${module.vpc.vpc_id}"
}
data "aws_subnet" "private_subnet" {
#count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}"
count = "${length(var.private-subnet-mapping)}"
id = "${data.aws_subnet_ids.private_subnet_ids.ids[count.index]}"
}
outputs.tf
output "vpc_private_subnets_ids" {
value = ["${data.aws_subnet.private_subnet.*.id}"]
}
The output of vpc_private_subnets_ids:
vpc_private_subnets_ids = []
I need the values of vpc_private_subnets_ids. After successfully provisioning the VPC with the line count = "${length(var.private-subnet-mapping)}", I changed the code back to count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}". After terraform apply, I got the values of the list vpc_private_subnets_ids without the above error.
vpc_private_subnets_ids = [
  subnet-03199b39c60111111,
  subnet-068a3a3e76de66666,
  subnet-04b86aa9dbf333333,
  subnet-02e1d8baa8c222222
  ......
]
I cannot use count = "${length(data.aws_subnet_ids.private_subnet_ids.ids)}" when I provision the VPC, but I can use it after the VPC is provisioned. Any clue?
The problem here seems to be that your VPC isn't created yet and so the data "aws_subnet_ids" "private_subnet_ids" data source read must wait until the apply step, which in turn means that the number of subnets isn't known, and thus the number of data "aws_subnet" "private_subnet" instances isn't predictable and Terraform returns this error.
If this configuration is also the one responsible for creating those subnets then the better design would be to refer to the subnet objects directly. If your module.vpc is the module creating the subnets, I would suggest exporting the subnet ids as an output from that module. For example:
output "subnet_ids" {
value = "${aws_subnet.example.*.id}"
}
Your calling module can then just get those ids directly from module.vpc.subnet_ids, without the need for a redundant extra API call to look them up:
output "vpc_private_subnets_ids" {
value = ["${module.vpc.subnet_ids}"]
}
Aside from the error about count, the configuration you showed also has a race condition, because the data "aws_subnet_ids" "private_subnet_ids" block depends only on the VPC itself, and not on the individual subnets, and so Terraform can potentially read that data source before the subnets have been created. Exporting the subnet ids through a module output means that any reference to module.vpc.subnet_ids indirectly depends on all of the subnets, and so those downstream actions will wait until all of the subnets have been created.
As a general rule, a particular Terraform configuration should either be managing an object or reading that object via a data source, and not both together. If you do both together then it may sometimes work but it's easy to inadvertently introduce race conditions like this, where Terraform can't tell that the data resource is attempting to consume the result of another resource block that's participating in the same plan.

Concatenate Public IP output with Server Name

I have written a Terraform script to create a few Azure Virtual Machines.
The number of VMs created is based upon a variable called type in my .tfvars file:
type = [ "Master-1", "Master-2", "Master-3", "Slave-1", "Slave-2", "Slave-3" ]
My variables.tf file contains the following local:
locals {
  count_of_types = "${length(var.type)}"
}
And my resources.tf file contains the code required to actual create the relevant number of VMs from this information:
resource "azurerm_virtual_machine" "vm" {
count = "${local.count_of_types}"
name = "${replace(local.prefix_specific,"##TYPE##",var.type[count.index])}-VM"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.main.*.id[count.index]}"]
vm_size = "Standard_B2ms"
tags = "${local.tags}"
Finally, in my output.tf file, I output the IP address of each server:
output "public_ip_address" {
value = ["${azurerm_public_ip.main.*.ip_address}"]
}
I am creating a Kubernetes cluster with 1x Master and 1x Slave VM. For this purpose, the script works fine - the first IP output is the Master and the second IP output is the Slave.
However, when I move to 8+ VMs in total, I'd like to know which IP refers to which VM.
Is there a way of amending my output to include the type local, or just the server's hostname alongside the Public IP?
E.g. 54.10.31.100 // Master-1.
Take a look at formatlist (one of the string manipulation functions), which can be used to iterate over the instance attributes and list tags and other attributes of interest.
output "ip-address-hostname" {
value = "${
formatlist(
"%s:%s",
azurerm_public_ip.resource_name.*.fqdn,
azurerm_public_ip.resource_name.*.ip_address
)
}"
}
Note this is just draft pseudo-code. You may have to tweak it and create additional data sources in your TF file to enumerate the attributes you need.
More reading is available at https://www.terraform.io/docs/configuration/functions/formatlist.html
Raunak Jhawar's answer pointed me in the right direction, and therefore got the green tick.
For reference, here's the exact code I used in the end:
output "public_ip_address" {
value = "${formatlist("%s: %s", azurerm_virtual_machine.vm.*.name, azurerm_public_ip.main.*.ip_address)}"
}
This resulted in output pairing each VM name with its public IP, e.g. Master-1: 54.10.31.100.
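As an aside, if a map keyed by VM name is preferable to a formatted list, zipmap should give that directly (a sketch assuming the same resource addresses as above):
output "public_ip_by_name" {
  value = "${zipmap(azurerm_virtual_machine.vm.*.name, azurerm_public_ip.main.*.ip_address)}"
}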
