Terraform expression for_each invalid index - terraform

Hi guys, and happy new year to all.
I'm having trouble gathering the token generated by the resource block below, which iterates with a for_each loop.
My variable map is:
variable "wvd_hostpool" {
description = "Please provide the required information to create a WVD hostpool."
type = map(any)
default = {
hp-azcan-weu-wvd-01 = {
"name" = "hp-azcan-weu-wvd-01"
"type" = "Personal"
"load_balancer_type" = "DepthFirst"
"personal_desktop_assignment_type" = "Automatic"
"maximum_sessions_allowed" = 16
"expiration_date" = "2022-02-10T18:46:43Z"
"friendly_name" = "Canary"
"description" = "Dedicated to canary deployments."
"location" = "westeurope"
"vm_count" = 1
"vm_size" = "Standard_F4s_v2"
"vm_prefix" = "AZWEUHP01TST"
"validate_environment" = "true"
},
hp-azprd-weu-wvd-01 = {
"name" = "hp-azprd-weu-wvd-01"
"type" = "Pooled"
"load_balancer_type" = "DepthFirst"
"personal_desktop_assignment_type" = "Automatic"
"maximum_sessions_allowed" = 16
"expiration_date" = "2022-02-10T18:46:43Z"
"friendly_name" = "desktop"
"description" = "Dedicated to medium workload type (Microsoft Word, CLIs, ...)."
"location" = "westeurope"
"vm_count" = 1
"vm_size" = "Standard_F4s_v2"
"vm_prefix" = "AZWEUHP01WKT"
"validate_environment" = "false"
},
}
}
The resource block:
resource "azurerm_virtual_desktop_host_pool" "wvd_hostpool" {
for_each = var.wvd_hostpool
name = each.value.name
location = each.value.location
custom_rdp_properties = "audiocapturemode:i:1;audiomode:i:0;"
resource_group_name = data.azurerm_resource_group.avd_rg.name
validate_environment = each.value.validate_environment
type = each.value.type
load_balancer_type = each.value.load_balancer_type
friendly_name = each.value.friendly_name
description = each.value.description
personal_desktop_assignment_type = each.value.personal_desktop_assignment_type
maximum_sessions_allowed = each.value.maximum_sessions_allowed
registration_info {
expiration_date = each.value.expiration_date
}
}
I'd like to get the value of the token generated under registration_info so I can save it to a key vault for reuse, or export it as an output, but as you can see I'm getting an "Invalid index" error. I've spent two days on this without success. Could you help me please?
resource "azurerm_key_vault_secret" "wvd_registration_info" {
for_each = var.wvd_hostpool
name = each.value.name
value = azurerm_virtual_desktop_host_pool.wvd_hostpool[each.value.name].registration_info.0.token
key_vault_id = azurerm_key_vault.wvd_key_vault.id
depends_on = [azurerm_role_assignment.wvd_sp]
}
I always get the same result:
Error: Invalid index
│
│ on security.tf line 115, in resource "azurerm_key_vault_secret" "wvd_registration_info":
│ 115: value = azurerm_virtual_desktop_host_pool.wvd_hostpool[each.value.name].registration_info[0].token
│ ├────────────────
│ │ azurerm_virtual_desktop_host_pool.wvd_hostpool is object with 3 attributes
│ │ each.value.name is "hp-azprd-weu-wvd-02"
│
│ The given key does not identify an element in this collection value: the collection has no elements

If you use a map as the for_each argument, Terraform uses its keys as identifiers for the resource instances that will be created. This means that if you want to reference another resource created with for_each, you have to use the keys from the map, i.e. each.key in your example:
resource "azurerm_key_vault_secret" "wvd_registration_info" {
for_each = var.wvd_hostpool
name = each.value.name
value = azurerm_virtual_desktop_host_pool.wvd_hostpool[each.key].registration_info[0].token
key_vault_id = azurerm_key_vault.wvd_key_vault.id
depends_on = [azurerm_role_assignment.wvd_sp]
}
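If you also want the tokens available as outputs, a minimal sketch (assuming the same host pool resource as above, keyed by the same map keys; the output name is made up) could look like this:
output "wvd_registration_tokens" {
  description = "Registration token per host pool, keyed by the host pool map key."
  value = {
    for key, pool in azurerm_virtual_desktop_host_pool.wvd_hostpool :
    key => pool.registration_info[0].token
  }
  sensitive = true
}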

Related

Use count instead of for_each for terraform resource

When deploying resources, the Terraform template I was given uses for_each. This poses a problem, as it gives:
Error: Invalid for_each argument
│
│ on /home/baiyuc/workspaces/billow/src/GoAmzn-LambdaStackTools/configurations/terraform/sync.tf line 410, in resource "aws_route53_record" "subdomain_cert_validation":
│ 410: for_each = {
│ 411: for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
│ 412: name = dvo.resource_record_name
│ 413: record = dvo.resource_record_value
│ 414: type = dvo.resource_record_type
│ 415: }
│ 416: }
│ ├────────────────
│ │ aws_acm_certificate.cert.domain_validation_options is a set of object, known only after apply
The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.
When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
This error occurs when running terraform import.
I found a potential solution that suggests using count in this type of scenario, but it didn't go into detail. Can anyone give details on how to do so?
The code of interest is for resource "aws_route53_record" "subdomain_cert_validation":
data "aws_route53_zone" "root_domain" {
name = "${var.root_domain}."
private_zone = false
}
resource "aws_acm_certificate" "cert" {
depends_on = [aws_route53_record.sub-zone]
domain_name = var.domain
validation_method = "DNS"
}
resource "aws_route53_zone" "core-domain" {
name = var.domain
count = var.root_domain == var.domain ? 0 : 1 # If the two are the same, do not create this resource.
tags = {
Environment = var.stack_tag
}
}
resource "aws_route53_record" "sub-zone" {
depends_on = [aws_route53_zone.core-domain]
zone_id = data.aws_route53_zone.root_domain.zone_id
name = var.domain
type = "NS"
ttl = "30"
count = var.root_domain == var.domain ? 0 : 1 # If the two are the same, do not create this resource.
records = var.root_domain == var.domain ? [] : [
aws_route53_zone.core-domain[0].name_servers[0],
aws_route53_zone.core-domain[0].name_servers[1],
aws_route53_zone.core-domain[0].name_servers[2],
aws_route53_zone.core-domain[0].name_servers[3],
]
}
resource "aws_route53_record" "subdomain_cert_validation" {
depends_on = [aws_acm_certificate.cert]
for_each = {
for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
type = each.value.type
ttl = 600
zone_id = var.root_domain == var.domain ? data.aws_route53_zone.root_domain.zone_id : aws_route53_zone.core-domain[0].zone_id
}
resource "aws_acm_certificate_validation" "core" {
certificate_arn = aws_acm_certificate.cert.arn
validation_record_fqdns = [for record in aws_route53_record.subdomain_cert_validation : record.fqdn]
}
This issue is pretty common when using iteration, and it's caused by trying to use keys that are only generated dynamically at apply time. You need to make sure that your keys are statically defined so they're known when Terraform plans; the values of the map can then be dynamic. Some good references for this issue and its solutions are here:
for_each example and solution
for_each example and solution
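For the count-based approach the question asks about, one sketch (not from the original answer, and assuming the certificate covers a single domain with no subject alternative names, so domain_validation_options is known to have exactly one element) is:
resource "aws_route53_record" "subdomain_cert_validation" {
  count           = 1 # safe only because a single-domain certificate with no SANs yields exactly one validation option
  allow_overwrite = true
  name            = tolist(aws_acm_certificate.cert.domain_validation_options)[count.index].resource_record_name
  records         = [tolist(aws_acm_certificate.cert.domain_validation_options)[count.index].resource_record_value]
  type            = tolist(aws_acm_certificate.cert.domain_validation_options)[count.index].resource_record_type
  ttl             = 600
  zone_id         = var.root_domain == var.domain ? data.aws_route53_zone.root_domain.zone_id : aws_route53_zone.core-domain[0].zone_id
}
This keeps the number of instances statically known at plan time, while the record values themselves can remain unknown until apply.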

Terraform for_each The given key does not identify an element in this collection value

I am working on a project where I am building a "stateful" QA environment that incorporates different Windows versions, each corresponding to different SQL Server versions. For example, for Server 2016 I will have 3 servers, each with a different version of SQL. The same applies to 2019 and 2022. I've reached the point where it reads the values correctly, but then it gives me this error:
│ Error: Invalid index
│
│ on main.tf line 60, in resource "vsphere_virtual_machine" "vm":
│ 60: guest_id = data.vsphere_virtual_machine.template[each.value.template].guest_id
│ ├────────────────
│ │ data.vsphere_virtual_machine.template is object with 2 attributes
│ │ each.value.template is "Templates/QA_2016"
│
│ The given key does not identify an element in this collection value.
Here is the code:
provider "vsphere" {
vim_keep_alive = 30
user = var.vsphere_user
password = var.vsphere_password
vsphere_server = var.vsphere_server
# If you have a self-signed cert
allow_unverified_ssl = true
}
#### data block see local vars
data "vsphere_datacenter" "dc" {
name = local.dc
}
data "vsphere_compute_cluster" "compute_cluster" {
name = local.cluster
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_datastore" "datastore" {
name = local.datastore
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_network" "network" {
for_each = var.vms
name = each.value.network
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_virtual_machine" "template" {
for_each = var.vms
name = each.value.template
datacenter_id = data.vsphere_datacenter.dc.id
}
##### Resource Block
resource "vsphere_virtual_machine" "vm" {
for_each = var.vms
datastore_id = data.vsphere_datastore.datastore.id
guest_id = data.vsphere_virtual_machine.template[each.value.template].guest_id
resource_pool_id = data.vsphere_compute_cluster.compute_cluster.id
# host_system_id = "${data.vsphere_datacenter.dc.id}"
firmware = data.vsphere_virtual_machine.template[each.value.template].firmware
num_cpus = local.cpu_count
memory = local.memory
scsi_type = data.vsphere_virtual_machine.template[each.value.template].scsi_type
wait_for_guest_net_timeout = -1
name = each.value.name
# ... (remaining arguments omitted)
}
Here is the vars file:
locals {
dc = "DC"
cluster = "The Cluster"
datastore = "Storage_thing"
cpu_count = "4"
memory = "16384"
disk_label = "disk0"
disk_size = "250"
disk_thin = "true"
domain = "my.domain"
dns = ["xx.xx.xx.xx", "xx.xx.xx.xx"]
password = "NotMyPass"
auto_logon = true
auto_logon_count = 1
firmware = "efi"
}
#### Name your vm's here - Terraform will provision what is provided here - Add or comment out VM's as needed
variable "vms" {
type = map(any)
default = {
wqawin16sql14 = {
name = "wqawin16sql14"
network = "vm_network"
template = "Templates/QA_2016"
},
wqawin16sql17 = {
name = "wqawin16sql17"
network = "vm_network"
template = "Templates/QA_2016"
},
}
}
Terraform is reporting this error because your data "vsphere_virtual_machine" "template" block has for_each = var.vms and so the instance keys of that resource are the keys from your map value: "wqawin16sql14" and "wqawin16sql17".
That fails because you're trying to look up an instance using the value of the template attribute, which is "Templates/QA_2016" and therefore doesn't match any of the instance keys.
It seems like your goal here is to find one virtual machine for each distinct value of the template attributes in your input variable, and then use the guest_id of each of those VMs to populate the guest_id of the corresponding instance of resource "vsphere_virtual_machine" "vm".
If so, you'll need to make the for_each for your data resource be a collection where each element represents a template, rather than having each element represent a virtual machine to manage. One way to achieve that is to collect the set of all distinct template values across the elements of var.vms, like this:
data "vsphere_virtual_machine" "template" {
for_each = toset([for vm in var.vms : vm.template])
name = each.value
datacenter_id = data.vsphere_datacenter.dc.id
}
Notice that name is now set to just each.value because for_each is now just a set of template names, like toset(["Templates/QA_2016"]), so the values of this collection are just strings rather than objects with attributes.
With this change you should then have only one instance of this data resource, whose address will be data.vsphere_virtual_machine.template["Templates/QA_2016"]. This instance key now does match the template attribute in both of your VM objects, and so the dynamic lookup of the guest ID based on the template attribute of each VM object should succeed.
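With that in place, the lookup in the VM resource from the question works unchanged, since each.value.template now matches an instance key (a minimal sketch):
resource "vsphere_virtual_machine" "vm" {
  for_each = var.vms
  # each.value.template is "Templates/QA_2016", which is now a valid instance key of the data resource
  guest_id = data.vsphere_virtual_machine.template[each.value.template].guest_id
  # ... remaining arguments as in the question
}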

In terraform using Azure, is it possible to create a key vault secret and reference that secret in the same file on the same run?

I am attempting to generate a random username and password, store them in key vault and then immediately (after they are stored) retrieve them and use them as variables in the sql server creation.
Considering this code:
resource "random_string" "username" {
length = 24
special = true
override_special = "%#!"
}
resource "random_password" "password" {
length = 24
special = true
override_special = "%#!"
}
# # Create KeyVault Secret
resource "azurerm_key_vault_secret" "sql-1-username" {
name = "sql-server-1-username"
value = random_string.username.result
key_vault_id = azurerm_key_vault.key_vault.id
tags = merge(local.common_tags, tomap({"type" = "key-vault-secret-username"}), tomap({"resource" = azurerm_mssql_server.sql-server_1.name}))
depends_on = [azurerm_key_vault.key_vault]
}
resource "azurerm_key_vault_secret" "sql-1-password" {
name = "sql-server-1-password"
value = random_password.password.result
key_vault_id = azurerm_key_vault.key_vault.id
tags = merge(local.common_tags, tomap({"type" = "key-vault-secret-password"}), tomap({"resource" = azurerm_mssql_server.sql-server_1.name}))
depends_on = [azurerm_key_vault.key_vault]
}
data "azurerm_key_vault_secret" "sql-server-1-username" {
name = "sql-server-1-username"
key_vault_id = azurerm_key_vault.key_vault.id
}
data "azurerm_key_vault_secret" "sql-server-1-password" {
name = "sql-server-1-password"
key_vault_id = azurerm_key_vault.key_vault.id
}
resource "azurerm_mssql_server" "sql-server_1" {
name = "${local.resource-name-prefix}-sql-server-1"
resource_group_name = local.resource-group-name
location = var.resource-location
version = "12.0"
administrator_login = data.azurerm_key_vault_secret.sql-server-1-username.value
administrator_login_password = data.azurerm_key_vault_secret.sql-server-1-password.value
tags = merge(local.common_tags, tomap({"type" = "mssql-server"}))
}
When running this via terraform I get:
│ Error: KeyVault Secret "sql-server-1-username" <<<KEY VAULT>>> does not exist
│
│ with data.azurerm_key_vault_secret.sql-server-1-username,
│ on sql-server.tf line 31, in data "azurerm_key_vault_secret" "sql-server-1-username":
│ 31: data "azurerm_key_vault_secret" "sql-server-1-username" {
│
╵
╷
│ Error: KeyVault Secret "sql-server-1-password" <<<KEY VAULT>>> does not exist
│
│ with data.azurerm_key_vault_secret.sql-server-1-password,
│ on sql-server.tf line 36, in data "azurerm_key_vault_secret" "sql-server-1-password":
│ 36: data "azurerm_key_vault_secret" "sql-server-1-password" {
│
and I understand why: at plan time Terraform tries to read that secret, but it hasn't been created yet.
My question is, is there a way to define a value, store it as a key vault secret and then upon completion of that azurerm_key_vault_secret resource being complete, retrieve that value?
As a workaround, I've put lifecycle blocks with ignore_changes for the username and password values on both the key vault secret resources and the SQL server. That should give me the same values in Key Vault being used as the username/password for the SQL server, but it feels like the wrong solution.
What would be the better way?
When you use data.azurerm_key_vault_secret.* in azurerm_mssql_server, that dependency isn't taken into account: the SQL server gets created before the Key Vault secrets, because it doesn't depend on any resource created in this file. That's why you get the error.
As a solution, if you are creating the Key Vault secrets in the same file, then instead of using data blocks you can reference the values directly: set administrator_login and administrator_login_password to azurerm_key_vault_secret.sql-1-username.value and azurerm_key_vault_secret.sql-1-password.value.
Your code would then look like this:
resource "random_string" "username" {
length = 24
special = true
override_special = "%#!"
}
resource "random_password" "password" {
length = 24
special = true
override_special = "%#!"
}
# # Create KeyVault Secret
resource "azurerm_key_vault_secret" "sql-1-username" {
name = "sql-server-1-username"
value = random_string.username.result
key_vault_id = azurerm_key_vault.key_vault.id
tags = merge(local.common_tags, tomap({"type" = "key-vault-secret-username"})) # referencing the SQL server name here would now create a dependency cycle
depends_on = [azurerm_key_vault.key_vault]
}
resource "azurerm_key_vault_secret" "sql-1-password" {
name = "sql-server-1-password"
value = random_password.password.result
key_vault_id = azurerm_key_vault.key_vault.id
tags = merge(local.common_tags, tomap({"type" = "key-vault-secret-password"})) # referencing the SQL server name here would now create a dependency cycle
depends_on = [azurerm_key_vault_secret.sql-1-username]
}
resource "azurerm_mssql_server" "sql-server_1" {
name = "${local.resource-name-prefix}-sql-server-1"
resource_group_name = local.resource-group-name
location = var.resource-location
version = "12.0"
administrator_login = azurerm_key_vault_secret.sql-1-username.value
administrator_login_password = azurerm_key_vault_secret.sql-1-password.value
tags = merge(local.common_tags, tomap({"type" = "mssql-server"}))
depends_on = [azurerm_key_vault_secret.sql-1-password]
}
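A related simplification (a sketch, not part of the original answer) is to feed the generated credentials to both the Key Vault secrets and the SQL server directly from the random_* resources, so the server never needs to read the secrets back at all:
resource "azurerm_mssql_server" "sql-server_1" {
  name                         = "${local.resource-name-prefix}-sql-server-1"
  resource_group_name          = local.resource-group-name
  location                     = var.resource-location
  version                      = "12.0"
  # both the server and the secrets depend only on the random_* resources
  administrator_login          = random_string.username.result
  administrator_login_password = random_password.password.result
  tags                         = merge(local.common_tags, tomap({"type" = "mssql-server"}))
}
Key Vault then simply records the same generated values for later reuse, and no ordering between the server and the secrets is required.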

Finding Ways to Merge Resource Tags

Hello Terraform Experts,
I inherited some old Terraform code for deploying resources to Azure. One of the main components that I see in most of the modules is to merge the Resource Group tags with additional tags that go on individual resources. The Resource Group tags are outputs as a map of tags. For example:
output "resource_group_tags_map" {
value = { for r in azurerm_resource_group.this : r.name => r.tags }
description = "map of rg tags."
}
and then a resource such as a VNet merges the RG tags with additional VNet-specific tags, given the name of the RG in a variable.
# merge Resource Group tags with Tags for VNET
# this is going to break if we change RGs
locals {
tags = merge(var.net_additional_tags, data.azurerm_resource_group.this.tags)
}
This works just fine if we can set the resource group in a single variable. It assumes that the resource(s) being deployed will go into one RG. However, this is not the case anymore and we somehow need to build in a way for any RG to be chosen when deploying a resource. The code below shows how the original concept works.
locals {
tags = merge(var.net_additional_tags, data.azurerm_resource_group.this.tags)
}
# - Virtual Network
# -
resource "azurerm_virtual_network" "this" {
for_each = var.virtual_networks
name = each.value["name"]
location = data.azurerm_resource_group.this.location
resource_group_name = var.resource_group_name
address_space = each.value["address_space"]
dns_servers = lookup(each.value, "dns_servers", null)
tags = local.tags
}
I'm therefore looking for help to work around this. Say we create 100 VNets and each one goes into a different RG; we couldn't create 100 different resource group variables to capture that, as it would become too cumbersome.
Here is my example with Key Vault:
resource "azurerm_key_vault" "this" {
for_each = var.key_vaults
name = each.value["name"]
location = each.value["location"]
resource_group_name = each.value["resource_group_name"]
sku_name = each.value["sku_name"]
access_policy = var.access_policies
enabled_for_deployment = each.value["enabled_for_deployment"]
enabled_for_disk_encryption = each.value["enabled_for_disk_encryption"]
enabled_for_template_deployment = each.value["enabled_for_template_deployment"]
enable_rbac_authorization = each.value["enable_rbac_authorization"]
purge_protection_enabled = each.value["purge_protection_enabled"]
soft_delete_retention_days = each.value["soft_delete_retention_days"]
tags = merge(each.value["tags"], )
In the tags argument, we need to somehow merge the tags entered for this instance of Key Vault with the resource group tags that the user chose to place the key vault in. I thought of something like this, but clearly the syntax is wrong.
merge(each.value["tags"], data.azurerm_resource_group[each.key][each.value["resource_group_name"].tags)
Thanks for your input.
UPDATE:
│ Error: Invalid index
│
│ on Modules\keyvault\main.tf line 54, in resource "azurerm_key_vault" "this":
│ 54: tags = merge(each.value["tags"], data.azurerm_resource_group.this["${each.value.resource_group_name}"].tags)
│ ├────────────────
│ │ data.azurerm_resource_group.this is object with 1 attribute "keyvault1"
│ │ each.value.resource_group_name is "Terraform1"
│
│ The given key does not identify an element in this collection value.
Solution code posted below using a map and locals.
SOLUTION
Variables.tf
variable "key_vaults" {
description = "Key Vaults and their properties."
type = map(object({
name = string
location = string
resource_group_name = string
sku_name = string
tenant_id = string
enabled_for_deployment = bool
enabled_for_disk_encryption = bool
enabled_for_template_deployment = bool
enable_rbac_authorization = bool
purge_protection_enabled = bool
soft_delete_retention_days = number
tags = map(string)
}))
default = {}
}
# soft_delete_retention_days numeric value can be between 7 and 90. 90 is default
Main.tf for KeyVault module
data "azurerm_resource_group" "this" {
# read from local variable, index is resource_group_name
for_each = local.rgs_map
name = each.value.name
}
# use data azurerm_client_config to get tenant_id, not from config
data "azurerm_client_config" "current" {}
# -
# - Setup key vault
# - transform variables to locals to make sure the correct index will be used: resource group name and key vault name
locals {
rgs_map = {
for n in var.key_vaults :
n.resource_group_name => {
name = n.resource_group_name
}
}
kvs_map = {
for n in var.key_vaults :
n.name => {
name = n.name
location = n.location
resource_group_name = n.resource_group_name
sku_name = n.sku_name
tenant_id = data.azurerm_client_config.current.tenant_id # n.tenant_id
enabled_for_deployment = n.enabled_for_deployment
enabled_for_disk_encryption = n.enabled_for_disk_encryption
enabled_for_template_deployment = n.enabled_for_template_deployment
enable_rbac_authorization = n.enable_rbac_authorization
purge_protection_enabled = n.purge_protection_enabled
soft_delete_retention_days = n.soft_delete_retention_days
tags = merge(n.tags, data.azurerm_resource_group.this["${n.resource_group_name}"].tags)
}
}
}
resource "azurerm_key_vault" "this" {
for_each = local.kvs_map # use the local variable; otherwise "keyvault1" would be used as the index instead of "kv-eastus2-01"
name = each.value["name"]
location = each.value["location"]
resource_group_name = each.value["resource_group_name"]
sku_name = each.value["sku_name"]
tenant_id = each.value["tenant_id"]
enabled_for_deployment = each.value["enabled_for_deployment"]
enabled_for_disk_encryption = each.value["enabled_for_disk_encryption"]
enabled_for_template_deployment = each.value["enabled_for_template_deployment"]
enable_rbac_authorization = each.value["enable_rbac_authorization"]
purge_protection_enabled = each.value["purge_protection_enabled"]
soft_delete_retention_days = each.value["soft_delete_retention_days"]
tags = each.value["tags"]
}
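For reference, a hypothetical root-module input that exercises these locals could look like this (the resource group and vault names are taken from the error output and comments above; the tag values are made up):
key_vaults = {
  keyvault1 = {
    name                            = "kv-eastus2-01"
    location                        = "eastus2"
    resource_group_name             = "Terraform1"
    sku_name                        = "standard"
    tenant_id                       = "" # overridden by data.azurerm_client_config in kvs_map
    enabled_for_deployment          = false
    enabled_for_disk_encryption     = false
    enabled_for_template_deployment = false
    enable_rbac_authorization       = true
    purge_protection_enabled        = false
    soft_delete_retention_days      = 90
    tags = {
      owner = "platform-team"
    }
  }
}
With this input, local.rgs_map is keyed by "Terraform1" and local.kvs_map by "kv-eastus2-01", so the data.azurerm_resource_group.this["Terraform1"] lookup inside the merge succeeds.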

Referencing resource instances created by "for_each" in Terraform

I'm testing the "for_each" resource argument, now available in Terraform 0.12.6, but I can't manage to reference the created instances in other resources.
azure.tf
variable "customers" {
type = map(object({name=string}))
}
resource "azurerm_storage_account" "provisioning-datalake" {
for_each = var.customers
name = "mydatalake${each.key}"
resource_group_name = "${azurerm_resource_group.provisioning-group.name}"
location = "${azurerm_databricks_workspace.databricks.location}"
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "GRS"
is_hns_enabled = true
enable_advanced_threat_protection = true
tags = {
environment = var.environment
customer = each.value.name
}
}
resource "azurerm_key_vault_secret" "key-vault-datalake-secret" {
for_each = var.customers
name = "mydatalake-shared-key-${each.key}"
value = azurerm_storage_account.provisioning-datalake[each.key].primary_access_key
key_vault_id = azurerm_key_vault.key-vault.id
tags = {
environment = var.environment
customer = each.value.name
}
}
build.tfvars
environment = "Build"
customers = {
dev = {
name = "Development"
},
int = {
name = "Integration"
},
stg = {
name = "Staging"
}
}
I expect the "key-vault-datalake-secret" entries to be created with keys matching those of the generated "provisioning-datalake" resources.
But when I run terraform plan --var-file=build.tfvars, I get the following error:
Error: Invalid index
on azure.tf line 45, in resource "azurerm_key_vault_secret" "key-vault-datalake-secret":
45: value = azurerm_storage_account.provisioning-datalake[each.key].primary_access_key
|----------------
| azurerm_storage_account.provisioning-datalake is object with 52 attributes
| each.key is "stg"
The given key does not identify an element in this collection value.
Error: Invalid index
on azure.tf line 45, in resource "azurerm_key_vault_secret" "key-vault-datalake-secret":
45: value = azurerm_storage_account.provisioning-datalake[each.key].primary_access_key
|----------------
| azurerm_storage_account.provisioning-datalake is object with 52 attributes
| each.key is "int"
The given key does not identify an element in this collection value.
Error: Invalid index
on azure.tf line 45, in resource "azurerm_key_vault_secret" "key-vault-datalake-secret":
45: value = azurerm_storage_account.provisioning-datalake[each.key].primary_access_key
|----------------
| azurerm_storage_account.provisioning-datalake is object with 52 attributes
| each.key is "dev"
The given key does not identify an element in this collection value.
This was caused by a bug in Terraform 0.12.6; it was corrected in Terraform 0.12.7.
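Once upgraded, a minimal way to keep the project from being run with the broken version (a sketch) is a required_version constraint:
terraform {
  required_version = ">= 0.12.7"
}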
