Due to some technical issues during a migration, we had to make some changes to our Azure resources directly in the portal. To bring our Terraform state files up to date again, we plan to import those resources.
But when doing a trial on a POC environment with just one resource group, we already ran into trouble.
These are the commands I executed:
terraform import -var-file="T:\_config\%SIB_Subscription%\%Core%\terraform.tfvars" "module.provision_resourcegroup.module.rg_create[\"edw-10\"].azurerm_resource_group.rg" /subscriptions/8dc72845-b367-4dcc-98f9-d9a4a933defc/resourceGroups/rg-poc-edw-010
terraform plan -var-file="T:\_config\%SIB_Subscription%\%Core%\terraform.tfvars" -out "T:\_CommandLine\_Logs\planfile.log"
The environment variables are set correctly, as the state file is created on the blob storage.
But when I look at the output on screen, I see this:
# module.provision_resourcegroup.module.rg_create["edw-1"].azurerm_resource_group.rg will be destroyed
- resource "azurerm_resource_group" "rg" {
- id = "/subscriptions/oooooo-zzzz-xxxx-yyyy-zzzz/resourceGroups/rg-poc-edw-001" -> null
- location = "westeurope" -> null
- name = "rg-poc-edw-001" -> null
- tags = {
- "APMId" = "00000"
- "CMDBApplicationId" = "tbd"
- "CMDBApplicationURL" = "tbd"
- "Capability" = "DAS - Data Analytics Services"
- "Das_Desc" = "DAS Common Purpose"
- "Solution" = "EDW"
} -> null
- timeouts {}
}
# module.provision_resourcegroup[0].module.rg_create["edw-1"].azurerm_resource_group.rg will be created
+ resource "azurerm_resource_group" "rg" {
+ id = (known after apply)
+ location = "westeurope"
+ name = "rg-poc-edw-001"
+ tags = {
+ "APMId" = "00000"
+ "CMDBApplicationId" = "tbd"
+ "CMDBApplicationURL" = "tbd"
+ "Capability" = "DAS - Data Analytics Services"
+ "Das_Desc" = "DAS Common Purpose"
+ "Solution" = "EDW"
}
}
So this can't be used, as the apply would delete the RG before creating the new one. Is there a way to see WHY Terraform wants to recreate it?
This is my code to create the resource in the module:
resource "azurerm_resource_group" "rg" {
name = "rg-${module.subscription.environment}-${local.rg_name_solution}-${var.rg_name_seqnr}"
location = module.location.azure
tags = {
"Das_Desc" = var.tag_Desc
"Capability" = var.tag_capability
"Solution" = var.tag_solution
"APMId" = var.tag_APMId
"CMDBApplicationURL" = var.tag_CMDBApplicationURL
"CMDBApplicationId" = var.tag_CMDBApplicationId
}
}
It appears that your outer module declaration now has a count meta-argument, so you need to rename the resource path in your state according to the new namespace. You can rename resources in your state with terraform state mv <former name> <current name>:
terraform state mv 'module.provision_resourcegroup.module.rg_create["edw-1"].azurerm_resource_group.rg' 'module.provision_resourcegroup[0].module.rg_create["edw-1"].azurerm_resource_group.rg'
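On Terraform 1.1 and later there is also a declarative alternative: a moved block records the same rename in configuration, so the next plan performs the move instead of proposing destroy/create. A sketch, assuming the module addresses from the plan output above:
moved {
  from = module.provision_resourcegroup
  to   = module.provision_resourcegroup[0]
}
This moves every resource inside the module call, including the rg_create children, without touching the state by hand.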
I have seen examples for adding one secret (or key) to Azure Key Vault, but I now have a requirement to add multiple secrets to Azure Key Vault using Terraform.
How can I achieve that? Can anyone suggest?
Thank you.
I tried adding a resource for each secret, i.e. multiple resources like below, but that did not work.
module "keyvault_secret" {
source = "../../modules/keyvault_secret"
count = length(var.secrets)
keyVaultSecretName = keys(var.secrets)[count.index]
keyVaultSecretValue = values(var.secrets)[count.index]
keyVaultId = data.azurerm_key_vault.key_vault.id
}
variables:
variable "secrets" {
type = map(string)
}
variables.tfvars:
secrets = $(secrets)
In the YAML pipeline:
displayName: DEV
variables:
  - group: 'Environment - Dev'
  - name: secrets
    value: '{"testAPIKey1" = $(testAPIKey1) , "testAPIKey2" = $(testAPIKey2) }'
I have defined those key values in the above variable group, Environment - Dev.
This is the error it throws:
Expected a closing parenthesis to terminate the expression.
##[error]Terraform command 'plan' failed with exit code '1'.: Unbalanced parentheses
##[error]
Error: Unbalanced parentheses
You need to run it in a loop.
See this link for more info about Terraform loops (for each or count):
https://www.cloudbolt.io/terraform-best-practices/terraform-for-loops/
Untested but something like this:
#Reference AKV in data block
data "azurerm_key_vault" "kvexample" {
  name                = "mykeyvault"
  resource_group_name = "some-resource-group"
}

variable "secret_maps" {
  type = map(string)
  default = {
    "name1" = "value1"
    "name2" = "value2"
    "name3" = "value3"
  }
}

# Count loop
resource "azurerm_key_vault_secret" "kvsecrettest" {
  count        = length(var.secret_maps)
  name         = keys(var.secret_maps)[count.index]
  value        = values(var.secret_maps)[count.index]
  key_vault_id = data.azurerm_key_vault.kvexample.id # the key vault is a data source, so reference it with the data. prefix
}

#----------------- Or use For Each instead of Count
# For Each loop
resource "azurerm_key_vault_secret" "kvsecrettest" {
  for_each     = var.secret_maps
  name         = each.key
  value        = each.value
  key_vault_id = data.azurerm_key_vault.kvexample.id
}
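A side note on the Unbalanced parentheses error itself: if the $(...) macros are not expanded by the pipeline, or a secret value is substituted unquoted, that literal text reaches HCL and fails to parse, since HCL map values must be quoted strings. A hedged guess at the pipeline variable, with bare keys and quoted values:
value: '{ testAPIKey1 = "$(testAPIKey1)", testAPIKey2 = "$(testAPIKey2)" }'
Also worth noting: for_each keys the instances by secret name, so adding or removing one secret later does not shift the addresses of the others the way count does.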
I have the following Terraform resources in a file:
resource "google_project_service" "cloud_resource_manager" {
project = var.tf_project_id
service = "cloudresourcemanager.googleapis.com"
disable_dependent_services = true
}
resource "google_project_service" "artifact_registry" {
project = var.tf_project_id
service = "artifactregistry.googleapis.com"
disable_dependent_services = true
depends_on = [google_project_service.cloud_resource_manager]
}
resource "google_artifact_registry_repository" "el" {
provider = google-beta
project = var.tf_project_id
location = var.region
repository_id = "el"
description = "Repository for extract/load docker images"
format = "DOCKER"
depends_on = [google_project_service.artifact_registry]
}
However, when I run terraform plan, I get this:
Terraform will perform the following actions:
# google_artifact_registry_repository.el will be created
+ resource "google_artifact_registry_repository" "el" {
+ create_time = (known after apply)
+ description = "Repository for extract/load docker images"
+ format = "DOCKER"
+ id = (known after apply)
+ location = "us-central1"
+ name = (known after apply)
+ project = "backbone-third-party-data"
+ repository_id = "el"
+ update_time = (known after apply)
}
# google_project_iam_member.ingest_sa_roles["cloudscheduler.serviceAgent"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = "backbone-third-party-data"
+ role = "roles/cloudscheduler.serviceAgent"
}
# google_project_iam_member.ingest_sa_roles["run.invoker"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = <my project id>
+ role = "roles/run.invoker"
}
# google_project_service.artifact_registry will be created
+ resource "google_project_service" "artifact_registry" {
+ disable_dependent_services = true
+ disable_on_destroy = true
+ id = (known after apply)
+ project = <my project id>
+ service = "artifactregistry.googleapis.com"
}
See how google_project_service.artifact_registry is created after google_artifact_registry_repository.el. I was hoping that the depends_on in my google_artifact_registry_repository.el resource would make the service be created first. Am I misunderstanding how depends_on works? Or does the order of resources listed by terraform plan not actually reflect the order in which they are created?
Edit: when I run terraform apply, it errors out with
Error 403: Cloud Resource Manager API has not been used in project 521986354168 before or it is disabled
even though it is enabled. I think it's doing this because it's running the artifact registry resource creation before creating the project services?
I don't think it will be possible to enable this particular API this way, as the google_project_service resource depends on the Resource Manager API (and maybe also the Service Usage API?) being enabled. So you could either enable those manually or use a null_resource with a local-exec provisioner to do it automatically:
resource "null_resource" "enable_cloudresourcesmanager_api" {
provisioner "local-exec" {
command = "gcloud services enable cloudresourcesmanager.googleapis.com cloudresourcemanager.googleapis.com --project ${var.project_id}"
}
}
Another issue you might run into is that enabling an API takes some time, depending on the service. So even though your resources depend on a resource enabling an API, you may still get the same error message. You can then just reapply your configuration, and since the API has had time to initialize, the second apply will work. In some cases this is good enough, but if you are building a reusable module you might want to avoid those reapplies. Then you can use a time_sleep resource to wait for API initialization:
resource "time_sleep" "wait_for_cloudresourcemanager_api" {
depends_on = [null_resource.enable_cloudresourcesmanager_api]
# or: depends_on = [google_project_service.some_other_api]
create_duration = "30s"
}
I have a storage account created in the Azure portal (outside of Terraform). I want to configure a lifecycle management policy to delete older blobs. I tried terraform import to import the resource (the storage account), but the settings seem to differ: when I run terraform plan, it says it will replace or create the storage account.
But I don't want to recreate the storage account, which has some data in it.
provider "azurerm" {
features {}
skip_provider_registration = "true"
}
variable "LOCATION" {
default = "northeurope"
description = "Region to deploy into"
}
variable "RESOURCE_GROUP" {
default = "[RETRACTED]" # The value is same in azure portal
description = "Name of the resource group"
}
variable "STORAGE_ACCOUNT" {
default = "[RETRACTED]" # The value is same in azure portal
description = "Name of the storage account where to store the backup"
}
variable "STORAGE_ACCOUNT_RETENTION_DAYS" {
default = "180"
description = "Number of days to keep the backups"
}
resource "azurerm_resource_group" "storage-account" {
name = var.RESOURCE_GROUP
location = var.LOCATION
}
resource "azurerm_storage_account" "storage-account-lifecycle" {
name = var.STORAGE_ACCOUNT
location = azurerm_resource_group.storage-account.location
resource_group_name = azurerm_resource_group.storage-account.name
account_tier = "Standard"
account_replication_type = "RAGRS" #Read-access geo-redundant storage
}
resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
storage_account_id = azurerm_storage_account.storage-account-lifecycle.id
rule {
name = "DeleteOldBackups"
enabled = true
filters {
blob_types = ["blockBlob"]
}
actions {
base_blob {
delete_after_days_since_modification_greater_than = var.STORAGE_ACCOUNT_RETENTION_DAYS
}
}
}
}
Importing the resource:
$ terraform import azurerm_storage_account.storage-account-lifecycle /subscriptions/[RETRACTED]
azurerm_storage_account.storage-account-lifecycle: Importing from ID "/subscriptions/[RETRACTED]...
azurerm_storage_account.storage-account-lifecycle: Import prepared!
Prepared azurerm_storage_account for import
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
The plan is below:
$ terraform plan
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# azurerm_resource_group.storage-account will be created
+ resource "azurerm_resource_group" "storage-account" {
+ id = (known after apply)
+ location = "northeurope"
+ name = "[RETRACTED]"
}
# azurerm_storage_management_policy.storage-account-lifecycle-management-policy will be created
+ resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
+ id = (known after apply)
+ storage_account_id = "/subscriptions/[RETRACTED]"
+ rule {
+ enabled = true
+ name = "DeleteOldBackups"
+ actions {
+ base_blob {
+ delete_after_days_since_modification_greater_than = 180
}
}
+ filters {
+ blob_types = [
+ "blockBlob",
]
}
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform
apply" now.
From the plan, I see it will create the storage account. I also tried removing the azurerm_storage_account section and specifying the resource ID directly for storage_account_id in the azurerm_storage_management_policy section, but it still says # azurerm_resource_group.storage-account will be created.
How do I configure the lifecycle management policy without modifying or recreating the existing storage account?
PS: This is my first Terraform script.
OK, I see the problem, as @Jim Xu pointed out in the comment: I didn't import the resource group, which is exactly what the plan is saying. I imported the resource group as follows and ran terraform plan:
$ terraform import azurerm_resource_group.storage-account /subscriptions/[RETRACTED]
$ terraform plan
azurerm_resource_group.storage-account: Refreshing state... [id=/subscriptions/[RETRACTED]]
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# azurerm_storage_management_policy.storage-account-lifecycle-management-policy will be created
+ resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
+ id = (known after apply)
+ storage_account_id = "/subscriptions/[RETRACTED]"
+ rule {
+ enabled = true
+ name = "DeleteOldBackups"
+ actions {
+ base_blob {
+ delete_after_days_since_modification_greater_than = 180
}
}
+ filters {
+ blob_types = [
+ "blockBlob",
]
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
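On Terraform 1.5 and later, the same imports could also be declared in configuration with import blocks, which are then executed as part of plan/apply instead of via the CLI. A sketch, reusing the (retracted) resource IDs from above:
import {
  to = azurerm_resource_group.storage-account
  id = "/subscriptions/[RETRACTED]"
}

import {
  to = azurerm_storage_account.storage-account-lifecycle
  id = "/subscriptions/[RETRACTED]"
}
The plan then shows the two imports alongside the single create for the management policy.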
I'm trying to debug why my Terraform script is not working. For some unknown reason, Terraform keeps destroying my MySQL database and recreating it.
Below is the output of the execution plan:
# azurerm_mysql_server.test01 will be destroyed
- resource "azurerm_mysql_server" "test01" {
- administrator_login = "me" -> null
- auto_grow_enabled = true -> null
- backup_retention_days = 7 -> null
- create_mode = "Default" -> null
- fqdn = "db-test01.mysql.database.azure.com" -> null
- geo_redundant_backup_enabled = false -> null
- id = "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01" -> null
- infrastructure_encryption_enabled = false -> null
- location = "westeurope" -> null
- name = "db-test01" -> null
- public_network_access_enabled = true -> null
- resource_group_name = "production-rg" -> null
- sku_name = "B_Gen5_1" -> null
- ssl_enforcement = "Disabled" -> null
- ssl_enforcement_enabled = false -> null
- ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled" -> null
- storage_mb = 51200 -> null
- tags = {} -> null
- version = "8.0" -> null
- storage_profile {
- auto_grow = "Enabled" -> null
- backup_retention_days = 7 -> null
- geo_redundant_backup = "Disabled" -> null
- storage_mb = 51200 -> null
}
- timeouts {}
}
# module.databases.module.test.azurerm_mysql_server.test01 will be created
+ resource "azurerm_mysql_server" "test01" {
+ administrator_login = "me"
+ administrator_login_password = (sensitive value)
+ auto_grow_enabled = true
+ backup_retention_days = 7
+ create_mode = "Default"
+ fqdn = (known after apply)
+ geo_redundant_backup_enabled = false
+ id = (known after apply)
+ infrastructure_encryption_enabled = false
+ location = "westeurope"
+ name = "db-test01"
+ public_network_access_enabled = true
+ resource_group_name = "production-rg"
+ sku_name = "B_Gen5_1"
+ ssl_enforcement = (known after apply)
+ ssl_enforcement_enabled = false
+ ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled"
+ storage_mb = 51200
+ version = "8.0"
+ storage_profile {
+ auto_grow = (known after apply)
+ backup_retention_days = (known after apply)
+ geo_redundant_backup = (known after apply)
+ storage_mb = (known after apply)
}
}
As far as I know, everything is exactly the same. To prevent this, I also did a manual terraform import to sync the state with the remote state.
The actual resource as defined in my main.tf:
resource "azurerm_mysql_server" "test01" {
name = "db-test01"
location = "West Europe"
resource_group_name = var.rg
administrator_login = "me"
administrator_login_password = var.root_password
sku_name = "B_Gen5_1"
storage_mb = 51200
version = "8.0"
auto_grow_enabled = true
backup_retention_days = 7
geo_redundant_backup_enabled = false
infrastructure_encryption_enabled = false
public_network_access_enabled = true
ssl_enforcement_enabled = false
}
The other odd thing is that the command below reports that everything is actually in sync:
➜ terraform git:(develop) ✗ terraform plan --refresh-only
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/firstklas-production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
No changes. Your infrastructure still matches the configuration.
After an actual import the same still happens even though the import states all is in state:
➜ terraform git:(develop) ✗ terraform import azurerm_mysql_server.test01 /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
azurerm_mysql_server.test01: Importing from ID "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01"...
azurerm_mysql_server.test01: Import prepared!
Prepared azurerm_mysql_server for import
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
What can I do to prevent this destroy? Or even figure out why the destroy is triggered in the first place? This is happening on multiple Azure instances at this point.
NOTE: the subscription ID is spoofed so don't worry
Best,
Pim
Your plan output shows that Terraform is seeing two different resource addresses:
# azurerm_mysql_server.test01 will be destroyed
# module.databases.module.test.azurerm_mysql_server.test01 will be created
Notice that the one to be created is in a nested module, not in the root module.
If your intent is to import this object to the address that is shown as needing to be created above, you'll need to specify this full address in the terraform import command:
terraform import 'module.databases.module.test.azurerm_mysql_server.test01' /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
The terraform import command tells Terraform to bind an existing remote object to a particular Terraform address, and so when you use it you need to be careful to specify the correct Terraform address to bind to.
In your case, you told Terraform to bind the object to a hypothetical resource "azurerm_mysql_server" "test01" block in the root module, but your configuration has no such block and so when you ran terraform plan Terraform assumed that you wanted to delete that object, because deleting a resource block is how we typically tell Terraform that we intend to delete something.
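Alternatively, since the earlier import already bound the object to the root-module address azurerm_mysql_server.test01, you could move the existing state entry to the new address rather than importing again. A sketch, using the two addresses from the plan output:
terraform state mv 'azurerm_mysql_server.test01' 'module.databases.module.test.azurerm_mysql_server.test01'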
There is a way:
Plan
Apply
terraform state rm "resource_name" (this removes the resource from the current state)
Apply again
This worked perfectly on GCP for creating 2 successive VMs using the same TF script.
The only thing is that we need to write code to fetch the current resources, store them somewhere, and generate the commands. While destroying, we can add resources back using terraform state mv "resource_name".
Note: this has a risk, as the very first VM did not get deleted, since it is considered created outside the scope of Terraform. So its cost may persist, and you should keep (incremental) backups of your states.
Hope this helps.
Hi, I am trying to create a Terraform script which will take input from the user in the form of a CSV file and create multiple Azure resources.
For example, if the user wants to create ResourceGroup > Vnet > Subnet in bulk, they will provide input in CSV format as below:
resourcegroup,RG_location,RG_tag,domainname,DNS_Zone_tag,virtualnetwork,VNET_location,addressspace
csvrg1,eastus2,Terraform RG,test.sd,Terraform RG,csvvnet1,eastus2,10.0.0.0/16,Terraform VNET,subnet1,10.0.0.0/24
csvrg2,westus,Terraform RG2,test2.sd,Terraform RG2,csvvnet2,westus,172.0.0.0/8,Terraform VNET2,subnet1,171.0.0.0/24
I have written the following working main.tf file:
# Configure the Microsoft Azure Provider
provider "azurerm" {
  version         = "=1.43.0"
  subscription_id = var.subscription
  tenant_id       = var.tenant
  client_id       = var.client
  client_secret   = var.secret
}

# Decoding the csv file
locals {
  vmcsv = csvdecode(file("${path.module}/computelanding.csv"))
}

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  count    = length(local.vmcsv)
  name     = local.vmcsv[count.index].resourcegroup
  location = local.vmcsv[count.index].RG_location
  tags = {
    environment = local.vmcsv[count.index].RG_tag
  }
}

# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
  count               = 1
  name                = local.vmcsv[count.index].domainname
  resource_group_name = local.vmcsv[count.index].resourcegroup
  depends_on          = [azurerm_resource_group.myterraformgroup]
  tags = {
    environment = local.vmcsv[count.index].DNS_Zone_tag
  }
}
To be continued....
The issue I am facing here is that in the second resource group the user doesn't want a certain resource type; suppose the user wants to skip the DNS zone in the resource group csvrg2. How do I make Terraform skip that block?
Edit: What I am trying to achieve is "based on some condition in the CSV file, do not create the azurerm_dns_zone resource for the resource group csvrg2".
I have provided an example of how the CSV file may look below:
resourcegroup,RG_location,RG_tag,DNS_required,domainname,DNS_Zone_tag,virtualnetwork,VNET_location,addressspace
csvrg1,eastus2,Terraform RG,1,test.sd,Terraform RG,csvvnet1,eastus2,10.0.0.0/16,Terraform VNET,subnet1,10.0.0.0/24
csvrg2,westus,Terraform RG2,0,test2.sd,Terraform RG2,csvvnet2,westus,172.0.0.0/8,Terraform VNET2,subnet1,171.0.0.0/24
You already had the right thought using depends_on. However, you're using a count inside, which, from my understanding, causes Terraform to consider the dependency satisfied once the first resource[0] is created, and it goes ahead.
I found this post with a workaround which you might be able to try:
https://github.com/hashicorp/terraform/issues/15285#issuecomment-447971852
That basically tells us to create a null_resource like in that example:
variable "instance_count" {
default = 0
}
resource "null_resource" "a" {
count = var.instance_count
}
resource "null_resource" "b" {
depends_on = [null_resource.a]
}
In your example, it might look like this:
# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  count    = length(local.vmcsv)
  name     = local.vmcsv[count.index].resourcegroup
  location = local.vmcsv[count.index].RG_location
  tags = {
    environment = local.vmcsv[count.index].RG_tag
  }
}

# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
  count               = 1
  name                = local.vmcsv[count.index].domainname
  resource_group_name = local.vmcsv[count.index].resourcegroup
  depends_on          = [null_resource.example] # depends_on takes a list
  tags = {
    environment = local.vmcsv[count.index].DNS_Zone_tag
  }
}

resource "null_resource" "example" {
  ...
  # depends_on cannot use a computed index, so reference the resource as a whole
  depends_on = [azurerm_resource_group.myterraformgroup]
}
Or, depending on your Terraform version (0.12+, which I'm guessing you're using from your syntax):
# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "myterraformgroup" {
  count    = length(local.vmcsv)
  name     = local.vmcsv[count.index].resourcegroup
  location = local.vmcsv[count.index].RG_location
  tags = {
    environment = local.vmcsv[count.index].RG_tag
  }
}

# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
  count               = 1
  name                = local.vmcsv[count.index].domainname
  resource_group_name = local.vmcsv[count.index].resourcegroup
  # depends_on cannot use a computed index, so reference the resource as a whole
  depends_on          = [azurerm_resource_group.myterraformgroup]
  tags = {
    environment = local.vmcsv[count.index].DNS_Zone_tag
  }
}
I hope that helps.
Greetings
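For the conditional part of the question (skipping the DNS zone for csvrg2), another option is to filter the decoded CSV rows on the DNS_required column and drive count from the filtered list. A minimal sketch, assuming DNS_required holds "1" or "0" as in the sample CSV:
locals {
  # keep only the rows that actually want a DNS zone
  dns_rows = [for row in local.vmcsv : row if row.DNS_required == "1"]
}

resource "azurerm_dns_zone" "dnsp-private" {
  count               = length(local.dns_rows)
  name                = local.dns_rows[count.index].domainname
  resource_group_name = local.dns_rows[count.index].resourcegroup
  depends_on          = [azurerm_resource_group.myterraformgroup]
  tags = {
    environment = local.dns_rows[count.index].DNS_Zone_tag
  }
}
Rows with DNS_required set to 0 then simply produce no azurerm_dns_zone instance.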