Use output value from module that has a for_each set - terraform

I had my code set up to export the dynamic private IP address when the VM is created, via an output value. Since then, I have updated to Terraform 0.13 and I'm using a for_each in the module, but when I reference this value now I get the error below. I'm not sure how to export the NIC's dynamic private IP address, now that for_each is set, so it can be used in source_address_prefixes. I understand what the error is saying, but what is the correct way to export the value from the object map?
Error: Unsupported attribute
on main.tf line 66, in resource "azurerm_network_security_rule" "fico-app-sr-80":
66: source_address_prefixes = module.fico_web_vm.linux_vm_ips[0]
|----------------
| module.fico_web_vm is object with 1 attribute "web-1"
This object does not have an attribute named "linux_vm_ips".
The outputs.tf file:
output "linux_vm_names" {
value = [azurerm_linux_virtual_machine.virtual_machine.*.name]
}
output "linux_vm_ips" {
value = [azurerm_network_interface.network_interface.*.private_ip_address]
}
output "linux_vm_nsg" {
value = azurerm_network_security_group.network_security_group.name
}
The security rule the error refers to, located in main.tf:
resource "azurerm_network_security_rule" "fico-app-sr-80" {
name = "nsr-${var.environment}-${var.directorate}-${var.business_unit}-${var.vm_identifier}${var.instance_number}-http80"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "80"
source_address_prefixes = module.fico_web_vm.linux_vm_ips[0]
destination_address_prefix = "VirtualNetwork"
resource_group_name = azurerm_resource_group.rg_dwp_fico_app.name
network_security_group_name = "module.fico_app_vm.linux_vm_nsg"
}
The NIC which is in the root module:
resource "azurerm_network_interface" "network_interface" {
name = "nic-${var.environment}-${var.directorate}-${var.business_unit}-${var.vm_identifier}-${var.vm_name}"
resource_group_name = var.resource_group
location = var.location
enable_ip_forwarding = "false"
enable_accelerated_networking = "false"
ip_configuration {
name = "ipconfig1"
subnet_id = data.azurerm_subnet.dwp_subnet.id
private_ip_address_allocation = "Dynamic"
primary = "true"
}
}
The module that builds the web_vm in main.tf:
module "fico_web_vm" {
for_each = var.web_servers
source = "../modules/compute/linux_vm"
source_image_id = var.web_image_id
location = var.location
vm_name = each.key
vm_identifier = "${var.vm_identifier}${var.instance_number}"
vm = each.value
disks = each.value["disks"]
resource_group = azurerm_resource_group.rg_dwp_fico_web.name
directorate = var.directorate
business_unit = var.business_unit
environment = var.environment
network_rg_identifier = var.network_rg_identifier
subnet_name = "sub-dwp-${var.environment}-${var.directorate}-${var.business_unit}-fe01"
diag_storage_account_name = var.diag_storage_account_name
ansible_storage_account_name = var.ansible_storage_account_name
ansible_storage_account_key = var.ansible_storage_account_key
log_analytics_workspace_name = var.log_analytics_workspace_name
backup_policy_name = var.backup_policy_name
}
The variables for the app in main.tf:
# Web VM Variables
variable "web_servers" {
description = "Variable for defining each instance"
type = map(object({
size = string
admin_username = string
public_key = string
disks = list(number)
zone_vm = string
zone_disk = list(string)
}))
}
The tfvars file that populates the variable and that the for_each loops over:
web_servers ={
web-1 = {
size = "Standard_B2s"
admin_username = xxxx
public_key = xxxx
disks = [32, 32]
zone_vm = "1"
zone_disk = ["1"]
}
}

As I see it, there are two mistakes in your code.
First:
network_security_group_name = "module.fico_app_vm.linux_vm_nsg"
need to change into:
network_security_group_name = module.fico_app_vm.linux_vm_nsg
Double quotes create a plain string literal; a reference to another resource or module output must not be quoted.
Second:
source_address_prefixes = module.fico_web_vm.linux_vm_ips[0]
needs to change into:
source_address_prefixes = module.fico_web_vm["web-1"].linux_vm_ips
Because the module now has for_each, module.fico_web_vm is a map of module instances keyed by the web_servers keys ("web-1" in your tfvars), so you have to index the module by that key before you can reach its outputs.
and the output should also be changed to:
output "linux_vm_ips" {
value = azurerm_network_interface.network_interface.*.private_ip_address
}
You create just one NIC, without count or for_each, but the splat expression azurerm_network_interface.network_interface.*.private_ip_address still returns a list. Since source_address_prefixes expects a list, you can pass that output through directly.
Maybe it's better to change the output like this:
output "linux_vm_ip" {
value = azurerm_network_interface.network_interface.private_ip_address
}
And then you can set the source_address_prefixes like this:
source_address_prefixes = [module.fico_web_vm["web-1"].linux_vm_ip]
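If you want the rule to cover every VM instance created by the for_each rather than just web-1, a minimal sketch (assuming the singular linux_vm_ip output above) would be:
# Collect the private IP of every fico_web_vm instance created by for_each
source_address_prefixes = [for vm in values(module.fico_web_vm) : vm.linux_vm_ip]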

Related

List & String Conversion Issue for Data sources

I am caught in a bit of a loop on this one. I need to provide azurerm_windows_virtual_machine with a list of network interface IDs. The network interfaces are created using a separate resource block. In my variable definition for the Windows VM, I provide an argument for the name(s) of those network interfaces so that we can correctly associate the NICs that we want with each virtual machine. If we have 100 NICs and 90 VMs, some of the VMs could get two NICs, so we want to be sure we provide some link between NIC name and VM name.
The network interface names are therefore a list(string).
I have been trying to use the values function to get the list of NIC IDs (given the names), but running into a failure: "The each object can be used only in "module" or "resource" blocks, and only when the "for_each" argument is set."
If I use a data source in the resource block, which seemed most logical, it fails too because I have a list(string) specified for the network_interface_names argument, but the data source cannot take that. It of course needs a single string. But it's never going to be a single string, it's always going to be a list (since we can have more than one NIC per VM).
I think the correct answer may be to create the list of IDs beforehand. The trick is that it would need to be built dynamically for each defined VM, because each VM has a different list of network_interface_names, so we need to generate the list on the fly per VM.
Variables
variable "resource_groups" {
description = "Resource groups"
type = map(object({
location = string
}))
}
variable "virtual_networks" {
description = "virtual networks and properties"
type = map(object({
resource_group_name = string
address_space = list(string)
}))
}
variable "subnets" {
description = "subnet and their properties"
type = map(object({
resource_group_name = string
virtual_network_name = string
address_prefixes = list(string)
}))
}
variable "nic" {
description = "network interfaces"
type = map(object({
subnet_name = string
resource_group_name = string
}))
}
variable "admin_password" {
type = string
sensitive = true
}
variable "admin_user" {
type = string
sensitive = true
}
variable "windows_vm" {
description = "Windows virtual machine"
type = map(object({
network_interface_names = list(string)
resource_group_name = string
size = string
timezone = string
}))
}
INPUTS
resource_groups = {
rg-eastus-dev1 = {
location = "eastus"
}
}
virtual_networks = {
vnet-dev1 = {
resource_group_name = "rg-eastus-dev1"
address_space = ["10.0.0.0/16"]
}
}
subnets = {
snet-01 = {
resource_group_name = "rg-eastus-dev1"
virtual_network_name = "vnet-dev1"
address_prefixes = ["10.0.1.0/24"]
}
}
nic = {
nic1 = {
subnet_name = "snet-01"
resource_group_name = "rg-eastus-dev1"
}
}
admin_password = "s}8cpH96qa.1BQ"
admin_user = "padmin"
windows_vm = {
winvm1 = {
network_interface_names = ["nic1"]
resource_group_name = "rg-eastus-dev1"
size = "Standard_B2s"
timezone = "Eastern Standard Time"
}
}
MAIN
resource "azurerm_resource_group" "rgs" {
for_each = var.resource_groups
name = each.key
location = each.value["location"]
}
data "azurerm_resource_group" "rgs" {
for_each = var.resource_groups
name = each.key
depends_on = [
azurerm_resource_group.rgs
]
}
resource "azurerm_virtual_network" "vnet" {
for_each = var.virtual_networks
name = each.key
resource_group_name = each.value["resource_group_name"]
address_space = each.value["address_space"]
location = data.azurerm_resource_group.rgs[each.value["resource_group_name"]].location
}
resource "azurerm_subnet" "subnet" {
for_each = var.subnets
name = each.key
resource_group_name = each.value["resource_group_name"]
virtual_network_name = each.value["virtual_network_name"]
address_prefixes = each.value["address_prefixes"]
depends_on = [
azurerm_virtual_network.vnet
]
}
data "azurerm_subnet" "subnet" {
for_each = var.subnets
name = each.key
virtual_network_name = each.value["virtual_network_name"]
resource_group_name = each.value["resource_group_name"]
depends_on = [
azurerm_resource_group.rgs
]
}
resource "azurerm_network_interface" "nics" {
for_each = var.nic
ip_configuration {
name = each.key
subnet_id = data.azurerm_subnet.subnet[each.value["subnet_name"]].id
private_ip_address_allocation = "Dynamic"
}
location = data.azurerm_resource_group.rgs[each.value["resource_group_name"]].location
name = each.key
resource_group_name = each.value["resource_group_name"]
depends_on = [
azurerm_resource_group.rgs,
azurerm_subnet.subnet
]
}
data "azurerm_network_interface" "nics" {
for_each = var.nic
name = each.key
resource_group_name = each.value["resource_group_name"]
depends_on = [
azurerm_resource_group.rgs
]
}
resource "azurerm_windows_virtual_machine" "windows_vm" {
for_each = var.windows_vm
admin_password = var.admin_password
admin_username = var.admin_user
location = data.azurerm_resource_group.rgs[each.value["resource_group_name"]].location
name = each.key
network_interface_ids = values(data.azurerm_network_interface.nics[each.value["network_interface_names"]].id)
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
resource_group_name = each.value["resource_group_name"]
size = each.value["size"]
timezone = each.value["timezone"]
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
}
Current Error on Plan
╷
│ Error: Invalid index
│
│ on main.tf line 75, in resource "azurerm_windows_virtual_machine" "windows_vm":
│ 75: network_interface_ids = values(data.azurerm_network_interface.nics[each.value["network_interface_names"]].id)
│ ├────────────────
│ │ data.azurerm_network_interface.nics is object with 1 attribute "nic1"
│ │ each.value["network_interface_names"] is list of string with 1 element
│
│ The given key does not identify an element in this collection value: string required.
Possible Solution - But Not working
Provide a map, keyed off the VM name, of NIC IDs. Then, in the windows_vm resource, take that map and try to get the list of NIC ID values.
locals {
nic_ids {
[for k, v in var.windows_vm : k => v {data.azurerm_network_interface.nics[v.network_interface_names]}.id]
}
}
resource "azurerm_windows_virtual_machine" "windows_vm" {
for_each = var.windows_vm
admin_password = var.admin_password
admin_username = var.admin_user
location = data.azurerm_resource_group.rgs[each.value["resource_group_name"]].location
name = each.key
network_interface_ids = values(local.nic_ids[each.key])
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
resource_group_name = each.value["resource_group_name"]
size = each.value["size"]
timezone = each.value["timezone"]
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
}
First of all, you do not need a data source for every resource you have just created. The resource itself already exposes all the information you need, so you can drop those data sources and reference the resources directly.
Returning to the error you provided, one way to generate the list dynamically would be:
network_interface_ids = [for ni_name in each.value["network_interface_names"]: azurerm_network_interface.nics[ni_name].id]
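If you prefer the locals approach from your "Possible Solution", a sketch under the same recommendation (referencing the azurerm_network_interface.nics resource directly rather than a data source) could look like:
locals {
  # Map of VM name => list of NIC IDs, built from each VM's configured NIC names
  vm_nic_ids = {
    for vm_name, vm in var.windows_vm :
    vm_name => [for ni_name in vm.network_interface_names : azurerm_network_interface.nics[ni_name].id]
  }
}
Inside the azurerm_windows_virtual_machine resource you would then set network_interface_ids = local.vm_nic_ids[each.key].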

azure terraform: how to attach multiple data disks to multiple VMs

I'm following Neal Shah's instructions for deploying multiple VMs with multiple managed disks (https://www.nealshah.dev/posts/2020/05/terraform-for-azure-deploying-multiple-vms-with-multiple-managed-disks/#deploying-multiple-vms-with-multiple-datadisks)
Everything works fine except for the azurerm_virtual_machine_data_disk_attachment resource, which fails with the following error:
│ Error: Invalid index
│
│ on main.tf line 103, in resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach":
│ 103: virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
│ ├────────────────
│ │ azurerm_linux_virtual_machine.vms is tuple with 3 elements
│ │ each.key is "datadisk_dca0-apache-cassandra-node0_disk00"
│
│ The given key does not identify an element in this collection value: a number is required.
my code is below:
locals {
vm_datadiskdisk_count_map = { for k in toset(var.nodes) : k => var.data_disk_count }
luns = { for k in local.datadisk_lun_map : k.datadisk_name => k.lun }
datadisk_lun_map = flatten([
for vm_name, count in local.vm_datadiskdisk_count_map : [
for i in range(count) : {
datadisk_name = format("datadisk_%s_disk%02d", vm_name, i)
lun = i
}
]
])
}
# create resource group
resource "azurerm_resource_group" "resource_group" {
name = format("%s-%s", var.dca, var.name)
location = var.location
}
# create availability set
resource "azurerm_availability_set" "vm_availability_set" {
name = format("%s-%s-availability-set", var.dca, var.name)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
}
# create Security Group to access linux
resource "azurerm_network_security_group" "linux_vm_nsg" {
name = format("%s-%s-linux-vm-nsg", var.dca, var.name)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
security_rule {
name = "AllowSSH"
description = "Allow SSH"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "linux_vm_nsg_association" {
subnet_id = "${data.azurerm_subnet.subnet.id}"
network_security_group_id = azurerm_network_security_group.linux_vm_nsg.id
}
# create NICs for apache cassandra hosts
resource "azurerm_network_interface" "vm_nics" {
depends_on = [azurerm_subnet_network_security_group_association.linux_vm_nsg_association]
count = length(var.nodes)
name = format("%s-%s-nic${count.index}", var.dca, var.name)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
ip_configuration {
name = format("%s-%s-apache-cassandra-ip", var.dca, var.name)
subnet_id = "${data.azurerm_subnet.subnet.id}"
private_ip_address_allocation = "Dynamic"
}
}
# create apache cassandra VMs
resource "azurerm_linux_virtual_machine" "vms" {
count = length(var.nodes)
name = element(var.nodes, count.index)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
network_interface_ids = [element(azurerm_network_interface.vm_nics.*.id, count.index)]
availability_set_id = azurerm_availability_set.vm_availability_set.id
size = var.vm_size
admin_username = var.admin_username
disable_password_authentication = true
admin_ssh_key {
username = var.admin_username
public_key = var.ssh_pub_key
}
source_image_id = var.source_image_id
os_disk {
caching = "ReadWrite"
storage_account_type = var.storage_account_type
disk_size_gb = var.os_disk_size_gb
}
}
# create data disk(s) for VMs
resource "azurerm_managed_disk" "managed_disk" {
for_each= toset([for j in local.datadisk_lun_map : j.datadisk_name])
name= each.key
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
storage_account_type = var.storage_account_type
create_option = "Empty"
disk_size_gb = var.disk_size_gb
}
resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach" {
for_each = toset([for j in local.datadisk_lun_map : j.datadisk_name])
managed_disk_id = azurerm_managed_disk.managed_disk[each.key].id
virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
lun = lookup(local.luns, each.key)
caching = "ReadWrite"
}
Does anyone know how to accomplish this? Thanks!
I've tried several different approaches but have been unsuccessful so far; I was expecting it to work as described in Neal's post.
I was able to get this working. However, I have not tested adding/removing nodes/disks yet. But this works to create multiple VMs with multiple data disks attached to each VM.
I use a variable file that I source to substitute the variables in the *.tf files.
variables.tf
variable "azure_subscription_id" {
type = string
description = "Azure Subscription ID"
default = ""
}
variable "dca" {
type = string
description = "datacenter [dca0|dca2|dca4|dca6]."
default = ""
}
variable "location" {
type = string
description = "Location of the resource group."
default = ""
}
variable "resource_group" {
type = string
description = "resource group name."
default = ""
}
variable "subnet_name" {
type = string
description = "subnet name"
default = ""
}
variable "vnet_name" {
type = string
description = "vnet name"
default = ""
}
variable "vnet_rg" {
type = string
description = "vnet resource group"
default = ""
}
variable "vm_size" {
type = string
description = "vm size"
default = ""
}
variable "os_disk_size_gb" {
type = string
description = "vm os disk size gb"
default = ""
}
variable "data_disk_size_gb" {
type = string
description = "vm data disk size gb"
default = ""
}
variable "admin_username" {
type = string
description = "admin user name"
default = ""
}
variable "ssh_pub_key" {
type = string
description = "public key for admin user"
default = ""
}
variable "source_image_id" {
type = string
description = "image id"
default = ""
}
variable "os_disk_storage_account_type" {
type = string
description = ""
default = ""
}
variable "data_disk_storage_account_type" {
type = string
description = ""
default = ""
}
variable "vm_list" {
type = map(object({
hostname = string
}))
default = {
vm0 ={
hostname = "${dca}-${name}-node-0"
},
vm1 = {
hostname = "${dca}-${name}-node-1"
}
vm2 = {
hostname = "${dca}-${name}-node-2"
}
}
}
variable "disks_per_instance" {
type = string
description = ""
default = ""
}
terraform.tfvars
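# NOTE: the ${...} tokens below (and in the vm_list defaults above) are presumably placeholders
# replaced by the sourced variable file mentioned earlier, not Terraform interpolation.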
# subscription
azure_subscription_id = "${azure_subscription_id}"
# name and location
resource_group = "${dca}-${name}"
location = "${location}"
dca = "${dca}"
# Network
subnet_name = "${subnet_name}"
vnet_name = "${dca}vnet"
vnet_rg = "th-${dca}-vnet"
# VM
vm_size = "${vm_size}"
os_disk_size_gb = "${os_disk_size_gb}"
os_disk_storage_account_type = "${os_disk_storage_account_type}"
source_image_id = "${source_image_id}"
# User/key info
admin_username = "${admin_username}"
ssh_pub_key = "${ssh_pub_key}"
# data disk info
data_disk_storage_account_type = "${data_disk_storage_account_type}"
data_disk_size_gb = "${data_disk_size_gb}"
disks_per_instance= "${disks_per_instance}"
main.tf
# set locals for multi data disks
locals {
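# vm_datadiskdisk_count_map: map of VM key => number of data disks to create for it
# datadisk_lun_map: flat list of { datadisk_name, lun } objects, one entry per disk
# luns: map of datadisk_name => lun, used when attaching each disk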
vm_datadiskdisk_count_map = { for k, query in var.vm_list : k => var.disks_per_instance }
luns = { for k in local.datadisk_lun_map : k.datadisk_name => k.lun }
datadisk_lun_map = flatten([
for vm_name, count in local.vm_datadiskdisk_count_map : [
for i in range(count) : {
datadisk_name = format("datadisk_%s_disk%02d", vm_name, i)
lun = i
}
]
])
}
# create resource group
resource "azurerm_resource_group" "resource_group" {
name = format("%s", var.resource_group)
location = var.location
}
# create data disk(s)
resource "azurerm_managed_disk" "managed_disk" {
for_each = toset([for j in local.datadisk_lun_map : j.datadisk_name])
name = each.key
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
storage_account_type = var.data_disk_storage_account_type
create_option = "Empty"
disk_size_gb = var.data_disk_size_gb
}
# create availability set
resource "azurerm_availability_set" "vm_availability_set" {
name = format("%s-availability-set", var.resource_group)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
}
# create Security Group to access linux
resource "azurerm_network_security_group" "linux_vm_nsg" {
name = format("%s-linux-vm-nsg", var.resource_group)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
security_rule {
name = "AllowSSH"
description = "Allow SSH"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "linux_vm_nsg_association" {
subnet_id = "${data.azurerm_subnet.subnet.id}"
network_security_group_id = azurerm_network_security_group.linux_vm_nsg.id
}
# create NICs for vms
resource "azurerm_network_interface" "nics" {
depends_on = [azurerm_subnet_network_security_group_association.linux_vm_nsg_association]
for_each = var.vm_list
name = "${each.value.hostname}-nic"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
ip_configuration {
name = format("%s-proxy-ip", var.resource_group)
subnet_id = "${data.azurerm_subnet.subnet.id}"
private_ip_address_allocation = "Dynamic"
}
}
# create VMs
resource "azurerm_linux_virtual_machine" "vms" {
for_each = var.vm_list
name = each.value.hostname
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
network_interface_ids = [azurerm_network_interface.nics[each.key].id]
availability_set_id = azurerm_availability_set.vm_availability_set.id
size = var.vm_size
source_image_id = var.source_image_id
custom_data = filebase64("cloud-init.sh")
admin_username = var.admin_username
disable_password_authentication = true
admin_ssh_key {
username = var.admin_username
public_key = var.ssh_pub_key
}
os_disk {
caching = "ReadWrite"
storage_account_type = var.os_disk_storage_account_type
disk_size_gb = var.os_disk_size_gb
}
}
# attach data disks to vms
resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach" {
for_each = toset([for j in local.datadisk_lun_map : j.datadisk_name])
managed_disk_id = azurerm_managed_disk.managed_disk[each.key].id
virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
lun = lookup(local.luns, each.key)
caching = "ReadWrite"
}
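The attachment works here because the VMs now use for_each keyed by the vm_list keys ("vm0", "vm1", "vm2"), so element(split("_", each.key), 1) yields exactly that key. For reference, the original count-based VMs could likely be indexed without switching to for_each by converting the node name back into its numeric position, as a sketch (assuming var.nodes holds the same names used to build datadisk_lun_map):
# Count-based resources are indexed by number, so look up the node's position in var.nodes
virtual_machine_id = azurerm_linux_virtual_machine.vms[index(var.nodes, element(split("_", each.key), 1))].id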

Creating varying number of VM's with unique NIC's using for_each

I'm trying to create a varying number of VMs with different configurations. I'm setting for_each on the azurerm_windows_virtual_machine resource and looping through a map set in a tfvars file. The variable is declared in the module but defined in the tfvars file.
I want to be able to create x number of VMs, each with a unique NIC attached. I am able to create the VMs, but provisioning fails because the NIC is not unique. I have tried adding for_each using the same variable, but I get the following errors:
Error: Incorrect attribute value type
on ../modules/compute/windows_vm/windows_vm.tf line 96, in resource "azurerm_network_interface_application_security_group_association" "application_security_group_association":
96: network_interface_id = [azurerm_network_interface.network_interface[each.key].id]
Inappropriate value for attribute "network_interface_id": string required.
Error: Incorrect attribute value type
on ../modules/compute/windows_vm/windows_vm.tf line 96, in resource "azurerm_network_interface_application_security_group_association" "application_security_group_association":
96: network_interface_id = [azurerm_network_interface.network_interface[each.key].id]
Inappropriate value for attribute "network_interface_id": string required.
This is my code:
# VM Network Interface
resource "azurerm_network_interface" "network_interface" {
for_each = var.servers
name = "nic-${var.environment}-${var.directorate}-${var.business_unit}-${var.vm_identifier}${each.value.name}"
resource_group_name = var.resource_group
location = var.location
enable_ip_forwarding = "false"
enable_accelerated_networking = "false"
ip_configuration {
name = "ipconfig1"
subnet_id = data.azurerm_subnet.subnet.id
private_ip_address_allocation = "Dynamic"
primary = "true"
}
}
# Application Security Group
resource "azurerm_application_security_group" "application_security_group" {
name = "asg-${var.environment}-${var.directorate}-${var.business_unit}-${var.vm_identifier}"
resource_group_name = var.resource_group
location = var.location
}
resource "azurerm_network_interface_application_security_group_association" "application_security_group_association" {
for_each = var.servers
network_interface_id = [azurerm_network_interface.network_interface[each.key].id]
application_security_group_id = azurerm_application_security_group.application_security_group.id
}
resource "azurerm_network_interface_security_group_association" "network_security_group_association" {
for_each = var.servers
network_interface_id = [azurerm_network_interface.network_interface[each.key].id]
network_security_group_id = azurerm_network_security_group.network_security_group.id
}
# Azure Virtual Machine
resource "azurerm_windows_virtual_machine" "virtual_machine" {
for_each = var.servers
name = "vm-${var.environment}-${var.vm_identifier}${each.value.name}"
location = var.location
resource_group_name = var.resource_group
zone = each.value.zone
size = var.vm_size
network_interface_ids = [azurerm_network_interface.network_interface[each.key].id]
computer_name = "${var.vm_identifier}${each.value.name}"
admin_username = xxxx
admin_password = xxxx
provision_vm_agent = "true"
source_image_id = data.azurerm_shared_image.shared_image.id
boot_diagnostics {
storage_account_uri = data.azurerm_storage_account.diag_storage_account.primary_blob_endpoint
}
os_disk {
name = "vm-${var.environment}-${var.directorate}-${var.business_unit}-${var.vm_identifier}-os${each.value.name}"
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
}
depends_on = [azurerm_network_interface.network_interface]
}
Variable within the root module that is used in the for_each and set in module variables.tf:
variable "servers" {
description = "Variable for defining each instance"
}
Variables in the module that map to each tfvars per environment:
variable "desktop_servers" {
description = "Variable for defining each instance"
}
variable "db_servers" {
description = "Variable for defining each instance"
}
The above are then defined in the tfvars as below:
desktop_servers = {
"Server_1" = {
name = 1,
zone = 1
}
"Server_2" = {
name = 2,
zone = 2
}
"Server_3" = {
name = 3,
zone = 3
}
}
db_servers = {
"Server_1" = {
name = 1,
zone = 1
}
}
You are assigning a list to network_interface_id; however, it should be a string.
Thus, instead of:
network_interface_id = [azurerm_network_interface.network_interface[each.key].id]
it should be
network_interface_id = azurerm_network_interface.network_interface[each.key].id
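Applied to both association resources, the corrected blocks would look like this (a sketch based on your existing code):
resource "azurerm_network_interface_application_security_group_association" "application_security_group_association" {
  for_each                      = var.servers
  network_interface_id          = azurerm_network_interface.network_interface[each.key].id
  application_security_group_id = azurerm_application_security_group.application_security_group.id
}
resource "azurerm_network_interface_security_group_association" "network_security_group_association" {
  for_each                  = var.servers
  network_interface_id      = azurerm_network_interface.network_interface[each.key].id
  network_security_group_id = azurerm_network_security_group.network_security_group.id
}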

Terraform referencing output from another module with for_each

I am having trouble referencing an output from one module in another module. The resources in the first module were deployed using for_each, and the resources in the second module are trying to reference the resources from the first module.
There are 2 modules created
Security Group
VM
The intention is to assign the Security Group to the VM
The following is the module for the Security Group
variable "configserver" {
type = map(object({
name = string
location = string
subnet = string
availability_zone = string
vm_size = string
hdd_size = string
}))
}
module "configserver_nsg" {
for_each = var.configserver
source = "../../../terraform/modules/azure-network-security-group"
resource_group_name = var.resource_group_name
tags = var.tags
location = each.value.location
nsg_name = "${each.value.name}-nsg"
security_rules = [
{
name = "Office",
priority = "100"
direction = "Inbound"
access = "Allow"
protocol = "TCP"
source_port_range = "*"
destination_port_ranges = [
"22"]
source_address_prefix = "192.168.1.100"
destination_address_prefixes = [
module.configserver_vm[each.key].private_ip
]
},
{
name = "Deny-All-Others"
priority = 4096
direction = "Inbound"
access = "Deny"
protocol = "*"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
]
}
// Value
configserver = {
config1 = {
name = "config1"
location = "eastus"
subnet = "services"
availability_zone = 1
vm_size = "Standard_F2s_v2"
hdd_size = 30
}
}
The security group module source has an output file which outputs the id of the nsg
output "nsg_id" {
description = "The ID of the newly created Network Security Group"
value = azurerm_network_security_group.nsg.id
}
Generally, if there isn't a for_each, I could access the nsg_id like this
module.configserver_nsg.id
So far this is good; the issue now is that I am not able to access the nsg_id from another module.
module "configserver_vm" {
for_each = var.configserver
source = "../../../terraform/modules/azure-linux-vm"
resource_group = module.resource_group.name
ssh_public_key = var.ssh_public_key
tags = var.tags
vm_name = each.value.name
location = each.value.location
subnet_id = each.value.subnet
availability-zones = each.value.availability_zone
vm_size = each.value.vm_size
hdd-size = each.value.hdd_size
nsg_id = module.configserver_nsg[each.key].nsg_id
}
Based on my research, a number of posts (here, here, here) say I should be able to loop through the map using each.key; however,
nsg_id = module.configserver_nsg[each.key].nsg_id
this produces the error
Error: Cycle: module.configserver_nsg (close), module.configserver_vm.var.nsg_id (expand), module.configserver_vm.azurerm_network_interface_security_group_association.this, module.configserver_vm (close), module.configserver_nsg.var.security_rules (expand), module.configserver_nsg.azurerm_network_security_group.nsg, module.configserver_nsg.output.nsg_id (expand)
Is there any other way to reference the value?
As I see it, the first problem is that you reference the NSG id from the module configserver_nsg the wrong way; it should be like this:
nsg_id = module.configserver_nsg[each.value.name].nsg_id
And the second problem has already been pointed out by @Matt: it's a cyclic dependency between the two modules. What creates the cycle is the NSG rule, because the rule needs the VM's private IP address. As far as I know, you cannot resolve a cyclic dependency without restructuring, so I recommend separating that NSG rule from the configserver_nsg module and creating it with a standalone azurerm_network_security_rule resource after the two modules.
Finally, it looks like this:
variable "configserver" {
type = map(object({
name = string
location = string
subnet = string
availability_zone = string
vm_size = string
hdd_size = string
}))
}
module "configserver_nsg" {
for_each = var.configserver
source = "../../../terraform/modules/azure-network-security-group"
resource_group_name = var.resource_group_name
tags = var.tags
location = each.value.location
nsg_name = "${each.value.name}-nsg"
security_rules = [
# The "Office" rule is removed from the module; it is created below as a standalone azurerm_network_security_rule.
{
name = "Deny-All-Others"
priority = 4096
direction = "Inbound"
access = "Deny"
protocol = "*"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
]
}
// Value
configserver = {
config1 = {
name = "config1"
location = "eastus"
subnet = "services"
availability_zone = 1
vm_size = "Standard_F2s_v2"
hdd_size = 30
}
}
module "configserver_vm" {
for_each = var.configserver
source = "../../../terraform/modules/azure-linux-vm"
resource_group = module.resource_group.name
ssh_public_key = var.ssh_public_key
tags = var.tags
vm_name = each.value.name
location = each.value.location
subnet_id = each.value.subnet
availability-zones = each.value.availability_zone
vm_size = each.value.vm_size
hdd-size = each.value.hdd_size
nsg_id = module.configserver_nsg[each.value.name].nsg_id
}
resource "azurerm_network_security_rule" "configserver_nsg" {
for_each = var.configserver
name = "Office",
priority = "100"
direction = "Inbound"
access = "Allow"
protocol = "TCP"
source_port_range = "*"
destination_port_ranges = ["22"]
source_address_prefix = "192.168.1.100"
destination_address_prefixes = [
module.configserver_vm[each.key].private_ip
]
resource_group_name = var.resource_group_name
network_security_group_name = "${each.value.name}-nsg"
}

Incorrect terraform config post 0.12upgrade command

In its simplest form,
main.tf is as below:
data "azurerm_resource_group" "tf-rg-external" {
name = var.rg_name
}
# Reference existing Virtual Network
data "azurerm_virtual_network" "tf-vn" {
name = var.vnet_name
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
}
# Reference existing subnet
data "azurerm_subnet" "tf-sn" {
name = var.subnet_name
virtual_network_name = data.azurerm_virtual_network.tf-vn.name
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
}
resource "azurerm_network_security_group" "tf-nsg" {
name = var.app_nsg
location = data.azurerm_virtual_network.tf-vn.location
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
}
resource "azurerm_network_security_rule" "tf-nsr-5986" {
name = "Open Port 5986"
priority = 101
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "5986"
source_address_prefixes = var.allowed_source_ips
destination_address_prefix = "VirtualNetwork"
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
network_security_group_name = azurerm_network_security_group.tf-nsg.name
}
resource "azurerm_network_security_rule" "tf-nsr-3389" {
name = "Open Port 3389"
priority = 102
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "3389"
source_address_prefixes = var.allowed_source_ips
destination_address_prefix = "VirtualNetwork"
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
network_security_group_name = azurerm_network_security_group.tf-nsg.name
}
# Assosciate NSG to subnet
resource "azurerm_subnet_network_security_group_association" "tf-snnsg" {
subnet_id = data.azurerm_subnet.tf-sn.id
network_security_group_id = azurerm_network_security_group.tf-nsg.id
}
# Network inteface for Interface
resource "azurerm_network_interface" "tf-ni" {
count = var.vm_count
name = "${var.base_hostname}${format("%02d", count.index + 1)}-nic01"
location = data.azurerm_virtual_network.tf-vn.location
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
ip_configuration {
name = "${var.base_hostname}${format("%02d", count.index)}-iip01"
subnet_id = data.azurerm_subnet.tf-sn.id
private_ip_address_allocation = "dynamic"
public_ip_address_id = element(azurerm_public_ip.tf-pip.*.id, count.index)
}
}
resource "azurerm_public_ip" "tf-pip" {
count = var.vm_count
location = data.azurerm_virtual_network.tf-vn.location
name = "${var.base_hostname}${format("%02d", count.index + 1)}-pip01"
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
allocation_method = "Dynamic"
}
# Storage Account
resource "azurerm_storage_account" "tf-sa" {
count = var.vm_count
name = "${lower(var.base_hostname)}${format("%02d", count.index + 1)}${var.sto_acc_suffix}01"
location = data.azurerm_virtual_network.tf-vn.location
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
account_tier = var.sto_acc_tier_std
account_replication_type = var.sto_acc_rep_type_lrs
}
resource "azurerm_virtual_machine" "tf-vm" {
count = var.vm_count
name = "${var.base_hostname}${format("%02d", count.index + 1)}"
location = data.azurerm_virtual_network.tf-vn.location
resource_group_name = data.azurerm_resource_group.tf-rg-external.name
network_interface_ids = [element(azurerm_network_interface.tf-ni.*.id, count.index)]
vm_size = var.vm_size
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
publisher = var.vm_publisher
offer = var.vm_offer
sku = var.vm_sku
version = var.vm_img_version
}
storage_os_disk {
name = "${var.base_hostname}${format("%02d", count.index + 1)}-wosdsk01"
caching = var.caching_option
create_option = var.create_option
managed_disk_type = var.managed_disk_std_lrs
}
os_profile {
computer_name = "${var.base_hostname}${format("%02d", count.index + 1)}"
admin_username = var.username
admin_password = var.password
}
os_profile_windows_config {
enable_automatic_upgrades = false
provision_vm_agent = "true"
}
}
variables.tf is below:
# Declare env variable
variable "rg_name" {
type = string
}
variable "vnet_name" {
type = string
}
variable "subnet_name" {
type = string
}
variable "app_nsg" {
type = string
}
variable "vm_count" {
type = number
}
variable "base_hostname" {
type = string
}
variable "sto_acc_suffix" {
type = string
}
variable "sto_acc_tier_std" {
type = string
default = "Standard"
}
variable "sto_acc_rep_type_lrs" {
type = string
default = "LRS"
}
variable "vm_size" {
type = string
}
variable "vm_publisher" {
type = string
}
variable "vm_offer" {
type = string
}
variable "vm_sku" {
type = string
}
variable "vm_img_version" {
type = string
}
variable "username" {
type = string
}
variable "password" {
type = string
}
variable "caching_option" {
type = string
default = "ReadWrite"
}
variable "create_option" {
type = string
default = "FromImage"
}
variable "managed_disk_std_lrs" {
type = string
default = "Standard_LRS"
}
variable "managed_disk_prem_lrs" {
type = string
default = "Premium_LRS"
}
variable "allowed_source_ips" {
description = "List of ips from which inbound connection to VMs is allowed"
type = list(string)
}
I run the command below to upgrade the Terraform config to 0.12-and-above syntax:
terraform 0.12upgrade
Error:
Error: Syntax error in configuration file
on main.tf line 22, in data "azurerm_resource_group" "tf-rg-external":
22: name = var.rg_name
Error while parsing: At 22:10: Unknown token: 22:10 IDENT var.rg_name
Error: Syntax error in configuration file
on variable.tf line 3, in variable "rg_name":
3: type = string
Error while parsing: At 3:10: Unknown token: 3:10 IDENT string
Any idea what the problem is? This works if I don't run the terraform 0.12upgrade command, and it intrigues me why it is not working. I ran the same upgrade command on another Terraform config and got a similar error there.
One observation: the IDENT error appears for the first variable reference in both main.tf and variable.tf. I'm unable to correlate this error.
You're already using 0.12 syntax; the 0.12upgrade command expects to find 0.11 syntax and will attempt to update it automatically.
e.g. name = var.rg_name - note the lack of ${}
See https://www.terraform.io/docs/commands/0.12upgrade.html
The terraform 0.12upgrade command applies several automatic upgrade rules to help prepare a module that was written for Terraform v0.11 to be used with Terraform v0.12.
(Emphasis mine)
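For illustration, the same argument in the two syntaxes; the upgrade tool only has work to do on the first form:
# Terraform 0.11-style interpolation, which 0.12upgrade expects to rewrite
name = "${var.rg_name}"
# Terraform 0.12-style expression, which this config already uses
name = var.rg_name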
