I am working on provisioning a new Azure VM using Terraform, with two NICs attached to it. I am getting the error below:
azurerm_virtual_machine.vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="VirtualMachineMustHaveOneNetworkInterfaceAsPrimary" Message="Virtual machine AZLXSPTOPTFWTEST must have one network interface set as the primary." Details=[]
I referred to https://www.terraform.io/docs/providers/azurerm/r/network_interface.html for creating the NICs. This is my Terraform code:
resource "azurerm_resource_group" "main" {
  name     = "RG-EASTUS-FW-TEST"
  location = "eastus"
}

# Create a virtual network
resource "azurerm_virtual_network" "privatenetwork" {
  name                = "VNET-EASTUS-FWTEST"
  address_space       = ["10.100.0.0/16"]
  location            = "${azurerm_resource_group.main.location}"
  resource_group_name = "${azurerm_resource_group.main.name}"
}

# Create a subnet for the external network
resource "azurerm_subnet" "external" {
  name                 = "SNET-FWTEST-OUT"
  virtual_network_name = "${azurerm_virtual_network.privatenetwork.name}"
  resource_group_name  = "${azurerm_resource_group.main.name}"
  address_prefix       = "10.100.10.0/24"
}

# Create a public IP address
resource "azurerm_public_ip" "public" {
  name                = "PFTEST-PUBLIC"
  location            = "${azurerm_resource_group.main.location}"
  resource_group_name = "${azurerm_resource_group.main.name}"
  allocation_method   = "Static"
}

# Create a subnet within the virtual network
resource "azurerm_subnet" "internal" {
  name                 = "SNET-FWTEST-IN"
  virtual_network_name = "${azurerm_virtual_network.privatenetwork.name}"
  resource_group_name  = "${azurerm_resource_group.main.name}"
  address_prefix       = "10.100.11.0/24"
}

resource "azurerm_network_interface" "OUT" {
  name                = "NIC-FWTEST-OUT"
  location            = "${azurerm_resource_group.main.location}"
  resource_group_name = "${azurerm_resource_group.main.name}"
  # network_security_group_id = "${azurerm_network_interface.main.id}"
  primary             = "true"

  ip_configuration {
    name                          = "OUT"
    subnet_id                     = "${azurerm_subnet.external.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.100.10.5"
    public_ip_address_id          = "${azurerm_public_ip.public.id}"
  }
}
# Create a network interface for the VMs and attach the PIP and the NSG
resource "azurerm_network_interface" "main" {
  name                = "NIC-FWTEST-IN"
  location            = "${azurerm_resource_group.main.location}"
  resource_group_name = "${azurerm_resource_group.main.name}"
  # network_security_group_id = "${azurerm_network_security_group.main.id}"

  ip_configuration {
    name                          = "IN"
    subnet_id                     = "${azurerm_subnet.internal.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.100.11.5"
  }
}
# Create a new Virtual Machine based on the Golden Image
resource "azurerm_virtual_machine" "vm" {
  name                             = "AZLXSPTOPTFWTEST"
  location                         = "${azurerm_resource_group.main.location}"
  resource_group_name              = "${azurerm_resource_group.main.name}"
  network_interface_ids            = ["${azurerm_network_interface.OUT.id}", "${azurerm_network_interface.main.id}"]
  vm_size                          = "Standard_DS12_v2"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true
  # (rest of the VM configuration omitted)
}
I am expecting the VM to be provisioned with two NICs. Thanks in advance.
I found the solution: we need to set primary_network_interface_id on the virtual machine in addition to listing that NIC in network_interface_ids, and we also need to mark one ip_configuration as primary when creating the network interface.
Please refer to the code below.
ip_configuration {
  name                          = "OUT"
  subnet_id                     = "${azurerm_subnet.external.id}"
  primary                       = true
  private_ip_address_allocation = "static"
  private_ip_address            = "10.100.10.5"
  public_ip_address_id          = "${azurerm_public_ip.public.id}"
}
resource "azurerm_virtual_machine" "vm" {
  name                             = "AZLXSPTOPTFWTEST"
  location                         = "${azurerm_resource_group.main.location}"
  resource_group_name              = "${azurerm_resource_group.main.name}"
  network_interface_ids            = ["${azurerm_network_interface.main.id}", "${azurerm_network_interface.OUT.id}"]
  primary_network_interface_id     = "${azurerm_network_interface.OUT.id}"
  vm_size                          = "Standard_DS12_v2"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true
  # (rest of the VM configuration unchanged)
}
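Putting it together, the corrected OUT NIC would look like the sketch below (names and addresses taken from the question's code; note that the primary flag belongs inside ip_configuration rather than at the top level of the NIC):

resource "azurerm_network_interface" "OUT" {
  name                = "NIC-FWTEST-OUT"
  location            = "${azurerm_resource_group.main.location}"
  resource_group_name = "${azurerm_resource_group.main.name}"

  ip_configuration {
    name                          = "OUT"
    subnet_id                     = "${azurerm_subnet.external.id}"
    primary                       = true
    private_ip_address_allocation = "static"
    private_ip_address            = "10.100.10.5"
    public_ip_address_id          = "${azurerm_public_ip.public.id}"
  }
}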
How can we reference a subnet in a virtual network gateway?
Subnet
resource "azurerm_virtual_network" "virtual_network" {
  name                = "vNetVPN-Dev"
  location            = var.resource_group_location_north_europe
  resource_group_name = var.resource_group_name
  address_space       = ["10.1.16.0/23", "10.2.0.0/16", "172.16.100.0/24"]

  subnet {
    name           = "snet-vpg-dev"
    address_prefix = "10.2.1.0/24"
  }

  tags = {
    environment = var.tag_dev
  }
}
Virtual network gateway
resource "azurerm_virtual_network_gateway" "virtual_network_gateway" {
  name                = "vgw-vgp-dev"
  location            = var.resource_group_location_north_europe
  resource_group_name = var.resource_group_name
  type                = "Vpn"
  vpn_type            = "RouteBased"
  active_active       = false
  enable_bgp          = false
  sku                 = "Basic"

  ip_configuration {
    name                          = azurerm_public_ip.public_ip_address.name
    public_ip_address_id          = azurerm_public_ip.public_ip_address.id
    private_ip_address_allocation = "Static"
    subnet_id                     = **here I want to call my subnet which is defined in the code above**
  }
}
As you can see, there are two code blocks: one for the subnet and the other for the virtual network gateway.
I want to reference the subnet (snet-vpg-dev) in the virtual network gateway as the value of the subnet_id parameter.
To get the ID of the subnet, you can take the subnet exported attribute of the vnet, convert it to a list, and take the first element, like this:
ip_configuration {
  name                          = azurerm_public_ip.public_ip_address.name
  public_ip_address_id          = azurerm_public_ip.public_ip_address.id
  private_ip_address_allocation = "Static"
  subnet_id                     = tolist(azurerm_virtual_network.virtual_network.subnet)[0].id
}
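If the vnet ever defines more than one inline subnet block, indexing by position is fragile. A hedged alternative on Terraform 0.15+ (same resource and subnet names as above) is to select the subnet by name with a for expression:

# Pick the inline subnet whose name matches; one() fails if zero or several match
subnet_id = one([for s in azurerm_virtual_network.virtual_network.subnet : s.id if s.name == "snet-vpg-dev"])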
Another solution is to use the azurerm_subnet resource rather than the inline subnet blocks:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet
You can directly retrieve the ID of the subnet, since it has a dedicated resource. The template would be something like:
Subnet
resource "azurerm_virtual_network" "virtual_network" {
  name                = "vNetVPN-Dev"
  location            = var.resource_group_location_north_europe
  resource_group_name = var.resource_group_name
  address_space       = ["10.1.16.0/23", "10.2.0.0/16", "172.16.100.0/24"]

  tags = {
    environment = var.tag_dev
  }
}

resource "azurerm_subnet" "subnet" {
  name                 = "snet-vpg-dev"
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.2.1.0/24"]
}
Virtual network gateway
resource "azurerm_virtual_network_gateway" "virtual_network_gateway" {
  name                = "vgw-vgp-dev"
  location            = var.resource_group_location_north_europe
  resource_group_name = var.resource_group_name
  type                = "Vpn"
  vpn_type            = "RouteBased"
  active_active       = false
  enable_bgp          = false
  sku                 = "Basic"

  ip_configuration {
    name                          = azurerm_public_ip.public_ip_address.name
    public_ip_address_id          = azurerm_public_ip.public_ip_address.id
    private_ip_address_allocation = "Static"
    subnet_id                     = azurerm_subnet.subnet.id
  }
}
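One caveat worth adding (not part of the original answer): Azure requires the subnet that hosts a virtual network gateway to be named exactly GatewaySubnet, so for the gateway deployment itself to succeed, the dedicated subnet resource would need:

resource "azurerm_subnet" "subnet" {
  name                 = "GatewaySubnet" # required name for a VNet gateway's subnet
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.2.1.0/24"]
}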
I am trying to create resources in Azure using Terraform: a SQL Server database and also a virtual machine. I get the error:
│ Error: creating Subnet: (Name "db_subnetn" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'db_subnetn' is not valid in virtual network 'tf_dev-network'." Details=[]
What have I done?
I followed the link here: Error while provisioning Terraform subnet using azurerm.
I deleted other network resources using the same IP range.
My network understanding is pretty basic, but from my research it appears that 10.0.0.0/16 is quite a large IP range and can lead to overlaps. So I changed the virtual network IP range from 10.0.0.0/16 to 10.0.1.0/24 to restrict the range, and the error simply changed to:
│ Error: creating Subnet: (Name "internal" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'internal' is not valid in virtual network 'tf_dev-network'." Details=[]
At this stage, I would be grateful if someone could explain what is going wrong here and what needs to be done. Thanks in advance.
My files are as follows.
dbcode.tf
resource "azurerm_sql_server" "sqlserver" {
  name                         = "tom556sqlserver"
  resource_group_name          = azurerm_resource_group.resource_gp.name
  location                     = azurerm_resource_group.resource_gp.location
  version                      = "12.0"
  administrator_login          = "khdfd9898rerer"
  administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"

  tags = {
    environment = "production"
  }
}

resource "azurerm_sql_virtual_network_rule" "sqlvnetrule" {
  name                = "sql_vnet_rule"
  resource_group_name = azurerm_resource_group.resource_gp.name
  server_name         = azurerm_sql_server.sqlserver.name
  subnet_id           = azurerm_subnet.db_subnet.id
}

resource "azurerm_subnet" "db_subnet" {
  name                 = "db_subnetn"
  resource_group_name  = azurerm_resource_group.resource_gp.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
  service_endpoints    = ["Microsoft.Sql"]
}
main.tf
resource "azurerm_resource_group" "resource_gp" {
  name     = "terraform_youtube"
  location = "UK South"

  tags = {
    "owner"   = "Rahman"
    "purpose" = "Practice terraform"
  }
}

variable "prefix" {
  default = "tf_dev"
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.resource_gp.location
  resource_group_name = azurerm_resource_group.resource_gp.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.resource_gp.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "main" {
  name                = "${var.prefix}-nic"
  location            = azurerm_resource_group.resource_gp.location
  resource_group_name = azurerm_resource_group.resource_gp.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "main" {
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.resource_gp.location
  resource_group_name   = azurerm_resource_group.resource_gp.name
  network_interface_ids = [azurerm_network_interface.main.id]
  vm_size               = "Standard_B1ls"

  # Uncomment this line to delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  # Uncomment this line to delete the data disks automatically when deleting the VM
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags = {
    environment = "staging"
  }
}
I tested your code in my environment and got the same error.
To fix the issue, change address_prefixes for db_subnet to ["10.0.3.0/24"]: the ["10.0.2.0/24"] range is already used by the internal subnet in your main.tf, and two subnets in the same virtual network cannot overlap. (Your second error has the same root cause from the other direction: after you shrank the vnet to 10.0.1.0/24, the internal subnet's 10.0.2.0/24 no longer fit inside the vnet's address space at all.) Also check the updated azurerm_mssql_* resources for sqlvnetrule and make the following changes in your dbcode.tf file.
resource "azurerm_mssql_server" "sqlserver" {
  name                         = "tom556sqlserver"
  resource_group_name          = azurerm_resource_group.resource_gp.name
  location                     = azurerm_resource_group.resource_gp.location
  version                      = "12.0"
  administrator_login          = "khdfd9898rerer"
  administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"

  tags = {
    environment = "production"
  }
}

resource "azurerm_subnet" "db_subnet" {
  name                 = "db_subnetn"
  resource_group_name  = azurerm_resource_group.resource_gp.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.3.0/24"]
  service_endpoints    = ["Microsoft.Sql"]
}

resource "azurerm_mssql_virtual_network_rule" "sqlvnetrule" {
  name = "sql_vnet_rule"
  # resource_group_name = azurerm_resource_group.resource_gp.name
  # server_name         = azurerm_sql_server.sqlserver.name
  server_id = azurerm_mssql_server.sqlserver.id
  subnet_id = azurerm_subnet.db_subnet.id
}
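As an aside, one way to avoid hand-picking overlapping ranges (a sketch, not part of the original fix) is to derive the subnet prefixes from the vnet's address space with Terraform's cidrsubnet() function; distinct netnums guarantee non-overlapping subnets:

# 10.0.0.0/16 carved into /24s: netnum 2 -> 10.0.2.0/24, netnum 3 -> 10.0.3.0/24
resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.resource_gp.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = [cidrsubnet(azurerm_virtual_network.main.address_space[0], 8, 2)]
}

resource "azurerm_subnet" "db_subnet" {
  name                 = "db_subnetn"
  resource_group_name  = azurerm_resource_group.resource_gp.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = [cidrsubnet(azurerm_virtual_network.main.address_space[0], 8, 3)]
  service_endpoints    = ["Microsoft.Sql"]
}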
I want to create two VMs with Terraform in Azure. I have configured two azurerm_network_interface resources, but when I try to apply the changes, I receive an error. Do you have any idea? Is there any issue if I try to create them in different regions?
The error is something like: vm2-nic was not found (azurerm_network_interface).
# Configure the Azure Provider
provider "azurerm" {
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
  version         = "=2.10.0"
  features {}
}

resource "azurerm_virtual_network" "main" {
  name                = "north-network"
  address_space       = ["10.0.0.0/16"]
  location            = "North Europe"
  resource_group_name = var.azurerm_resource_group_name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = var.azurerm_resource_group_name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "example" {
  name                    = "test-pip"
  location                = "North Europe"
  resource_group_name     = var.azurerm_resource_group_name
  allocation_method       = "Static"
  idle_timeout_in_minutes = 30

  tags = {
    environment = "dev01"
  }
}

resource "azurerm_network_interface" "main" {
  for_each            = var.locations
  name                = "${each.key}-nic"
  location            = "${each.value}"
  resource_group_name = var.azurerm_resource_group_name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.example.id
  }
}

resource "azurerm_virtual_machine" "main" {
  for_each              = var.locations
  name                  = "${each.key}t-vm"
  location              = "${each.value}"
  resource_group_name   = var.azurerm_resource_group_name
  network_interface_ids = [azurerm_network_interface.main[each.key].id]
  vm_size               = "Standard_D2s_v3"
  ...
Error:
Error: Error creating Network Interface "vm2-nic" (Resource Group "candidate-d7f5a2-rg"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxxxxx/resourceGroups/xxxx/providers/Microsoft.Network/virtualNetworks/north-network/subnets/internal referenced by resource /subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.Network/networkInterfaces/vm2-nic was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
on environment.tf line 47, in resource "azurerm_network_interface" "main":
47: resource "azurerm_network_interface" "main" {
According to the documentation, each NIC attached to a VM must exist in the same location (region) and subscription as the VM: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/network-overview.
If you re-create the NIC in the same location as the VM, or create the VM in the same location as the NIC, that will likely solve your problem.
Since you use for_each in the "azurerm_network_interface" resource, it creates two NICs, each in location = "${each.value}", while the subnet and VNet have the fixed region "North Europe". You need to create the NICs and the other VM-related resources, such as subnets, in the same region. You could change the code like this:
resource "azurerm_resource_group" "test" {
  name     = "myrg"
  location = "West US"
}

variable "locations" {
  type = map(string)
  default = {
    vm1 = "North Europe"
    vm2 = "West Europe"
  }
}

resource "azurerm_virtual_network" "main" {
  for_each            = var.locations
  name                = "${each.key}-network"
  address_space       = ["10.0.0.0/16"]
  location            = "${each.value}"
  resource_group_name = azurerm_resource_group.test.name
}

resource "azurerm_subnet" "internal" {
  for_each             = var.locations
  name                 = "${each.key}-subnet"
  resource_group_name  = azurerm_resource_group.test.name
  virtual_network_name = azurerm_virtual_network.main[each.key].name
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "example" {
  for_each                = var.locations
  name                    = "${each.key}-pip"
  location                = "${each.value}"
  resource_group_name     = azurerm_resource_group.test.name
  allocation_method       = "Static"
  idle_timeout_in_minutes = 30
}

resource "azurerm_network_interface" "main" {
  for_each            = var.locations
  name                = "${each.key}-nic"
  location            = "${each.value}"
  resource_group_name = azurerm_resource_group.test.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal[each.key].id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.example[each.key].id
  }
}
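The VM resource then references the per-region NIC by the same key, as in the question; a sketch with the non-essential VM settings omitted:

resource "azurerm_virtual_machine" "main" {
  for_each              = var.locations
  name                  = "${each.key}-vm"
  location              = "${each.value}"
  resource_group_name   = azurerm_resource_group.test.name
  network_interface_ids = [azurerm_network_interface.main[each.key].id]
  vm_size               = "Standard_D2s_v3"
  # ... storage_image_reference, os_profile, etc. as in the question
}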
I have some code which creates multiple VMs. I'm trying to dynamically add them to the load balancer's backend address pool, but I'm met with the following error, and I have no idea what it means. Any help will be appreciated, as the error appears to be somewhat obscure.
Error: Error: IP Configuration "azure_network_interface_address_pool_association" was not found on Network Interface "client_host_nic-0" (Resource Group "client_rg")
on vm2.tf line 99, in resource "azurerm_network_interface_backend_address_pool_association" "network_interface_backend_address_pool_association":
99: resource "azurerm_network_interface_backend_address_pool_association" "network_interface_backend_address_pool_association" {
The vm2.tf file includes:
# Create the virtual machine NICs
resource "azurerm_network_interface" "client_nics" {
  count               = var.node_count
  name                = "client_host_nic-${count.index}"
  location            = var.resource_group_location
  resource_group_name = module.network.azurerm_resource_group_client_name
  # network_security_group_id = module.network.bastion_host_network_security_group

  ip_configuration {
    name                          = "client_host_nic"
    subnet_id                     = module.network.client_subnet_id
    private_ip_address_allocation = "Dynamic"
    # public_ip_address_id = module.network.bastion_host_puplic_ip_address # optional field; we have a bastion host, so no public IP is needed, and it is vnet-peered, which adds an extra layer of security in a way
  }

  tags = {
    environment = "Production"
  }
}
# Generate random text for a unique storage account name
resource "random_id" "randomId_Generator" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = var.resource_group_location
  }
  byte_length = 8
}

# Create a storage account for boot diagnostics
resource "azurerm_storage_account" "client_storageaccount" {
  name                     = "diag${random_id.randomId_Generator.hex}"
  resource_group_name      = module.network.azurerm_resource_group_client_name
  location                 = var.resource_group_location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "Production"
  }
}
resource "azurerm_virtual_machine" "node" {
  count                 = var.node_count
  name                  = "client-host-${count.index}"
  location              = var.resource_group_location
  resource_group_name   = module.network.azurerm_resource_group_client_name
  network_interface_ids = ["${element(azurerm_network_interface.client_nics.*.id, count.index)}"]

  # Uncomment this line to delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  # Uncomment this line to delete the data disks automatically when deleting the VM
  delete_data_disks_on_termination = true

  # 1 vCPU, 3.5 GB of RAM
  vm_size = var.machine_type

  storage_os_disk {
    name              = "myOsDisk-${count.index}"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "Production"
    admin_username = "azureuser"
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/azureuser/.ssh/authorized_keys" # This cannot be changed, as mentioned in https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html
      key_data = file("~/.ssh/client.pub")
    }
  }

  boot_diagnostics {
    enabled     = "true"
    storage_uri = azurerm_storage_account.client_storageaccount.primary_blob_endpoint
  }

  tags = {
    environment = "Production"
  }
}
resource "azurerm_network_interface_backend_address_pool_association" "network_interface_backend_address_pool_association" {
  count                   = var.node_count
  network_interface_id    = element(azurerm_network_interface.client_nics.*.id, count.index) # fixes interpolation issues
  ip_configuration_name   = "azure_network_interface_address_pool_association"
  backend_address_pool_id = module.loadbalancer.azure_backend_pool_id
}
The load balancer module's main.tf file:
# Remember when using this module to call the network module for the resource group name
############## Load balancer section ##############
resource "azurerm_public_ip" "azure_load_balancer_IP" {
  name                = "azure_load_balancer_IP"
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name
  allocation_method   = "Static"
}

resource "azurerm_lb" "azure_load_balancer" {
  name                = "TestLoadBalancer"
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name

  frontend_ip_configuration {
    name                 = "front_end_IP_configuration_for_azure_load_balancer"
    public_ip_address_id = azurerm_public_ip.azure_load_balancer_IP.id
  }
}

resource "azurerm_lb_backend_address_pool" "backend_address_pool" {
  resource_group_name = var.resource_group_name
  loadbalancer_id     = azurerm_lb.azure_load_balancer.id
  name                = "BackEndAddressPool"
}

resource "azurerm_lb_rule" "azure_lb_rule" {
  resource_group_name            = var.resource_group_name
  loadbalancer_id                = azurerm_lb.azure_load_balancer.id
  name                           = "LBRule"
  protocol                       = "Tcp"
  frontend_port                  = 80
  backend_port                   = 80
  frontend_ip_configuration_name = "front_end_IP_configuration_for_azure_load_balancer"
}
output.tf
output "azure_load_balancer_ip" {
  value = azurerm_public_ip.azure_load_balancer_IP.id
}

output "azure_backend_pool_id" {
  value = azurerm_lb_backend_address_pool.backend_address_pool.id
}
Additional information:
* provider.azurerm: version = "~> 2.1"
main.tf
module "loadbalancer" {
  source                  = "./azure_load_balancer_module" # this may need to be a different git repo, as we are not referencing branches here, only the master
  resource_group_name     = module.network.azurerm_resource_group_client_name
  resource_group_location = var.resource_group_location
}
The error means the ip_configuration name you set for the network interface is not the same as the one you set for the azurerm_network_interface_backend_address_pool_association resource. You can take a look at the description of ip_configuration_name here. And as I see it, you want to associate multiple interfaces with the load balancer.
So I recommend you change the network interface and the association like this:
resource "azurerm_network_interface" "client_nics" {
  count               = var.node_count
  name                = "client_host_nic-${count.index}"
  location            = var.resource_group_location
  resource_group_name = module.network.azurerm_resource_group_client_name
  # network_security_group_id = module.network.bastion_host_network_security_group

  ip_configuration {
    name                          = "client_host_nic-${count.index}"
    subnet_id                     = module.network.client_subnet_id
    private_ip_address_allocation = "Dynamic"
    # public_ip_address_id = module.network.bastion_host_puplic_ip_address # optional field; bastion host in use, so no public IP is needed
  }

  tags = {
    environment = "Production"
  }
}

resource "azurerm_network_interface_backend_address_pool_association" "network_interface_backend_address_pool_association" {
  count                   = var.node_count
  network_interface_id    = element(azurerm_network_interface.client_nics.*.id, count.index) # fixes interpolation issues
  ip_configuration_name   = "client_host_nic-${count.index}"
  backend_address_pool_id = module.loadbalancer.azure_backend_pool_id
}
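To double-check which names actually exist before wiring up the association, a small debugging sketch (assuming the same resource names as above) is to output each NIC's ip_configuration name:

output "nic_ip_configuration_names" {
  # Each name here must match ip_configuration_name in the pool association
  value = [for nic in azurerm_network_interface.client_nics : nic.ip_configuration[0].name]
}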
I believe this was a bug in the azurerm 2.1 Terraform provider; it appears to have been fixed in version 2.33, according to the upstream issue: https://github.com/terraform-providers/terraform-provider-azurerm/issues/3794
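If you hit this on the older provider, a minimal sketch of the workaround is to pin past the affected release (using the pre-3.x in-provider version syntax that the code above already uses):

provider "azurerm" {
  version  = "~> 2.33" # the upstream issue reports the fix landing in 2.33
  features {}
}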
I was able to create two Linux VMs in an azurerm_availability_set and would now like to attach these VMs to an azurerm_lb_backend_address_pool, but other than the options listed below in my code, I don't see an availability set option, even though I do see one in the Azure portal. Not sure if I'm doing something wrong here.
Please review the code below and let me know where I can add the availability set option so that I can attach the two VMs.
resource "azurerm_lb_backend_address_pool" "backend_pool" {
  resource_group_name = "${azurerm_resource_group.test.name}"
  loadbalancer_id     = "${azurerm_lb.lb.id}"
  name                = "webBackendPool"
}
Assigning VMs to the load balancer backend pool actually means assigning the network interfaces of the VMs to the backend pool, so you can use the azurerm_network_interface_backend_address_pool_association resource to bind the NICs of the VMs to a backend pool.
For example:
...
resource "azurerm_network_interface" "test" {
  name                = "${var.prefix}-nic"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = "${azurerm_subnet.internal.id}"
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = "${azurerm_public_ip.test.id}"
  }
}
...
resource "azurerm_network_interface_backend_address_pool_association" "test" {
  network_interface_id    = "${azurerm_network_interface.test.id}"
  ip_configuration_name   = "testconfiguration1"
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.backend_pool.id}"
}

resource "azurerm_lb" "lb" {
  name                = "weblb"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "${azurerm_resource_group.test.location}"
  sku                 = "${var.lb_sku}"

  frontend_ip_configuration {
    name                          = "${var.frontend_name}"
    subnet_id                     = "${azurerm_subnet.frontend.id}"
    private_ip_address            = "10.0.1.10"
    private_ip_address_allocation = "Static"
  }
}

resource "azurerm_lb_backend_address_pool" "backend_pool" {
  resource_group_name = "${azurerm_resource_group.test.name}"
  loadbalancer_id     = "${azurerm_lb.lb.id}"
  name                = "webBackendPool"
}
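Since you have two VMs in the availability set, a hedged sketch of associating both NICs at once (this assumes your NICs are created with count = 2 under the name azurerm_network_interface.test, which may differ from your actual code):

resource "azurerm_network_interface_backend_address_pool_association" "test" {
  count                   = 2
  network_interface_id    = "${element(azurerm_network_interface.test.*.id, count.index)}"
  ip_configuration_name   = "testconfiguration1"
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.backend_pool.id}"
}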