Create multiple VMs in Azure with different configurations - azure

I want to create two VMs with Terraform in Azure. I have configured two "azurerm_network_interface" resources, but when I try to apply the changes, I receive an error. Do you have any idea why? Is there any issue if I try to create them in different regions?
The error is something like: vm2-nic was not found (azurerm_network_interface).
# Configure the Azure Provider
provider "azurerm" {
subscription_id = var.subscription_id
tenant_id = var.tenant_id
version = "=2.10.0"
features {}
}
resource "azurerm_virtual_network" "main" {
name = "north-network"
address_space = ["10.0.0.0/16"]
location = "North Europe"
resource_group_name = var.azurerm_resource_group_name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = var.azurerm_resource_group_name
virtual_network_name = azurerm_virtual_network.main.name
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "example" {
name = "test-pip"
location = "North Europe"
resource_group_name = var.azurerm_resource_group_name
allocation_method = "Static"
idle_timeout_in_minutes = 30
tags = {
environment = "dev01"
}
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = "${each.value}"
resource_group_name = var.azurerm_resource_group_name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example.id
}
}
resource "azurerm_virtual_machine" "main" {
for_each = var.locations
name = "${each.key}t-vm"
location = "${each.value}"
resource_group_name = var.azurerm_resource_group_name
network_interface_ids = [azurerm_network_interface.main[each.key].id]
vm_size = "Standard_D2s_v3"
...
Error:
Error: Error creating Network Interface "vm2-nic" (Resource Group "candidate-d7f5a2-rg"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxxxxx/resourceGroups/xxxx/providers/Microsoft.Network/virtualNetworks/north-network/subnets/internal referenced by resource /subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.Network/networkInterfaces/vm2-nic was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
on environment.tf line 47, in resource "azurerm_network_interface" "main":
47: resource "azurerm_network_interface" "main" {

According to the documentation, each NIC attached to a VM must exist in the same location (region) and subscription as the VM: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/network-overview.
Re-creating the NIC in the same location as the VM, or creating the VM in the same location as the NIC, will likely solve your problem.

Since you use for_each in the "azurerm_network_interface" resource, it creates two NICs with location = each.value, while the subnet and VNet have the fixed region "North Europe". You need to create the NICs and the other VM-related resources, such as subnets, in the same region. You could change the code like this:
resource "azurerm_resource_group" "test" {
name = "myrg"
location = "West US"
}
variable "locations" {
type = map(string)
default = {
vm1 = "North Europe"
vm2 = "West Europe"
}
}
resource "azurerm_virtual_network" "main" {
for_each = var.locations
name = "${each.key}-network"
address_space = ["10.0.0.0/16"]
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
}
resource "azurerm_subnet" "internal" {
for_each = var.locations
name = "${each.key}-subnet"
resource_group_name = azurerm_resource_group.test.name
virtual_network_name = azurerm_virtual_network.main[each.key].name
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "example" {
for_each = var.locations
name = "${each.key}-pip"
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
allocation_method = "Static"
idle_timeout_in_minutes = 30
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal[each.key].id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example[each.key].id
}
}
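For completeness, the VM block can then index the per-location NIC by the same key, so each VM and its NIC end up in the same region. Here is a minimal sketch; the image, disk, and credential values below are placeholders, not taken from the question:
resource "azurerm_virtual_machine" "main" {
  for_each              = var.locations
  name                  = "${each.key}-vm"
  location              = each.value
  resource_group_name   = azurerm_resource_group.test.name
  # Reference the NIC created for the same key so the VM and NIC share a region
  network_interface_ids = [azurerm_network_interface.main[each.key].id]
  vm_size               = "Standard_D2s_v3"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }
  storage_os_disk {
    name              = "${each.key}-osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name  = each.key
    admin_username = "testadmin"      # placeholder
    admin_password = "Password1234!"  # placeholder
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
}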

Related

Virtual network was not found

I am trying to set up a Databricks workspace inside a subnet and protect it with firewalls, using the following code in Terraform:
Set up the resource group:
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = var.resource_group_name
location = var.resource_group_location
}
Set up the virtual network:
resource "azurerm_virtual_network" "test_vnet" {
name = var.vnet_name
address_space = ["10.0.0.0/16"]
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
Set up the subnets:
resource "azurerm_subnet" "private_snet" {
name = "subnet-private"
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = ["10.0.1.0/24"]
delegation {
name = "databricksprivatermdelegation"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
resource "azurerm_subnet" "public_snet" {
name = "subnet-public"
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = ["10.0.2.0/24"]
delegation {
name = "databrickspublicdelegation"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
Set up the firewalls:
resource "azurerm_network_security_group" "private_empty_nsg" {
name = "firewall-private"
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.private_snet.id
network_security_group_id = azurerm_network_security_group.private_empty_nsg.id
}
resource "azurerm_network_security_group" "public_empty_nsg" {
name = "firewall-public"
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
resource "azurerm_subnet_network_security_group_association" "public_nsg_asso" {
subnet_id = azurerm_subnet.public_snet.id
network_security_group_id = azurerm_network_security_group.public_empty_nsg.id
}
And finally, set up the Databricks workspace:
resource "azurerm_databricks_workspace" "forex_price_databricks" {
name = "databricks-test"
location = var.resource_group_location
resource_group_name = var.resource_group_name
sku = "standard"
custom_parameters {
virtual_network_id = azurerm_virtual_network.test_vnet.id
public_subnet_name = azurerm_subnet.public_snet.name
public_subnet_network_security_group_association_id = azurerm_network_security_group.public_empty_nsg.id
private_subnet_name = azurerm_subnet.private_snet.name
private_subnet_network_security_group_association_id = azurerm_network_security_group.private_empty_nsg.id
}
}
However, when I ran the code for the first time, I got the error below:
Error: Code="ResourceNotFound" Message="The Resource 'Microsoft.Network/virtualNetworks/my-vnet' under resource group 'My-Resource-Group' was not found.
So, the question is:
Why is the virtual network not created, or why can it not be found?
Update:
I used to have the following block, because I usually run terraform destroy and I don't want it to remove my resource group:
resource_group {
prevent_deletion_if_contains_resources = false
}
However, even after I remove it, I get the error below:
Message="Operation was canceled." Details=[{"code":"
CanceledAndSupersededDueToAnotherOperation","message":"Operation PutVirtualNetworkOperation was canceled and superseded by operation PutSubnetOpe
ration
Are you able to reproduce the same error?

Terraform error: Error: creating Subnet: Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'internal' is not valid in virtual network

I am trying to create resources in Azure using Terraform: a SQL Server database and also a virtual machine. I get the error:
│ Error: creating Subnet: (Name "db_subnetn" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'db_subnetn' is not valid in virtual network 'tf_dev-network'." Details=[]
What have I done?
I followed the link here: Error while provisioning Terraform subnet using azurerm.
I deleted other network resources using the same IP range.
My networking knowledge is pretty basic; however, from my research it appears that 10.0.0.0/16 is quite a large IP range and can lead to overlaps. So I changed the virtual network IP range from 10.0.0.0/16 to 10.0.1.0/24 to restrict it, and what happened is that the error simply changed to:
│ Error: creating Subnet: (Name "internal" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'internal' is not valid in virtual network 'tf_dev-network'." Details=[]
At this stage, I would be grateful if someone could explain what is going wrong here and what needs to be done. Thanks in advance.
My files are as follows.
dbcode.tf
resource "azurerm_sql_server" "sqlserver" {
name = "tom556sqlserver"
resource_group_name = azurerm_resource_group.resource_gp.name
location = azurerm_resource_group.resource_gp.location
version = "12.0"
administrator_login = "khdfd9898rerer"
administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_sql_virtual_network_rule" "sqlvnetrule" {
name = "sql_vnet_rule"
resource_group_name = azurerm_resource_group.resource_gp.name
server_name = azurerm_sql_server.sqlserver.name
subnet_id = azurerm_subnet.db_subnet.id
}
resource "azurerm_subnet" "db_subnet" {
name = "db_subnetn"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
main.tf
resource "azurerm_resource_group" "resource_gp" {
name="terraform_youtube"
location = "UK South"
tags = {
"owner" = "Rahman"
"purpose" = "Practice terraform"
}
}
variable "prefix" {
default = "tf_dev"
}
resource "azurerm_virtual_network" "main" {
name = "${var.prefix}-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_B1ls"
# Uncomment this line to delete the OS disk automatically when deleting the VM
delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
I tested with your code in my environment and got the same error.
To fix the issue, you need to change address_prefixes for db_subnet to ["10.0.3.0/24"], because the ["10.0.2.0/24"] range is already used by the internal subnet in your main.tf. Also note the update to sqlvnetrule below, and make the corresponding changes in your dbcode.tf file:
resource "azurerm_mssql_server" "sqlserver" {
name = "tom556sqlserver"
resource_group_name = azurerm_resource_group.resource_gp.name
location = azurerm_resource_group.resource_gp.location
version = "12.0"
administrator_login = "khdfd9898rerer"
administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_subnet" "db_subnet" {
name = "db_subnetn"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.3.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
resource "azurerm_mssql_virtual_network_rule" "sqlvnetrule" {
name = "sql_vnet_rule"
#resource_group_name = azurerm_resource_group.resource_gp.name
#server_name = azurerm_sql_server.sqlserver.name
server_id = azurerm_mssql_server.sqlserver.id
subnet_id = azurerm_subnet.db_subnet.id
}
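As a side note, one way to avoid hand-picking ranges that overlap is to derive each subnet prefix from the VNet address space with Terraform's cidrsubnet() function. This is only a sketch of an alternative, not part of the original answer; the resource names follow the question's configuration:
locals {
  vnet_cidr = "10.0.0.0/16"
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.resource_gp.name
  virtual_network_name = azurerm_virtual_network.main.name
  # cidrsubnet("10.0.0.0/16", 8, 2) evaluates to "10.0.2.0/24"
  address_prefixes     = [cidrsubnet(local.vnet_cidr, 8, 2)]
}

resource "azurerm_subnet" "db_subnet" {
  name                 = "db_subnetn"
  resource_group_name  = azurerm_resource_group.resource_gp.name
  virtual_network_name = azurerm_virtual_network.main.name
  # cidrsubnet("10.0.0.0/16", 8, 3) evaluates to "10.0.3.0/24"
  address_prefixes     = [cidrsubnet(local.vnet_cidr, 8, 3)]
  service_endpoints    = ["Microsoft.Sql"]
}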

How to use selective zone(s) while deploying an Azure VM through Terraform

I am using the below configuration while deploying a virtual machine to a specific zone in the eastus region:
resource "azurerm_linux_virtual_machine" "vm" {
.........
.........
zones = [1]
}
but terraform validate says
An argument named "zones" is not expected here. Did you mean "zone"?
The azurerm_linux_virtual_machine resource expects the singular zone argument, as the validation message suggests. You should also prefer for_each with a set over count and index; you can use the following code to create a virtual machine in each of three availability zones, and set the number of zones according to your needs:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "XXXXXXXXX"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = data.azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
#count=3
name = "example-nic"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_linux_virtual_machine" "example" {
#count=3
name = "example-machine"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
size = "Standard_F2"
for_each = local.zones
zone = each.value
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = file("~/.ssh/id_rsa.pub")
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
locals {
zones = toset(["1","2","3"])
}
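With for_each, each VM instance is addressed by its zone key, for example azurerm_linux_virtual_machine.example["1"]. As an optional illustration (not part of the original answer), a hypothetical output that maps each zone to its VM ID:
output "vm_ids_by_zone" {
  # Builds a map like { "1" = <vm id>, "2" = <vm id>, "3" = <vm id> }
  value = { for zone, vm in azurerm_linux_virtual_machine.example : zone => vm.id }
}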

How do I create a Container Registry on Azure with a resource?

My Terraform script is giving an error like the one below:
Error: Error creating Container Registry "containerRegistry1" (Resource Group "aks-cluster"): containerregistry.RegistriesClient#Create: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="AlreadyInUse" Message="The registry DNS name containerregistry1.azurecr.io is already in use. You can check if the name is already claimed using following API: https://learn.microsoft.com/en-us/rest/api/containerregistry/registries/checknameavailability"
on terra.tf line 106, in resource "azurerm_container_registry" "acr":
106: resource "azurerm_container_registry" "acr" {
The whole script is below.
I'm a beginner at Terraform and have tried different combinations, but they didn't work. I'm not sure what the problem could be; is it possible to help?
variable "prefix" {
default = "tfvmex"
}
provider "azurerm" {
version = "=1.28.0"
}
resource "azurerm_resource_group" "rg" {
name = "aks-cluster"
location = "West Europe"
}
resource "azurerm_virtual_network" "network" {
name = "aks-vnet"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_subnet" "subnet" {
name = "aks-subnet"
resource_group_name = azurerm_resource_group.rg.name
address_prefix = "10.1.1.0/24"
virtual_network_name = azurerm_virtual_network.network.name
}
resource "azurerm_kubernetes_cluster" "cluster" {
name = "aks"
location = azurerm_resource_group.rg.location
dns_prefix = "aks"
resource_group_name = azurerm_resource_group.rg.name
kubernetes_version = "1.17.3"
agent_pool_profile {
name = "aks"
count = 1
vm_size = "Standard_D2s_v3"
os_type = "Linux"
vnet_subnet_id = azurerm_subnet.subnet.id
}
service_principal {
client_id = "dxxxx"
client_secret = "xxxx"
}
network_profile {
network_plugin = "azure"
}
}
resource "azurerm_network_interface" "rg" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.subnet.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "rg" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
network_interface_ids = [azurerm_network_interface.rg.id]
vm_size = "Standard_DS1_v2"
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "testtest"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_container_registry" "acr" {
name = "containerRegistry1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = false
georeplication_locations = ["West Europe"]
}
resource "azurerm_network_security_group" "example" {
name = "acceptanceTestSecurityGroup1"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
security_rule {
name = "test123"
priority = 100
direction = "Outbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags = {
environment = "Test"
}
}
Thanks!
From your error message, it appears that you are using a non-unique ACR name in the Terraform resource declaration pasted below:
resource "azurerm_container_registry" "acr" {
name = "containerRegistry1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = false
georeplication_locations = ["West Europe"]
}
The Azure CLI has az acr check-name to verify that an ACR name is globally unique.
I think your issue is that your ACR name is already taken globally, not just within your subscription.
This is because an ACR needs a URL to be accessed, and that URL needs to be unique; that means some other Azure account has already taken the name.
Error: Error creating Container Registry "containerRegistry1" (Resource Group "aks-cluster"): containerregistry.RegistriesClient#Create: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="AlreadyInUse" Message="The registry DNS name containerregistry1.azurecr.io is already in use. You can check if the name is already claimed using following API: https://learn.microsoft.com/en-us/rest/api/containerregistry/registries/checknameavailability"
containerregistry1.azurecr.io is the URL that needs to be globally unique.
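One way to get a unique name, sketched here as a suggestion rather than something from the original script, is to append a random suffix generated by the hashicorp/random provider (ACR names must be alphanumeric only):
resource "random_string" "acr_suffix" {
  length  = 6
  upper   = false
  special = false
}

resource "azurerm_container_registry" "acr" {
  # e.g. "containerregistryab12cd", alphanumeric and far less likely to collide
  name                = "containerregistry${random_string.acr_suffix.result}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Premium"
  admin_enabled       = false
}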

Terraform VM provisioning with 2 NICs attached

I am working on provisioning a new Azure VM using Terraform and attaching 2 NICs to that VM.
I am getting the error below:
azurerm_virtual_machine.vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="VirtualMachineMustHaveOneNetworkInterfaceAsPrimary" Message="Virtual machine AZLXSPTOPTFWTEST must have one network interface set as the primary." Details=[]
I referred to this URL for creating the NICs: https://www.terraform.io/docs/providers/azurerm/r/network_interface.html
This is my Terraform code.
resource "azurerm_resource_group" "main" {
name = "RG-EASTUS-FW-TEST"
location = "eastus"
}
#create a virtual Network
resource "azurerm_virtual_network" "privatenetwork" {
name = "VNET-EASTUS-FWTEST"
address_space = ["10.100.0.0/16"]
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
}
#create an external subnet within the virtual network
resource "azurerm_subnet" "external"{
name = "SNET-FWTEST-OUT"
virtual_network_name = "${azurerm_virtual_network.privatenetwork.name}"
resource_group_name = "${azurerm_resource_group.main.name}"
address_prefix = "10.100.10.0/24"
}
#Create a public IP address
resource "azurerm_public_ip" "public" {
name = "PFTEST-PUBLIC"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
allocation_method = "Static"
}
# Create a Subnet within the Virtual Network
resource "azurerm_subnet" "internal" {
name = "SNET-FWTEST-IN"
virtual_network_name = "${azurerm_virtual_network.privatenetwork.name}"
resource_group_name = "${azurerm_resource_group.main.name}"
address_prefix = "10.100.11.0/24"
}
resource "azurerm_network_interface" "OUT" {
name = "NIC-FWTEST-OUT"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
# network_security_group_id = "${azurerm_network_interface.main.id}"
primary = "true"
ip_configuration {
name = "OUT"
subnet_id = "${azurerm_subnet.external.id}"
private_ip_address_allocation = "static"
private_ip_address = "10.100.10.5"
public_ip_address_id = "${azurerm_public_ip.public.id}"
}
}
# Create a network interface for VMs and attach the PIP and the NSG
resource "azurerm_network_interface" "main" {
name = "NIC-FWTEST-IN"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
# network_security_group_id = "${azurerm_network_security_group.main.id}"
ip_configuration {
name = "IN"
subnet_id = "${azurerm_subnet.internal.id}"
private_ip_address_allocation = "static"
private_ip_address = "10.100.11.5"
}
}
# Create a new Virtual Machine based on the Golden Image
resource "azurerm_virtual_machine" "vm" {
name = "AZLXSPTOPTFWTEST"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.OUT.id}","${azurerm_network_interface.main.id}"]
vm_size = "Standard_DS12_v2"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
I am expecting the result to be a provisioned Azure VM with 2 NICs attached.
Thanks in advance.
I found the solution: we need to specify the primary network interface ID on the virtual machine and include it in the network interface IDs, and we also need to mark the primary ip_configuration when creating the network interface.
Please refer to the code below.
ip_configuration {
name = "OUT"
subnet_id = "${azurerm_subnet.external.id}"
primary = true
private_ip_address_allocation = "static"
private_ip_address = "10.100.10.5"
public_ip_address_id = "${azurerm_public_ip.public.id}"
}
resource "azurerm_virtual_machine" "vm" {
name = "AZLXSPTOPTFWTEST"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.main.id}","${azurerm_network_interface.OUT.id}"]
primary_network_interface_id = "${azurerm_network_interface.OUT.id}"
vm_size = "Standard_DS12_v2"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
