I am trying to set up a Databricks workspace inside a subnet and protect it with "firewalls" (network security groups), using the following Terraform code:
Set up the resource group:
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = var.resource_group_name
location = var.resource_group_location
}
Set up the virtual network:
resource "azurerm_virtual_network" "test_vnet" {
name = var.vnet_name
address_space = ["10.0.0.0/16"]
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
Set up the subnets:
resource "azurerm_subnet" "private_snet" {
name = "subnet-private"
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = ["10.0.1.0/24"]
delegation {
name = "databricksprivatermdelegation"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
resource "azurerm_subnet" "public_snet" {
name = "subnet-public"
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = ["10.0.2.0/24"]
delegation {
name = "databrickspublicdelegation"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
Set up the firewalls (network security groups):
resource "azurerm_network_security_group" "private_empty_nsg" {
name = "firewall-private"
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.private_snet.id
network_security_group_id = azurerm_network_security_group.private_empty_nsg.id
}
resource "azurerm_network_security_group" "public_empty_nsg" {
name = "firewall-public"
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
resource "azurerm_subnet_network_security_group_association" "public_nsg_asso" {
subnet_id = azurerm_subnet.public_snet.id
network_security_group_id = azurerm_network_security_group.public_empty_nsg.id
}
And finally, set up the Databricks workspace:
resource "azurerm_databricks_workspace" "forex_price_databricks" {
name = "databricks-test"
location = var.resource_group_location
resource_group_name = var.resource_group_name
sku = "standard"
custom_parameters {
virtual_network_id = azurerm_virtual_network.test_vnet.id
public_subnet_name = azurerm_subnet.public_snet.name
public_subnet_network_security_group_association_id = azurerm_network_security_group.public_empty_nsg.id
private_subnet_name = azurerm_subnet.private_snet.name
private_subnet_network_security_group_association_id = azurerm_network_security_group.private_empty_nsg.id
}
}
However, when I run the code, on the first try I get the error below:
Error: Code="ResourceNotFound" Message="The Resource 'Microsoft.Network/virtualNetworks/my-vnet' under resource group 'My-Resource-Group' was not found.
So the question is:
Why is the virtual network not created, or why can it not be found?
Update:
When I remove the block
resource_group {
prevent_deletion_if_contains_resources = false
}
(which I had because I usually run terraform destroy and don't want my resource group to be removed), I still get the error below:
Message="Operation was canceled." Details=[{"code":"CanceledAndSupersededDueToAnotherOperation","message":"Operation PutVirtualNetworkOperation was canceled and superseded by operation PutSubnetOperation
Are you able to reproduce the same error?
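For what it's worth, a likely cause (an assumption based on the two errors, not something they state outright): the subnets and the workspace refer to the VNet and the resource group via var.vnet_name / var.resource_group_name instead of via the resources themselves, so Terraform sees no dependency between them and issues the create operations in parallel, which matches both the ResourceNotFound and the CanceledAndSupersededDueToAnotherOperation messages. The custom_parameters should also point at the subnet/NSG association IDs rather than the NSG IDs. A minimal sketch of the dependency-related pieces, keeping the resource names from the question (delegation blocks unchanged and omitted):

resource "azurerm_virtual_network" "test_vnet" {
  name                = var.vnet_name
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main_resource_group.location
  resource_group_name = azurerm_resource_group.main_resource_group.name  # reference, not var: the RG is created first
}

resource "azurerm_subnet" "private_snet" {
  name                 = "subnet-private"
  resource_group_name  = azurerm_resource_group.main_resource_group.name
  virtual_network_name = azurerm_virtual_network.test_vnet.name          # reference, not var: the VNet is created first
  address_prefixes     = ["10.0.1.0/24"]
  # delegation block as in the question; subnet-public mirrors this with 10.0.2.0/24
}

resource "azurerm_databricks_workspace" "forex_price_databricks" {
  name                = "databricks-test"
  location            = azurerm_resource_group.main_resource_group.location
  resource_group_name = azurerm_resource_group.main_resource_group.name
  sku                 = "standard"
  custom_parameters {
    virtual_network_id  = azurerm_virtual_network.test_vnet.id
    public_subnet_name  = azurerm_subnet.public_snet.name
    private_subnet_name = azurerm_subnet.private_snet.name
    # association IDs, not the NSG IDs
    public_subnet_network_security_group_association_id  = azurerm_subnet_network_security_group_association.public_nsg_asso.id
    private_subnet_network_security_group_association_id = azurerm_subnet_network_security_group_association.private_nsg_asso.id
  }
}

With the references in place, Terraform orders the operations resource group → VNet → subnets → NSG associations → workspace instead of issuing them concurrently.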
Related
I have a virtual network with two subnets:
Virtual network: vNetVPN-Dev
Subnet: snet-vgp-dev
Subnet: snet-internal-vm
resource "azurerm_virtual_network" "virtual_network" {
name = "vNetVPN-Dev"
location = var.resource_group_location_north_europe
resource_group_name = var.resource_group_name
address_space = ["10.1.16.0/23", "10.2.0.0/16", "172.16.100.0/24"]
subnet {
name = "snet-vgp-dev"
address_prefix = "10.2.1.0/24"
}
# =================== Subnet for the internal VM
subnet {
name = "snet-internal-vm"
address_prefix = "10.2.10.0/24"
}
tags = {
environment = var.tag_dev
}
}
And now I want to reference snet-internal-vm in this block of code (below):
resource "azurerm_network_interface" "nic" {
name = "internal-nic-vm"
location = var.resource_group_location_north_europe
resource_group_name = var.resource_group_name
ip_configuration {
name = "internal-vm"
subnet_id = **here_I_want_to_reference**
private_ip_address_allocation = "Dynamic"
}
}
I tried to reproduce the same in my environment, creating a NIC with a subnet reference:
Terraform Code
provider "azurerm" {
features {}
}
resource "azurerm_virtual_network" "virtual_network" {
name = "vNetVPN-Dev"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
address_space = ["10.1.16.0/23", "10.2.0.0/16", "172.16.100.0/24"]
subnet {
name = "snet-vgp-dev"
address_prefix = "10.2.1.0/24"
}
subnet {
name = "snet-internal-vm"
address_prefix = "10.2.10.0/24"
}
}
#-----NIC Creation-----------
resource "azurerm_network_interface" "nic" {
name = "internal-nic-vm"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
ip_configuration {
name = "internal-vm"
subnet_id = azurerm_virtual_network.virtual_network.subnet.*.id[1]
private_ip_address_allocation = "Dynamic"
}
}
After running terraform apply, as mentioned by @sylvainmtz, the subnet was referenced successfully while creating the NIC.
Based on your question, I can only assume you should be using data blocks to do this. https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/subnet
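For reference, a minimal sketch of that data-source approach, reusing the names from the question (the data source label internal_vm is made up here):

data "azurerm_subnet" "internal_vm" {
  name                 = "snet-internal-vm"
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  resource_group_name  = var.resource_group_name
}

resource "azurerm_network_interface" "nic" {
  name                = "internal-nic-vm"
  location            = var.resource_group_location_north_europe
  resource_group_name = var.resource_group_name
  ip_configuration {
    name                          = "internal-vm"
    subnet_id                     = data.azurerm_subnet.internal_vm.id  # looked up by name rather than by index
    private_ip_address_allocation = "Dynamic"
  }
}

Compared with azurerm_virtual_network.virtual_network.subnet.*.id[1], looking the subnet up by name does not silently break if the order of the inline subnet blocks ever changes.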
I need to test my Azure private endpoint using the following scenario.
We have a virtual network with two subnets (vm_subnet and storage_account_subnet).
The virtual machine should be able to connect to the storage account using a private link.
I then need to test my endpoint using the manual test case below:
Connect to the Azure virtual machine over SSH (for example with PuTTY), username: adminuser, password: P#$$w0rd1234!
In the terminal, ping formuleinsstorage.blob.core.windows.net (expect to see the storage account's IP within the storage_account_subnet range, 10.0.2.0/24).
I deploy all the infrastructure using the below Terraform code:
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
# Create Virtual-Network
resource "azurerm_virtual_network" "virtual_network" {
name = "Vnet"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
}
# Create subnet for virtual-machine
resource "azurerm_subnet" "virtual_network_subnet" {
name = "vm_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.1.0/24"]
}
# Create subnet for storage account
resource "azurerm_subnet" "storage_account_subnet" {
name = "storage_account_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.2.0/24"]
}
# Create Linux Virtual machine
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
size = "Standard_F2"
admin_username = "adminuser"
admin_password = "14394Las?"
disable_password_authentication = false
network_interface_ids = [
azurerm_network_interface.virtual_machine_network_interface.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
resource "azurerm_network_interface" "virtual_machine_network_interface" {
name = "vm-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm_public_ip.id
}
}
# Create the network interface and public IP for the virtual machine
resource "azurerm_public_ip" "vm_public_ip" {
name = "vm-public-ip-for-rdp"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_network_interface" "virtual_network_nic" {
name = "vm_nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
}
}
# Set up an inbound rule so we can connect to the virtual machine over SSH (port 22)
resource "azurerm_network_security_group" "traffic_rules" {
name = "vm_traffic_rules"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
security_rule {
name = "virtual_network_permission"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.virtual_network_subnet.id
network_security_group_id = azurerm_network_security_group.traffic_rules.id
}
# Setup storage_account and its container
resource "azurerm_storage_account" "storage_account" {
name = "storagaccountfortest"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
}
resource "azurerm_storage_data_lake_gen2_filesystem" "data_lake_storage" {
name = "rawdata"
storage_account_id = azurerm_storage_account.storage_account.id
lifecycle {
prevent_destroy = false
}
}
# Setup DNS zone
resource "azurerm_private_dns_zone" "dns_zone" {
name = "privatelink.blob.core.windows.net"
resource_group_name = azurerm_resource_group.main_resource_group.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "network_link" {
name = "vnet_link"
resource_group_name = azurerm_resource_group.main_resource_group.name
private_dns_zone_name = azurerm_private_dns_zone.dns_zone.name
virtual_network_id = azurerm_virtual_network.virtual_network.id
}
# Setup private-link
resource "azurerm_private_endpoint" "endpoint" {
name = "storage-private-endpoint"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
subnet_id = azurerm_subnet.storage_account_subnet.id
private_service_connection {
name = "private-service-connection"
private_connection_resource_id = azurerm_storage_account.storage_account.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_dns_a_record" "dns_a" {
name = "dns-record"
zone_name = azurerm_private_dns_zone.dns_zone.name
resource_group_name = azurerm_resource_group.main_resource_group.name
ttl = 10
records = [azurerm_private_endpoint.endpoint.private_service_connection.0.private_ip_address]
}
The problem is that my endpoint does not work. But if I add the endpoint manually, everything works like a charm. So I think my DNS zone is correct, and the link to the storage account apparently works as well; I therefore suspect something is wrong with my private link. Any idea?
Update:
Here are versions:
Terraform v1.2.5
on windows_386
+ provider registry.terraform.io/hashicorp/azurerm v3.30.0
I believe the issue lies in the name of the dns_a_record. This should be the name of the storage account you want to reach via the private link.
The following Terraform code is working for me:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.30.0"
}
}
}
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
# Create Virtual-Network
resource "azurerm_virtual_network" "virtual_network" {
name = "Vnet"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
}
# Create subnet for virtual-machine
resource "azurerm_subnet" "virtual_network_subnet" {
name = "vm_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.1.0/24"]
}
# Create subnet for storage account
resource "azurerm_subnet" "storage_account_subnet" {
name = "storage_account_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.2.0/24"]
}
# Create Linux Virtual machine
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
size = "Standard_F2"
admin_username = "adminuser"
admin_password = "14394Las?"
disable_password_authentication = false
network_interface_ids = [
azurerm_network_interface.virtual_machine_network_interface.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
resource "azurerm_network_interface" "virtual_machine_network_interface" {
name = "vm-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm_public_ip.id
}
}
# Create the network interface and public IP for the virtual machine
resource "azurerm_public_ip" "vm_public_ip" {
name = "vm-public-ip-for-rdp"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_network_interface" "virtual_network_nic" {
name = "storage-private-endpoint-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "storage-private-endpoint-ip-config"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
}
}
# Set up an inbound rule so we can connect to the virtual machine over SSH (port 22)
resource "azurerm_network_security_group" "traffic_rules" {
name = "vm_traffic_rules"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
security_rule {
name = "virtual_network_permission"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.virtual_network_subnet.id
network_security_group_id = azurerm_network_security_group.traffic_rules.id
}
# Setup storage_account and its container
resource "azurerm_storage_account" "storage_account" {
name = <STORAGE_ACCOUNT_NAME>
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
}
resource "azurerm_storage_data_lake_gen2_filesystem" "data_lake_storage" {
name = "rawdata"
storage_account_id = azurerm_storage_account.storage_account.id
lifecycle {
prevent_destroy = false
}
}
# Setup DNS zone
resource "azurerm_private_dns_zone" "dns_zone" {
name = "privatelink.blob.core.windows.net"
resource_group_name = azurerm_resource_group.main_resource_group.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "network_link" {
name = "vnet-link"
resource_group_name = azurerm_resource_group.main_resource_group.name
private_dns_zone_name = azurerm_private_dns_zone.dns_zone.name
virtual_network_id = azurerm_virtual_network.virtual_network.id
}
# Setup private-link
resource "azurerm_private_endpoint" "endpoint" {
name = "storage-private-endpoint"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
subnet_id = azurerm_subnet.storage_account_subnet.id
private_service_connection {
name = "storage-private-service-connection"
private_connection_resource_id = azurerm_storage_account.storage_account.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_dns_a_record" "dns_a" {
name = azurerm_storage_account.storage_account.name
zone_name = azurerm_private_dns_zone.dns_zone.name
resource_group_name = azurerm_resource_group.main_resource_group.name
ttl = 10
records = [azurerm_private_endpoint.endpoint.private_service_connection.0.private_ip_address]
}
Additionally, I'm not sure whether it is possible to ping storage accounts. To test I ran nslookup <STORAGE_ACCOUNT_NAME>.blob.core.windows.net both from my local machine and from the Azure VM. In the former case, I got a public IP while in the latter I got a private IP in the range defined in the Terraform config, which seems to be the behaviour you are looking for.
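As a side note, and not part of the fix above: instead of managing the A record yourself, the azurerm provider lets the private endpoint register its record in the private DNS zone through a private_dns_zone_group block. A sketch with the same resource names (the group name is arbitrary):

resource "azurerm_private_endpoint" "endpoint" {
  name                = "storage-private-endpoint"
  location            = azurerm_resource_group.main_resource_group.location
  resource_group_name = azurerm_resource_group.main_resource_group.name
  subnet_id           = azurerm_subnet.storage_account_subnet.id

  private_service_connection {
    name                           = "storage-private-service-connection"
    private_connection_resource_id = azurerm_storage_account.storage_account.id
    is_manual_connection           = false
    subresource_names              = ["blob"]
  }

  # Azure creates and maintains the A record in the linked zone,
  # so the separate azurerm_private_dns_a_record resource can be dropped.
  private_dns_zone_group {
    name                 = "storage-dns-zone-group"
    private_dns_zone_ids = [azurerm_private_dns_zone.dns_zone.id]
  }
}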
I'm quite new to Terraform, so maybe I'm making a very basic mistake, but after multiple hours perhaps someone here can help me out.
I tried to peer two VNets together. I viewed multiple tutorials, and the only difference I can see in my configuration is that I want to create a peering between two VNets that are in two different resource groups. I also noticed that if I put the peering in one of the two VNet resource groups, I get fewer errors.
#Creating Resource Groups
resource "azurerm_resource_group" "network" {
name = "network"
location = "West Europe"
}
resource "azurerm_resource_group" "front" {
name = "front"
location = "West Europe"
}
resource "azurerm_resource_group" "middle" {
name = "middle"
location = "West Europe"
}
resource "azurerm_resource_group" "back" {
name = "back"
location = "West Europe"
}
resource "azurerm_resource_group" "peerings" {
name = "peerings"
location = "West Europe"
}
#Creating Virtual Networks
resource "azurerm_virtual_network" "network" {
name = "network"
location = azurerm_resource_group.network.location
resource_group_name = azurerm_resource_group.network.name
address_space = ["10.1.0.0/16"]
subnet {
name = "default"
address_prefix = "10.1.0.0/24"
}
subnet {
name = "gatewaysubnet"
address_prefix = "10.1.1.0/24"
}
subnet {
name = "azurefirewallsubnet"
address_prefix = "10.1.3.0/24"
}
subnet {
name = "azurebastionsubnet"
address_prefix = "10.1.2.0/24"
}
}
resource "azurerm_virtual_network" "front" {
name = "network"
location = azurerm_resource_group.front.location
resource_group_name = azurerm_resource_group.front.name
address_space = ["10.2.0.0/16"]
}
resource "azurerm_virtual_network" "middle" {
name = "network"
location = azurerm_resource_group.middle.location
resource_group_name = azurerm_resource_group.middle.name
address_space = ["10.3.0.0/16"]
}
resource "azurerm_virtual_network" "back" {
name = "network"
location = azurerm_resource_group.back.location
resource_group_name = azurerm_resource_group.back.name
address_space = ["10.4.0.0/16"]
}
#Create peerings
#network <--> front
resource "azurerm_virtual_network_peering" "networktofront" {
name = "networktofront"
resource_group_name = azurerm_resource_group.peerings.name
virtual_network_name = azurerm_virtual_network.network.name
remote_virtual_network_id = azurerm_virtual_network.front.id
}
resource "azurerm_virtual_network_peering" "fronttonetwork" {
name = "fronttonetwork"
resource_group_name = azurerm_resource_group.peerings.name
virtual_network_name = azurerm_virtual_network.front.name
remote_virtual_network_id = azurerm_virtual_network.network.id
}
#network <--> middle
resource "azurerm_virtual_network_peering" "networktomiddle" {
name = "networktomiddle"
resource_group_name = azurerm_resource_group.peerings.name
virtual_network_name = azurerm_virtual_network.network.name
remote_virtual_network_id = azurerm_virtual_network.middle.id
}
resource "azurerm_virtual_network_peering" "middletonetwork" {
name = "middletonetwork"
resource_group_name = azurerm_resource_group.peerings.name
virtual_network_name = azurerm_virtual_network.middle.name
remote_virtual_network_id = azurerm_virtual_network.network.id
}
#network <--> back
resource "azurerm_virtual_network_peering" "networktoback" {
name = "networktoback"
resource_group_name = azurerm_resource_group.peerings.name
virtual_network_name = azurerm_virtual_network.network.name
remote_virtual_network_id = azurerm_virtual_network.back.id
}
resource "azurerm_virtual_network_peering" "backtonetwork" {
name = "backtonetwork"
resource_group_name = azurerm_resource_group.peerings.name
virtual_network_name = azurerm_virtual_network.back.name
remote_virtual_network_id = azurerm_virtual_network.network.id
}
Virtual network peerings are child resources of the virtual network (Microsoft.Network/virtualNetworks/<vnetName>/virtualNetworkPeerings), and it is therefore not possible to carve them out into a different resource group.
Besides that, your code is accurate and should work as soon as you create the peerings in the corresponding virtual network resource groups:
#Creating Resource Groups
resource "azurerm_resource_group" "network" {
name = "network"
location = "West Europe"
}
resource "azurerm_resource_group" "front" {
name = "front"
location = "West Europe"
}
resource "azurerm_resource_group" "middle" {
name = "middle"
location = "West Europe"
}
resource "azurerm_resource_group" "back" {
name = "back"
location = "West Europe"
}
#Creating Virtual Networks
resource "azurerm_virtual_network" "network" {
name = "network"
location = azurerm_resource_group.network.location
resource_group_name = azurerm_resource_group.network.name
address_space = ["10.1.0.0/16"]
subnet {
name = "default"
address_prefix = "10.1.0.0/24"
}
subnet {
name = "gatewaysubnet"
address_prefix = "10.1.1.0/24"
}
subnet {
name = "azurefirewallsubnet"
address_prefix = "10.1.3.0/24"
}
subnet {
name = "azurebastionsubnet"
address_prefix = "10.1.2.0/24"
}
}
resource "azurerm_virtual_network" "front" {
name = "network"
location = azurerm_resource_group.front.location
resource_group_name = azurerm_resource_group.front.name
address_space = ["10.2.0.0/16"]
}
resource "azurerm_virtual_network" "middle" {
name = "network"
location = azurerm_resource_group.middle.location
resource_group_name = azurerm_resource_group.middle.name
address_space = ["10.3.0.0/16"]
}
resource "azurerm_virtual_network" "back" {
name = "network"
location = azurerm_resource_group.back.location
resource_group_name = azurerm_resource_group.back.name
address_space = ["10.4.0.0/16"]
}
#Create peerings
#network <--> front
resource "azurerm_virtual_network_peering" "networktofront" {
name = "networktofront"
resource_group_name = azurerm_resource_group.network.name
virtual_network_name = azurerm_virtual_network.network.name
remote_virtual_network_id = azurerm_virtual_network.front.id
}
resource "azurerm_virtual_network_peering" "fronttonetwork" {
name = "fronttonetwork"
resource_group_name = azurerm_resource_group.front.name
virtual_network_name = azurerm_virtual_network.front.name
remote_virtual_network_id = azurerm_virtual_network.network.id
}
#network <--> middle
resource "azurerm_virtual_network_peering" "networktomiddle" {
name = "networktomiddle"
resource_group_name = azurerm_resource_group.network.name
virtual_network_name = azurerm_virtual_network.network.name
remote_virtual_network_id = azurerm_virtual_network.middle.id
}
resource "azurerm_virtual_network_peering" "middletonetwork" {
name = "middletonetwork"
resource_group_name = azurerm_resource_group.middle.name
virtual_network_name = azurerm_virtual_network.middle.name
remote_virtual_network_id = azurerm_virtual_network.network.id
}
#network <--> back
resource "azurerm_virtual_network_peering" "networktoback" {
name = "networktoback"
resource_group_name = azurerm_resource_group.network.name
virtual_network_name = azurerm_virtual_network.network.name
remote_virtual_network_id = azurerm_virtual_network.back.id
}
resource "azurerm_virtual_network_peering" "backtonetwork" {
name = "backtonetwork"
resource_group_name = azurerm_resource_group.back.name
virtual_network_name = azurerm_virtual_network.back.name
remote_virtual_network_id = azurerm_virtual_network.network.id
}
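Not part of the answer above, but if you would rather not repeat six peering blocks, the same rule (each peering lives in the resource group of its local VNet) can be expressed with for_each. A sketch using the resource names from the question:

locals {
  spokes = {
    front  = azurerm_virtual_network.front
    middle = azurerm_virtual_network.middle
    back   = azurerm_virtual_network.back
  }
}

resource "azurerm_virtual_network_peering" "network_to_spoke" {
  for_each                  = local.spokes
  name                      = "networkto${each.key}"
  resource_group_name       = azurerm_resource_group.network.name  # hub-side peering stays with the hub VNet
  virtual_network_name      = azurerm_virtual_network.network.name
  remote_virtual_network_id = each.value.id
}

resource "azurerm_virtual_network_peering" "spoke_to_network" {
  for_each                  = local.spokes
  name                      = "${each.key}tonetwork"
  resource_group_name       = each.value.resource_group_name       # spoke-side peering stays with each spoke VNet
  virtual_network_name      = each.value.name
  remote_virtual_network_id = azurerm_virtual_network.network.id
}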
I am trying to create resources in Azure using Terraform: a SQL Server database and also a virtual machine. I get the following error:
│ Error: creating Subnet: (Name "db_subnetn" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'db_subnetn' is not valid in virtual network 'tf_dev-network'." Details=[]
What have I done?
I followed the link here: Error while provisioning Terraform subnet using azurerm.
I deleted other network resources using the same IP range.
My networking understanding is pretty basic; however, from my research it appears that 10.0.0.0/16 is quite a large IP range and can lead to overlaps. So I changed the virtual network IP range from 10.0.0.0/16 to 10.0.1.0/24 to restrict it, and the error simply changed to:
│ Error: creating Subnet: (Name "internal" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'internal' is not valid in virtual network 'tf_dev-network'." Details=[]
At this stage, I would be grateful if someone could explain what is going wrong here and what needs to be done. Thanks in advance.
My files are as follows.
dbcode.tf
resource "azurerm_sql_server" "sqlserver" {
name = "tom556sqlserver"
resource_group_name = azurerm_resource_group.resource_gp.name
location = azurerm_resource_group.resource_gp.location
version = "12.0"
administrator_login = "khdfd9898rerer"
administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_sql_virtual_network_rule" "sqlvnetrule" {
name = "sql_vnet_rule"
resource_group_name = azurerm_resource_group.resource_gp.name
server_name = azurerm_sql_server.sqlserver.name
subnet_id = azurerm_subnet.db_subnet.id
}
resource "azurerm_subnet" "db_subnet" {
name = "db_subnetn"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
main.tf
resource "azurerm_resource_group" "resource_gp" {
name="terraform_youtube"
location = "UK South"
tags = {
"owner" = "Rahman"
"purpose" = "Practice terraform"
}
}
variable "prefix" {
default = "tf_dev"
}
resource "azurerm_virtual_network" "main" {
name = "${var.prefix}-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_B1ls"
# Delete the OS disk automatically when deleting the VM
delete_os_disk_on_termination = true
# Delete the data disks automatically when deleting the VM
delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
I tested your code in my environment and got the same error.
To fix the issue, change the address_prefixes of db_subnet to ["10.0.3.0/24"], because the ["10.0.2.0/24"] range is already used by the internal subnet in your main.tf. Also switch to azurerm_mssql_server and azurerm_mssql_virtual_network_rule (which takes server_id instead of resource_group_name/server_name) in place of the older azurerm_sql_* resources, and make the corresponding changes in your dbcode.tf file:
resource "azurerm_mssql_server" "sqlserver" {
name = "tom556sqlserver"
resource_group_name = azurerm_resource_group.resource_gp.name
location = azurerm_resource_group.resource_gp.location
version = "12.0"
administrator_login = "khdfd9898rerer"
administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_subnet" "db_subnet" {
name = "db_subnetn"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.3.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
resource "azurerm_mssql_virtual_network_rule" "sqlvnetrule" {
name = "sql_vnet_rule"
#resource_group_name = azurerm_resource_group.resource_gp.name
#server_name = azurerm_sql_server.sqlserver.name
server_id = azurerm_mssql_server.sqlserver.id
subnet_id = azurerm_subnet.db_subnet.id
}
I want to create two VMs with Terraform in Azure. I have configured two azurerm_network_interface resources, but when I try to apply the changes, I receive an error. Do you have any idea? Is there any issue if I try to create them in different regions?
The error is something like: azurerm_network_interface "vm2-nic" was not found.
# Configure the Azure Provider
provider "azurerm" {
subscription_id = var.subscription_id
tenant_id = var.tenant_id
version = "=2.10.0"
features {}
}
resource "azurerm_virtual_network" "main" {
name = "north-network"
address_space = ["10.0.0.0/16"]
location = "North Europe"
resource_group_name = var.azurerm_resource_group_name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = var.azurerm_resource_group_name
virtual_network_name = azurerm_virtual_network.main.name
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "example" {
name = "test-pip"
location = "North Europe"
resource_group_name = var.azurerm_resource_group_name
allocation_method = "Static"
idle_timeout_in_minutes = 30
tags = {
environment = "dev01"
}
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = "${each.value}"
resource_group_name = var.azurerm_resource_group_name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example.id
}
}
resource "azurerm_virtual_machine" "main" {
for_each = var.locations
name = "${each.key}t-vm"
location = "${each.value}"
resource_group_name = var.azurerm_resource_group_name
network_interface_ids = [azurerm_network_interface.main[each.key].id]
vm_size = "Standard_D2s_v3"
...
Error:
Error: Error creating Network Interface "vm2-nic" (Resource Group "candidate-d7f5a2-rg"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxxxxx/resourceGroups/xxxx/providers/Microsoft.Network/virtualNetworks/north-network/subnets/internal referenced by resource /subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.Network/networkInterfaces/vm2-nic was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
on environment.tf line 47, in resource "azurerm_network_interface" "main":
47: resource "azurerm_network_interface" "main" {
According to the documentation, each NIC attached to a VM must exist in the same location (region) and subscription as the VM: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/network-overview.
If you re-create the NIC in the same location as the VM, or create the VM in the same location as the NIC, that will likely solve your problem.
Since you have for_each in the azurerm_network_interface resource, it will create two NICs, one per location = each.value, while the subnet and the VNet have the fixed region "North Europe". You need to create the NICs, and the other VM-related resources like subnets, in the same region. You could change the code like this:
resource "azurerm_resource_group" "test" {
name = "myrg"
location = "West US"
}
variable "locations" {
type = map(string)
default = {
vm1 = "North Europe"
vm2 = "West Europe"
}
}
resource "azurerm_virtual_network" "main" {
for_each = var.locations
name = "${each.key}-network"
address_space = ["10.0.0.0/16"]
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
}
resource "azurerm_subnet" "internal" {
for_each = var.locations
name = "${each.key}-subnet"
resource_group_name = azurerm_resource_group.test.name
virtual_network_name = azurerm_virtual_network.main[each.key].name
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "example" {
for_each = var.locations
name = "${each.key}-pip"
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
allocation_method = "Static"
idle_timeout_in_minutes = 30
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal[each.key].id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example[each.key].id
}
}
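For completeness, since the answer stops at the NIC: the VM resource from the question would then consume the per-region NIC the same way. This is only a sketch; the disk, image and profile blocks are filled in with typical values because the question truncates them:

resource "azurerm_virtual_machine" "main" {
  for_each              = var.locations
  name                  = "${each.key}-vm"
  location              = each.value
  resource_group_name   = azurerm_resource_group.test.name
  network_interface_ids = [azurerm_network_interface.main[each.key].id]
  vm_size               = "Standard_D2s_v3"

  delete_os_disk_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }
  storage_os_disk {
    name              = "${each.key}-osdisk"  # must be unique per VM
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name  = "${each.key}-host"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
}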