Can I perform consecutive changes on resource with Terraform? - terraform

Sometimes I need to perform several consecutive changes to a resource with Terraform (within the same declaration file), for example:
1. Create Azure VNet/Subnet A
2. Create a Private Endpoint
3. Change the properties of Subnet A from step 1
I tried to declare the same resource again with a depends_on statement, but it doesn't work:
module.vnet-stage2[1].azurerm_virtual_network.vnet: Creating...
module.vnet-stage2[0].azurerm_virtual_network.vnet: Creating...
╷
│ Error: A resource with the ID "/subscriptions/6fd2b24c-1ffa-43ca-abc1-8127c30dcb39/resourceGroups/PE-TF-RG/providers/Microsoft.Network/virtualNetworks/client-vnet" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_virtual_network" for more information.
│
│ with module.vnet-stage2[0].azurerm_virtual_network.vnet,
│ on ../../modules/vnet/main.tf line 6, in resource "azurerm_virtual_network" "vnet":
│ 6: resource azurerm_virtual_network "vnet" {
│
╵
╷
│ Error: A resource with the ID "/subscriptions/6fd2b24c-1ffa-43ca-abc1-8127c30dcb39/resourceGroups/PE-TF-RG/providers/Microsoft.Network/virtualNetworks/server-vnet" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_virtual_network" for more information.
│
│ with module.vnet-stage2[1].azurerm_virtual_network.vnet,
│ on ../../modules/vnet/main.tf line 6, in resource "azurerm_virtual_network" "vnet":
│ 6: resource azurerm_virtual_network "vnet" {
│
╵

I tried testing your requirement with the code below. It's not possible to change the subnet's enforce_private_link_service_network_policies from true to false within the same declaration file.
provider "azurerm" {
  features {}
}
data "azurerm_resource_group" "example" {
  name = "yourresourcegroup"
}
resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  address_space       = ["10.0.0.0/16"]
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_subnet" "service" {
  name                 = "service"
  resource_group_name  = data.azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
  enforce_private_link_service_network_policies = true
}
resource "azurerm_subnet" "endpoint" {
  name                 = "endpoint"
  resource_group_name  = data.azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
  enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_public_ip" "example" {
  name                = "example-pip"
  sku                 = "Standard"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  allocation_method   = "Static"
}
resource "azurerm_lb" "example" {
  name                = "example-lb"
  sku                 = "Standard"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  frontend_ip_configuration {
    name                 = azurerm_public_ip.example.name
    public_ip_address_id = azurerm_public_ip.example.id
  }
}
resource "azurerm_private_link_service" "example" {
  name                = "example-privatelink"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  nat_ip_configuration {
    name      = azurerm_public_ip.example.name
    primary   = true
    subnet_id = azurerm_subnet.service.id
  }
  load_balancer_frontend_ip_configuration_ids = [
    azurerm_lb.example.frontend_ip_configuration.0.id,
  ]
}
resource "azurerm_private_endpoint" "example" {
  name                = "example-endpoint"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  subnet_id           = azurerm_subnet.endpoint.id
  private_service_connection {
    name                           = "example-privateserviceconnection"
    private_connection_resource_id = azurerm_private_link_service.example.id
    is_manual_connection           = false
  }
}
Output:
When you try to change the value to false, the apply fails with an error.
Solution:
You can create the VNet and subnet first in one file, then create the private endpoint in another file using data sources for the VNet and subnet. After the private endpoint is created, you can change the properties of the subnet by going back to the VNet/subnet file.
Or
You can create everything at once, then use PowerShell or the CLI to change that property of the subnet.
Command for CLI:
az network vnet subnet update --disable-private-endpoint-network-policies false --name service --resource-group resourcegroup --vnet-name example-network
Reference:
Manage network policies for private endpoints - Azure Private Link | Microsoft Docs
Note: enforce_private_link_service_network_policies = true on the subnet is mandatory for creating the private link service. After creation you can change it to enforce_private_link_service_network_policies = false.
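The first approach can be sketched as follows. This is a minimal illustration, reusing the VNet, subnet, and resource group names from the example above; the region and the var.private_link_service_id variable are assumptions, since the private link service ID would have to be passed in from the first stage somehow:

```terraform
# Second-stage file: look up the existing network via data sources instead
# of declaring it, so the private endpoint can be created without
# redeclaring (and colliding with) the VNet/subnet from the first stage.
data "azurerm_virtual_network" "example" {
  name                = "example-network"
  resource_group_name = "yourresourcegroup"
}

data "azurerm_subnet" "endpoint" {
  name                 = "endpoint"
  virtual_network_name = data.azurerm_virtual_network.example.name
  resource_group_name  = "yourresourcegroup"
}

resource "azurerm_private_endpoint" "example" {
  name                = "example-endpoint"
  location            = "westeurope" # assumed region
  resource_group_name = "yourresourcegroup"
  subnet_id           = data.azurerm_subnet.endpoint.id

  private_service_connection {
    name                           = "example-privateserviceconnection"
    private_connection_resource_id = var.private_link_service_id # hypothetical variable carrying the first stage's output
    is_manual_connection           = false
  }
}
```

Once this second stage has applied, the subnet's policy flag can be flipped in the first-stage file and applied there without touching the endpoint.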

Related

Terraform azurerm_storage_share_directory does not work with file share 'NFS'

We created an Azure storage account with the intention of creating an 'Azure File' to be mounted using NFS (default is SMB). Below is the Terraform code which creates a storage account, a file share and a private endpoint to the file share so that it can be mounted using NFS.
resource "azurerm_storage_account" "az_file_sa" {
  name                      = "abcdxxxyyyzzz"
  resource_group_name       = local.resource_group_name
  location                  = var.v_region
  account_tier              = "Premium"
  account_kind              = "FileStorage"
  account_replication_type  = "LRS"
  enable_https_traffic_only = false
}
resource "azurerm_storage_share" "file_share" {
  name                 = "fileshare"
  storage_account_name = azurerm_storage_account.az_file_sa.name
  quota                = 100
  enabled_protocol     = "NFS"
  depends_on           = [azurerm_storage_account.az_file_sa]
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
  name                = "fileshare-endpoint"
  location            = var.v_region
  resource_group_name = local.resource_group_name
  subnet_id           = azurerm_subnet.subnet2.id
  private_service_connection {
    name                           = "fileshare-endpoint-connection"
    private_connection_resource_id = azurerm_storage_account.az_file_sa.id
    is_manual_connection           = false
    subresource_names              = ["file"]
  }
  depends_on = [azurerm_storage_share.file_share]
}
This works fine. Now, if we try to create a directory on this file share using the Terraform code below
resource "azurerm_storage_share_directory" "xxx" {
  name                 = "dev"
  share_name           = "fileshare"
  storage_account_name = "abcdxxxyyyzzz"
}
the error we get is:
│ Error: checking for presence of existing Directory "dev" (File Share "fileshare" / Storage Account "abcdxxxyyyzzz" / Resource Group "RG_XXX_YO"): directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get "https://abcdxxxyyyzzz.file.core.windows.net/fileshare/dev?restype=directory": read tcp 192.168.1.3:61175->20.60.179.37:443: read: connection reset by peer
Clearly, this share is not accessible over the public HTTPS endpoint.
Is there a way to create a directory using 'azurerm_storage_share_directory' when the file share is of type 'NFS'?
We were able to mount NFS on a Linux VM (in the same virtual network) using the commands below, where 10.10.2.4 is the private IP of the NFS fileshare endpoint.
sudo mkdir -p /mount/abcdxxxyyyzzz/fileshare
sudo mount -t nfs 10.10.2.4:/abcdxxxyyyzzz/fileshare /mount/abcdxxxyyyzzz/fileshare -o vers=4,minorversion=1,sec=sys
regards, Yogesh
Full Terraform files:
vnet.tf
resource "azurerm_virtual_network" "vnet" {
  name                = "yogimogi-vnet"
  address_space       = ["10.10.0.0/16"]
  location            = local.region
  resource_group_name = local.resource_group_name
  depends_on          = [azurerm_resource_group.rg]
}
resource "azurerm_subnet" "subnet1" {
  name                 = "yogimogi-vnet-subnet1"
  resource_group_name  = local.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.10.1.0/24"]
  service_endpoints    = ["Microsoft.Storage"]
}
resource "azurerm_subnet" "subnet2" {
  name                 = "yogimogi-vnet-subnet2"
  resource_group_name  = local.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.10.2.0/24"]
  service_endpoints    = ["Microsoft.Storage"]
}
main.tf
resource "azurerm_resource_group" "rg" {
  name     = local.resource_group_name
  location = local.region
  tags = {
    description = "Resource group for some testing, Yogesh KETKAR"
    createdBy   = "AutomationEdge"
    createDate  = "UTC time: ${timestamp()}"
  }
}
resource "azurerm_storage_account" "sa" {
  name                      = local.storage_account_name
  resource_group_name       = local.resource_group_name
  location                  = local.region
  account_tier              = "Premium"
  account_kind              = "FileStorage"
  account_replication_type  = "LRS"
  enable_https_traffic_only = false
  depends_on                = [azurerm_resource_group.rg]
}
resource "azurerm_storage_share" "file_share" {
  name                 = "fileshare"
  storage_account_name = azurerm_storage_account.sa.name
  quota                = 100
  enabled_protocol     = "NFS"
  depends_on           = [azurerm_storage_account.sa]
}
resource "azurerm_storage_account_network_rules" "network_rule" {
  storage_account_id         = azurerm_storage_account.sa.id
  default_action             = "Allow"
  ip_rules                   = ["127.0.0.1"]
  virtual_network_subnet_ids = [azurerm_subnet.subnet2.id, azurerm_subnet.subnet1.id]
  bypass                     = ["Metrics"]
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
  name                = "fileshare-endpoint"
  location            = local.region
  resource_group_name = local.resource_group_name
  subnet_id           = azurerm_subnet.subnet2.id
  private_service_connection {
    name                           = "fileshare-endpoint-connection"
    private_connection_resource_id = azurerm_storage_account.sa.id
    is_manual_connection           = false
    subresource_names              = ["file"]
  }
  depends_on = [azurerm_storage_share.file_share]
}
resource "azurerm_storage_share_directory" "d1" {
  name                 = "d1"
  share_name           = azurerm_storage_share.file_share.name
  storage_account_name = azurerm_storage_account.sa.name
  depends_on           = [azurerm_storage_share.file_share, azurerm_private_endpoint.fileshare-endpoint]
}
The error is:
╷
│ Error: checking for presence of existing Directory "d1" (File Share "fileshare" / Storage Account "22xdkkdkdkdkdkdkdx22" / Resource Group "RG_Central_US_YOGIMOGI"): directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get
"https://22xdkkdkdkdkdkdkdx22.file.core.windows.net/fileshare/d1?restype=directory": read tcp 10.41.7.110:54240->20.209.18.37:443: read: connection reset by peer
│
│ with azurerm_storage_share_directory.d1,
│ on main.tf line 60, in resource "azurerm_storage_share_directory" "d1":
│ 60: resource "azurerm_storage_share_directory" "d1" {
│
╵
I tried to reproduce the same setup with a private endpoint and NFS enabled, and got errors because no network rule is created when NFS is enabled.
Since the virtual network provides access control for NFS, after creating the VNet you must configure a virtual network rule for the file share to be accessible.
resource "azurerm_virtual_network" "example" {
  name                = "ka-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  // tags = local.common_tags
}
resource "azurerm_subnet" "storage" {
  name                 = "ka-subnet"
  resource_group_name  = data.azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
}
resource "azurerm_storage_account" "az_file_sa" {
  name                      = "kaabdx"
  resource_group_name       = data.azurerm_resource_group.example.name
  location                  = data.azurerm_resource_group.example.location
  account_tier              = "Premium"
  account_kind              = "FileStorage"
  account_replication_type  = "LRS"
  enable_https_traffic_only = false
  // provide network rules
  network_rules {
    default_action = "Allow"
    ip_rules       = ["127.0.0.1/24"]
    // 23.45.1.0/24
    virtual_network_subnet_ids = [azurerm_subnet.storage.id]
  }
}
resource "azurerm_private_endpoint" "fileshare-endpoint" {
  name                = "fileshare-endpoint"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  subnet_id           = azurerm_subnet.storage.id
  private_service_connection {
    name                           = "fileshare-endpoint-connection"
    private_connection_resource_id = azurerm_storage_account.az_file_sa.id
    is_manual_connection           = false
    subresource_names              = ["file"]
  }
  depends_on = [azurerm_storage_share.file_share]
}
resource "azurerm_storage_share" "file_share" {
  name                 = "fileshare"
  storage_account_name = azurerm_storage_account.az_file_sa.name
  quota                = 100
  enabled_protocol     = "NFS"
  depends_on           = [azurerm_storage_account.az_file_sa]
}
resource "azurerm_storage_share_directory" "mynewfileshare" {
  name                 = "kadev"
  share_name           = azurerm_storage_share.file_share.name
  storage_account_name = azurerm_storage_account.az_file_sa.name
}
Regarding the error that you got:
Error: checking for presence of existing Directory ... directories.Client#Get: Failure sending request: StatusCode=0 -- Original Error: Get "https://abcdxxxyyyzzz.file.core.windows.net/fileshare/dev?restype=directory": read tcp 192.168.1.3:61175->20.60.179.37:443: read: connection reset by peer
Please note that:
VNet peering will not give access to the file share. Virtual network peering with the virtual network hosting the private endpoint gives NFS share access to clients in the peered virtual networks, but each virtual network or subnet must be individually added to the allowlist.
A "checking for presence of existing Directory" error also occurs if Terraform is not initialized. Run terraform init and then try terraform plan and terraform apply.
References:
Cannot create azurerm_storage_container in azurerm_storage_account that uses network_rules · GitHub
NFS Azure file share problems | learn.microsoft.com

Deployment of Azure order issue Terraform

When deploying Azure resources with Terraform Cloud I'm experiencing unexpected behaviour.
It looks like the order of deployment, or the wait time between the resources, is failing.
The error says that the deployment of the network interface failed because the subnet is not created.
I already tried to implement depends_on, but this doesn't seem to help at all.
# Create a virtual network within the core resource group
resource "azurerm_virtual_network" "avd_default" {
  name                = "Vnet_${var.prefix}_Core-Prod"
  resource_group_name = azurerm_resource_group.avd_default_core_rg.name
  location            = azurerm_resource_group.avd_default_core_rg.location
  address_space       = [var.avd_address_space]
}
# Create a Core internal subnet within vNet
resource "azurerm_subnet" "avd_default_core_internal" {
  name                 = "Subnet_${var.prefix}_Core-Prod"
  resource_group_name  = azurerm_resource_group.avd_default_core_rg.name
  virtual_network_name = azurerm_virtual_network.avd_default.name
  address_prefixes     = [var.core_address_prefixes]
  depends_on = [
    azurerm_virtual_network.avd_default
  ]
}
# Create a Core external subnet within vNet
resource "azurerm_subnet" "avd_default_core_external" {
  name                 = "Subnet_${var.prefix}_Internet-Prod"
  resource_group_name  = azurerm_resource_group.avd_default_core_rg.name
  virtual_network_name = azurerm_virtual_network.avd_default.name
  address_prefixes     = [var.internet_address_prefixes]
  depends_on = [
    azurerm_virtual_network.avd_default
  ]
}
# Create the Network interface for DC01
resource "azurerm_network_interface" "avd_default_dc01" {
  name                = "dc01-nic"
  location            = azurerm_resource_group.avd_default_core_rg.location
  resource_group_name = azurerm_resource_group.avd_default_core_rg.name
  dns_servers         = [var.private_ip_dc01, "8.8.8.8"]
  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = azurerm_subnet.avd_default_core_internal.id
    private_ip_address_allocation = "Static"
    private_ip_address            = var.private_ip_dc01
  }
  depends_on = [
    azurerm_subnet.avd_default_core_internal
  ]
}
# Create DC01 Windows Server 2022
resource "azurerm_windows_virtual_machine" "avd_default_dc01" {
  name                  = "${var.prefix}-dc01"
  resource_group_name   = azurerm_resource_group.avd_default_core_rg.name
  location              = azurerm_resource_group.avd_default_core_rg.location
  size                  = var.dc01_vm_size
  admin_username        = "username"
  admin_password        = var.dc01_admin_password
  network_interface_ids = [azurerm_network_interface.avd_default_dc01.id]
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
    disk_size_gb         = "128"
  }
  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-datacenter-azure-edition"
    version   = "latest"
  }
}
Error written below:
Error: Subnet "Subnet_gro_Core-Prod" (Virtual Network "Vnet_gro_Core-Prod" / Resource Group "RG_gro_Core-Prod") was not found!
with azurerm_subnet_route_table_association.avd_default_wg
on main.tf line 316, in resource "azurerm_subnet_route_table_association" "avd_default_wg":
resource "azurerm_subnet_route_table_association" "avd_default_wg" {
Error: creating Network Interface: (Name "dc01-nic" / Resource Group "RG_gro_Core-Prod"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Core-Prod/providers/Microsoft.Network/virtualNetworks/Vnet_gro_Core-Prod/subnets/Subnet_gro_Core-Prod referenced by resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Core-Prod/providers/Microsoft.Network/networkInterfaces/dc01-nic was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
with azurerm_network_interface.avd_default_dc01
on main.tf line 78, in resource "azurerm_network_interface" "avd_default_dc01":
resource "azurerm_network_interface" "avd_default_dc01" {
Error: creating Network Interface: (Name "wg-nic-internal" / Resource Group "RG_gro_Watchguard-Prod"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Core-Prod/providers/Microsoft.Network/virtualNetworks/Vnet_gro_Core-Prod/subnets/Subnet_gro_Core-Prod referenced by resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Watchguard-Prod/providers/Microsoft.Network/networkInterfaces/wg-nic-internal was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
with azurerm_network_interface.avd_default_wg_internal
on main.tf line 156, in resource "azurerm_network_interface" "avd_default_wg_internal":
resource "azurerm_network_interface" "avd_default_wg_internal" {
Running the Terraform deploy command a second time after these errors, it works as expected.

Terraform source error - Error: Failed to query available provider packages

I am trying to deploy a new infrastructure using Terraform (for the first time) and I am getting the following error. I've tried everything but nothing seems to fix the issue.
It looks like it is asking for a provider hashicorp/azure?
Can anyone help please?
Initializing provider plugins...
- Finding latest version of hashicorp/azure...
- Finding hashicorp/azurerm versions matching "2.98.0"...
- Installing hashicorp/azurerm v2.98.0...
- Installed hashicorp/azurerm v2.98.0 (signed by HashiCorp)
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/azure: provider registry registry.terraform.io does not have a provider named registry.terraform.io/hashicorp/azure
│
│ Did you intend to use terraform-providers/azure? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on hashicorp/azure, run the following command:
│ terraform providers
╵
lucas@Azure:~$ terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/azurerm] 2.98.0
└── provider[registry.terraform.io/hashicorp/azure]
The code that I am using to create the infrastructure is below:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.98.0"
    }
  }
}
provider "azurerm" {
  features {}
  subscription_id = "910910be-a61e-4e1f-a72a-7e43456c0836"
}
# Create a resource group
resource "azurerm_resource_group" "rg" {
  name     = "default"
  location = "West Europe"
}
# Create a virtual network
resource "azurerm_virtual_network" "vpc" {
  name                = "default-network"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  address_space       = ["10.0.0.0/16"]
}
# Create frontend subnet
resource "azurerm_subnet" "subnet_frontend" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vpc.name
  address_prefixes     = ["10.0.1.0/24"]
}
# Create backend subnet
resource "azurerm_subnet" "subnet_backend" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vpc.name
  address_prefixes     = ["10.0.2.0/24"]
}
# Create frontend network interface
resource "azurerm_network_interface" "frontend_nic" {
  name                = "frontend_nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet_frontend.id
    private_ip_address_allocation = "Dynamic"
  }
}
# Create backend network interface
resource "azurerm_network_interface" "backend_nic" {
  name                = "backend_nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet_backend.id
    private_ip_address_allocation = "Dynamic"
  }
}
# Create frontend VM based on module
resource "azure_instance" "frontend" {
  source   = "./vm"
  name     = "frontend"
  rg       = module.azurerm_resource_group.rg.name
  location = module.azurerm_resource_group.rg.location
  nic      = module.azurerm_network_interface.frontend_nic
}
# Create backend VM based on module
resource "azure_instance" "backend" {
  source   = "./vm"
  name     = "backend"
  rg       = module.azurerm_resource_group.rg.name
  location = module.azurerm_resource_group.rg.location
  nic      = module.azurerm_network_interface.backend_nic
}
My Terraform version is v1.1.5 and I am using it via the Azure CLI bash shell.
Any idea of what is causing this issue and how to fix it?
Thanks!
This often happens when one accidentally specifies "hashicorp/azure" or "hashicorp/azurem" instead of "hashicorp/azurerm" in the required_providers block. In your case, the two resource "azure_instance" blocks are the culprit: Terraform infers the provider from the prefix of the resource type name, so azure_instance makes it look for a provider named hashicorp/azure, which does not exist. Those blocks also look like they were meant to be module calls rather than resources, since they set source = "./vm". Did you also check the "vm" module itself? There might be an erroneous "hashicorp/azure" specified there.
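A sketch of what those two blocks would look like as module calls, assuming the ./vm module accepts name, rg, location, and nic input variables (the variable names here mirror the question and are not verified against the actual module):

```terraform
# Hypothetical module calls replacing the "azure_instance" resource blocks.
# "source" is only valid inside a module block, never inside a resource.
module "frontend" {
  source   = "./vm"
  name     = "frontend"
  rg       = azurerm_resource_group.rg.name
  location = azurerm_resource_group.rg.location
  nic      = azurerm_network_interface.frontend_nic.id
}

module "backend" {
  source   = "./vm"
  name     = "backend"
  rg       = azurerm_resource_group.rg.name
  location = azurerm_resource_group.rg.location
  nic      = azurerm_network_interface.backend_nic.id
}
```

With the resource "azure_instance" blocks removed, terraform providers should no longer list hashicorp/azure, provided the ./vm module itself only requires azurerm.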

Subnet not creating

I keep getting this error, which seems weird to me. Is there a fix for this?
data "azurerm_resource_group" "rg" {
  name = var.resource_group_name
  #environment = var.environment
}
resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = var.address_space
}
resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.resource_group_name
  virtual_network_name = var.vnet_name
  address_prefixes     = ["10.0.0.0/24"]
  service_endpoints    = ["Microsoft.Sql"]
  delegation {
    name = "delegation"
    service_delegation {
      name    = "Microsoft.ContainerInstance/containerGroups"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
    }
  }
}
I keep getting this error
azurerm_subnet.subnet: Creating...
azurerm_virtual_network.vnet: Creating...
azurerm_virtual_network.vnet: Creation complete after 5s [id=/subscriptions/e4da9536-6759-4506-b0cf-10c70facd033/resourceGroups/rg-sagar/providers/Microsoft.Network/virtualNetworks/vnet]
╷
│ Error: creating Subnet: (Name "subnet" / Virtual Network Name "vnet" / Resource Group "rg-sagar"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ResourceNotFound" Message="The Resource 'Microsoft.Network/virtualNetworks/vnet' under resource group 'rg-sagar' was not found. For more details please
go to https://aka.ms/ARMResourceNotFoundFix"│
│ with azurerm_subnet.subnet,
│ on main.tf line 14, in resource "azurerm_subnet" "subnet":
│ 14: resource "azurerm_subnet" "subnet" {
│
Even after the vnet is created, it is unable to create the subnet. Any idea how I can make this work?
You need to use virtual_network_name = azurerm_virtual_network.vnet.name instead of virtual_network_name = var.vnet_name.
With virtual_network_name = var.vnet_name in the subnet resource block, Terraform creates the subnet and the vnet simultaneously, which does not work in Azure: the subnet depends on the vnet, so the vnet must be created first. Referencing azurerm_virtual_network.vnet.name tells Terraform to use the vnet it has just created.
Terraform Code
provider "azurerm" {
  features {}
}
data "azurerm_resource_group" "rg" {
  name = var.resource_group_name
  #environment = var.environment
}
resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  location            = data.azurerm_resource_group.rg.location
  resource_group_name = var.resource_group_name
  address_space       = var.address_space
}
resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.0.0/24"]
  service_endpoints    = ["Microsoft.Sql"]
  delegation {
    name = "delegation"
    service_delegation {
      name    = "Microsoft.ContainerInstance/containerGroups"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
    }
  }
}
I think this question requires a bit more explanation, since there is nothing wrong with the code. Terraform is trying to be smart about the way it creates resources, so it tries to create as much as it can in one run. This is why there is an option called -parallelism:
-parallelism=n Limit the number of parallel resource operations.
Defaults to 10.
This means that when running terraform apply, Terraform will try to run 10 resource operations including resource creation. In your case, it will try to create both the vnet and the subnet resource (parallelism applies in apply, plan and destroy). However, since you are using the same variable in both resources (var.vnet_name), Terraform is not aware that there are dependencies between the two. The way you have structured your code now would work if you were to create the vnet first and add the subnet resource after the vnet is created. Or if you are feeling adventurous you could set the parallelism to 1. Since you probably do not want that, the best way to tell Terraform in which order to create stuff is by using resource dependencies. Terraform has a concept of implicit [1] and explicit [2] dependencies. Dependencies help Terraform decide what needs to be created, based on the graph it creates [3].
There are two options in your case:
Create an implicit dependency between vnet and subnet
Create an explicit dependency between vnet and subnet
As using depends_on (or explicit dependency) is advised only in cases where there is not another way to tell Terraform that two resources are interdependent, the best way to do it is by using the implicit dependency:
data "azurerm_resource_group" "rg" {
  name = var.resource_group_name
}
resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = var.address_space
}
resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name # <-- implicit dependency
  address_prefixes     = ["10.0.0.0/24"]
  service_endpoints    = ["Microsoft.Sql"]
  delegation {
    name = "delegation"
    service_delegation {
      name    = "Microsoft.ContainerInstance/containerGroups"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
    }
  }
}
The vnet resource exports some attributes after it is created [4], including the name attribute. This helps with creating the implicit dependency: by referencing a resource and one of the attributes that is available after the resource is created, you are telling Terraform that it first needs to create the vnet and only after it is available it can start with subnet creation.
[1] https://www.terraform.io/language/resources/behavior#resource-dependencies
[2] https://www.terraform.io/language/meta-arguments/depends_on
[3] https://www.terraform.io/internals/graph#resource-graph
[4] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network#attributes-reference

Terraform deletes Azure resources in subsequent 'apply' without any config change

I was trying to test the scenario of handling external changes to existing resources and then syncing my HCL config to the current state in the next apply. I could achieve that using 'taint' for the modified resource, but TF deleted other resources which were deployed during the first 'apply'. Here is the module code for a VNet with 3 subnets(prod,dmz and app) and 3 NSGs associated. And I tested with modifying one of the NSGs but TF deleted all of the subnets-
VNET-
resource "azurerm_virtual_network" "BP-VNet" {
  name                = var.Vnetname
  location            = var.location
  resource_group_name = var.rgname
  address_space       = var.vnetaddress
  subnet {
    name           = "GatewaySubnet"
    address_prefix = "10.0.10.0/27"
  }
}
Subnet -
resource "azurerm_subnet" "subnets" {
  count                = var.subnetcount
  name                 = "snet-prod-${lookup(var.snettype, count.index, "default")}-001"
  address_prefixes     = ["10.0.${count.index + 1}.0/24"]
  resource_group_name  = var.rgname
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
}
NSGs-
resource "azurerm_network_security_group" "nsgs" {
  count               = var.subnetcount
  name                = "nsg-prod-${lookup(var.snettype, count.index, "default")}"
  resource_group_name = var.rgname
  location            = var.location
  --------
}
BastionSubnet-
resource "azurerm_subnet" "bastionsubnet" {
  name                 = "AzureBastionSubnet"
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
  resource_group_name  = var.rgname
  address_prefixes     = ["10.0.5.0/27"]
}
The end result of the second apply is a VNet with just the GatewaySubnet. It should not have deleted the other 4 subnets. Why is this happening?
The solution may surprise you: separate the GatewaySubnet from the azurerm_virtual_network block into its own azurerm_subnet block. The code looks like this:
resource "azurerm_subnet" "gateway" {
  name                 = "GatewaySubnet"
  resource_group_name  = var.rgname
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
  address_prefixes     = ["10.0.10.0/27"]
}
The likely reason is that an inline subnet block inside azurerm_virtual_network makes that resource authoritative for the VNet's entire subnet list, so mixing it with standalone azurerm_subnet resources causes the VNet resource to remove any subnets it does not declare inline on a subsequent apply.
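A minimal sketch of the restructured module, assuming the variable names from the question: every subnet becomes a standalone azurerm_subnet resource and the VNet declares none inline, so the two subnet-management mechanisms no longer conflict.

```terraform
# VNet with no inline subnet blocks: the azurerm_subnet resources below are
# now the single source of truth for this VNet's subnets.
resource "azurerm_virtual_network" "BP-VNet" {
  name                = var.Vnetname
  location            = var.location
  resource_group_name = var.rgname
  address_space       = var.vnetaddress
}

# GatewaySubnet moved out of the VNet block into its own resource.
resource "azurerm_subnet" "gateway" {
  name                 = "GatewaySubnet"
  resource_group_name  = var.rgname
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
  address_prefixes     = ["10.0.10.0/27"]
}

# The prod/dmz/app subnets, unchanged from the question.
resource "azurerm_subnet" "subnets" {
  count                = var.subnetcount
  name                 = "snet-prod-${lookup(var.snettype, count.index, "default")}-001"
  resource_group_name  = var.rgname
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
  address_prefixes     = ["10.0.${count.index + 1}.0/24"]
}
```

With this layout, tainting and re-applying one NSG no longer gives the VNet resource a reason to rewrite its subnet list.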