I keep getting this weird error; is there a fix for it?
data "azurerm_resource_group" "rg" {
name = var.resource_group_name
#environment = var.environment
}
resource "azurerm_virtual_network" "vnet" {
name = var.vnet_name
location = var.location
resource_group_name = var.resource_group_name
address_space = var.address_space
}
resource "azurerm_subnet" "subnet" {
name = var.subnet_name
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = ["10.0.0.0/24"]
service_endpoints = ["Microsoft.Sql"]
delegation {
name = "delegation"
service_delegation {
name = "Microsoft.ContainerInstance/containerGroups"
actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
}
}
}
I keep getting this error
azurerm_subnet.subnet: Creating...
azurerm_virtual_network.vnet: Creating...
azurerm_virtual_network.vnet: Creation complete after 5s [id=/subscriptions/e4da9536-6759-4506-b0cf-10c70facd033/resourceGroups/rg-sagar/providers/Microsoft.Network/virtualNetworks/vnet]
╷
│ Error: creating Subnet: (Name "subnet" / Virtual Network Name "vnet" / Resource Group "rg-sagar"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ResourceNotFound" Message="The Resource 'Microsoft.Network/virtualNetworks/vnet' under resource group 'rg-sagar' was not found. For more details please
│ go to https://aka.ms/ARMResourceNotFoundFix"
│ with azurerm_subnet.subnet,
│ on main.tf line 14, in resource "azurerm_subnet" "subnet":
│ 14: resource "azurerm_subnet" "subnet" {
│
Even after the vnet is created, it is unable to create the subnet. Any idea how I can fix this?
You need to use virtual_network_name = azurerm_virtual_network.vnet.name instead of virtual_network_name = var.vnet_name.
Because with virtual_network_name = var.vnet_name in the subnet resource block, Terraform creates the subnet and the vnet simultaneously, and that does not work in Azure: the subnet depends on the vnet, so the vnet has to be created first. Using virtual_network_name = azurerm_virtual_network.vnet.name makes the subnet reference the vnet created above and wait for it.
Terraform Code
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "rg" {
name = var.resource_group_name
#environment = var.environment
}
resource "azurerm_virtual_network" "vnet" {
name = var.vnet_name
location = data.azurerm_resource_group.rg.location
resource_group_name = var.resource_group_name
address_space = var.address_space
}
resource "azurerm_subnet" "subnet"{
name = var.subnet_name
resource_group_name = var.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.0.0.0/24"]
service_endpoints = ["Microsoft.Sql"]
delegation {
name = "delegation"
service_delegation {
name = "Microsoft.ContainerInstance/containerGroups"
actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
}
}
}
I think this question requires a bit more explanation, since there is nothing wrong with the code. Terraform is trying to be smart about the way it creates resources, so it tries to create as much as it can in one run. This is why there is an option called -parallelism:
-parallelism=n Limit the number of parallel resource operations.
Defaults to 10.
This means that when running terraform apply, Terraform will try to run up to 10 resource operations in parallel, including resource creation. In your case, it will try to create both the vnet and the subnet resource (parallelism applies to apply, plan and destroy). However, since you are using the same variable in both resources (var.vnet_name), Terraform is not aware that there is a dependency between the two. The way you have structured your code now would work if you created the vnet first and added the subnet resource after the vnet exists. Or, if you are feeling adventurous, you could set the parallelism to 1. Since you probably do not want that, the best way to tell Terraform in which order to create things is by using resource dependencies. Terraform has a concept of implicit [1] and explicit [2] dependencies. Dependencies help Terraform decide what needs to be created first, based on the graph it builds [3].
There are two options in your case:
Create an implicit dependency between vnet and subnet
Create an explicit dependency between vnet and subnet
As using depends_on (an explicit dependency) is advised only in cases where there is no other way to tell Terraform that two resources are interdependent, the best way here is the implicit dependency:
data "azurerm_resource_group" "rg" {
name = var.resource_group_name
}
resource "azurerm_virtual_network" "vnet" {
name = var.vnet_name
location = var.location
resource_group_name = var.resource_group_name
address_space = var.address_space
}
resource "azurerm_subnet" "subnet" {
name = var.subnet_name
resource_group_name = var.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name # <-- implicit dependency
address_prefixes = ["10.0.0.0/24"]
service_endpoints = ["Microsoft.Sql"]
delegation {
name = "delegation"
service_delegation {
name = "Microsoft.ContainerInstance/containerGroups"
actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
}
}
}
The vnet resource exports some attributes after it is created [4], including the name attribute. This is what makes the implicit dependency work: by referencing a resource and one of the attributes that is only available after that resource is created, you are telling Terraform that it first needs to create the vnet, and only after it is available can it start creating the subnet.
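For completeness, the explicit version would look roughly like the sketch below. It is shown only for comparison; since the attribute reference above already gives Terraform the ordering information, depends_on adds nothing here. The sketch assumes the same variables as your original code and deliberately keeps var.vnet_name, so the ordering is carried entirely by depends_on (delegation block omitted for brevity):

resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.resource_group_name
  virtual_network_name = var.vnet_name # no implicit dependency through this reference
  address_prefixes     = ["10.0.0.0/24"]
  service_endpoints    = ["Microsoft.Sql"]

  # explicit dependency: tells Terraform to create the vnet before this subnet
  depends_on = [azurerm_virtual_network.vnet]
}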
[1] https://www.terraform.io/language/resources/behavior#resource-dependencies
[2] https://www.terraform.io/language/meta-arguments/depends_on
[3] https://www.terraform.io/internals/graph#resource-graph
[4] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network#attributes-reference
Related
When deploying Azure resources with Terraform Cloud I'm experiencing unexpected behaviour.
It looks like the order of deployment, or the wait time between the resources, is the problem.
The error says that the deployment of the network interface failed because the subnet is not created.
I already tried to implement depends_on, but this doesn't seem to help at all.
# Create a virtual network within the core resource group
resource "azurerm_virtual_network" "avd_default" {
  name                = "Vnet_${var.prefix}_Core-Prod"
  resource_group_name = azurerm_resource_group.avd_default_core_rg.name
  location            = azurerm_resource_group.avd_default_core_rg.location
  address_space       = [var.avd_address_space]
}

# Create a Core internal subnet within vNet
resource "azurerm_subnet" "avd_default_core_internal" {
  name                 = "Subnet_${var.prefix}_Core-Prod"
  resource_group_name  = azurerm_resource_group.avd_default_core_rg.name
  virtual_network_name = azurerm_virtual_network.avd_default.name
  address_prefixes     = [var.core_address_prefixes]

  depends_on = [
    azurerm_virtual_network.avd_default
  ]
}

# Create a Core external subnet within vNet
resource "azurerm_subnet" "avd_default_core_external" {
  name                 = "Subnet_${var.prefix}_Internet-Prod"
  resource_group_name  = azurerm_resource_group.avd_default_core_rg.name
  virtual_network_name = azurerm_virtual_network.avd_default.name
  address_prefixes     = [var.internet_address_prefixes]

  depends_on = [
    azurerm_virtual_network.avd_default
  ]
}

# Create the Network interface for DC01
resource "azurerm_network_interface" "avd_default_dc01" {
  name                = "dc01-nic"
  location            = azurerm_resource_group.avd_default_core_rg.location
  resource_group_name = azurerm_resource_group.avd_default_core_rg.name
  dns_servers         = [var.private_ip_dc01, "8.8.8.8"]

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = azurerm_subnet.avd_default_core_internal.id
    private_ip_address_allocation = "Static"
    private_ip_address            = var.private_ip_dc01
  }

  depends_on = [
    azurerm_subnet.avd_default_core_internal
  ]
}

# Create DC01 Windows Server 2022
resource "azurerm_windows_virtual_machine" "avd_default_dc01" {
  name                  = "${var.prefix}-dc01"
  resource_group_name   = azurerm_resource_group.avd_default_core_rg.name
  location              = azurerm_resource_group.avd_default_core_rg.location
  size                  = var.dc01_vm_size
  admin_username        = "username"
  admin_password        = var.dc01_admin_password
  network_interface_ids = [azurerm_network_interface.avd_default_dc01.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
    disk_size_gb         = "128"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2022-datacenter-azure-edition"
    version   = "latest"
  }
}
Error written below:
Error: Subnet "Subnet_gro_Core-Prod" (Virtual Network "Vnet_gro_Core-Prod" / Resource Group "RG_gro_Core-Prod") was not found!
with azurerm_subnet_route_table_association.avd_default_wg
on main.tf line 316, in resource "azurerm_subnet_route_table_association" "avd_default_wg":
resource "azurerm_subnet_route_table_association" "avd_default_wg" {
Error: creating Network Interface: (Name "dc01-nic" / Resource Group "RG_gro_Core-Prod"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Core-Prod/providers/Microsoft.Network/virtualNetworks/Vnet_gro_Core-Prod/subnets/Subnet_gro_Core-Prod referenced by resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Core-Prod/providers/Microsoft.Network/networkInterfaces/dc01-nic was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
with azurerm_network_interface.avd_default_dc01
on main.tf line 78, in resource "azurerm_network_interface" "avd_default_dc01":
resource "azurerm_network_interface" "avd_default_dc01" {
Error: creating Network Interface: (Name "wg-nic-internal" / Resource Group "RG_gro_Watchguard-Prod"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Core-Prod/providers/Microsoft.Network/virtualNetworks/Vnet_gro_Core-Prod/subnets/Subnet_gro_Core-Prod referenced by resource /subscriptions/xxxb91c5-4fe5-44af-9c98-cdd8e73ee240/resourceGroups/RG_gro_Watchguard-Prod/providers/Microsoft.Network/networkInterfaces/wg-nic-internal was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
with azurerm_network_interface.avd_default_wg_internal
on main.tf line 156, in resource "azurerm_network_interface" "avd_default_wg_internal":
resource "azurerm_network_interface" "avd_default_wg_internal" {
When I run the deployment a second time after these errors, it works as expected.
I have a pipeline that downloads standard Terraform modules and creates resources.
"Module-ResourceGroup" deploys resource group.
"Module-Vnet" which deploys vnet.
"Module-Subnet" which deploys subnets.
My problem is that when I kick off the pipeline for the first time, it fails because Module-Subnet gives me an error message that the Vnet does not exist.
However, when I run the same pipeline a second time, my subnets get deployed without any issues, because the Vnet was created during the first run.
I guess I have two solutions:
depends_on, where I can say that my subnet module is dependent on the vnet module.
Introduce a wait of 3 minutes in the subnet module before it gets executed.
Q1. Why is this happening, given that, as per Terraform, "Most of the time, Terraform infers dependencies between resources based on the configuration given" (https://learn.hashicorp.com/tutorials/terraform/dependencies)?
Is anything wrong with the way I have written the modules?
Q2. Which is the better solution: depends_on or introducing a wait?
Q3. Is there any other way to fix it?
Below are my modules.
Module-ResourceGroup/main.tf
resource "azurerm_resource_group" "my-resourcegroup" {
name = format("%s-%s",var.resource_group_name,var.env)
location = var.location
}
Module-Vnet/main.tf
resource "azurerm_virtual_network" "my-vnet" {
name = format("%s-%s",var.vnet_name,var.env)
resource_group_name = format("%s-%s",var.resource_group_name,var.env)
location = var.location
address_space = var.address_space
}
Module-Subnet/main.tf
resource "azurerm_subnet" "my-subnet" {
for_each = var.subnetsconfig
name = format("%s-%s",each.key,var.env)
address_prefixes = each.value["address_prefixes"]
virtual_network_name = format("%s-%s",var.vnet_name,var.env)
resource_group_name = format("%s-%s",var.resource_group_name,var.env)
}
If you use the output of a resource as the input of another resource, Terraform will understand it as an implicit dependency. For example (since you did not post all of your code):
Module-ResourceGroup/main.tf
resource "azurerm_resource_group" "my-resourcegroup" {
name = format("%s-%s",var.resource_group_name,var.env)
location = var.location
}
Module-Vnet/main.tf
resource "azurerm_virtual_network" "my-vnet" {
name = format("%s-%s",var.vnet_name,var.env)
resource_group_name = azurerm_resource_group.my-resourcegroup.name
location = var.location
address_space = var.address_space
}
Module-Subnet/main.tf
resource "azurerm_subnet" "my-subnet" {
for_each = var.subnetsconfig
name = format("%s-%s",each.key,var.env)
address_prefixes = each.value["address_prefixes"]
virtual_network_name = azurerm_virtual_network.my-vnet.name
resource_group_name = azurerm_resource_group.my-resourcegroup.name
}
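One caveat, in case these really are three separate modules as the folder names suggest: resources in different modules cannot reference each other directly by resource address, so the references above only work if everything lives in the same module. Across module boundaries, the same implicit dependency is normally wired through a module output and an input variable. A rough sketch under that assumption (the module calls, output and variable names below are illustrative, not taken from your code):

Module-Vnet/outputs.tf
# expose the (already env-suffixed) vnet name so callers can depend on it
output "vnet_name" {
  value = azurerm_virtual_network.my-vnet.name
}

Root main.tf
module "vnet" {
  source              = "./Module-Vnet"
  vnet_name           = var.vnet_name
  env                 = var.env
  resource_group_name = var.resource_group_name
  location            = var.location
  address_space       = var.address_space
}

module "subnet" {
  source              = "./Module-Subnet"
  subnetsconfig       = var.subnetsconfig
  env                 = var.env
  resource_group_name = var.resource_group_name
  # passing the vnet module's output is what creates the implicit
  # dependency, so the subnets wait for the vnet
  vnet_name = module.vnet.vnet_name
}

Module-Subnet/main.tf would then use the incoming name as-is, since it already carries the env suffix:
resource "azurerm_subnet" "my-subnet" {
  for_each             = var.subnetsconfig
  name                 = format("%s-%s", each.key, var.env)
  address_prefixes     = each.value["address_prefixes"]
  virtual_network_name = var.vnet_name
  resource_group_name  = format("%s-%s", var.resource_group_name, var.env)
}

If no suitable output exists to reference, Terraform 0.13 and later also accept depends_on = [module.vnet] on the module "subnet" block itself; that is the explicit fallback, with the same caveats as depends_on on resources.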
I was trying to test the scenario of handling external changes to existing resources and then syncing my HCL config to the current state in the next apply. I could achieve that using 'taint' for the modified resource, but Terraform deleted other resources that were deployed during the first 'apply'. Here is the module code for a VNet with 3 subnets (prod, dmz and app) and 3 associated NSGs. I tested by modifying one of the NSGs, but Terraform deleted all of the subnets:
VNET-
resource "azurerm_virtual_network" "BP-VNet" {
name = var.Vnetname
location = var.location
resource_group_name = var.rgname
address_space = var.vnetaddress
subnet {
name = "GatewaySubnet"
address_prefix = "10.0.10.0/27"
}
}
Subnet -
resource "azurerm_subnet" "subnets" {
count = var.subnetcount
name = "snet-prod-${lookup(var.snettype, count.index, "default")}-001"
address_prefixes = ["10.0.${count.index+1}.0/24"]
resource_group_name = var.rgname
virtual_network_name = azurerm_virtual_network.BP-VNet.name
}
NSGs-
resource "azurerm_network_security_group" "nsgs" {
count = var.subnetcount
name = "nsg-prod-${lookup(var.snettype, count.index, "default")}"
resource_group_name = var.rgname
location = var.location
--------
}
BastionSubnet-
resource "azurerm_subnet" "bastionsubnet" {
name = "AzureBastionSubnet"
virtual_network_name = azurerm_virtual_network.BP-VNet.name
resource_group_name = var.rgname
address_prefixes = [ "10.0.5.0/27" ]
}
The end result of the second apply is a VNet with just the GatewaySubnet. It should not have deleted the rest of the 4 subnets. Why is this happening?
The solution may confuse you. You can separate the GatewaySubnet from the azurerm_virtual_network block into an azurerm_subnet block. The code looks like this:
resource "azurerm_subnet" "gateway" {
name = "GatewaySubnet"
resource_group_name = var.rgname
virtual_network_name = azurerm_virtual_network.BP-VNet.name
address_prefixes = ["10.0.10.0/27"]
}
I don't know the exact reason, but it solves your issue. (The azurerm provider documentation does note that in-line subnet blocks inside azurerm_virtual_network cannot be used in conjunction with standalone azurerm_subnet resources, as this creates conflicting subnet configurations and subnets get overwritten.)
Problem
I discovered that I can integrate Application Security Groups (ASG) into a Network Interface when using the azurestack resource provider, but I cannot do so when using the azurerm resource provider.
My Understanding
I do not understand why I cannot. I actually do not understand the difference between Azure Stack and Azure RM. This article suggests that Azure Stack is for hybrid deployments and Azure RM (or Azure Provider) is for pure cloud deployments.
All the previous work that I and other colleagues have done has been with azurerm. I would prefer to stick with azurerm if I could. Or, if possible, I would like to "mix and match" azurerm and azurestack, using azurestack only when I have to, like in this case. But I'd really like to know why some things are only possible with one provider, since they both should have the same offering, with respect to pure Azure services.
Any Ideas?
Ultimately, though, I am just trying to solve the problem of attaching a network interface to a VM, where the NIC has associated ASGs. I would like to do this with azurerm if possible. I can do it with azurestack, as long as azurestack is compatible with other services launched through azurerm.
There is no need to use azurestack to associate a NIC with ASGs.
The Terraform azurerm provider has a resource called azurerm_network_interface_application_security_group_association:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_application_security_group_association
You just need to create the ASG and associate it with the NIC.
Example:
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.1.0/24"]
}
resource "azurerm_application_security_group" "example" {
name = "example-asg"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_network_interface" "example" {
name = "example-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_network_interface_application_security_group_association" "example" {
network_interface_id = azurerm_network_interface.example.id
application_security_group_id = azurerm_application_security_group.example.id
}
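And since the end goal is to attach that NIC, with its ASG association, to a VM, the attachment itself is just the NIC ID on the VM resource. A minimal sketch, assuming a small Linux VM; the size, image and credentials below are placeholders rather than part of the provider example above:

resource "azurerm_linux_virtual_machine" "example" {
  name                = "example-vm"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = "Standard_B2s"

  admin_username                  = "adminuser"
  admin_password                  = "ChangeMe123!" # placeholder; prefer admin_ssh_key in practice
  disable_password_authentication = false

  # attaching the NIC is all that is needed here; the ASG association
  # declared above follows the NIC
  network_interface_ids = [azurerm_network_interface.example.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}

The azurestack provider is only needed when targeting an actual Azure Stack Hub endpoint; for public Azure, azurerm covers this scenario.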
In my main terraform file I have:
resource "azurerm_resource_group" "rg" {
name = var.rg_name
location = var.location
}
resource "azurerm_public_ip" "public_ip" {
name = "PublicIP"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
domain_name_label = var.domain_name_label
allocation_method = "Dynamic"
}
And in my outputs file I have:
data "azurerm_public_ip" "public_ip" {
name = "PublicIP"
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_resource_group.rg, azurerm_public_ip.public_ip]
}
output "public_ip" {
value = data.azurerm_public_ip.public_ip.ip_address
}
All the resources, including the IP, get created; however, the output is blank. How can I fix this?
Make sure output.tf contains only output blocks and main.tf contains resource blocks.
The following works just fine for me:
Main.tf
resource "azurerm_resource_group" "example" {
name = "resourceGroup1"
location = "West US"
}
resource "azurerm_public_ip" "example" {
name = "acceptanceTestPublicIp1"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Static"
tags = {
environment = "Production"
}
}
Output.tf
output "azurerm_public_ip" {
value = azurerm_public_ip.example.ip_address
}
In case you want to have a dependency between resources, use depends_on inside the resource block.
For example:
depends_on = [azurerm_resource_group.example]
Steps to reproduce:
terraform init
terraform plan
terraform apply
terraform output
Update-
The reason you get a blank public IP is that you declared allocation_method = "Dynamic".
From the docs:
Note Dynamic - Public IP Addresses aren't allocated until they're assigned to a
resource (such as a Virtual Machine or a Load Balancer) by design
within Azure.
Full working example with dynamic allocation.
I had the same issue. The actual problem seems to be the dynamic allocation. The IP address is not known until it is actually used by a VM.
In my case, I could solve the issue by adding the VM (azurerm_linux_virtual_machine.testvm) to the depends_on list in the data source:
data "azurerm_public_ip" "public_ip" {
name = "PublicIP"
resource_group_name = azurerm_resource_group.rg.name
depends_on = [ azurerm_public_ip.public_ip, azurerm_linux_virtual_machine.testvm ]
}
Unfortunately, this seems not to be documented in https://www.terraform.io/docs/providers/azurerm/d/public_ip.html