I was looking at a GitHub project ibm-cloud-architecture/terraform-openshift4-azure to install OpenShift using Terraform.
Using Terraform 1.3.7, this project fails on the following code:
resource "azurerm_lb_backend_address_pool" "internal_lb_controlplane_pool_v4" {
count = var.use_ipv4 ? 1 : 0
resource_group_name = var.resource_group_name
loadbalancer_id = azurerm_lb.internal.id
name = var.cluster_id
}
with the message
Error: Unsupported argument
on vnet/internal-lb.tf line 40, in resource "azurerm_lb_backend_address_pool" "internal_lb_controlplane_pool_v4":
40: resource_group_name = var.resource_group_name
An argument named "resource_group_name" is not expected here.
Why is this code failing? How can we specify the name of a resource group with the current version of Terraform and Azure?
If you check the docs for azurerm_lb_backend_address_pool, you will see that it does not take a resource_group_name argument; the pool belongs to the load balancer referenced by loadbalancer_id, so the resource group is derived from there. The resource should be:
resource "azurerm_lb_backend_address_pool" "internal_lb_controlplane_pool_v4" {
count = var.use_ipv4 ? 1 : 0
loadbalancer_id = azurerm_lb.internal.id
name = var.cluster_id
}
The issue was caused by a configuration error: resource_group_name is not an accepted argument here and is not required. The documentation example is simply:
resource "azurerm_lb_backend_address_pool" "example" {
loadbalancer_id = azurerm_lb.example.id
name = "BackEndAddressPool"
}
Here is a full reference configuration that replicates the fix. main.tf is as follows:
data "azurerm_client_config" "current" {}
resource "azurerm_resource_group" "example" {
name = "********"
location = "West Europe"
}
resource "azurerm_public_ip" "example" {
name = "swarnaPublicIPForLB"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
allocation_method = "Static"
}
resource "azurerm_lb" "example" {
name = "swarnaTestLoadBalancer"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
frontend_ip_configuration {
name = "swanraPublicIPAddress"
public_ip_address_id = azurerm_public_ip.example.id
}
}
resource "azurerm_lb_backend_address_pool" "example" {
loadbalancer_id = azurerm_lb.example.id
name = "BackEndAddressPool"
}
Upon terraform plan and terraform apply, the resources are created successfully, and the backend address pool is visible in the Azure portal.
I'm trying to create an instance of Application Gateway. While doing so, I get the following error:
Error: creating Application Gateway: (Name "name-gateway-wgrkecswbk" / Resource Group "name03n62mct"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceName" Message="Resource name is invalid. The name can be up to 80 characters long. It must begin with a word character, and it must end with a word character or with '_'. The name may contain word characters or '.', '-', '_'." Details=[]
The name used is name-gateway-wgrkecswbk, which looks to be a valid name according to the error description.
The error is reported at the following location:
with module.name.module.gateway[0].azurerm_application_gateway.res,
on .terraform/modules/name/modules/gateway/main.tf line 20, in resource "azurerm_application_gateway" "name":
20: resource "azurerm_application_gateway" "name" {
I tried removing the dashes and making the name shorter, with the same results.
A Unicode character (e.g., a stray space) in the gateway name may cause this problem. I repeated the procedure using the same application gateway name, "name-gateway-wgrkecswbk".
The code below is taken from the HashiCorp reference example. main.tf is as follows:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "**********"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "examples-network"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
address_space = ["10.254.0.0/16"]
}
resource "azurerm_subnet" "frontend" {
name = "frontend"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.254.0.0/24"]
}
resource "azurerm_subnet" "backend" {
name = "backend"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.254.2.0/24"]
}
resource "azurerm_public_ip" "example" {
name = "examples-pip"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Dynamic"
}
locals {
backend_address_pool_name = "${azurerm_virtual_network.example.name}-beap"
frontend_port_name = "${azurerm_virtual_network.example.name}-feport"
frontend_ip_configuration_name = "${azurerm_virtual_network.example.name}-feip"
http_setting_name = "${azurerm_virtual_network.example.name}-be-htst"
listener_name = "${azurerm_virtual_network.example.name}-httplstn"
request_routing_rule_name = "${azurerm_virtual_network.example.name}-rqrt"
redirect_configuration_name = "${azurerm_virtual_network.example.name}-rdrcfg"
}
resource "azurerm_application_gateway" "network" {
name = "name-gateway-wgrkecswbk"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku {
name = "Standard_Small"
tier = "Standard"
capacity = 2
}
gateway_ip_configuration {
name = "my-gateway-ip-configuration"
subnet_id = azurerm_subnet.frontend.id
}
frontend_port {
name = local.frontend_port_name
port = 80
}
frontend_ip_configuration {
name = local.frontend_ip_configuration_name
public_ip_address_id = azurerm_public_ip.example.id
}
backend_address_pool {
name = local.backend_address_pool_name
}
backend_http_settings {
name = local.http_setting_name
cookie_based_affinity = "Disabled"
path = "/path1/"
port = 80
protocol = "Http"
request_timeout = 60
}
http_listener {
name = local.listener_name
frontend_ip_configuration_name = local.frontend_ip_configuration_name
frontend_port_name = local.frontend_port_name
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = local.listener_name
backend_address_pool_name = local.backend_address_pool_name
backend_http_settings_name = local.http_setting_name
}
}
provider.tf is as follows:
terraform {
required_version = "~>1.3.3"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = ">=3.5.0"
}
}
}
Running terraform plan and then terraform apply -auto-approve completes successfully, and the application gateway is visible in the Azure portal.
The issue was that under ssl_certificate, the name property was using a variable, ssl_certificate_name, which turned out to be empty.
The error coming back from Azure was half correct: an invalid (empty) name was indeed used, but not at the resource level (azurerm_application_gateway.name); it was at the inner block level, azurerm_application_gateway.name.ssl_certificate.name.
Code:
resource "azurerm_application_gateway" "name" {
//....
ssl_certificate {
// var contents were empty
name = var.ssl_certificate_name
}
}
I have already reported this issue to Azure, so hopefully it gets resolved soon. The provider version was 3.37.
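One optional safeguard, assuming Terraform 0.13+ and the same variable name used above, is a validation block on the variable, so an empty value fails at plan time instead of surfacing later as a misleading Azure error:
variable "ssl_certificate_name" {
  type = string

  validation {
    # Fail fast at plan time instead of letting Azure reject the request
    # with an "InvalidResourceName" error caused by an empty name.
    condition     = length(var.ssl_certificate_name) > 0
    error_message = "ssl_certificate_name must not be empty."
  }
}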
I am trying to set up Databricks in a subnet and protect it with firewalls, using the following Terraform code:
Setup resource group:
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = var.resource_group_name
location = var.resource_group_location
}
Setup virtual network:
resource "azurerm_virtual_network" "test_vnet" {
name = var.vnet_name
address_space = ["10.0.0.0/16"]
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
Setup subnets:
resource "azurerm_subnet" "private_snet" {
name = "subnet-private"
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = ["10.0.1.0/24"]
delegation {
name = "databricksprivatermdelegation"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
resource "azurerm_subnet" "public_snet" {
name = "subnet-public"
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = ["10.0.2.0/24"]
delegation {
name = "databrickspublicdelegation"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
Setup firewalls:
resource "azurerm_network_security_group" "private_empty_nsg" {
name = "firewall-private"
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.private_snet.id
network_security_group_id = azurerm_network_security_group.private_empty_nsg.id
}
resource "azurerm_network_security_group" "public_empty_nsg" {
name = "firewall-public"
location = var.resource_group_location
resource_group_name = var.resource_group_name
}
resource "azurerm_subnet_network_security_group_association" "public_nsg_asso" {
subnet_id = azurerm_subnet.public_snet.id
network_security_group_id = azurerm_network_security_group.public_empty_nsg.id
}
And finally setup the databricks:
resource "azurerm_databricks_workspace" "forex_price_databricks" {
name = "databricks-test"
location = var.resource_group_location
resource_group_name = var.resource_group_name
sku = "standard"
custom_parameters {
virtual_network_id = azurerm_virtual_network.test_vnet.id
public_subnet_name = azurerm_subnet.public_snet.name
public_subnet_network_security_group_association_id = azurerm_network_security_group.public_empty_nsg.id
private_subnet_name = azurerm_subnet.private_snet.name
private_subnet_network_security_group_association_id = azurerm_network_security_group.private_empty_nsg.id
}
}
However, when I ran the code the first time, I got the below error:
Error: Code="ResourceNotFound" Message="The Resource 'Microsoft.Network/virtualNetworks/my-vnet' under resource group 'My-Resource-Group' was not found.
So, the question is: why is the virtual network not created, or why can it not be found?
Update:
When I remove the
resource_group {
prevent_deletion_if_contains_resources = false
}
I had added this block because I usually run terraform destroy and don't want to remove my resource group. However, even after removing it, I got the below error:
Message="Operation was canceled." Details=[{"code":"
CanceledAndSupersededDueToAnotherOperation","message":"Operation PutVirtualNetworkOperation was canceled and superseded by operation PutSubnetOpe
ration
Are you able to reproduce the same error?
The error message is:
There are some problems with the configuration, described below. The Terraform configuration must be valid before initialization so that Terraform can determine which modules and providers need to be installed.
Here's what I'm doing:
Wrote a terraform script in azvm.tf to create a VM.
Defined the variables resourcegroup, location, and pub_key in the variables.tf file, and called those variables in the azvm.tf file using string interpolation syntax.
Created a VM with the following features:
a) Ubuntu 18.04 server
b) VM name : any custom name
c) Admin_username : Any custom name
d) disable password authentication
e) size : Standard DS1_v2
f) Allow traffic for ssh, http, https
g) use public ssh key generated in the above step
With the help of the vi command we created the variables.tf file as:
variable "resourcegroup"
{
default = "user-pbtwiiiuofyu"
}
variable "location"
{
default = ["East US"]
}
variable "pub_key"
{
default = ["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDH7+uZHdf3SXSfz3Z80FCahr1Bt2t2u+RBlgE/QPN5Qy/CdrpcNH/8oncuDLKW5r+Ie+J5yNVwJz96gdLAQPNMmAYEPGjNct63kQp8hZPlWFdQIEyvHZwwP4YWf5vyRjozQ1PZN/0WyBjqTYmJh3z6haFxqnmIf6a9G8JldjtTPuF2oAnHaUDpqD4qBy/+OSN7CX1Lowny83NLNFoTgmm2npU+RIS2FDYpf1z30tejZSO0V3Z+iT3Xddlzh2NwunzOPv9avzwrLbSSvdbugcXTSPURMLgXwCapH9oHt4RCB/2mgsrRcLQb14FKG9UVBwzh07v2w8fDAHLnv2g4bpnhIyZMpX3zLO9mtQ4ix9DpyqPMqLV7b1uTSyj9t3UEcfprGWRPDJgkpGXEzP0iQgrGlfpctL/6/Hy009123uzb4ugXwndPbuQnPtj45PX2D4zEOFPrlvFBofazuGp65wxdzlzZpGsgqxiZQbdFCU4eQV2Y2kBPj/uv/3DOI2x27ML6i9D/shiVz/sSpryjKRHIl9bU7rU4vClgED/1NuhhTGLk5qPFOjsvvHpIORu5zFKNmeGFHW6e0TJnJMsckszseQuyjqUMkBoFV0NIOyS/mGLrxxzOTpzQyM9duat4T3kGLjjy+ozpHRaahXO79I91pDP66enitmj861kS6G5chQ== root#6badb6ae71d1"]
}
And with vi we created the azvm.tf file as:
az vm create -n Myuyuuy5Vm -g var.resourcegroup --ssh-key-values var.pub_key
We have another similar task, but one that is less easy in comparison, i.e.:
Write a terraform script in azmonitor.tf to create a storage account,
storage container, storage blob to monitor and send log reports
everyday.
Define the variables resourcegroup and location in the variables.tf
file, and call those variables in the azmonitor.tf file using string
interpolation syntax
variables.tf as-
variable "resourcegroup"
{
default = "user-pbtwiiiuofyu"
}
variable "location"
{
default = ["East US"]
}
We created azmonitor.tf as:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = var.resourcegroup
location = var.location
}
resource "azurerm_storage_account" "example" {
name = "sa123321123"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.location
sku = "Standard_LRS"
}
resource "azurerm_storage_container" "example" {
name = "sc123321123"
resource_group_name = azurerm_resource_group.example.name
account_key = azurerm_storage_account.eample.name
public_access = "blob"
}
resource "azurerm_storage_blob" "example" {
name = "sb123321123"
resource_group_name = azurerm_resource_group.example.name
account_key = azurerm_storage_account.eample.name
source = azurerm_storage_container.name
}
As shown above, you're using a mix of Terraform and the Az CLI here (the az vm create command inside azvm.tf); that won't work. You should use one or the other.
It seems you've been tasked with creating a Linux VM using Terraform. You need a 'main' Terraform file for your main code/Terraform objects and a 'variables' Terraform file for your variables. Technically, Terraform flattens every Terraform file in the same directory when you run terraform init/plan, which means you could put everything in a single .tf file.
However, for the purpose of the tutorial, it's good practice to split the two so they can be managed easily. This is the Terraform code you would want to use; it will help you create a Linux VM.
For simplicity, I'm putting a sample here that links back to the variables in your variables.tf
terraform {
required_version = "= 0.14.10" //change this accordingly
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.55.0"
}
}
}
resource "azurerm_resource_group" "example" {
name = var.resourcegroup
location = var.location
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "example-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
size = "Standard_DS1_v2"
admin_username = "adminuser" // any custom name; must match admin_ssh_key.username below
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = file("~/.ssh/id_rsa.pub") //put your public key here
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
}
Once you're done with validating the code above, you'll need two more objects,
The azurerm_network_security_group that will help you create the rules to allow inbound connectivity to the VM
The azurerm_subnet_network_security_group_association that will help you attach the subnet to the NSG.
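For reference, here is a minimal sketch of those two objects, reusing the resource names from the sample above and covering the ssh/http/https requirement from the task (adjust names, priorities, and prefixes to your environment):
resource "azurerm_network_security_group" "example" {
  name                = "example-nsg"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  # One inbound rule per port; priorities must be unique within the NSG.
  security_rule {
    name                       = "allow-ssh"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "allow-http"
    priority                   = 110
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "allow-https"
    priority                   = 120
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_subnet_network_security_group_association" "example" {
  subnet_id                 = azurerm_subnet.example.id
  network_security_group_id = azurerm_network_security_group.example.id
}
The association is what actually applies the NSG rules to the subnet.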
Once you have the complete code, then you can run a terraform init to initialise the modules, followed by terraform plan to verify your plan and finally terraform apply to deploy the VM.
For the blob part:
variables.tf
variable "resourcegroup" {
default = "user-pbtwiiiuofyu"
}
variable "location" {
default = "East US"
}
azmonitor.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = var.resourcegroup
location = var.location
}
resource "azurerm_storage_account" "example" {
name = "examplestoracc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "content"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
resource "azurerm_storage_blob" "example" {
name = "my-awesome-content.zip"
storage_account_name = azurerm_storage_account.example.name
storage_container_name = azurerm_storage_container.example.name
type = "Block"
source = "./some-local-file.zip"
}
I want to create two VMs with Terraform in Azure. I have configured two "azurerm_network_interface" resources, but when I try to apply the changes I receive an error. Do you have any idea? Is there any issue if I try to create them in different regions?
The error is something like: azurerm_network_interface "vm2-nic" was not found.
# Configure the Azure Provider
provider "azurerm" {
subscription_id = var.subscription_id
tenant_id = var.tenant_id
version = "=2.10.0"
features {}
}
resource "azurerm_virtual_network" "main" {
name = "north-network"
address_space = ["10.0.0.0/16"]
location = "North Europe"
resource_group_name = var.azurerm_resource_group_name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = var.azurerm_resource_group_name
virtual_network_name = azurerm_virtual_network.main.name
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "example" {
name = "test-pip"
location = "North Europe"
resource_group_name = var.azurerm_resource_group_name
allocation_method = "Static"
idle_timeout_in_minutes = 30
tags = {
environment = "dev01"
}
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = "${each.value}"
resource_group_name = var.azurerm_resource_group_name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example.id
}
}
resource "azurerm_virtual_machine" "main" {
for_each = var.locations
name = "${each.key}t-vm"
location = "${each.value}"
resource_group_name = var.azurerm_resource_group_name
network_interface_ids = [azurerm_network_interface.main[each.key].id]
vm_size = "Standard_D2s_v3"
...
Error:
Error: Error creating Network Interface "vm2-nic" (Resource Group "candidate-d7f5a2-rg"): network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxxxxx/resourceGroups/xxxx/providers/Microsoft.Network/virtualNetworks/north-network/subnets/internal referenced by resource /subscriptions/xxxxx/resourceGroups/xxxxx/providers/Microsoft.Network/networkInterfaces/vm2-nic was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
on environment.tf line 47, in resource "azurerm_network_interface" "main":
47: resource "azurerm_network_interface" "main" {
According to the documentation, each NIC attached to a VM must exist in the same location (region) and subscription as the VM: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/network-overview.
If you can re-create the NIC in the same location as the VM or create the VM in the same location as the NIC that will likely solve your problem.
Since you use for_each in the resource "azurerm_network_interface", it creates two NICs, each in location = each.value, while the subnet and VNet have the fixed region "North Europe". You need to create the NICs and the other VM-related resources, such as subnets, in the same region. You could change the code like this:
resource "azurerm_resource_group" "test" {
name = "myrg"
location = "West US"
}
variable "locations" {
type = map(string)
default = {
vm1 = "North Europe"
vm2 = "West Europe"
}
}
resource "azurerm_virtual_network" "main" {
for_each = var.locations
name = "${each.key}-network"
address_space = ["10.0.0.0/16"]
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
}
resource "azurerm_subnet" "internal" {
for_each = var.locations
name = "${each.key}-subnet"
resource_group_name = azurerm_resource_group.test.name
virtual_network_name = azurerm_virtual_network.main[each.key].name
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "example" {
for_each = var.locations
name = "${each.key}-pip"
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
allocation_method = "Static"
idle_timeout_in_minutes = 30
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = "${each.value}"
resource_group_name = azurerm_resource_group.test.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal[each.key].id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example[each.key].id
}
}
Also, for the public IP ID I am getting:
Error: Can not parse "ip_configuration.0.public_ip_address_id" as a resource id: Cannot parse Azure ID: parse module.resource.azurerm_public_ip.primary.id: invalid URI for request
As the network module is nested alongside the resource module, could you please suggest what I am missing?
main.tf file:
#Select provider
provider "azurerm" {
subscription_id = "xxxxxxxxxxxxxxxxxxxxxx"
version = "~> 2.2"
features {}
}
module "resource" {
source = "./modules/resource"
resource_group_name = "DevOpsPoc-primary"
location = "southeastasia"
}
module "network" {
source = "./modules/network"
virtual_network = "primaryvnet"
subnet = "primarysubnet"
address_space = "192.168.0.0/16"
address_prefix = "192.168.1.0/24"
public_ip = "backendvmpip"
location = "southeastasia"
primary_nic = "backendvmnic"
#vnet_subnet_id = element(module.network.vnet_subnets, 0)
primary_ip_conf = "backendvm"
}
resource module main.tf file:
resource "azurerm_resource_group" "primary" {
name = "var.resource_group_name"
location = "var.location"
tags = {
environment = "env"
}
}
network module main.tf file:
#Create Virtual Network in Primary Resource Group
resource "azurerm_virtual_network" "primary" {
name = "var.virtual_network"
resource_group_name = "module.resource.azurerm_resource_group.primary.name"
address_space = ["var.address_space"]
location = "module.resource.azurerm_resource_group.primary.location"
tags = {
environment = "env"
}
}
#Create Subnet in Virtual Network
resource "azurerm_subnet" "primary" {
name = "var.subnet"
resource_group_name = "module.resource.azurerm_resource_group.primary.name"
virtual_network_name = "module.resource.azurerm_virtual_network.primary.name"
address_prefix = "var.address_prefix"
# tags = {
# environment = "env"
# }
}
output "subnet_id"{
value = "module.resource.azurerm_subnet.primary.id"
}
#Create public IP address
resource "azurerm_public_ip" "primary" {
name = "var.public_ip"
location = "module.resource.azurerm_resource_group.primary.location"
resource_group_name = "module.resource.azurerm_resource_group.primary.name"
allocation_method = "Dynamic"
tags = {
environment = "env"
}
}
output "public_ip_id"{
value = "module.resource.azurerm_public_ip.id"
}
#Create Network Interface
resource "azurerm_network_interface" "primary" {
name = "var.primary_nic"
location = "module.resource.azurerm_resource_group.primary.location"
resource_group_name = "module.resource.azurerm_resource_group.primary.name"
ip_configuration {
name = "var.primary_ip_conf"
subnet_id = "module.resource.azurerm_subnet.primary.id"
private_ip_address_allocation = "Dynamic"
public_ip_address_id = "module.resource.azurerm_public_ip.primary.id"
}
tags = {
environment = "env"
}
}
There are some places that need to be corrected in your code:
You don't need double quotes around variables or expressions that use interpolation syntax. For example, "var.virtual_network" should be var.virtual_network.
You can directly reference resources in the same main.tf file instead of from the module block. For example, change virtual_network_name = "module.resource.azurerm_virtual_network.primary.name" to virtual_network_name = azurerm_virtual_network.primary.name in the resource "azurerm_subnet" block.
The syntax for referencing module outputs is ${module.NAME.OUTPUT}, where NAME is the module name given in the header of the module configuration block and OUTPUT is the name of the output to reference. You can pass the resource group name and location into module "network" as inputs instead of referencing them from inside the ./modules/network/main.tf file.
Here is the working code and you could get more references in this document:
main.tf file in the root directory
module "resource" {
source = "./modules/resource"
resource_group_name = "DevOpsPoc-primary"
location = "southeastasia"
}
module "network" {
source = "./modules/network"
resource_group_name = module.resource.RGname
location = module.resource.location
virtual_network = "primaryvnet"
subnet = "primarysubnet"
address_space = ["192.168.0.0/16"]
address_prefix = "192.168.1.0/24"
public_ip = "backendvmpip"
primary_nic = "backendvmnic"
#vnet_subnet_id = element(module.network.vnet_subnets, 0)
primary_ip_conf = "backendvm"
}
main.tf in the directory ./modules/resource
variable "resource_group_name" {}
variable "location" {}
resource "azurerm_resource_group" "primary" {
name = var.resource_group_name
location = var.location
}
output "RGname" {
value = "${azurerm_resource_group.primary.name}"
}
output "location" {
value = "${azurerm_resource_group.primary.location}"
}
main.tf in the directory ./modules/network; also declare the variables in the same directory (a sketch of those declarations is shown after the code).
#Create Virtual Network in Primary Resource Group
resource "azurerm_virtual_network" "primary" {
name = var.virtual_network
resource_group_name = var.resource_group_name
address_space = var.address_space
location = var.location
}
#Create Subnet in Virtual Network
resource "azurerm_subnet" "primary" {
name = var.subnet
resource_group_name = var.resource_group_name
virtual_network_name = azurerm_virtual_network.primary.name
address_prefix = var.address_prefix
}
output "subnet_id"{
value = azurerm_subnet.primary.id
}
#Create public IP address
resource "azurerm_public_ip" "primary" {
name = var.public_ip
location = var.location
resource_group_name = var.resource_group_name
allocation_method = "Dynamic"
}
output "public_ip_id"{
value = azurerm_public_ip.primary.id
}
#Create Network Interface
resource "azurerm_network_interface" "primary" {
name = var.primary_nic
location = var.location
resource_group_name = var.resource_group_name
ip_configuration {
name = var.primary_ip_conf
subnet_id = azurerm_subnet.primary.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.primary.id
}
}
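For completeness, a minimal sketch of the variable declarations for ./modules/network, using the names referenced above (the list type for address_space is an assumption based on the value passed in from the root module):
variable "resource_group_name" {}
variable "location" {}
variable "virtual_network" {}
variable "subnet" {}
variable "address_space" {
  # e.g. ["192.168.0.0/16"] passed from the root module
  type = list(string)
}
variable "address_prefix" {}
variable "public_ip" {}
variable "primary_nic" {}
variable "primary_ip_conf" {}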
I had a similar error when setting up an Azure App Service using Terraform.
module.app_service.azurerm_app_service.app_service: Creating...
│ Error: Cannot parse Azure ID: parse "27220": invalid URI for request
│
│ with module.app_service.azurerm_app_service.app_service,
│ on ../../../modules/azure/app-service/main.tf line 1, in resource "azurerm_app_service" "app_service":
│ 1: resource "azurerm_app_service" "app_service" {
Here's how I fixed it:
The issue was that I used the wrong value for the App Service Plan ID in my module.
I was using 27220 as the App Service Plan ID, instead of the actual App Service Plan resource ID, which is of this format:
"/subscriptions/fec545cd-bead-43ba-84c6-5738cdc7e458/resourceGroups/MyDevRG/providers/Microsoft.Web/serverfarms/MyDevLinuxASP"
That's all
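As a follow-up note: rather than hard-coding the plan ID, it is usually simplest to reference the plan resource's id attribute. Here is a minimal sketch with illustrative resource names (the real module wiring will differ):
resource "azurerm_app_service_plan" "example" {
  name                = "MyDevLinuxASP"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "app_service" {
  name                = "example-app-service"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  # Pass the full resource ID, not a bare numeric value such as 27220.
  app_service_plan_id = azurerm_app_service_plan.example.id
}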