I'm trying to create a Terraform project that provisions everything I need in an Azure subscription: resource groups, VNets, subnets and VMs.
However, when I've run this once and then run it again, it fails saying it cannot delete a subnet that is in use. I haven't changed anything about the subnet or the VM connected to it.
Error: creating/updating Virtual Network: (Name "" / Resource Group ""): network.VirtualNetworksClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeDeleted" Message="Subnet build-agent is in use by /subscriptions/mysub/resourceGroups/myrg/providers/Microsoft.Network/networkInterfaces/mynic/ipConfigurations/internal and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet." Details=[]
terraform {
required_version = ">= 1.1.0"
backend "azurerm" {
}
required_providers {
azurerm = {
version = "=3.5.0"
source = "hashicorp/azurerm" # https://registry.terraform.io/providers/hashicorp/azurerm/latest
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
}
locals {
name_suffix = "<mysuffix>"
}
resource "azurerm_resource_group" "rg-infra" {
name = "rg-${local.name_suffix}"
location = "UK South"
}
resource "azurerm_virtual_network" "vnet-mgmt" {
name = "vnet-${local.name_suffix}"
location = azurerm_resource_group.rg-infra.location
resource_group_name = azurerm_resource_group.rg-infra.name
address_space = ["<myiprange>"]
subnet {
name = "virtual-machines"
address_prefix = "<myiprange>"
}
subnet {
name = "databases"
address_prefix = "<myiprange>"
}
}
data "azurerm_virtual_network" "network" {
name = "vnet-${local.name_suffix}"
resource_group_name = azurerm_resource_group.rg-infra.name
}
resource "azurerm_subnet" "sb-ansible" {
name = "build-agent"
resource_group_name = azurerm_resource_group.rg-infra.name
virtual_network_name = data.azurerm_virtual_network.network.name
address_prefixes = ["<myiprange>"]
depends_on = [azurerm_virtual_network.vnet-mgmt]
}
data "azurerm_subnet" "prd-subnet" {
name = "build-agent"
virtual_network_name = data.azurerm_virtual_network.network.name
resource_group_name = azurerm_resource_group.rg-infra.name
depends_on = [azurerm_subnet.sb-ansible]
}
resource "azurerm_network_interface" "ni-ansible" {
name = "nic-ansible-${local.name_suffix}"
location = azurerm_resource_group.rg-infra.location
resource_group_name = azurerm_resource_group.rg-infra.name
ip_configuration {
name = "internal"
subnet_id = data.azurerm_subnet.prd-subnet.id
private_ip_address_allocation = "Dynamic"
}
lifecycle {
ignore_changes = ["ip_configuration"]
}
depends_on = [azurerm_subnet.sb-ansible]
}
resource "azurerm_linux_virtual_machine" "ansible-vm" {
name = "ansible-build-agent"
resource_group_name = azurerm_resource_group.rg-infra.name
location = azurerm_resource_group.rg-infra.location
size = "Standard_D2as_v4"
admin_username = "myadminuser"
network_interface_ids = [
azurerm_network_interface.ni-ansible.id,
]
admin_ssh_key {
username = "myadminuser"
public_key = ""
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
lifecycle {
ignore_changes = ["source_image_reference"]
}
depends_on = [azurerm_network_interface.ni-ansible]
}
Any help on why it's behaving like this, or a workaround would be greatly appreciated!
Many thanks
Turns out you can't mix inline subnet blocks inside the azurerm_virtual_network resource with explicitly defined azurerm_subnet resources: on the next apply the virtual network tries to reconcile its inline subnet list and attempts to delete the subnet it doesn't know about, which fails because the NIC is still attached to it.
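For anyone hitting the same error, here is a minimal sketch of the working layout, keeping the placeholder names from above: drop the inline subnet blocks and the data sources, and define every subnet as its own azurerm_subnet resource that references the VNet directly (the databases subnet gets the same treatment).
resource "azurerm_virtual_network" "vnet-mgmt" {
  name                = "vnet-${local.name_suffix}"
  location            = azurerm_resource_group.rg-infra.location
  resource_group_name = azurerm_resource_group.rg-infra.name
  address_space       = ["<myiprange>"]
  # no inline subnet {} blocks here
}

resource "azurerm_subnet" "virtual-machines" {
  name                 = "virtual-machines"
  resource_group_name  = azurerm_resource_group.rg-infra.name
  virtual_network_name = azurerm_virtual_network.vnet-mgmt.name
  address_prefixes     = ["<myiprange>"]
}

resource "azurerm_subnet" "sb-ansible" {
  name                 = "build-agent"
  resource_group_name  = azurerm_resource_group.rg-infra.name
  virtual_network_name = azurerm_virtual_network.vnet-mgmt.name
  address_prefixes     = ["<myiprange>"]
}

# The NIC then references the subnet resource directly,
# so no data sources or explicit depends_on are needed.
resource "azurerm_network_interface" "ni-ansible" {
  name                = "nic-ansible-${local.name_suffix}"
  location            = azurerm_resource_group.rg-infra.location
  resource_group_name = azurerm_resource_group.rg-infra.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.sb-ansible.id
    private_ip_address_allocation = "Dynamic"
  }
}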
Related
When deploying a custom script extension for a VM in Azure, it times out after 15 minutes. The timeouts block is set to 2 hours. I cannot figure out why it keeps timing out. Could anyone point me in the right direction please? Thanks.
Resource to deploy (https://i.stack.imgur.com/lIfKj.png)
Error (https://i.stack.imgur.com/GFYRL.png)
In Azure, each resource takes a certain amount of time to provision. For virtual network gateways and virtual machines, the timeout is up to 2 hours, as mentioned in the Terraform timeouts documentation.
Therefore, the timeouts block we provide for any virtual machine has to be less than two hours (2h).
I tried creating a replica of the Azure VM extension resource using the Terraform code below, and it deployed successfully.
timeout block:
timeouts {
create = "1h30m"
delete = "20m"
}
azure_VM_extension:
resource "azurerm_virtual_machine_extension" "xxxxx" {
name = "xxxxname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"commandToExecute": "hostname && uptime"
}
SETTINGS
tags = {
environment = "Production"
}
timeouts {
create = "1h30m"
delete = "20m"
}
}
I created a virtual machine by adding the required configuration under a resource group.
main.tf:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.0.0"
}
}
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "xxxxxRG" {
name = "xxxxx-RG"
location = "xxxxxx"
}
resource "azurerm_virtual_network" "example" {
name = "xxxxx"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "xxxxx"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "xxxxxx"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "xxxxconfiguration"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_storage_account" "example" {
name = "xxxxx"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_container" "example" {
name = "xxxxxx"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
resource "azurerm_virtual_machine" "example" {
name = "xxxxxxVM"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_F2"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "xxxxx"
vhd_uri = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example.name}/myosdisk1.vhd"
caching = "ReadWrite"
create_option = "FromImage"
}
os_profile {
computer_name = "xxxxxname"
admin_username = "xxxx"
admin_password = "xxxxxx"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_virtual_machine_extension" "example" {
name = "hostname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"commandToExecute": "hostname && uptime"
}
SETTINGS
tags = {
environment = "Production"
}
timeouts {
create = "1h30m"
delete = "20m"
}
}
I executed terraform init, terraform plan and terraform apply, and the extension was added successfully after deployment. You can upgrade the status afterwards if you want to use extensions.
I resolved the issue by changing the type_handler_version to 1.9.
I am trying to create resources in Azure using Terraform: a SQL Server database and also a virtual machine. I get the error:
│ Error: creating Subnet: (Name "db_subnetn" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'db_subnetn' is not valid in virtual network 'tf_dev-network'." Details=[]
What have I done?
I followed the link here: Error while provisioning Terraform subnet using azurerm
I deleted other network resources using the same IP range.
My network understanding is pretty basic, but from my research it appears that 10.0.0.0/16 is quite a large IP range and can lead to overlaps. So I changed the virtual network IP range from 10.0.0.0/16 to 10.0.1.0/24 to restrict the range; all that did was change the error to
│ Error: creating Subnet: (Name "internal" / Virtual Network Name "tf_dev-network" / Resource Group "terraform_youtube"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="NetcfgInvalidSubnet" Message="Subnet 'internal' is not valid in virtual network 'tf_dev-network'." Details=[]
At this stage, I would be grateful if someone can explain what is going wrong here and what needs to be done. Thanks in advance
My files are as follows.
dbcode.tf
resource "azurerm_sql_server" "sqlserver" {
name = "tom556sqlserver"
resource_group_name = azurerm_resource_group.resource_gp.name
location = azurerm_resource_group.resource_gp.location
version = "12.0"
administrator_login = "khdfd9898rerer"
administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_sql_virtual_network_rule" "sqlvnetrule" {
name = "sql_vnet_rule"
resource_group_name = azurerm_resource_group.resource_gp.name
server_name = azurerm_sql_server.sqlserver.name
subnet_id = azurerm_subnet.db_subnet.id
}
resource "azurerm_subnet" "db_subnet" {
name = "db_subnetn"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
main.tf
resource "azurerm_resource_group" "resource_gp" {
name="terraform_youtube"
location = "UK South"
tags = {
"owner" = "Rahman"
"purpose" = "Practice terraform"
}
}
variable "prefix" {
default = "tf_dev"
}
resource "azurerm_virtual_network" "main" {
name = "${var.prefix}-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.resource_gp.location
resource_group_name = azurerm_resource_group.resource_gp.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_B1ls"
# Uncomment this line to delete the OS disk automatically when deleting the VM
delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
I tested with your code in my environment and was getting the same error.
To fix the issue you need to change address_prefixes for db_subnet to ["10.0.3.0/24"], because the ["10.0.2.0/24"] range is already used by the internal subnet in your main.tf. Also check the updated sqlvnetrule below and make the corresponding changes in your dbcode.tf file.
resource "azurerm_mssql_server" "sqlserver" {
name = "tom556sqlserver"
resource_group_name = azurerm_resource_group.resource_gp.name
location = azurerm_resource_group.resource_gp.location
version = "12.0"
administrator_login = "khdfd9898rerer"
administrator_login_password = "4-v3ry-jlhdfdf89-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_subnet" "db_subnet" {
name = "db_subnetn"
resource_group_name = azurerm_resource_group.resource_gp.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.3.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
resource "azurerm_mssql_virtual_network_rule" "sqlvnetrule" {
name = "sql_vnet_rule"
#resource_group_name = azurerm_resource_group.resource_gp.name
#server_name = azurerm_sql_server.sqlserver.name
server_id = azurerm_mssql_server.sqlserver.id
subnet_id = azurerm_subnet.db_subnet.id
}
I'm trying to build an AKS cluster in Azure using Terraform. However, I do not want AKS deployed into its own VNet and subnet; I already have a subnet within a VNet that I want it to use. When I try to just give it the subnet ID, I get an overlapping CIDR issue. My networking is:
VNET: 10.0.0.0/16
Subnets: 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24. I need AKS to use the 10.0.1.0/24 subnet within this VNet. However, my Terraform config is trying to use a CIDR of 10.0.0.0/16, which is an obvious conflict. I don't know how to fix this issue inside of Terraform; with the portal I can just choose the VNet/subnet for AKS. Below is my Terraform configuration, which generates the error:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.46.0"
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
subscription_id = "####"
tenant_id = "####"
}
locals {
azure_location = "East US"
azure_location_short = "eastus"
}
resource "azurerm_resource_group" "primary_vnet_resource_group" {
name = "vnet-prod-002-eastus-001"
location = local.azure_location
}
resource "azurerm_virtual_network" "primary_vnet_virtual_network" {
name = "vnet_primary_eastus-001"
location = local.azure_location
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "aks-subnet" {
name = "snet-aks-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.1.0/24"]
}
output "aks_subnet_id" {
value = azurerm_subnet.aks-subnet.id
}
resource "azurerm_subnet" "application-subnet" {
name = "snet-app-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_subnet" "postgres-subnet" {
name = "snet-postgres-prod-002-eastus-001"
# location = local.azure_location
virtual_network_name = azurerm_virtual_network.primary_vnet_virtual_network.name
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
address_prefixes = ["10.0.3.0/24"]
}
output "postgres_subnet_id" {
value = azurerm_subnet.postgres-subnet.id
}
resource "azurerm_kubernetes_cluster" "aks-prod-002-eastus-001" {
name = "aks-prod-002-eastus-001"
location = local.azure_location
resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
dns_prefix = "aks-prod-002-eastus-001"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_DS2_v2"
vnet_subnet_id = azurerm_subnet.aks-subnet.id
}
network_profile {
network_plugin = "azure"
}
identity {
type = "SystemAssigned"
}
addon_profile {
aci_connector_linux {
enabled = false
}
azure_policy {
enabled = false
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = false
}
}
}
I'm not a Terraform expert and really need a hand with this if anyone knows how to accomplish it. I've been up and down the documentation and I can find a way to specify the subnet ID, but that's about all I can do. If I don't specify the subnet ID then everything is built, but a new VNet is created, which is what I don't want.
Thanks in advance
All of the following properties need to be set under network_profile, as follows:
network_profile {
network_plugin = "azure"
network_policy = "azure"
service_cidr = "10.0.4.0/24"
dns_service_ip = "10.0.4.10"
docker_bridge_cidr = "172.17.0.1/16"
}
These were missing; I hope this helps anyone having a problem similar to mine.
More info about this block can be found here: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#network_plugin
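Putting that together with the cluster from the question, a minimal sketch (reusing the question's resource names; service_cidr just needs to be a range not used by any subnet in the VNet, and dns_service_ip must fall inside it):
resource "azurerm_kubernetes_cluster" "aks-prod-002-eastus-001" {
  name                = "aks-prod-002-eastus-001"
  location            = local.azure_location
  resource_group_name = azurerm_resource_group.primary_vnet_resource_group.name
  dns_prefix          = "aks-prod-002-eastus-001"

  default_node_pool {
    name           = "default"
    node_count     = 1
    vm_size        = "Standard_DS2_v2"
    vnet_subnet_id = azurerm_subnet.aks-subnet.id # existing subnet, so no new VNet is created
  }

  network_profile {
    network_plugin     = "azure"
    network_policy     = "azure"
    service_cidr       = "10.0.4.0/24"  # range not overlapping any subnet in the VNet (as in the answer above)
    dns_service_ip     = "10.0.4.10"    # an address within service_cidr
    docker_bridge_cidr = "172.17.0.1/16"
  }

  identity {
    type = "SystemAssigned"
  }
}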
I want to implement Traffic Manager in Terraform between two VMs in different locations (West Europe and North Europe). I attach my code, but I don't know how to configure "target_resource_id" for each VM, because the VMs were created in a for loop, as were the networks. The Traffic Manager should switch to the secondary VM in case of failure of the first VM. Any ideas?
My code:
variable "subscription_id" {}
variable "tenant_id" {}
variable "environment" {}
variable "azurerm_resource_group_name" {}
variable "locations" {
type = map(string)
default = {
vm1 = "North Europe"
vm2 = "West Europe"
}
}
# Configure the Azure Provider
provider "azurerm" {
subscription_id = var.subscription_id
tenant_id = var.tenant_id
version = "=2.10.0"
features {}
}
resource "azurerm_virtual_network" "main" {
for_each = var.locations
name = "${each.key}-network"
address_space = ["10.0.0.0/16"]
location = each.value
resource_group_name = var.azurerm_resource_group_name
}
resource "azurerm_subnet" "internal" {
for_each = var.locations
name = "${each.key}-subnet"
resource_group_name = var.azurerm_resource_group_name
virtual_network_name = azurerm_virtual_network.main[each.key].name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_public_ip" "example" {
for_each = var.locations
name = "${each.key}-pip"
location = each.value
resource_group_name = var.azurerm_resource_group_name
allocation_method = "Static"
idle_timeout_in_minutes = 30
tags = {
environment = "dev01"
}
}
resource "azurerm_network_interface" "main" {
for_each = var.locations
name = "${each.key}-nic"
location = each.value
resource_group_name = var.azurerm_resource_group_name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal[each.key].id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example[each.key].id
}
}
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "azurerm_virtual_machine" "main" {
for_each = var.locations
name = "${each.key}t-vm"
location = each.value
resource_group_name = var.azurerm_resource_group_name
network_interface_ids = [azurerm_network_interface.main[each.key].id]
vm_size = "Standard_D2s_v3"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
storage_os_disk {
name = "${each.key}-myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "${each.key}-hostname"
admin_username = "testadmin"
admin_password = random_password.password.result
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "dev01"
}
}
resource "random_id" "server" {
keepers = {
azi_id = 1
}
byte_length = 8
}
resource "azurerm_traffic_manager_profile" "example" {
name = random_id.server.hex
resource_group_name = var.azurerm_resource_group_name
traffic_routing_method = "Priority"
dns_config {
relative_name = random_id.server.hex
ttl = 100
}
monitor_config {
protocol = "http"
port = 80
path = "/"
interval_in_seconds = 30
timeout_in_seconds = 9
tolerated_number_of_failures = 3
}
tags = {
environment = "dev01"
}
}
resource "azurerm_traffic_manager_endpoint" "first-vm" {
for_each = var.locations
name = "${each.key}-TF"
resource_group_name = var.azurerm_resource_group_name
profile_name = "${azurerm_traffic_manager_profile.example.name}"
target_resource_id = "[azurerm_network_interface.main[each.key].id]"
type = "azureEndpoints"
priority = "${[each.key] == "vm1" ? 1 : 2}"
}
My error:
Error: trafficmanager.EndpointsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error.
Status=400 Code="BadRequest" Message="The 'resourceTargetId' property of endpoint 'vm1-TF' is invalid or missing.
The property must be specified only for the following endpoint types: AzureEndpoints, NestedEndpoints.
You must have read access to the resource to which it refers."
According to this documentation:
Azure endpoints are used for Azure-based services in Traffic Manager. The following Azure resource types are supported:
PaaS cloud services
Web Apps
Web App Slots
PublicIPAddress resources (which can be connected to VMs either directly or via an Azure Load Balancer). The publicIpAddress must have a DNS name assigned to be used in a Traffic Manager profile.
In your code, you need to change the target_resource_id to the public IP address's ID; you should not pass the network interface ID.
resource "azurerm_traffic_manager_endpoint" "first-vm" {
for_each = var.locations
name = "${each.key}-TF"
resource_group_name = var.azurerm_resource_group_name
profile_name = "${azurerm_traffic_manager_profile.example.name}"
target_resource_id = azurerm_public_ip.example[each.key].id
type = "azureEndpoints"
priority = "${[each.key] == "vm1" ? 1 : 2}"
}
Please Note: The publicIpAddress must have a DNS name assigned to be used in a Traffic Manager profile.
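In practice that means giving the public IP a DNS label so it gets an FQDN, for example (the domain_name_label value here is just an illustrative placeholder and must be unique within the region):
resource "azurerm_public_ip" "example" {
  for_each                = var.locations
  name                    = "${each.key}-pip"
  location                = each.value
  resource_group_name     = var.azurerm_resource_group_name
  allocation_method       = "Static"
  idle_timeout_in_minutes = 30

  # The DNS label is what gives the IP an FQDN, which Traffic Manager requires.
  domain_name_label = "${each.key}-tm-demo"

  tags = {
    environment = "dev01"
  }
}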
Also you can check this ARM Template which is equivalent to this terraform template.
Hope this helps!
I would like to create a virtual network gateway for an existing VNet using Terraform. Can someone help me with the code?
Try updating your azurerm Terraform provider and use something like this:
resource "azurerm_subnet" "test" {
name = "GatewaySubnet"
resource_group_name = "${var.resource_group_name}"
virtual_network_name = "${var.virtual_network_name}"
address_prefix = "192.168.2.0/24"
}
resource "azurerm_public_ip" "test" {
name = "test"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
public_ip_address_allocation = "Dynamic"
}
resource "azurerm_virtual_network_gateway" "test" {
name = "test"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
type = "Vpn"
vpn_type = "RouteBased"
active_active = false
enable_bgp = false
sku = "Basic"
ip_configuration {
name = "vnetGatewayConfig"
public_ip_address_id = "${azurerm_public_ip.test.id}"
private_ip_address_allocation = "Dynamic"
subnet_id = "${azurerm_subnet.test.id}"
}
vpn_client_configuration {
address_space = [ "10.2.0.0/24" ]
root_certificate {
name = "DigiCert-Federated-ID-Root-CA"
public_cert_data = <<EOF
MIIDuzCCAqOgAwIBAgIQCHTZWCM+IlfFIRXIvyKSrjANBgkqhkiG9w0BAQsFADBn
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSYwJAYDVQQDEx1EaWdpQ2VydCBGZWRlcmF0ZWQgSUQg
Um9vdCBDQTAeFw0xMzAxMTUxMjAwMDBaFw0zMzAxMTUxMjAwMDBaMGcxCzAJBgNV
BAYTAlVTMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdp
Y2VydC5jb20xJjAkBgNVBAMTHURpZ2lDZXJ0IEZlZGVyYXRlZCBJRCBSb290IENB
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvAEB4pcCqnNNOWE6Ur5j
QPUH+1y1F9KdHTRSza6k5iDlXq1kGS1qAkuKtw9JsiNRrjltmFnzMZRBbX8Tlfl8
zAhBmb6dDduDGED01kBsTkgywYPxXVTKec0WxYEEF0oMn4wSYNl0lt2eJAKHXjNf
GTwiibdP8CUR2ghSM2sUTI8Nt1Omfc4SMHhGhYD64uJMbX98THQ/4LMGuYegou+d
GTiahfHtjn7AboSEknwAMJHCh5RlYZZ6B1O4QbKJ+34Q0eKgnI3X6Vc9u0zf6DH8
Dk+4zQDYRRTqTnVO3VT8jzqDlCRuNtq6YvryOWN74/dq8LQhUnXHvFyrsdMaE1X2
DwIDAQABo2MwYTAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAdBgNV
HQ4EFgQUGRdkFnbGt1EWjKwbUne+5OaZvRYwHwYDVR0jBBgwFoAUGRdkFnbGt1EW
jKwbUne+5OaZvRYwDQYJKoZIhvcNAQELBQADggEBAHcqsHkrjpESqfuVTRiptJfP
9JbdtWqRTmOf6uJi2c8YVqI6XlKXsD8C1dUUaaHKLUJzvKiazibVuBwMIT84AyqR
QELn3e0BtgEymEygMU569b01ZPxoFSnNXc7qDZBDef8WfqAV/sxkTi8L9BkmFYfL
uGLOhRJOFprPdoDIUBB+tmCl3oDcBy3vnUeOEioz8zAkprcb3GHwHAK+vHmmfgcn
WsfMLH4JCLa/tRYL+Rw/N3ybCkDp00s0WUZ+AoDywSl0Q/ZEnNY0MsFiw6LyIdbq
M/s/1JRtO3bDSzD9TazRVzn2oBqzSa8VgIo5C1nOnoAKJTlsClJKvIhnRlaLQqk=
EOF
}
revoked_certificate {
name = "Verizon-Global-Root-CA"
thumbprint = "912198EEF23DCAC40939312FEE97DD560BAE49B1"
}
}
}
Terraform v0.11.7
provider.azurerm v1.13.0
It works well on my side and you can change the resources to yours. For more details about the virtual network gateway with Terraform, see azurerm_virtual_network_gateway.
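Since the question is about an existing VNet, one way to attach the gateway subnet to it (with a current azurerm provider, which uses address_prefixes rather than address_prefix) is to look the VNet up with a data source instead of passing names in as variables. A sketch, with placeholder names for the existing VNet and its resource group:
data "azurerm_virtual_network" "existing" {
  name                = "my-existing-vnet" # placeholder: your existing VNet's name
  resource_group_name = "my-existing-rg"   # placeholder: its resource group
}

resource "azurerm_subnet" "gateway" {
  # Azure requires the gateway subnet to be named exactly "GatewaySubnet".
  name                 = "GatewaySubnet"
  resource_group_name  = data.azurerm_virtual_network.existing.resource_group_name
  virtual_network_name = data.azurerm_virtual_network.existing.name
  address_prefixes     = ["192.168.2.0/24"] # must be a free range inside the VNet's address space
}
The azurerm_virtual_network_gateway resource can then use azurerm_subnet.gateway.id in its ip_configuration block, exactly as in the example above.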