Terraform Azure 2x Linux VM with httpd + Load Balancer not working

I am trying to create a Terraform PoC that has two CentOS VMs and an Azure Load Balancer.
Each VM has one private and one public IP, and has the httpd package installed.
Even though all elements are provisioned successfully, accessing the public IP of the load balancer does not return the default httpd content (inside each CentOS VM, curl against localhost or the IP returns the correct content).
No firewall is enabled on CentOS.
Below is the Terraform file (the location I am using is westeurope).
Q: What am I missing in the configuration for the Load Balancer? All items are provisioned with no error from Terraform, but when accessing the public IP of the load balancer I get a timeout instead of the default Apache page.
resource "azurerm_resource_group" "test" {
name = var.rg_name
location = var.location
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_virtual_network" "test" {
name = var.vnet_name
address_space = ["192.168.0.0/16"]
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_subnet" "test" {
name = var.networks["subnet1"]
resource_group_name = azurerm_resource_group.test.name
virtual_network_name = azurerm_virtual_network.test.name
address_prefixes = ["192.168.0.0/24"]
}
resource "azurerm_public_ip" "testlb" {
name = "tf-demo-publicIPForLB"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
sku = "Standard"
allocation_method = "Static"
domain_name_label = "acndemo"
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_lb" "test" {
name = "tf-demo-loadBalancer"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
sku = "Standard"
frontend_ip_configuration {
name = "tf-demo-lb-publicIPAddress"
public_ip_address_id = azurerm_public_ip.testlb.id
}
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_lb_backend_address_pool" "test" {
loadbalancer_id = azurerm_lb.test.id
name = "tf-demo-BackEndAddressPool"
}
resource "azurerm_network_interface_backend_address_pool_association" "test" {
count = 2
network_interface_id = azurerm_network_interface.test[count.index].id
ip_configuration_name = "tf-demo-nic-config${count.index}"
backend_address_pool_id = azurerm_lb_backend_address_pool.test.id
}
resource "azurerm_lb_probe" "test" {
resource_group_name = azurerm_resource_group.test.name
loadbalancer_id = azurerm_lb.test.id
name = "tf-demo-http-running-probe"
protocol = "Http"
port = 80
request_path = "/"
}
resource "azurerm_lb_rule" "test" {
resource_group_name = azurerm_resource_group.test.name
loadbalancer_id = azurerm_lb.test.id
name = "tf-demo-LBRule"
protocol = "Tcp"
frontend_port = 80
backend_port = 80
frontend_ip_configuration_name = "tf-demo-lb-publicIPAddress"
backend_address_pool_id = azurerm_lb_backend_address_pool.test.id
probe_id = azurerm_lb_probe.test.id
}
resource "azurerm_public_ip" "test" {
count = 2
name = "tf-demo-publicIPForVM${count.index}"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
sku = "Standard"
allocation_method = "Static"
domain_name_label = "acngrvm${count.index}"
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_network_interface" "test" {
count = 2
name = "tf-demo-nic${count.index}"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
ip_configuration {
name = "tf-demo-nic-config${count.index}"
subnet_id = azurerm_subnet.test.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.test[count.index].id
}
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_network_security_group" "test" {
name = "tf-demo-vm-nsg"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_network_interface_security_group_association" "test" {
count = length(azurerm_network_interface.test)
network_interface_id = azurerm_network_interface.test[count.index].id
network_security_group_id = azurerm_network_security_group.test.id
}
resource "azurerm_availability_set" "test" {
name = "tf-demo-availabilityset"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
platform_fault_domain_count = 2
platform_update_domain_count = 2
managed = true
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}
resource "azurerm_linux_virtual_machine" "test" {
count = 2
name = "tfdemovm${count.index}"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
network_interface_ids = [azurerm_network_interface.test[count.index].id]
size = "Standard_DS1_v2"
admin_username = "centos"
computer_name = "tfdemovm${count.index}"
availability_set_id = azurerm_availability_set.test.id
admin_ssh_key {
username = "centos"
public_key = file("~/.ssh/id_rsa.pub")
}
os_disk {
name = "tfdemovm${count.index}_OsDisk${count.index}"
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "OpenLogic"
offer = "CentOS"
sku = "7_8-gen2"
version = "latest"
}
tags = {
Owner = var.tags["Owner"]
Environment = var.tags["Environment"]
}
}

Based on the comments:
The issue was caused by port 80 not being opened in azurerm_network_security_group.test; only port 22 was allowed. Adding a rule for port 80 solved the issue.
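With a Standard SKU load balancer, inbound traffic is denied by default until an NSG rule allows it, so a rule like the following is needed alongside the existing SSH rule. A minimal sketch (the rule name and priority are illustrative, not from the original config):

```hcl
# Hypothetical additional rule inside azurerm_network_security_group.test:
# allows inbound HTTP so clients reaching the LB frontend can be forwarded
# to httpd on the backend VMs.
security_rule {
  name                       = "HTTP"
  priority                   = 1002
  direction                  = "Inbound"
  access                     = "Allow"
  protocol                   = "Tcp"
  source_port_range          = "*"
  destination_port_range     = "80"
  source_address_prefix      = "*"
  destination_address_prefix = "*"
}
```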

Related

How to setup private-link using Terraform to access storage-account?

I need to test my Azure private endpoint using the following scenario.
We have a virtual network with two subnets (vm_subnet and storage_account_subnet).
The virtual machine (vm) should be able to connect to the storage account using a private link.
The image below explains the scenario:
I then need to test my endpoint using the following manual test case:
Connect to the Azure virtual machine over SSH (PuTTY) with username adminuser and password P#$$w0rd1234!
In the terminal, ping formuleinsstorage.blob.core.windows.net (expect to see the IP of the storage account in the range of storage_account_subnet (10.0.2.0/24)).
I deploy all the infrastructure using the Terraform code below:
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
# Create Virtual-Network
resource "azurerm_virtual_network" "virtual_network" {
name = "Vnet"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
}
# Create subnet for virtual-machine
resource "azurerm_subnet" "virtual_network_subnet" {
name = "vm_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.1.0/24"]
}
# Create subnet for storage account
resource "azurerm_subnet" "storage_account_subnet" {
name = "storage_account_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.2.0/24"]
}
# Create Linux Virtual machine
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
size = "Standard_F2"
admin_username = "adminuser"
admin_password = "14394Las?"
disable_password_authentication = false
network_interface_ids = [
azurerm_network_interface.virtual_machine_network_interface.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
resource "azurerm_network_interface" "virtual_machine_network_interface" {
name = "vm-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm_public_ip.id
}
}
# Create network interface and public IP for the virtual machine
resource "azurerm_public_ip" "vm_public_ip" {
name = "vm-public-ip-for-rdp"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_network_interface" "virtual_network_nic" {
name = "vm_nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
}
}
# Setup an Inbound rule because we need to connect to the virtual-machine using RDP (remote-desktop-protocol)
resource "azurerm_network_security_group" "traffic_rules" {
name = "vm_traffic_rules"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
security_rule {
name = "virtual_network_permission"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.virtual_network_subnet.id
network_security_group_id = azurerm_network_security_group.traffic_rules.id
}
# Setup storage_account and its container
resource "azurerm_storage_account" "storage_account" {
name = "storagaccountfortest"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
}
resource "azurerm_storage_data_lake_gen2_filesystem" "data_lake_storage" {
name = "rawdata"
storage_account_id = azurerm_storage_account.storage_account.id
lifecycle {
prevent_destroy = false
}
}
# Setup DNS zone
resource "azurerm_private_dns_zone" "dns_zone" {
name = "privatelink.blob.core.windows.net"
resource_group_name = azurerm_resource_group.main_resource_group.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "network_link" {
name = "vnet_link"
resource_group_name = azurerm_resource_group.main_resource_group.name
private_dns_zone_name = azurerm_private_dns_zone.dns_zone.name
virtual_network_id = azurerm_virtual_network.virtual_network.id
}
# Setup private-link
resource "azurerm_private_endpoint" "endpoint" {
name = "storage-private-endpoint"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
subnet_id = azurerm_subnet.storage_account_subnet.id
private_service_connection {
name = "private-service-connection"
private_connection_resource_id = azurerm_storage_account.storage_account.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_dns_a_record" "dns_a" {
name = "dns-record"
zone_name = azurerm_private_dns_zone.dns_zone.name
resource_group_name = azurerm_resource_group.main_resource_group.name
ttl = 10
records = [azurerm_private_endpoint.endpoint.private_service_connection.0.private_ip_address]
}
The problem is that my endpoint does not work. But if I add the service endpoint manually, everything works like a charm. So I think my DNS zone is correct, and apparently the link to the storage account is also working well. Therefore I think there must be something wrong with my private link. Any idea?
Update:
Here are versions:
Terraform v1.2.5
on windows_386
+ provider registry.terraform.io/hashicorp/azurerm v3.30.0
I believe the issue lies in the name of the dns_a_record: it should be the name of the storage account you want to reach via the private link.
The following Terraform code is working for me:
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.30.0"
}
}
}
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "main_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
# Create Virtual-Network
resource "azurerm_virtual_network" "virtual_network" {
name = "Vnet"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
}
# Create subnet for virtual-machine
resource "azurerm_subnet" "virtual_network_subnet" {
name = "vm_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.1.0/24"]
}
# Create subnet for storage account
resource "azurerm_subnet" "storage_account_subnet" {
name = "storage_account_subnet"
resource_group_name = azurerm_resource_group.main_resource_group.name
virtual_network_name = azurerm_virtual_network.virtual_network.name
address_prefixes = ["10.0.2.0/24"]
}
# Create Linux Virtual machine
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
size = "Standard_F2"
admin_username = "adminuser"
admin_password = "14394Las?"
disable_password_authentication = false
network_interface_ids = [
azurerm_network_interface.virtual_machine_network_interface.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
}
resource "azurerm_network_interface" "virtual_machine_network_interface" {
name = "vm-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.vm_public_ip.id
}
}
# Create network interface and public IP for the virtual machine
resource "azurerm_public_ip" "vm_public_ip" {
name = "vm-public-ip-for-rdp"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_network_interface" "virtual_network_nic" {
name = "storage-private-endpoint-nic"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
ip_configuration {
name = "storage-private-endpoint-ip-config"
subnet_id = azurerm_subnet.virtual_network_subnet.id
private_ip_address_allocation = "Dynamic"
}
}
# Setup an Inbound rule because we need to connect to the virtual-machine using RDP (remote-desktop-protocol)
resource "azurerm_network_security_group" "traffic_rules" {
name = "vm_traffic_rules"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
security_rule {
name = "virtual_network_permission"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "private_nsg_asso" {
subnet_id = azurerm_subnet.virtual_network_subnet.id
network_security_group_id = azurerm_network_security_group.traffic_rules.id
}
# Setup storage_account and its container
resource "azurerm_storage_account" "storage_account" {
name = <STORAGE_ACCOUNT_NAME>
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
}
resource "azurerm_storage_data_lake_gen2_filesystem" "data_lake_storage" {
name = "rawdata"
storage_account_id = azurerm_storage_account.storage_account.id
lifecycle {
prevent_destroy = false
}
}
# Setup DNS zone
resource "azurerm_private_dns_zone" "dns_zone" {
name = "privatelink.blob.core.windows.net"
resource_group_name = azurerm_resource_group.main_resource_group.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "network_link" {
name = "vnet-link"
resource_group_name = azurerm_resource_group.main_resource_group.name
private_dns_zone_name = azurerm_private_dns_zone.dns_zone.name
virtual_network_id = azurerm_virtual_network.virtual_network.id
}
# Setup private-link
resource "azurerm_private_endpoint" "endpoint" {
name = "storage-private-endpoint"
location = azurerm_resource_group.main_resource_group.location
resource_group_name = azurerm_resource_group.main_resource_group.name
subnet_id = azurerm_subnet.storage_account_subnet.id
private_service_connection {
name = "storage-private-service-connection"
private_connection_resource_id = azurerm_storage_account.storage_account.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_dns_a_record" "dns_a" {
name = azurerm_storage_account.storage_account.name
zone_name = azurerm_private_dns_zone.dns_zone.name
resource_group_name = azurerm_resource_group.main_resource_group.name
ttl = 10
records = [azurerm_private_endpoint.endpoint.private_service_connection.0.private_ip_address]
}
Additionally, I'm not sure whether it is possible to ping storage accounts. To test, I ran nslookup <STORAGE_ACCOUNT_NAME>.blob.core.windows.net both from my local machine and from the Azure VM. In the former case I got a public IP, while in the latter I got a private IP in the range defined in the Terraform config, which seems to be the behaviour you are looking for.

Issue creating backend pool for Azure load balancer with private link service

I am planning to access an application hosted on two servers through an Azure Load Balancer, which will be reached from the on-prem network via a private endpoint and private link service for private access. When I try to execute the code, I get the error below. If I don't use a backend pool, I am able to create the load balancer with the private link service and private endpoint. What could be the issue?
Error: creating Private Link Service: (Name "privatelink" / Resource Group "XXXXXXXX"): network.PrivateLinkServicesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="PrivateLinkServiceIsNotSupportedForIPBasedLoadBalancer" Message="Private link service is not supported for load balancer /subscriptions/XXXXXXXX/providers/Microsoft.Network/privateLinkServices/privatelink with backend addresses set by (virtualNetwork, ipAddress) or (subnet, ipAddress)." Details=[]
resource "azurerm_subnet" "lbsubnet" {
name = "lbsubnet"
resource_group_name = local.resource_group
virtual_network_name = azurerm_virtual_network.devvm_net.name
address_prefixes = ["10.20.1.0/24"]
enforce_private_link_service_network_policies = true
depends_on = [
azurerm_virtual_network.devvm_net
]
}
resource "azurerm_lb" "app_balancer" {
name = "app-balancer"
location = local.location
resource_group_name = local.resource_group
sku="Standard"
sku_tier = "Regional"
frontend_ip_configuration {
name = "frontend-ip"
subnet_id = azurerm_subnet.lbsubnet.id
# private_ip_address_allocation = "Dynamic"
}
}
// the backend pool
resource "azurerm_lb_backend_address_pool" "PoolA" {
loadbalancer_id = azurerm_lb.app_balancer.id
name = "PoolA"
depends_on=[
azurerm_lb.app_balancer
]
}
resource "azurerm_lb_backend_address_pool_address" "vm1" {
name = "vm1"
backend_address_pool_id = azurerm_lb_backend_address_pool.PoolA.id
virtual_network_id = azurerm_virtual_network.devvm_net.id
ip_address = azurerm_network_interface.devvm1_interface1.private_ip_address
#ip_address= "10.20.0.10"
}
resource "azurerm_lb_backend_address_pool_address" "appvm2_address" {
name = "appvm2"
backend_address_pool_id = azurerm_lb_backend_address_pool.PoolA.id
virtual_network_id = azurerm_virtual_network.devvm_net.id
#ip_address = azurerm_network_interface.devvm2_interface2.private_ip_address
ip_address = "10.20.0.5"
depends_on=[
azurerm_lb_backend_address_pool.PoolA
]
}
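The error message indicates that a private link service cannot attach to a load balancer whose backend pool is populated with (virtualNetwork, ipAddress) entries, i.e. an IP-based pool. One possible direction, sketched here under the assumption that the VMs have NICs named as in the snippet (the ip_configuration name is a guess and must match the one defined on the NIC), is to make the pool NIC-based instead:

```hcl
# Sketch: instead of azurerm_lb_backend_address_pool_address entries,
# attach the VM NICs to the pool so it becomes NIC-based rather than IP-based.
resource "azurerm_network_interface_backend_address_pool_association" "vm1" {
  network_interface_id    = azurerm_network_interface.devvm1_interface1.id
  ip_configuration_name   = "internal" # assumed; must match the NIC's ip_configuration name
  backend_address_pool_id = azurerm_lb_backend_address_pool.PoolA.id
}
```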
// Health Probe
resource "azurerm_lb_probe" "ProbeA" {
resource_group_name = local.resource_group
loadbalancer_id = azurerm_lb.app_balancer.id
name = "probeA"
port = 80
protocol = "Tcp"
depends_on=[
azurerm_lb.app_balancer
]
}
// Load Balancing Rule
resource "azurerm_lb_rule" "RuleA" {
resource_group_name = local.resource_group
loadbalancer_id = azurerm_lb.app_balancer.id
name = "RuleA"
protocol = "Tcp"
frontend_port = 80
backend_port = 80
frontend_ip_configuration_name = "frontend-ip"
backend_address_pool_ids = [ azurerm_lb_backend_address_pool.PoolA.id ]
depends_on=[
azurerm_lb.app_balancer
]
}
// the NAT Rules
resource "azurerm_lb_nat_rule" "NATRuleA" {
resource_group_name = local.resource_group
loadbalancer_id = azurerm_lb.app_balancer.id
name = "RDPAccess"
protocol = "Tcp"
frontend_port = 3389
backend_port = 3389
frontend_ip_configuration_name = "frontend-ip"
depends_on=[
azurerm_lb.app_balancer
]
}
resource "azurerm_virtual_network" "pvt-endpoint-vnet" {
name = "pvtendpoint-network"
location = local.location
resource_group_name = local.resource_group
address_space = ["10.50.0.0/16"]
}
resource "azurerm_subnet" "endpoint-subnet" {
name = "endpoint-subnet"
resource_group_name = local.resource_group
virtual_network_name = azurerm_virtual_network.pvt-endpoint-vnet.name
address_prefixes = ["10.50.0.0/24"]
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_private_link_service" "privatelink-service" {
name = "privatelink"
location = local.location
resource_group_name = local.resource_group
load_balancer_frontend_ip_configuration_ids = [azurerm_lb.app_balancer.frontend_ip_configuration.0.id]
nat_ip_configuration {
name = "pls-ip"
primary = true
subnet_id = azurerm_subnet.lbsubnet.id
}
}
resource "azurerm_private_endpoint" "private_endpoint" {
name = "private-endpoint"
location = local.location
resource_group_name = local.resource_group
subnet_id = azurerm_subnet.endpoint-subnet.id
private_service_connection {
name = "privateserviceconnection"
private_connection_resource_id = azurerm_private_link_service.privatelink-service.id
is_manual_connection = false
}
}

502 Bad Gateway from Azure Application Gateway Connecting to Azure Container Instance

I am working on learning Terraform and Azure Web Services. After following a series of tutorials, I've been working on getting an Azure Container Instance setup that talks to a CosmosDB instance within a virtual network, and I want an Application Gateway setup that will allow HTTP connections to the Azure Container Instance.
Currently, when I call the IP address assigned to the Application Gateway, I receive a 502 Bad Gateway. I've verified that the image I'm running in the Azure Container Instance works locally. I have a feeling that the issues I'm facing are in relation to the back-end address pool I've configured, and possibly an issue with the rules I've setup in my network security group (nsg-myapp).
I was wondering if someone could look at my Terraform and identify what I've misconfigured? The closest question I found on Stack Overflow similar to my scenario was this unresolved question from last year.
network.tf
resource "azurerm_virtual_network" "myappdb" {
name = "myappdb-vnet"
address_space = ["10.7.0.0/16"]
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.myappdb.name
address_prefixes = ["10.7.2.0/24"]
service_endpoints = ["Microsoft.AzureCosmosDB"]
delegation {
name = "acidelegationservice"
service_delegation {
name = "Microsoft.ContainerInstance/containerGroups"
actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
}
}
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_subnet" "frontend" {
name = "myapp-frontend"
resource_group_name = azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.myappdb.name
address_prefixes = ["10.7.0.0/24"]
}
resource "azurerm_network_security_group" "nsg-myapp" {
name = "nsg-aci"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
security_rule {
name = "from-gateway-subnet"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_ranges = [22, 80, 443, 445, 8000]
source_address_prefixes = azurerm_subnet.internal.address_prefixes
destination_address_prefix = azurerm_subnet.internal.address_prefixes[0]
}
security_rule {
name = "DenyAllInBound-Override"
priority = 900
direction = "Inbound"
access = "Deny"
protocol = "*"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "to-internet"
priority = 100
direction = "Outbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_ranges = [80, 443, 445]
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "DenyAllOutBound-Override"
priority = 900
direction = "Outbound"
access = "Deny"
protocol = "*"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "sn-nsg-aci" {
subnet_id = azurerm_subnet.internal.id
network_security_group_id = azurerm_network_security_group.nsg-myapp.id
}
resource "azurerm_network_profile" "containergroup_profile" {
name = "acg-profile"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
container_network_interface {
name = "acg-nic"
ip_configuration {
name = "aciipconfig"
subnet_id = azurerm_subnet.internal.id
}
}
}
resource "azurerm_public_ip" "myappip" {
name = "myappip"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
allocation_method = "Static"
sku = "Standard"
}
locals {
backend_address_pool_name = "${azurerm_virtual_network.myappdb.name}-beap"
frontend_port_name = "${azurerm_virtual_network.myappdb.name}-feport"
frontend_ip_configuration_name = "${azurerm_virtual_network.myappdb.name}-feip"
http_setting_name = "${azurerm_virtual_network.myappdb.name}-be-htst"
listener_name = "${azurerm_virtual_network.myappdb.name}-httplstn"
request_routing_rule_name = "${azurerm_virtual_network.myappdb.name}-rqrt"
redirect_configuration_name = "${azurerm_virtual_network.myappdb.name}-rdrcfg"
}
resource "azurerm_application_gateway" "network" {
name = "myapp-appgateway"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku {
name = "Standard_v2"
tier = "Standard_v2"
capacity = 2
}
gateway_ip_configuration {
name = "my-gateway-ip-configuration"
subnet_id = azurerm_subnet.frontend.id
}
frontend_port {
name = local.frontend_port_name
port = 80
}
frontend_ip_configuration {
name = local.frontend_ip_configuration_name
public_ip_address_id = azurerm_public_ip.myappip.id
}
backend_address_pool {
name = local.backend_address_pool_name
ip_addresses = [azurerm_container_group.tf_cg_sampleapi.ip_address]
}
backend_http_settings {
name = local.http_setting_name
cookie_based_affinity = "Disabled"
path = "/path1/"
port = 80
protocol = "Http"
request_timeout = 60
}
http_listener {
name = local.listener_name
frontend_ip_configuration_name = local.frontend_ip_configuration_name
frontend_port_name = local.frontend_port_name
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = local.listener_name
backend_address_pool_name = local.backend_address_pool_name
backend_http_settings_name = local.http_setting_name
}
}
container.tf
resource "azurerm_container_group" "tf_cg_sampleapi" {
depends_on = [azurerm_cosmosdb_account.db]
name = "cg_myapp"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
network_profile_id = azurerm_network_profile.containergroup_profile.id
ip_address_type = "Private"
# dns_name_label = "sampleapitf"
os_type = "Linux"
identity {
type = "SystemAssigned"
}
container {
name = "myapp"
image = "sample/myapp"
cpu = 1
memory = 1
ports {
port = 80
protocol = "TCP"
}
ports {
port = 443
protocol = "TCP"
}
secure_environment_variables = {
"MYAPP_CONNECTION_STRING" = azurerm_cosmosdb_account.db.connection_strings[0]
}
}
}
I met a similar issue, and in my case (containers on top of Azure App Service) I needed to put a depends_on block inside the application gateway resource so that the app services are created first. In your case it should be:
resource "azurerm_application_gateway" "network" {
name = "myapp-appgateway"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku {
name = "Standard_v2"
tier = "Standard_v2"
capacity = 2
}
gateway_ip_configuration {
name = "my-gateway-ip-configuration"
subnet_id = azurerm_subnet.frontend.id
}
frontend_port {
name = local.frontend_port_name
port = 80
}
frontend_ip_configuration {
name = local.frontend_ip_configuration_name
public_ip_address_id = azurerm_public_ip.myappip.id
}
backend_address_pool {
name = local.backend_address_pool_name
ip_addresses = [azurerm_container_group.tf_cg_sampleapi.ip_address]
}
backend_http_settings {
name = local.http_setting_name
cookie_based_affinity = "Disabled"
path = "/path1/"
port = 80
protocol = "Http"
request_timeout = 60
}
http_listener {
name = local.listener_name
frontend_ip_configuration_name = local.frontend_ip_configuration_name
frontend_port_name = local.frontend_port_name
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = local.listener_name
backend_address_pool_name = local.backend_address_pool_name
backend_http_settings_name = local.http_setting_name
}
depends_on = [ azurerm_container_group.tf_cg_sampleapi, ]
}
I figured out that the root cause of my 502 Bad Gateway error was health checks not being set up / not working. Consequently, I set up custom probes that hit an API endpoint returning a 200 OK response. Of course, I will configure this endpoint to actually check whether I can connect to my services, but this was just a test to verify that this was the issue.
I also removed the DenyAllInBound-Override and DenyAllOutBound-Override rules from my nsg-aci security group, as they were preventing my ACI from connecting to my Cosmos DB.
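As a sketch, such a custom probe inside the azurerm_application_gateway resource might look like the following (the /health path, host, and thresholds are illustrative placeholders, not my actual values):

```hcl
# Illustrative custom health probe for the application gateway.
# The /health path is a placeholder endpoint that returns 200 OK.
probe {
  name                = "myapp-health-probe"
  protocol            = "Http"
  path                = "/health"
  host                = "127.0.0.1"
  interval            = 30
  timeout             = 30
  unhealthy_threshold = 3
}
```

For the probe to take effect, the backend_http_settings block must reference it via probe_name.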
This was my resulting network.tf and container.tf files:
network.tf
resource "azurerm_virtual_network" "myappdb" {
name = "myappdb-vnet"
address_space = ["10.7.0.0/16"]
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.myappdb.name
address_prefixes = ["10.7.2.0/24"]
service_endpoints = ["Microsoft.AzureCosmosDB"]
delegation {
name = "acidelegationservice"
service_delegation {
name = "Microsoft.ContainerInstance/containerGroups"
actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
}
}
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_subnet" "frontend" {
name = "myapp-frontend"
resource_group_name = azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.myappdb.name
address_prefixes = ["10.7.0.0/24"]
}
resource "azurerm_network_security_group" "nsg-myapp" {
name = "nsg-aci"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
security_rule {
name = "from-gateway-subnet"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_ranges = [22, 80, 443, 445, 8000]
source_address_prefixes = azurerm_subnet.internal.address_prefixes
destination_address_prefixes = azurerm_subnet.internal.address_prefixes
}
security_rule {
name = "to-internet"
priority = 100
direction = "Outbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_ranges = [80, 443, 445]
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_subnet_network_security_group_association" "sn-nsg-aci" {
subnet_id = azurerm_subnet.internal.id
network_security_group_id = azurerm_network_security_group.nsg-myapp.id
}
resource "azurerm_network_profile" "containergroup_profile" {
name = "acg-profile"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
container_network_interface {
name = "acg-nic"
ip_configuration {
name = "aciipconfig"
subnet_id = azurerm_subnet.internal.id
}
}
}
resource "azurerm_public_ip" "myappip" {
name = "myappip"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
allocation_method = "Static"
sku = "Standard"
}
locals {
backend_address_pool_name = "${azurerm_virtual_network.myappdb.name}-beap"
frontend_port_name = "${azurerm_virtual_network.myappdb.name}-feport"
frontend_ip_configuration_name = "${azurerm_virtual_network.myappdb.name}-feip"
http_setting_name = "${azurerm_virtual_network.myappdb.name}-be-htst"
listener_name = "${azurerm_virtual_network.myappdb.name}-httplstn"
request_routing_rule_name = "${azurerm_virtual_network.myappdb.name}-rqrt"
redirect_configuration_name = "${azurerm_virtual_network.myappdb.name}-rdrcfg"
}
resource "azurerm_application_gateway" "network" {
name = "myapp-appgateway"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku {
name = "Standard_v2"
tier = "Standard_v2"
capacity = 2
}
gateway_ip_configuration {
name = "my-gateway-ip-configuration"
subnet_id = azurerm_subnet.frontend.id
}
frontend_port {
name = local.frontend_port_name
port = 80
}
frontend_ip_configuration {
name = local.frontend_ip_configuration_name
public_ip_address_id = azurerm_public_ip.myappip.id
}
backend_address_pool {
name = local.backend_address_pool_name
ip_addresses = [azurerm_container_group.tf_cg_sampleapi.ip_address]
}
probe {
interval = 60
timeout = 60
name = "status"
protocol = "Http"
path = "/api/status/"
unhealthy_threshold = 3
host = "127.0.0.1"
}
backend_http_settings {
name = local.http_setting_name
cookie_based_affinity = "Disabled"
path = "/"
port = 80
protocol = "Http"
request_timeout = 60
probe_name = "status"
}
http_listener {
name = local.listener_name
frontend_ip_configuration_name = local.frontend_ip_configuration_name
frontend_port_name = local.frontend_port_name
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = local.listener_name
backend_address_pool_name = local.backend_address_pool_name
backend_http_settings_name = local.http_setting_name
}
depends_on = [azurerm_container_group.tf_cg_sampleapi, ]
}
container.tf
resource "azurerm_container_group" "tf_cg_sampleapi" {
depends_on = [azurerm_cosmosdb_account.db]
name = "cg_myapp"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
network_profile_id = azurerm_network_profile.containergroup_profile.id
ip_address_type = "Private"
# dns_name_label = "sampleapitf"
os_type = "Linux"
container {
name = "myapp"
image = "sample/myapp"
cpu = 1
memory = 1
ports {
port = 80
protocol = "TCP"
}
ports {
port = 443
protocol = "TCP"
}
secure_environment_variables = {
"MYAPP_CONNECTION_STRING" = azurerm_cosmosdb_account.db.connection_strings[0]
}
}
}

Application Gateways, Web apps and private DNS

I'm struggling to find the use or benefit of private DNS for web apps behind application gateways.
Using Terraform, I am trying to create a web app with a private connection behind an application gateway.
If I set the backend pool to the private IP address and change the host name to the .azurewebsites.net name, it works great.
However, whenever I create a private DNS zone and point the backend pool at the web app using its DNS name, I get a 502 error.
Here is the code I am using. I have read through a few guides, and everyone still seems to point at the IP address rather than the DNS name. A push in the right direction would be appreciated!
resource "azurerm_virtual_network" "uks-network" {
name = "mrp-uks-tf-vnet"
location = azurerm_resource_group.uks-rg.location
resource_group_name = azurerm_resource_group.uks-rg.name
address_space = ["10.0.0.0/16"]
# dns_servers = ["10.0.0.4", "10.0.0.5"]
tags = {
environment = "staging"
Location = "UK South"
terraform = "True"
}
}
resource "azurerm_subnet" "mrp-uks-tf-sn-ag" {
name = "applicationgatewaysubnet"
resource_group_name = azurerm_resource_group.uks-rg.name
virtual_network_name = azurerm_virtual_network.uks-network.name
address_prefixes = ["10.0.1.0/24"]
enforce_private_link_endpoint_network_policies = "true"
}
resource "azurerm_subnet" "mrp-uks-tf-sn-ws" {
name = "websitesubnet"
resource_group_name = azurerm_resource_group.uks-rg.name
virtual_network_name = azurerm_virtual_network.uks-network.name
address_prefixes = ["10.0.2.0/24"]
enforce_private_link_endpoint_network_policies = "true"
}
resource "azurerm_private_dns_zone" "mrp-tf-uks-dns" {
name = "privatelink.azurewebsites.net"
resource_group_name = azurerm_resource_group.uks-rg.name
}
resource "azurerm_subnet" "mrp-uks-tf-sn-sql" {
name = "sqlsubnet"
resource_group_name = azurerm_resource_group.uks-rg.name
virtual_network_name = azurerm_virtual_network.uks-network.name
address_prefixes = ["10.0.3.0/24"]
enforce_private_link_endpoint_network_policies = "true"
}
resource "azurerm_private_dns_a_record" "uks-webapp-privatendpoint" {
name = "webappuks"
zone_name = azurerm_private_dns_zone.mrp-tf-uks-dns.name
resource_group_name = azurerm_resource_group.uks-rg.name
ttl = 300
records = [azurerm_private_endpoint.uks-webapp-privatendpoint.private_service_connection[0].private_ip_address]
}
resource "azurerm_private_dns_zone_virtual_network_link" "ukswebapp" {
name = "${azurerm_app_service.uks-webapp.name}-dnslink"
resource_group_name = azurerm_resource_group.uks-rg.name
private_dns_zone_name = azurerm_private_dns_zone.mrp-tf-uks-dns.name
virtual_network_id = azurerm_virtual_network.uks-network.id
registration_enabled = false
}
#Create Private Endpoints for UKS app service
resource "azurerm_private_endpoint" "uks-webapp-privatendpoint" {
name = "uks-webapp-privatendpoint"
location = azurerm_resource_group.uks-rg.location
resource_group_name = azurerm_resource_group.uks-rg.name
subnet_id = azurerm_subnet.mrp-uks-tf-sn-ws.id
private_service_connection {
name = "uks-webapp-privatendpoint-com"
private_connection_resource_id = azurerm_app_service.uks-webapp.id
is_manual_connection = false
subresource_names = ["sites"]
}
}
resource "azurerm_public_ip" "mrp-tf-uks-ag-pip" {
name = "mrp-tf-uks-ag-pip"
resource_group_name = azurerm_resource_group.uks-rg.name
location = azurerm_resource_group.uks-rg.location
allocation_method = "Static"
sku = "Standard"
}
locals {
backend_address_pool_name = "${azurerm_virtual_network.uks-network.name}-beap"
frontend_port_name = "${azurerm_virtual_network.uks-network.name}-feport"
frontend_ip_configuration_name = "${azurerm_virtual_network.uks-network.name}-feip"
http_setting_name = "${azurerm_virtual_network.uks-network.name}-be-htst"
listener_name = "${azurerm_virtual_network.uks-network.name}-httplstn"
request_routing_rule_name = "${azurerm_virtual_network.uks-network.name}-rqrt"
redirect_configuration_name = "${azurerm_virtual_network.uks-network.name}-rdrcfg"
}
resource "azurerm_application_gateway" "mrp-tf-uks-ag" {
name = "mrp-tf-uks-ag"
resource_group_name = azurerm_resource_group.uks-rg.name
location = azurerm_resource_group.uks-rg.location
sku {
name = "WAF_V2"
tier = "WAF_V2"
capacity = 1
}
waf_configuration {
enabled = "true"
firewall_mode = "Detection"
rule_set_type = "OWASP"
rule_set_version = "3.0"
}
gateway_ip_configuration {
name = "mrp-tf-uks-ag-ipc"
subnet_id = azurerm_subnet.mrp-uks-tf-sn-ag.id
}
frontend_port {
name = local.frontend_port_name
port = 80
}
frontend_ip_configuration {
name = local.frontend_ip_configuration_name
public_ip_address_id = azurerm_public_ip.mrp-tf-uks-ag-pip.id
}
backend_address_pool {
name = local.backend_address_pool_name
fqdns = ["${azurerm_app_service.uks-webapp.name}.azurewebsites.net"]
}
backend_http_settings {
name = local.http_setting_name
cookie_based_affinity = "Disabled"
port = 80
protocol = "Http"
request_timeout = 1
pick_host_name_from_backend_address = true
}
http_listener {
name = local.listener_name
frontend_ip_configuration_name = local.frontend_ip_configuration_name
frontend_port_name = local.frontend_port_name
protocol = "Http"
}
request_routing_rule {
name = local.request_routing_rule_name
rule_type = "Basic"
http_listener_name = local.listener_name
backend_address_pool_name = local.backend_address_pool_name
backend_http_settings_name = local.http_setting_name
}
}

unable to create AKS cluster using "UserDefinedRouting" using terraform

I'm setting up an AKS cluster using UserDefinedRouting, with an existing subnet and route table that are associated with a network security group. Here is my code snippet.
provider "azurerm" {
version = "~> 2.25"
features {}
}
data "azurerm_resource_group" "aks" {
name = var.resource_group
}
#fetch existing subnet
data "azurerm_subnet" "aks" {
name = var.subnetname
virtual_network_name = var.virtual_network_name
resource_group_name = var.vnet_resource_group
}
resource "azurerm_network_interface" "k8svmnic" {
name = "k8svmnic"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
ip_configuration {
name = "internal"
subnet_id = data.azurerm_subnet.aks.id
private_ip_address_allocation = "Static"
private_ip_address = var.k8svmip #"10.9.56.10"
}
}
resource "azurerm_availability_set" "k8svmavset" {
name = "k8svmavset"
location = data.azurerm_resource_group.aks.location
resource_group_name = data.azurerm_resource_group.aks.name
platform_fault_domain_count = 3
platform_update_domain_count = 3
managed = true
}
resource "azurerm_network_security_group" "k8svmnsg" {
name = "k8vm-nsg"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
security_rule {
name = "allow_kube_tls"
protocol = "Tcp"
priority = 100
direction = "Inbound"
access = "Allow"
source_address_prefix = "VirtualNetwork"
destination_address_prefix = "*"
source_port_range = "*"
#destination_port_range = "443"
destination_port_ranges = ["443"]
description = "Allow kube-apiserver (tls) traffic to master"
}
security_rule {
name = "allow_ssh"
protocol = "Tcp"
priority = 101
direction = "Inbound"
access = "Allow"
source_address_prefix = "*"
destination_address_prefix = "*"
source_port_range = "*"
#destination_port_range = "22"
destination_port_ranges = ["22"]
description = "Allow SSH traffic to master"
}
}
resource "azurerm_network_interface_security_group_association" "k8svmnicnsg" {
network_interface_id = azurerm_network_interface.k8svmnic.id
network_security_group_id = azurerm_network_security_group.k8svmnsg.id
}
resource "azurerm_linux_virtual_machine" "k8svm" {
name = "k8svm"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
size = "Standard_D3_v2"
admin_username = var.admin_username
disable_password_authentication = true
availability_set_id = azurerm_availability_set.k8svmavset.id
network_interface_ids = [
azurerm_network_interface.k8svmnic.id,
]
admin_ssh_key {
username = var.admin_username
public_key = var.ssh_key
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
disk_size_gb = 30
}
source_image_reference {
publisher = "microsoft-aks"
offer = "aks"
sku = "aks-engine-ubuntu-1804-202007"
version = "2020.07.24"
}
}
resource "azurerm_managed_disk" "k8svm-disk" {
name = "${azurerm_linux_virtual_machine.k8svm.name}-disk"
location = data.azurerm_resource_group.aks.location
resource_group_name = data.azurerm_resource_group.aks.name
storage_account_type = "Standard_LRS"
create_option = "Empty"
disk_size_gb = 512
}
resource "azurerm_virtual_machine_data_disk_attachment" "k8svm-disk-attachment" {
managed_disk_id = azurerm_managed_disk.k8svm-disk.id
virtual_machine_id = azurerm_linux_virtual_machine.k8svm.id
lun = 5
caching = "ReadWrite"
}
resource "azurerm_public_ip" "aks" {
name = "akspip"
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
allocation_method = "Static"
sku = "Standard"
depends_on = [azurerm_virtual_machine_data_disk_attachment.k8svm-disk-attachment]
}
resource "azurerm_route_table" "aks"{
name = "aks" #var.subnetname
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
disable_bgp_route_propagation = false
route {
name = "default_route"
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = var.k8svmip
}
route {
name = var.route_name
address_prefix = var.route_address_prefix
next_hop_type = var.route_next_hop_type
}
}
resource "azurerm_subnet_route_table_association" "aks" {
subnet_id = data.azurerm_subnet.aks.id
route_table_id = azurerm_route_table.aks.id
}
resource "azurerm_subnet_network_security_group_association" "aks" {
subnet_id = data.azurerm_subnet.aks.id
network_security_group_id = var.network_security_group
}
resource "null_resource" "previous" {}
resource "time_sleep" "wait_90_seconds" {
depends_on = [null_resource.previous]
create_duration = "90s"
}
# This resource will create (at least) 90 seconds after null_resource.previous
resource "null_resource" "next" {
depends_on = [time_sleep.wait_90_seconds]
}
resource "azurerm_kubernetes_cluster" "aks" {
name = data.azurerm_resource_group.aks.name
resource_group_name = data.azurerm_resource_group.aks.name
location = data.azurerm_resource_group.aks.location
dns_prefix = "akstfelk" #The dns_prefix must contain between 3 and 45 characters, and can contain only letters, numbers, and hyphens. It must start with a letter and must end with a letter or a number.
kubernetes_version = "1.18.8"
private_cluster_enabled = false
node_resource_group = var.node_resource_group
#api_server_authorized_ip_ranges = [] #var.api_server_authorized_ip_ranges
default_node_pool {
enable_node_public_ip = false
name = "agentpool"
node_count = var.node_count
orchestrator_version = "1.18.8"
vm_size = var.vm_size
os_disk_size_gb = var.os_disk_size_gb
vnet_subnet_id = data.azurerm_subnet.aks.id
type = "VirtualMachineScaleSets"
}
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = var.ssh_key
}
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
role_based_access_control {
enabled = true
}
network_profile {
network_plugin = "kubenet"
network_policy = "calico"
dns_service_ip = "172.16.1.10"
service_cidr = "172.16.0.0/16"
docker_bridge_cidr = "172.17.0.1/16"
pod_cidr = "172.40.0.0/16"
outbound_type = "userDefinedRouting"
load_balancer_sku = "Standard"
load_balancer_profile {
outbound_ip_address_ids = [ "${azurerm_public_ip.aks.id}" ]
}
# load_balancer_profile {
# managed_outbound_ip_count = 5
# #effective_outbound_ips = [ azurerm_public_ip.aks.id ]
# outbound_ip_address_ids = []
# outbound_ip_prefix_ids = []
# outbound_ports_allocated = 0
# }
}
addon_profile {
aci_connector_linux {
enabled = false
}
azure_policy {
enabled = false
}
http_application_routing {
enabled = false
}
kube_dashboard {
enabled = false
}
oms_agent {
enabled = false
}
}
depends_on = [azurerm_subnet_route_table_association.aks]
}
According to the Azure docs: "By default, one public IP will automatically be created in the same resource group as the AKS cluster, if no public IP, public IP prefix, or number of IPs is specified."
But in my case the outbound connection is not happening, so cluster provisioning fails. I've even created another public IP and tried it through the load balancer profile, but I'm getting the error below.
Error: "network_profile.0.load_balancer_profile.0.managed_outbound_ip_count": conflicts with network_profile.0.load_balancer_profile.0.outbound_ip_address_ids
If I remove load_balancer_profile from the script, I get this error instead:
Error: creating Managed Kubernetes Cluster "aks-tf" (Resource Group "aks-tf"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidUserDefinedRoutingWithLoadBalancerProfile" Message="UserDefinedRouting and load balancer profile are mutually exclusive. Please refer to http://aka.ms/aks/outboundtype for more details" Target="networkProfile.loadBalancerProfile"
Kindly help me figure out what I'm missing. Any help would be appreciated.
When you use UserDefinedRouting, you need to set network_plugin to azure and put the AKS cluster inside the subnet with the user-defined route table. Here is the description:
The AKS cluster must be deployed into an existing virtual network with
a subnet that has been previously configured.
And if network_plugin is set to azure, then the vnet_subnet_id field in the default_node_pool block must be set and pod_cidr must not be set. You can find this note in the azurerm_kubernetes_cluster documentation.
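A minimal sketch of how the network_profile block could look under those constraints, keeping the other values from the question but dropping pod_cidr and load_balancer_profile:

```hcl
network_profile {
  network_plugin     = "azure"              # instead of "kubenet"
  network_policy     = "calico"
  dns_service_ip     = "172.16.1.10"
  service_cidr       = "172.16.0.0/16"
  docker_bridge_cidr = "172.17.0.1/16"
  outbound_type      = "userDefinedRouting"
  load_balancer_sku  = "Standard"
  # pod_cidr must not be set with the azure plugin, and
  # load_balancer_profile is mutually exclusive with userDefinedRouting
}
```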
Update:
It's a little more complex than you might think; here are the network architecture and the steps to create it via the CLI. This architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, or proxy, or allowing Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
For the outbound path, instead of a public load balancer you can use an internal load balancer for internal traffic.
In addition, some steps cannot be achieved with Terraform, for example the Azure Firewall setup. Take a look at the steps and prepare via the CLI the resources that you cannot create with Terraform.
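For illustration, an internal Standard load balancer frontend could be sketched like this (the resource name and the reuse of the existing AKS subnet are my assumptions, not part of the original setup):

```hcl
# Hypothetical internal load balancer: the frontend gets a private IP
# from the AKS subnet instead of a public IP.
resource "azurerm_lb" "internal" {
  name                = "aks-internal-lb"   # hypothetical name
  resource_group_name = data.azurerm_resource_group.aks.name
  location            = data.azurerm_resource_group.aks.location
  sku                 = "Standard"

  frontend_ip_configuration {
    name                          = "internal-frontend"
    subnet_id                     = data.azurerm_subnet.aks.id
    private_ip_address_allocation = "Dynamic"
  }
}
```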

Resources