Multiple frontend_endpoint in Azure Front Door with Terraform

I am trying to build an Azure Front Door with Terraform, but I am having an issue when configuring two frontends and then binding one of them to a custom HTTPS configuration. I am getting the following error: The argument "frontend_endpoint_id" is required, but no definition was found.
I just can't work out how to specify two Front Door endpoints and then reference one of them in a custom HTTPS config. Code below.
resource "azurerm_frontdoor" "jccroutingrule" {
depends_on = [
cloudflare_record.create_frontdoor_CNAME,
azurerm_key_vault.jctestingenv_keyvault,
azurerm_key_vault_certificate.jcimportedcert
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.Terraform.name
#enforce_backend_pools_certificate_name_check = false
routing_rule {
name = "jccroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["jccfrontendendpoint","frontendendpoint2"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "jccbackendpool"
}
}
backend_pool_load_balancing {
name = "jccloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "jcchealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
backend_pool {
name = "jccbackendpool"
backend {
host_header = format("portal-staging-westeurope.jason.website")
address = format("portal-staging-westeurope.jason.website")
http_port = 80
https_port = 443
weight = 50
priority = 1
enabled = true
}
load_balancing_name = "jccloadbalancesettings"
health_probe_name = "jcchealthprobesettings"
}
frontend_endpoint {
name = "jccfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = "frontendendpoint2"
host_name = format("portal-staging.jason.website")
}
}
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
frontend_endpoint_id = azurerm_frontdoor.jccroutingrule.frontend_endpoint[1].id
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault_certificate.jcimportedcert.id
}
}

Looking at the documentation for azurerm_frontdoor, I see it exports the following attribute, which I think is of interest here:
frontend_endpoints - A map/dictionary of Frontend Endpoint Names (key)
to the Frontend Endpoint ID (value)
frontend_endpoints is a map containing the endpoint name as key and the endpoint ID as value, so you can use the lookup function to extract the value for a given key.
In the end, your azurerm_frontdoor_custom_https_configuration looks like this:
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
frontend_endpoint_id = lookup(azurerm_frontdoor.jccroutingrule.frontend_endpoints, "frontendendpoint2", "what?")
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault_certificate.jcimportedcert.id
}
}
If you change your mind and want to use the jccfrontendendpoint endpoint instead, just pass that key to the lookup function :-)
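If you want to see exactly what that exported map contains, a small output (just a debugging sketch, not part of the original answer) lets you inspect it after an apply:
output "frontdoor_endpoint_ids" {
  # Map of frontend endpoint name => frontend endpoint ID,
  # as exported by the azurerm_frontdoor resource.
  value = azurerm_frontdoor.jccroutingrule.frontend_endpoints
}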

From the Terraform docs:
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
frontend_endpoint_id = azurerm_frontdoor.jccroutingrule.frontend_endpoint["frontendendpoint2"]
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault_certificate.jcimportedcert.id
}
}

I fixed this in the end by following this pull request on GitHub: https://github.com/hashicorp/terraform-provider-azurerm/pull/11456
What I had to change in the end was a couple of things. First, I changed frontend_endpoint_id to "${azurerm_frontdoor.jccroutingrule.id}/frontendEndpoints/${local.frontendendpoint2}"; for some reason you need to turn the frontend_endpoint name into a local value. So your code will look like this:
frontend_endpoint {
  name      = local.frontendendpoint2
  host_name = format("portal-staging.jason.website")
}

resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
  frontend_endpoint_id              = "${azurerm_frontdoor.jccroutingrule.id}/frontendEndpoints/${local.frontendendpoint2}"
  custom_https_provisioning_enabled = true

  custom_https_configuration {
    certificate_source                      = "AzureKeyVault"
    azure_key_vault_certificate_secret_name = "imported-cert"
    azure_key_vault_certificate_vault_id    = azurerm_key_vault_certificate.jcimportedcert.id
  }
}
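For completeness: the local referenced above is never shown in this snippet. It is assumed to be defined along these lines, where the value (hypothetical here) just has to match the endpoint name used in the routing rule:
locals {
  # Must match the name given to the second frontend_endpoint block
  # and the entry in routing_rule.frontend_endpoints.
  frontendendpoint2 = "frontendendpoint2"
}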
Now, if you build the Front Door before applying the https_configuration, you literally have to destroy your state file so that the Front Door builds first and the custom HTTPS config is applied afterwards. I could not get this to build without destroying the state file, and someone else on the linked PR said the same.
Also, the docs are wrong about frontend_endpoint_id. If you choose not to use the format I have given and want to do something like azurerm_frontdoor.jccroutingrule.frontend_endpoint["frontendendpoint2"], you must make sure you append .id on the end, otherwise it won't look up the key values correctly and you will just get an error. Example: azurerm_frontdoor.jccroutingrule.frontend_endpoint["frontendendpoint2"].id
One last point to note: you need to change frontend_endpoints under the routing rule to include your local value, like this: frontend_endpoints = ["jccfrontendendpoint", "${local.frontendendpoint2}"], otherwise the lookup will fail again when you get to the custom HTTPS config.
To be honest, this Front Door config is buggy at best, and the docs on it are vague and in some places just wrong.
My full config, to make it easy to follow:
resource "azurerm_frontdoor" "jccroutingrule" {
depends_on = [
cloudflare_record.create_frontdoor_CNAME,
azurerm_key_vault.jctestingenv_keyvault,
azurerm_key_vault_certificate.jcimportedcert
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.Terraform.name
#enforce_backend_pools_certificate_name_check = false
routing_rule {
name = "jccroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["jccfrontendendpoint","${local.frontendendpoint2}"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "jccbackendpool"
}
}
backend_pool_load_balancing {
name = "jccloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "jcchealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
backend_pool {
name = "jccbackendpool"
backend {
host_header = format("portal-staging-westeurope.jason.website")
address = format("portal-staging-westeurope.jason.website")
http_port = 80
https_port = 443
weight = 50
priority = 1
enabled = true
}
load_balancing_name = "jccloadbalancesettings"
health_probe_name = "jcchealthprobesettings"
}
frontend_endpoint {
name = "jccfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = local.frontendendpoint2
host_name = format("portal-staging.jason.website")
}
}
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
frontend_endpoint_id = "${azurerm_frontdoor.jccroutingrule.id}/frontendEndpoints/${local.frontendendpoint2}"
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault.jctestingenv_keyvault.id
}
}

Related

Azure Frontdoor Dynamic Block not working in Terraform

I'm having some issues coding a dynamic block for Front Door in Terraform. I have found a good working example here: https://github.com/spy86/terraform-azure-front-door/blob/main/front_door.tf
But my Front Door setup is not as complex as that person's, and I do not need everything they have done.
What I am trying to achieve is to put two backend_pools on my Front Door to enable multiple regions. The only way to do this is to bring in dynamic blocks. Yet when I do this I am getting an error:
Error: Unsupported attribute
  on frontdoor.tf line 96, in resource "azurerm_frontdoor" "jctestingfrontdoor":
  96: for_each = var.backend_pool_settings.value.backend[*]
var.backend_pool_settings is a list of object, known only after apply
Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
Here is my Front Door code:
main.tf
resource "azurerm_frontdoor" "jctestingfrontdoor" {
depends_on = [
azurerm_key_vault.jctestingenv_keyvault,
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.Terraform.name
routing_rule {
name = "projroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["projfrontendendpoint", "${local.frontendendpoint2}"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "projbackendpool"
}
}
backend_pool_load_balancing {
name = "projloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "projhealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
dynamic "backend_pool" {
for_each = var.backend_pool_settings[*]
content {
name = var.backend_pool_settings.name
load_balancing_name = var.backend_pool_settings.load_balancing_name
health_probe_name = var.backend_pool_settings.health_probe_name
dynamic "backend" {
for_each = var.backend_pool_settings.backend
content {
address = var.backend_pool_settings.address
host_header = var.backend_pool_settings.host_header
http_port = var.backend_pool_settings.http_port
https_port = var.backend_pool_settings.https_port
priority = var.backend_pool_settings.priority
weight = var.backend_pool_settings.weight
enabled = var.backend_pool_settings.enabled
}
}
}
}
frontend_endpoint {
name = "projfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = local.frontendendpoint2
host_name = format("portal-staging.terraform.example")
}
}
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
depends_on = [
azurerm_frontdoor.jctestingfrontdoor
]
frontend_endpoint_id = "${azurerm_frontdoor.jctestingfrontdoor.id}/frontendEndpoints/${local.frontendendpoint2}"
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault.jctestingenv_keyvault.id
}
}
variables.tf
variable "backend_pool_settings" {
description = "backend pool stettings for frontdoor"
type = object({
name = string
backend = list(object({
address = string
host_header = string
http_port = number
https_port = number
weight = number
priority = number
enabled = bool
}))
load_balancing_name = string
health_probe_name = string
})
}
locals.tf
locals {
  frontendendpoint2 = "projfrondoordnsname"
  backendpool1      = "uksouth"
  backendpool2      = "westeurope"
}
inputvariables.tfvars
backend_pool_settings = (
  {
    name = "uksouth"
    backend = {
      address     = "portal-staging-testing1.terraform.example"
      host_header = "portal-staging-testing1.terraform.example"
      http_port   = 80
      https_port  = 443
      priority    = 1
      weight      = 50
      enabled     = true
    }
    load_balancing_name = "projloadbalancesettings"
    health_probe_name   = "projloadbalancesettings"
  },
  {
    name = "westeurope"
    backend = {
      address     = "portal-staging-testing2.terraform.example"
      host_header = "portal-staging-testing2.terraform.example"
      http_port   = 80
      https_port  = 443
      priority    = 1
      weight      = 50
      enabled     = true
    }
    load_balancing_name = "projloadbalancesettings"
    health_probe_name   = "projloadbalancesettings"
  }
)
I have coded the variables as object lists, but I'm not sure if that's the right thing to do, and I'm not sure whether I should split the backend_pool into two dynamic blocks like in the example.
UPDATE:
After working through my code, I have simplified it a bit more:
resource "azurerm_frontdoor" "jctestingfrontdoor" {
depends_on = [
azurerm_key_vault.jctestingenv_keyvault,
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.Terraform.name
routing_rule {
name = "projroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["projfrontendendpoint", "${local.frontendendpoint2}"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "projbackendpool"
}
}
backend_pool_load_balancing {
name = "projloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "projhealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
backend_pool {
name = "projbackendpool"
dynamic "backend" {
for_each = var.backend_pool_settings.value.backend[*]
content {
address = backend.address
host_header = backend.host_header
http_port = backend.http_port
https_port = backend.https_port
priority = backend.priority
weight = backend.weight
enabled = backend.enabled
}
}
load_balancing_name = "projloadbalancesettings"
health_probe_name = "projhealthprobesettings"
}
frontend_endpoint {
name = "projfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = local.frontendendpoint2
host_name = format("portal-staging.terraform.example")
}
}
Now the error I'm getting is:
Error: Unsupported attribute
  on frontdoor.tf line 96, in resource "azurerm_frontdoor" "jctestingfrontdoor":
  96: for_each = var.backend_pool_settings.value.backend[*]
var.backend_pool_settings is a list of object, known only after apply
Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
I have managed to fix this by playing about with the map variable. Basically, Front Door does not require the backend object to be spelled out as such, since it already knows it is building a backend. I also played around with a few other bits of code and got this working; see my code below:
main.tf
resource "azurerm_frontdoor" "jctestingfrontdoor" {
depends_on = [
azurerm_key_vault.jctestingenv_keyvault,
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.terraform.name
routing_rule {
name = "projroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["projfrontendendpoint", "${local.frontendendpoint2}"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "projbackendpool"
}
}
backend_pool_load_balancing {
name = "projloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "projhealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
backend_pool {
name = "projbackendpool"
dynamic "backend" {
for_each = var.backend_pool_settings
content {
address = backend.value.address
host_header = backend.value.host_header
http_port = backend.value.http_port
https_port = backend.value.https_port
priority = backend.value.priority
weight = backend.value.weight
enabled = backend.value.enabled
}
}
load_balancing_name = "projloadbalancesettings"
health_probe_name = "projhealthprobesettings"
}
frontend_endpoint {
name = "projfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = local.frontendendpoint2
host_name = format("portal-staging.terraform.example")
}
}
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
depends_on = [
azurerm_frontdoor.jctestingfrontdoor
]
frontend_endpoint_id = "${azurerm_frontdoor.jctestingfrontdoor.id}/frontendEndpoints/${local.frontendendpoint2}"
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault.jctestingenv_keyvault.id
}
}
variables.tf
variable "backend_pool_settings" {
description = "backend pool stettings for frontdoor"
type = map(object({
address = string
host_header = string
http_port = number
https_port = number
weight = number
priority = number
enabled = bool
}))
}
inputvariables.tfvars
backend_pool_settings = {
  backendone = {
    address     = "portal-staging-testing1.terraform.example"
    host_header = "portal-staging-testing1.terraform.example"
    http_port   = 80
    https_port  = 443
    priority    = 1
    weight      = 50
    enabled     = true
  },
  backendtwo = {
    address     = "portal-staging-testing2.terraform.example"
    host_header = "portal-staging-testing2.terraform.example"
    http_port   = 80
    https_port  = 443
    priority    = 1
    weight      = 50
    enabled     = true
  }
}
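With the variable typed as a map, each map key (backendone, backendtwo) becomes one backend block, and the tfvars file loads in the usual way, e.g.:
terraform plan -var-file="inputvariables.tfvars"
terraform apply -var-file="inputvariables.tfvars"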
This post also helped me figure out working with map objects in Terraform: https://serverfault.com/questions/1063395/terraform-values-from-tfvars-are-not-loading-when-using-multi-level-maps

terraform for loop list for target_groups with a combine variable

Is there a way to use the list below in a for loop and add in the target_groups? I am trying to use the prefix with the target_groups variable in a for loop. I have also tested for_each. target_groups expects the list format, but for_each does not give the expected result.
variable "prefix" {
description = "NLB Prefix"
type = any
default = "test-target"
}
variable "target_groups" {
description = "NLB"
type = any
default = {
tg1 = {
name_prefix = "test"
backend_protocol = "TCP"
backend_port = 443
target_type = "ip"
deregistration_delay = 10
preserve_client_ip = true
stickiness = {
enabled = true
type = "source_ip"
}
targets = {
appl1 = {
target_id = "191.11.11.11"
port = 443
}
}
},
}
}
}
I tried the for_each approach below:
module "g-appl_nlb" {
source = "../../modules/compute/lb"
name = format("%s-g-appl-nlb", var.name_prefix)
load_balancer_type = "network"
vpc_id = data.aws_vpc.target_vpc.id
...
target_groups = [
for_each = var.target_groups
name_previs = var.prefix
backend_protocol = each.value["backend_protocol"]
backend_port = each.value["backend_port"]
target_type = each.value["target_type"]
deregistration_delay = each.value["deregistration_delay"]
preserve_client_ip = each.value["preserve_client_ip"]
stickiness = each.value["stickiness"]
]
....
Basically, I managed to solve this with the approach below.
locals {
  target_groups = flatten([
    for tg_data in var.target_groups : {
      name_prefix          = var.name_prefix
      backend_protocol     = tg_data.backend_protocol
      backend_port         = tg_data.backend_port
      target_type          = tg_data.target_type
      deregistration_delay = tg_data.deregistration_delay
      preserve_client_ip   = tg_data.preserve_client_ip
      ....
    }
  ])
}

module "g-appl_nlb" {
  source             = "../../modules/compute/lb"
  name               = format("%s-g-appl-nlb", var.name_prefix)
  load_balancer_type = "network"
  vpc_id             = data.aws_vpc.target_vpc.id
  ...
  target_groups = local.target_groups
}

Application gateway request_routing_rules does not exist

I am trying to deploy an Azure Application Gateway. I set the configuration as follows:
resource "azurerm_application_gateway" "demo-app-gateway" {
location = var.location
resource_group_name = azurerm_resource_group.rg-hri-testing-env.name
name = "demo-app-gateway"
autoscale_configuration {
max_capacity = 10
min_capacity = 2
}
frontend_port {
name = "port_443"
port = 443
}
sku {
name = "Standard_v2"
tier = "Standard_v2"
}
frontend_ip_configuration {
name = "appGwPublicFrontendIp"
public_ip_address_id = azurerm_public_ip.demo-app-gateway-public-ip.id
private_ip_address_allocation = "Dynamic"
}
backend_http_settings {
cookie_based_affinity = "Disabled"
name = "demo-http-settings"
port = 443
protocol = "Https"
host_name = "apim.test.com"
pick_host_name_from_backend_address = false
path = "/external/"
request_timeout = 20
probe_name = "demo-apim-probe"
trusted_root_certificate_names = ["demo-trusted-root-ca-certificate"]
}
probe {
interval = 30
name = "demo-apim-probe"
path = "/status-0123456789abcdef"
protocol = "Https"
timeout = 30
unhealthy_threshold = 3
pick_host_name_from_backend_http_settings = true
match {
body = ""
status_code = [
"200-399"
]
}
}
gateway_ip_configuration {
name = "appGatewayIpConfig"
subnet_id = azurerm_subnet.GatewaSubnet.id
}
backend_address_pool {
name = "demo-backend-pool"
}
http_listener {
frontend_ip_configuration_name = "appGwPublicFrontendIp"
frontend_port_name = "port_443"
name = "demo-app-gateway-listener"
protocol = "Https"
require_sni = false
ssl_certificate_name = "demo-app-gateway-certificate"
}
ssl_certificate {
data = filebase64(var.ssl_certificate_path)
name = "demo-app-gateway-certificate"
password = var.ssl_certificate_password
}
trusted_root_certificate {
data = filebase64(var.ssl_certificate_path)
name = "demo-trusted-root-ca-certificate"
}
request_routing_rule {
http_listener_name = "demo-app-gateway-listener"
name = "demo-rule"
rule_type = "Basic"
backend_address_pool_name = "demo-backend-pool"
backend_http_settings_name = "demo-http-setting"
}
}
But when I run terraform apply I get this error:
Error: creating/updating Application Gateway: (Name "demo-app-gateway" / Resource Group "rg-hri-testing-apim"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/my-sub/resourceGroups/rg-hri-testing-apim/providers/Microsoft.Network/applicationGateways/demo-app-gateway/backendHttpSettingsCollection/demo-http-setting referenced by resource /subscriptions/mysub/resourceGroups/rg-hri-testing-apim/providers/Microsoft.Network/applicationGateways/demo-app-gateway/requestRoutingRules/demo-rule was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
  on app-gateway-main.tf line 1, in resource "azurerm_application_gateway" "demo-app-gateway":
  1: resource "azurerm_application_gateway" "demo-app-gateway" {
The resource causing the error is the request_routing_rule not being found, but what confuses me is that it seems to look for it before creating it?
Can anyone please help me understand what I am doing wrong here? If you need more info, just let me know. Thank you very much.
Please check the backend HTTP settings name that is referenced by the request_routing_rule block. You have to change it to demo-http-settings in request_routing_rule to resolve the error.
Issue:
You are using the following backend HTTP settings:
backend_http_settings {
  cookie_based_affinity               = "Disabled"
  name                                = "demo-http-settings"
  port                                = 443
  protocol                            = "Https"
  host_name                           = "apim.test.com"
  pick_host_name_from_backend_address = false
  path                                = "/external/"
  request_timeout                     = 20
  probe_name                          = "demo-apim-probe"
  trusted_root_certificate_names      = ["demo-trusted-root-ca-certificate"]
}
But when referencing it in the request routing rule you are using:
request_routing_rule {
  http_listener_name         = "demo-app-gateway-listener"
  name                       = "demo-rule"
  rule_type                  = "Basic"
  backend_address_pool_name  = "demo-backend-pool"
  backend_http_settings_name = "demo-http-setting"
}
You have named the backend HTTP settings demo-http-settings but reference them as demo-http-setting in request_routing_rule, so the deployment errors out because it can't find the backend HTTP settings.
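With the name corrected, the routing rule block would look like this:
request_routing_rule {
  http_listener_name         = "demo-app-gateway-listener"
  name                       = "demo-rule"
  rule_type                  = "Basic"
  backend_address_pool_name  = "demo-backend-pool"
  backend_http_settings_name = "demo-http-settings" # must match the name in the backend_http_settings block
}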

Terraform for_each if value exists in object

I would like to dynamically create some subnets and route tables from a .tfvars file, and then link each subnet to its associated route table if one is specified.
Here is my .tfvars file:
vnet_spoke_object = {
  specialsubnets = {
    Subnet_1 = {
      name  = "test1"
      cidr  = ["10.0.0.0/28"]
      route = "route1"
    }
    Subnet_2 = {
      name  = "test2"
      cidr  = ["10.0.0.16/28"]
      route = "route2"
    }
    Subnet_3 = {
      name = "test3"
      cidr = ["10.0.0.32/28"]
    }
  }
}

route_table = {
  route1 = {
    name                          = "route1"
    disable_bgp_route_propagation = true
    route_entries = {
      re1 = {
        name                   = "rt-rfc-10-28"
        prefix                 = "10.0.0.0/28"
        next_hop_type          = "VirtualAppliance"
        next_hop_in_ip_address = "10.0.0.10"
      }
    }
  }
  route2 = {
    name                          = "route2"
    disable_bgp_route_propagation = true
    route_entries = {
      re1 = {
        name                   = "rt-rfc-10-28"
        prefix                 = "10.0.0.16/28"
        next_hop_type          = "VirtualAppliance"
        next_hop_in_ip_address = "10.0.0.10"
      }
    }
  }
}
...and here is my build script:
provider "azurerm" {
version = "2.18.0"
features{}
}
variable "ARM_LOCATION" {
default = "uksouth"
}
variable "ARM_SUBSCRIPTION_ID" {
default = "asdf-b31e023c78b8"
}
variable "vnet_spoke_object" {}
variable "route_table" {}
module "names" {
source = "./nbs-azure-naming-standard"
env = "dev"
location = var.ARM_LOCATION
subId = var.ARM_SUBSCRIPTION_ID
}
resource "azurerm_resource_group" "test" {
name = "${module.names.standard["resource-group"]}-vnet"
location = var.ARM_LOCATION
}
resource "azurerm_virtual_network" "test" {
name = "${module.names.standard["virtual-network"]}-test"
location = var.ARM_LOCATION
resource_group_name = azurerm_resource_group.test.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "test" {
for_each = var.vnet_spoke_object.specialsubnets
name = "${module.names.standard["subnet"]}-${each.value.name}"
resource_group_name = azurerm_resource_group.test.name
virtual_network_name = azurerm_virtual_network.test.name
address_prefixes = each.value.cidr
}
resource "azurerm_route_table" "test" {
for_each = var.route_table
name = "${module.names.standard["route-table"]}-${each.value.name}"
location = var.ARM_LOCATION
resource_group_name = azurerm_resource_group.test.name
disable_bgp_route_propagation = each.value.disable_bgp_route_propagation
dynamic "route" {
for_each = each.value.route_entries
content {
name = route.value.name
address_prefix = route.value.prefix
next_hop_type = route.value.next_hop_type
next_hop_in_ip_address = contains(keys(route.value), "next_hop_in_ip_address") ? route.value.next_hop_in_ip_address: null
}
}
}
That part works fine and creates the vnet/subnet/route resources, but the problem I face is dynamically linking each subnet to the route table listed in the .tfvars. Not all subnets will have a route table associated with them, so the association must only run IF the route key/value is listed.
resource "azurerm_subnet_route_table_association" "test" {
for_each = {
for key, value in var.vnet_spoke_object.specialsubnets:
key => value
if value.route != null
}
lifecycle {
ignore_changes = [
subnet_id
]
}
subnet_id = azurerm_subnet.test[each.key].id
route_table_id = azurerm_route_table.test[each.key].id
}
The error I get with the above code is:
Error: Unsupported attribute
  on main.tf line 65, in resource "azurerm_subnet_route_table_association" "test":
  65: if value.route != null
This object does not have an attribute named "route".
I have tried various ways with no success; I'm at a loss here and would appreciate any guidance possible.
Based on your scenario, I'm guessing vnet_spoke_object in input looks like this:
vnet_spoke_object = {
  specialsubnets = {
    subnetA = {
      cidr = "..."
    }
    subnetB = {
      cidr  = "..."
      route = "..."
    }
  }
}
The problem with that is that a missing route entry doesn't resolve to null; it fails with exactly the "Unsupported attribute" error you are seeing. You'd need to write your input like this (with explicit nulls):
vnet_spoke_object = {
  specialsubnets = {
    subnetA = {
      cidr  = "..."
      route = null
    }
    subnetB = {
      cidr  = "..."
      route = "..."
    }
  }
}
Or look up route by name and provide a null default in your for expression, like this:
for_each = {
  for key, value in var.vnet_spoke_object.specialsubnets :
  key => value
  if lookup(value, "route", null) != null
}
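On Terraform 0.13 and later, try() achieves the same filtering without relying on lookup() against an object (a sketch of the same for_each, not from the original answer):
for_each = {
  for key, value in var.vnet_spoke_object.specialsubnets :
  key => value
  # try() returns null when the route attribute is absent,
  # so subnets without a route are filtered out.
  if try(value.route, null) != null
}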

ECS and Application Load Balancer Ephemeral Ports using Terraform

I tried to build an ECS cluster with an ALB in front using Terraform. Because I use dynamic port mapping, the targets are not registered as healthy. I played with the health check and success codes; if I set it to 301, everything is fine.
ECS
data "template_file" "mb_task_template" {
template = file("${path.module}/templates/marketplace-backend.json.tpl")
vars = {
name = "${var.mb_image_name}"
port = "${var.mb_port}"
image = "${aws_ecr_repository.mb.repository_url}"
log_group = "${aws_cloudwatch_log_group.mb.name}"
region = "${var.region}"
}
}
resource "aws_ecs_cluster" "mb" {
name = var.mb_image_name
}
resource "aws_ecs_task_definition" "mb" {
family = var.mb_image_name
container_definitions = data.template_file.mb_task_template.rendered
volume {
name = "mb-home"
host_path = "/ecs/mb-home"
}
}
resource "aws_ecs_service" "mb" {
name = var.mb_repository_url
cluster = aws_ecs_cluster.mb.id
task_definition = aws_ecs_task_definition.mb.arn
desired_count = 2
iam_role = var.aws_iam_role_ecs
depends_on = [aws_autoscaling_group.mb]
load_balancer {
target_group_arn = var.target_group_arn
container_name = var.mb_image_name
container_port = var.mb_port
}
}
resource "aws_autoscaling_group" "mb" {
name = var.mb_image_name
availability_zones = ["${var.availability_zone}"]
min_size = var.min_instance_size
max_size = var.max_instance_size
desired_capacity = var.desired_instance_capacity
health_check_type = "EC2"
health_check_grace_period = 300
launch_configuration = aws_launch_configuration.mb.name
vpc_zone_identifier = flatten([var.vpc_zone_identifier])
lifecycle {
create_before_destroy = true
}
}
data "template_file" "user_data" {
template = file("${path.module}/templates/user_data.tpl")
vars = {
ecs_cluster_name = "${var.mb_image_name}"
}
}
resource "aws_launch_configuration" "mb" {
name_prefix = var.mb_image_name
image_id = var.ami
instance_type = var.instance_type
security_groups = ["${var.aws_security_group}"]
iam_instance_profile = var.aws_iam_instance_profile
key_name = var.key_name
associate_public_ip_address = true
user_data = data.template_file.user_data.rendered
lifecycle {
create_before_destroy = true
}
}
resource "aws_cloudwatch_log_group" "mb" {
name = var.mb_image_name
retention_in_days = 14
}
ALB
locals {
  target_groups = ["1", "2"]
}

resource "aws_alb" "mb" {
  name               = "${var.mb_image_name}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = ["${aws_security_group.mb_alb.id}"]
  subnets            = var.subnets

  tags = {
    Name = var.mb_image_name
  }
}

resource "aws_alb_target_group" "mb" {
  count       = length(local.target_groups)
  name        = "${var.mb_image_name}-tg-${element(local.target_groups, count.index)}"
  port        = var.mb_port
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "instance"

  health_check {
    path                = "/health"
    protocol            = "HTTP"
    timeout             = "10"
    interval            = "15"
    healthy_threshold   = "3"
    unhealthy_threshold = "3"
    matcher             = "200-299"
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Name = var.mb_image_name
  }
}

resource "aws_alb_listener" "mb_https" {
  load_balancer_arn = aws_alb.mb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = module.dns.certificate_arn

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.mb.0.arn
  }
}

resource "aws_alb_listener_rule" "mb_https" {
  listener_arn = aws_alb_listener.mb_https.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.mb.0.arn
  }

  condition {
    field  = "path-pattern"
    values = ["/health/"]
  }
}
Okay, it looks like the code above is working. I had a different issue with networking.
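For reference, the networking detail that usually bites with dynamic port mapping is the container-instance security group: the ALB reaches the instances on the ephemeral port range rather than on var.mb_port. A minimal sketch of such a rule, assuming var.aws_security_group holds the instance security group ID (as used in aws_launch_configuration above); this is not from the original post:
resource "aws_security_group_rule" "alb_to_ecs_ephemeral" {
  # Allow the ALB to reach dynamically mapped container ports
  # (Docker's default ephemeral range) on the ECS container instances.
  type                     = "ingress"
  from_port                = 32768
  to_port                  = 65535
  protocol                 = "tcp"
  security_group_id        = var.aws_security_group       # ECS instance SG (assumed)
  source_security_group_id = aws_security_group.mb_alb.id # the ALB's SG from the config above
}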
