I really need some help here.
I am trying to use Terraform to provision S3, Route 53, and a CloudFront distribution, but I keep hitting the error below and have been stuck for two days. I would really appreciate it if someone could help me out.
**Error:**
missing .shecan.link DNS validation record: _aa34525380696522413db4f0382fdfd6.shecan.link
Error info:
│ with aws_acm_certificate_validation.cert_validation,
│ on acm.tf line 25, in resource "aws_acm_certificate_validation" "cert_validation":
│ 25: resource "aws_acm_certificate_validation" "cert_validation" {
Here is my route53.tf:
resource "aws_route53_record" "root-a" {
zone_id = var.zone_id
name = var.domain_name
type = "A"
alias {
name = aws_cloudfront_distribution.root_s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.root_s3_distribution.hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_route53_record" "www-a" {
zone_id = var.zone_id
name = "www.${var.domain_name}"
type = "A"
alias {
name = aws_cloudfront_distribution.www_s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.www_s3_distribution.hosted_zone_id
evaluate_target_health = false
}
}
# resource "aws_route53_record" "cert_validation" {
# for_each = {
# for dvo in aws_acm_certificate.ssl_certificate.domain_validation_options : dvo.domain_name => {
# name = dvo.resource_record_name
# record = dvo.resource_record_value
# type = dvo.resource_record_type
# zone_id = var.zone_id
# }
# }
# allow_overwrite = true
# name = each.value.name
# records = [each.value.record]
# ttl = 60
# type = each.value.type
# zone_id = each.value.zone_id
# }
And here is my acm.tf:
resource "aws_acm_certificate" "ssl_certificate" {
provider = aws.acm_provider
domain_name = var.domain_name
subject_alternative_names = ["*.${var.domain_name}"]
validation_method = "DNS"
# tags = var.common_tags
lifecycle {
create_before_destroy = true
}
}
# data "aws_route53_zone" "root_bucket" {
# name = var.domain_name
# private_zone = false
# }
# data "aws_route53_zone" "www_bucket" {
# name = "www.${var.domain_name}"
# private_zone = false
# }
resource "aws_acm_certificate_validation" "cert_validation" {
provider = aws.acm_provider
certificate_arn = aws_acm_certificate.ssl_certificate.arn
validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}
resource "aws_route53_record" "cert_validation" {
for_each = {
for dvo in aws_acm_certificate.ssl_certificate.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
zone_id = var.zone_id
# zone_id = dvo.domain_name == "www.${var.domain_name}" ? data.aws_route53_zone.www_bucket.zone_id : data.aws_route53_zone.root_bucket.zone_id
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = each.value.zone_id
}
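One thing to check, since that error means ACM never finds the validation CNAME in public DNS: var.zone_id must be the ID of the public hosted zone that is actually authoritative for shecan.link (the one whose NS records the registrar points at). Below is a minimal sketch that resolves the zone by name instead of passing the ID in, so the validation records are guaranteed to land in the right zone; the data source name root_zone is illustrative.

data "aws_route53_zone" "root_zone" {
  name         = var.domain_name
  private_zone = false
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.ssl_certificate.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  # always write into the zone that is authoritative for the domain
  zone_id         = data.aws_route53_zone.root_zone.zone_id
}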
Related
I have created an application gateway, WAF policy, and public IP via Terraform.
From the Azure GUI I created a Key Vault, uploaded the pfx certificate to it, created a managed identity, and granted it full access to the Key Vault.
I am trying to create an additional HTTPS listener that pulls the certificate stored in the Key Vault via a data block, but I keep landing on the error below.
Note: the Key Vault, managed identity, application gateway, and WAF policy are all in the same region.
Error:
│ Error: updating Application Gateway: (Name "abc-xyz-Nonprod-test-us6-Extappgw0001" / Resource Group "xyz-network-vnet-devtest"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidResourceReference" Message="Resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/xyz-network-vnet-devtest/providers/Microsoft.Network/applicationGateways/abc-xyz-Nonprod-test-us6-Extappgw0001/sslCertificates/firepfx referenced by resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/xyz-network-vnet-devtest/providers/Microsoft.Network/applicationGateways/abc-xyz-Nonprod-test-us6-Extappgw0001/httpListeners/External_app_gtw_nonprod_backend_listener_https was not found. Please make sure that the referenced resource exists, and that both resources are in the same region." Details=[]
│
│ with azurerm_application_gateway.abc-xyz-Nonprod-test-us6-Extappgw0001,
│ on abc-xyz-Nonprod-test-us6-Extappgw0001.tf line 102, in resource "azurerm_application_gateway" "abc-xyz-Nonprod-test-us6-Extappgw0001":
│ 102: resource "azurerm_application_gateway" "abc-xyz-Nonprod-test-us6-Extappgw0001"
Code:
terraform {
backend "azurerm" {
storage_account_name = "abccloudlbstorage"
resource_group_name = "xyz-NETENG-AppResources-Prod"
container_name = "testlb"
tenant_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
key = "abc-xyz-Nonprod-test-us6-Extappgw0001.tfstate"
}
}
provider "azurerm" {
features {}
}
data "azurerm_client_config" "current" {}
data "azurerm_subnet" "abc-xyz-devtest-us6-vnet00002-sub00001-AppGW" {
name = "abc-xyz-devtest-us6-vnet00002-sub00001-AppGW"
resource_group_name = "xyz-network-vnet-devtest"
virtual_network_name = "abc-xyz-devtest-us6-vnet00002"
}
data "azurerm_user_assigned_identity" "test-appgw-identity-us6"{
name = "test-appgw-identity-us6"
resource_group_name = "xyz-network-vnet-devtest"
}
data "azurerm_key_vault" "xyz-network-kv" {
name = "xyz-network-kv"
resource_group_name = "xyz-network-vnet-devtest"
}
data "azurerm_key_vault_certificate" "firepfx" {
name = "firepfx"
key_vault_id = data.azurerm_key_vault.xyz-network-kv.id
}
resource "azurerm_public_ip" "abc-test-us6-nonprod-FE0001" {
name = "abc-test-us6-nonprod-FE0001"
resource_group_name = "xyz-network-vnet-devtest"
location = "eastus2"
allocation_method = "Static"
sku = "Standard"
zones = ["1", "2", "3"]
tags = {
BusinessUnit = "enterprise-management"
LineOfBusiness = "xyz"
}
}
resource "azurerm_web_application_firewall_policy" "abc-test-us6-nonprod-WFW0001" {
name = "abc-test-us6-nonprod-WFW0001"
resource_group_name = "xyz-network-vnet-devtest"
location = "eastus2"
tags = {
BusinessUnit = "enterprise-management"
LineOfBusiness = "xyz"
}
custom_rules {
name = "Rule1"
priority = 1
rule_type = "MatchRule"
match_conditions {
match_variables {
variable_name = "RemoteAddr"
}
operator = "IPMatch"
negation_condition = false
match_values = ["8.8.8.8"]
}
action = "Block"
}
policy_settings {
enabled = true
mode = "Prevention"
request_body_check = true
file_upload_limit_in_mb = 100
max_request_body_size_in_kb = 128
}
managed_rules {
exclusion {
match_variable = "RequestHeaderNames"
selector = "x-company-secret-header"
selector_match_operator = "Equals"
}
managed_rule_set {
type = "OWASP"
version = "3.2"
}
}
}
resource "azurerm_application_gateway" "abc-xyz-Nonprod-test-us6-Extappgw0001" {
name = "abc-xyz-Nonprod-test-us6-Extappgw0001"
resource_group_name = "xyz-network-vnet-devtest"
location = "eastus2"
zones = ["1", "2", "3"]
firewall_policy_id = azurerm_web_application_firewall_policy.abc-test-us6-nonprod-WFW0001.id
tags = {
BusinessUnit = "enterprise-management"
LineOfBusiness = "xyz"
}
sku {
name = "WAF_v2"
tier = "WAF_v2"
}
autoscale_configuration {
min_capacity = 2
max_capacity = 10
}
gateway_ip_configuration {
name = "abc-test-us6-nonprod-GIP0001"
subnet_id = data.azurerm_subnet.abc-xyz-devtest-us6-vnet00002-sub00001-AppGW.id
}
frontend_port {
name = "abc-us6-gpt-nonprod-PRT-FE0001"
port = 80
}
frontend_ip_configuration {
name = "abc-test-us6-nonprod-CFG-FE0001"
public_ip_address_id = azurerm_public_ip.abc-test-us6-nonprod-FE0001.id
}
frontend_ip_configuration {
name = "abc-test-us6-nonprod-CFG-FE0002"
subnet_id = data.azurerm_subnet.abc-xyz-devtest-us6-vnet00002-sub00001-AppGW.id
private_ip_address = "10.46.72.200"
private_ip_address_allocation = "Static"
}
backend_address_pool {
name = "External_app_gtw_nonprod_backend"
}
backend_http_settings {
name = "External_app_gtw_nonprod_http_setting"
cookie_based_affinity = "Disabled"
path = "/"
port = 80
protocol = "Http"
request_timeout = 60
}
http_listener {
name = "External_app_gtw_nonprod_backend_listener"
frontend_ip_configuration_name = "abc-test-us6-nonprod-CFG-FE0001"
frontend_port_name = "abc-us6-gpt-nonprod-PRT-FE0001"
protocol = "Http"
}
request_routing_rule {
name = "External_app_gtw_nonprod_RR"
rule_type = "Basic"
http_listener_name = "External_app_gtw_nonprod_backend_listener"
backend_address_pool_name = "External_app_gtw_nonprod_backend"
backend_http_settings_name = "External_app_gtw_nonprod_http_setting"
priority = 1
}
frontend_port {
name = "abc-us6-gpt-nonprod-PRT-FE00011"
port = 443
}
backend_http_settings {
name = "External_app_gtw_nonprod_https_setting"
cookie_based_affinity = "Disabled"
path = "/"
port = 443
protocol = "Https"
request_timeout = 60
host_name = "irms.abc.com"
}
http_listener {
name = "External_app_gtw_nonprod_backend_listener_https"
frontend_ip_configuration_name = "abc-test-us6-nonprod-CFG-FE0001"
frontend_port_name = "abc-us6-gpt-nonprod-PRT-FE00011"
protocol = "Https"
ssl_certificate_name = data.azurerm_key_vault_certificate.firepfx.name
}
identity {
type = "UserAssigned"
identity_ids = [data.azurerm_user_assigned_identity.test-appgw-identity-us6.id]
}
request_routing_rule {
name = "External_app_gtw_nonprod_https"
rule_type = "Basic"
http_listener_name = "External_app_gtw_nonprod_backend_listener_https"
backend_address_pool_name = "External_app_gtw_nonprod_backend"
backend_http_settings_name = "External_app_gtw_nonprod_https_setting"
priority = 3
}
}
For Application Gateway, you have to define an ssl_certificate block that references the Key Vault secret ID via the key_vault_secret_id property. Your listener then references the name of that ssl_certificate block instead of referencing the certificate data source directly.
ssl_certificate {
  name                = "cert2023"
  key_vault_secret_id = "https://mykv.vault.azure.net/secrets/cert2023"
}
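The listener then refers to that block by name; with a Key Vault-backed certificate the gateway also needs the user-assigned identity (which your config already declares) so it can read the secret. A rough sketch, reusing names from the question and taking the secret ID from the existing data source:

ssl_certificate {
  name                = "firepfx"
  key_vault_secret_id = data.azurerm_key_vault_certificate.firepfx.secret_id
}

http_listener {
  name                           = "External_app_gtw_nonprod_backend_listener_https"
  frontend_ip_configuration_name = "abc-test-us6-nonprod-CFG-FE0001"
  frontend_port_name             = "abc-us6-gpt-nonprod-PRT-FE00011"
  protocol                       = "Https"
  # must match the name of the ssl_certificate block above
  ssl_certificate_name           = "firepfx"
}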
I'm following Neal Shah's instructions for deploying multiple VMs with multiple managed disks (https://www.nealshah.dev/posts/2020/05/terraform-for-azure-deploying-multiple-vms-with-multiple-managed-disks/#deploying-multiple-vms-with-multiple-datadisks).
Everything works fine except for the azurerm_virtual_machine_data_disk_attachment resource, which fails with the following error:
│ Error: Invalid index
│
│ on main.tf line 103, in resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach":
│ 103: virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
│ ├────────────────
│ │ azurerm_linux_virtual_machine.vms is tuple with 3 elements
│ │ each.key is "datadisk_dca0-apache-cassandra-node0_disk00"
│
│ The given key does not identify an element in this collection value: a number is required.
My code is below:
locals {
vm_datadiskdisk_count_map = { for k in toset(var.nodes) : k => var.data_disk_count }
luns = { for k in local.datadisk_lun_map : k.datadisk_name => k.lun }
datadisk_lun_map = flatten([
for vm_name, count in local.vm_datadiskdisk_count_map : [
for i in range(count) : {
datadisk_name = format("datadisk_%s_disk%02d", vm_name, i)
lun = i
}
]
])
}
# create resource group
resource "azurerm_resource_group" "resource_group" {
name = format("%s-%s", var.dca, var.name)
location = var.location
}
# create availability set
resource "azurerm_availability_set" "vm_availability_set" {
name = format("%s-%s-availability-set", var.dca, var.name)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
}
# create Security Group to access linux
resource "azurerm_network_security_group" "linux_vm_nsg" {
name = format("%s-%s-linux-vm-nsg", var.dca, var.name)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
security_rule {
name = "AllowSSH"
description = "Allow SSH"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "linux_vm_nsg_association" {
subnet_id = "${data.azurerm_subnet.subnet.id}"
network_security_group_id = azurerm_network_security_group.linux_vm_nsg.id
}
# create NICs for apache cassandra hosts
resource "azurerm_network_interface" "vm_nics" {
depends_on = [azurerm_subnet_network_security_group_association.linux_vm_nsg_association]
count = length(var.nodes)
name = format("%s-%s-nic${count.index}", var.dca, var.name)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
ip_configuration {
name = format("%s-%s-apache-cassandra-ip", var.dca, var.name)
subnet_id = "${data.azurerm_subnet.subnet.id}"
private_ip_address_allocation = "Dynamic"
}
}
# create apache cassandra VMs
resource "azurerm_linux_virtual_machine" "vms" {
count = length(var.nodes)
name = element(var.nodes, count.index)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
network_interface_ids = [element(azurerm_network_interface.vm_nics.*.id, count.index)]
availability_set_id = azurerm_availability_set.vm_availability_set.id
size = var.vm_size
admin_username = var.admin_username
disable_password_authentication = true
admin_ssh_key {
username = var.admin_username
public_key = var.ssh_pub_key
}
source_image_id = var.source_image_id
os_disk {
caching = "ReadWrite"
storage_account_type = var.storage_account_type
disk_size_gb = var.os_disk_size_gb
}
}
# create data disk(s) for VMs
resource "azurerm_managed_disk" "managed_disk" {
for_each = toset([for j in local.datadisk_lun_map : j.datadisk_name])
name     = each.key
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
storage_account_type = var.storage_account_type
create_option = "Empty"
disk_size_gb = var.disk_size_gb
}
resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach" {
for_each = toset([for j in local.datadisk_lun_map : j.datadisk_name])
managed_disk_id = azurerm_managed_disk.managed_disk[each.key].id
virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
lun = lookup(local.luns, each.key)
caching = "ReadWrite"
}
Does anyone know how to accomplish this? Thanks!
I've tried several different approaches but have been unsuccessful so far; I was expecting it to work as described in Neal's post.
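For reference, the "a number is required" error comes from indexing the count-based azurerm_linux_virtual_machine.vms tuple with a string. One way to keep the count-based VMs is to convert the VM name embedded in the disk key back into a numeric index with index(); a sketch, assuming var.nodes contains exactly the names used to build datadisk_lun_map:

resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach" {
  for_each        = toset([for j in local.datadisk_lun_map : j.datadisk_name])
  managed_disk_id = azurerm_managed_disk.managed_disk[each.key].id
  # index() maps "dca0-apache-cassandra-node0" back to its position 0, 1, 2, ... in var.nodes
  virtual_machine_id = azurerm_linux_virtual_machine.vms[index(var.nodes, element(split("_", each.key), 1))].id
  lun     = lookup(local.luns, each.key)
  caching = "ReadWrite"
}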
I was able to get this working, although I have not tested adding/removing nodes/disks yet. It does work for creating multiple VMs with multiple data disks attached to each VM.
I use a variable file that I source in order to substitute the variables in the *.tf files.
variables.tf
variable "azure_subscription_id" {
type = string
description = "Azure Subscription ID"
default = ""
}
variable "dca" {
type = string
description = "datacenter [dca0|dca2|dca4|dca6]."
default = ""
}
variable "location" {
type = string
description = "Location of the resource group."
default = ""
}
variable "resource_group" {
type = string
description = "resource group name."
default = ""
}
variable "subnet_name" {
type = string
description = "subnet name"
default = ""
}
variable "vnet_name" {
type = string
description = "vnet name"
default = ""
}
variable "vnet_rg" {
type = string
description = "vnet resource group"
default = ""
}
variable "vm_size" {
type = string
description = "vm size"
default = ""
}
variable "os_disk_size_gb" {
type = string
description = "vm os disk size gb"
default = ""
}
variable "data_disk_size_gb" {
type = string
description = "vm data disk size gb"
default = ""
}
variable "admin_username" {
type = string
description = "admin user name"
default = ""
}
variable "ssh_pub_key" {
type = string
description = "public key for admin user"
default = ""
}
variable "source_image_id" {
type = string
description = "image id"
default = ""
}
variable "os_disk_storage_account_type" {
type = string
description = ""
default = ""
}
variable "data_disk_storage_account_type" {
type = string
description = ""
default = ""
}
variable "vm_list" {
type = map(object({
hostname = string
}))
default = {
vm0 ={
hostname = "${dca}-${name}-node-0"
},
vm1 = {
hostname = "${dca}-${name}-node-1"
}
vm2 = {
hostname = "${dca}-${name}-node-2"
}
}
}
variable "disks_per_instance" {
type = string
description = ""
default = ""
}
terraform.tfvars
# subscription
azure_subscription_id = "${azure_subscription_id}"
# name and location
resource_group = "${dca}-${name}"
location = "${location}"
dca = "${dca}"
# Network
subnet_name = "${subnet_name}"
vnet_name = "${dca}vnet"
vnet_rg = "th-${dca}-vnet"
# VM
vm_size = "${vm_size}"
os_disk_size_gb = "${os_disk_size_gb}"
os_disk_storage_account_type = "${os_disk_storage_account_type}"
source_image_id = "${source_image_id}"
# User/key info
admin_username = "${admin_username}"
ssh_pub_key = "${ssh_pub_key}"
# data disk info
data_disk_storage_account_type = "${data_disk_storage_account_type}"
data_disk_size_gb = "${data_disk_size_gb}"
disks_per_instance = "${disks_per_instance}"
main.tf
# set locals for multi data disks
locals {
vm_datadiskdisk_count_map = { for k, query in var.vm_list : k => var.disks_per_instance }
luns = { for k in local.datadisk_lun_map : k.datadisk_name => k.lun }
datadisk_lun_map = flatten([
for vm_name, count in local.vm_datadiskdisk_count_map : [
for i in range(count) : {
datadisk_name = format("datadisk_%s_disk%02d", vm_name, i)
lun = i
}
]
])
}
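For intuition, with the three vm_list entries above and, say, disks_per_instance = 2, these locals evaluate to roughly the following:

# datadisk_lun_map = [
#   { datadisk_name = "datadisk_vm0_disk00", lun = 0 },
#   { datadisk_name = "datadisk_vm0_disk01", lun = 1 },
#   { datadisk_name = "datadisk_vm1_disk00", lun = 0 },
#   ... one entry per VM per disk
# ]
# luns = { "datadisk_vm0_disk00" = 0, "datadisk_vm0_disk01" = 1, ... }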
# create resource group
resource "azurerm_resource_group" "resource_group" {
name = format("%s", var.resource_group)
location = var.location
}
# create data disk(s)
resource "azurerm_managed_disk" "managed_disk" {
for_each = toset([for j in local.datadisk_lun_map : j.datadisk_name])
name = each.key
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
storage_account_type = var.data_disk_storage_account_type
create_option = "Empty"
disk_size_gb = var.data_disk_size_gb
}
# create availability set
resource "azurerm_availability_set" "vm_availability_set" {
name = format("%s-availability-set", var.resource_group)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
}
# create Security Group to access linux
resource "azurerm_network_security_group" "linux_vm_nsg" {
name = format("%s-linux-vm-nsg", var.resource_group)
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
security_rule {
name = "AllowSSH"
description = "Allow SSH"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "linux_vm_nsg_association" {
subnet_id = "${data.azurerm_subnet.subnet.id}"
network_security_group_id = azurerm_network_security_group.linux_vm_nsg.id
}
# create NICs for vms
resource "azurerm_network_interface" "nics" {
depends_on = [azurerm_subnet_network_security_group_association.linux_vm_nsg_association]
for_each = var.vm_list
name = "${each.value.hostname}-nic"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
ip_configuration {
name = format("%s-proxy-ip", var.resource_group)
subnet_id = "${data.azurerm_subnet.subnet.id}"
private_ip_address_allocation = "Dynamic"
}
}
# create VMs
resource "azurerm_linux_virtual_machine" "vms" {
for_each = var.vm_list
name = each.value.hostname
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
network_interface_ids = [azurerm_network_interface.nics[each.key].id]
availability_set_id = azurerm_availability_set.vm_availability_set.id
size = var.vm_size
source_image_id = var.source_image_id
custom_data = filebase64("cloud-init.sh")
admin_username = var.admin_username
disable_password_authentication = true
admin_ssh_key {
username = var.admin_username
public_key = var.ssh_pub_key
}
os_disk {
caching = "ReadWrite"
storage_account_type = var.os_disk_storage_account_type
disk_size_gb = var.os_disk_size_gb
}
}
# attach data disks to VMs
resource "azurerm_virtual_machine_data_disk_attachment" "managed_disk_attach" {
for_each = toset([for j in local.datadisk_lun_map : j.datadisk_name])
managed_disk_id = azurerm_managed_disk.managed_disk[each.key].id
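# azurerm_linux_virtual_machine.vms now uses for_each over var.vm_list, so it is a map
# keyed by "vm0", "vm1", "vm2"; split("_", "datadisk_vm0_disk00")[1] returns that key
# directly, which is why the string index works here (unlike the count/tuple version
# in the question above).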
virtual_machine_id = azurerm_linux_virtual_machine.vms[element(split("_", each.key), 1)].id
lun = lookup(local.luns, each.key)
caching = "ReadWrite"
}
I have a Terraform resource in which I am trying to make the subnet_id value dynamic. I have the variables defined below, where subnet_name = "worker-subnet-1". I want to pass the name of the subnet and look up its subnet ID, since I have multiple subnets. How can I do that?
resource "oci_containerengine_node_pool" "node_pool" {
for_each = var.nodepools
cluster_id = oci_containerengine_cluster.cluster[0].id
compartment_id = var.compartment_id
depends_on = [oci_containerengine_cluster.cluster]
kubernetes_version = var.cluster_kubernetes_version
name = each.value["name"]
node_config_details {
placement_configs {
availability_domain = var.availability_domain
subnet_id = oci_core_subnet.each.value["subnet_name"].id
}
size = each.value["size"]
}
node_shape = each.value["node_shape"]
node_shape_config {
#Optional
memory_in_gbs = each.value["memory"]
ocpus = each.value["ocpus"]
}
node_source_details {
image_id = each.value["image_id"]
source_type = "IMAGE"
}
ssh_public_key = file(var.ssh_public_key_path)
}
These are my variables:
nodepools = {
np1 = {
name = "np1"
size = 3
ocpus = 8
memory = 120
image_id = "test"
node_shape = "VM.Standard2.8"
subnet_name = "worker-subnet-1"
}
np2 = {
name = "np2"
size = 2
ocpus = 8
memory = 120
image_id = "test"
node_shape = "VM.Standard2.8"
subnet_name = "worker-subnet-1"
}
}
Any suggestions?
resource "oci_core_subnet" "snet-workers" {
cidr_block = lookup(var.subnets["snet-workers"], "subnet_cidr")
compartment_id = var.compartment_id
vcn_id = oci_core_virtual_network.base_vcn.id
display_name = lookup(var.subnets["snet-workers"], "display_name")
dns_label = lookup(var.subnets["snet-workers"], "dns_label")
prohibit_public_ip_on_vnic = true
security_list_ids = [oci_core_security_list.private_worker_nodes.id]
route_table_id = oci_core_route_table.rt-nat.id
}
You have to reference it as below, replacing <local resource name> with the local name you gave your subnet resource:
subnet_id = oci_core_subnet.<local resource name>[each.value.subnet_name].id
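Note that indexing a resource by a string key like that only works if the subnet resource itself is created with for_each (or count). A minimal sketch of the for_each variant, using a hypothetical local name workers and assuming var.subnets is a map keyed by the same names you put in subnet_name:

resource "oci_core_subnet" "workers" {
  # one subnet per entry in var.subnets, keyed by subnet name such as "worker-subnet-1"
  for_each                   = var.subnets
  cidr_block                 = each.value.subnet_cidr
  compartment_id             = var.compartment_id
  vcn_id                     = oci_core_virtual_network.base_vcn.id
  display_name               = each.value.display_name
  dns_label                  = each.value.dns_label
  prohibit_public_ip_on_vnic = true
  security_list_ids          = [oci_core_security_list.private_worker_nodes.id]
  route_table_id             = oci_core_route_table.rt-nat.id
}

# then, inside the node pool's placement_configs block:
#   subnet_id = oci_core_subnet.workers[each.value.subnet_name].id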
I have code like below
//Create acm certificate for livy_cert
resource "aws_acm_certificate" "livy_cert" {
count = local.count
domain_name = "${var.subsystem}-${var.component}-livy.${var.region_fqdn}"
validation_method = "DNS"
lifecycle {
create_before_destroy = true
}
}
//Validation route53
resource "aws_route53_record" "certificate_validation" {
for_each = {
for dvo in aws_acm_certificate.livy_cert[0].domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = module.core_info.route53_zone_id
}
//Validate certificate before assigning
resource "aws_acm_certificate_validation" "livy_alb_validation_cert" {
count = local.count
certificate_arn = aws_acm_certificate.livy_cert[0].arn
validation_record_fqdns = [for record in aws_route53_record.certificate_validation : record.fqdn]
}
As you can see, my cert uses a count variable, but terraform plan fails when count = 0 because
for dvo in aws_acm_certificate.livy_cert[0].domain_validation_options
fails to evaluate, since index 0 is not valid. I also tried
for dvo in aws_acm_certificate.livy_cert.*.domain_validation_options
but that also fails when count = 1.
Any idea how this can be fixed?
You can flatten the list of domain_validation_options before iterating over it:
// Create acm certificate for livy_cert
resource "aws_acm_certificate" "livy_cert" {
count = local.count
domain_name = "${var.subsystem}-${var.component}-livy.${var.region_fqdn}"
validation_method = "DNS"
lifecycle {
create_before_destroy = true
}
}
// Validation route53
resource "aws_route53_record" "certificate_validation" {
for_each = {
for dvo in flatten([
for cert in aws_acm_certificate.livy_cert: cert.domain_validation_options
]): dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = module.core_info.route53_zone_id
}
// Validate certificate before assigning
resource "aws_acm_certificate_validation" "livy_alb_validation_cert" {
count = local.count
certificate_arn = aws_acm_certificate.livy_cert[count.index].arn
validation_record_fqdns = [for record in aws_route53_record.certificate_validation : record.fqdn]
}
(Note, for livy_alb_validation_cert, I have used livy_cert[count.index] rather than livy_cert[0], just for tidiness)
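For intuition, this works with count = 0 because aws_acm_certificate.livy_cert is then an empty tuple, so the flatten(...) expression collapses to an empty list and for_each receives an empty map (no records are planned and nothing errors); with count = 1 it behaves like the original single-certificate expression. Roughly:

# count = 0:
#   aws_acm_certificate.livy_cert => []  (empty tuple)
#   flatten([...])                => []
#   for_each                      => {}
# count = 1:
#   flatten([...])                => livy_cert[0].domain_validation_options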
I tried to build an ECS cluster with an ALB in front using Terraform. Because I use dynamic port mapping, the targets are not registered as healthy. I played with the health check and success codes; if I set the success codes to 301, everything is fine.
ECS
data "template_file" "mb_task_template" {
template = file("${path.module}/templates/marketplace-backend.json.tpl")
vars = {
name = "${var.mb_image_name}"
port = "${var.mb_port}"
image = "${aws_ecr_repository.mb.repository_url}"
log_group = "${aws_cloudwatch_log_group.mb.name}"
region = "${var.region}"
}
}
resource "aws_ecs_cluster" "mb" {
name = var.mb_image_name
}
resource "aws_ecs_task_definition" "mb" {
family = var.mb_image_name
container_definitions = data.template_file.mb_task_template.rendered
volume {
name = "mb-home"
host_path = "/ecs/mb-home"
}
}
resource "aws_ecs_service" "mb" {
name = var.mb_repository_url
cluster = aws_ecs_cluster.mb.id
task_definition = aws_ecs_task_definition.mb.arn
desired_count = 2
iam_role = var.aws_iam_role_ecs
depends_on = [aws_autoscaling_group.mb]
load_balancer {
target_group_arn = var.target_group_arn
container_name = var.mb_image_name
container_port = var.mb_port
}
}
resource "aws_autoscaling_group" "mb" {
name = var.mb_image_name
availability_zones = ["${var.availability_zone}"]
min_size = var.min_instance_size
max_size = var.max_instance_size
desired_capacity = var.desired_instance_capacity
health_check_type = "EC2"
health_check_grace_period = 300
launch_configuration = aws_launch_configuration.mb.name
vpc_zone_identifier = flatten([var.vpc_zone_identifier])
lifecycle {
create_before_destroy = true
}
}
data "template_file" "user_data" {
template = file("${path.module}/templates/user_data.tpl")
vars = {
ecs_cluster_name = "${var.mb_image_name}"
}
}
resource "aws_launch_configuration" "mb" {
name_prefix = var.mb_image_name
image_id = var.ami
instance_type = var.instance_type
security_groups = ["${var.aws_security_group}"]
iam_instance_profile = var.aws_iam_instance_profile
key_name = var.key_name
associate_public_ip_address = true
user_data = data.template_file.user_data.rendered
lifecycle {
create_before_destroy = true
}
}
resource "aws_cloudwatch_log_group" "mb" {
name = var.mb_image_name
retention_in_days = 14
}
ALB
locals {
target_groups = ["1", "2"]
}
resource "aws_alb" "mb" {
name = "${var.mb_image_name}-alb"
internal = false
load_balancer_type = "application"
security_groups = ["${aws_security_group.mb_alb.id}"]
subnets = var.subnets
tags = {
Name = var.mb_image_name
}
}
resource "aws_alb_target_group" "mb" {
count = length(local.target_groups)
name = "${var.mb_image_name}-tg-${element(local.target_groups, count.index)}"
port = var.mb_port
protocol = "HTTP"
vpc_id = var.vpc_id
target_type = "instance"
health_check {
path = "/health"
protocol = "HTTP"
timeout = "10"
interval = "15"
healthy_threshold = "3"
unhealthy_threshold = "3"
matcher = "200-299"
}
lifecycle {
create_before_destroy = true
}
tags = {
Name = var.mb_image_name
}
}
resource "aws_alb_listener" "mb_https" {
load_balancer_arn = aws_alb.mb.arn
port = 443
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = module.dns.certificate_arn
default_action {
type = "forward"
target_group_arn = aws_alb_target_group.mb.0.arn
}
}
resource "aws_alb_listener_rule" "mb_https" {
listener_arn = aws_alb_listener.mb_https.arn
priority = 100
action {
type = "forward"
target_group_arn = aws_alb_target_group.mb.0.arn
}
condition {
field = "path-pattern"
values = ["/health/"]
}
}
Okay, it looks like the code above is working; I had a different issue with networking.
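For anyone hitting the same symptom: with dynamic host port mapping the ALB health-checks the container instances on ephemeral ports, so the networking piece to verify is that the instance security group allows the ALB's security group on that port range. A hedged sketch, where aws_security_group.ecs_instances is a hypothetical name for the instance security group (var.aws_security_group in the code above):

resource "aws_security_group_rule" "alb_to_ecs_ephemeral" {
  type                     = "ingress"
  description              = "Allow the ALB to reach dynamically mapped container ports"
  from_port                = 32768
  to_port                  = 65535
  protocol                 = "tcp"
  security_group_id        = aws_security_group.ecs_instances.id   # hypothetical instance SG
  source_security_group_id = aws_security_group.mb_alb.id
}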