Pass one resource's variable to another - azure

I am creating an Azure App Service resource and an App Registration resource (along with an App Service Plan and other resources that are not relevant to this question, as they work fine) via Terraform.
resource "azurerm_app_service" "app" {
name = var.app_service_name
location = var.resource_group_location
resource_group_name = azurerm_resource_group.rg.name
app_service_plan_id = azurerm_app_service_plan.plan-app.id
app_settings = {
"AzureAd:ClientId" = azuread_application.appregistration.application_id
}
site_config {
ftps_state = var.app_service_ftps_state
}
}
resource "azuread_application" "appregistration" {
display_name = azurerm_app_service.app.name
owners = [data.azuread_client_config.current.object_id]
sign_in_audience = "AzureADMyOrg"
fallback_public_client_enabled = true
web {
homepage_url = var.appreg_web_homepage_url
logout_url = var.appreg_web_logout_url
redirect_uris = [var.appreg_web_homepage_url, var.appreg_web_redirect_uri]
implicit_grant {
access_token_issuance_enabled = true
id_token_issuance_enabled = true
}
}
}
output "appreg_application_id" {
value = azuread_application.appregistration.application_id
}
I need to add the App Registration client / application id to the app_settings block in the app service resource.
The error I get with the above configuration is:
{"#level":"error","#message":"Error: Cycle: azuread_application.appregistration, azurerm_app_service.app","#module":"terraform.ui","#timestamp":"2021-09-15T10:54:31.753401Z","diagnostic":{"severity":"error","summary":"Cycle: azuread_application.appregistration, azurerm_app_service.app","detail":""},"type":"diagnostic"}
Note that the output variable displays the application id correctly.

You get a cycle error because the two resources reference each other. Terraform builds a directed acyclic graph (DAG) to work out the order in which to create (or destroy) resources, and the information flowing from one resource or data source into another normally determines that order.
In your case, the azuread_application.appregistration resource references the azurerm_app_service.app.name argument, while the azurerm_app_service.app resource needs the azuread_application.appregistration.application_id attribute.
I don't know a ton about Azure, but to me it seems like the azurerm_app_service resource would need to be created ahead of the azuread_application resource, so I'd expect the link to be in that direction.
Because you are already setting azurerm_app_service.app.name to var.app_service_name, you can pass var.app_service_name directly to azuread_application.appregistration.display_name; this achieves the same result while breaking the cycle.
resource "azurerm_app_service" "app" {
name = var.app_service_name
location = var.resource_group_location
resource_group_name = azurerm_resource_group.rg.name
app_service_plan_id = azurerm_app_service_plan.plan-app.id
app_settings = {
"AzureAd:ClientId" = azuread_application.appregistration.application_id
}
site_config {
ftps_state = var.app_service_ftps_state
}
}
resource "azuread_application" "appregistration" {
display_name = var.app_service_name
owners = [data.azuread_client_config.current.object_id]
sign_in_audience = "AzureADMyOrg"
fallback_public_client_enabled = true
web {
homepage_url = var.appreg_web_homepage_url
logout_url = var.appreg_web_logout_url
redirect_uris = [var.appreg_web_homepage_url, var.appreg_web_redirect_uri]
implicit_grant {
access_token_issuance_enabled = true
id_token_issuance_enabled = true
}
}
}
output "appreg_application_id" {
value = azuread_application.appregistration.application_id
}
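The same pattern applies more generally: whenever two resources would otherwise reference each other, route the shared value through a variable or a local that both can read. A minimal sketch of that idea (not from the original answer):
# Sketch only: hold the shared name in a local so neither resource has to
# read it from the other, which avoids the dependency cycle.
locals {
  app_service_name = var.app_service_name
}

resource "azuread_application" "appregistration" {
  display_name = local.app_service_name
  # ...
}

resource "azurerm_app_service" "app" {
  name = local.app_service_name
  # ...
}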

Related

How to automatically deploy to an AKS cluster created with Terraform

I would like a guide on how to automatically deploy to a newly provisioned AKS cluster after provisioning with Terraform. For more context, I am building a one-click, all-in-one script for full infrastructure provisioning and deployment. Below is my structure for more understanding.
main.tf
resource "azurerm_kubernetes_cluster" "aks" {
name = var.cluster_name
kubernetes_version = var.kubernetes_version
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = var.cluster_name
default_node_pool {
name = "system"
node_count = var.system_node_count
vm_size = "Standard_DS2_v2"
type = "VirtualMachineScaleSets"
availability_zones = [1, 2, 3]
enable_auto_scaling = false
}
identity {
type = "SystemAssigned"
}
network_profile {
load_balancer_sku = "Standard"
network_plugin = "kubenet"
}
role_based_access_control {
enabled = true
}
}
output.tf
resource "local_file" "kubeconfig" {
depends_on = [azurerm_kubernetes_cluster.aks]
filename = "kubeconfig"
content = azurerm_kubernetes_cluster.aks.kube_config_raw
}
deployment.tf
resource "kubernetes_deployment" "sdc" {
metadata {
name = "sdc"
labels = {
app = "serviceName"
#version = "v1.0"
}
namespace = "default"
}
spec {
replicas = 1
selector {
match_labels = {
app = "serviceName"
}
}
template {
metadata {
labels = {
app = "serviceName"
# version = "v1.0"
}
}
spec {
container {
image = "myImage"
name = "serviceName"
port {
container_port = 80
}
}
}
}
}
depends_on = [
azurerm_kubernetes_cluster.aks
]
}
Everything works perfectly: my kubeconfig file is created and downloaded. My major headache is how to make the terraform apply process use the newly created kubeconfig file and also run the deployment, making my Terraform script fully automated. I basically want to provision and deploy into the newly provisioned cluster in one run.
Looking forward to good help.
Thanks guys
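A minimal sketch of one common approach, assuming the azurerm_kubernetes_cluster.aks resource above and the Terraform kubernetes provider: point the provider at the cluster's kube_config outputs rather than the local kubeconfig file, so the kubernetes_deployment can be applied in the same run.
# Sketch only: configure the kubernetes provider from the AKS outputs so the
# deployment resources can run in the same terraform apply.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}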

Azure Cosmos DB Error with Private Link and Private Endpoint "Failed to refresh the collection list. Please try again later"

I have enabled Private Endpoint for my Azure Cosmos DB. Every time I go to Cosmos DB, I see a red flag on top which says: Failed to refresh the collection list. Please try again later.
We use Terraform to deploy code.
Also, I don't see any container being created, even though I have the below code in Terraform:
resource "azurerm_cosmosdb_sql_container" "default" {
resource_group_name = module.resourcegroup.resource_group.name
account_name = azurerm_cosmosdb_account.default.name
database_name = azurerm_cosmosdb_sql_database.default.name
name = "cosmosdb_container"
partition_key_path = "/definition/id"
throughput = 400
}
Any idea what I can do to fix this? I don't see these issues when Cosmos DB is not behind a Private Endpoint and Private Link.
My TF code is provided below:
resource "azurerm_cosmosdb_account" "default" {
resource_group_name = module.resourcegroup.resource_group.name
location = var.location
name = module.name_cosmosdb_account.location.cosmosdb_account.name_unique
tags = module.resourcegroup.resource_group.tags
public_network_access_enabled = false
network_acl_bypass_for_azure_services = true
enable_automatic_failover = true
is_virtual_network_filter_enabled = true
offer_type = "Standard"
kind = "GlobalDocumentDB"
consistency_policy {
consistency_level = "Session"
max_interval_in_seconds = 5
max_staleness_prefix = 100
}
geo_location {
location = module.resourcegroup.resource_group.location
failover_priority = 0
}
geo_location {
location = "eastus2"
failover_priority = 1
}
}
resource "azurerm_cosmosdb_sql_database" "default" {
resource_group_name = module.resourcegroup.resource_group.name
account_name = azurerm_cosmosdb_account.default.name
name = "cosmosdb_db"
throughput = 400
}
resource "azurerm_cosmosdb_sql_container" "default" {
resource_group_name = module.resourcegroup.resource_group.name
account_name = azurerm_cosmosdb_account.default.name
database_name = azurerm_cosmosdb_sql_database.default.name
name = "cosmosdb_container"
partition_key_path = "/definition/id"
throughput = 400
}
Even with the error in the Portal, the container and resources are being created by Terraform. You can use Data Explorer to see the database and container created from Terraform.
Test:
Terraform code:
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "rg" {
name = "resourcegroup"
}
resource "azurerm_virtual_network" "example" {
name = "cosmos-network"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
}
resource "azurerm_subnet" "example" {
name = "cosmos-subnet"
resource_group_name = data.azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.1.0/24"]
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_cosmosdb_account" "example" {
name = "ansuman-cosmosdb"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
offer_type = "Standard"
kind = "GlobalDocumentDB"
consistency_policy {
consistency_level = "BoundedStaleness"
max_interval_in_seconds = 10
max_staleness_prefix = 200
}
geo_location {
location = data.azurerm_resource_group.rg.location
failover_priority = 0
}
}
resource "azurerm_private_endpoint" "example" {
name = "cosmosansuman-endpoint"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
subnet_id = azurerm_subnet.example.id
private_service_connection {
name = "cosmosansuman-privateserviceconnection"
private_connection_resource_id = azurerm_cosmosdb_account.example.id
subresource_names = [ "SQL" ]
is_manual_connection = false
}
}
resource "azurerm_cosmosdb_sql_database" "example" {
name = "ansuman-cosmos-mongo-db"
resource_group_name = data.azurerm_resource_group.rg.name
account_name = azurerm_cosmosdb_account.example.name
throughput = 400
}
resource "azurerm_cosmosdb_sql_container" "default" {
resource_group_name = data.azurerm_resource_group.rg.name
account_name = azurerm_cosmosdb_account.example.name
database_name = azurerm_cosmosdb_sql_database.example.name
name = "cosmosdb_container"
partition_key_path = "/definition/id"
throughput = 400
}
Output: (screenshot omitted)
Update: As per the discussion, the error Failed to refresh the collection list. Please try again later. is expected in your case, as you disabled public network access to the Cosmos DB account at creation time. If it is set to disabled, public network traffic will be blocked even before the private endpoint is created.
So, the possible solutions for this error are:
Enable public network access while creating the Cosmos DB account from Terraform. Even if you set it to true after the private endpoint is in place, public access to Cosmos DB is disabled automatically; if you go to Firewalls and virtual networks you will see that allow access from all networks is grayed out. Instead, you can choose allow access from the portal and add your current IP there to grant access only to your own public network, as shown below (note: it is set to true by default, so you don't need to add public_network_access_enabled = true in code). A Terraform sketch of this option follows this list.
You can use Data Explorer to check the containers, as you have already verified.
You can create a VM on the same VNet where the endpoint resides and connect to Cosmos DB from inside the VM via the portal itself. You can refer to this Microsoft document for more details.
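A hedged Terraform sketch of the first option, assuming the azurerm_cosmosdb_account resource from the configuration above (the IP address is a placeholder for your own client IP):
# Sketch only: keep public access enabled but restrict it to a single client
# IP, while the private endpoint serves VNet traffic.
resource "azurerm_cosmosdb_account" "default" {
  # ... existing arguments from the configuration above ...
  public_network_access_enabled     = true
  is_virtual_network_filter_enabled = true
  ip_range_filter                   = "203.0.113.10"
}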

Terraform: error when adding Diagnostic setting to Azure App Service

See the below configuration I am using to add a diagnostic setting to send App Service logs to a Log Analytics workspace.
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/monitor_diagnostic_setting
resource "azurerm_app_service" "webapp" {
for_each = local.apsvc_map_with_locations
name = "${var.regional_web_rg[each.value.location].name}-${each.value.apsvc_name}-apsvc"
location = each.value.location
resource_group_name = var.regional_web_rg[each.value.location].name
app_service_plan_id = azurerm_app_service_plan.asp[each.value.location].id
https_only = true
identity {
type = "UserAssigned"
identity_ids = each.value.identity_ids
}
}
resource "azurerm_monitor_diagnostic_setting" "example" {
for_each = local.apsvc_map_with_locations
name = "example"
target_resource_id = "azurerm_app_service.webapp[${each.value.location}-${each.value.apsvc_name}].id"
log_analytics_workspace_id = data.terraform_remote_state.pod_bootstrap.outputs.pod_log_analytics_workspace.id
log {
category = "AuditEvent"
enabled = false
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = false
}
}
}
Error:
Can not parse "target_resource_id" as a resource id: Cannot parse Azure ID: parse "azurerm_app_service.webapp[].id": invalid URI for request
2020-11-06T20:19:59.3344089Z
2020-11-06T20:19:59.3346016Z on .terraform\modules\web.web\pipeline\app\apsvc\app_hosting.tf line 127, in resource "azurerm_monitor_diagnostic_setting" "example":
2020-11-06T20:19:59.3346956Z 127: resource "azurerm_monitor_diagnostic_setting" "example" {
2020-11-06T20:19:59.3347091Z
You can try the following:
target_resource_id = azurerm_app_service.webapp["${each.value.location}-${each.value.apsvc_name}"].id
instead of:
target_resource_id = "azurerm_app_service.webapp[${each.value.location}-${each.value.apsvc_name}].id"
In your version the whole expression is wrapped in quotes, so Terraform treats it as a literal string rather than a reference to the azurerm_app_service resource, which is why it cannot be parsed as an Azure resource ID.

How to send AKS master logs to Event Hub using Terraform?

How do I send AKS master logs to Event Hub using the azurerm Terraform provider? Terraform seems to offer only the Log Analytics option.
In order to send logs to Event Hub using Terraform you need to create a few resources:
Event Hub Namespace (azurerm_eventhub_namespace)
Event Hub (azurerm_eventhub)
Authorization Rule for an Event Hub Namespace (azurerm_eventhub_namespace_authorization_rule)
Diagnostic Setting for an existing Resource (azurerm_monitor_diagnostic_setting)
The following example is based on this repo.
# Create the AKS cluster
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks1"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks1"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

# Create the Event Hub namespace
resource "azurerm_eventhub_namespace" "logging" {
  name                = "logging-eventhub"
  location            = "${azurerm_resource_group.example.location}"
  resource_group_name = "${azurerm_resource_group.example.name}"
  sku                 = "Standard"
  capacity            = 1
  kafka_enabled       = false
}

# Create the Event Hub
resource "azurerm_eventhub" "logging_aks" {
  name                = "logging-aks-eventhub"
  namespace_name      = "${azurerm_eventhub_namespace.logging.name}"
  resource_group_name = "${azurerm_resource_group.example.name}"
  partition_count     = 2
  message_retention   = 1
}

# Create an authorization rule
resource "azurerm_eventhub_namespace_authorization_rule" "logging" {
  name                = "authorization_rule"
  namespace_name      = "${azurerm_eventhub_namespace.logging.name}"
  resource_group_name = "${azurerm_resource_group.example.name}"
  listen              = true
  send                = true
  manage              = true
}

# Manages a Diagnostic Setting for an existing Resource
resource "azurerm_monitor_diagnostic_setting" "aks-logging" {
  name                           = "diagnostic_aksl"
  target_resource_id             = "${azurerm_kubernetes_cluster.example.id}"
  eventhub_name                  = "${azurerm_eventhub.logging_aks.name}"
  eventhub_authorization_rule_id = "${azurerm_eventhub_namespace_authorization_rule.logging.id}"

  log {
    category = "kube-scheduler"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "kube-controller-manager"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "cluster-autoscaler"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "kube-audit"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "kube-apiserver"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }
}
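As a further sketch, the azurerm_monitor_diagnostic_categories data source can be used to look up which log categories the cluster actually supports instead of hard-coding them (the exported attribute is named logs in older azurerm provider versions and log_category_types in newer ones):
# Sketch only: discover the diagnostic log categories exposed by the cluster.
data "azurerm_monitor_diagnostic_categories" "aks" {
  resource_id = azurerm_kubernetes_cluster.example.id
}

output "aks_log_categories" {
  # Attribute name may be "logs" or "log_category_types" depending on the
  # azurerm provider version.
  value = data.azurerm_monitor_diagnostic_categories.aks.logs
}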

How do I fully deploy Containerized Azure Function App with Terraform

Attempting to deploy a Function App on a Premium plan that serves the functions from a container. The HOWTO for this works well enough: https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=nodejs#create-an-app-from-the-image
However, when I try to deploy it using Terraform, no sale. Everything looks right, but the function does not show up in the side menu (it does for the one deployed with the az CLI), nor can I hit it with Postman.
Via Resource Explorer I can see that the functions are not being populated. Here is the HCL that I am using:
resource "azurerm_app_service_plan" "plan" {
name = "${var.app_name}-Premium-ConsumptionPlan"
location = "WestUS"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
kind = "Elastic"
reserved = true
sku {
tier = "ElasticPremium"
size = "EP1"
}
}
data "azurerm_container_registry" "registry" {
name = "${var.app_name}registry"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
}
resource "azurerm_function_app" "funcApp" {
name = "${var.app_name}-userapi-${var.env_name}-funcapp"
location = "WestUS"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
app_service_plan_id = "${azurerm_app_service_plan.plan.id}"
storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
version = "~2"
app_settings = {
FUNCTIONS_EXTENSION_VERSION = "~2"
FUNCTIONS_WORKER_RUNTIME = "dotnet"
DOCKER_REGISTRY_SERVER_URL = "${data.azurerm_container_registry.registry.login_server}"
DOCKER_REGISTRY_SERVER_USERNAME = "${data.azurerm_container_registry.registry.admin_username}"
DOCKER_REGISTRY_SERVER_PASSWORD = "${data.azurerm_container_registry.registry.admin_password}"
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "${azurerm_storage_account.storage.primary_connection_string}"
DOCKER_CUSTOM_IMAGE_NAME = "${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
WEBSITE_CONTENTSHARE = "${azurerm_storage_account.storage.name}"
FUNCTION_APP_EDIT_MODE = "readOnly"
}
site_config {
always_on = true
linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
}
}
----- Updated based on answer -----
The solution was to instruct the Function App NOT to use storage to discover metadata about the available functions; this involves setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to false. Here is my updated script:
resource "azurerm_app_service_plan" "plan" {
name = "${var.app_name}-premiumPlan"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
location = "${data.azurerm_resource_group.rg.location}"
kind = "Linux"
reserved = true
sku {
tier = "Premium"
size = "P1V2"
}
}
data "azurerm_container_registry" "registry" {
name = "${var.app_name}registry"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
}
resource "azurerm_function_app" "funcApp" {
name = "userapi-${var.app_name}fa-${var.env_name}"
location = "${data.azurerm_resource_group.rg.location}"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
app_service_plan_id = "${azurerm_app_service_plan.plan.id}"
storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
version = "~2"
app_settings = {
FUNCTION_APP_EDIT_MODE = "readOnly"
https_only = true
DOCKER_REGISTRY_SERVER_URL = "${data.azurerm_container_registry.registry.login_server}"
DOCKER_REGISTRY_SERVER_USERNAME = "${data.azurerm_container_registry.registry.admin_username}"
DOCKER_REGISTRY_SERVER_PASSWORD = "${data.azurerm_container_registry.registry.admin_password}"
WEBSITES_ENABLE_APP_SERVICE_STORAGE = false
}
site_config {
always_on = true
linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/testimage:v1.0.1"
}
}
To create the Azure Function with your custom Docker image, I think your problem is that you set the environment variable FUNCTIONS_WORKER_RUNTIME; that tells the Function App to use the built-in runtime, but you want to use your custom image. In my test, you only need to configure the function app like this:
resource "azurerm_function_app" "funcApp" {
name = "${var.app_name}-userapi-${var.env_name}-funcapp"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
app_service_plan_id = "${azurerm_app_service_plan.plan.id}"
storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
version = "~2"
app_settings = {
FUNCTIONS_EXTENSION_VERSION = "~2"
DOCKER_REGISTRY_SERVER_URL = "${data.azurerm_container_registry.registry.login_server}"
DOCKER_REGISTRY_SERVER_USERNAME = "${data.azurerm_container_registry.registry.admin_username}"
DOCKER_REGISTRY_SERVER_PASSWORD = "${data.azurerm_container_registry.registry.admin_password}"
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "${azurerm_storage_account.storage.primary_connection_string}"
WEBSITE_CONTENTSHARE = "${azurerm_storage_account.storage.name}"
DOCKER_CUSTOM_IMAGE_NAME = "${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
}
site_config {
always_on = true
linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
}
}
Then you only need to wait a while for the creation.
