How do I fully deploy a containerized Azure Function App with Terraform - azure

I am attempting to deploy a Function App on a Premium plan that serves its functions from a container. The HOWTO for this works well enough: https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=nodejs#create-an-app-from-the-image
However, when I try to deploy it with Terraform, no luck. Everything looks right, but the functions do not show up in the side menu (they do for the app deployed with the az CLI), nor can I hit them with Postman.
Via Resource Explorer I can see that the functions are not being populated. Here is the HCL I am using:
resource "azurerm_app_service_plan" "plan" {
  name                = "${var.app_name}-Premium-ConsumptionPlan"
  location            = "WestUS"
  resource_group_name = "${data.azurerm_resource_group.rg.name}"
  kind                = "Elastic"
  reserved            = true

  sku {
    tier = "ElasticPremium"
    size = "EP1"
  }
}

data "azurerm_container_registry" "registry" {
  name                = "${var.app_name}registry"
  resource_group_name = "${data.azurerm_resource_group.rg.name}"
}
resource "azurerm_function_app" "funcApp" {
  name                      = "${var.app_name}-userapi-${var.env_name}-funcapp"
  location                  = "WestUS"
  resource_group_name       = "${data.azurerm_resource_group.rg.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.plan.id}"
  storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
  version                   = "~2"

  app_settings = {
    FUNCTIONS_EXTENSION_VERSION              = "~2"
    FUNCTIONS_WORKER_RUNTIME                 = "dotnet"
    DOCKER_REGISTRY_SERVER_URL               = "${data.azurerm_container_registry.registry.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME          = "${data.azurerm_container_registry.registry.admin_username}"
    DOCKER_REGISTRY_SERVER_PASSWORD          = "${data.azurerm_container_registry.registry.admin_password}"
    WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "${azurerm_storage_account.storage.primary_connection_string}"
    DOCKER_CUSTOM_IMAGE_NAME                 = "${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
    WEBSITE_CONTENTSHARE                     = "${azurerm_storage_account.storage.name}"
    FUNCTION_APP_EDIT_MODE                   = "readOnly"
  }

  site_config {
    always_on        = true
    linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
  }
}
----- Updated based on answer -----
The solution was to instruct the Function App NOT to use storage to discover metadata about the available functions, which involves setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to false. Here is my updated script:
resource "azurerm_app_service_plan" "plan" {
  name                = "${var.app_name}-premiumPlan"
  resource_group_name = "${data.azurerm_resource_group.rg.name}"
  location            = "${data.azurerm_resource_group.rg.location}"
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Premium"
    size = "P1V2"
  }
}

data "azurerm_container_registry" "registry" {
  name                = "${var.app_name}registry"
  resource_group_name = "${data.azurerm_resource_group.rg.name}"
}
resource "azurerm_function_app" "funcApp" {
  name                      = "userapi-${var.app_name}fa-${var.env_name}"
  location                  = "${data.azurerm_resource_group.rg.location}"
  resource_group_name       = "${data.azurerm_resource_group.rg.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.plan.id}"
  storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
  version                   = "~2"
  https_only                = true # resource-level argument, not an app setting

  app_settings = {
    FUNCTION_APP_EDIT_MODE              = "readOnly"
    DOCKER_REGISTRY_SERVER_URL          = "${data.azurerm_container_registry.registry.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME     = "${data.azurerm_container_registry.registry.admin_username}"
    DOCKER_REGISTRY_SERVER_PASSWORD     = "${data.azurerm_container_registry.registry.admin_password}"
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = "false"
  }

  site_config {
    always_on        = true
    linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/testimage:v1.0.1"
  }
}

To create the Azure Function from your custom Docker image, I think the problem is that you set the FUNCTIONS_WORKER_RUNTIME app setting: it tells the Function App to use a built-in runtime, whereas you want to use your custom image. In my test, you only need to configure the function app like this:
resource "azurerm_function_app" "funcApp" {
  name                      = "${var.app_name}-userapi-${var.env_name}-funcapp"
  location                  = "${azurerm_resource_group.main.location}"
  resource_group_name       = "${azurerm_resource_group.main.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.plan.id}"
  storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
  version                   = "~2"

  app_settings = {
    FUNCTIONS_EXTENSION_VERSION              = "~2"
    DOCKER_REGISTRY_SERVER_URL               = "${data.azurerm_container_registry.registry.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME          = "${data.azurerm_container_registry.registry.admin_username}"
    DOCKER_REGISTRY_SERVER_PASSWORD          = "${data.azurerm_container_registry.registry.admin_password}"
    WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "${azurerm_storage_account.storage.primary_connection_string}"
    WEBSITE_CONTENTSHARE                     = "${azurerm_storage_account.storage.name}"
    DOCKER_CUSTOM_IMAGE_NAME                 = "${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
  }

  site_config {
    always_on        = true
    linux_fx_version = "DOCKER|${data.azurerm_container_registry.registry.login_server}/pingtrigger:test"
  }
}
Then you only need to wait a while for the creation to complete.
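For reference, the azurerm_function_app resource used above is the legacy (pre-v3) one. In azurerm v3 and later the same container deployment is expressed with azurerm_linux_function_app, where the image moves into an application_stack docker block. A rough sketch only, with the resource names carried over from the question and the image name/tag as placeholders; check the provider docs for the exact attribute set of your provider version:

```hcl
# Sketch, assuming azurerm v3+. Image name and tag are placeholders.
resource "azurerm_linux_function_app" "funcapp" {
  name                       = "${var.app_name}-userapi-${var.env_name}-funcapp"
  location                   = data.azurerm_resource_group.rg.location
  resource_group_name        = data.azurerm_resource_group.rg.name
  service_plan_id            = azurerm_service_plan.plan.id
  storage_account_name       = azurerm_storage_account.storage.name
  storage_account_access_key = azurerm_storage_account.storage.primary_access_key

  site_config {
    always_on = true

    application_stack {
      docker {
        registry_url      = data.azurerm_container_registry.registry.login_server
        image_name        = "pingtrigger"
        image_tag         = "test"
        registry_username = data.azurerm_container_registry.registry.admin_username
        registry_password = data.azurerm_container_registry.registry.admin_password
      }
    }
  }

  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = "false"
  }
}
```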

Related

How to send the AKS application Cluster, Node, Pod, Container metrics to Log Analytics workspace so that it will be available in Azure Monitoring?

I have created an AKS cluster using the following Terraform code
resource "azurerm_virtual_network" "test" {
  name                = var.virtual_network_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = [var.virtual_network_address_prefix]

  subnet {
    name           = var.aks_subnet_name
    address_prefix = var.aks_subnet_address_prefix
  }

  tags = var.tags
}

data "azurerm_subnet" "kubesubnet" {
  name                 = var.aks_subnet_name
  virtual_network_name = azurerm_virtual_network.test.name
  resource_group_name  = azurerm_resource_group.rg.name
  depends_on           = [azurerm_virtual_network.test]
}
# Create Log Analytics Workspace
module "log_analytics_workspace" {
  source              = "./modules/log_analytics_workspace"
  count               = var.enable_log_analytics_workspace == true ? 1 : 0
  app_or_service_name = "log"
  subscription_type   = var.subscription_type
  environment         = var.environment
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location
  instance_number     = var.instance_number
  sku                 = var.log_analytics_workspace_sku
  retention_in_days   = var.log_analytics_workspace_retention_in_days
  tags                = var.tags
}
resource "azurerm_kubernetes_cluster" "k8s" {
  name                             = var.aks_name
  location                         = azurerm_resource_group.rg.location
  dns_prefix                       = var.aks_dns_prefix
  resource_group_name              = azurerm_resource_group.rg.name
  http_application_routing_enabled = false

  linux_profile {
    admin_username = var.vm_user_name

    ssh_key {
      key_data = file(var.public_ssh_key_path)
    }
  }

  default_node_pool {
    name            = "agentpool"
    node_count      = var.aks_agent_count
    vm_size         = var.aks_agent_vm_size
    os_disk_size_gb = var.aks_agent_os_disk_size
    vnet_subnet_id  = data.azurerm_subnet.kubesubnet.id
  }

  service_principal {
    client_id     = local.client_id
    client_secret = local.client_secret
  }

  network_profile {
    network_plugin     = "azure"
    dns_service_ip     = var.aks_dns_service_ip
    docker_bridge_cidr = var.aks_docker_bridge_cidr
    service_cidr       = var.aks_service_cidr
  }

  # Enable Azure AD RBAC integration for the cluster
  azure_active_directory_role_based_access_control {
    managed                = var.azure_active_directory_role_based_access_control_managed
    admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
    azure_rbac_enabled     = var.azure_rbac_enabled
  }

  oms_agent {
    log_analytics_workspace_id = module.log_analytics_workspace[0].id
  }

  timeouts {
    create = "20m"
    delete = "20m"
  }

  depends_on = [data.azurerm_subnet.kubesubnet, module.log_analytics_workspace]
  tags       = var.tags
}
and I want to send the AKS application Cluster, Node, Pod, and Container metrics to the Log Analytics workspace so that they are available in Azure Monitoring.
I have configured the diagnostic setting as follows:
resource "azurerm_monitor_diagnostic_setting" "aks_cluster" {
  name                       = "${azurerm_kubernetes_cluster.k8s.name}-audit"
  target_resource_id         = azurerm_kubernetes_cluster.k8s.id
  log_analytics_workspace_id = module.log_analytics_workspace[0].id

  log {
    category = "kube-apiserver"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "kube-controller-manager"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "cluster-autoscaler"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "kube-scheduler"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "kube-audit"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  metric {
    category = "AllMetrics"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }
}
Is that all that is needed? I came across an article where they were using azurerm_application_insights, and I don't understand why azurerm_application_insights would be needed to capture cluster-level metrics.
You do not need Application Insights; it really depends on whether you want application-level monitoring.
This is probably what you read:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_insights
"Manages an Application Insights component."
Application Insights provides complete monitoring of applications running on AKS and other environments.
https://learn.microsoft.com/en-us/azure/aks/monitor-aks#level-4--applications
According to good practice, you should enable a few others:
- guard should be enabled, assuming you use AAD.
- enable AllMetrics.
- consider kube-audit-admin for reduced logging events.
- consider csi-azuredisk-controller.
- consider cloud-controller-manager for the cloud-node-manager component.
See more here:
https://learn.microsoft.com/en-us/azure/aks/monitor-aks#configure-monitoring
https://learn.microsoft.com/en-us/azure/aks/monitor-aks-reference
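The recommendations above can be folded into the diagnostic setting from the question. A sketch only, assuming azurerm v3+ (where the enabled_log block replaces log); which categories actually exist depends on your cluster configuration:

```hcl
# Sketch: good-practice categories from the answer; availability varies by cluster.
resource "azurerm_monitor_diagnostic_setting" "aks_cluster" {
  name                       = "${azurerm_kubernetes_cluster.k8s.name}-audit"
  target_resource_id         = azurerm_kubernetes_cluster.k8s.id
  log_analytics_workspace_id = module.log_analytics_workspace[0].id

  enabled_log { category = "guard" }                    # AAD integration
  enabled_log { category = "kube-audit-admin" }         # leaner than kube-audit
  enabled_log { category = "csi-azuredisk-controller" }
  enabled_log { category = "cloud-controller-manager" } # cloud-node-manager component

  metric {
    category = "AllMetrics"
    enabled  = true
  }
}
```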

Mounting a volume inside the docker container of an 'azurerm_linux_web_app' using Terraform

I am using an Azure Linux Web App (Terraform below) to host a docker container running the base image of Keycloak, our authentication and authorization provider. We have a requirement to import a theme, which can be done by mounting a folder at a specific location within the docker container.
I did see that it is possible to mount blob storage inside the Linux web app, but I'm not sure how to get it mounted inside the docker container defined in the application stack.
Question
How can I control where the mount point goes? Rather than /keycloak/custom-themes, I would like it to go under /opt/keycloak/themes.
locals {
  storage_account_name = "themestoragean001"
  blob_container_name  = "themes"
  storage_account_kind = "Storage"
}

resource "azurerm_storage_account" "access_service_storage_account" {
  name                     = local.storage_account_name
  resource_group_name      = data.azurerm_resource_group.resource_group.name
  location                 = data.azurerm_resource_group.resource_group.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  account_kind             = local.storage_account_kind

  tags = merge(
    local.resource_tags,
    {
      Purpose     = "Storage Account for Keycloak themes for ${local.environment}"
      StorageKind = local.storage_account_kind
    }
  )
}

resource "azurerm_storage_container" "container" {
  name                  = local.blob_container_name
  storage_account_name  = azurerm_storage_account.access_service_storage_account.name
  container_access_type = "private"
}
resource "azurerm_linux_web_app" "keycloak_web_app" {
  name                = local.app_service_name
  location            = local.default_region
  resource_group_name = data.azurerm_resource_group.resource_group.name
  service_plan_id     = azurerm_service_plan.access_service_plan.id
  https_only          = true

  app_settings = {
    DOCKER_REGISTRY_SERVER_URL = "https://${local.keycloak_registry_server}"
    PROXY_ADDRESS_FORWARDING   = true
    KC_HOSTNAME                = var.keycloak_hostname
    KC_HTTP_ENABLED            = true
    KC_METRICS_ENABLED         = true
    KC_DB_URL_HOST             = azurerm_postgresql_flexible_server.keycloak_postgresql_server.fqdn
    KC_DB_URL_PORT             = 5432
    KC_DB_SCHEMA               = "public"
    KC_DB_USERNAME             = local.postgres_admin_username
    KC_DB_PASSWORD             = local.postgres_admin_password
    KEYCLOAK_ADMIN             = local.keycloak_admin_username
    KEYCLOAK_ADMIN_PASSWORD    = local.keycloak_admin_password
    WEBSITES_PORT              = 8080
    KC_PROXY                   = "edge"
    KC_HOSTNAME_STRICT         = false
  }

  identity {
    type = "SystemAssigned"
  }

  site_config {
    always_on                = true
    http2_enabled            = true
    minimum_tls_version      = "1.2"
    app_command_line         = "start --optimized --proxy=edge --hostname-strict-https=false"
    health_check_path        = "/"
    remote_debugging_enabled = false

    application_stack {
      docker_image     = "${local.keycloak_registry_server}/${local.keycloak_image}"
      docker_image_tag = "19.0.3"
    }
  }

  storage_account {
    access_key   = azurerm_storage_account.access_service_storage_account.primary_access_key # expects an access key, not a connection string
    account_name = local.storage_account_name
    name         = "LinuxThemeMount"
    share_name   = "themes"
    type         = "AzureBlob"
    mount_path   = "/keycloak/custom-themes"
  }

  tags = local.resource_tags
}
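The mount point inside the container is controlled directly by the mount_path argument of the storage_account block, so pointing it at Keycloak's theme directory is a one-line change. A sketch of just that block, also switching access_key to the account's primary_access_key, which is what that argument expects (note that, per the App Service documentation, AzureBlob-backed mounts are read-only, which is fine for themes):

```hcl
# Sketch: only mount_path (and the access_key fix) differ from the question.
storage_account {
  access_key   = azurerm_storage_account.access_service_storage_account.primary_access_key
  account_name = local.storage_account_name
  name         = "LinuxThemeMount"
  share_name   = "themes"
  type         = "AzureBlob"
  mount_path   = "/opt/keycloak/themes" # Keycloak's default theme directory
}
```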

Terraform deployed azure function not working with FUNCTIONS_WORKER_RUNTIME python setting

The Azure Function App displays "The service is unavailable." when the FUNCTIONS_WORKER_RUNTIME = python app setting is set (with this setting commented out, the function displays "Your Functions 4.0 app is up and running").
As far as I understand, without this setting requirements.txt is not installed, so the function returns an error (from the logs: No module named 'azure.storage'...).
My Terraform Azure Function code:
resource "azurerm_linux_function_app" "function_app" {
  name                       = "${var.name}fa"
  resource_group_name        = var.resource_group.name
  location                   = var.resource_group.location
  storage_account_name       = var.storage_account.name
  storage_account_access_key = var.storage_account.primary_access_key
  service_plan_id            = var.service_plan.id

  site_config {
    application_insights_key               = azurerm_application_insights.ai.instrumentation_key
    application_insights_connection_string = azurerm_application_insights.ai.connection_string
  }

  app_settings = {
    APPINSIGHTS_INSTRUMENTATIONKEY = azurerm_application_insights.ai.instrumentation_key
    SCM_DO_BUILD_DURING_DEPLOYMENT = true
    FUNCTIONS_WORKER_RUNTIME       = "python"
  }
}
The correct configuration should look like this:
resource "azurerm_linux_function_app" "function_app" {
  name                       = "${var.name}fa"
  resource_group_name        = var.resource_group.name
  location                   = var.resource_group.location
  storage_account_name       = var.storage_account.name
  storage_account_access_key = var.storage_account.primary_access_key
  service_plan_id            = var.service_plan.id

  site_config {
    application_insights_key               = azurerm_application_insights.ai.instrumentation_key
    application_insights_connection_string = azurerm_application_insights.ai.connection_string

    application_stack {
      python_version = "3.9"
    }
  }

  app_settings = {
    APPINSIGHTS_INSTRUMENTATIONKEY = azurerm_application_insights.ai.instrumentation_key
    SCM_DO_BUILD_DURING_DEPLOYMENT = true
  }
}
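Both configurations reference an azurerm_application_insights.ai resource that is not shown in the question. For completeness, a minimal sketch of what such a resource might look like; the name is an assumption, only the attribute shape follows the provider docs:

```hcl
# Assumed resource referenced as `azurerm_application_insights.ai` above.
resource "azurerm_application_insights" "ai" {
  name                = "${var.name}-ai" # hypothetical naming scheme
  resource_group_name = var.resource_group.name
  location            = var.resource_group.location
  application_type    = "web"
}
```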

Pass one resource's variable to another

I am creating an Azure App Service resource and an App Registration resource (plus an app service plan and other resources that are not relevant to this question, as they work fine) via Terraform.
resource "azurerm_app_service" "app" {
  name                = var.app_service_name
  location            = var.resource_group_location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.plan-app.id

  app_settings = {
    "AzureAd:ClientId" = azuread_application.appregistration.application_id
  }

  site_config {
    ftps_state = var.app_service_ftps_state
  }
}

resource "azuread_application" "appregistration" {
  display_name                   = azurerm_app_service.app.name
  owners                         = [data.azuread_client_config.current.object_id]
  sign_in_audience               = "AzureADMyOrg"
  fallback_public_client_enabled = true

  web {
    homepage_url  = var.appreg_web_homepage_url
    logout_url    = var.appreg_web_logout_url
    redirect_uris = [var.appreg_web_homepage_url, var.appreg_web_redirect_uri]

    implicit_grant {
      access_token_issuance_enabled = true
      id_token_issuance_enabled     = true
    }
  }
}

output "appreg_application_id" {
  value = azuread_application.appregistration.application_id
}
I need to add the App Registration client / application id to the app_settings block in the app service resource.
The error I get with the above configuration is:
{"#level":"error","#message":"Error: Cycle: azuread_application.appregistration, azurerm_app_service.app","#module":"terraform.ui","#timestamp":"2021-09-15T10:54:31.753401Z","diagnostic":{"severity":"error","summary":"Cycle: azuread_application.appregistration, azurerm_app_service.app","detail":""},"type":"diagnostic"}
Note that the output variable displays the application id correctly.
You have a cycle error because you have both resources referencing each other. Terraform builds a directed acyclical graph to work out which order to create (or destroy) resources in with the information from one resource or data source flowing into another normally determining this order.
In your case your azuread_application.appregistration resource is referencing the azurerm_app_service.app.name parameter while the azurerm_app_service.app resource needs the azuread_application.appregistration.application_id attribute.
I don't know a ton about Azure but to me that seems like the azurerm_app_service resource needs to be created ahead of the azuread_application resource and so I'd expect the link to be in that direction.
Because you are already setting the azurerm_app_service.app.name parameter to var.app_service_name, you can just pass var.app_service_name directly to azuread_application.appregistration.display_name to achieve the same result while breaking the cycle:
resource "azurerm_app_service" "app" {
  name                = var.app_service_name
  location            = var.resource_group_location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.plan-app.id

  app_settings = {
    "AzureAd:ClientId" = azuread_application.appregistration.application_id
  }

  site_config {
    ftps_state = var.app_service_ftps_state
  }
}

resource "azuread_application" "appregistration" {
  display_name                   = var.app_service_name
  owners                         = [data.azuread_client_config.current.object_id]
  sign_in_audience               = "AzureADMyOrg"
  fallback_public_client_enabled = true

  web {
    homepage_url  = var.appreg_web_homepage_url
    logout_url    = var.appreg_web_logout_url
    redirect_uris = [var.appreg_web_homepage_url, var.appreg_web_redirect_uri]

    implicit_grant {
      access_token_issuance_enabled = true
      id_token_issuance_enabled     = true
    }
  }
}

output "appreg_application_id" {
  value = azuread_application.appregistration.application_id
}
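An equivalent way to make the one-way flow of the shared value explicit is to lift it into a locals block, so neither resource has to read the name off the other. A sketch under the same variable names (irrelevant arguments elided):

```hcl
# Sketch: both resources read the name from one place, so no cycle can form.
locals {
  app_service_name = var.app_service_name
}

resource "azuread_application" "appregistration" {
  display_name = local.app_service_name
  # ... remaining arguments as in the question ...
}

resource "azurerm_app_service" "app" {
  name = local.app_service_name

  app_settings = {
    "AzureAd:ClientId" = azuread_application.appregistration.application_id
  }
  # ... remaining arguments as in the question ...
}
```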

Specifying in Terraform that the OS is Linux for azurerm_function_app

I have the following azurerm_function_app Terraform section:
resource "azurerm_function_app" "main" {
  name                      = "${var.storage_function_name}"
  location                  = "${azurerm_resource_group.main.location}"
  resource_group_name       = "${azurerm_resource_group.main.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.main.id}"
  storage_connection_string = "${azurerm_storage_account.main.primary_connection_string}"
  https_only                = true

  app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY" = "${azurerm_application_insights.main.instrumentation_key}"
  }
}
How can I specify that the OS is Linux?
Since there is not much documentation, I used the following technique to construct the Terraform template:
1. Create the type of function app you want in the Azure portal.
2. Import the same resource using the terraform import command:
terraform import azurerm_function_app.functionapp1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Web/sites/functionapp1
The following information will be retrieved:
id = /subscriptions/xxxx/resourceGroups/xxxxxx/providers/Microsoft.Web/sites/xxxx
app_service_plan_id = /subscriptions/xxx/resourceGroups/xxxx/providers/Microsoft.Web/serverfarms/xxxx
app_settings.% = 3
app_settings.FUNCTIONS_WORKER_RUNTIME = node
app_settings.MACHINEKEY_DecryptionKey = xxxxx
app_settings.WEBSITE_NODE_DEFAULT_VERSION = 10.14.1
client_affinity_enabled = false
connection_string.# = 0
default_hostname = xxxx.azurewebsites.net
enable_builtin_logging = false
enabled = true
https_only = false
identity.# = 0
kind = functionapp,linux,container
location = centralus
name = xxxxx
outbound_ip_addresses = xxxxxx
resource_group_name = xxxx
site_config.# = 1
site_config.0.always_on = true
site_config.0.linux_fx_version = DOCKER|microsoft/azure-functions-node8:2.0
site_config.0.use_32_bit_worker_process = true
site_config.0.websockets_enabled = false
site_credential.# = 1
site_credential.0.password =xxxxxx
site_credential.0.username = xxxxxx
storage_connection_string = xxxx
tags.% = 0
version = ~2
From this I built the following Terraform template:
provider "azurerm" {
}

resource "azurerm_resource_group" "linuxnodefunction" {
  name     = "azure-func-linux-node-rg"
  location = "westus2"
}

resource "azurerm_storage_account" "linuxnodesa" {
  name                     = "azurefunclinuxnodesa"
  resource_group_name      = "${azurerm_resource_group.linuxnodefunction.name}"
  location                 = "${azurerm_resource_group.linuxnodefunction.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_app_service_plan" "linuxnodesp" {
  name                = "azure-func-linux-node-sp"
  location            = "${azurerm_resource_group.linuxnodefunction.location}"
  resource_group_name = "${azurerm_resource_group.linuxnodefunction.name}"
  kind                = "Linux"
  reserved            = true

  sku {
    capacity = 1
    size     = "P1v2"
    tier     = "PremiumV2"
  }
}

resource "azurerm_function_app" "linuxnodefuncapp" {
  name                      = "azure-func-linux-node-function-app"
  location                  = "${azurerm_resource_group.linuxnodefunction.location}"
  resource_group_name       = "${azurerm_resource_group.linuxnodefunction.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.linuxnodesp.id}"
  storage_connection_string = "${azurerm_storage_account.linuxnodesa.primary_connection_string}"

  app_settings = {
    FUNCTIONS_WORKER_RUNTIME     = "node"
    WEBSITE_NODE_DEFAULT_VERSION = "10.14.1"
  }

  site_config {
    always_on                 = true
    linux_fx_version          = "DOCKER|microsoft/azure-functions-node8:2.0"
    use_32_bit_worker_process = true
    websockets_enabled        = false
  }
}
Let us know your experience with this. I will try to test a few things with it as well.
I think you need to specify that in the app_service_plan block:
kind = "Linux"
kind - (Optional) The kind of the App Service Plan to create. Possible values are Windows (also available as App), Linux and FunctionApp (for a Consumption Plan). Defaults to Windows. Changing this forces a new resource to be created.
NOTE: When creating a Linux App Service Plan, the reserved field must be set to true.
Example from the Terraform docs:
resource "azurerm_resource_group" "test" {
  name     = "azure-functions-cptest-rg"
  location = "westus2"
}

resource "azurerm_storage_account" "test" {
  name                     = "functionsapptestsa"
  resource_group_name      = "${azurerm_resource_group.test.name}"
  location                 = "${azurerm_resource_group.test.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_app_service_plan" "test" {
  name                = "azure-functions-test-service-plan"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}

resource "azurerm_function_app" "test" {
  name                      = "test-azure-functions"
  location                  = "${azurerm_resource_group.test.location}"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.test.id}"
  storage_connection_string = "${azurerm_storage_account.test.primary_connection_string}"
}
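As a footnote, the kind/reserved pair only exists on the legacy azurerm_app_service_plan resource. In azurerm v3 and later its replacement, azurerm_service_plan, states the OS directly. A sketch per the newer provider docs; names are carried over from the example above:

```hcl
# Sketch, azurerm v3+: os_type replaces the kind/reserved pair.
resource "azurerm_service_plan" "test" {
  name                = "azure-functions-test-service-plan"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
  os_type             = "Linux"
  sku_name            = "Y1" # Consumption plan
}
```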
