I have an Azure Function that I need to run in an Elastic Premium plan. After deploying it, I see the following error:
Azure Functions runtime is unreachable
I've tried to solve it by following the Microsoft documentation, with no luck.
Here are some notes on what I've already tried:
We checked that the storage account is created.
The Function's subnet already has the service endpoint for the storage account
VNet integration is already enabled on the Function, and its subnet is already added to the storage account firewall.
We added the required properties in the Function settings:
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = dynamically created (connection string to the storage account)
WEBSITE_CONTENTOVERVNET = 1
WEBSITE_CONTENTSHARE = dynamically created
WEBSITE_VNET_ROUTE_ALL = 1
Here is the documentation link.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-recover-storage-account
Everything was working fine when I was using the Premium plan (P1v2); the error started when I moved to Elastic Premium (EP1).
I am deploying it using Terraform.
Here is an example of the Terraform code we are using to deploy it:
locals {
app_settings = {
FUNCTIONS_WORKER_RUNTIME = "python"
FUNCTION_APP_EDIT_MODE = "readonly"
WEBSITE_VNET_ROUTE_ALL = "1"
WEBSITE_CONTENTOVERVNET = "1"
}
}
module "az_service_plan_sample" {
source = "source module"
serviceplan_name = "planname"
resource_group_name = "RG Name"
region = "East US 2"
tier = "ElasticPremium"
size = "EP1"
kind = "elastic"
capacity = 40
per_site_scaling = false
depends_on = [
module.storage_account_sample
]
}
module "storage_account_sample" {
source = "source module"
resource_group_name = "RG Name"
location = "East US 2"
name = "saname"
storage_account_replication_type = "GRS"
subnet_ids = [subnet_ids]
}
module "sample" {
source = "source module"
azure_function_name = "functionname"
resource_group_name = "RG Name"
storage_account_name = module.storage_account_sample.storage-account-name
storage_account_access_key = module.storage_account_sample.storage-account-primary-key
region = "East US 2"
subnet_id = subnet_ids
app_service_id = module.az_service_plan_sample.service_plan_id
scope_role_storage_account = module.storage_account_sample.storage-account-id
azure_function_version = "~4"
app_settings = local.app_settings
key_vault_reference_identity_id = azurerm_user_assigned_identity.az_func.id
pre_warmed_instance_count = 2
identity_type = "UserAssigned"
user_assigned_identityies = [{
id = azurerm_user_assigned_identity.az_func.id
principal_id = azurerm_user_assigned_identity.az_func.principal_id
}]
depends_on = [
module.az_service_plan_sample,
module.storage_account_sample,
azurerm_user_assigned_identity.az_func,
]
}
AFAIK, there is no single specific reason for the "Azure Functions runtime is unreachable" error. Please check the workarounds below.
We tried creating a Function App on an Elastic Premium plan and it works fine at our end.
Please make sure that you have configured WEBSITE_CONTENTAZUREFILECONNECTIONSTRING with the same value as AzureWebJobsStorage, then try to stop/start the function app.
Also try setting pre_warmed_instance_count = 1 instead of 2, as mentioned in the Microsoft documentation:
"The default pre-warmed instance count is 1, and for most scenarios this value should remain as 1."
For more information, please refer to this article: Azure Lessons - Azure Function Runtime Unreachable.
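A rough Terraform sketch of those two suggestions against the module interface shown in the question might look like this (the primary connection string output name is an assumption; adjust it to whatever your storage module actually exposes):

locals {
  app_settings = {
    FUNCTIONS_WORKER_RUNTIME = "python"
    WEBSITE_VNET_ROUTE_ALL   = "1"
    WEBSITE_CONTENTOVERVNET  = "1"
    # Point the content file share at the same storage account used for
    # AzureWebJobsStorage (the output name below is assumed, not from the question).
    WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = module.storage_account_sample.storage-account-primary-connection-string
  }
}

module "sample" {
  source = "source module" # same placeholder module source as in the question
  # ...all other arguments exactly as in the question...
  app_settings              = local.app_settings
  pre_warmed_instance_count = 1 # documentation default; most scenarios should keep this at 1
}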
When you use a Function on an Elastic Premium plan with VNet integration, you need to add one more property, vnet_route_all_enabled, to route outbound traffic from your Azure Function through the VNet. You also need to first create a file share in your storage account; the name of that share is the value of the WEBSITE_CONTENTSHARE variable in your application settings.
You can check this doc to be sure: https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-vnet
Below is my suggested code:
locals {
app_settings = {
FUNCTIONS_WORKER_RUNTIME = "python"
FUNCTION_APP_EDIT_MODE = "readonly"
WEBSITE_VNET_ROUTE_ALL = "1"
WEBSITE_CONTENTOVERVNET = "1"
WEBSITE_CONTENTSHARE = "file-function"
}
}
module "az_service_plan_sample" {
source = "source module"
serviceplan_name = "planname"
resource_group_name = "RG Name"
region = "East US 2"
tier = "ElasticPremium"
size = "EP1"
kind = "elastic"
capacity = 40
per_site_scaling = false
depends_on = [
module.storage_account_sample
]
}
module "storage_account_sample" {
source = "source module"
resource_group_name = "RG Name"
location = "East US 2"
name = "saname"
storage_account_replication_type = "GRS"
subnet_ids = [subnet_ids]
}
resource "azurerm_storage_share" "share_file_ingest_function" {
name = "file-function"
storage_account_name = module.storage_account_sample.storage-account-name
depends_on = [
module.storage_account_sample
]
}
module "sample" {
source = "source module"
azure_function_name = "functionname"
resource_group_name = "RG Name"
storage_account_name = module.storage_account_sample.storage-account-name
storage_account_access_key = module.storage_account_sample.storage-account-primary-key
region = "East US 2"
subnet_id = subnet_ids
app_service_id = module.az_service_plan_sample.service_plan_id
scope_role_storage_account = module.storage_account_sample.storage-account-id
azure_function_version = "~4"
app_settings = local.app_settings
key_vault_reference_identity_id = azurerm_user_assigned_identity.az_func.id
pre_warmed_instance_count = 2
vnet_route_all_enabled = true
identity_type = "UserAssigned"
user_assigned_identityies = [{
id = azurerm_user_assigned_identity.az_func.id
principal_id = azurerm_user_assigned_identity.az_func.principal_id
}]
depends_on = [
module.az_service_plan_sample,
module.storage_account_sample,
azurerm_user_assigned_identity.az_func,
]
}
I'm trying to write Terraform code to configure a Google Cloud SQL instance to use subnet "db-subnet" of a VPC (see below).
module "vpc" {
source = "terraform-google-modules/network/google"
version = "~> 6.0"
project_id = module.project_factory.project_id
network_name = "staging-vpc"
routing_mode = "GLOBAL"
subnets = [
{
subnet_name = "db-subnet"
subnet_ip = "10.10.20.0/24"
subnet_region = var.region
subnet_private_access = "true"
subnet_flow_logs = "true"
description = "This subnet is for cloudsql DBs"
},
]
}
Next, I use 'ip_configuration:private_network' to refer to the subnet self link.
module "cloudsql_postgresql" {
source = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
version = "14.0.0"
project_id = module.project_factory.project_id
name = "fhirserver-postgres"
database_version = "POSTGRES_14"
zone = "us-central1-c"
user_name = "postgres"
ip_configuration = {
ipv4_enabled = false
private_network = module.vpc.subnets["${var.region}/db-subnet"].self_link
}
}
However, it returns an error: "settings.0.ip_configuration.0.private_network" ("https://www.googleapis.com/compute/v1/projects/endue-staging-263b/regions/us-central1/subnetworks/db-subnet") doesn't match regexp "^(?:http(?:s)?://.+/)?projects/((?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?)))/global/networks/((?:[a-z](?:[-a-z0-9]*[a-z0-9])?))$"
Can anyone help me with this error? Thank you very much!
I tried private_network = module.vpc.network_self_link and it works, but it's not what I'm looking for.
I would like to update my existing Azure App Service in Terraform by adding a backup to this App Service.
For now it looks like this:
data "azurerm_app_service_plan" "example" {
name = "MyUniqueServicePlan"
resource_group_name = "example-resources"
}
resource "azurerm_app_service" "example" {
name = "MyUniqueWebAppName"
location = "West Europe"
resource_group_name = "example-resources"
app_service_plan_id = data.azurerm_app_service_plan.example.id
connection_string {
name = "myConectionString"
type = "SQLServer"
value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
}
backup {
name = "MyBackupName"
enabled = true
storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
schedule {
frequency_interval = 30
frequency_unit = "Day"
keep_at_least_one_backup = true
retention_period_in_days = 10
start_time = "2022-08-31T07:11:56.52Z"
}
}
}
But when I run it, I get an error: A resource with the ID ........ /MyUniqueWebAppName" already exists - to be managed via Terraform this resource needs to be imported into the State.
How in Terraform can I point to an existing Azure App Service and add a backup with the same schedule as in my template?
Before you can modify your existing resources with Terraform, you must import them into the Terraform state. For this, you use the terraform import command.
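For reference, the import command takes the Terraform resource address and the Azure resource ID; for the app service in the question it would look roughly like this (the subscription ID is a placeholder):

terraform import azurerm_app_service.example /subscriptions/<subscription-id>/resourceGroups/example-resources/providers/Microsoft.Web/sites/MyUniqueWebAppName

That said, with the data-source approach below you don't need to import anything: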
data "azurerm_resource_group" "example" {
name = "<give rg name existing one>"
}
data "azurerm_app_service_plan" "example" {
name = "MyUniqueServicePlan"
resource_group_name = data.azurerm_resource_group.example.name
}
data "azurerm_app_service" "example" {
name = "MyUniqueWebAppName"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
app_service_plan_id = data.azurerm_app_service_plan.example.id
connection_string {
name = "myConectionString"
type = "SQLServer"
value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
}
backup {
name = "MyBackupName"
enabled = true
storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
schedule {
frequency_interval = 30
frequency_unit = "Day"
keep_at_least_one_backup = true
retention_period_in_days = 10
start_time = "2022-08-31T07:11:56.52Z"
}
}
}
With the data sources above there is no need to use the import command; use this code for your reference.
Just supply the name of your existing resource group in the resource group data block.
We are using Terraform version 0.12.19 and azurerm provider version 2.10.0 to deploy a Service Bus namespace, its queues, and authorization rules. When we ran terraform apply, it created the Service Bus namespace and the queue, but it threw the error below while creating the authorization rules.
When we checked the Azure portal, the authorization rules were present, and we could also find entries for both resources in the tf state file, where they had a Status parameter of "Tainted". We then ran apply again to see if it would recreate/replace the existing resources, but it failed with the same error. We are now unable to proceed, because even running a plan for new resources fails at this point.
We even tried to untaint the resources and run apply again, but we still hit the same issue, even though the resources no longer show the tainted status in the tf state. Can you please help us with a solution? (We can't move to a newer version of the Terraform CLI, as many modules depend on it and it would impact our production deployments as well.)
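For reference, the untainting was done per resource address from the configuration below, along these lines:

terraform untaint azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr
terraform untaint azurerm_servicebus_queue_authorization_rule.que-sample-check-AsyncReportBG-AsncRprt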
Error: Error making Read request on Azure ServiceBus Queue Authorization Rule "" (Queue "sample-check-queue" / Namespace "sample-check-bus" / Resource Group "My-RG"): servicebus.QueuesClient#GetAuthorizationRule: Invalid input: autorest/validation: validation failed: parameter=authorizationRuleName constraint=MinLength value="" details: value length must be greater than or equal to 1
azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr: Refreshing state... [id=/subscriptions//resourcegroups/My-RG/providers/Microsoft.ServiceBus/namespaces/sample-check-bus/queues/sample-check-queue/authorizationrules/lsr]
Below is the service_bus.tf file code:
provider "azurerm" {
version = "=2.10.0"
features {}
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = "My-RG"
location = "West Europe"
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 05
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
name = "lsr"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
I have tried the Terraform code below to create the authorization rules and could create them successfully.
I followed the azurerm_servicebus_queue_authorization_rule page (Resources | hashicorp/azurerm | Terraform Registry), using the latest version of the hashicorp/azurerm provider.
This may even be related to the queue_name argument: in the 3.x.x versions of the provider, that argument changed to queue_id.
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "example" {
name = "xxxx"
location = "xx"
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
#resource_group_name = "My-RG"
namespace_id = azurerm_servicebus_namespace.service_bus.id
#namespace_name = azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 05
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr"
{
name = "lsr"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check- AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
The authorization rules were created without error.
Please also try giving the authorization rule named "lsr" a longer name, and try creating one rule at a time in your case, for example:
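A rule with a longer name might look like this with the 3.x queue_id argument (the name here is purely illustrative):

resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-listen-send" {
  # Longer, more descriptive name than "lsr"; the value is just an example.
  name     = "listen-send-rule"
  queue_id = azurerm_servicebus_queue.que-sample-check.id
  listen   = true
  send     = true
  manage   = false
}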
Thanks, all, for your inputs and suggestions.
The code is working fine now with terraform provider version 2.56.0 and Terraform CLI version 0.12.19. Please let me know if there are any concerns.
I am creating infrastructure with Terraform modules. Some of the common and repetitive infra is created using a module,
and other resources are created independently, outside of the module. The structure of my code is described below.
-terraform\module\storage.tf
-terraform\main.tf
-terraform\mlws.tf
This is my code for /module/storage.tf, where I am creating a storage account like this:
resource "azurerm_storage_account" "storage" {
name = var.storage_account_name
resource_group_name = var.rg_name
location = var.location
account_tier = "Standard"
account_replication_type = "GRS"
min_tls_version = "TLS1_2"
}
module "m1" {
source = "./modules"
storage_account_name = "m1storage"
rg_name = "rg1"
location = "USCentral"
}
module "m2" {
source = "./modules"
storage_account_name = "m2storage"
rg_name = "rg2"
location = "USCentral"
}
module "m3" {
source = "./modules"
storage_account_name = "m3storage"
rg_name = "rg3"
location = "USCentral"
}
resource "azurerm_machine_learning_workspace" "mlws" {
name = "mlws"
location = "USCentral"
resource_group_name = "mlws-rg1"
application_insights_id = azurerm_application_insights.mlops_appins.id
key_vault_id = data.azurerm_key_vault.kv.id
storage_account_id = **<Mandatory to be filled>**
container_registry_id = azurerm_container_registry.acr.id
identity {
type = "SystemAssigned"
}
depends_on = [
module.m2
]
}
The code for the storage account is under \terraform\module\storage.tf, the code for calling the module is under \terraform\main.tf, and the code for the machine learning workspace is under \terraform\mlws.tf.
My mlws.tf code is outside the module, but it needs to be associated with the ID of the storage account created under module m2 in the code above.
I am struggling to fetch the ID of the "m2storage" storage account. Can you please advise how I can access the ID of a specific storage account created through a module and use it in my code outside the module?
This is how it normally works: module m2 should expose the storage account ID as an output, defined inside the module (for example in modules/outputs.tf), something like this:
output "storage_account_id" {
description = "M2 storage account id."
value = azurerm_storage_account.storage.id # reference the resource defined in modules/storage.tf
}
Now that you have the output, you can refer to it from outside the module like this:
resource "azurerm_machine_learning_workspace" "mlws" {
name = "mlws"
location = "USCentral"
resource_group_name = "mlws-rg1"
application_insights_id = azurerm_application_insights.mlops_appins.id
key_vault_id = data.azurerm_key_vault.kv.id
storage_account_id = module.m2.storage_account_id
container_registry_id = azurerm_container_registry.acr.id
identity {
type = "SystemAssigned"
}
depends_on = [
module.m2
]
}
Let me know if you need more help.
I am getting a 403 Forbidden when creating a function app that connects to its storage account via a private endpoint inside a VNet. The storage account has a firewall default action of 'Deny', and of course if I set it to 'Allow' it works; I want to keep it as 'Deny', however. According to this Microsoft link, if the function app and storage account are created in the same region with the VNet, subnets, and private endpoints, it's supposed to work, so I must be doing something wrong. I also tried changing the region of the storage account and it still resulted in a 403.
Error:
Error: web.AppsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="There was a conflict. The remote server returned an error: (403) Forbidden." Details=[{"Message":"There was a conflict. The remote server returned an error: (403) Forbidden."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","ExtendedCode":"01020","Message":"There was a conflict. The remote server returned an error: (403) Forbidden.","MessageTemplate":"There was a conflict. {0}","Parameters":["The remote server returned an error: (403) Forbidden."]}}]
Here is my Terraform code:
resource "azurerm_function_app" "func" {
name = "${var.func_basics.name}-func"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
app_service_plan_id = azurerm_app_service_plan.svc_plan.id
storage_account_name = azurerm_storage_account.func_sa.name
storage_account_access_key = azurerm_storage_account.func_sa.primary_access_key
version = var.runtime_version
https_only = true
depends_on = [
azurerm_storage_account.func_sa,
azurerm_app_service_plan.svc_plan,
azurerm_application_insights.func_ai,
azurerm_virtual_network.func_vnet
]
app_settings = merge(var.app_settings, local.additional_app_settings)
}
resource "azurerm_app_service_plan" "svc_plan" {
name = "${var.func_basics.name}-func-plan"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
kind = "elastic"
sku {
tier = "ElasticPremium"
size = "EP1"
}
}
resource "azurerm_application_insights" "func_ai" {
name = "${var.func_basics.name}-func-appi"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
application_type = var.ai_app_type
}
resource "azurerm_storage_account" "func_sa" {
name = "st${lower(replace(var.func_basics.name, "/[-_]*/", ""))}"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
account_tier = var.sa_settings.tier
account_replication_type = var.sa_settings.replication_type
account_kind = "StorageV2"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
depends_on = [
azurerm_virtual_network.func_vnet
]
network_rules {
default_action = "Deny"
virtual_network_subnet_ids = [azurerm_subnet.func_endpoint_subnet.id]
bypass = [
"Metrics",
"Logging",
"AzureServices"
]
}
}
resource "azurerm_virtual_network" "func_vnet" {
name = "${var.func_basics.name}-func-vnet"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "func_service_subnet" {
name = "${var.func_basics.name}-func-svc-snet"
resource_group_name = var.func_basics.resource_group_name
virtual_network_name = azurerm_virtual_network.func_vnet.name
address_prefixes = ["10.0.1.0/24"]
enforce_private_link_service_network_policies = true
service_endpoints = ["Microsoft.Storage"]
delegation {
name = "${var.func_basics.name}-func-del"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
resource "azurerm_subnet" "func_endpoint_subnet" {
name = "${var.func_basics.name}-func-end-snet"
resource_group_name = var.func_basics.resource_group_name
virtual_network_name = azurerm_virtual_network.func_vnet.name
address_prefixes = ["10.0.2.0/24"]
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_private_endpoint" "func_req_sa_blob_endpoint" {
name = "${var.func_basics.name}-func-req-sa-blob-end"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
subnet_id = azurerm_subnet.func_endpoint_subnet.id
private_service_connection {
name = "${var.func_basics.name}-func-req-sa-blob-pscon"
private_connection_resource_id = azurerm_storage_account.func_sa.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_endpoint" "func_req_sa_file_endpoint" {
name = "${var.func_basics.name}-func-req-sa-file-end"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
subnet_id = azurerm_subnet.func_endpoint_subnet.id
private_service_connection {
name = "${var.func_basics.name}-func-req-sa-file-pscon"
private_connection_resource_id = azurerm_storage_account.func_sa.id
is_manual_connection = false
subresource_names = ["file"]
}
}
resource "azurerm_app_service_virtual_network_swift_connection" "func_vnet_swift" {
app_service_id = azurerm_function_app.func.id
subnet_id = azurerm_subnet.func_service_subnet.id
}
locals {
additional_app_settings = {
"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.func_ai.instrumentation_key
"WEBSITE_CONTENTAZUREFILECONNECTIONSTRING" = azurerm_storage_account.func_sa.primary_connection_string
"AzureWebJobsStorage" = azurerm_storage_account.func_sa.primary_connection_string
"WEBSITE_VNET_ROUTE_ALL" = "1"
"WEBSITE_CONTENTOVERVNET" = "1"
"WEBSITE_DNS_SERVER" = "168.63.129.16"
}
}
It seems this is a common error message when you create an Azure Function whose storage account is restricted to the virtual network; read here for more details.
To resolve it, you can use the local-exec provisioner to invoke an az CLI command that denies the traffic after all of the other resources have been provisioned; a sketch follows the command.
az storage account update --name <storage_account_name> --resource-group <resource_group_name> --default-action Deny --bypass AzureServices Logging Metrics
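A sketch of that provisioner approach (the null_resource name is arbitrary and this assumes the null provider is available; it simply shells out to the CLI once the function app exists):

resource "null_resource" "deny_storage_traffic" {
  # Runs after the function app is created and re-applies the Deny default action.
  provisioner "local-exec" {
    command = "az storage account update --name ${azurerm_storage_account.func_sa.name} --resource-group ${var.func_basics.resource_group_name} --default-action Deny --bypass AzureServices Logging Metrics"
  }

  depends_on = [
    azurerm_function_app.func
  ]
}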
Alternatively, you can separately configure the storage account network rules. You may need to allow your client's IP to access the storage account.
resource "azurerm_storage_account_network_rules" "test" {
resource_group_name = var.resourceGroupName
storage_account_name = azurerm_storage_account.func_sa.name
default_action = "Deny"
bypass = [
"Metrics",
"Logging",
"AzureServices"
]
ip_rules = ["x.x.x.x"]
depends_on = [
azurerm_storage_account.func_sa,
azurerm_app_service_plan.svc_plan,
azurerm_application_insights.func_ai,
azurerm_virtual_network.func_vnet,
azurerm_function_app.func
]
}
In addition, there is a possible solution for this similar case on Github.
I've had this issue in the past and found that it can be resolved as follows. I've tested this on v3.3.0 of the provider using the azurerm_windows_function_app resource. I think this is currently an Azure problem: if you don't supply a share, the platform tries to create one and is denied. You'd expect this to work if "Allow Azure services on the trusted services list to access this storage account" is enabled, but web apps aren't trusted.
Create your storage account with IP rules and a default action of Deny.
Create a file share within it for your function app content.
Within the function app, set the following configuration settings:
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = <storage_account.primary_connection_string>
WEBSITE_CONTENTSHARE = <your share>
WEBSITE_CONTENTOVERVNET = 1
In the function's site configuration, set the attribute vnet_route_all_enabled = true. A Terraform sketch of these steps follows.
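A minimal sketch of those steps, reusing the names from the question where possible; the share name, the azurerm_service_plan reference, and the use of azurerm_windows_function_app reflect my own v3.3.0 setup rather than the question's exact code, so adapt them to yours:

resource "azurerm_storage_share" "func_content" {
  name                 = "func-content" # illustrative share name; goes into WEBSITE_CONTENTSHARE
  storage_account_name = azurerm_storage_account.func_sa.name
  quota                = 50
}

resource "azurerm_windows_function_app" "func" {
  name                       = "${var.func_basics.name}-func"
  location                   = var.func_basics.location
  resource_group_name        = var.func_basics.resource_group_name
  service_plan_id            = azurerm_service_plan.svc_plan.id # 3.x-style plan resource (assumed name)
  storage_account_name       = azurerm_storage_account.func_sa.name
  storage_account_access_key = azurerm_storage_account.func_sa.primary_access_key

  app_settings = {
    # Point the content settings at the pre-created share on the locked-down account.
    WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = azurerm_storage_account.func_sa.primary_connection_string
    WEBSITE_CONTENTSHARE                     = azurerm_storage_share.func_content.name
    WEBSITE_CONTENTOVERVNET                  = "1"
  }

  site_config {
    vnet_route_all_enabled = true
  }
}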