I am trying to change my Azure Function's Platform setting to 64-bit, to be compatible with a new DLL my project needs. I just have not been able to find the corresponding Terraform key that sets this Azure Function field.
The problem is that Terraform currently defaults to 32-bit, so the field reverts whenever I deploy.
Any help would be appreciated. Thank you!
I've tried poking around with some app_settings keys from the Microsoft documentation, but none of them seem obviously connected to the platform setting. I have also looked through the keys in the Terraform documentation, and none of those jump out at me either.
Here is my Terraform (app_settings values not shown):
resource "azurerm_app_service_plan" "plan" {
  count               = length(var.resource_groups)
  name                = "${var.name}-asp${count.index + 1}"
  location            = var.resource_groups[count.index].location
  resource_group_name = var.resource_groups[count.index].name
  kind                = "FunctionApp"

  sku {
    tier = var.app_service_plan_tier
    size = var.app_service_plan_size
  }

  tags = var.tags
}
resource "azurerm_function_app" "function" {
  count                      = length(azurerm_app_service_plan.plan.*)
  name                       = "${var.name}${count.index + 1}"
  location                   = azurerm_app_service_plan.plan[count.index].location
  resource_group_name        = azurerm_app_service_plan.plan[count.index].resource_group_name
  app_service_plan_id        = azurerm_app_service_plan.plan[count.index].id
  storage_account_name       = var.storage_account_name
  storage_account_access_key = var.storage_account_access_key
  app_settings               = local.app_settings
  version                    = "~2"
  https_only                 = true
  tags                       = var.tags
}
The worker configuration is managed by the azurerm_function_app resource itself.
Setting the attribute use_32_bit_worker_process to true runs the application on a 32-bit platform, and that is the default value.
Explicitly set use_32_bit_worker_process to false, and be sure to use a tier other than Free or Shared, as stated in the docs:
when using an App Service Plan in the Free or Shared tiers, use_32_bit_worker_process must be set to true.
javierlga is correct:
set use_32_bit_worker_process to false inside the site_config block.
site_config {
  use_32_bit_worker_process = false
}
More info:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app#use_32_bit_worker_process
You are able to set the bitness of your worker in the azurerm_windows_function_app resource.
Set use_32_bit_worker to false.
Note: azurerm_function_app is deprecated and has been superseded by azurerm_windows_function_app and azurerm_linux_function_app.
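A minimal sketch of that newer resource on a 3.x provider; the "example" names and references below are placeholders, not anything from the original question:

```hcl
resource "azurerm_windows_function_app" "example" {
  name                       = "example-function-app" # placeholder name
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  service_plan_id            = azurerm_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  site_config {
    use_32_bit_worker = false # run the worker process as 64-bit
  }
}
```

As with the older resource, the plan must be on a tier other than Free or Shared for the 64-bit worker to be accepted.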
I have created a storage account using Terraform. I would like to disable the option found under the storage account's Settings > Configuration in the Azure portal called Allow blob public access, but under the azurerm_storage_account resource I cannot seem to find the argument required to achieve this.
Below is my code so far to create the storage account, which works, but if anyone could point me in the right direction that would be great, thank you.
Storage Account
resource "azurerm_storage_account" "st" {
  name                          = var.st.name
  resource_group_name           = var.rg_shared_name
  location                      = var.rg_shared_location
  account_tier                  = var.st.tier
  account_replication_type      = var.st.replication
  public_network_access_enabled = false
}
As soon as I posted this question, I found the answer, so I apologise for wasting your time.
The argument to use is allow_nested_items_to_be_public; if you set it to false, it disables the feature found under Storage Account > Settings > Configuration > Allow blob public access.
Source
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account#allow_nested_items_to_be_public
Updated Code
resource "azurerm_storage_account" "st" {
  name                            = var.st.name
  resource_group_name             = var.rg_shared_name
  location                        = var.rg_shared_location
  account_tier                    = var.st.tier
  account_replication_type       = var.st.replication
  public_network_access_enabled   = false
  allow_nested_items_to_be_public = false
}
With the release of version 3.0 of the azurerm provider, the argument allow_blob_public_access changed to allow_nested_items_to_be_public. This can cause confusion if you read old documentation or examples. Furthermore, there are several ways in which you can disable public network access for a storage account.
You can set public_network_access_enabled to false.
You can use the network_rules block and set default_action to deny.
You can use the azurerm_storage_account_network_rules resource and set the default_action to deny.
Explicitly stating that nobody should be able to reach the storage account publicly is the cleanest/safest option. However, if you sometimes want to open a storage account to a specific set of IP addresses and block all others, the other options are useful.
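As an illustration of the network_rules approach, a sketch reusing the variable names from the question above (the CIDR range is an example placeholder):

```hcl
resource "azurerm_storage_account" "st" {
  name                     = var.st.name
  resource_group_name      = var.rg_shared_name
  location                 = var.rg_shared_location
  account_tier             = var.st.tier
  account_replication_type = var.st.replication

  network_rules {
    default_action = "Deny"             # block everything by default
    ip_rules       = ["203.0.113.0/24"] # example range; replace with your own
    bypass         = ["AzureServices"]  # optionally let trusted Azure services through
  }
}
```

The separate azurerm_storage_account_network_rules resource takes the same arguments and is useful when the rules are managed in a different module than the account itself.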
If you disable public network access then you should make use of private endpoints or service endpoints to be able to connect to your storage account from a private network. Example based on this repository:
resource "azurerm_storage_account" "storage_account" {
  name                            = var.name
  resource_group_name             = var.resource_group_name
  location                        = var.location
  account_kind                    = var.kind
  account_tier                    = var.tier
  account_replication_type        = var.replication_type
  is_hns_enabled                  = true
  enable_https_traffic_only       = true
  public_network_access_enabled   = false
  allow_nested_items_to_be_public = false
  min_tls_version                 = var.min_tls_version
}
resource "azurerm_private_endpoint" "private_endpoint_blob" {
  name                = "pe-blob-${var.name}"
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = var.subnet_id

  private_service_connection {
    name                           = "psc-blob-${var.name}"
    is_manual_connection           = false
    private_connection_resource_id = azurerm_storage_account.storage_account.id
    subresource_names              = ["blob"]
  }

  # Should be deployed by Azure policy
  lifecycle {
    ignore_changes = [private_dns_zone_group]
  }
}
Here is what I found in the official Microsoft documentation:
it seems the line allow_blob_public_access = false works (note that in provider version 3.0+ this argument is named allow_nested_items_to_be_public).
https://learn.microsoft.com/en-us/azure/developer/terraform/store-state-in-azure-storage?tabs=terraform
In our Application Insights logs for Azure Functions there are a lot of warnings with the message:
The Dashboard setting is no longer supported. See https://aka.ms/functions-dashboard for details.
We build our Azure resources using Terraform, and since our Function Apps target the "~4" runtime version we don't add the AzureWebJobsDashboard setting to our Function's Application settings. (According to the docs: The AzureWebJobsDashboard setting is only valid for apps that target version 1.x of the Azure Functions runtime.)
I was therefore surprised to find the AzureWebJobsDashboard setting with a value in the Azure portal. Any idea how it got there?
I deleted the setting manually in the portal for four of the apps we have running, and the logged warnings went away - however, the setting reappeared in one of them after a little while 🤯 Is there any way to make sure the deletion is permanent?
Edit: I tried deleting the setting manually for four new apps - making sure to save the changes, and the setting reappeared in two of them after some hours.
Edit2: After 1-2 days the setting is back in all eight apps.
There's a special setting, builtin_logging_enabled, in the Terraform resource for Azure Functions:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app#enable_builtin_logging
Setting it to false should disable AzureWebJobsDashboard.
Just add it in your azurerm_windows_function_app resource like this:
resource "azurerm_windows_function_app" "func" {
  name                    = "sample-function-app"
  builtin_logging_enabled = false
  ...
}
We tried the same in our environment, to check whether AzureWebJobsDashboard is present when deploying an Azure Function with Terraform.
Yes, it was there, and the document you followed is correct. To resolve the issue manually, follow the steps below.
Make sure to apply the APPINSIGHTS_INSTRUMENTATIONKEY setting after deleting AzureWebJobsDashboard,
and enable Application Insights for the Function App as shown below; the value is stored automatically once it is enabled.
In your case the setting reappeared automatically after some time or days, but with the above enabled it seems to work; we checked several times and it has not reappeared.
NOTE: we used Python 3.9 with Functions runtime v4 in a Linux environment.
Below is the Terraform code that we used for the reproduction:
main.tf
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "ajayXXXX"
  location = "West Europe"
}

resource "azurerm_storage_account" "example" {
  name                     = "exatst"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_service_plan" "example" {
  name                = "example-service-plan1"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  os_type             = "Linux"
  sku_name            = "S1"
}

resource "azurerm_linux_function_app" "example" {
  name                       = "funterraform"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  service_plan_id            = azurerm_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  site_config {
    application_stack {
      python_version = "3.9"
    }
  }
}
resource "azurerm_function_app_function" "example" {
  name            = "example-function-app-function"
  function_app_id = azurerm_linux_function_app.example.id
  language        = "Python"

  test_data = jsonencode({
    "name" = "Azure"
  })

  config_json = jsonencode({
    "bindings" = [
      {
        "authLevel" = "function"
        "direction" = "in"
        "methods" = [
          "get",
          "post",
        ]
        "name" = "req"
        "type" = "httpTrigger"
      },
      {
        "direction" = "out"
        "name"      = "$return"
        "type"      = "http"
      },
    ]
  })
}
Source code taken from: HashiCorp Terraform Registry | azurerm_function_app_function
For more information, please refer to the links below:
GitHub issue | Remove support for AzureWebJobsDashboard
Microsoft documentation | App settings reference for Azure Functions
Trying to set up my first web app on Azure using Terraform and their free tier.
The resource group and App Service plan were created successfully, but creating the app gives an error that says:
creating Linux Web App: (Site Name "testazurermjay" / Resource Group "test-resources"): web.AppsClient#C. Status=<nil> <nil>
Here is the terraform main.tf file:
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "test" {
  name     = "test-resources"
  location = "Switzerland North"
}

resource "azurerm_service_plan" "test" {
  name                = "test"
  resource_group_name = azurerm_resource_group.test.name
  location            = "UK South" # azurerm_resource_group.test.location
  os_type             = "Linux"
  sku_name            = "F1"
}

resource "azurerm_linux_web_app" "test" {
  name                = "testazurermjay"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_service_plan.test.location
  service_plan_id     = azurerm_service_plan.test.id

  site_config {}
}
At first I thought the name was the issue for the azurerm_linux_web_app, so I changed it from test to testazurermjay, but that did not work either.
I was able to get it to work, BUT I had to use a deprecated resource called azurerm_app_service instead of azurerm_linux_web_app. I ALSO had to make sure that my resource group and App Service plan were in the same location. When I originally tried to set both the resource group and the plan to Switzerland North, creating the App Service plan failed (that is why you see me change the plan to UK South in the original question). HOWEVER, after I set BOTH the resource group and the App Service plan to UK South, they could be created in the same location. Then I used azurerm_app_service to create a free-tier service, with use_32_bit_worker_process = true in the site_config block.
Here is the full terraform file now:
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "test" {
  name     = "test-resources"
  location = "UK South"
}

resource "azurerm_service_plan" "test" {
  name                = "test"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  os_type             = "Linux"
  sku_name            = "F1"
}

resource "azurerm_app_service" "test" {
  name                = "sofcvlepsaipd"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
  app_service_plan_id = azurerm_service_plan.test.id

  site_config {
    use_32_bit_worker_process = true
  }
}
I MUST STRESS THAT THIS ISN'T BEST PRACTICE, AS azurerm_app_service IS GOING TO BE REMOVED IN THE NEXT VERSION. THIS SEEMS TO INDICATE THAT TERRAFORM WON'T BE ABLE TO CREATE FREE-TIER APP SERVICES IN THE NEXT UPDATE.
If someone knows how to do this with azurerm_linux_web_app or knows a better approach to this let me know.
I just encountered a similar issue: the always_on setting defaults to true, but that is not supported in the free tier. As stated in the documentation, you must explicitly set it to false when using the free tier:
resource "azurerm_linux_web_app" "test" {
  name                = "testazurermjay"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_service_plan.test.location
  service_plan_id     = azurerm_service_plan.test.id

  site_config {
    always_on = false
  }
}
I have several Azure resources that are created using the for_each property and then those resources have an Application Insights resource created using for_each as well.
Here is the code that creates the azurerm_application_insights:
resource "azurerm_application_insights" "applicationInsights" {
  for_each            = toset(keys(merge(local.appServices, local.functionApps)))
  name                = lower(join("-", ["wb", var.deploymentEnvironment, var.location, each.key, "ai"]))
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
  application_type    = "web"

  lifecycle {
    ignore_changes = [tags]
  }
}
I've noticed that every time we run a terraform plan against some environments, we are always seeing Terraform report a "change" to the APPINSIGHTS_INSTRUMENTATIONKEY value. When I compare this value in the app settings key/value list to the actual AI instrumentation key that was created for it, it does match.
Terraform will perform the following actions:

  # module.primaryRegion.module.functionapp["fnapp1"].azurerm_function_app.fnapp will be updated in-place
  ~ resource "azurerm_function_app" "fnapp" {
      ~ app_settings = {
          # Warning: this attribute value will be marked as sensitive and will
          # not display in UI output after applying this change
          ~ "APPINSIGHTS_INSTRUMENTATIONKEY" = (sensitive)
            # (1 unchanged element hidden)
Is this a common issue with other people? I would think that the instrumentation key would never change especially since Terraform is what created all of these Application Insights resources and assigns it to each application.
This is how I associate each Application Insights resource to their appropriate application with a for_each property
module "webapp" {
  for_each              = local.appServices
  source                = "../webapp"
  name                  = lower(join("-", ["wb", var.deploymentEnvironment, var.location, each.key, "app"]))
  location              = var.location
  resource_group_name   = azurerm_resource_group.rg.name
  app_service_plan_id   = each.value.app_service_plan_id
  app_settings          = merge({ "APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.applicationInsights[each.key].instrumentation_key }, each.value.app_settings)
  allowed_origins       = each.value.allowed_origins
  deploymentEnvironment = var.deploymentEnvironment
}
I'm wondering if the merge is just reordering the list of key/values in the app_settings for the app, and Terraform detects that as some kind of change and the value itself isn't changing. This is the only way I know how to assign a bunch of Application Insights resources to many web apps using for_each to reduce configuration code.
Use the site_config block to solve the issue.
Example:
resource "azurerm_windows_function_app" "function2" {
  provider                   = azurerm.private
  name                       = local.private.functionapps.function2.name
  resource_group_name        = local.private.rg.app.name
  location                   = local.private.location
  storage_account_name       = local.private.functionapps.storageaccount.name
  storage_account_access_key = azurerm_storage_account.function_apps_storage.primary_access_key
  service_plan_id            = azurerm_service_plan.app_service_plan.id
  virtual_network_subnet_id  = lookup(azurerm_subnet.subnets, "appservice").id
  https_only                 = true

  site_config {
    application_insights_key = azurerm_application_insights.appinisghts.instrumentation_key
  }
}
I'm getting the following error when trying to do a plan or an apply on a terraform script.
Error: Invalid count argument

  on main.tf line 157, in resource "azurerm_sql_firewall_rule" "sqldatabase_onetimeaccess_firewall_rule":
 157: count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
I understand this is falling over because it doesn't know the count for the number of firewall rules to create until the app_service is created. I can just run the apply with an argument of -target=azurerm_app_service.app_service then run another apply after the app_service is created.
However, this isn't great for our CI process, if we want to create a whole new environment from our terraform scripts we'd like to just tell terraform to just go build it without having to tell it each target to build in order.
Is there a way in terraform to just say go build everything that is needed in order without having to add targets?
Also below is an example terraform script that gives the above error:
provider "azurerm" {
  version = "=1.38.0"
}

resource "azurerm_resource_group" "resourcegroup" {
  name     = "rg-stackoverflow60187000"
  location = "West Europe"
}

resource "azurerm_app_service_plan" "service_plan" {
  name                = "plan-stackoverflow60187000"
  resource_group_name = azurerm_resource_group.resourcegroup.name
  location            = azurerm_resource_group.resourcegroup.location
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "app_service" {
  name                = "app-stackoverflow60187000"
  resource_group_name = azurerm_resource_group.resourcegroup.name
  location            = azurerm_resource_group.resourcegroup.location
  app_service_plan_id = azurerm_app_service_plan.service_plan.id

  site_config {
    always_on        = true
    app_command_line = ""
    linux_fx_version = "DOCKER|nginxdemos/hello"
  }

  app_settings = {
    "WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
  }
}

resource "azurerm_sql_server" "sql_server" {
  name                         = "mysqlserver-stackoverflow60187000"
  resource_group_name          = azurerm_resource_group.resourcegroup.name
  location                     = azurerm_resource_group.resourcegroup.location
  version                      = "12.0"
  administrator_login          = "4dm1n157r470r"
  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}

resource "azurerm_sql_database" "sqldatabase" {
  name                             = "sqldatabase-stackoverflow60187000"
  resource_group_name              = azurerm_sql_server.sql_server.resource_group_name
  location                         = azurerm_sql_server.sql_server.location
  server_name                      = azurerm_sql_server.sql_server.name
  edition                          = "Standard"
  requested_service_objective_name = "S1"
}

resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
  count               = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
  name                = "App Service Access (${count.index})"
  resource_group_name = azurerm_sql_database.sqldatabase.resource_group_name
  server_name         = azurerm_sql_server.sql_server.name
  start_ip_address    = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
  end_ip_address      = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
}
To make this work without the -target workaround described in the error message requires reframing the problem in terms of values that Terraform can know only from the configuration, rather than values that are generated by the providers at apply time.
The trick then would be to figure out what values in your configuration the Azure API is using to decide how many IP addresses to return, and to rely on those instead. I don't know Azure well enough to give you a specific answer, but I see on Inbound/Outbound IP addresses that this seems to be an operational detail of Azure App Services rather than something you can control yourself, and so unfortunately this problem may not be solvable.
If there really is no way to predict from configuration how many addresses will be in possible_outbound_ip_addresses, the alternative is to split your configuration into two parts where one depends on the other. The first would configure your App Service and anything else that makes sense to manage along with it, and then the second might use the azurerm_app_service data source to retrieve the data about the assumed-already-existing app service and make firewall rules based on it.
Either way you'll need to run Terraform twice to make the necessary data available. An advantage of using -target is that you only need to do a funny workflow once during initial bootstrapping, and so you could potentially do the initial create outside of CI to get the objects initially created and then use CI for ongoing changes. As long as the app service object is never replaced, subsequent Terraform plans will already know how many IP addresses are set and so should be able to complete as normal.
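A sketch of what the second configuration in that split could look like, reusing the resource names from the example question above and assuming the first configuration has already created the app service and SQL server:

```hcl
# Second, separate configuration: read the already-created app service
# and derive the firewall rules from its outbound IPs, which are now
# known values at plan time.
data "azurerm_app_service" "app_service" {
  name                = "app-stackoverflow60187000"
  resource_group_name = "rg-stackoverflow60187000"
}

locals {
  outbound_ips = split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses)
}

resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
  count               = length(local.outbound_ips)
  name                = "AppServiceAccess${count.index}"
  resource_group_name = "rg-stackoverflow60187000"
  server_name         = "mysqlserver-stackoverflow60187000"
  start_ip_address    = local.outbound_ips[count.index]
  end_ip_address      = local.outbound_ips[count.index]
}
```

Because the data source is only read at plan time, count no longer depends on an unknown attribute, at the cost of having to apply the two configurations in order.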