Unable to specify backup in Terraform CosmosDB - Azure

I am using Terraform to create a CosmosDB account; my build uses azurerm 2.56.0.
resource "azurerm_cosmosdb_account" "testaccount" {
name = "testaccount"
location = var.location
resource_group_name = var.rgname
offer_type = "Standard"
Kind = "GlobalDocumentDB"
enable_automatic_failover = false
consistent_policy {
consistency_level = "Session"
}
backup {
type = "Periodic"
interval_in_minutes = "120"
retention_in_hours = "14"
}
}
I am getting the following error:
Error: Unsupported block type
When I comment out the backup section, it works fine.
I checked the cosmosdb_account docs at https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_account#backup and it does seem like I have declared it correctly. I have also checked that this version of azurerm supports backups.
I am probably missing something obvious; does anyone see what the problem is?
Thanks
Dan

The backup block is not supported in azurerm 2.56.0; you are looking at the docs for a newer version of the provider. For your version, the docs are here.
If you want to use backup, you have to upgrade your provider.
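For example, a minimal sketch of pinning a newer provider (the exact minimum version that added the backup block is an assumption here; check the azurerm changelog):
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.59.0" # assumed: a release recent enough to support the backup block
    }
  }
}
Then run terraform init -upgrade so Terraform fetches the newer provider.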

How to set my Azure function's platform with Terraform?

I am trying to change my Azure function's platform to 64-bit to be compatible with a new DLL my project needs. I just have not been able to find the corresponding Terraform key to set this Azure function field.
The problem is that Terraform currently defaults to 32-bit, so whenever I deploy, the field changes back.
Any help would be appreciated. Thank you!
I've tried poking around with some app_settings keys from the Microsoft documentation, but none of them seem obviously connected to the platform version. I have also tried looking at the keys in the Terraform documentation, and none of those jump out at me either.
Here is my Terraform (app_settings not shown):
resource "azurerm_app_service_plan" "plan" {
count = length(var.resource_groups)
name = "${var.name}-asp${count.index + 1}"
location = var.resource_groups[count.index].location
resource_group_name = var.resource_groups[count.index].name
kind = "FunctionApp"
sku {
tier = var.app_service_plan_tier
size = var.app_service_plan_size
}
tags = var.tags
}
resource "azurerm_function_app" "function" {
count = length(azurerm_app_service_plan.plan.*)
name = "${var.name}${count.index + 1}"
location = azurerm_app_service_plan.plan[count.index].location
resource_group_name = azurerm_app_service_plan.plan[count.index].resource_group_name
app_service_plan_id = azurerm_app_service_plan.plan[count.index].id
storage_account_name = var.storage_account_name
storage_account_access_key = var.storage_account_access_key
app_settings = local.app_settings
version = "~2"
https_only = true
tags = var.tags
}
The resource that manages the configuration of the worker is the azurerm_function_app resource.
Setting the attribute use_32_bit_worker_process to true runs the application on a 32-bit platform, which is the default.
Explicitly set use_32_bit_worker_process to false, and be sure to use a tier other than Free or Shared, as stated in the docs:
when using an App Service Plan in the Free or Shared Tiers use_32_bit_worker_process must be set to true.
javierlga is correct:
set use_32_bit_worker_process to false inside the site_config block.
site_config {
  use_32_bit_worker_process = false
}
More info:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app#use_32_bit_worker_process
You can also set the bitness of your worker in the azurerm_windows_function_app resource.
Set use_32_bit_worker to false.
Note: azurerm_function_app is deprecated and has been superseded by azurerm_windows_function_app and azurerm_linux_function_app.
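For the newer resource, a minimal sketch (the names and service-plan wiring here are illustrative assumptions, not taken from the question):
resource "azurerm_windows_function_app" "function" {
  name                       = "example-windows-function-app" # hypothetical name
  location                   = var.location
  resource_group_name        = var.resource_group_name
  service_plan_id            = azurerm_service_plan.plan.id # assumed plan resource
  storage_account_name       = var.storage_account_name
  storage_account_access_key = var.storage_account_access_key

  site_config {
    # Run the worker in 64-bit mode; the attribute defaults to true (32-bit).
    use_32_bit_worker = false
  }
}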

AzureWebJobsDashboard no longer supported, but added automatically to Azure Function App

In our Application Insights logs for Azure Functions there are a lot of warnings with the message:
The Dashboard setting is no longer supported. See https://aka.ms/functions-dashboard for details.
We build our Azure resources using Terraform, and since our Function Apps target the "~4" runtime version we don't add the AzureWebJobsDashboard setting to our Function's Application settings. (According to the docs: The AzureWebJobsDashboard setting is only valid for apps that target version 1.x of the Azure Functions runtime.)
I was therefore surprised to find the AzureWebJobsDashboard setting with a value in the Azure portal. Any idea how it got there?
I deleted the setting manually in the portal for four of the apps we have running, and the logged warnings went away - however, the setting reappeared in one of them after a little while 🤯 Is there any way to make sure the deletion is permanent?
Edit: I tried deleting the setting manually for four new apps - making sure to save the changes, and the setting reappeared in two of them after some hours.
Edit2: After 1-2 days the setting is back in all eight apps.
There's a dedicated setting, builtin_logging_enabled, in the Terraform resource for Azure Functions:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app#enable_builtin_logging
Setting it to false should disable AzureWebJobsDashboard. (In the older azurerm_function_app resource the argument is named enable_builtin_logging; in azurerm_windows_function_app and azurerm_linux_function_app it is builtin_logging_enabled.)
Just add it in your azurerm_windows_function_app resource like this:
resource "azurerm_windows_function_app" "func" {
name = "sample-function-app"
builtin_logging_enabled = false
...
}
We tried the same in our environment to check whether AzureWebJobsDashboard appears when deploying an Azure Function with Terraform.
Yes, it was there, and the document you followed is correct. To resolve the issue manually, do the following.
Make sure APPINSIGHTS_INSTRUMENTATIONKEY is applied after deleting AzureWebJobsDashboard,
and enable Application Insights for the function app; the value is stored automatically once it is enabled.
In your case the setting reappeared after some hours or days, but with the above enabled it seems to work: we have checked several times and it has not reappeared.
NOTE: we used Python 3.9 with Functions runtime v4 in a Linux environment.
Below is the Terraform code that we used to reproduce this:
main.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "ajayXXXX"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "exatst"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_service_plan" "example" {
name = "example-service-plan1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
os_type = "Linux"
sku_name = "S1"
}
resource "azurerm_linux_function_app" "example" {
name = "funterraform"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
service_plan_id = azurerm_service_plan.example.id
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
site_config {
application_stack {
python_version = "3.9"
}
}
}
resource "azurerm_function_app_function" "example" {
name = "example-function-app-function"
function_app_id = azurerm_linux_function_app.example.id
language = "Python"
test_data = jsonencode({
"name" = "Azure"
})
config_json = jsonencode({
"bindings" = [
{
"authLevel" = "function"
"direction" = "in"
"methods" = [
"get",
"post",
]
"name" = "req"
"type" = "httpTrigger"
},
{
"direction" = "out"
"name" = "$return"
"type" = "http"
},
]
})
}
Source code taken from: HashiCorp Terraform Registry | azurerm_function_app_function
For more information, please refer to the links below:
GitHub issue | Remove support for AzureWebJobsDashboard
Microsoft documentation | App settings reference for Azure Functions

Terraform reports error "Failed to query available provider packages"

I have created a main.tf file, shown below, for a MongoDB Terraform module.
resource "mongodbatlas_teams" "test" {
org_id = null
name = "MVPAdmin_Team"
usernames = ["user1#email.com", "user2#email.com", "user3#email.com"]
}
resource "mongodbatlas_project" "test" {
name = "MVP_Project"
org_id = null
teams {
team_id = null
role_names = ["GROUP_CLUSTER_MANAGER"]
}
}
resource "mongodbatlas_project_ip_access_list" "test" {
project_id = null
ip_address = null
comment = "IP address for MVP Dev cluster testing"
}
resource "mongodbatlas_cluster" "test" {
name = "MVP_DevCluster"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
cluster_type = REPLICASET
state_name = var.state_name
replication specs {
num_shards= var.num_shards
region_config {
region_name = "AU-EA"
electable_nodes = var.electable_nodes
priority = var.priority
read_only_nodes = var.read_only_nodes
}
}
provider_backup_enabled = var.provider_backup_enabled
auto_scaling_disk_gb_enabled = var.auto_scaling_disk_gb_enabled
mongo_db_major_version = var.mongo_db_major_version
provider_name = "Azure"
provider_disk_type_name = var.provider_disk_type_name
provider_instance_size_name = var.provider_instance_size_name
mongodbatlas_database_user {
username = var.username
password = var.password
auth_database_name = var.auth_database_name
role_name = var.role_name
database_name = var.database_name
}
mongodbatlas_database_snapshot_backup_policy {
policy_item = var.policy_item
frequency_type = var.frequency_type
retention_value = var.retention_value
}
advanced_configuration {
minimum_enabled_tls_protocol = var.minimum_enabled_tls_protocol
no_table_scan = var.no_table_scan
connection_string = var.connection_string
}
}
However, terraform init reports the following:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/mongodbatlas...
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/mongodbatlas: provider registry registry.terraform.io does not have
a provider named registry.terraform.io/hashicorp/mongodbatlas
If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
Did you intend to use mongodb/mongodbatlas? If so, you must specify that
source address in each module which requires that provider. To see which
modules are currently depending on hashicorp/mongodbatlas, run the following
command:
terraform providers
Any idea as to what is going wrong?
The error message explains the most likely reason for seeing this error message: you've upgraded directly from Terraform v0.12 to Terraform v0.14 without running through the Terraform v0.13 upgrade steps.
If you upgrade to Terraform v0.13 first and follow those instructions then the upgrade tool should be able to give more specific instructions on what to change here, and may even be able to automatically upgrade your configuration for you.
However, if you wish then you can alternatively manually add the configuration block that the v0.13 upgrade tool would've inserted, to specify that you intend to use the mongodb/mongodbatlas provider as "mongodbatlas" in this module:
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
}
There are some other considerations in the v0.13 upgrade guide that the above doesn't address, so you may still need to perform the steps described in that upgrade guide if you see different error messages after trying what I showed above.
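After adding the required_providers block, run terraform init again so Terraform resolves mongodb/mongodbatlas. If an earlier run already recorded the provider in a state file under the hashicorp/ namespace, you may additionally need to re-point it there (a sketch; this only applies once a state file exists):
terraform state replace-provider registry.terraform.io/hashicorp/mongodbatlas registry.terraform.io/mongodb/mongodbatlas
terraform init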

Terraform Azure SQL server admin password change forces recreating of resource

I have the following piece of Terraform code, where Terraform fetches the SQL admin password from a key vault. When I changed the administrator login and password in the key vault and then ran terraform again to update the SQL server, it destroyed the SQL database and SQL server.
Is this standard procedure, or can I change this behavior? One can understand that recreating the resources is not really feasible in a production environment. I know a lifecycle hook could prevent the deletion of a resource, but such a thing would then break the pipeline, if I am correct.
data "azurerm_key_vault_secret" "sql_admin_user_secret" {
name = var.sql_admin_user_secret_name
key_vault_id = data.azurerm_key_vault.key_vault.id
}
data "azurerm_key_vault_secret" "sql_admin_password_secret" {
name = var.sql_admin_password_secret_name
key_vault_id = data.azurerm_key_vault.key_vault.id
}
resource "azurerm_sql_server" "sql_server" {
name = var.sql_server_name
resource_group_name = var.resource_group_name
location = var.location
version = var.sql_server_version
administrator_login = data.azurerm_key_vault_secret.sql_admin_user_secret.value
administrator_login_password = data.azurerm_key_vault_secret.sql_admin_password_secret.value
}
resource "azurerm_sql_database" "sql_database" {
name = var.sql_database_name
resource_group_name = var.resource_group_name
location = var.location
server_name = azurerm_sql_server.sql_server.name
edition = var.sql_edition
requested_service_objective_name = var.sql_service_level
}
I could add something like the following, but it only prevents a destroy and ignores changes in those fields, respectively, which is again not really an option.
lifecycle {
  prevent_destroy = true
  ignore_changes  = ["administrator_login", "administrator_login_password"]
}
Update:
The way of working is to never update the administrator_login. administrator_login_password can be updated separately, which doesn't cause the instance to be recreated.
As per the official docs, changing administrator_login is expected to recreate the resource. However, if you only change administrator_login_password, it should be updated in place:
administrator_login - (Required) The administrator login name for the new server. Changing this forces a new resource to be created.
There is not much that can be done here, since Terraform is communicating with the Azure API, which is not designed to update the administrator login of an Azure SQL server without creating a new resource.
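If the login value in the key vault might drift but you never intend to rotate it, one option (an illustrative sketch, not part of the original answer) is to ignore changes to administrator_login only, so that password rotations still apply in place:
resource "azurerm_sql_server" "sql_server" {
  # ... arguments as above ...

  lifecycle {
    # Only the login forces replacement; the password updates in place,
    # so it is safe to let password changes through.
    ignore_changes = [administrator_login]
  }
}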

Terraform Invalid count argument that depends on another resource

I'm getting the following error when trying to do a plan or an apply on a terraform script.
Error: Invalid count argument
on main.tf line 157, in resource "azurerm_sql_firewall_rule" "sqldatabase_onetimeaccess_firewall_rule":
157: count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
I understand this is falling over because it doesn't know the count for the number of firewall rules to create until the app_service is created. I can just run the apply with an argument of -target=azurerm_app_service.app_service then run another apply after the app_service is created.
However, this isn't great for our CI process; if we want to create a whole new environment from our Terraform scripts, we'd like to just tell Terraform to go and build it, without having to tell it each target to build in order.
Is there a way in terraform to just say go build everything that is needed in order without having to add targets?
Also below is an example terraform script that gives the above error:
provider "azurerm" {
version = "=1.38.0"
}
resource "azurerm_resource_group" "resourcegroup" {
name = "rg-stackoverflow60187000"
location = "West Europe"
}
resource "azurerm_app_service_plan" "service_plan" {
name = "plan-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
kind = "Linux"
reserved = true
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_app_service" "app_service" {
name = "app-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
app_service_plan_id = azurerm_app_service_plan.service_plan.id
site_config {
always_on = true
app_command_line = ""
linux_fx_version = "DOCKER|nginxdemos/hello"
}
app_settings = {
"WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
}
}
resource "azurerm_sql_server" "sql_server" {
name = "mysqlserver-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_sql_database" "sqldatabase" {
name = "sqldatabase-stackoverflow60187000"
resource_group_name = azurerm_sql_server.sql_server.resource_group_name
location = azurerm_sql_server.sql_server.location
server_name = azurerm_sql_server.sql_server.name
edition = "Standard"
requested_service_objective_name = "S1"
}
resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
name = "App Service Access (${count.index})"
resource_group_name = azurerm_sql_database.sqldatabase.resource_group_name
server_name = azurerm_sql_database.sqldatabase.name
start_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
end_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
}
To make this work without the -target workaround described in the error message requires reframing the problem in terms of values that Terraform can know only from the configuration, rather than values that are generated by the providers at apply time.
The trick then would be to figure out what values in your configuration the Azure API is using to decide how many IP addresses to return, and to rely on those instead. I don't know Azure well enough to give you a specific answer, but I see on Inbound/Outbound IP addresses that this seems to be an operational detail of Azure App Services rather than something you can control yourself, and so unfortunately this problem may not be solvable.
If there really is no way to predict from configuration how many addresses will be in possible_outbound_ip_addresses, the alternative is to split your configuration into two parts where one depends on the other. The first would configure your App Service and anything else that makes sense to manage along with it, and then the second might use the azurerm_app_service data source to retrieve the data about the assumed-already-existing app service and make firewall rules based on it.
Either way you'll need to run Terraform twice to make the necessary data available. An advantage of using -target is that you only need to do a funny workflow once during initial bootstrapping, and so you could potentially do the initial create outside of CI to get the objects initially created and then use CI for ongoing changes. As long as the app service object is never replaced, subsequent Terraform plans will already know how many IP addresses are set and so should be able to complete as normal.
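A minimal sketch of what the second configuration could look like under that split (names echo the question's example; treat the exact wiring as an assumption):
# Separate configuration, applied after the app service already exists.
data "azurerm_app_service" "app_service" {
  name                = "app-stackoverflow60187000"
  resource_group_name = "rg-stackoverflow60187000"
}

resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
  count               = length(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses))
  name                = "App Service Access (${count.index})"
  resource_group_name = "rg-stackoverflow60187000"
  server_name         = "mysqlserver-stackoverflow60187000"
  start_ip_address    = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
  end_ip_address      = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
}
Because the data source is read at plan time, the count is known up front and no -target pass is needed.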
