How can you create an Azure Cognitive Services Account with System assigned identity in Terraform?
I have tried the following but got an error: Blocks of type "identity" are not expected here.
resource "azurerm_cognitive_account" "cgsrv" {
# Conditionally based on feature flag
count = var.to_provision == true ? 1 : 0
name = lower(replace("${var.name_params.prefix}-cgnsrv-${var.name_params.use_case_name}", "-", ""))
location = var.location
resource_group_name = var.resourcegroup_name
kind = "CognitiveServices"
sku_name = "S0"
identity {
type = "SystemAssigned"
}
}
You are right, the Terraform documentation does not mention this ability at the moment.
The provider is open source, and it looks like there is a pull request regarding this specific field: https://github.com/terraform-providers/terraform-provider-azurerm/pull/12469
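Until that pull request is merged, one possible stopgap is to create the account through the AzAPI provider (the same approach used for the flexible server further down), since it passes the request straight to the ARM API and supports an identity block natively. This is only a sketch, not a tested configuration; the API version and body layout are assumptions to verify against the Azure REST specs:

resource "azapi_resource" "cgsrv" {
  type      = "Microsoft.CognitiveServices/accounts@2021-04-30"
  name      = "example-cognitive-account" # hypothetical name
  location  = var.location
  parent_id = var.resourcegroup_id # assumed to be the resource group's full ARM ID

  # System-assigned identity is supported natively by azapi_resource
  identity {
    type = "SystemAssigned"
  }

  body = jsonencode({
    kind = "CognitiveServices"
    sku = {
      name = "S0"
    }
    properties = {}
  })
}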
I'm configuring my servers with Terraform. For non-prod environments our SKU doesn't allow for high availability, but in prod our SKU does.
For some reason high_availability.mode only accepts the value "ZoneRedundant" (according to the documentation). Depending on whether var.isProd is true, I want to turn high availability on or off, but how would I do that?
resource "azurerm_postgresql_flexible_server" "default" {
name = "example-${var.env}-postgresql-server"
location = azurerm_resource_group.default.location
resource_group_name = azurerm_resource_group.default.name
version = "14"
administrator_login = "sqladmin"
administrator_password = random_password.postgresql_server.result
geo_redundant_backup_enabled = var.isProd
backup_retention_days = var.isProd ? 60 : 7
storage_mb = 32768
high_availability {
mode = "ZoneRedundant"
}
sku_name = var.isProd ? "B_Standard_B2s" : "B_Standard_B1ms"
}
I believe the default for this resource is HA disabled, so it is not the mode argument that manages HA, but rather the presence of the high_availability block itself. You can therefore manage HA by omitting the block to accept the default "disabled", or including it to enable HA with a mode of ZoneRedundant:
dynamic "high_availability" {
for_each = var.isProd ? ["this"] : []
content {
mode = "ZoneRedundant"
}
}
I am hypothesizing somewhat about the API endpoint's parameter defaults, so this would need to be acceptance tested against an Azure account. However, the documentation for the Azure Postgres Flexible Server claims HA is in fact disabled by default, so this should function as desired.
If you deploy a flexible server using the azurerm provider in Terraform, it accepts only the ZoneRedundant value for high_availability.mode, and with this provider you can deploy PostgreSQL versions 11, 12, and 13 only.
Using the AzAPI provider in Terraform, you can set highAvailability.mode to any of "Disabled", "ZoneRedundant", or "SameZone".
Based on your requirement, I have created the sample Terraform script below. It has an environment variable that accepts only prod or non-prod values; based on this value the flexible server is deployed with the corresponding properties.
If the environment value is prod, the script deploys the flexible server with high availability set to ZoneRedundant, a backup retention of 35 days, and geo-redundant backup enabled.
If the environment value is non-prod, the script deploys the flexible server with high availability disabled, a backup retention of 7 days, and geo-redundant backup disabled.
Here is the Terraform Script:
terraform {
  required_providers {
    azapi = {
      source = "azure/azapi"
    }
  }
}

provider "azapi" {
}

variable "environment" {
  type = string

  validation {
    condition     = anytrue([var.environment == "prod", var.environment == "non-prod"])
    error_message = "You haven't defined any of the allowed values."
  }
}

resource "azapi_resource" "rg" {
  type      = "Microsoft.Resources/resourceGroups@2021-04-01"
  name      = "teststackhub"
  location  = "eastus"
  parent_id = "/subscriptions/<subscriptionId>"
}

resource "azapi_resource" "test" {
  type      = "Microsoft.DBforPostgreSQL/flexibleServers@2022-01-20-preview"
  name      = "example-${var.environment}-postgresql-server"
  location  = azapi_resource.rg.location
  parent_id = azapi_resource.rg.id

  body = jsonencode({
    properties = {
      administratorLogin         = "azureuser"
      administratorLoginPassword = "<password>"
      backup = {
        backupRetentionDays = var.environment == "prod" ? 35 : 7
        geoRedundantBackup  = var.environment == "prod" ? "Enabled" : "Disabled"
      }
      storage = {
        storageSizeGB = 32
      }
      highAvailability = {
        mode = var.environment == "prod" ? "ZoneRedundant" : "Disabled"
      }
      version = "14"
    }
    sku = {
      name = var.environment == "prod" ? "Standard_B2s" : "Standard_B1ms"
      tier = "GeneralPurpose"
    }
  })
}
NOTE: The above Terraform sample script is for your reference; please make changes based on your business requirements.
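For example, with the script above saved locally, the two environments would be exercised like this (the variable validation rejects any other value):

terraform plan -var="environment=prod"     # ZoneRedundant HA, 35-day geo-redundant backups
terraform plan -var="environment=non-prod" # HA disabled, 7-day backups, no geo-redundancy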
I have defined a resource to provision Databricks workspaces on Azure using Terraform as follows. It consumes a list of inputs from a tfvars file (one entry per workspace) and provisions them.
resource "azurerm_databricks_workspace" "workspace" {
for_each = { for r in var.databricks_workspace_list : r.workspace_nm => r}
name = each.key
resource_group_name = each.value.resource_group_name
location = each.value.location
sku = "standard"
tags = {
Environment = "Dev"
}
}
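For reference, an entry in the databricks_workspace_list variable looks like this (the field names come from the for_each expression above; the values are placeholders):

databricks_workspace_list = [
  {
    workspace_nm        = "dbw-dev-01"
    resource_group_name = "rg-dev"
    location            = "eastus"
  }
]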
I am trying to create an additional resource as below:
resource "databricks_instance_pool" "smallest_nodes" {
instance_pool_name = "Smallest Nodes"
min_idle_instances = 0
max_capacity = 300
node_type_id = data.databricks_node_type.smallest.id // data block is defined
idle_instance_autotermination_minutes = 10
}
To create the instance pool, I need to pass the workspace ID in the databricks provider block as below:
provider "databricks" {
azure_client_id= *******
azure_client_secret= *******
azure_tenant_id= *******
azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
}
But when I run terraform plan, it fails with the error below:
Error: Missing resource instance key

  azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id

Because azurerm_databricks_workspace.workspace has "for_each" set, its
attributes must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  azurerm_databricks_workspace.workspace[each.key]
I couldn't use for_each in the provider block, and I wasn't able to find a way to index the workspace ID in the provider block.
Appreciate your inputs.
TF version: 0.13
AzureRM: 3.10.0
Databricks: 0.5.7
The problem is that you may create multiple workspaces when using for_each in the azurerm_databricks_workspace resource, but your provider block is trying to refer to a "generic" resource instance, so Terraform complains.
The solution here would be either:
- remove for_each if you're creating just one workspace, or
- instead of azurerm_databricks_workspace.workspace.id, refer to azurerm_databricks_workspace.workspace[<name>].id, where <name> is the key of the specific Databricks instance from the list of workspaces (see the sketch below).
P.S. Your databricks_instance_pool resource doesn't have an explicit depends_on, so the operation will fail with an authentication error.
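A minimal sketch combining both fixes, assuming one of the workspaces is keyed "dbw-dev-01" (that key and the credential variable names are hypothetical):

provider "databricks" {
  azure_client_id             = var.client_id # hypothetical variable names
  azure_client_secret         = var.client_secret
  azure_tenant_id             = var.tenant_id
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace["dbw-dev-01"].id
}

resource "databricks_instance_pool" "smallest_nodes" {
  instance_pool_name                    = "Smallest Nodes"
  min_idle_instances                    = 0
  max_capacity                          = 300
  node_type_id                          = data.databricks_node_type.smallest.id
  idle_instance_autotermination_minutes = 10

  # Explicit dependency so the pool is not created before the workspace exists
  depends_on = [azurerm_databricks_workspace.workspace]
}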
I would like to create a sample Azure web app using Terraform.
I use this code to create the resource.
This is my main.tf file:
resource "azurerm_resource_group" "rg" {
name = var.rgname
location = var.rglocation
}
resource "azurerm_app_service_plan" "plan" {
name = var.webapp_plan_name
location = var.rglocation
resource_group_name = var.rgname
sku {
tier = var.plan_settings["tier"]
size = var.plan_settings["size"]
capacity = var.plan_settings["capacity"]
}
}
resource "azurerm_app_service" "webapp" {
name = var.webapp_name
location = var.rglocation
resource_group_name = var.rgname
app_service_plan_id = azurerm_app_service_plan.plan.id
}
and this is the variable.tf:
# variables for Resource Group
variable "rgname" {
  description = "(Required) Name of the Resource Group"
  type        = string
  default     = "example-rg"
}

variable "rglocation" {
  description = "Resource Group location like West Europe etc."
  type        = string
  default     = "eastus2"
}

# variables for web app plan
variable "webapp_plan_name" {
  description = "Name of the web app plan"
  type        = string
  default     = "XXXXXXXXXx"
}

variable "plan_settings" {
  type        = map(string)
  description = "Definition of the dedicated plan to use"
  default = {
    kind     = "Linux"
    size     = "S1"
    capacity = 1
    tier     = "Standard"
  }
}

variable "webapp_name" {
  description = "Name of webapp"
  type        = string
  default     = "XXXXXXXX"
}
The terraform apply --auto-approve shows an error:

Error: creating/updating App Service Plan "XXXXXXXXXXX" (Resource Group "example-rg"): web.AppServicePlansClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="ResourceGroupNotFound" Message="Resource group 'example-rg' could not be found."

but in the Azure Portal, the resource group is created.
What's wrong in my code?
Can I make Terraform wait until a resource is created before moving on to the next resource?
You need to reference the resources to indicate the dependency between them to Terraform, so that it can guarantee that the resource group is created first and the other resources afterwards. The error indicates that the resource group does not exist yet: because nothing tells Terraform about the ordering, your code may try to create the ASP before the RG.
resource "azurerm_resource_group" "rg" {
name = var.rgname
location = var.rglocation
}
resource "azurerm_app_service_plan" "plan" {
...
resource_group_name = azurerm_resource_group.rg.name
...
}
... but I can't use variable?
To answer that question and give more detail about the current issue: what's happening in your original code is that you're not referencing Terraform resources in a way that Terraform can use to create its dependency graph.
If var.rgname equals <my-rg-name> and azurerm_resource_group.rg.name also equals <my-rg-name>, then technically that's the same, right?
Well, no, not necessarily. They do have the same value. The difference is that the first one just echoes the value. The second one contains the same value, but it is also an instruction to Terraform saying, "Hey, wait a minute. We need the name value from azurerm_resource_group.rg, so let me make sure I set that up first, and then I'll provision this resource."
The difference is subtle but important. Using values in this way lets Terraform understand what it has to provision first and lets it create its dependency graph; using input variables alone does not. Especially in large projects, always try to use the resource attribute value instead of just the input variable, which will help avoid unnecessary issues.
It's quite common to use an input variable to define certain initial data, like the name of the resource group as you did above. After that, however, every time resource_group_name is referenced in the manifests, you should always use the Terraform-generated value, so resource_group_name = azurerm_resource_group.rg.name.
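To make the contrast concrete, here are both forms side by side (the string value is identical; only the meaning to Terraform differs):

# Implicit dependency: Terraform must create the resource group before this resource
resource_group_name = azurerm_resource_group.rg.name

# No dependency: just a plain string; creation order is not guaranteed
resource_group_name = var.rgname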
I have an Azure Function and an Azure App Service Plan that were both created using the following Terraform code:
resource "azurerm_app_service_plan" "asp" {
name = "asp-${var.environment}"
resource_group_name = var.rg_name
location = var.location
kind = "FunctionApp"
reserved = true
sku {
tier = "ElasticPremium"
size = "EP1"
}
}
resource "azurerm_function_app" "function" {
name = "function-${var.environment}"
resource_group_name= var.rg_name
location= var.location
app_service_plan_id= azurerm_app_service_plan.asp.id
storage_connection_string=azurerm_storage_account.storage.primary_connection_string
os_type = "linux"
site_config {
linux_fx_version = "DOCKER|${data.azurerm_container_registry.acr.login_server}/${var.image_name}:latest"
}
identity {
type = "SystemAssigned"
}
app_settings = {
#Lots of variables, but irrelevant for this issue I assume?
}
depends_on = [azurerm_app_service_plan.asp]
version = "~2"
}
resource "azurerm_storage_account" "storage" {
name = "storage${var.environment}"
resource_group_name = var.rg_name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
}
The function works fine.
The issue is that any change I now try to do in Terraform ends up in the following error during apply:
2020-08-25T06:31:23.256Z [DEBUG] plugin.terraform-provider-azurerm_v2.24.0_x5: {"Code":"Conflict","Message":"Server farm 'asp-staging' cannot be deleted because it has web app(s) function-staging assigned to it.","Target":null,"Details":[{"Message":"Server farm 'asp-staging' cannot be deleted because it has web app(s) function-staging assigned to it."},{"Code":"Conflict"},{"ErrorEntity":{"ExtendedCode":"11003","MessageTemplate":"Server farm '{0}' cannot be deleted because it has web app(s) {1} assigned to it.","Parameters":["asp-staging","function-staging"],"Code":"Conflict","Message":"Server farm 'asp-staging' cannot be deleted because it has web app(s) function-staging assigned to it."}}],"Innererror":null}
...
Error: Error deleting App Service Plan "asp-staging" (Resource Group "my-resource-group"): web.AppServicePlansClient#Delete: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> <nil>
I have another service plan with an app service, and have had no problems applying while they are running.
I have even tried removing all references to the function and its service plan and still get the same error.
I am able to delete the function and its service plan from the portal, and then Terraform applies fine once, when it creates the function and service plan. As long as those are present when Terraform applies, it fails.
This workaround of manually deleting the function and service plan is not feasible in the long run, so I hope someone can help me point out the issue. Is there some error in the way I have created the function or service plan?
provider "azurerm" {
version = "~> 2.24.0"
...
Edit:
As suggested this might be a provider bug, so I have created this issue: https://github.com/terraform-providers/terraform-provider-azurerm/issues/8241
Edit 2:
On the issue tracker they claim it is a configuration error and that I am missing a dependency. I have updated the code with a depends_on, but I still have the same error.
I found the issue. On every apply, the service plan was being replaced:
# azurerm_app_service_plan.asp must be replaced
-/+ resource "azurerm_app_service_plan" "asp" {
      ~ id                           = "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Web/serverfarms/asp" -> (known after apply)
      - is_xenon                     = false -> null
      ~ kind                         = "elastic" -> "FunctionApp" # forces replacement
        location                     = "norwayeast"
      ~ maximum_elastic_worker_count = 1 -> (known after apply)
      ~ maximum_number_of_workers    = 20 -> (known after apply)
        name                         = "asp"
      - per_site_scaling             = false -> null
        reserved                     = true
        resource_group_name          = "xxx"
      - tags                         = {
          - "Owner"       = "XXX"
          - "Service"     = "XXX"
          - "environment" = "staging"
        } -> null
Even though I created it with kind = "FunctionApp", it seems Azure changed it to "elastic".
I now changed it to kind = "elastic" and Terraform has stopped destroying my service plan on every apply :)
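For reference, a sketch of the corrected plan block (identical to the one above except for kind):

resource "azurerm_app_service_plan" "asp" {
  name                = "asp-${var.environment}"
  resource_group_name = var.rg_name
  location            = var.location
  kind                = "elastic" # what Azure actually reports for Elastic Premium plans; "FunctionApp" forces replacement
  reserved            = true

  sku {
    tier = "ElasticPremium"
    size = "EP1"
  }
}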
Thanks a lot to Charles Xu for lots of help!
The reason (in my case) for the 409 was that app service plans with active app services cannot be deleted. When Terraform tries to delete the old plan before the apps have been migrated to the new plan, the 409 happens. The solution was to keep the existing app service plan during the migration, and rerun Terraform to remove the old plan after the applications had been migrated to the new one.
I'm getting the following error when trying to do a plan or an apply on a terraform script.
Error: Invalid count argument

  on main.tf line 157, in resource "azurerm_sql_firewall_rule" "sqldatabase_onetimeaccess_firewall_rule":
 157:   count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.

To work around this, use the -target argument to first apply only the
resources that the count depends on.
I understand this is falling over because Terraform doesn't know the number of firewall rules to create until the app_service is created. I can run an apply with -target=azurerm_app_service.app_service, then run another apply after the app_service is created.
However, this isn't great for our CI process: if we want to create a whole new environment from our Terraform scripts, we'd like to just tell Terraform to go and build it, without having to tell it each target to build in order.
Is there a way in Terraform to just build everything that is needed, in the right order, without having to add targets?
Also below is an example terraform script that gives the above error:
provider "azurerm" {
version = "=1.38.0"
}
resource "azurerm_resource_group" "resourcegroup" {
name = "rg-stackoverflow60187000"
location = "West Europe"
}
resource "azurerm_app_service_plan" "service_plan" {
name = "plan-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
kind = "Linux"
reserved = true
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_app_service" "app_service" {
name = "app-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
app_service_plan_id = azurerm_app_service_plan.service_plan.id
site_config {
always_on = true
app_command_line = ""
linux_fx_version = "DOCKER|nginxdemos/hello"
}
app_settings = {
"WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
}
}
resource "azurerm_sql_server" "sql_server" {
name = "mysqlserver-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_sql_database" "sqldatabase" {
name = "sqldatabase-stackoverflow60187000"
resource_group_name = azurerm_sql_server.sql_server.resource_group_name
location = azurerm_sql_server.sql_server.location
server_name = azurerm_sql_server.sql_server.name
edition = "Standard"
requested_service_objective_name = "S1"
}
resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
name = "App Service Access (${count.index})"
resource_group_name = azurerm_sql_database.sqldatabase.resource_group_name
server_name = azurerm_sql_database.sqldatabase.name
start_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
end_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
}
To make this work without the -target workaround described in the error message requires reframing the problem in terms of values that Terraform can know only from the configuration, rather than values that are generated by the providers at apply time.
The trick then would be to figure out what values in your configuration the Azure API is using to decide how many IP addresses to return, and to rely on those instead. I don't know Azure well enough to give you a specific answer, but I see on Inbound/Outbound IP addresses that this seems to be an operational detail of Azure App Services rather than something you can control yourself, and so unfortunately this problem may not be solvable.
If there really is no way to predict from configuration how many addresses will be in possible_outbound_ip_addresses, the alternative is to split your configuration into two parts where one depends on the other. The first would configure your App Service and anything else that makes sense to manage along with it, and then the second might use the azurerm_app_service data source to retrieve the data about the assumed-already-existing app service and make firewall rules based on it.
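A minimal sketch of that second configuration, reusing the names from the example above (this assumes the app service already exists and that the azurerm_app_service data source exposes possible_outbound_ip_addresses, which should be verified against the provider docs):

# Read the already-existing app service created by the first configuration
data "azurerm_app_service" "app_service" {
  name                = "app-stackoverflow60187000"
  resource_group_name = "rg-stackoverflow60187000"
}

resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
  count               = length(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses))
  name                = "App Service Access (${count.index})"
  resource_group_name = "rg-stackoverflow60187000"
  server_name         = "mysqlserver-stackoverflow60187000"
  start_ip_address    = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
  end_ip_address      = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
}

Because the data source is read during plan, the count is known up front and the "Invalid count argument" error goes away.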
Either way you'll need to run Terraform twice to make the necessary data available. An advantage of using -target is that you only need to do a funny workflow once during initial bootstrapping, and so you could potentially do the initial create outside of CI to get the objects initially created and then use CI for ongoing changes. As long as the app service object is never replaced, subsequent Terraform plans will already know how many IP addresses are set and so should be able to complete as normal.