Terraform can't create free web app on Azure

Trying to set up my first web app on Azure using Terraform and their free tier.
The resource group and App Service plan were created fine, but the web app creation gives an error that says:
creating Linux Web App: (Site Name "testazurermjay" / Resource Group "test-resources"): web.AppsClient#C. Status=<nil> <nil>
Here is the Terraform main.tf file:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "test" {
name = "test-resources"
location = "Switzerland North"
}
resource "azurerm_service_plan" "test" {
name = "test"
resource_group_name = azurerm_resource_group.test.name
location = "UK South" #azurerm_resource_group.test.location
os_type = "Linux"
sku_name = "F1"
}
resource "azurerm_linux_web_app" "test" {
name = "testazurermjay"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_service_plan.test.location
service_plan_id = azurerm_service_plan.test.id
site_config {}
}
At first I thought the name was the issue for the azurerm_linux_web_app, so I changed it from test to testazurermjay, but that did not work.

I was able to get it to work, BUT I had to use a deprecated resource called azurerm_app_service instead of azurerm_linux_web_app. I ALSO had to make sure that my resource group and App Service plan were in the same location. When I originally tried to set both the resource group and the plan to Switzerland North, creating the App Service plan failed (that is why you see me change the plan to UK South in the original question). HOWEVER, after I set BOTH the resource group and the App Service plan to UK South, they were created in the same location. Then I used azurerm_app_service to create a free-tier service by setting use_32_bit_worker_process = true in the site_config block.
Here is the full terraform file now:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "test" {
name = "test-resources"
location = "UK South"
}
resource "azurerm_service_plan" "test" {
name = "test"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
os_type = "Linux"
sku_name = "F1"
}
resource "azurerm_app_service" "test" {
name = "sofcvlepsaipd"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
app_service_plan_id = azurerm_service_plan.test.id
site_config {
use_32_bit_worker_process = true
}
}
I MUST STRESS THAT THIS ISN'T BEST PRACTICE, AS azurerm_app_service IS GOING TO BE REMOVED IN THE NEXT VERSION. THIS SEEMS TO INDICATE THAT TERRAFORM WON'T BE ABLE TO CREATE FREE-TIER APP SERVICES IN THE NEXT UPDATE.
If someone knows how to do this with azurerm_linux_web_app, or knows a better approach, let me know.

I just encountered a similar issue: the "always_on" setting defaults to true, but that is not supported in the free tier. As stated here, you must explicitly set it to false when using the free tier:
resource "azurerm_linux_web_app" "test" {
name = "testazurermjay"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_service_plan.test.location
service_plan_id = azurerm_service_plan.test.id
site_config {
always_on = false
}
}

Related

Is it possible to create an Azure Database for PostgreSQL Flexible Server - Restore server (backup and restore) - using Terraform?

Azure Database for PostgreSQL Flexible Server automatically backs up the databases. In case of accidental deletion of a database, we can restore it by creating a new flexible server from the backup as part of the recovery process. I know how to do this from the Azure portal. Can Terraform code also configure "backup and restore" (Restore server) for a PostgreSQL Flexible Server?
The manual task is documented in the Azure doc: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-restore-server-portal. I just want to do the same task using Terraform, and in addition ensure appropriate login and database-level permissions.
I really appreciate any support and help.
It is possible to create the Azure Database for PostgreSQL Flexible Server backup using Terraform.
Please use the below Terraform code to configure the server and its backups:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "RG_NAME"
location = "EASTUS"
}
resource "azurerm_virtual_network" "example" {
name = "example-vn"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "example" {
name = "example-sn"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Storage"]
delegation {
name = "fs"
service_delegation {
name = "Microsoft.DBforPostgreSQL/flexibleServers"
actions = [
"Microsoft.Network/virtualNetworks/subnets/join/action",
]
}
}
}
resource "azurerm_private_dns_zone" "example" {
name = "example.postgres.database.azure.com"
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
name = "exampleVnetZone.com"
private_dns_zone_name = azurerm_private_dns_zone.example.name
virtual_network_id = azurerm_virtual_network.example.id
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_postgresql_flexible_server" "example" {
name = "example-psqlflexibleserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12"
delegated_subnet_id = azurerm_subnet.example.id
private_dns_zone_id = azurerm_private_dns_zone.example.id
administrator_login = "psqladmin"
administrator_password = "H#Sh1CoR3!"
zone = "1"
storage_mb = 32768
backup_retention_days = 30
geo_redundant_backup_enabled = true
sku_name = "GP_Standard_D4s_v3"
depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
Here I have set the resource group name, VNet, subnet, DB server name, password, and backup policy days.
I have set backup_retention_days to 30; the retention days must be between 1 and 35, and the default value is 7 days.
Before running the script, check the appropriate login server details.
Then follow the steps below to execute the file:
terraform init
This initializes the working directory.
terraform plan
This creates an execution plan and previews the changes that Terraform plans to make to the infrastructure.
terraform apply
This creates or updates the infrastructure, depending on the configuration.
geo_redundant_backup_enabled defaults to false; I have set it to true, and the backup retention is 30 days.
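Note that the configuration above enables backups on a new server; it does not itself perform a restore. For the actual "Restore server" operation, the azurerm provider supports creating a new flexible server from an existing server's backups via create_mode = "PointInTimeRestore". A minimal sketch, assuming the server above is the source and using a hypothetical restore timestamp (it must fall within the retention window):

resource "azurerm_postgresql_flexible_server" "restored" {
  name                = "example-psqlflexibleserver-restored"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location

  create_mode                       = "PointInTimeRestore"
  source_server_id                  = azurerm_postgresql_flexible_server.example.id
  point_in_time_restore_time_in_utc = "2023-01-01T00:00:00Z" # hypothetical timestamp

  # For a VNet-integrated source server, network settings (delegated subnet,
  # private DNS zone) may also need to be supplied on the restored server.
}

Login and database-level permissions are not covered by this resource; they would still need to be granted separately (for example with a PostgreSQL provider or psql).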
For reference you can use this documentation

Terraform tried creating an "implicit dependency" but the next stage of my code still fails to find the Azure resource group just created

I would be grateful for any assistance. I thought I had nailed this one when I stumbled across the following link ...
Creating a resource group with terraform in azure: Cannot find resource group directly after creating it
However, the next stage of my code is still failing...
Error: Code="ResourceGroupNotFound" Message="Resource group 'ShowTell' could not be found
# We strongly recommend using the required_providers block to set the
# Azure Provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.64.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

variable "resource_group_name" {
  type        = string
  default     = "ShowTell"
  description = ""
}

# Create your resource group
resource "azurerm_resource_group" "example" {
  name     = var.resource_group_name
  location = "UK South"
}

# Should be accessible from LukesContainer.uksouth.azurecontainer.io
resource "azurerm_container_group" "LukesContainer" {
  name                = "LukesContainer"
  location            = "UK South"
  resource_group_name = "${var.resource_group_name}"
  ip_address_type     = "public"
  dns_name_label      = "LukesContainer"
  os_type             = "Linux"

  container {
    name   = "hello-world"
    image  = "microsoft/aci-helloworld:latest"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = "443"
      protocol = "TCP"
    }
  }

  container {
    name   = "sidecar"
    image  = "microsoft/aci-tutorial-sidecar"
    cpu    = "0.5"
    memory = "1.5"
  }

  tags = {
    environment = "testing"
  }
}
In order to create an implicit dependency you must refer directly to the object that the dependency relates to. In your case, that means deriving the resource group name from the resource group object itself, rather than from the variable you'd used to configure that object:
resource "azurerm_container_group" "LukesContainer" {
name = "LukesContainer"
location = "UK South"
resource_group_name = azurerm_resource_group.example.name
# ...
}
With the configuration you included in your question, both the resource group and the container group depend on var.resource_group_name, but there was no dependency between azurerm_container_group.LukesContainer and azurerm_resource_group.example, so Terraform was free to create those two objects in either order.
By deriving the container group's resource group name from the resource group object you tell Terraform that the resource group must be processed first, and then its results used to populate the container group.
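If you do want to keep deriving the name from the variable, an explicit depends_on creates the same ordering. A sketch; the implicit reference above is the generally preferred style:

resource "azurerm_container_group" "LukesContainer" {
  name                = "LukesContainer"
  location            = "UK South"
  resource_group_name = var.resource_group_name
  depends_on          = [azurerm_resource_group.example] # force the resource group first
  # ...
}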

Create main.tf resources only when a variable is set to true in the vars.tf file

I usually have one generic main.tf file that is the basis for all deployments to our environments (DEV/STAGING/LIVE), plus one parameter .tf file for each of those environments.
There is always a requirement to have some more expensive Azure options enabled in the STAGING and LIVE environments over what DEV might have - in my example it's enabling Azure Defender for SQL and the extended auditing functions for Azure SQL Server (PaaS).
This is a portion of my main.tf file that is generic...
# Define SQL Server
resource "azurerm_mssql_server" "example" {
  name                          = var.azsqlserver1name
  resource_group_name           = azurerm_resource_group.example.name
  location                      = azurerm_resource_group.example.location
  version                       = var.azsqlserver1version
  administrator_login           = var.azsqlserver1sauser
  administrator_login_password  = random_password.sql-password.result
  public_network_access_enabled = "true" # set to false with vNet integration
}

# Define Storage Account and container for SQL Threat Detection Policy Audit Logs
resource "azurerm_storage_account" "example" {
  name                      = var.azsaname1
  resource_group_name       = azurerm_resource_group.example.name
  location                  = azurerm_resource_group.example.location
  account_tier              = var.azsatier1
  account_replication_type  = var.azsasku1
  access_tier               = var.azsaaccesstier1
  account_kind              = var.azsakind1
  enable_https_traffic_only = "true"
}

resource "azurerm_storage_container" "example" {
  name                  = "vascans"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Defines Azure SQL Defender and Auditing - NOTE: Auditing - only SA out at the moment (11/2020) - Log Analytics and Event Hub in preview only
resource "azurerm_mssql_server_security_alert_policy" "example" {
  resource_group_name        = azurerm_resource_group.example.name
  server_name                = azurerm_mssql_server.example.name
  state                      = var.azsqltreatdetectionstate
  storage_endpoint           = azurerm_storage_account.example.primary_blob_endpoint
  storage_account_access_key = azurerm_storage_account.example.primary_access_key
  email_account_admins       = var.azsqltreatdetectionemailadmins
  retention_days             = var.azsqltreatdetectionretention
}

resource "azurerm_mssql_server_vulnerability_assessment" "example" {
  server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.example.id
  storage_container_path          = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example.name}/"
  storage_account_access_key      = azurerm_storage_account.example.primary_access_key

  recurring_scans {
    enabled                   = var.azsqlvscansrecurring
    email_subscription_admins = var.azsqlvscansemailadmins
  }
}

resource "azurerm_mssql_server_extended_auditing_policy" "example" {
  server_id                               = azurerm_mssql_server.example.id
  storage_endpoint                        = azurerm_storage_account.example.primary_blob_endpoint
  storage_account_access_key              = azurerm_storage_account.example.primary_access_key
  storage_account_access_key_is_secondary = false
  retention_in_days                       = var.azsqlauditretentiondays
}
What I need is for everything after the first azurerm_mssql_server resource to be created only in STAGING and LIVE (not DEV). I was planning to have a variable in the DEV/STAGING/LIVE parameter .tf files that states something like...
DEVparm.tf
variable "azsqlenableazuredefenderforsql" {
  default = "false"
}
STAGINGparm.tf and LIVEparm.tf
variable "azsqlenableazuredefenderforsql" {
  default = "true"
}
Is this possible to achieve? Thus far I've drawn a blank and tested a few things, but they don't quite work. It seems a simple enough vision, but there is no IF... statement.
If you need to flip a resource on and off, that is easy to achieve with count = 1 or 0. This is usually handled with the ternary operator:
resource "some_resource" "example" {
count = terraform.workspace != "development" ? 1 : 0
}
The count parameter was added to modules in Terraform 0.13. If you have a bundle of resources, it could be an alternative way to exclude certain resources from being built, as sketched below.
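A sketch (Terraform >= 0.13), assuming a hypothetical ./modules/sql-defender module that bundles the alert policy, vulnerability assessment and auditing resources:

module "sql_defender" {
  source = "./modules/sql-defender"
  count  = var.azsqlenableazuredefenderforsql ? 1 : 0

  # hypothetical input; pass through whatever the module needs
  server_id = azurerm_mssql_server.example.id
}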
One way that a lot of people solve this is by combining the count parameter on resources with a ternary. For example, look at the section entitled "If-Statements with the count parameter" in https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9#478c.
Basically you can keep your azsqlenableazuredefenderforsql variable and then in your resources do something like:
resource "azurerm_storage_container" "example" {
count = var.azsqlenableazuredefenderforsql ? 1 : 0
name = "vascans"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
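One caveat: once a resource has count, expressions elsewhere must reference its instances by index, and any resource that references it usually needs the same guard. A sketch based on the resources in the question, with only the changed lines shown:

resource "azurerm_mssql_server_vulnerability_assessment" "example" {
  count = var.azsqlenableazuredefenderforsql ? 1 : 0

  # indexed references into the other guarded resources
  server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.example[0].id
  storage_container_path          = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example[0].name}/"
  # ...
}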

Getting "Error waiting for Virtual Network Rule "" (server, rg) to be created or updated..." for azurerm_mariadb_virtual_network_rule

I'm building a Terraform config for my infrastructure deployment and trying to connect an azurerm_mariadb_server resource to an azurerm_subnet using an azurerm_mariadb_virtual_network_rule, as per the documentation.
The VNet, subnet, MariaDB server etc. are all created, but I get the following when trying to create the VNet rule.
Error: Error waiting for MariaDb Virtual Network Rule "vnet-rule" (MariaDb Server: "server", Resource Group: "rg")
to be created or updated: couldn't find resource (21 retries)
on main.tf line 86, in resource "azurerm_mariadb_virtual_network_rule" "vnet_rule":
86: resource "azurerm_mariadb_virtual_network_rule" "mariadb_vnet_rule" {
I can't determine which resource can't be found - all resources except the azurerm_mariadb_virtual_network_rule are created, according to both the bash shell output and Azure portal.
My config is below - details of some resources are omitted for brevity.
provider "azurerm" {
version = "~> 2.27.0"
features {}
}
resource "azurerm_resource_group" "rg" {
name = "${var.resource_group_name}-rg"
location = var.location
}
resource "azurerm_virtual_network" "vnet" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}Vnet"
address_space = ["10.0.0.0/16"]
location = var.location
}
resource "azurerm_subnet" "backend" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}backendSubnet"
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.0.1.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
resource "azurerm_mariadb_server" "server" {
# DB server name can contain lower-case letters, numbers and dashes, NOTHING ELSE
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}-mariadb-server"
location = var.location
sku_name = "B_Gen5_2"
version = "10.3"
ssl_enforcement_enabled = true
}
resource "azurerm_mariadb_database" "mariadb_database" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}_mariadb_database"
server_name = azurerm_mariadb_server.server.name
charset = "utf8"
collation = "utf8_general_ci"
}
## Network Service Endpoint (add DB to subnet)
resource "azurerm_mariadb_virtual_network_rule" "vnet_rule" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}-mariadb-vnet-rule"
server_name = azurerm_mariadb_server.server.name
subnet_id = azurerm_subnet.backend.id
}
The issue looks to arise within func resourceArmMariaDbVirtualNetworkRuleCreateUpdate, but I don't know Go, so I can't follow exactly what's causing it.
If anyone can see an issue, or knows how to get around this, please let me know!
Also, I'm not able to do it via the portal - step 3 here shows a section for configuring VNet rules, which is not present on my page for 'Azure Database for MariaDB server'. I have the Global Administrator role, so I don't think it's permissions-related.
From Create and manage Azure Database for MariaDB VNet service endpoints and VNet rules by using the Azure portal, the key point is that:
Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
So change sku_name = "B_Gen5_2" to sku_name = "GP_Gen5_2" or another eligible sku_name.
sku_name - (Required) Specifies the SKU Name for this MariaDB Server. The name of the SKU follows the tier + family + cores pattern (e.g. B_Gen4_1, GP_Gen5_8). For more information see the product documentation.
It takes a few minutes to deploy.
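For clarity, here is the server resource from the question with only the SKU changed (a sketch; every other argument is unchanged):

resource "azurerm_mariadb_server" "server" {
  resource_group_name     = azurerm_resource_group.rg.name
  name                    = "${var.prefix}-mariadb-server"
  location                = var.location
  sku_name                = "GP_Gen5_2" # General Purpose tier supports VNet service endpoints
  version                 = "10.3"
  ssl_enforcement_enabled = true
}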

Error updating AppSetting with name is not allowed from Terraform

While updating an Azure App Service app setting with Terraform, we're getting the following error:
{"Message":"AppSetting with name 'HEALTHCHECKS-UI_HEALTHCHECKS_0_NAME' is not allowed."}
However, if we add it via the portal manually, it works totally fine.
I'm guessing it's something to do with the 0 or the -, but how do we escape these?
The Terraform code is pretty simple but here is a failing example:
resource "azurerm_resource_group" "test" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_app_service_plan" "test" {
name = "example-appserviceplan"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_app_service" "test" {
name = "example-app-service"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
app_service_plan_id = "${azurerm_app_service_plan.test.id}"
site_config {
dotnet_framework_version = "v4.0"
scm_type = "LocalGit"
}
app_settings = {
"HEALTHCHECKSUI_HEALTHCHECKS_0_NAME" = "Self"
"HEALTHCHECKSUI_HEALTHCHECKS_0_URI" = "https://${var.environment_name}-example-app-service/health-api"
}
}
Dropping into the bash terminal in Kudu and running printenv shows that setting it manually removes the -:
HEALTHCHECKSUI_HEALTHCHECKS_0_NAME=https://example-app-service.azurewebsites.net/health-api
I'm not sure; I couldn't find documentation describing this limitation for app settings in Azure App Service. But as far as I know, the operating system imposes the limitation. On Linux, you cannot set an environment variable whose name contains -, while on Windows that limitation does not exist. Letters and numbers are generally no problem on both systems.
Remove - (dash) from environment variable names and it will work fine.
Ex: Test-Pass --> TestPass
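In Terraform terms, that just means using dash-free keys in app_settings; a sketch based on the example above (the URI value is illustrative):

app_settings = {
  # "HEALTHCHECKS-UI_HEALTHCHECKS_0_NAME" is rejected; the dash-free name is accepted
  "HEALTHCHECKSUI_HEALTHCHECKS_0_NAME" = "Self"
  "HEALTHCHECKSUI_HEALTHCHECKS_0_URI"  = "https://example-app-service/health-api"
}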
