Create SQL firewall rules together with the SQL server when using targeted apply (Azure)

I have an azurerm_sql_server and two azurerm_sql_firewall_rule resources for that server.
If I do a targeted terraform apply to create a resource that depends on the SQL server, the SQL server is created but the firewall rules are not.
Can I require the firewall rules to always be deployed together with the SQL server?
"Bonus": The SQL server is in a module and the database using the server is in another module :(
Example code:
infrastructure/main.tf
resource "azurerm_sql_server" "test" {
count = var.enable_dbs ? 1 : 0
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
name = local.dbs_name
version = "12.0"
administrator_login = var.dbs_admin_login
administrator_login_password = var.dbs_admin_password
}
resource "azurerm_sql_firewall_rule" "allow_azure_services" {
count = var.enable_dbs ? 1 : 0
resource_group_name = azurerm_resource_group.test.name
name = "AllowAccessToAzureServices"
server_name = azurerm_sql_server.test[0].name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
webapp/main.tf
resource "azurerm_sql_database" "test" {
count = var.enable_db ? 1 : 0
location = var.location
resource_group_name = var.resource_group_name
server_name = var.dbs_name
name = var.project_name
requested_service_objective_name = var.db_sku_size
}
main.tf
module "infrastructure" {
source = "./infrastructure"
project_name = "infra"
enable_dbs = true
dbs_admin_login = "someusername"
dbs_admin_password = "somepassword"
}
module "my_webapp" {
source = "./webapp"
location = module.infrastructure.location
resource_group_name = module.infrastructure.resource_group_name
project_name = local.project_name
enable_db = true
dbs_name = module.infrastructure.dbs_name
dbs_admin_login = module.infrastructure.dbs_admin_login
dbs_admin_password = module.infrastructure.dbs_admin_password
}
If the whole configuration is applied with terraform apply, everything is fine.
But if only module.my_webapp is applied using terraform apply -target module.my_webapp, the firewall rule is missing, because it is not the target and the target doesn't directly reference it.
The rule is necessary nonetheless and should be applied every time the database server itself is applied.
Possible "Solution":
Add the firewall rules as output of the infrastructure module:
output "dbs_firewall_rules" {
value = concat(
azurerm_sql_firewall_rule.allow_azure_services,
azurerm_sql_firewall_rule.allow_office_ip
)
}
Then add this output as input to the webapp module:
variable "dbs_firewall_rules" {
description = "DB firewall rules required (used for the database in depends_on)"
type = list
}
And connect it in the main script:
module "my_webapp" {
...
dbs_firewall_rules = module.infrastructure.dbs_firewall_rules
...
}
Drawback: only resources of a single type can be put into the list. This is why I renamed it from dbs_dependencies to dbs_firewall_rules.
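A side note (my addition, not part of the original post): on Terraform 0.13 and later you can also put depends_on on the module call itself, which makes every resource in module.my_webapp wait for everything in module.infrastructure, including the firewall rules, and should therefore also pull the rules into a targeted apply of module.my_webapp. A minimal sketch, reusing the arguments from the example above:

module "my_webapp" {
  source              = "./webapp"
  location            = module.infrastructure.location
  resource_group_name = module.infrastructure.resource_group_name
  project_name        = local.project_name
  enable_db           = true
  dbs_name            = module.infrastructure.dbs_name
  dbs_admin_login     = module.infrastructure.dbs_admin_login
  dbs_admin_password  = module.infrastructure.dbs_admin_password

  # Terraform >= 0.13: force the whole infrastructure module (and with it the
  # firewall rules) to be created before anything in this module.
  depends_on = [module.infrastructure]
}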

If you have the firewall rules defined in the same .tf file as the SQL Server, they should be deployed, because referencing the server builds up the dependency graph correctly. I have, however, run into issues with this not working specifically for SQL Server firewall rules. What I eventually did was leverage the depends_on property on the firewall rule resources to make sure those are always created. That looks like this:
resource "azurerm_sql_firewall_rule" "test" {
name = "FirewallRule1"
resource_group_name = "${azurerm_resource_group.test.name}"
server_name = "${azurerm_sql_server.test.name}"
start_ip_address = "10.0.17.62"
end_ip_address = "10.0.17.62"
depends_on = [azurerm_sql_server.test]
}
resource "azurerm_sql_server" "test" {
name = "mysqlserver"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "${azurerm_resource_group.test.location}"
version = "12.0"
administrator_login = "mradministrator"
administrator_login_password = "thisIsDog11"
tags = {
environment = "production"
}
}
Then just add a depends_on like that to each rule you want to guarantee. If you define your rules outside your module, you should be able to turn this into an argument that you pass into the module to force the linking.


Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure data factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
Issue is ADF to DB connectivity, error:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, we get tenant="" in the ADF linked service JSON, which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, there is no tenant="" field in its JSON, and if we use that linked service in a dataflow/pipeline, communication from ADF to the DB works.
The expected behaviour is: if we don't provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, which then allows the connectivity to work.
I tried to reproduce the scenario in my environment:
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute from the linked service just after the Terraform deployment is done.
Also check this known issue, mentioned by @chgenzel in the terraform-provider-azurerm issues on GitHub.
[Screenshots omitted: the linked service for Azure SQL in the ADF UI, authenticating with a managed identity.]
Reference: azurerm_data_factory_linked_service_azure_sql_database | Terraform Registry

Error: waiting for creation of MsSql Database - Terraform failure

I have had to re-develop my pipeline for building out my infrastructure to use local agent pools and to change the Ubuntu (bash) code to Windows (PowerShell) code.
I am now in the position where I am building out my infrastructure and it is failing on the most basic of tasks.
I have created my SQL Server and that seems to be OK. I have also got my logging infrastructure done OK, but I am really struggling to build out a DB on my SQL Server.
At its most basic, here is my code. The server builds OK.
resource "azurerm_mssql_server" "main" {
name = local.sqlServerName
resource_group_name = local.resourceGroupName
location = var.location
version = "12.0"
minimum_tls_version = "1.2"
administrator_login = var.sql_administrator_login
administrator_login_password = var.sql_administrator_login_password
tags = var.tags
}
resource "azurerm_sql_active_directory_administrator" "main" {
server_name = azurerm_mssql_server.main.name
resource_group_name = local.resourceGroupName
login = local.sql_ad_login
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = local.object_id
}
resource "azurerm_sql_firewall_rule" "main" {
name = var.sql_firewall_rule
resource_group_name = local.resourceGroupName
server_name = azurerm_mssql_server.main.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_mssql_database" "main" {
name = "${local.raptSqlDatabaseName}-${var.environment}"
server_id = azurerm_mssql_server.main.id
min_capacity = 0.5
max_size_gb = 100
zone_redundant = false
collation = "SQL_Latin1_General_CP1_CI_AS"
sku_name = "GP_S_Gen5_2"
auto_pause_delay_in_minutes = 60
create_mode = "Default"
}
I get an error saying:
Error: waiting for creation of MsSql Database "xxx-xxx-xxx-xxx-Prod" (MsSql Server Name "xxx-xxx-xxx-prod" / Resource Group "rg-xxx-xxx-xxx"): Code="InternalServerError" Message="An unexpected error occured while processing the request.
Before I had to redesign it all in PowerShell (instead of bash) and use a local pool, I never had a problem and this worked fine.
I found the issue below, which reports the same error, but nothing else about it seems to match my case. It is odd because I can build the rest of my infra fine from the same main.tf file.
https://github.com/hashicorp/terraform-provider-azurerm/issues/13194
I am also noticing that Terraform output is not working:
Here is my output file:
output "sql_server_name" {
value = azurerm_mssql_server.main.fully_qualified_domain_name
}
output "sql_server_user" {
value = azurerm_mssql_server.main.administrator_login
}
output "sql_server_password" {
value = azurerm_mssql_server.main.administrator_login_password
sensitive = true
}
#output "cl_sql_database_name" {
# value = azurerm_mssql_database.cl.name
#}
#output "rapt_sql_database_name" {
# value = azurerm_mssql_database.rapt.name
#}
output "app_insights_instrumentation_key" {
value = azurerm_application_insights.main.instrumentation_key
}
Is there any chance this is linked?
Please use the latest Terraform version (1.1.13 at the time of writing) and azurerm provider (2.92.0), as there were bugs in a previous Azure API version that resulted in a 500 error code; these have been fixed in the recent versions, as mentioned in this GitHub issue. Also note that outputs are only stored after a successful apply; if the apply fails, you won't get them.
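For reference, a minimal sketch of pinning those versions (the constraints simply restate the versions mentioned above):

terraform {
  required_version = ">= 1.1.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.92.0"
    }
  }
}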
I tested the same code with the latest versions in both PowerShell and Bash, as below:
provider "azurerm" {
features {}
}
data "azurerm_client_config" "current" {}
locals {
resourceGroupName="ansumantest"
sqlServerName="ansumantestsql"
sql_ad_login="sqladmin"
object_id= data.azurerm_client_config.current.object_id
raptSqlDatabaseName="ansserverdb"
}
variable "location" {
default="eastus"
}
variable "sql_administrator_login" {
default="4dm1n157r470r"
}
variable "sql_administrator_login_password" {
default="4-v3ry-53cr37-p455w0rd"
}
variable "sql_firewall_rule" {
default="ansumantestfirewall"
}
variable "environment" {
default="test"
}
resource "azurerm_mssql_server" "main" {
name = local.sqlServerName
resource_group_name = local.resourceGroupName
location = var.location
version = "12.0"
minimum_tls_version = "1.2"
administrator_login = var.sql_administrator_login
administrator_login_password = var.sql_administrator_login_password
}
resource "azurerm_sql_active_directory_administrator" "main" {
server_name = azurerm_mssql_server.main.name
resource_group_name = local.resourceGroupName
login = local.sql_ad_login
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = local.object_id
}
resource "azurerm_sql_firewall_rule" "main" {
name = var.sql_firewall_rule
resource_group_name = local.resourceGroupName
server_name = azurerm_mssql_server.main.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_mssql_database" "main" {
name = "${local.raptSqlDatabaseName}-${var.environment}"
server_id = azurerm_mssql_server.main.id
min_capacity = 0.5
max_size_gb = 100
zone_redundant = false
collation = "SQL_Latin1_General_CP1_CI_AS"
sku_name = "GP_S_Gen5_2"
auto_pause_delay_in_minutes = 60
create_mode = "Default"
}
Output.tf
output "sql_server_name" {
value = azurerm_mssql_server.main.fully_qualified_domain_name
}
output "sql_server_user" {
value = azurerm_mssql_server.main.administrator_login
}
output "sql_server_password" {
value = azurerm_mssql_server.main.administrator_login_password
sensitive = true
}
output "cl_sql_database_name" {
value = azurerm_mssql_database.main.name
}
Output :
Updating Terraform to the latest version didn't really help...
I turned on logging with TF_LOG (Terraform's logging environment variable).
Watching the activity logs in the resource group, I noticed it was building the DB server, then there was an error with AAD, and the DB server build failed after that. So I removed the AAD work from my main.tf, re-ran the pipeline, and then it worked fine. Phew...
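For what it's worth, a possible alternative to the separate azurerm_sql_active_directory_administrator resource (a sketch, assuming a provider version that supports it) is the in-line azuread_administrator block on azurerm_mssql_server, which makes the AAD admin part of the server resource itself instead of a second resource that can fail after the server is created:

resource "azurerm_mssql_server" "main" {
  name                         = local.sqlServerName
  resource_group_name          = local.resourceGroupName
  location                     = var.location
  version                      = "12.0"
  minimum_tls_version          = "1.2"
  administrator_login          = var.sql_administrator_login
  administrator_login_password = var.sql_administrator_login_password

  # Same values the standalone AAD administrator resource used above.
  azuread_administrator {
    login_username = local.sql_ad_login
    object_id      = local.object_id
    tenant_id      = data.azurerm_client_config.current.tenant_id
  }
}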

Using terraform how do I create an azure sql database from a backup

Using the default example on the terraform site I can easily create a database but how do I create a new database by restoring a backup?
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "examplesa"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_mssql_server" "example" {
name = "example-sqlserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_mssql_database" "test" {
name = "acctest-db-d"
server_id = azurerm_mssql_server.example.id
collation = "SQL_Latin1_General_CP1_CI_AS"
license_type = "LicenseIncluded"
max_size_gb = 4
read_scale = true
sku_name = "BC_Gen5_2"
zone_redundant = true
create_mode = "RestoreExternalBackup" <-- WHAT ELSE DO I DO?
extended_auditing_policy {
storage_endpoint = azurerm_storage_account.example.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.example.primary_access_key
storage_account_access_key_is_secondary = true
retention_in_days = 6
}
tags = {
foo = "bar"
}
}
In the documentation they mention a create_mode "RestoreExternalBackup" option but provide no example on how to reference the backup - mine is stored in an azure storage container.
Edit: The mention of "RestoreExternalBackup" was more about my lack of understanding. What I meant to ask was how do I restore/create a database from a bacpac file stored in a Storage Account
Following the blog Deploying Azure SQL Database Bacpac and Terraform by John Q. Martin
You can include the bacpac as the source for the database created in Azure.
First, set up the firewall on the Azure SQL Server to prevent failures during deployment due to blob storage access issues. To ensure this, enable "Allow Azure services and resources to access this server", which allows the two Azure services to communicate.
Setting the Azure SQL Server Firewall
Set both start_ip_address and end_ip_address to 0.0.0.0; Azure interprets this as a firewall rule that allows Azure services.
resource "azurerm_sql_firewall_rule" "allowAzureServices" {
name = "Allow_Azure_Services"
resource_group_name = azurerm_resource_group.example.name
server_name = azurerm_sql_server.example.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
Defining the Database Resource
We need to use the azurerm_sql_database resource, because the deployment of a bacpac is only supported through this resource type.
The resource definition here is composed of two main sections: the first gives the details of where the database needs to go, and the second is a sub-block that defines the bacpac source. Here we need to put in the URI for the bacpac file and the storage key; in this case we are using the SAS token as the key to allow access to the bacpac.
We also need to provide the username and password for the server we are creating to allow the import to work because it needs to have authorisation to the Azure SQL Server to work.
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "examplesa"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_sql_server" "example" {
name = "myexamplesqlserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_sql_firewall_rule" "allowAzureServices" {
name = "Allow_Azure_Services"
resource_group_name = azurerm_resource_group.example.name
server_name = azurerm_sql_server.example.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_sql_database" "appdb01" {
depends_on = [azurerm_sql_firewall_rule.allowAzureServices]
name = "AzSqlDbName"
resource_group_name = azurerm_sql_server.example.resource_group_name
location = azurerm_sql_server.example.location
server_name = azurerm_sql_server.example.name
collation = "SQL_Latin1_General_CP1_CI_AS"
requested_service_objective_name = "BC_Gen5_2"
max_size_gb = 4
read_scale = true
zone_redundant = true
create_mode = "Default"
import {
storage_uri = "https://examplesa.blob.core.windows.net/source/Source.bacpac"
storage_key = "gSKjBfoK4toNAWXUdhe6U7YHqBgCBPsvoDKTlh2xlqUQeDcuCVKcU+uwhq61AkQaPIbNnqZbPmYwIRkXp3OzLQ=="
storage_key_type = "StorageAccessKey"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
authentication_type = "SQL"
operation_mode = "Import"
}
extended_auditing_policy {
storage_endpoint = azurerm_storage_account.example.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.example.primary_access_key
storage_account_access_key_is_secondary = true
retention_in_days = 6
}
tags = {
foo = "bar"
}
}
Note:
The extended_auditing_policy block has been moved to azurerm_mssql_server_extended_auditing_policy and azurerm_mssql_database_extended_auditing_policy. This block will be removed in version 3.0 of the provider.
requested_service_objective_name - (Optional) The service objective name for the database. Valid values depend on edition and location and may include S0, S1, S2, S3, P1, P2, P4, P6, P11 and ElasticPool. You can list the available names with the cli: shell az sql db list-editions -l westus -o table. For further information please see Azure CLI - az sql db.
And import supports the following:
storage_uri - (Required) Specifies the blob URI of the .bacpac file.
storage_key - (Required) Specifies the access key for the storage account.
storage_key_type - (Required) Specifies the type of access key for the storage account. Valid values are StorageAccessKey or SharedAccessKey.
administrator_login - (Required) Specifies the name of the SQL administrator.
administrator_login_password - (Required) Specifies the password of the SQL administrator.
authentication_type - (Required) Specifies the type of authentication used to access the server. Valid values are SQL or ADPassword.
operation_mode - (Optional) Specifies the type of import operation being performed. The only allowable value is Import.
Alternatively, if you want to continue using azurerm_mssql_database, you would need to deploy an empty database and then import the bacpac via SqlPackage (which I haven't tried yet).
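A rough, untested sketch of that alternative (it assumes SqlPackage is installed on the machine running Terraform; the bacpac path is a placeholder, and the credentials reuse the example values above):

resource "azurerm_mssql_database" "from_bacpac" {
  name      = "acctest-db-d"
  server_id = azurerm_mssql_server.example.id
  collation = "SQL_Latin1_General_CP1_CI_AS"
  sku_name  = "BC_Gen5_2"
}

# Import the bacpac into the freshly created database.
# Note: SqlPackage expects the target database to be empty (or not exist yet).
resource "null_resource" "import_bacpac" {
  depends_on = [azurerm_mssql_database.from_bacpac]

  provisioner "local-exec" {
    command = "sqlpackage /Action:Import /SourceFile:./Source.bacpac /TargetServerName:${azurerm_mssql_server.example.fully_qualified_domain_name} /TargetDatabaseName:${azurerm_mssql_database.from_bacpac.name} /TargetUser:4dm1n157r470r /TargetPassword:4-v3ry-53cr37-p455w0rd"
  }
}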

How do we configure plan for SQL database for Azure using Terraform?

I am using Terraform to configure the infrastructure on Azure. I am creating a SQL database using a Terraform template and I am able to create the database, but by default the Standard plan is set for the database. Is it possible to set the Basic plan using the template?
This is the template that I am using:
resource "azurerm_resource_group" "test" {
name = "Test-ResourceGroup"
location = "Central India"
}
resource "azurerm_sql_server" "test" {
name = "name-test-dev"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "Central India"
version = "12.0"
administrator_login = "test-admin"
administrator_login_password = "test-password"
}
resource "azurerm_sql_database" "test" {
name = "test-dev"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "Central India"
server_name = "${azurerm_sql_server.test.name}"
}
Set the edition argument on the database to Basic:
resource "azurerm_sql_database" "test" {
name = "test-dev"
edition = "Basic"
resource_group_name = "${azurerm_resource_group.test.name}"
location = "Central India"
server_name = "${azurerm_sql_server.test.name}"
}
Hope this helps.
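As an aside (my addition, not part of the original answer): on newer provider versions where azurerm_sql_database is deprecated in favour of azurerm_mssql_database, the rough equivalent is the sku_name argument; the server reference below assumes an azurerm_mssql_server named test exists:

resource "azurerm_mssql_database" "test" {
  name      = "test-dev"
  server_id = azurerm_mssql_server.test.id
  sku_name  = "Basic"
}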
Create an Azure SQL Server Database with a SQL script initialization
https://registry.terraform.io/modules/metadevpro/sqlserver-seed/azurerm/1.0.0
module "azurerm_sql_sever_init" {
source = "Azure/database-seed/azurerm"
location = "westeurope"
resource_group = "myresourcegroup007"
db_server_fqdn = "${azurerm_sql_server.db1.fully_qualified_domain_name}"
sql_admin_username = "${azurerm_sql_server.db1.administrator_login}"
sql_admin_password = "${azurerm_sql_server.db1.administrator_login_password}"
db_name = "mydatabase"
init_script_file = "mydatabase.init.sql"
log_file = "mydatabase.init.log"
tags = {
environment = "qa"
project = "acme"
provisioner = "terraform"
}
}

Create custom domain for app services via terraform

I am creating Azure App Services via Terraform and following their documentation located at this site:
https://www.terraform.io/docs/providers/azurerm/r/app_service.html
Here is the snippet of the Terraform script:
resource "azurerm_app_service" "app" {
name = "app-name"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
app_service_plan_id = "ommitted"
site_config {
java_version = "1.8"
java_container = "TOMCAT"
java_container_version = "8.5"
}
}
I need a subdomain as well for my App Service, for which I am not able to find any help in Terraform.
As of now the URL for the App Service is:
https://abc.azure-custom-domain.cloud
and I want my URL to be:
https://*.abc.azure-custom-domain.cloud
I know this can be done via the portal, but is there any way we can do it via Terraform?
This is now possible using app_service_custom_hostname_binding (since PR#1087 on 6th April 2018)
resource "azurerm_app_service_custom_hostname_binding" "test" {
hostname = "www.mywebsite.com"
app_service_name = "${azurerm_app_service.test.name}"
resource_group_name = "${azurerm_resource_group.test.name}"
}
This is not possible. You can check the link you provided: if a parameter is not listed there, it is not supported by Terraform. You would need to do it in the Azure portal.
I have found it to be a tiny bit more complicated...
DNS Zone (then set name servers at the registrar)
App Service
Domain verification TXT record
CNAME record
Hostname binding
resource "azurerm_dns_zone" "dns-zone" {
name = var.azure_dns_zone
resource_group_name = var.azure_resource_group_name
}
resource "azurerm_linux_web_app" "app-service" {
name = "some-service"
resource_group_name = var.azure_resource_group_name
location = var.azure_region
service_plan_id = "some-plan"
site_config {}
}
resource "azurerm_dns_txt_record" "domain-verification" {
name = "asuid.api.domain.com"
zone_name = var.azure_dns_zone
resource_group_name = var.azure_resource_group_name
ttl = 300
record {
value = azurerm_linux_web_app.app-service.custom_domain_verification_id
}
}
resource "azurerm_dns_cname_record" "cname-record" {
name = "domain.com"
zone_name = azurerm_dns_zone.dns-zone.name
resource_group_name = var.azure_resource_group_name
ttl = 300
record = azurerm_linux_web_app.app-service.default_hostname
depends_on = [azurerm_dns_txt_record.domain-verification]
}
resource "azurerm_app_service_custom_hostname_binding" "hostname-binding" {
hostname = "api.domain.com"
app_service_name = azurerm_linux_web_app.app-service.name
resource_group_name = var.azure_resource_group_name
depends_on = [azurerm_dns_cname_record.cname-record]
}
I had the same issue and had to use PowerShell to overcome it in the short term. Maybe you could get Terraform to trigger the PowerShell script... I haven't tried that yet!
PowerShell as follows:
$fqdn="www.yourwebsite.com"
$webappname="yourwebsite.azurewebsites.net"
Set-AzureRmWebApp -Name <YourAppServiceName> -ResourceGroupName <TheResourceGroupOfYourAppService> -HostNames @($fqdn,$webappname)
IMPORTANT: make sure you configure DNS first (i.e. the CNAME or TXT record for the custom domain you're trying to set), otherwise PowerShell and even the Azure portal's manual method will fail.
