Unable to create Service Bus Authorization Rule in Azure

We are using Terraform version 0.12.19 and azurerm provider version 2.10.0 to deploy a Service Bus namespace along with its queues and authorization rules. When we ran terraform apply, it created the Service Bus namespace and the queue, but it threw the error below while creating the authorization rules.
When we checked the Azure portal, however, the authorization rules were present, and we could also find entries for both resources in the tf state file, where they had a Status parameter of "tainted". We ran apply again to see if it would recreate/replace the existing resources, but it failed with the same error. We are now unable to proceed: even running a plan to create new resources fails at this point.
We also tried untainting the resources and running apply again, but we still hit the same issue even though the resources no longer carry the tainted status in the tf state. Can you please help us resolve this? (We can't move to a newer version of the Terraform CLI, as many modules depend on it and upgrading would impact our production deployments as well.)
Error: Error making Read request on Azure ServiceBus Queue Authorization Rule "" (Queue "sample-check-queue" / Namespace "sample-check-bus" / Resource Group "My-RG"): servicebus.QueuesClient#GetAuthorizationRule: Invalid input: autorest/validation: validation failed: parameter=authorizationRuleName constraint=MinLength value="" details: value length must be greater than or equal to 1
azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr: Refreshing state... [id=/subscriptions//resourcegroups/My-RG/providers/Microsoft.ServiceBus/namespaces/sample-check-bus/queues/sample-check-queue/authorizationrules/lsr]
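For reference, the untaint step mentioned above would have looked like this (a sketch; the resource addresses match the configuration below):
terraform untaint azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr
terraform untaint azurerm_servicebus_queue_authorization_rule.que-sample-check-AsyncReportBG-AsncRprt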
Below is the service_bus.tf file code:
provider "azurerm" {
version = "=2.10.0"
features {}
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = "My-RG"
location = "West Europe"
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 5
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
name = "lsr"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}

I tried the Terraform code below to create the authorization rules and could create them successfully.
I followed the azurerm_servicebus_queue_authorization_rule page on the Terraform Registry, using the latest version of the hashicorp/azurerm provider.
This may be related to the queue_name argument: in the 3.x versions of the provider, this argument changed to queue_id.
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "example" {
name = "xxxx"
location = "xx"
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
#resource_group_name = "My-RG"
namespace_id = azurerm_servicebus_namespace.service_bus.id
#namespace_name = azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 5
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr"
{
name = "lsr"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check- AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
The authorization rules were created without error:
Please also try changing the name of the "lsr" authorization rule to something longer, and try creating one rule at a time in your case.
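For example, a longer rule name could look like this (the name "listen-send-rule" is only an illustrative replacement for "lsr"):
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
  name     = "listen-send-rule" # hypothetical longer name replacing "lsr"
  queue_id = azurerm_servicebus_queue.que-sample-check.id
  listen   = true
  send     = true
  manage   = false
}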

Thanks all for your inputs and suggestions.
The code is working fine now with terraform provider version 2.56.0 and Terraform CLI version 0.12.19. Please let me know if there are any concerns.
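For reference, pinning the working provider version is the same shape as the original provider block above:
provider "azurerm" {
  version = "=2.56.0"
  features {}
}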

Related

Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure data factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we created this linked service using Terraform, the ADF linked-service JSON contains tenant="", which we suspect is causing the error above.
When we created the same linked service directly in the ADF UI, there is no tenant="" field in its JSON, and if we use that linked service in a dataflow/pipeline, communication works from ADF to the DB.
The expected behaviour is that if we don't provide the tenant_id parameter in the TF code, the JSON should likewise not contain tenant="".
I tried to reproduce the scenario in my environment.
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform apply
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute in the data factory just after the Terraform deployment is done.
Please also check this known issue mentioned by @chgenzel in the terraform-provider-azurerm issues on GitHub.
ADF: managed identity, linked service: Azure SQL.
Reference: data_factory_linked_service_azure_sql_database | Terraform Registry

Update existing Azure App Service in Terraform

I would like to update my existing Azure App Service in Terraform by adding a backup to this App Service.
For now it looks like this:
data "azurerm_app_service_plan" "example" {
name = "MyUniqueServicePlan"
resource_group_name = "example-resources"
}
resource "azurerm_app_service" "example" {
name = "MyUniqueWebAppName"
location = "West Europe"
resource_group_name = "example-resources"
app_service_plan_id = data.azurerm_app_service_plan.example.id
connection_string {
name = "myConectionString"
type = "SQLServer"
value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
}
backup {
name = "MyBackupName"
enabled = true
storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
schedule {
frequency_interval = 30
frequency_unit = "Day"
keep_at_least_one_backup = true
retention_period_in_days = 10
start_time = "2022-08-31T07:11:56.52Z"
}
}
}
But when I run it I get the error: A resource with the ID "........ /MyUniqueWebAppName" already exists - to be managed via Terraform this resource needs to be imported into the State.
How can I point Terraform at an existing Azure App Service and add a backup with the same schedule as in my template?
Before you can modify your existing resources with Terraform, you must import them into the Terraform state. For this you use the terraform import command.
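A minimal sketch of the import (the subscription ID is a placeholder; use your App Service's full resource ID):
terraform import azurerm_app_service.example /subscriptions/<subscription-id>/resourceGroups/example-resources/providers/Microsoft.Web/sites/MyUniqueWebAppName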
data "azurerm_resource_group" "example" {
name = "<give rg name existing one>"
}
data "azurerm_app_service_plan" "example" {
name = "MyUniqueServicePlan"
resource_group_name = data.azurerm_resource_group.example.name
}
data "azurerm_app_service" "example" {
name = "MyUniqueWebAppName"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
app_service_plan_id = data.azurerm_app_service_plan.example.id
connection_string {
name = "myConectionString"
type = "SQLServer"
value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
}
backup {
name = "MyBackupName"
enabled = true
storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
schedule {
frequency_interval = 30
frequency_unit = "Day"
keep_at_least_one_backup = true
retention_period_in_days = 10
start_time = "2022-08-31T07:11:56.52Z"
}
}
}
No need to use the import command; use this code for your reference.
Just give the name of your existing resource group in the resource group data block.

Az Function Elastic Premium - Azure Functions runtime is unreachable

I have an Azure Function that I need to run in an Elastic Premium plan. After deploying, I see the following error:
Azure Functions runtime is unreachable
I've tried to solve it by following the Microsoft documentation, with no luck.
Here are some notes on what I've tried:
We checked that the Storage account is created
The Function's subnet already has the service endpoint for the storage account
VNet integration is already enabled on the Function and its subnet is already added to the Storage firewall
We added the required properties in the Function settings:
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = dynamically created (connection string to the Storage account)
WEBSITE_CONTENTOVERVNET = 1
WEBSITE_CONTENTSHARE = dynamically created
WEBSITE_VNET_ROUTE_ALL = 1
Here is the documentation link.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-recover-storage-account
Everything was working fine when I was using the Premium (P1v2) plan; the error began when I moved to Elastic Premium (EP1).
I am deploying it using Terraform.
Here is a TF code example we are using to deploy:
locals {
app_settings = {
FUNCTIONS_WORKER_RUNTIME = "python"
FUNCTION_APP_EDIT_MODE = "readonly"
WEBSITE_VNET_ROUTE_ALL = "1"
WEBSITE_CONTENTOVERVNET = "1"
}
}
module "az_service_plan_sample" {
source = "source module"
serviceplan_name = "planname"
resource_group_name = "RG Name"
region = "East US 2"
tier = "ElasticPremium"
size = "EP1"
kind = "elastic"
capacity = 40
per_site_scaling = false
depends_on = [
module.storage_account
]
}
module "storage_account_sample" {
source = "source module"
resource_group_name = "RG Name"
location = "East US 2"
name = "saname"
storage_account_replication_type = "GRS"
subnet_ids = [subnet_ids]
}
module "sample" {
source = "source module"
azure_function_name = "functionname"
resource_group_name = "RG Name"
storage_account_name = module.storage_account.storage-account-name
storage_account_access_key = module.storage_account.storage-account-primary-key
region = "East US 2"
subnet_id = subnet_ids
app_service_id = module.az_service_plan.service_plan_id
scope_role_storage_account = module.storage_account.storage-account-id
azure_function_version = "~4"
app_settings = local.app_settings
key_vault_reference_identity_id = azurerm_user_assigned_identity.az_func.id
pre_warmed_instance_count = 2
identity_type = "UserAssigned"
user_assigned_identityies = [{
id = azurerm_user_assigned_identity.az_func.id
principal_id = azurerm_user_assigned_identity.az_func.principal_id
}]
depends_on = [
module.az_service_plan_sample,
module.storage_account_sample,
azurerm_user_assigned_identity.az_func,
]
}
AFAIK, there is no single specific reason for "Azure Functions runtime is unreachable"; please check the workaround below.
We tried to create a Function App using an Elastic Premium plan and it is working fine at our end.
Please make sure that you have configured WEBSITE_CONTENTAZUREFILECONNECTIONSTRING with the same value as AzureWebJobsStorage, then try to stop/start the Function App.
Also try setting pre_warmed_instance_count = 1 instead of 2, as mentioned in this Microsoft documentation:
The default pre-warmed instance count is 1, and for most scenarios this value should remain as 1.
For more information, please refer to this article: Azure Lessons - Azure Function Runtime Unreachable.
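A minimal sketch of that settings change in Terraform locals (the connection string is a placeholder in the standard Azure Storage format; substitute your real account name and key):
locals {
  # Placeholder connection string; replace AccountName/AccountKey with your own.
  storage_connection = "DefaultEndpointsProtocol=https;AccountName=saname;AccountKey=<key>;EndpointSuffix=core.windows.net"

  app_settings = {
    FUNCTIONS_WORKER_RUNTIME                 = "python"
    WEBSITE_CONTENTOVERVNET                  = "1"
    # Same value for both settings, per the suggestion above.
    AzureWebJobsStorage                      = local.storage_connection
    WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = local.storage_connection
  }
}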
When you use a Function on the Elastic Premium plan type with VNet integration, you need to add one more property, vnet_route_all_enabled, to route outbound traffic from your Azure Function through the VNet. You also need to first create a file share in your storage account; the name of that share becomes the value of the WEBSITE_CONTENTSHARE variable in your application settings.
You can check this doc to be sure: https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-vnet
Below is my suggested code:
locals {
app_settings = {
FUNCTIONS_WORKER_RUNTIME = "python"
FUNCTION_APP_EDIT_MODE = "readonly"
WEBSITE_VNET_ROUTE_ALL = "1"
WEBSITE_CONTENTOVERVNET = "1"
WEBSITE_CONTENTSHARE = "file-function"
}
}
module "az_service_plan_sample" {
source = "source module"
serviceplan_name = "planname"
resource_group_name = "RG Name"
region = "East US 2"
tier = "ElasticPremium"
size = "EP1"
kind = "elastic"
capacity = 40
per_site_scaling = false
depends_on = [
module.storage_account
]
}
module "storage_account_sample" {
source = "source module"
resource_group_name = "RG Name"
location = "East US 2"
name = "saname"
storage_account_replication_type = "GRS"
subnet_ids = [subnet_ids]
}
resource "azurerm_storage_share" "share_file_ingest_function" {
name = "file-function"
storage_account_name = module.storage_account_sample.name
depends_on = [
module.storage_account_sample
]
}
module "sample" {
source = "source module"
azure_function_name = "functionname"
resource_group_name = "RG Name"
storage_account_name = module.storage_account.storage-account-name
storage_account_access_key = module.storage_account.storage-account-primary-key
region = "East US 2"
subnet_id = subnet_ids
app_service_id = module.az_service_plan.service_plan_id
scope_role_storage_account = module.storage_account.storage-account-id
azure_function_version = "~4"
app_settings = local.app_settings
key_vault_reference_identity_id = azurerm_user_assigned_identity.az_func.id
pre_warmed_instance_count = 2
vnet_route_all_enabled = true
identity_type = "UserAssigned"
user_assigned_identityies = [{
id = azurerm_user_assigned_identity.az_func.id
principal_id = azurerm_user_assigned_identity.az_func.principal_id
}]
depends_on = [
module.az_service_plan_sample,
module.storage_account_sample,
azurerm_user_assigned_identity.az_func,
]
}

Error: waiting for creation of MsSql Database - Terraform failure

I have had to re-develop my pipeline for building out my infrastructure to use local agent pools and to change the Ubuntu (bash) code to Windows (PowerShell) code.
I am now in the position where I am building out my infrastructure and it is failing on the most basic of tasks.
I have created my SQL Server and that seems to be OK. I have also got my logging system infra done OK, but I am really struggling to build out a DB on my SQL Server.
At its most basic, here is my code. The server builds OK.
resource "azurerm_mssql_server" "main" {
name = local.sqlServerName
resource_group_name = local.resourceGroupName
location = var.location
version = "12.0"
minimum_tls_version = "1.2"
administrator_login = var.sql_administrator_login
administrator_login_password = var.sql_administrator_login_password
tags = var.tags
}
resource "azurerm_sql_active_directory_administrator" "main" {
server_name = azurerm_mssql_server.main.name
resource_group_name = local.resourceGroupName
login = local.sql_ad_login
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = local.object_id
}
resource "azurerm_sql_firewall_rule" "main" {
name = var.sql_firewall_rule
resource_group_name = local.resourceGroupName
server_name = azurerm_mssql_server.main.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_mssql_database" "main" {
name = "${local.raptSqlDatabaseName}-${var.environment}"
server_id = azurerm_mssql_server.main.id
min_capacity = 0.5
max_size_gb = 100
zone_redundant = false
collation = "SQL_Latin1_General_CP1_CI_AS"
sku_name = "GP_S_Gen5_2"
auto_pause_delay_in_minutes = 60
create_mode = "Default"
}
I get an error saying:
Error: waiting for creation of MsSql Database "xxx-xxx-xxx-xxx-Prod" (MsSql Server Name "xxx-xxx-xxx-prod" / Resource Group "rg-xxx-xxx-xxx"): Code="InternalServerError" Message="An unexpected error occured while processing the request.
Before I had to redesign it all in PowerShell instead of bash and use a local pool, I never had a problem and this worked fine.
I found the issue below, which reports the same error, but nothing else about it seems to match. It is odd because I can build my other infra fine in the same main.tf file.
https://github.com/hashicorp/terraform-provider-azurerm/issues/13194
I am also noticing that Terraform output is not working:
Here is my output file:
output "sql_server_name" {
value = azurerm_mssql_server.main.fully_qualified_domain_name
}
output "sql_server_user" {
value = azurerm_mssql_server.main.administrator_login
}
output "sql_server_password" {
value = azurerm_mssql_server.main.administrator_login_password
sensitive = true
}
#output "cl_sql_database_name" {
# value = azurerm_mssql_database.cl.name
#}
#output "rapt_sql_database_name" {
# value = azurerm_mssql_database.rapt.name
#}
output "app_insights_instrumentation_key" {
value = azurerm_application_insights.main.instrumentation_key
}
Is there any chance this is linked?
Please use the latest Terraform version, i.e. 1.1.13, and azurerm provider, i.e. 2.92.0, as there were some bugs in the previous Azure API version that resulted in a 500 error code; this has been fixed in the recent versions, as mentioned in this GitHub issue. Also note that you only get the output after a successful apply; it won't get stored if there is an error.
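A minimal version pin matching those suggestions (a sketch; adjust the constraints to your modules):
terraform {
  required_version = ">= 1.1.13"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.92.0"
    }
  }
}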
I tested the same code with latest versions on both PowerShell and Bash as below :
provider "azurerm" {
features {}
}
data "azurerm_client_config" "current" {}
locals {
resourceGroupName="ansumantest"
sqlServerName="ansumantestsql"
sql_ad_login="sqladmin"
object_id= data.azurerm_client_config.current.object_id
raptSqlDatabaseName="ansserverdb"
}
variable "location" {
default="eastus"
}
variable "sql_administrator_login" {
default="4dm1n157r470r"
}
variable "sql_administrator_login_password" {
default="4-v3ry-53cr37-p455w0rd"
}
variable "sql_firewall_rule" {
default="ansumantestfirewall"
}
variable "environment" {
default="test"
}
resource "azurerm_mssql_server" "main" {
name = local.sqlServerName
resource_group_name = local.resourceGroupName
location = var.location
version = "12.0"
minimum_tls_version = "1.2"
administrator_login = var.sql_administrator_login
administrator_login_password = var.sql_administrator_login_password
}
resource "azurerm_sql_active_directory_administrator" "main" {
server_name = azurerm_mssql_server.main.name
resource_group_name = local.resourceGroupName
login = local.sql_ad_login
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = local.object_id
}
resource "azurerm_sql_firewall_rule" "main" {
name = var.sql_firewall_rule
resource_group_name = local.resourceGroupName
server_name = azurerm_mssql_server.main.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_mssql_database" "main" {
name = "${local.raptSqlDatabaseName}-${var.environment}"
server_id = azurerm_mssql_server.main.id
min_capacity = 0.5
max_size_gb = 100
zone_redundant = false
collation = "SQL_Latin1_General_CP1_CI_AS"
sku_name = "GP_S_Gen5_2"
auto_pause_delay_in_minutes = 60
create_mode = "Default"
}
Output.tf
output "sql_server_name" {
value = azurerm_mssql_server.main.fully_qualified_domain_name
}
output "sql_server_user" {
value = azurerm_mssql_server.main.administrator_login
}
output "sql_server_password" {
value = azurerm_mssql_server.main.administrator_login_password
sensitive = true
}
output "cl_sql_database_name" {
value = azurerm_mssql_database.main.name
}
Output:
Updating Terraform to the latest version didn't really help...
I turned on logging with TF_LOG (an environment variable in Terraform).
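TF_LOG is set as an environment variable before running the Terraform CLI, e.g.:
TF_LOG=DEBUG terraform apply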
Watching the activity logs in the resource group, I noticed it was building the DB server, then there was an error with AAD, and the DB server build failed after that. So I removed the AAD work from my main.tf, reran the pipeline, and it worked fine. Phew...

AzureAD // Terraform Cycle Error Undefined

I want to create an app registration with the AzureAD provider and use the application_id output for a configuration in my App Service. Every time I run plan, I get an error message. If I remove the configuration line, everything works fine.
I tried to put the app registration in a module and work with the output, but I got the same error.
Does anyone have any advice?
//Azure App Registration
resource "azuread_application" "appregistration" {
name = "${var.state}Site-${var.typ}-ar"
reply_urls = ["https://${azurerm_app_service.appservice.default_site_hostname}/signin-callback"]
available_to_other_tenants = false
oauth2_allow_implicit_flow = true
}
resource "azuread_application_password" "AppRegistrationPwd" {
application_object_id = "${azuread_application.appregistration.id}"
value = "SOMECODE"
end_date = "2020-01-01T01:02:03Z"
}
resource "azuread_service_principal" "serviceprincipal" {
application_id = "${azuread_application.appregistration.application_id}"
app_role_assignment_required = false
}
App Service
resource "azurerm_app_service" "appservice" {
name = "${var.state}-Site-${var.typ}-as"
location = "${var.location}"
resource_group_name = "${azurerm_app_service_plan.serviceplan.resource_group_name}"
app_service_plan_id = "${azurerm_app_service_plan.serviceplan.id}"
site_config {
dotnet_framework_version = "v4.0"
scm_type = "LocalGit"
}
app_settings = {
"AzureAd:ClientId" = "${azuread_service_principal.serviceprincipal.application_id}"
}
}
Error:
Error: Cycle: module.devcentralhub.azuread_service_principal.serviceprincipal, module.devcentralhub.azurerm_app_service.appservice, module.devcentralhub.azuread_application.appregistration
Your understanding in your comment is right: the resource azurerm_app_service needs the application_id from the resource azuread_service_principal, while the resource azuread_application needs the App Service hostname in its reply_urls, so it causes the cycle.
To break the cycle, you could replace ${azurerm_app_service.appservice.default_site_hostname} with ${var.state}-Site-${var.typ}-as.azurewebsites.net, since the two values are usually the same.
That is, change to reply_urls = ["https://${var.state}-Site-${var.typ}-as.azurewebsites.net/signin-callback"] in your code.
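Applied to the block above, the adjusted app registration looks like this:
resource "azuread_application" "appregistration" {
  name                       = "${var.state}Site-${var.typ}-ar"
  # Hostname built from the same naming convention as the App Service,
  # instead of referencing the resource, which breaks the dependency cycle.
  reply_urls                 = ["https://${var.state}-Site-${var.typ}-as.azurewebsites.net/signin-callback"]
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = true
}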
