I want to create an App Registration with the AzureAD provider and use the application ID output for a configuration value in my App Service. Every time I run plan, I get an error message. If I remove the configuration line, everything works fine.
I tried to put the App Registration in a module and work with its output, but I got the same error.
Does anyone have any advice?
//Azure App Registration
resource "azuread_application" "appregistration" {
  name                       = "${var.state}Site-${var.typ}-ar"
  reply_urls                 = ["https://${azurerm_app_service.appservice.default_site_hostname}/signin-callback"]
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = true
}

resource "azuread_application_password" "AppRegistrationPwd" {
  application_object_id = "${azuread_application.appregistration.id}"
  value                 = "SOMECODE"
  end_date              = "2020-01-01T01:02:03Z"
}

resource "azuread_service_principal" "serviceprincipal" {
  application_id               = "${azuread_application.appregistration.application_id}"
  app_role_assignment_required = false
}
App Service:
resource "azurerm_app_service" "appservice" {
name = "${var.state}-Site-${var.typ}-as"
location = "${var.location}"
resource_group_name = "${azurerm_app_service_plan.serviceplan.resource_group_name}"
app_service_plan_id = "${azurerm_app_service_plan.serviceplan.id}"
site_config {
dotnet_framework_version = "v4.0"
scm_type = "LocalGit"
}
app_settings = {
"AzureAd:ClientId" = "${azuread_service_principal.serviceprincipal.application_id}"
}
}
Error:
Error: Cycle: module.devcentralhub.azuread_service_principal.serviceprincipal, module.devcentralhub.azurerm_app_service.appservice, module.devcentralhub.azuread_application.appregistration
Your understanding in your comment is right: the azurerm_app_service resource needs the application_id from azuread_service_principal, while the azuread_application resource needs the app service hostname in its reply_urls, so the three resources form a cycle.
To break the cycle, you can write ${azurerm_app_service.appservice.default_site_hostname} as ${var.state}-Site-${var.typ}-as.azurewebsites.net instead, since the default hostname is simply the app service name followed by azurewebsites.net.
Change the line to reply_urls = ["https://${var.state}-Site-${var.typ}-as.azurewebsites.net/signin-callback"] in your code.
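For example, the azuread_application resource from the question would then build the redirect hostname from the same variables the App Service name uses, so it no longer references azurerm_app_service at all. A sketch based on the code above, assuming the default *.azurewebsites.net domain:

resource "azuread_application" "appregistration" {
  name = "${var.state}Site-${var.typ}-ar"

  # Built from the same variables as the app service name, so there is no
  # dependency on azurerm_app_service.appservice and the cycle is broken.
  reply_urls = ["https://${var.state}-Site-${var.typ}-as.azurewebsites.net/signin-callback"]

  available_to_other_tenants = false
  oauth2_allow_implicit_flow = true
}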
I have been trying to figure out a way to prepare a Terraform template for my App Service / Azure Function that connects it to Application Insights while creating them through Terraform. Well, it worked, BUT Application Insights shows:
Migrate this resource to Workspace-based Application Insights to gain support for all of the capabilities of Log Analytics, including Customer-Managed Keys and Commitment Tiers. Click here to learn more and migrate in a few clicks.
How do I achieve this from Terraform? The Terraform documentation page makes no mention of such a setup. I appreciate your help on this.
Here is the Terraform code for the Azure Function:
resource "azurerm_linux_function_app" "t_funcapp" {
name = "t-function-app"
location = local.resource_location
resource_group_name = local.resource_group_name
service_plan_id = azurerm_service_plan.t_app_service_plan.id
storage_account_name = azurerm_storage_account.t_funcstorage.name
storage_account_access_key = azurerm_storage_account.t_funcstorage.primary_access_key
site_config {
application_stack {
java_version = "11"
}
remote_debugging_enabled = false
ftps_state = "AllAllowed"
}
app_settings = {
APPINSIGHTS_INSTRUMENTATIONKEY = "${azurerm_application_insights.t_appinsights.instrumentation_key}"
}
depends_on = [
azurerm_resource_group.t_rg,
azurerm_service_plan.t_app_service_plan,
azurerm_storage_account.t_funcstorage,
azurerm_application_insights.t_appinsights
]
}
Here is the Terraform code for Application Insights:
resource "azurerm_application_insights" "t_appinsights" {
name = "t-appinsights"
location = local.resource_location
resource_group_name = local.resource_group_name
application_type = "web"
depends_on = [
azurerm_log_analytics_workspace.t_workspace
]
}
output "instrumentation_key" {
value = azurerm_application_insights.t_appinsights.instrumentation_key
}
output "app_id" {
value = azurerm_application_insights.t_appinsights.app_id
}
You must create a Log Analytics Workspace and add it to your Application Insights.
For example:
resource "azurerm_log_analytics_workspace" "example" {
name = "workspace-test"
location = local.resource_location
resource_group_name = local.resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_application_insights" "t_appinsights" {
name = "t-appinsights"
location = local.resource_location
resource_group_name = local.resource_group_name
workspace_id = azurerm_log_analytics_workspace.example.id
application_type = "web"
}
output "instrumentation_key" {
value = azurerm_application_insights.t_appinsights.instrumentation_key
}
output "app_id" {
value = azurerm_application_insights.t_appinsights.app_id
}
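On the function app side, workspace-based Application Insights is usually wired up via the connection string rather than only the instrumentation key. A minimal sketch of the app_settings change, assuming the azurerm_linux_function_app from the question (only the relevant block is shown; connection_string is an exported attribute of azurerm_application_insights):

  app_settings = {
    # Recommended for workspace-based Application Insights; the classic
    # APPINSIGHTS_INSTRUMENTATIONKEY setting can be kept alongside it if needed.
    APPLICATIONINSIGHTS_CONNECTION_STRING = azurerm_application_insights.t_appinsights.connection_string
  }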
Hope this helps!
We are using Terraform version 0.12.19 and azurerm provider version 2.10.0 to deploy a Service Bus namespace with its queues and authorization rules. When we ran terraform apply, it created the namespace and the queue, but it threw the error below while creating the authorization rules.
When we checked the Azure portal, the authorization rules were present, and we could also find entries for both resources in the tf state file, where they had the status "tainted". We ran apply again to see if it would recreate/replace the existing resources, but it failed with the same error. Now we are unable to proceed further, because even a plan for creating new resources fails at this point.
We also tried untainting the resources and running apply again, but we still get the same issue even though the resources no longer show the tainted status in the tf state. Can you please help us with a solution so we can resolve this? (We can't move to a newer version of the Terraform CLI, as many modules depend on it and it would impact our production deployments as well.)
Error: Error making Read request on Azure ServiceBus Queue Authorization Rule "" (Queue "sample-check-queue" / Namespace "sample-check-bus" / Resource Group "My-RG"): servicebus.QueuesClient#GetAuthorizationRule: Invalid input: autorest/validation: validation failed: parameter=authorizationRuleName constraint=MinLength value="" details: value length must be greater than or equal to 1
azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr: Refreshing state... [id=/subscriptions//resourcegroups/My-RG/providers/Microsoft.ServiceBus/namespaces/sample-check-bus/queues/sample-check-queue/authorizationrules/lsr]
Below is the service_bus.tf file code:
provider "azurerm" {
version = "=2.10.0"
features {}
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = "My-RG"
location = "West Europe"
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 05
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
name = "lsr"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
I tried the Terraform code below to create the authorization rules and could create them successfully.
I followed the azurerm_servicebus_queue_authorization_rule page in the Terraform Registry (hashicorp/azurerm), using the latest version of the hashicorp/azurerm provider.
This may even be related to the queue_name argument: in the 3.x.x versions of the provider, this argument of the resource changed to queue_id.
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "example" {
name = "xxxx"
location = "xx"
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
#resource_group_name = "My-RG"
namespace_id = azurerm_servicebus_namespace.service_bus.id
#namespace_name =
azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 05
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr"
{
name = "lsr"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check- AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
The authorization rules were created without error.
Please try giving the authorization rule named "lsr" a longer name, and also try creating one rule at a time in your case.
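A sketch of what that could look like, based on the queue_id-style code above ("listen-send-rule" is just an illustrative longer name, not a required value):

resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
  # Longer name than "lsr", created on its own before adding further rules.
  name     = "listen-send-rule"
  queue_id = azurerm_servicebus_queue.que-sample-check.id
  listen   = true
  send     = true
  manage   = false
}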
Thanks, all, for your inputs and suggestions.
The code is working fine now with terraform provider version 2.56.0 and Terraform CLI version 0.12.19. Please let me know if there are any concerns.
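For reference, the provider block from the question with the working version pinned would look like this (a sketch; only the version constraint changes):

provider "azurerm" {
  version  = "=2.56.0"
  features {}
}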
An error occurs when I try to create an AKS cluster with Terraform. The cluster is created, but the error still shows up at the end, which is ugly.
│ Error: retrieving Access Profile for Cluster: (Managed Cluster Name
"aks-1" / Resource Group "pengine-aks-rg"):
containerservice.ManagedClustersClient#GetAccessProfile: Failure responding to request:
StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400
Code="BadRequest" Message="Getting static credential is not allowed because this cluster
is set to disable local accounts."
This is my terraform code:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.96.0"
    }
  }
}

resource "azurerm_resource_group" "aks-rg" {
  name     = "aks-rg"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "aks-1" {
  name                   = "aks-1"
  location               = azurerm_resource_group.aks-rg.location
  resource_group_name    = azurerm_resource_group.aks-rg.name
  dns_prefix             = "aks1"
  local_account_disabled = "true"

  default_node_pool {
    name       = "nodepool1"
    node_count = 3
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Test"
  }
}
Is this a Terraform bug? Can I avoid the error?
If you disable local accounts, you need to activate AKS-managed Azure Active Directory integration, as there are no local accounts left to authenticate against AKS.
This example enables RBAC, AKS-managed Azure AD integration, and Azure RBAC:
resource "azurerm_kubernetes_cluster" "aks-1" {
...
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
tenant_id = data.azurerm_client_config.current.tenant_id
admin_group_object_ids = ["OBJECT_IDS_OF_ADMIN_GROUPS"]
azure_rbac_enabled = true
}
}
}
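The tenant_id reference above assumes an azurerm_client_config data source is declared somewhere in the configuration, for example:

data "azurerm_client_config" "current" {}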
If you don't want AAD integration, you need to set local_account_disabled = false.
I am trying to add a User Assigned Managed Identity to an AAD group. I have the following Terraform code:
resource "azurerm_user_assigned_identity" "myid" {
name = "my_identity"
resource_group_name = azurerm_resource_group.somerg.name
location = azurerm_resource_group.somerg.location
}
data "azuread_group" "existinggroup" {
display_name = "existing_group"
security_enabled = true
}
resource "azuread_group_member" "mygrpmember" {
group_object_id = data.azuread_group.existinggroup.id
member_object_id = azurerm_user_assigned_identity.myid.id
}
During the plan operation, I get the following error:
Error: Value must be a valid UUID
When I change myid.id to myid.principal_id in the last line of the above code, I get an error during the apply operation:
Error: Could not retrieve member principal object "4e83cd6b-d984-4484-8fb2-3ae6e1667ef9"
ODataId was nil
When I try myid.client_id, I get this during apply:
Error: Could not retrieve principal object "838c2662-5fe2-484c-bb52-f70994fa1d8b"
DirectoryObjects.BaseClient.Get(): Get "https://graph.microsoft.com/v1.0/5989ece0-f90e-40bf-9c79-1a7beccdb861/directoryObjects/838c2662-5fe2-484c-bb52-f70994fa1d8b": GET https://graph.microsoft.com/v1.0/5989ece0-f90e-40bf-9c79-1a7beccdb861/directoryObjects/838c2662-5fe2-484c-bb52-f70994fa1d8b giving up after 9 attempt(s)
What am I doing wrong?
It will work only if you use myid.principal_id. Please use the latest versions, i.e. Terraform v1.1.0, azuread provider v2.13.0, and azurerm provider v2.89.0:
I tested the same code in my environment, as below:
provider "azuread"{}
provider "azurerm"{
features {}
}
data "azurerm_resource_group" "somerg"{
name = "ansuman-resourcegroup"
}
resource "azurerm_user_assigned_identity" "myid" {
name = "ansuman-identity"
resource_group_name = data.azurerm_resource_group.somerg.name
location = data.azurerm_resource_group.somerg.location
}
data "azuread_group" "existinggroup" {
display_name = "TestQA"
security_enabled = true
}
resource "azuread_group_member" "mygrpmember" {
group_object_id = data.azuread_group.existinggroup.id
member_object_id = azurerm_user_assigned_identity.myid.principal_id
}
Output: the managed identity is added as a member of the group successfully.
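If you want to double-check the value being passed as member_object_id, you could also expose the identity's principal_id as an output (a small sketch based on the code above; the output name is arbitrary):

output "identity_principal_id" {
  # principal_id is the object ID of the service principal backing the
  # user-assigned identity, which is what azuread_group_member expects.
  value = azurerm_user_assigned_identity.myid.principal_id
}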
I am trying to deploy 2 different Azure Apps in the same resource group.
These Azure Apps are defined as docker images stored in an Azure Container Registry (where I previously pushed those docker images).
I am not able to deploy both of them at the same time, because I think there is something wrong in the way I am defining them: Terraform expects to find only one azurerm_app_service, and I am not sure how to work around this.
When I run the command terraform plan -var-file test.tfvars, I see this message in the output:
Error: azurerm_app_service.ci_rg: resource repeated multiple times
How do I define "2 different resources of the same type"?
This is the content of the main.tf file (where I inject the variables defined in variables.tf with the values defined in test.tfvars):
// the resource group definition
resource "azurerm_resource_group" "ci_rg" {
name = "${var.resource_group_name}"
location = "${var.azure_location}"
}
// the app service plan definition
resource "azurerm_app_service_plan" "ci_rg" {
name = "${var.app_service_plan}"
location = "${azurerm_resource_group.ci_rg.location}"
resource_group_name = "${azurerm_resource_group.ci_rg.name}"
kind = "Linux"
sku {
tier = "Standard"
size = "S1"
capacity = 2 // for both the docker containers
}
properties {
reserved = true
}
}
// the first azure app
resource "azurerm_app_service" "ci_rg" {
name = "${var.first_app_name}"
location = "${azurerm_resource_group.ci_rg.location}"
resource_group_name = "${azurerm_resource_group.ci_rg.name}"
app_service_plan_id = "${azurerm_app_service_plan.ci_rg.id}"
site_config {
linux_fx_version = "DOCKER|${var.first_app_docker_image_name}"
}
app_settings {
"CONF_ENV" = "${var.conf_env}"
"DOCKER_REGISTRY_SERVER_URL" = "${var.docker_registry_url}",
"DOCKER_REGISTRY_SERVER_USERNAME" = "${var.docker_registry_username}",
"DOCKER_REGISTRY_SERVER_PASSWORD" = "${var.docker_registry_password}",
}
}
// the second azure app
resource "azurerm_app_service" "ci_rg" {
name = "${var.second_app_name}"
location = "${azurerm_resource_group.ci_rg.location}"
resource_group_name = "${azurerm_resource_group.ci_rg.name}"
app_service_plan_id = "${azurerm_app_service_plan.ci_rg.id}"
site_config {
linux_fx_version = "DOCKER|${var.second_app_docker_image_name}"
}
app_settings {
"CONF_ENV" = "${var.conf_env}"
"DOCKER_REGISTRY_SERVER_URL" = "${var.docker_registry_url}",
"DOCKER_REGISTRY_SERVER_USERNAME" = "${var.docker_registry_username}",
"DOCKER_REGISTRY_SERVER_PASSWORD" = "${var.docker_registry_password}",
}
}
Edit:
I am not entirely sure how Terraform works here, but I think the label azurerm_app_service is fixed by the syntax of Terraform. See the docs here: https://www.terraform.io/docs/providers/azurerm/r/app_service.html, where the title is azurerm_app_service. So I don't think I can change that.
My guess would be that you need to rename the second one to something else, like this: resource "azurerm_app_service" "ci_rg_second". Terraform obviously doesn't like the fact that both have the same name.
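For example, the second App Service block from the question would become the following; ci_rg_second is just a suggested local name, and everything else stays exactly as before:

// the second azure app
resource "azurerm_app_service" "ci_rg_second" {
  name                = "${var.second_app_name}"
  location            = "${azurerm_resource_group.ci_rg.location}"
  resource_group_name = "${azurerm_resource_group.ci_rg.name}"
  app_service_plan_id = "${azurerm_app_service_plan.ci_rg.id}"

  site_config {
    linux_fx_version = "DOCKER|${var.second_app_docker_image_name}"
  }

  app_settings {
    "CONF_ENV"                        = "${var.conf_env}"
    "DOCKER_REGISTRY_SERVER_URL"      = "${var.docker_registry_url}"
    "DOCKER_REGISTRY_SERVER_USERNAME" = "${var.docker_registry_username}"
    "DOCKER_REGISTRY_SERVER_PASSWORD" = "${var.docker_registry_password}"
  }
}

The resource type stays azurerm_app_service; only the second label, the local name you use to reference the resource elsewhere (e.g. azurerm_app_service.ci_rg_second.id), has to be unique per resource type.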