Enabling Activity Logs Diagnostic Settings using Terraform - azure

There currently exists a module to create a log diagnostic setting for Azure resources, linked here. Using the portal I am able to create a diagnostic setting for activity logs as well, as mentioned here. I was trying to enable activity log diagnostic settings and send the logs to a storage account, and this module is the only one I came across.
However, it seems that this module cannot be used to send activity logs to a Log Analytics workspace. It also does not support the log categories mentioned in the portal (i.e. Administrative, Security, ServiceHealth, etc.) and only provides Action, Delete and Write, which leads me to believe they are not intended for the same purpose. The first module requires a target_resource_id, and since activity logs exist at the subscription level, no such ID exists.
As such, is it possible to use the first mentioned module, or an entirely different module, to enable these diagnostic settings? Any help regarding the matter would be appreciated.

You can configure this by specifying the subscription ID as the target_resource_id within an azurerm_monitor_diagnostic_setting resource.
Example:
resource "azurerm_monitor_diagnostic_setting" "example" {
name = "example"
target_resource_id = "/subscriptions/85306735-db49-41be-b899-b0fc48095b01"
eventhub_name = azurerm_eventhub.diagnostics.name
eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.diagnostics.id
log {
category = "Administrative"
retention_policy {
enabled = false
}
}

You should use the attribute "log_analytics_workspace_id"
resource "azurerm_monitor_diagnostic_setting" "example" {
name = "example"
target_resource_id = "/subscriptions/xxxx"
log_analytics_workspace_id = azurerm_log_analytics_workspace.this.id
log_analytics_destination_type = "Dedicated" # or null see [documentation][1]
log {
category = "Administrative"
retention_policy {
enabled = false
}
}

Related

Azure App Service Plan inconsistently throttling - App Service Plan Create operation is throttled for subscription

When creating an App Service Plan on my new-ish (4 day old) subscription using Terraform, I immediately get a throttling error:
App Service Plan Create operation is throttled for subscription <subscription>. Please contact support if issue persists
The thing is, when I then go to the UI and create an identical service plan, I receive no errors and it creates without issue, so it's clear there is no actual throttling issue preventing the plan from being created.
I'm wondering if anyone knows why this is occurring?
NOTE
I've gotten around this issue by creating the resource in the UI and then importing it into my TF state... but since the main point of IaC is automation, I'd like to ensure that this unusual behavior does not persist when I go to create new environments.
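For reference, the import step looks something like this; the subscription ID and resource names below are placeholders for your own values:
terraform import azurerm_service_plan.frontend_sp "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/serverFarms/<plan-name>"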
EDIT
My code is as follows
resource "azurerm_resource_group" "frontend_rg" {
name = "${var.env}-${var.abbr}-frontend"
location = var.location
}
resource "azurerm_service_plan" "frontend_sp" {
name = "${var.env}-${var.abbr}-sp"
resource_group_name = azurerm_resource_group.frontend_rg.name
location = azurerm_resource_group.frontend_rg.location
os_type = "Linux"
sku_name = "B1"
}
EDIT 2
terraform {
  backend "azurerm" {}

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.15.0"
    }
  }
}

Terraform Azure Event hub - disable public network access

What is the way to disable public network access for Azure Event Hub using Terraform?
I set the option public_network_access_enabled to false, and public_network_access to false under the network_rulesets block, and got the following error:
"public_network_access_enabled" is not expected here.
I am not sure what I am missing here... any help would be greatly appreciated.
As you say, the attribute public_network_access_enabled does not exist in the azurerm_eventhub resource.
The attribute public_network_access_enabled is part of the azurerm_eventhub_namespace resource:
public_network_access_enabled - (Optional) Is public network access enabled for the EventHub Namespace? Defaults to true.
Source: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/eventhub_namespace#public_network_access_enabled
For example:
resource "azurerm_eventhub_namespace" "example" {
name = "example-namespace"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
SKU = "Standard"
capacity = 2
public_network_access_enabled = false # Default is true
tags = {
environment = "Production"
}
}
It would be better if you could provide more details about how you have configured access to your Azure Event Hub namespace, because if you have disabled public access, you need to enable access via private endpoints. In that case, you need to use the public_network_access_enabled property correctly at both the namespace level and the network_rulesets level.
If you are using hashicorp as the provider, check the latest documentation for this at https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/eventhub_namespace#network_rulesets
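A minimal sketch of setting the property at both levels, assuming a recent azurerm provider version that supports public_network_access_enabled inside network_rulesets (all names are hypothetical):
resource "azurerm_eventhub_namespace" "example" {
  name                          = "example-namespace"
  location                      = azurerm_resource_group.example.location
  resource_group_name           = azurerm_resource_group.example.name
  sku                           = "Standard"
  public_network_access_enabled = false # namespace-level setting

  network_rulesets {
    default_action                 = "Deny"
    public_network_access_enabled  = false # must match the namespace-level value
    trusted_service_access_enabled = true
  }
}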

Can't find Azure Monitor scheduled query with invalid data source

So I'm using Terraform to create a scheduled query alert on a particular Application Insights resource:
resource "azurerm_template_deployment" "rule1" {
name = "queryrule${md5(format("%s-%s", var.resourcegroupname, var.name))}" # This is the name of the deployment (has to be unique for each rule)
resource_group_name = var.resourcegroupname
template_body = file("./modules/queryrule/queryRule.json")
deployment_mode = "Incremental"
parameters = {
action_emailSubject = "${var.person} from ${var.email}"
action_groups = "${join(";", var.action_group_array)}"
action_trigger_thresholdOperator = var.act_threshold_1operator
action_trigger_threshold = var.action_threshold
name = var.name_rule
description = var.description
schedule_frequencyInMinutes = var.frequency
schedule_timeWindowInMinutes = var.timeWindow
query = var.queryString
data_source_id = var.data_source
}
}
queryRule.json is a normal ARM template for a scheduled query.
The problem is that when I deployed the Terraform project, the data source was invalid, so the scheduled query was created but was not added as an alert on the Application Insights resource, and it was also not added to the Terraform state.
When I deployed the next time, it said this resource already exists but is not part of the Terraform state. I want to delete this scheduled query, but I can't find it in the Azure portal. Any ideas how to find and delete this orphaned scheduled query?
I contacted Microsoft support to find the answer. The reason I was not able to find it is that the scheduled query was never created; only the deployment for it was created, and it was never added to the Terraform state. I had to go to the Deployments option in the resource group menu, where I found the failed deployment and deleted it.
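If you prefer the CLI to the portal, the same failed deployment can be found and removed with the Azure CLI; the resource group and deployment names below are placeholders:
az deployment group list --resource-group <resource-group> --output table
az deployment group delete --resource-group <resource-group> --name <failed-deployment-name>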

Using databricks workspace in the same configuration as the databricks provider

I'm having some trouble getting the azurerm and databricks providers to work together.
With the azurerm provider, I set up my workspace:
resource "azurerm_databricks_workspace" "ws" {
name = var.workspace_name
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "premium"
managed_resource_group_name = "${azurerm_resource_group.rg.name}-mng-rg"
custom_parameters {
virtual_network_id = data.azurerm_virtual_network.vnet.id
public_subnet_name = var.public_subnet
private_subnet_name = var.private_subnet
}
}
No matter how I structure this, I can't seem to get azurerm_databricks_workspace.ws.id to work in the provider statement for databricks in the same configuration. If it did work, the above workspace would be defined in the same configuration and I'd have a provider statement that looks like this:
provider "databricks" {
azure_workspace_resource_id = azurerm_databricks_workspace.ws.id
}
Error:
I have my ARM_* environment variables set to identify as a Service Principal with Contributor on the subscription.
I've tried it in the same configuration, and in a module with consumed outputs. The only way I can get it to work is by running one configuration for the workspace and a second configuration to consume the workspace.
This is super suboptimal, in that I have a fair amount of repeated values across those configurations, and it would be ideal to have just one.
Has anyone been able to do this?
Thank you :)
I've had the exact same issue with a non-working databricks provider because I was working with modules. I separated the databricks infrastructure (azurerm provider) from the databricks application (databricks provider).
In my databricks module I added the following code at the top; otherwise it would use my Azure setup:
terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.3.1"
    }
  }
}
In my normal provider setup I have the following settings for databricks:
provider "databricks" {
azure_workspace_resource_id = module.databricks_infra.databricks_workspace_id
azure_client_id = var.ARM_CLIENT_ID
azure_client_secret = var.ARM_CLIENT_SECRET
azure_tenant_id = var.ARM_TENANT_ID
}
And of course I have the azure one. Let me know if it worked :)
If you experience technical difficulties with rolling out resources in this example, please make sure that environment variables don't conflict with other provider block attributes. When in doubt, run TF_LOG=DEBUG terraform apply to enable debug mode through the TF_LOG environment variable, and look specifically for "Explicit and implicit attributes" lines, which should indicate the authentication attributes used. The other common cause of technical difficulties is a missing alias attribute in provider "databricks" {} blocks, or a missing provider attribute in resource "databricks_..." {} blocks. Please make sure to read the "alias: Multiple Provider Configurations" documentation article.
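A minimal sketch of that alias wiring, assuming the workspace resource from the question; the cluster resource and all of its values are hypothetical:
provider "databricks" {
  alias                       = "workspace"
  azure_workspace_resource_id = azurerm_databricks_workspace.ws.id
}

resource "databricks_cluster" "example" {
  provider                = databricks.workspace # pin this resource to the aliased provider
  cluster_name            = "example"
  spark_version           = "11.3.x-scala2.12" # hypothetical runtime version
  node_type_id            = "Standard_DS3_v2"  # hypothetical node type
  autotermination_minutes = 20
  num_workers             = 1
}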
From the error message, it looks like authentication is not configured for the provider. Could you please configure it through one of the options mentioned above?
For more details, refer Databricks provider - Authentication.
For passing the custom_parameters, you may checkout the SO thread which addressing the similar issue.
In case if you need more help on this issue, I would suggest to open an issue here: https://github.com/terraform-providers/terraform-provider-azurerm/issues

How to set Azure Web Application Firewall (WAF) logs via Terraform?

I am trying to do this via Terraform code:
However, I cannot find how. Is it some obscure resource, or is it not implemented at all?
You can use the azurerm_monitor_diagnostic_setting resource to configure the setting, as ydaetskcoR said; it works like the screenshot you provided shows. Here is the example code:
resource "azurerm_monitor_diagnostic_setting" "example" {
name = "example"
target_resource_id = "application_gateway_resource_id"
storage_account_id = data.azurerm_storage_account.example.id
log {
category = "ApplicationGatewayFirewallLog"
enabled = true
retention_policy {
enabled = true
days = 30
}
}
}
Terraform does not provide a data source for the application gateway, so you need to input the resource ID of the existing application gateway yourself, or reference the ID when you create the new application gateway in the same configuration, as sketched below.
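For example, when the gateway is managed in the same configuration, the hard-coded ID above can be replaced with a reference; the resource names here are hypothetical:
resource "azurerm_monitor_diagnostic_setting" "waf" {
  name               = "waf-logs"
  target_resource_id = azurerm_application_gateway.example.id # reference instead of a hard-coded ID
  storage_account_id = data.azurerm_storage_account.example.id

  log {
    category = "ApplicationGatewayFirewallLog"
    enabled  = true
  }
}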
It seems like logs are not supported by Terraform for Azure WAF (ApplicationGateway) yet.
