Azure Activity Log Alerts are not working - azure

I have created an Activity Log Alert in Azure using the following Terraform code:
// We need to define the action group for Security Alerts
resource "azurerm_monitor_action_group" "monitor_action_group_soc" {
  name                = "sec-alert"
  resource_group_name = data.azurerm_resource_group.tenant-global.name
  short_name          = "sec-alert"

  email_receiver {
    name                    = "sendtoAdmin"
    email_address           = var.email_address
    use_common_alert_schema = true
  }
}

data "azurerm_monitor_action_group" "monitor_action_group_soc" {
  name                = "sec-alert"
  resource_group_name = var.tenant-global-rg

  depends_on = [
    azurerm_monitor_action_group.monitor_action_group_soc
  ]
}

// Monitor Activity Log and Alert
resource "azurerm_monitor_activity_log_alert" "activity_log_alert_cu_security_group" {
  name                = "Activity Log Alert for Create or Update Security Group"
  resource_group_name = data.azurerm_resource_group.ipz12-dat-np-mgmt-rg.name
  scopes              = [data.azurerm_subscription.current.id]
  description         = "Monitoring for Create or Update Network Security Group events gives insight into network access changes and may reduce the time it takes to detect suspicious activity"

  criteria {
    category       = "Security"
    operation_name = "Microsoft.Network/networkSecurityGroups/write"
  }

  action {
    action_group_id = data.azurerm_monitor_action_group.monitor_action_group_soc.id
  }
}
I created the Network Security Group, added a rule, deleted the rule, and finally deleted the Network Security Group, but I didn't receive any alerts.

These are the modifications I made to your code to achieve the expected result.
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "<resourcegroup>"{
name = "<resourcegroup>"
location = "Central US"
}
resource "azurerm_monitor_action_group" "<actiongroup>" {
name = "sec-alert"
resource_group_name = "<resourcegroup>"
short_name = "sec-alert"
email_receiver {
name = "xxxxx"
email_address = "xxxxxxx#gmail.com"
use_common_alert_schema = true
}
}
data "azurerm_monitor_action_group" "<actiongroup>" {
name = "sec-alert"
resource_group_name = "<resourcegroup>"
depends_on = [
azurerm_monitor_action_group.<actiongroup>
]
}
resource "azurerm_monitor_activity_log_alert" "azurerm_monitor_activity_log_alert_securitygroup" {
name = "Activity Log Alert for Create or Update Security Group"
resource_group_name = "<resourcegroup>"
scopes = [data.azurerm_subscription.current.id] #My scope is /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Network/networkSecurityGroups/<NetworkSecurityGroup>
description = "Monitoring for Create or Update Network Security Group events gives insight into network access changes and may reduce the time it takes to detect suspicious activity"
criteria {
category = "Security"
operation_name = "Microsoft.Network/networkSecurityGroups/write"
}
action {
action_group_id = data.azurerm_monitor_action_group.<actiongroup>.id
}
}
Created the security alert by running terraform apply in the Azure CLI:
Received an email once a rule was added to the Network Security Group:
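With the original subscription-wide scope, any create or update of a Network Security Group produces the Microsoft.Network/networkSecurityGroups/write event this alert listens for. A minimal sketch of a throwaway NSG that can be used to trigger it for testing (the NSG and rule names are placeholders of my own):
# Hypothetical NSG used only to generate a networkSecurityGroups/write event
resource "azurerm_network_security_group" "alert_test" {
  name                = "nsg-alert-test"
  location            = "Central US"
  resource_group_name = "<resourcegroup>"

  security_rule {
    name                       = "allow-https-inbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}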

Related

Azure AKS - oms agent AND diagnostic settings possible together?

I'm deploying an AKS cluster via Terraform.
I set an oms_agent block within my aks resource block:
resource "azurerm_kubernetes_cluster" "tfdemo-cluster" {
resource_group_name = var.resourcegroup_name
location = var.location
name = "${var.projectname}-aks"
node_resource_group = "${var.resourcegroup_name}-node"
... omitted to shorten ...
oms_agent {
log_analytics_workspace_id = var.log_analytics_workspace_id
}
Set up like this, it works as expected.
But when I add an additional diagnostic settings resource like so
resource "azurerm_monitor_diagnostic_setting" "aks-diagnostics" {
name = "aks-logs"
storage_account_id = var.storage_account_id
target_resource_id = azurerm_kubernetes_cluster.tfdemo-cluster.id
log {
category = "kube-audit"
enabled = true
}
metric {
category = "AllMetrics"
retention_policy {
days = 30
enabled = true
}
}
}
I run into an error that says:
"diagnosticsettings.DiagnosticSettingsClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=nil nil"
When I googled that error message, I found issues related to other Azure services where the SKU of the service didn't match a specified feature or capacity, but I don't see that being the case here.
Why I want a Log Analytics workspace AND logs dumped into a storage account: my thinking is that a Log Analytics workspace is really expensive compared to storage in a storage account. So I'd like to send, say, the audit data to the cheap storage account for long-term retention (my settings in the example above might not represent that exactly, but that's not the point here) and still have the "expensive" Log Analytics service to dig into cluster performance.
Thanks a lot for any input!
I tried to reproduce the same in my environment, creating an Azure AKS cluster with the OMS agent and a diagnostic setting using Terraform.
Sending logs to an Azure Storage Account for long-term retention can be more cost-effective than keeping them in an Azure Log Analytics workspace. However, the Log Analytics workspace is still useful for real-time analysis and performance monitoring.
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "aksgroup" {
name = "aks-rg"
location = "East US"
}
resource "azurerm_log_analytics_workspace" "oms" {
name = "oms-workspace"
location = azurerm_resource_group.aksgroup.location
resource_group_name = azurerm_resource_group.aksgroup.name
sku = "PerGB2018"
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "cluster-aks1"
location = azurerm_resource_group.aksgroup.location
resource_group_name = azurerm_resource_group.aksgroup.name
dns_prefix = "aks1"
default_node_pool {
name = "default"
node_count = 1
vm_size = "standard_a2_v2"
}
identity {
type = "SystemAssigned"
}
tags = {
Environment = "Production"
}
addon_profile {
oms_agent {
enabled = true
log_analytics_workspace_id = azurerm_log_analytics_workspace.oms.id
}
}
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate
sensitive = true
}
output "kube_config" {
value = azurerm_kubernetes_cluster.aks.kube_config_raw
sensitive = true
}
resource "azurerm_monitor_diagnostic_setting" "aks" {
name = "aks-diagnostic-setting"
target_resource_id = azurerm_kubernetes_cluster.aks.id
storage_account_id = azurerm_storage_account.aks.id
log_analytics_workspace_id = azurerm_log_analytics_workspace.oms.id
log {
category = "kube-audit"
enabled = true
}
metric {
category = "AllMetrics"
retention_policy {
days = 30
enabled = true
}
}
}
resource "azurerm_storage_account" "aks" {
name = "aksdiagnostic"
resource_group_name = azurerm_resource_group.aksgroup.name
location = azurerm_resource_group.aksgroup.location
account_tier = "Standard"
account_replication_type = "LRS"
}
Terraform apply:
Once the code ran, the resources were created, as shown below.
Azure AKS diagnostic setting created with the Log Analytics settings.
Log Analytics settings created.

Unable to declare resource_type variable and event status in Terraform for Azure resource health alert; showing a conflicting error message

I'm trying to add the event status in the resource_health block, but it is not accepted.
I also tried adding the resource_type variable, as mentioned in the Terraform documentation, to select the resource types this alert applies to, but it is rejected.
The conflicting error message is:
"criteria.0.resource_health": conflicts with criteria.0.resource_type
resource "azurerm_monitor_activity_log_alert" "reshealthalert" {
name = "resourceHealthFromMain "
resource_group_name = azurerm_resource_group.rg.name
scopes = ["/subscriptions/${data.azurerm_subscription.current.subscription_id}"]
description = var.monitor_activity_log_alert_description
criteria {
category = var.criteria_resource_health
# resource_type = "Storage account"
resource_health {
current = var.current_resource_status
previous = var.previous_resource_status
# events = var.resource_health_events
reason = var.reason_type
#event_status = var.resource_health_events
} }
action {
action_group_id = azurerm_monitor_action_group.email_alert.id } }
Please check the Terraform code below.
Make sure to give resource_type before category, and use the resource ID format of the storage account, i.e. resource_type = "Microsoft.Storage/storageAccounts".
main.tf
resource "azurerm_monitor_activity_log_alert" "reshealthalert" {
name = "ka-resource-Health "
resource_group_name = azurerm_resource_group.example.name
scopes = ["/subscriptions/xxxxxxxxxx"]
description = "This alert will monitor a specific storage account updates."
resource_type= "Microsoft.Storage/storageAccounts"
#resource_type = "Storage account"
criteria {
category = var.criteria_resource_health
...
}
Reference:
azurerm_monitor_activity_log_alert | Resources | hashicorp/azurerm | Terraform Registry

Terraform Azurerm - Data Export Rule

My query is related to azurerm_log_analytics_data_export_rule. I have created a Log Analytics workspace and an Event Hub in the portal and followed all the steps in the link below.
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_data_export_rule
Both terraform plan and terraform apply are successful, but the expected tables are not created in the Event Hub. For example (as per the above link), the "Heartbeat" table is not created in the Event Hub after the export rule is created. The Microsoft documentation below mentions that the tables will be created automatically in the Event Hub or storage account once the export rule is created successfully.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-data-export?tabs=portal
It would be helpful to get some info on this rule.
The HashiCorp template you are following will create a new resource group, storage account, Log Analytics workspace and an export rule.
Since that Terraform template creates a brand-new environment, there are no Heartbeat logs present by default, which is why no Heartbeat container was created.
When we tested exporting Heartbeat logs from a Log Analytics workspace to a storage account in our environment, it took nearly 30 minutes for the data to show up in the storage account.
Data completeness
Data export will continue to retry sending data for up to 30 minutes in the event that the destination is unavailable. If it's still unavailable after 30 minutes then data will be discarded until the destination becomes available.
provider "azurerm" {
features{}
}
resource "azurerm_resource_group" "data_export_resource_group" {
name = "test_data_export_rg"
location = "centralus"
}
resource "azurerm_log_analytics_workspace" "data_export_log_analytics_workspace" {
name = "testdataexportlaw"
location = azurerm_resource_group.data_export_resource_group.location
resource_group_name = azurerm_resource_group.data_export_resource_group.name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_storage_account" "data_export_azurerm_storage_account" {
name = "testdataexportazurermsa"
resource_group_name = azurerm_resource_group.data_export_resource_group.name
location = azurerm_resource_group.data_export_resource_group.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_eventhub_namespace" "data_export_azurerm_eventhub_namespace" {
name = "testdataexportehnamespace"
location = azurerm_resource_group.data_export_resource_group.location
resource_group_name = azurerm_resource_group.data_export_resource_group.name
sku = "Standard"
capacity = 1
tags = {
environment = "Production"
}
}
resource "azurerm_eventhub" "data_export_eventhub" {
name = "testdataexporteh1"
namespace_name = azurerm_eventhub_namespace.data_export_azurerm_eventhub_namespace.name
resource_group_name = azurerm_resource_group.data_export_resource_group.name
partition_count = 2
message_retention = 1
}
resource "azurerm_log_analytics_data_export_rule" "example" {
  name                    = "testdataExport1"
  resource_group_name     = azurerm_resource_group.data_export_resource_group.name
  workspace_resource_id   = azurerm_log_analytics_workspace.data_export_log_analytics_workspace.id
  destination_resource_id = azurerm_eventhub.data_export_eventhub.id
  table_names             = ["Usage", "StorageBlobLogs"]
  enabled                 = true
}
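For reference, the Heartbeat export we tested against the storage account looked roughly like the following; a minimal sketch, assuming the same workspace and storage account as above (the rule name is a placeholder of my own):
# Hypothetical second export rule sending the Heartbeat table to the storage account
resource "azurerm_log_analytics_data_export_rule" "heartbeat_to_storage" {
  name                    = "testdataExport2"
  resource_group_name     = azurerm_resource_group.data_export_resource_group.name
  workspace_resource_id   = azurerm_log_analytics_workspace.data_export_log_analytics_workspace.id
  destination_resource_id = azurerm_storage_account.data_export_azurerm_storage_account.id
  table_names             = ["Heartbeat"]
  enabled                 = true
}
Even with this in place, the Heartbeat container only appears once at least one agent is reporting to the workspace and the first export batch (up to roughly 30 minutes later) has been written.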

Terraform - Creating Azure Event Grid Subscriptions - can it do it?

I've been struggling for a while in Terraform to create an Event Subscription in an Azure Event Grid
As per the screenshot...
EVENT SUBSCRIPTION DETAILS
NAME : EventGrid-Sub1
(don't need to change Event Schema)
TOPIC DETAILS
Event Grid Domain
Topic Resource: EDG-SBX-EventGrid1
Domain Type: EventGrid-DomainTopic1
ENDPOINT DETAILS
Endpoint Type: Event Hubs
Endpoint : eh-sbx-Ingestion
I've been using these as references, but it seems not only a bit chicken-and-egg, but also like pieces are missing:
https://www.terraform.io/docs/providers/azurerm/r/eventgrid_event_subscription.html
https://www.terraform.io/docs/providers/azurerm/r/eventgrid_topic.html
Has anyone got this working in Terraform?
Thanks in advance
Azure Screenshot on Event Grids / Create Event Subscription screen
@nmca70 There are a couple of ways to achieve this:
Create an ARM template from the final deployment and then run that ARM template using Terraform (a rough sketch of this approach follows the sample below):
https://www.terraform.io/docs/providers/azurerm/r/template_deployment.html
Create resources in the below order:
Azure event hub: https://www.terraform.io/docs/providers/azurerm/r/eventhub.html
Azure event grid topic: https://www.terraform.io/docs/providers/azurerm/r/eventgrid_topic.html
Azure event grid domain: https://www.terraform.io/docs/providers/azurerm/r/eventgrid_domain.html
Azure event grid subscription: https://www.terraform.io/docs/providers/azurerm/r/eventgrid_event_subscription.html#storage_queue_endpoint
A sample:
resource "azurerm_resource_group" "test" {
name = "resourceGroup1"
location = "West US 2"
}
resource "azurerm_eventhub_namespace" "test" {
name = "acceptanceTestEventHubNamespace"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "Standard"
capacity = 1
kafka_enabled = false
tags = {
environment = "Production"
}
}
resource "azurerm_eventhub" "test" {
name = "acceptanceTestEventHub"
namespace_name = "${azurerm_eventhub_namespace.test.name}"
resource_group_name = "${azurerm_resource_group.test.name}"
partition_count = 2
message_retention = 1
}
resource "azurerm_eventgrid_topic" "test" {
name = "my-eventgrid-topic"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
tags = {
environment = "Production"
}
}
resource "azurerm_eventgrid_domain" "test" {
name = "my-eventgrid-domain"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
input_schema = "eventgridschema"
input_mapping_fields= {
topic = "my-eventgrid-topic"
}
tags = {
environment = "Production"
}
}
resource "azurerm_eventgrid_event_subscription" "default" {
name = "defaultEventSubscription"
scope = "${azurerm_resource_group.default.id}"
event_delivery_schema = "EventGridSchema"
topic_name = "my-eventgrid-topic"
eventhub_endpoint {
storage_account_id = "${azurerm_eventhub.test.id}"
}
}
Hope this helps!
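For the first approach, a minimal sketch of wrapping an exported ARM template in Terraform could look like the following; the template file name and parameter are placeholders of my own, not part of the original deployment:
# Hypothetical wrapper around an ARM template exported from the portal deployment
resource "azurerm_template_deployment" "eventgrid_sub" {
  name                = "eventgrid-subscription-deployment"
  resource_group_name = "${azurerm_resource_group.test.name}"
  deployment_mode     = "Incremental"

  # eventgrid-sub.json stands in for the exported template file
  template_body = "${file("eventgrid-sub.json")}"

  parameters = {
    # placeholder parameter name expected by the exported template
    eventSubscriptionName = "EventGrid-Sub1"
  }
}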

Azure alert creation via Terraform fails with error code 400

While creating a metric alert on a storage account via Terraform, I am getting error 400.
I've gone through the documentation and cross-verified that the name I am using for the alert creation is correct.
resource "azurerm_metric_alertrule" "test" {
name = "alerttestacc"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
description = "An alert rule to watch the metric Used capacity"
enabled = true
resource_id = "${azurerm_storage_account.to_monitor.id}"
metric_name = "UsedCapacity"
operator = "GreaterThan"
threshold = 20
aggregation = "Total"
period = "PT5M"
email_action {
send_to_service_owners = false
custom_emails = [
"xyz#gmail.com",
]
}
webhook_action {
service_uri = "https://example.com/some-url"
properties = {
severity = "incredible"
acceptance_test = "true"
}
}
Expected: Alert should be created
Actual:
azurerm_metric_alertrule.test: insights.AlertRulesClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="UnsupportedMetric" Message="The metric with namespace '' and name 'UsedCapacity' is not supported for this resource id"
You could use azurerm_monitor_metric_alert instead of azurerm_metric_alertrule to create a UsedCapacity metric alert for the storage account. The difference comes from the separate experiences for classic alerts and new alerts in Azure Monitor; see the alerts overview.
This example works on my side.
resource "azurerm_resource_group" "main" {
name = "example-resources"
location = "West US"
}
resource "azurerm_storage_account" "to_monitor" {
name = "examplestorageaccount123"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_monitor_action_group" "main" {
name = "example-actiongroup"
resource_group_name = "${azurerm_resource_group.main.name}"
short_name = "exampleact"
webhook_receiver {
name = "callmyapi"
service_uri = "http://example.com/alert"
}
}
resource "azurerm_monitor_metric_alert" "test" {
name = "example-metricalert"
resource_group_name = "${azurerm_resource_group.main.name}"
scopes = ["${azurerm_storage_account.to_monitor.id}"]
description = "Action will be triggered when the Used capacity is Greater than 777 bytes."
criteria {
metric_namespace = "Microsoft.Storage/storageAccounts"
metric_name = "UsedCapacity"
aggregation = "Total"
operator = "GreaterThan"
threshold = 777
}
action {
action_group_id = "${azurerm_monitor_action_group.main.id}"
}
}
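The original rule also notified by e-mail; with the new-style alert, e-mail is configured on the action group rather than on the alert rule itself. A minimal sketch, extending the action group above with a placeholder address of my own:
resource "azurerm_monitor_action_group" "main" {
  name                = "example-actiongroup"
  resource_group_name = "${azurerm_resource_group.main.name}"
  short_name          = "exampleact"

  # hypothetical e-mail receiver; replace the address with your own
  email_receiver {
    name          = "sendtoowner"
    email_address = "xyz@example.com"
  }

  webhook_receiver {
    name        = "callmyapi"
    service_uri = "http://example.com/alert"
  }
}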
