I'm currently using Terraform and bits of PowerShell to automate all of my infrastructure, and I'm seeking a fully automated way to configure Update Management for all of my VMs. I'm able to deploy the Automation Account, Log Analytics Workspace, and a linked service resource to manage the connection between the two. However, I'm unable to enable the Update Management service on the Automation Account.
Is there any automatable means (PowerShell, Terraform, REST API, etc.) by which I can simply enable Update Management for my Automation Account?
Here is a Terraform module that creates an automation account, links it to a Log Analytics workspace (the workspace ID is passed in as a variable in this example), and then adds the required Update Management and/or Change Tracking solutions to the workspace.
This module was built using Terraform 0.11.13 with AzureRM provider version 1.28.0.
# Create the automation account
resource "azurerm_automation_account" "aa" {
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
name = "${var.name}"
sku {
name = "${var.sku}"
}
tags = "${var.tags}"
}
# Link automation account to a Log Analytics Workspace.
# Only deployed if enable_update_management and/or enable_change_tracking are/is set to true
resource "azurerm_log_analytics_linked_service" "law_link" {
count = "${var.enable_update_management || var.enable_change_tracking ? 1 : 0}"
resource_group_name = "${var.resource_group_name}"
workspace_name = "${element(split("/", var.log_analytics_workspace_id), length(split("/", var.log_analytics_workspace_id)) - 1)}"
linked_service_name = "automation"
resource_id = "${azurerm_automation_account.aa.id}"
}
# Add Updates workspace solution to log analytics if enable_update_management is set to true.
# Adding this solution to the log analytics workspace, combined with above linked service resource enables update management for the automation account.
resource "azurerm_log_analytics_solution" "law_solution_updates" {
count = "${var.enable_update_management}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
solution_name = "Updates"
workspace_resource_id = "${var.log_analytics_workspace_id}"
workspace_name = "${element(split("/", var.log_analytics_workspace_id), length(split("/", var.log_analytics_workspace_id)) - 1)}"
plan {
publisher = "Microsoft"
product = "OMSGallery/Updates"
}
}
# Add ChangeTracking workspace solution to log analytics if enable_change_tracking is set to true.
# Adding this solution to the log analytics workspace, combined with above linked service resource enables Change Tracking and Inventory for the automation account.
resource "azurerm_log_analytics_solution" "law_solution_change_tracking" {
count = "${var.enable_change_tracking}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
solution_name = "ChangeTracking"
workspace_resource_id = "${var.log_analytics_workspace_id}"
workspace_name = "${element(split("/", var.log_analytics_workspace_id), length(split("/", var.log_analytics_workspace_id)) - 1)}"
plan {
publisher = "Microsoft"
product = "OMSGallery/ChangeTracking"
}
}
# Send logs to Log Analytics
# Required for automation account with update management and/or change tracking enabled.
# Optional on automation accounts used for other purposes.
resource "azurerm_monitor_diagnostic_setting" "aa_diags_logs" {
count = "${var.enable_logs_collection || var.enable_update_management || var.enable_change_tracking ? 1 : 0}"
name = "LogsToLogAnalytics"
target_resource_id = "${azurerm_automation_account.aa.id}"
log_analytics_workspace_id = "${var.log_analytics_workspace_id}"
log {
category = "JobLogs"
enabled = true
retention_policy {
enabled = false
}
}
log {
category = "JobStreams"
enabled = true
retention_policy {
enabled = false
}
}
log {
category = "DscNodeStatus"
enabled = true
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
enabled = false
retention_policy {
enabled = false
}
}
}
# Send metrics to Log Analytics
resource "azurerm_monitor_diagnostic_setting" "aa_diags_metrics" {
count = "${var.enable_metrics_collection || var.enable_update_management || var.enable_change_tracking ? 1 : 0}"
name = "MetricsToLogAnalytics"
target_resource_id = "${azurerm_automation_account.aa.id}"
log_analytics_workspace_id = "${var.metrics_log_analytics_workspace_id}"
log {
category = "JobLogs"
enabled = false
retention_policy {
enabled = false
}
}
log {
category = "JobStreams"
enabled = false
retention_policy {
enabled = false
}
}
log {
category = "DscNodeStatus"
enabled = false
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
enabled = true
retention_policy {
enabled = false
}
}
}
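For reference, the module above uses several input variables that are not shown. A minimal variables sketch in Terraform 0.11 syntax might look like the following; the variable names are taken from the module, but the types and defaults are assumptions:
# Hypothetical variables.tf for the module above (defaults are assumptions).
variable "resource_group_name" {}
variable "location" {}
variable "name" {}

variable "sku" {
  default = "Basic"
}

variable "tags" {
  type    = "map"
  default = {}
}

variable "log_analytics_workspace_id" {}

# Workspace that receives the AllMetrics diagnostic setting; can be the same workspace.
variable "metrics_log_analytics_workspace_id" {}

variable "enable_update_management" {
  default = false
}

variable "enable_change_tracking" {
  default = false
}

variable "enable_logs_collection" {
  default = false
}

variable "enable_metrics_collection" {
  default = false
}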
As far as I understand, this is what you need:
{
"type": "Microsoft.OperationalInsights/workspaces",
"name": "[variables('namespace')]",
"apiVersion": "2017-03-15-preview",
"location": "[resourceGroup().location]",
"properties": {
"sku": {
"name": "Standalone"
}
},
"resources": [
{
"name": "Automation", # this onboards automation to oms, which is what you need
"type": "linkedServices",
"apiVersion": "2015-11-01-preview",
"dependsOn": [
"[variables('automation')]",
"[variables('namespace')]"
],
"properties": {
"resourceId": "[resourceId('Microsoft.Automation/automationAccounts/', variables('automation'))]"
}
}
]
},
{
"type": "Microsoft.Automation/automationAccounts",
"name": "[variables('automation')]",
"apiVersion": "2015-10-31",
"location": "[resourceGroup().location]",
"properties": {
"sku": {
"name": "OMS"
}
}
},
{
"type": "Microsoft.OperationsManagement/solutions", # this install update management solution, you probably need this for update management
"name": "[concat(variables('solutions')[copyIndex()],'(', variables('namespace'), ')')]",
"apiVersion": "2015-11-01-preview",
"location": "[resourceGroup().location]",
"copy": {
"name": "solutions",
"count": "[length(variables('solutions'))]"
},
"plan": {
"name": "[concat(variables('solutions')[copyIndex()], '(', variables('namespace'), ')')]",
"promotionCode": "",
"product": "[concat('OMSGallery/', variables('solutions')[copyIndex()])]",
"publisher": "Microsoft"
},
"properties": {
"workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces', variables('namespace'))]"
},
"dependsOn": [
"[variables('namespace')]"
]
}
Here's the variable I'm using to define the solutions to be installed:
"solutions": [
"AlertManagement",
"Updates",
"Security"
]
Basically, you can map this to API calls 1-to-1.
I am trying to code an Azure Data Factory in Terraform, but I am not sure how to code this REST dataset:
{
"name": "RestResource1",
"properties": {
"linkedServiceName": {
"referenceName": "API_Connection",
"type": "LinkedServiceReference"
},
"annotations": [],
"type": "RestResource",
"schema": []
},
"type": "Microsoft.DataFactory/factories/datasets"
}
I don't see one in the azurerm documentation. Can one use an azurerm_data_factory_dataset_http resource instead?
azurerm_data_factory_linked_service_rest - does not currently exist.
azurerm_data_factory_linked_service_web - only supports a web table, not a REST API endpoint, and can't be used with the Azure integration runtime.
When I tried to create the linked service using the REST and HTTP types, Terraform always ended up creating a web table. Hence, for now, the fix is to use azurerm_data_factory_linked_custom_service.
Here is an example of how to create a custom linked service:
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "example" {
name = "Your Resource Group"
}
data "azurerm_data_factory" "example" {
name = "vipdashadf"
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_data_factory_linked_custom_service" "example" {
name = "ipdashlinkedservice"
data_factory_id = data.azurerm_data_factory.example.id
type = "RestService"
description = "test for rest linked"
type_properties_json = <<JSON
{
"url": "http://www.bing.com",
"enableServerCertificateValidation": false,
"authenticationType": "Anonymous"
}
JSON
annotations = []
}
resource "azurerm_data_factory_dataset_http" "example" {
name = "apidataset"
resource_group_name = data.azurerm_resource_group.example.name
data_factory_name = data.azurerm_data_factory.example.name
linked_service_name = azurerm_data_factory_linked_custom_service.example.name
relative_url = "http://www.bing.com"
request_body = "foo=bar"
request_method = "POST"
}
Outputs:
Linked service: ipdashlinkedservice (type: RestService connector)
Dataset: apidataset
You could find the same stated in the GitHub discussion: Support for Azure Data Factory Linked Service for REST API #9431
I am using Terraform version 0.15.5 (and also 0.15.4) to provision an Azure Web App resource. The following configuration works fine:
site_config {
http2_enabled = true
always_on = false
use_32_bit_worker_process = true
}
But when I use use_32_bit_worker_process = false to have the script provision a 64-bit web app, it fails and I get the following error message:
2021-06-03T18:06:55.6392592Z │ Error: Error creating App Service "gfdemogatewayapp" (Resource Group "MASKED"): web.AppsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> <nil>
2021-06-03T18:06:55.6411094Z │
2021-06-03T18:06:55.6426506Z │   with azurerm_app_service.gfgatewayapp,
2021-06-03T18:06:55.6427703Z │   on main.tf line 274, in resource "azurerm_app_service" "gfgatewayapp":
2021-06-03T18:06:55.6428766Z │  274: resource "azurerm_app_service" "gfgatewayapp" {
2021-06-03T18:06:55.6429584Z │
2021-06-03T18:06:55.6430461Z ╵
2021-06-03T18:06:55.6534148Z ##[error]Error: The process '/opt/hostedtoolcache/terraform/0.15.4/x64/terraform' failed with exit code 1
2021-06-03T18:06:55.6548186Z ##[section]Finishing: Terraform approve and apply
Is there something that I am missing, or does Terraform have an issue provisioning a 64-bit web app resource on Azure?
UPDATE: The full source code
The tier is Standard and the SKU size is "F1".
resource "azurerm_app_service_plan" "gfwebappserviceplan" {
name = var.gatewayserviceplanname
location = "${azurerm_resource_group.gf.location}"
resource_group_name = "${azurerm_resource_group.gf.name}"
sku {
tier = var.gatewayserviceplanskutier
size = var.gatewayserviceplanskusize
}
}
resource "azurerm_app_service" "gfgatewayapp" {
name = var.gatewayappname
location = "${azurerm_resource_group.gf.location}"
resource_group_name = "${azurerm_resource_group.gf.name}"
app_service_plan_id = azurerm_app_service_plan.gfwebappserviceplan.id
app_settings = {
APPINSIGHTS_INSTRUMENTATIONKEY = "${azurerm_application_insights.gfapplicationinsights.instrumentation_key}"
}
site_config {
http2_enabled = true
always_on = false
use_32_bit_worker_process = false
}
}
output "gfgatewayhostname" {
value = "${azurerm_app_service.gfgatewayapp.default_site_hostname}"
description = "Gateway default host name"
}
resource "azurerm_template_deployment" "webapp-corestack" {
# This will make it .NET CORE for Stack property, and add the dotnet core logging extension
name = "AspNetCoreStack"
resource_group_name = "${azurerm_resource_group.gf.name}"
template_body = <<DEPLOY
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"siteName": {
"type": "string",
"metadata": {
"description": "The Azure App Service Name"
}
},
"extensionName": {
"type": "string",
"metadata": {
"description": "The Site Extension Name."
}
},
"extensionVersion": {
"type": "string",
"metadata": {
"description": "The Extension Version"
}
}
},
"resources": [
{
"apiVersion": "2018-02-01",
"name": "[parameters('siteName')]",
"type": "Microsoft.Web/sites",
"location": "[resourceGroup().location]",
"properties": {
"name": "[parameters('siteName')]",
"siteConfig": {
"appSettings": [],
"metadata": [
{
"name": "CURRENT_STACK",
"value": "dotnetcore"
}
]
}
}
},
{
"type": "Microsoft.Web/sites/siteextensions",
"name": "[concat(parameters('siteName'), '/', parameters('extensionName'))]",
"apiVersion": "2018-11-01",
"location": "[resourceGroup().location]",
"properties": {
"version": "[parameters('extensionVersion')]"
}
}
]
}
DEPLOY
parameters = {
"siteName" = azurerm_app_service.gfgatewayapp.name
"extensionName" = "Microsoft.AspNetCore.AzureAppServices.SiteExtension"
"extensionVersion" = "3.1.7"
}
deployment_mode = "Incremental"
depends_on = [azurerm_app_service.gfgatewayapp]
}
Since this is the first Google result you get when searching for
"AppsClient#CreateOrUpdate: Failure sending request: StatusCode=0"
and that was my error, I will try to help other people who might stumble over this problem.
What I did was migrate a function app from azurerm provider version 2.x to 3.x. Since Terraform actually changed the resource type from azurerm_function_app to azurerm_windows_function_app, they also changed some properties.
What happened for me was that they added the properties application_insights_key and application_insights_connection_string to the site_config block. Before, you had to manually add a key called APPINSIGHTS_INSTRUMENTATIONKEY in the app_settings.
I used the new setting but forgot to get rid of the manually added key, and my function creation was failing with the above (not very verbose, if you ask me) error.
Took me a while to figure that out, so that's why I'm sharing this here.
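To make that concrete, here is a minimal sketch of the relevant part of an azurerm_windows_function_app under the 3.x provider. The resource names and referenced resources are illustrative, not from the original post; the point is that the instrumentation key now belongs in site_config, not in app_settings:
resource "azurerm_windows_function_app" "example" {
  name                       = "example-func"
  resource_group_name        = azurerm_resource_group.example.name
  location                   = azurerm_resource_group.example.location
  service_plan_id            = azurerm_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  site_config {
    # 3.x: Application Insights is configured here ...
    application_insights_key               = azurerm_application_insights.example.instrumentation_key
    application_insights_connection_string = azurerm_application_insights.example.connection_string
  }

  app_settings = {
    # ... so do NOT also add APPINSIGHTS_INSTRUMENTATIONKEY here;
    # having both was what produced the StatusCode=0 error for me.
  }
}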
You are getting this error because you are using an F1 tier App Service plan. The Free and Shared tiers do not have a 64-bit option.
Terraform AzureRM Registry
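For example, here is a sketch of a plan/app combination where use_32_bit_worker_process = false should be accepted; the tier and size values are assumptions (any Basic or higher tier should do), and the rest mirrors the configuration from the question:
resource "azurerm_app_service_plan" "gfwebappserviceplan" {
  name                = var.gatewayserviceplanname
  location            = azurerm_resource_group.gf.location
  resource_group_name = azurerm_resource_group.gf.name

  sku {
    tier = "Standard" # Free (F1) and Shared (D1) tiers only support 32-bit workers
    size = "S1"
  }
}

resource "azurerm_app_service" "gfgatewayapp" {
  name                = var.gatewayappname
  location            = azurerm_resource_group.gf.location
  resource_group_name = azurerm_resource_group.gf.name
  app_service_plan_id = azurerm_app_service_plan.gfwebappserviceplan.id

  site_config {
    http2_enabled             = true
    always_on                 = false
    use_32_bit_worker_process = false # allowed on Basic and above
  }
}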
If setting use_32_bit_worker_process (or use_32_bit_worker) to true does not help, then try running Terraform with logging enabled. Logging can be enabled by setting the TF_LOG environment variable, e.g.:
$ TF_LOG=debug terraform apply
This will produce a lot of logging, including the HTTP responses from Azure. One of the last logged responses should include more details about the cause. In my case it was because always_on is not supported on the free plan, but is enabled by default:
HTTP/2.0 409 Conflict
<snip>
{"Code":"Conflict","Message":"There was a conflict. AlwaysOn cannot be set for this site as the plan does not allow it. [...]}
I'm trying to automate Update Management in Azure Automation using Terraform, but I can't find information regarding the following 2 points:
The schedule created for the updates doesn't work. I assume the problem is that the runbook that defines which machines need to be updated, etc., is missing.
I can't find information on how to automatically enable Update Management for all machines in a specific resource group.
Here is the terraform code that I've done.
#Creates automation account
resource "azurerm_automation_account" "aa" {
name = local.autoac
location = local.region
resource_group_name = local.rg
sku_name = "Basic"
tags = {
environment = "test"
}
}
#Creates the schedule for updates
resource "azurerm_automation_schedule" "std-update" {
name = "Weekly-Sunday-6am"
resource_group_name = local.rg
automation_account_name = azurerm_automation_account.aa.name
frequency = "Week"
interval = 1
timezone = "Europe/Berlin"
start_time = "2021-04-28T18:00:15+02:00"
description = "Standard schedule for updates"
week_days = ["Sunday"]
}
#Creates log analitycs workspace
resource "azurerm_log_analytics_workspace" "law" {
name = local.lawname
location = local.region
resource_group_name = local.rg
sku = "PerGB2018"
retention_in_days = 30
tags = {
environment = "test"
}
}
# Link automation account to a Log Analytics Workspace.
resource "azurerm_log_analytics_linked_service" "autoacc_linked_log_workspace" {
resource_group_name = local.rg
workspace_id = azurerm_log_analytics_workspace.law.id
read_access_id = azurerm_automation_account.aa.id
}
# Add Updates workspace solution to log analytics
resource "azurerm_log_analytics_solution" "law_solution_updates" {
resource_group_name = local.rg
location = local.region
solution_name = "Updates"
workspace_resource_id = azurerm_log_analytics_workspace.law.id
workspace_name = azurerm_log_analytics_workspace.law.name
plan {
publisher = "Microsoft"
product = "OMSGallery/Updates"
}
}
Update regarding the question:
I figured out that the option to create an update schedule in Update Management is not available in Terraform yet. That's why we need to do this via an ARM template deployed from the Terraform config.
With the help of the previous comment, I was able to create the following schedule:
#Creates schedule for windows VM to update Monthly on 3rd Sunday
resource "azurerm_template_deployment" "windows-prod-3rd-Sunday" {
name = "windows-prod-3rd-Sunday"
resource_group_name = local.rg
template_body = <<DEPLOY
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"apiVersion": "2017-05-15-preview",
"type": "Microsoft.Automation/automationAccounts/softwareUpdateConfigurations",
"name": "${azurerm_automation_account.aa.name}/windows-prod-3rd-Sunday",
"properties": {
"updateConfiguration": {
"operatingSystem": "Windows",
"duration": "PT${local.update_max_hours}H",
"windows": {
"excludedKbNumbers": [
],
"includedUpdateClassifications": "${local.update_classifications}",
"rebootSetting": "${local.update_reboot_settings}"
},
"targets": {
"azureQueries": [
{
"scope": [
"/subscriptions/${local.subscriptionid}/resourceGroups/${local.rg}",
"/subscriptions/${local.subscriptionid}/resourceGroups/${local.rg}",
"/subscriptions/${local.subscriptionid}/resourceGroups/${local.rg}"
],
"tagSettings": {
"tags": {
"environment": [
"Prod"
],
"updatedate": [
"3rd_Sunday"
]
},
"filterOperator": "All"
},
"locations": [
"West Europe"
]
}
]
}
},
"scheduleInfo": {
"frequency": "Month",
"startTime": "${local.update_date}T${local.update_time}:00+00:00",
"timeZone": "${local.update_timezone}",
"interval": 1,
"advancedSchedule": {
"monthlyOccurrences": [
{
"occurrence": "${local.sunday_3}",
"day": "${local.update_day}"
}
]
}
}
}
}
]
}
DEPLOY
deployment_mode = "Incremental"
}
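The template above references a number of locals (update_max_hours, update_classifications, and so on) that are not shown; local.rg, local.region, and local.subscriptionid are presumably defined alongside the locals already used earlier. Purely as a hypothetical sketch, a matching locals block could look like this:
# Hypothetical values only - adjust to your own maintenance window and policy.
locals {
  update_max_hours       = 3
  update_classifications = "Critical, Security, UpdateRollup, Definition, Updates"
  update_reboot_settings = "IfRequired"
  update_date            = "2021-05-16"
  update_time            = "03:00"
  update_timezone        = "Europe/Berlin"
  update_day             = "Sunday"
  sunday_3               = 3
}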
Enabling VM diagnostics in Azure is such a pain. I've gotten it working using ARM templates, the Azure PowerShell SDK, and the Azure CLI. But I've been trying for days now to enable VM diagnostics for both Windows and Linux VMs using Terraform and the azurerm_virtual_machine_extension resource. Still not working, ugh!
Here's what I have so far (I've tweaked this a bit to simplify it for this post, so I hope I didn't break anything with my manual edits):
resource "azurerm_virtual_machine_extension" "vm-linux" {
count = "${local.is_windows_vm == "false" ? 1 : 0}"
depends_on = ["azurerm_virtual_machine_data_disk_attachment.vm"]
name = "LinuxDiagnostic"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${local.vm_name}"
publisher = "Microsoft.Azure.Diagnostics"
type = "LinuxDiagnostic"
type_handler_version = "3.0"
auto_upgrade_minor_version = "true"
# The JSON file referenced below was created by running "az vm diagnostics get-default-config", and adding/verifying the "__DIAGNOSTIC_STORAGE_ACCOUNT__" and "__VM_RESOURCE_ID__" placeholders.
settings = <<SETTINGS
{
"ladCfg": "${base64encode(replace(replace(file("${path.module}/.diag-settings/linux_diag_config.json"), "__DIAGNOSTIC_STORAGE_ACCOUNT__", "${module.vm_storage_account.name}"), "__VM_RESOURCE_ID__", "${local.metricsresourceid}"))}",
"storageAccount": "${module.vm_storage_account.name}"
}
SETTINGS
# SAS token below: Do not include the leading question mark, as per https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/diagnostics-linux.
protected_settings = <<SETTINGS
{
"storageAccountName": "${module.vm_storage_account.name}",
"storageAccountSasToken": "${replace(data.azurerm_storage_account_sas.current.sas, "/^\\?/", "")}",
"storageAccountEndPoint": "https://core.windows.net/"
}
SETTINGS
}
resource "azurerm_virtual_machine_extension" "vm-win" {
count = "${local.is_windows_vm == "true" ? 1 : 0}"
depends_on = ["azurerm_virtual_machine_data_disk_attachment.vm"]
name = "Microsoft.Insights.VMDiagnosticsSettings"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${local.vm_name}"
publisher = "Microsoft.Azure.Diagnostics"
type = "IaaSDiagnostics"
type_handler_version = "1.9"
auto_upgrade_minor_version = "true"
# The JSON file referenced below was created by running "az vm diagnostics get-default-config --is-windows-os", and adding/verifying the "__DIAGNOSTIC_STORAGE_ACCOUNT__" and "__VM_RESOURCE_ID__" placeholders.
settings = <<SETTINGS
{
"wadCfg": "${base64encode(replace(replace(file("${path.module}/.diag-settings/windows_diag_config.json"), "__DIAGNOSTIC_STORAGE_ACCOUNT__", "${module.vm_storage_account.name}"), "__VM_RESOURCE_ID__", "${local.metricsresourceid}"))}",
"storageAccount": "${module.vm_storage_account.name}"
}
SETTINGS
protected_settings = <<SETTINGS
{
"storageAccountName": "${module.vm_storage_account.name}",
"storageAccountSasToken": "${data.azurerm_storage_account_sas.current.sas}",
"storageAccountEndPoint": "https://core.windows.net/"
}
SETTINGS
}
Notice that for both Linux and Windows I'm loading the diagnostics details from a JSON file within the code base, as per the comments. These are the default configs provided by Azure, so they should be valid.
When I deploy these, the Linux VM extension deploys successfully, but in the Azure portal the extension says "Problems detected in generated mdsd configuration". And if I look at the VM's "Diagnostic settings" it says "Error encountered: TypeError: Object doesn't support property or method 'diagnosticMonitorConfiguration'".
The Windows VM extension fails to deploy altogether, saying that it "Failed to read configuration". If I view the extension in the portal it displays the following error:
"code": "ComponentStatus//failed/-3",
"level": "Error",
"displayStatus": "Provisioning failed",
"message": "Error starting the diagnostics extension"
And if I look at the "Diagnostics settings" pane it just hangs with a never-ending ". . ." animation.
However, if I look at the "terraform apply" output for both VM extensions, the decoded settings look exactly as intended, matching the config files with the placeholders correctly replaced.
Any suggestions on how to get this working?
Thanks in advance!
I've gotten the Windows diagnostics to work 100% so far in our environment. It seems the AzureRM API is very picky about the config being sent. We had been using PowerShell to enable it, and the same xmlCfg used in PowerShell DID NOT WORK with Terraform.
So far this has worked for us (the settings/protected_settings names are case sensitive! i.e. xmlCfg works, while xmlcfg does not):
main.tf
#########################################################
# VM Extensions - Windows In-Guest Monitoring/Diagnostics
#########################################################
resource "azurerm_virtual_machine_extension" "InGuestDiagnostics" {
name = var.compute["InGuestDiagnostics"]["name"]
location = azurerm_resource_group.VMResourceGroup.location
resource_group_name = azurerm_resource_group.VMResourceGroup.name
virtual_machine_name = azurerm_virtual_machine.Compute.name
publisher = var.compute["InGuestDiagnostics"]["publisher"]
type = var.compute["InGuestDiagnostics"]["type"]
type_handler_version = var.compute["InGuestDiagnostics"]["type_handler_version"]
auto_upgrade_minor_version = var.compute["InGuestDiagnostics"]["auto_upgrade_minor_version"]
settings = <<SETTINGS
{
"xmlCfg": "${base64encode(templatefile("${path.module}/templates/wadcfgxml.tmpl", { vmid = azurerm_virtual_machine.Compute.id }))}",
"storageAccount": "${data.azurerm_storage_account.InGuestDiagStorageAccount.name}"
}
SETTINGS
protected_settings = <<PROTECTEDSETTINGS
{
"storageAccountName": "${data.azurerm_storage_account.InGuestDiagStorageAccount.name}",
"storageAccountKey": "${data.azurerm_storage_account.InGuestDiagStorageAccount.primary_access_key}",
"storageAccountEndPoint": "https://core.windows.net"
}
PROTECTEDSETTINGS
}
tfvars
InGuestDiagnostics = {
name = "WindowsDiagnostics"
publisher = "Microsoft.Azure.Diagnostics"
type = "IaaSDiagnostics"
type_handler_version = "1.16"
auto_upgrade_minor_version = "true"
}
wadcfgxml.tmpl (I cut out some of the Perf counters for brevity)
<WadCfg>
<DiagnosticMonitorConfiguration overallQuotaInMB="5120">
<DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Error"/>
<Metrics resourceId="${vmid}">
<MetricAggregation scheduledTransferPeriod="PT1H"/>
<MetricAggregation scheduledTransferPeriod="PT1M"/>
</Metrics>
<PerformanceCounters scheduledTransferPeriod="PT1M">
<PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% Processor Time" sampleRate="PT60S" unit="Percent" />
<PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% Privileged Time" sampleRate="PT60S" unit="Percent" />
<PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% User Time" sampleRate="PT60S" unit="Percent" />
<PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\Processor Frequency" sampleRate="PT60S" unit="Count" />
<PerformanceCounterConfiguration counterSpecifier="\System\Processes" sampleRate="PT60S" unit="Count" />
<PerformanceCounterConfiguration counterSpecifier="\SQLServer:SQL Statistics\SQL Re-Compilations/sec" sampleRate="PT60S" unit="Count" />
</PerformanceCounters>
<WindowsEventLog scheduledTransferPeriod="PT1M">
<DataSource name="Application!*[System[(Level = 1 or Level = 2)]]"/>
<DataSource name="Security!*[System[(Level = 1 or Level = 2)]"/>
<DataSource name="System!*[System[(Level = 1 or Level = 2)]]"/>
</WindowsEventLog>
</DiagnosticMonitorConfiguration>
</WadCfg>
I finally got the Linux in-guest diagnostics (LAD) to work. A few notable facts: unlike the Windows diagnostics, the settings need to be transmitted as JSON, with no base64 encoding. Additionally, LAD seems to require a SAS token for the storage account. The usual caveats about the AzureRM API being picky about the config, and the settings being case sensitive, still apply. Here is what is working for me so far:
# Locals
locals {
env = var.workspace[terraform.workspace]
# Use a set/static time to avoid TF from recreating the SAS token every apply, which would then cause it to
# modify/recreate anything that uses it. Not ideal, but the token is for a VERY long time, so it will do for now
sas_begintime = "2019-11-22T00:00:00Z"
sas_endtime = timeadd(local.sas_begintime, "873600h")
}
#########################################################
# VM Extensions - In-Guest Diagnostics
#########################################################
# We need a SAS token for the In-Guest Metrics
data "azurerm_storage_account_sas" "inguestdiagnostics" {
count = (contains(keys(local.env), "InGuestDiagnostics") ? 1 : 0)
connection_string = data.azurerm_storage_account.BootDiagStorageAccount.primary_connection_string
https_only = true
resource_types {
service = true
container = true
object = true
}
services {
blob = true
queue = true
table = true
file = true
}
start = local.sas_begintime
expiry = local.sas_endtime
permissions {
read = true
write = true
delete = true
list = true
add = true
create = true
update = true
process = true
}
}
resource "azurerm_virtual_machine_extension" "inguestdiagnostics" {
for_each = contains(keys(local.env), "InGuestDiagnostics") ? local.env["InGuestDiagnostics"] : {}
depends_on = [azurerm_virtual_machine_extension.dependencyagent]
name = each.value["name"]
location = azurerm_resource_group.resourcegroup.location
resource_group_name = azurerm_resource_group.resourcegroup.name
virtual_machine_name = azurerm_virtual_machine.compute["${each.key}"].name
publisher = each.value["publisher"]
type = each.value["type"]
type_handler_version = each.value["type_handler_version"]
auto_upgrade_minor_version = each.value["auto_upgrade_minor_version"]
settings = templatefile("${path.module}/templates/ladcfg2json.tmpl", { vmid = azurerm_virtual_machine.compute["${each.key}"].id, storageAccountName = data.azurerm_storage_account.BootDiagStorageAccount.name })
protected_settings = <<PROTECTEDSETTINGS
{
"storageAccountName": "${data.azurerm_storage_account.BootDiagStorageAccount.name}",
"storageAccountSasToken": "${replace(data.azurerm_storage_account_sas.inguestdiagnostics.0.sas, "/^\\?/", "")}"
}
PROTECTEDSETTINGS
}
# These variations didn't work for me:
# "ladCfg": "${templatefile("${path.module}/templates/ladcfgjson.tmpl", { vmid = azurerm_virtual_machine.compute["${each.key}"].id, storageAccountName = data.azurerm_storage_account.BootDiagStorageAccount.name })}",
# - This one gets you: Error: "settings" contains an invalid JSON: invalid character '\n' in string literal, or: Error: "settings" contains an invalid JSON: invalid character 'S' after object key:value pair
# "ladCfg": "${replace(data.local_file.ladcfgjson["${each.key}"].content, "/\\n/", "")}",
# - This one gets you: Error: "settings" contains an invalid JSON: invalid character 'S' after object key:value pair
tfvars
workspace = {
TerraformWorkSpaceName = {
compute = {
# Add additional key/objects for additional Compute
computer01 = {
name = "computer01"
}
}
InGuestDiagnostics = {
# Add additional key/objects for each Compute you want to install the InGuestDiagnostics on
computer01 = {
name = "LinuxDiagnostic"
publisher = "Microsoft.Azure.Diagnostics"
type = "LinuxDiagnostic"
type_handler_version = "3.0"
auto_upgrade_minor_version = "true"
}
}
}
}
I couldn't get a template file to work without wrapping the WHOLE thing in jsonencode.
ladcfg2json.tmpl
${jsonencode({
"StorageAccount": "${storageAccountName}",
"ladCfg": {
"sampleRateInSeconds": 15,
"diagnosticMonitorConfiguration": {
"metrics": {
"metricAggregation": [
{
"scheduledTransferPeriod": "PT1M"
},
{
"scheduledTransferPeriod": "PT1H"
}
],
"resourceId": "${vmid}"
},
"eventVolume": "Medium",
"performanceCounters": {
"sinks": "",
"performanceCounterConfiguration": [
{
"counterSpecifier": "/builtin/processor/percentiowaittime",
"condition": "IsAggregate=TRUE",
"sampleRate": "PT15S",
"annotation": [
{
"locale": "en-us",
"displayName": "CPU IO wait time"
}
],
"unit": "Percent",
"class": "processor",
"counter": "percentiowaittime",
"type": "builtin"
}
]
},
"syslogEvents": {
"syslogEventConfiguration": {
"LOG_LOCAL0": "LOG_DEBUG"
}
}
}
}
})}
I hope this helps.
As the question was asked more than a year ago, this is more for people like me who are trying this for the first time.
We only use Linux VMs, so this advice applies to those:
protected settings should use PROTECTED_SETTINGS, not SETTINGS (which you can see in rv23's answer above)
From the documentation I am following, https://learn.microsoft.com/en-gb/azure/virtual-machines/extensions/diagnostics-linux#protected-settings, you can see that you need to specify storageAccountSasToken, not storageAccountKey:
Here is my redacted version of the config (replace all the bits in ALL CAPS with your own settings):
resource "azurerm_virtual_machine_extension" "vm_linux_diagnostics" {
count = "1"
name = "NAME"
resource_group_name = "YOUR RESOURCE GROUP NAME"
location = "YOUR LOCATION"
virtual_machine_name = "TARGET MACHINE NAME"
publisher = "Microsoft.Azure.Diagnostics"
type = "LinuxDiagnostic"
type_handler_version = "3.0"
auto_upgrade_minor_version = "true"
settings = <<SETTINGS
{
"StorageAccount": "tfnpfsnhsuk",
"ladCfg": {
"sampleRateInSeconds": 15,
"diagnosticMonitorConfiguration": {
"metrics": {
"metricAggregation": [
{
"scheduledTransferPeriod": "PT1M"
},
{
"scheduledTransferPeriod": "PT1H"
}
],
"resourceId": "VM ID"
},
"eventVolume": "Medium",
"performanceCounters": {
"sinks": "",
.... MORE METRICS - THAT YOU REQUIRE
}
}
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "YOUR_ACCOUNT_NAME",
"storageAccountSasToken": "YOUR SAS TOKEN"
}
PROTECTED_SETTINGS
tags = "YOUR TAG"
}
Just got this working on a similar question:
Trying to add LinuxDiagnostic Azure VM Extension through terraform and getting errors
This includes getting the SAS token and reading from json files.
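As a rough sketch of that approach (the file name and data source name here are assumptions, not taken from the linked answer), the LAD settings can be read from a JSON file and the SAS token cleaned up in locals before being passed to the extension:
# Hypothetical helpers: load the LAD config from a JSON file and strip the
# leading "?" that the azurerm_storage_account_sas data source prepends.
locals {
  lad_settings_json = file("${path.module}/lad_settings.json")
  sas_token         = trimprefix(data.azurerm_storage_account_sas.diag.sas, "?")
}
These locals can then be interpolated into the settings and protected_settings of the azurerm_virtual_machine_extension, as in the answers above.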
I'm trying to set up my Azure infrastructure using Terraform, which has been pretty successful so far. Our app development team needs to define application-specific roles within the Azure AD application's manifest, which we currently handle via the Azure Portal by simply modifying the manifest:
"appRoles": [
{
"allowedMemberTypes": [
"Application"
],
"displayName": "SurveyCreator",
"id": "1b4f816e-5eaf-48b9-8613-7923830595ad",
"isEnabled": true,
"description": "Creators can create Surveys",
"value": "SurveyCreator"
}
]
Using Terraform I created an azurerm_azuread_application and now want to modify the manifest accordingly.
resource "azurerm_azuread_application" "test" {
name = "APP"
homepage = "http://APPHOMEPAGE"
identifier_uris = ["http://APPHOMEPAGE"]
reply_urls = ["http://APPHOMEPAGE/REPLYURL"]
available_to_other_tenants = false
oauth2_allow_implicit_flow = false
}
Is there a way to achieve this by using Terraform only?
To create the App role, you could refer to azuread_application_app_role.
resource "azuread_application" "example" {
name = "example"
}
resource "azuread_application_app_role" "example" {
application_object_id = azuread_application.example.id
allowed_member_types = ["User"]
description = "Admins can manage roles and perform all task actions"
display_name = "Admin"
is_enabled = true
value = "administer"
}
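Alternatively, with more recent versions of the azuread provider (2.x), the same role from the manifest in the question can be declared inline as an app_role block on the application itself. A sketch, reusing the UUID and values from the question's manifest:
resource "azuread_application" "test" {
  display_name = "APP"

  app_role {
    id                   = "1b4f816e-5eaf-48b9-8613-7923830595ad"
    allowed_member_types = ["Application"]
    description          = "Creators can create Surveys"
    display_name         = "SurveyCreator"
    enabled              = true
    value                = "SurveyCreator"
  }
}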