Enabling VM diagnostics in Azure is such a pain. I've gotten it working using ARM templates, the Azure PowerShell SDK, and the Azure CLI. But I've been trying for days now to enable VM diagnostics for both Windows and Linux VMs using Terraform and the azurerm_virtual_machine_extension resource. Still not working, ugh!
Here's what I have so far (I've tweaked this a bit to simplify it for this post, so I hope I didn't break anything with my manual edits):
resource "azurerm_virtual_machine_extension" "vm-linux" {
count = "${local.is_windows_vm == "false" ? 1 : 0}"
depends_on = ["azurerm_virtual_machine_data_disk_attachment.vm"]
name = "LinuxDiagnostic"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${local.vm_name}"
publisher = "Microsoft.Azure.Diagnostics"
type = "LinuxDiagnostic"
type_handler_version = "3.0"
auto_upgrade_minor_version = "true"
# The JSON file referenced below was created by running "az vm diagnostics get-default-config", and adding/verifying the "__DIAGNOSTIC_STORAGE_ACCOUNT__" and "__VM_RESOURCE_ID__" placeholders.
settings = <<SETTINGS
{
"ladCfg": "${base64encode(replace(replace(file("${path.module}/.diag-settings/linux_diag_config.json"), "__DIAGNOSTIC_STORAGE_ACCOUNT__", "${module.vm_storage_account.name}"), "__VM_RESOURCE_ID__", "${local.metricsresourceid}"))}",
"storageAccount": "${module.vm_storage_account.name}"
}
SETTINGS
# SAS token below: Do not include the leading question mark, as per https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/diagnostics-linux.
protected_settings = <<SETTINGS
{
"storageAccountName": "${module.vm_storage_account.name}",
"storageAccountSasToken": "${replace(data.azurerm_storage_account_sas.current.sas, "/^\\?/", "")}",
"storageAccountEndPoint": "https://core.windows.net/"
}
SETTINGS
}
resource "azurerm_virtual_machine_extension" "vm-win" {
count = "${local.is_windows_vm == "true" ? 1 : 0}"
depends_on = ["azurerm_virtual_machine_data_disk_attachment.vm"]
name = "Microsoft.Insights.VMDiagnosticsSettings"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${local.vm_name}"
publisher = "Microsoft.Azure.Diagnostics"
type = "IaaSDiagnostics"
type_handler_version = "1.9"
auto_upgrade_minor_version = "true"
# The JSON file referenced below was created by running "az vm diagnostics get-default-config --is-windows-os", and adding/verifying the "__DIAGNOSTIC_STORAGE_ACCOUNT__" and "__VM_RESOURCE_ID__" placeholders.
settings = <<SETTINGS
{
"wadCfg": "${base64encode(replace(replace(file("${path.module}/.diag-settings/windows_diag_config.json"), "__DIAGNOSTIC_STORAGE_ACCOUNT__", "${module.vm_storage_account.name}"), "__VM_RESOURCE_ID__", "${local.metricsresourceid}"))}",
"storageAccount": "${module.vm_storage_account.name}"
}
SETTINGS
protected_settings = <<SETTINGS
{
"storageAccountName": "${module.vm_storage_account.name}",
"storageAccountSasToken": "${data.azurerm_storage_account_sas.current.sas}",
"storageAccountEndPoint": "https://core.windows.net/"
}
SETTINGS
}
Notice that for both Linux and Windows I'm loading the diagnostics details from a JSON file within the code base, as per the comments. These are the default configs provided by Azure, so they should be valid.
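(As an aside: the same placeholder substitution can also be done with templatefile instead of nested replace() calls — a sketch only, assuming the JSON file were converted to Terraform template syntax; the .tmpl file name and template variable names below are illustrative.)
locals {
  # Renders the LAD config, substituting the ${diagnostic_storage_account} and
  # ${vm_resource_id} template placeholders at plan time.
  lad_config_json = templatefile("${path.module}/.diag-settings/linux_diag_config.json.tmpl", {
    diagnostic_storage_account = module.vm_storage_account.name
    vm_resource_id             = local.metricsresourceid
  })
}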
When I deploy these, the Linux VM extension deploys successfully, but in the Azure portal the extension says "Problems detected in generated mdsd configuration". And if I look at the VM's "Diagnostic settings" it says "Error encountered: TypeError: Object doesn't support property or method 'diagnosticMonitorConfiguration'".
The Windows VM extension fails to deploy altogether, saying that it "Failed to read configuration". If I view the extension in the portal it displays the following error:
"code": "ComponentStatus//failed/-3",
"level": "Error",
"displayStatus": "Provisioning failed",
"message": "Error starting the diagnostics extension"
And if I look at the "Diagnostics settings" pane it just hangs with a never-ending ". . ." animation.
However, if I look at the "terraform apply" output for both VM extensions, the decoded settings look exactly as intended, matching the config files with the placeholders correctly replaced.
Any suggestions on how to get this working?
Thanks in advance!
I've gotten the Windows diagnostics to work 100% so far in our environment. It seems the AzureRM API is very picky about the config being sent. We had been using PowerShell to enable it, and the same xmlCfg used in PowerShell DID NOT WORK with Terraform.
So far this has worked for us. (The settings/protected_settings keys are case sensitive! That is, xmlCfg works, while xmlcfg does not.)
main.tf
#########################################################
# VM Extensions - Windows In-Guest Monitoring/Diagnostics
#########################################################
resource "azurerm_virtual_machine_extension" "InGuestDiagnostics" {
name = var.compute["InGuestDiagnostics"]["name"]
location = azurerm_resource_group.VMResourceGroup.location
resource_group_name = azurerm_resource_group.VMResourceGroup.name
virtual_machine_name = azurerm_virtual_machine.Compute.name
publisher = var.compute["InGuestDiagnostics"]["publisher"]
type = var.compute["InGuestDiagnostics"]["type"]
type_handler_version = var.compute["InGuestDiagnostics"]["type_handler_version"]
auto_upgrade_minor_version = var.compute["InGuestDiagnostics"]["auto_upgrade_minor_version"]
settings = <<SETTINGS
{
"xmlCfg": "${base64encode(templatefile("${path.module}/templates/wadcfgxml.tmpl", { vmid = azurerm_virtual_machine.Compute.id }))}",
"storageAccount": "${data.azurerm_storage_account.InGuestDiagStorageAccount.name}"
}
SETTINGS
protected_settings = <<PROTECTEDSETTINGS
{
"storageAccountName": "${data.azurerm_storage_account.InGuestDiagStorageAccount.name}",
"storageAccountKey": "${data.azurerm_storage_account.InGuestDiagStorageAccount.primary_access_key}",
"storageAccountEndPoint": "https://core.windows.net"
}
PROTECTEDSETTINGS
}
tfvars
InGuestDiagnostics = {
  name                       = "WindowsDiagnostics"
  publisher                  = "Microsoft.Azure.Diagnostics"
  type                       = "IaaSDiagnostics"
  type_handler_version       = "1.16"
  auto_upgrade_minor_version = "true"
}
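(The var.compute declaration isn't shown in this answer; a declaration that would accept this shape — assuming the tfvars snippet above sits under a top-level compute map — would be along these lines.)
variable "compute" {
  # e.g. compute = { InGuestDiagnostics = { name = "...", publisher = "...", ... } }
  type = map(map(string))
}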
wadcfgxml.tmpl (I cut out some of the Perf counters for brevity)
<WadCfg>
  <DiagnosticMonitorConfiguration overallQuotaInMB="5120">
    <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Error"/>
    <Metrics resourceId="${vmid}">
      <MetricAggregation scheduledTransferPeriod="PT1H"/>
      <MetricAggregation scheduledTransferPeriod="PT1M"/>
    </Metrics>
    <PerformanceCounters scheduledTransferPeriod="PT1M">
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% Processor Time" sampleRate="PT60S" unit="Percent" />
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% Privileged Time" sampleRate="PT60S" unit="Percent" />
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% User Time" sampleRate="PT60S" unit="Percent" />
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\Processor Frequency" sampleRate="PT60S" unit="Count" />
      <PerformanceCounterConfiguration counterSpecifier="\System\Processes" sampleRate="PT60S" unit="Count" />
      <PerformanceCounterConfiguration counterSpecifier="\SQLServer:SQL Statistics\SQL Re-Compilations/sec" sampleRate="PT60S" unit="Count" />
    </PerformanceCounters>
    <WindowsEventLog scheduledTransferPeriod="PT1M">
      <DataSource name="Application!*[System[(Level = 1 or Level = 2)]]"/>
      <DataSource name="Security!*[System[(Level = 1 or Level = 2)]]"/>
      <DataSource name="System!*[System[(Level = 1 or Level = 2)]]"/>
    </WindowsEventLog>
  </DiagnosticMonitorConfiguration>
</WadCfg>
I finally got the Linux in-guest diagnostics (LAD) to work. A few notable facts: unlike the Windows diagnostics, the settings need to be transmitted as JSON, with no base64 encoding. Additionally, LAD seems to require a SAS token for the storage account. The usual caveats about the AzureRM API being picky about the config, and the settings being case sensitive, still apply. Here is what is working for me so far:
# Locals
locals {
  env = var.workspace[terraform.workspace]

  # Use a fixed/static time to keep TF from recreating the SAS token on every apply, which would
  # then cause it to modify/recreate anything that uses it. Not ideal, but the token lasts a VERY
  # long time, so it will do for now.
  sas_begintime = "2019-11-22T00:00:00Z"
  sas_endtime   = timeadd(local.sas_begintime, "873600h")
}
#########################################################
# VM Extensions - In-Guest Diagnostics
#########################################################
# We need a SAS token for the In-Guest Metrics
data "azurerm_storage_account_sas" "inguestdiagnostics" {
  count             = (contains(keys(local.env), "InGuestDiagnostics") ? 1 : 0)
  connection_string = data.azurerm_storage_account.BootDiagStorageAccount.primary_connection_string
  https_only        = true

  resource_types {
    service   = true
    container = true
    object    = true
  }

  services {
    blob  = true
    queue = true
    table = true
    file  = true
  }

  start  = local.sas_begintime
  expiry = local.sas_endtime

  permissions {
    read    = true
    write   = true
    delete  = true
    list    = true
    add     = true
    create  = true
    update  = true
    process = true
  }
}
resource "azurerm_virtual_machine_extension" "inguestdiagnostics" {
for_each = contains(keys(local.env), "InGuestDiagnostics") ? local.env["InGuestDiagnostics"] : {}
depends_on = [azurerm_virtual_machine_extension.dependencyagent]
name = each.value["name"]
location = azurerm_resource_group.resourcegroup.location
resource_group_name = azurerm_resource_group.resourcegroup.name
virtual_machine_name = azurerm_virtual_machine.compute["${each.key}"].name
publisher = each.value["publisher"]
type = each.value["type"]
type_handler_version = each.value["type_handler_version"]
auto_upgrade_minor_version = each.value["auto_upgrade_minor_version"]
settings = templatefile("${path.module}/templates/ladcfg2json.tmpl", { vmid = azurerm_virtual_machine.compute["${each.key}"].id, storageAccountName = data.azurerm_storage_account.BootDiagStorageAccount.name })
protected_settings = <<PROTECTEDSETTINGS
{
"storageAccountName": "${data.azurerm_storage_account.BootDiagStorageAccount.name}",
"storageAccountSasToken": "${replace(data.azurerm_storage_account_sas.inguestdiagnostics.0.sas, "/^\\?/", "")}"
}
PROTECTEDSETTINGS
}
# These variations didn't work for me:
# "ladCfg": "${templatefile("${path.module}/templates/ladcfgjson.tmpl", { vmid = azurerm_virtual_machine.compute["${each.key}"].id, storageAccountName = data.azurerm_storage_account.BootDiagStorageAccount.name })}",
#   - This one gets you: Error: "settings" contains an invalid JSON: invalid character '\n' in string literal, or Error: "settings" contains an invalid JSON: invalid character 'S' after object key:value pair
# "ladCfg": "${replace(data.local_file.ladcfgjson["${each.key}"].content, "/\\n/", "")}",
#   - This one gets you: Error: "settings" contains an invalid JSON: invalid character 'S' after object key:value pair
tfvars
workspace = {
  TerraformWorkSpaceName = {
    compute = {
      # Add additional key/objects for additional Compute
      computer01 = {
        name = "computer01"
      }
    }
    InGuestDiagnostics = {
      # Add additional key/objects for each Compute you want to install the InGuestDiagnostics on
      computer01 = {
        name                       = "LinuxDiagnostic"
        publisher                  = "Microsoft.Azure.Diagnostics"
        type                       = "LinuxDiagnostic"
        type_handler_version       = "3.0"
        auto_upgrade_minor_version = "true"
      }
    }
  }
}
I couldn't get a template file to work without wrapping the WHOLE thing in jsonencode.
ladcfg2json.tmpl
${jsonencode({
  "StorageAccount": "${storageAccountName}",
  "ladCfg": {
    "sampleRateInSeconds": 15,
    "diagnosticMonitorConfiguration": {
      "metrics": {
        "metricAggregation": [
          {
            "scheduledTransferPeriod": "PT1M"
          },
          {
            "scheduledTransferPeriod": "PT1H"
          }
        ],
        "resourceId": "${vmid}"
      },
      "eventVolume": "Medium",
      "performanceCounters": {
        "sinks": "",
        "performanceCounterConfiguration": [
          {
            "counterSpecifier": "/builtin/processor/percentiowaittime",
            "condition": "IsAggregate=TRUE",
            "sampleRate": "PT15S",
            "annotation": [
              {
                "locale": "en-us",
                "displayName": "CPU IO wait time"
              }
            ],
            "unit": "Percent",
            "class": "processor",
            "counter": "percentiowaittime",
            "type": "builtin"
          }
        ]
      },
      "syslogEvents": {
        "syslogEventConfiguration": {
          "LOG_LOCAL0": "LOG_DEBUG"
        }
      }
    }
  }
})}
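(For comparison, the same object can be built inline with jsonencode in the resource itself, skipping the separate template file — a sketch reusing the references from the resource above, with the inner config elided.)
settings = jsonencode({
  StorageAccount = data.azurerm_storage_account.BootDiagStorageAccount.name
  ladCfg = {
    sampleRateInSeconds = 15
    diagnosticMonitorConfiguration = {
      # ... same metrics/performanceCounters/syslogEvents structure as in the template above ...
    }
  }
})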
I hope this helps.
As the question was asked more than a year ago, this is more for people like me who are trying this for the first time.
We only use Linux VMs, so this advice applies to that:
protected_settings should use PROTECTED_SETTINGS, not SETTINGS (which you can see in @rv23's answer above).
From the documentation I am following (https://learn.microsoft.com/en-gb/azure/virtual-machines/extensions/diagnostics-linux#protected-settings), you can see you need to specify storageAccountSasToken, not storageAccountKey.
Here is my redacted version of the config (replace all the parts in ALL CAPS with your own settings):
resource "azurerm_virtual_machine_extension" "vm_linux_diagnostics" {
count = "1"
name = "NAME"
resource_group_name = "YOUR RESOURCE GROUP NAME"
location = "YOUR LOCATION"
virtual_machine_name = "TARGET MACHINE NAME"
publisher = "Microsoft.Azure.Diagnostics"
type = "LinuxDiagnostic"
type_handler_version = "3.0"
auto_upgrade_minor_version = "true"
settings = <<SETTINGS
{
"StorageAccount": "tfnpfsnhsuk",
"ladCfg": {
"sampleRateInSeconds": 15,
"diagnosticMonitorConfiguration": {
"metrics": {
"metricAggregation": [
{
"scheduledTransferPeriod": "PT1M"
},
{
"scheduledTransferPeriod": "PT1H"
}
],
"resourceId": "VM ID"
},
"eventVolume": "Medium",
"performanceCounters": {
"sinks": "",
.... MORE METRICS - THAT YOU REQUIRE
}
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "YOUR_ACCOUNT_NAME",
"storageAccountSasToken": "YOUR SAS TOKEN"
}
PROTECTED_SETTINGS
tags = "YOUR TAG"
}
Just got this working on a similar question:
Trying to add LinuxDiagnostic Azure VM Extension through terraform and getting errors
This includes getting the SAS token and reading from json files.
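(In outline, the approach behind that link looks like the sketch below — the file name and the diag storage account label are illustrative, and the SAS data source is assumed to be configured like the azurerm_storage_account_sas example earlier in this thread.)
# Merge a LAD config kept in a JSON file with values Terraform knows, and
# strip the leading "?" from the SAS token, as the LAD docs require.
settings = jsonencode(merge(
  jsondecode(file("${path.module}/files/lad_settings.json")),
  { StorageAccount = azurerm_storage_account.diag.name }
))
protected_settings = jsonencode({
  storageAccountName     = azurerm_storage_account.diag.name
  storageAccountSasToken = trimprefix(data.azurerm_storage_account_sas.diag.sas, "?")
})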
I am trying to achieve the following:
Using the lifecycle block to ignore tags applied to resources by Azure Policy.
Background
I have a Terraform template that applies tags to the resource group, but the resources in the same template do not have tags applied. Instead, I have an Azure Policy that enforces inheritance of tags from the resource groups.
When I make any changes to the template and run terraform plan, I get a load of changes which state they will change the tags from their values to null. This isn't causing any issue as such; it just bloats my terraform plan with unnecessary changes.
Issue
I have tried using the lifecycle block with ignore_changes set to tags; however, it doesn't seem to work, and the plan still shows the tags are going to be removed.
Below is an example of a resource that says the tags will be removed if a change occurs.
Example Code
resource "azurerm_virtual_machine_extension" "ext_ade" {
depends_on = [azurerm_virtual_machine_extension.ext_domain_join, azurerm_virtual_machine_extension.ext_dsc]
count = var.session_hosts.quantity
name = var.ext_ade.name
virtual_machine_id = azurerm_windows_virtual_machine.vm.*.id[count.index]
publisher = "Microsoft.Azure.Security"
type = "AzureDiskEncryption"
type_handler_version = "2.2"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"EncryptionOperation": "EnableEncryption",
"KeyVaultURL": "${data.azurerm_key_vault.key_vault.vault_uri}",
"KeyVaultResourceId": "${data.azurerm_key_vault.key_vault.id}",
"KeyEncryptionKeyURL": "${azurerm_key_vault_key.ade_key.*.id[count.index]}",
"KekVaultResourceId": "${data.azurerm_key_vault.key_vault.id}",
"KeyEncryptionAlgorithm": "RSA-OAEP",
"VolumeType": "All"
}
SETTINGS
lifecycle {
ignore_changes = [settings,tags]
}
}
I've tried this in my environment and was able to deploy it successfully using the lifecycle block.
I've taken a snippet of Terraform from the SO solution given by @Jim Xu and modified it to meet your requirements, as shown below:
main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.99.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "random_string" "password" {
  length  = 16
  special = false
}

data "azurerm_resource_group" "newtest" {
  name = var.resource_group_name
}

resource "azurerm_key_vault" "keyvault" {
  name                            = var.key_vault_name
  resource_group_name             = var.resource_group_name
  enabled_for_disk_encryption     = true
  enabled_for_deployment          = true
  enabled_for_template_deployment = true
  location                        = data.azurerm_resource_group.newtest.location
  tenant_id                       = "<tenant-id>"
  sku_name                        = "standard"
  soft_delete_retention_days      = 90
}

resource "azurerm_key_vault_access_policy" "myPolicy" {
  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = "<tenant-id>"
  object_id    = "<object-id>"

  key_permissions = [
    "Create",
    "Delete",
    "Get",
    "Purge",
    "Recover",
    "Update",
    "List",
    "Decrypt",
    "Sign"
  ]
}

resource "azurerm_key_vault_key" "testKEK" {
  name         = "testKEK"
  key_vault_id = azurerm_key_vault.keyvault.id
  key_type     = "RSA"
  key_size     = 2048

  depends_on = [
    azurerm_key_vault_access_policy.myPolicy
  ]

  key_opts = [
    "decrypt",
    "encrypt",
    "sign",
    "unwrapKey",
    "verify",
    "wrapKey",
  ]
}

resource "azurerm_virtual_machine_extension" "vmextension" {
  name                       = random_string.password.result
  virtual_machine_id         = "/subscriptions/<subscription_ID>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<VMName>"
  publisher                  = "Microsoft.Azure.Security"
  type                       = "AzureDiskEncryption"
  type_handler_version       = var.type_handler_version
  auto_upgrade_minor_version = true

  settings = <<SETTINGS
{
  "EncryptionOperation": "${var.encrypt_operation}",
  "KeyVaultURL": "${azurerm_key_vault.keyvault.vault_uri}",
  "KeyVaultResourceId": "${azurerm_key_vault.keyvault.id}",
  "KeyEncryptionKeyURL": "${azurerm_key_vault_key.testKEK.id}",
  "KekVaultResourceId": "${azurerm_key_vault.keyvault.id}",
  "KeyEncryptionAlgorithm": "${var.encryption_algorithm}",
  "VolumeType": "${var.volume_type}"
}
SETTINGS

  lifecycle {
    ignore_changes = [settings, tags]
  }
}
variable.tf:
variable "resource_group_name" {
  default = "newtest"
}

variable "location" {
  default = "EastUS"
}

variable "key_vault_name" {
  default = ""
}

variable "virtual_machine_id" {
  default = ""
}

variable "volume_type" {
  default = "All"
}

variable "encrypt_operation" {
  default = "EnableEncryption"
}

variable "encryption_algorithm" {
  # Referenced by main.tf above; "RSA-OAEP" matches the question's example.
  default = "RSA-OAEP"
}

variable "type_handler_version" {
  description = "Defaults to 2.2 on Windows"
  default     = "2.2"
}
Note: you can modify the tfvars file to suit your needs.
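A matching terraform.tfvars could look like this (all values are placeholders):
resource_group_name  = "newtest"
location             = "EastUS"
key_vault_name       = "mykeyvault01"
virtual_machine_id   = "/subscriptions/<subscription_ID>/resourceGroups/newtest/providers/Microsoft.Compute/virtualMachines/<VMName>"
volume_type          = "All"
encrypt_operation    = "EnableEncryption"
type_handler_version = "2.2"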
Executed terraform init (or terraform init -upgrade), then terraform plan, and finally terraform apply after the previous commands ran successfully.
(Screenshots omitted: the key vault was created and showed no changes in the Portal on subsequent plans, and the resource group (newtest) was deployed.)
I have a VM template deploying an Azure Virtual Desktop environment to Azure with Terraform (via Octopus Deploy). On top of the virtual machines, I'm installing a number of extensions, culminating with a VM extension to register the VM with the host pool.
I'd like to rebuild the VM each time the custom script extension is applied (extension #2, after the domain join). But in rebuilding the VM, I'd like to build out a new VM, complete with the host pool registration, before any part of the existing VM is destroyed.
Please accept the cut-down version below to understand what I am trying to do.
I expect the largest number of machine recreations to come from enhancements to the configuration scripts that configure the server on creation. Not all of the commands are expected to be idempotent, and we want the AVD VMs to be ephemeral. If an issue is encountered, the support team is expected to be able to drain a server and destroy it once empty, to get a replacement via terraform apply. In a case where the script gets updated, though, we want to be able to replace all VMs quickly in an emergency, or at the very least minimize the nightly maintenance window.
Script process: parameterized script > gets filled out as a template file > gets stored as an Azure blob > called by the custom script extension > executed on the machine.
VM build process: the VM is provisioned > currently 8 extensions get applied one at a time, starting with the domain join, then the custom script extension, followed by several Azure monitoring extensions, and finally the host pool registration extension.
I've been trying to use the create_before_destroy lifecycle feature, but I can't get it to spin up the VM and apply all extensions before it begins removing the host pool registration from the existing VMs. I assume there's a way to do it using triggers, but I'm not sure how to do it in such a way that it always has at least the current number of VMs.
It would also need to be able to stop if it encounters an error on the new VM before destroying the existing VM (or, better yet, be authorized to rebuild VMs if an extension fails partway through).
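(For what it's worth, the wiring being described would look roughly like the sketch below — untested, and based on the fact that Terraform propagates create_before_destroy from a resource to everything that resource depends on. Setting it on the final host pool registration extension therefore asks for the whole chain, VM included, to be created before the old generation is destroyed; the per-generation names from random_pet let old and new VMs coexist during the swap.)
# Sketch: create_before_destroy on the last extension in the chain propagates
# to its dependencies (the other extensions and the VM), so a replacement VM
# would be fully built and host-pool registered before the old VM is destroyed.
resource "azurerm_virtual_machine_extension" "last_host_extension_hp_registration" {
  # ... arguments exactly as in the full resource further down ...
  lifecycle {
    create_before_destroy = true
    ignore_changes        = [settings, protected_settings]
  }
}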
resource "random_pet" "avd_vm" {
prefix = var.client_name
length = 1
keepers = {
# Generate a new pet name each time we update the setup_host script
source_content = "${data.template_file.setup_host.rendered}"
}
}
data "template_file" "setup_host" {
template = file("${path.module}\\scripts\\setup-host.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
domain = var.domain
aad_group_name = var.aad_group_name
}
}
resource "azurerm_storage_blob" "setup_host" {
name = "setup-host.ps1"
storage_account_name = azurerm_storage_account.scripts.name
storage_container_name = time_sleep.container_rbac.triggers["name"]
type = "Block"
source_content = data.template_file.setup_host.rendered #"${path.module}\\scripts\\setup-host.ps1"
depends_on = [
azurerm_role_assignment.account1_write,
data.template_file.setup_host,
time_sleep.container_rbac
]
}
data "template_file" "client_r_drive_mapping" {
template = file("${path.module}\\scripts\\client_r_drive_mapping.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
}
}
resource "azurerm_windows_virtual_machine" "example" {
count = length(random_pet.avd_vm)
name = "${random_pet.avd_vm[count.index].id}"
...
lifecycle {
ignore_changes = [
boot_diagnostics,
identity
]
}
}
resource "azurerm_virtual_machine_extension" "first-domain_join_extension" {
count = var.rdsh_count
name = "${var.client_name}-avd-${random_pet.avd_vm[count.index].id}-domainJoin"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"Name": "${var.domain_name}",
"OUPath": "${var.ou_path}",
"User": "${var.domain_user_upn}#${var.domain_name}",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"Password": "${var.admin_password}"
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_network_peering.out-primary,
azurerm_virtual_network_peering.in-primary,
azurerm_virtual_network_peering.in-secondary
]
}
# Multiple scripts, called by ./<scriptname> when referencing them in follow-up scripts
# https://web.archive.org/web/20220127015539/https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows
# https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows#using-multiple-scripts
resource "azurerm_virtual_machine_extension" "second-custom_scripts" {
count = var.rdsh_count
name = "${random_pet.avd_vm[count.index].id}-setup-host"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.10"
auto_upgrade_minor_version = "true"
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "${azurerm_storage_account.scripts.name}",
"storageAccountKey": "${azurerm_storage_account.scripts.primary_access_key}"
}
PROTECTED_SETTINGS
settings = <<SETTINGS
{
"fileUris": ["https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/setup-host.ps1","https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/client_r_drive_mapping.ps1"],
"commandToExecute": "powershell -ExecutionPolicy Unrestricted -file setup-host.ps1"
}
SETTINGS
depends_on = [
azurerm_virtual_machine_extension.first-domain_join_extension,
azurerm_storage_blob.setup_host
]
}
resource "azurerm_virtual_machine_extension" "last_host_extension_hp_registration" {
count = var.rdsh_count
name = "${var.client_name}-${random_pet.avd_vm[count.index].id}-avd_dsc"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.73"
auto_upgrade_minor_version = true
settings = <<-SETTINGS
{
"modulesUrl": "https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts/Configuration_3-10-2021.zip",
"configurationFunction": "Configuration.ps1\\AddSessionHost",
"properties": {
"HostPoolName":"${azurerm_virtual_desktop_host_pool.pooleddepthfirst.name}"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"properties": {
"registrationInfoToken": "${azurerm_virtual_desktop_host_pool_registration_info.pooleddepthfirst.token}"
}
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_machine_extension.second-custom_scripts
]
}
I'm trying to automate Update Management in Azure Automation using Terraform, but I can't find information regarding the following 2 points:
The schedule created for the updates doesn't work. I assume the problem is that a runbook is missing that defines which machines need to be updated, etc.
I can't find information on how to automatically enable this update management for all machines in a specific resource group.
Here is the Terraform code I've written so far.
#Creates automation account
resource "azurerm_automation_account" "aa" {
  name                = local.autoac
  location            = local.region
  resource_group_name = local.rg
  sku_name            = "Basic"

  tags = {
    environment = "test"
  }
}

#Creates the schedule for updates
resource "azurerm_automation_schedule" "std-update" {
  name                    = "Weekly-Sunday-6am"
  resource_group_name     = local.rg
  automation_account_name = azurerm_automation_account.aa.name
  frequency               = "Week"
  interval                = 1
  timezone                = "Europe/Berlin"
  start_time              = "2021-04-28T18:00:15+02:00"
  description             = "Standard schedule for updates"
  week_days               = ["Sunday"]
}
#Creates log analytics workspace
resource "azurerm_log_analytics_workspace" "law" {
  name                = local.lawname
  location            = local.region
  resource_group_name = local.rg
  sku                 = "PerGB2018"
  retention_in_days   = 30

  tags = {
    environment = "test"
  }
}

# Link automation account to a Log Analytics Workspace.
resource "azurerm_log_analytics_linked_service" "autoacc_linked_log_workspace" {
  resource_group_name = local.rg
  workspace_id        = azurerm_log_analytics_workspace.law.id
  read_access_id      = azurerm_automation_account.aa.id
}

# Add Updates workspace solution to log analytics
resource "azurerm_log_analytics_solution" "law_solution_updates" {
  resource_group_name   = local.rg
  location              = local.region
  solution_name         = "Updates"
  workspace_resource_id = azurerm_log_analytics_workspace.law.id
  workspace_name        = azurerm_log_analytics_workspace.law.name

  plan {
    publisher = "Microsoft"
    product   = "OMSGallery/Updates"
  }
}
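(On the first point: an azurerm_automation_schedule on its own isn't attached to anything. For an ordinary runbook it would be linked with azurerm_automation_job_schedule, as sketched below with a hypothetical runbook name; for Update Management specifically, though, the schedule has to be part of a softwareUpdateConfiguration, which is what the ARM template in the update below creates.)
resource "azurerm_automation_job_schedule" "std-update" {
  resource_group_name     = local.rg
  automation_account_name = azurerm_automation_account.aa.name
  schedule_name           = azurerm_automation_schedule.std-update.name
  runbook_name            = "my-update-runbook" # hypothetical runbook
}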
Update regarding the question:
I figured out that the option to create an update schedule in Update Management is not yet available in Terraform, so the only way to do it is via an ARM template created from the Terraform config.
With help from the previous comment, I was able to create the following schedule:
#Creates schedule for windows VM to update Monthly on 3rd Sunday
resource "azurerm_template_deployment" "windows-prod-3rd-Sunday" {
  name                = "windows-prod-3rd-Sunday"
  resource_group_name = local.rg

  template_body = <<DEPLOY
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "apiVersion": "2017-05-15-preview",
      "type": "Microsoft.Automation/automationAccounts/softwareUpdateConfigurations",
      "name": "${azurerm_automation_account.aa.name}/windows-prod-3rd-Sunday",
      "properties": {
        "updateConfiguration": {
          "operatingSystem": "Windows",
          "duration": "PT${local.update_max_hours}H",
          "windows": {
            "excludedKbNumbers": [],
            "includedUpdateClassifications": "${local.update_classifications}",
            "rebootSetting": "${local.update_reboot_settings}"
          },
          "targets": {
            "azureQueries": [
              {
                "scope": [
                  "/subscriptions/${local.subscriptionid}/resourceGroups/${local.rg}",
                  "/subscriptions/${local.subscriptionid}/resourceGroups/${local.rg}",
                  "/subscriptions/${local.subscriptionid}/resourceGroups/${local.rg}"
                ],
                "tagSettings": {
                  "tags": {
                    "environment": [
                      "Prod"
                    ],
                    "updatedate": [
                      "3rd_Sunday"
                    ]
                  },
                  "filterOperator": "All"
                },
                "locations": [
                  "West Europe"
                ]
              }
            ]
          }
        },
        "scheduleInfo": {
          "frequency": "Month",
          "startTime": "${local.update_date}T${local.update_time}:00+00:00",
          "timeZone": "${local.update_timezone}",
          "interval": 1,
          "advancedSchedule": {
            "monthlyOccurrences": [
              {
                "occurrence": "${local.sunday_3}",
                "day": "${local.update_day}"
              }
            ]
          }
        }
      }
    }
  ]
}
DEPLOY

  deployment_mode = "Incremental"
}
I need to implement a VM extension using Terraform and Azure DevOps. I am trying to pass the fileUris value from .tfvars, or to create it dynamically from the storage account details (["https://${var.Storageaccountname}.blob.core.windows.net/${var.containername}/test.sh"]). Neither scenario is working.
resource "azurerm_virtual_machine_extension" "main" {
name = "${var.vm_name}"
location ="${azurerm_resource_group.resource_group.location}"
resource_group_name = "${azurerm_resource_group.resource_group.name}"
virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"fileUris" :"${var.fileUris}",
"commandToExecute": "sh <name of file> --ExecutionPolicy Unrestricted\""
}
SETTINGS
}
Any tips on fixing this issue? Maybe some other solution to achieve zero hardcoding in main.tf/variable.tf?
You could refer to this working sample to deploy the extension on a Linux VM. The script file is stored in a Storage account.
resource "azurerm_virtual_machine_extension" "test" {
name = "test-LinuxExtension"
virtual_machine_id = "/subscriptions/xxx/virtualMachines/www"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.1"
auto_upgrade_minor_version = true
protected_settings = <<PROTECTED_SETTINGS
{
"commandToExecute": "sh aptupdate.sh",
"storageAccountName": "xxxxx",
"storageAccountKey": "xxxxx",
"fileUris": [
"${var.fileUris}"
]
}
PROTECTED_SETTINGS
}
If we store the script in Azure Blob storage, we need to provide the storage key so the extension has permission to access the script. For more details, please refer to the docs here. Please add the following settings to your script:
...
protected_settings = <<PROTECTED_SETTINGS
{
  "storageAccountName": "mystorageaccountname",
  "storageAccountKey": "myStorageAccountKey"
}
PROTECTED_SETTINGS
...
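(Back on the original question: the likely reason passing fileUris from .tfvars fails is that a list variable can't be interpolated into a JSON heredoc as a bare string. Building the settings with jsonencode sidesteps the quoting entirely — a sketch, assuming var.fileUris is declared as list(string) and reusing the var.Storageaccountname/var.containername names from the question.)
variable "fileUris" {
  type = list(string)
}

locals {
  # Alternatively, build the URI dynamically from the storage account details.
  script_uris = ["https://${var.Storageaccountname}.blob.core.windows.net/${var.containername}/test.sh"]
}

resource "azurerm_virtual_machine_extension" "main" {
  # ... name/location/virtual_machine_name/publisher/type as in the question ...
  settings = jsonencode({
    fileUris         = var.fileUris # or local.script_uris
    commandToExecute = "sh test.sh"
  })
}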
I'm currently using Terraform and bits of PowerShell to automate all of my infrastructure, and I'm seeking a fully automated means to configure update management for all of my VMs. I'm able to deploy the Automation Account, Log Analytics Workspace, and a linked service resource to manage the connection between the two. However, I'm unable to enable the update management service on the Automation Account.
Is there any automatable means (PS, TF, API, etc.) by which I can simply enable update management for my automation account?
Here is a Terraform module that creates an automation account, creates a link to a Log Analytics workspace (the workspace ID is passed in, in this example) and then adds the required update management and/or change tracking solutions to the workspace.
This module was built using Terraform 0.11.13 with AzureRM provider version 1.28.0.
# Create the automation account
resource "azurerm_automation_account" "aa" {
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  name                = "${var.name}"

  sku {
    name = "${var.sku}"
  }

  tags = "${var.tags}"
}

# Link automation account to a Log Analytics Workspace.
# Only deployed if enable_update_management and/or enable_change_tracking are/is set to true
resource "azurerm_log_analytics_linked_service" "law_link" {
  count               = "${var.enable_update_management || var.enable_change_tracking ? 1 : 0}"
  resource_group_name = "${var.resource_group_name}"
  workspace_name      = "${element(split("/", var.log_analytics_workspace_id), length(split("/", var.log_analytics_workspace_id)) - 1)}"
  linked_service_name = "automation"
  resource_id         = "${azurerm_automation_account.aa.id}"
}

# Add the Updates workspace solution to log analytics if enable_update_management is set to true.
# Adding this solution to the log analytics workspace, combined with the linked service resource above, enables update management for the automation account.
resource "azurerm_log_analytics_solution" "law_solution_updates" {
  count                 = "${var.enable_update_management}"
  resource_group_name   = "${var.resource_group_name}"
  location              = "${var.location}"
  solution_name         = "Updates"
  workspace_resource_id = "${var.log_analytics_workspace_id}"
  workspace_name        = "${element(split("/", var.log_analytics_workspace_id), length(split("/", var.log_analytics_workspace_id)) - 1)}"

  plan {
    publisher = "Microsoft"
    product   = "OMSGallery/Updates"
  }
}

# Add the ChangeTracking workspace solution to log analytics if enable_change_tracking is set to true.
# Adding this solution to the log analytics workspace, combined with the linked service resource above, enables Change Tracking and Inventory for the automation account.
resource "azurerm_log_analytics_solution" "law_solution_change_tracking" {
  count                 = "${var.enable_change_tracking}"
  resource_group_name   = "${var.resource_group_name}"
  location              = "${var.location}"
  solution_name         = "ChangeTracking"
  workspace_resource_id = "${var.log_analytics_workspace_id}"
  workspace_name        = "${element(split("/", var.log_analytics_workspace_id), length(split("/", var.log_analytics_workspace_id)) - 1)}"

  plan {
    publisher = "Microsoft"
    product   = "OMSGallery/ChangeTracking"
  }
}

# Send logs to Log Analytics
# Required for automation accounts with update management and/or change tracking enabled.
# Optional on automation accounts used for other purposes.
resource "azurerm_monitor_diagnostic_setting" "aa_diags_logs" {
  count                      = "${var.enable_logs_collection || var.enable_update_management || var.enable_change_tracking ? 1 : 0}"
  name                       = "LogsToLogAnalytics"
  target_resource_id         = "${azurerm_automation_account.aa.id}"
  log_analytics_workspace_id = "${var.log_analytics_workspace_id}"

  log {
    category = "JobLogs"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "JobStreams"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "DscNodeStatus"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }

  metric {
    category = "AllMetrics"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }
}

# Send metrics to Log Analytics
resource "azurerm_monitor_diagnostic_setting" "aa_diags_metrics" {
  count                      = "${var.enable_metrics_collection || var.enable_update_management || var.enable_change_tracking ? 1 : 0}"
  name                       = "MetricsToLogAnalytics"
  target_resource_id         = "${azurerm_automation_account.aa.id}"
  log_analytics_workspace_id = "${var.metrics_log_analytics_workspace_id}"

  log {
    category = "JobLogs"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "JobStreams"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "DscNodeStatus"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  metric {
    category = "AllMetrics"
    enabled  = true

    retention_policy {
      enabled = false
    }
  }
}
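Calling the module would look something like this (the module path and values are placeholders):
module "automation_account" {
  source                             = "./modules/automation-account"
  resource_group_name                = "my-rg"
  location                           = "westeurope"
  name                               = "my-automation-account"
  sku                                = "Basic"
  tags                               = {}
  log_analytics_workspace_id         = "${azurerm_log_analytics_workspace.law.id}"
  metrics_log_analytics_workspace_id = "${azurerm_log_analytics_workspace.law.id}"
  enable_update_management           = true
  enable_change_tracking             = false
  enable_logs_collection             = true
  enable_metrics_collection          = true
}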
As far as I understand, this is what you need:
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "name": "[variables('namespace')]",
  "apiVersion": "2017-03-15-preview",
  "location": "[resourceGroup().location]",
  "properties": {
    "sku": {
      "name": "Standalone"
    }
  },
  "resources": [
    {
      "name": "Automation", # this onboards automation to OMS, which is what you need
      "type": "linkedServices",
      "apiVersion": "2015-11-01-preview",
      "dependsOn": [
        "[variables('automation')]",
        "[variables('namespace')]"
      ],
      "properties": {
        "resourceId": "[resourceId('Microsoft.Automation/automationAccounts/', variables('automation'))]"
      }
    }
  ]
},
{
  "type": "Microsoft.Automation/automationAccounts",
  "name": "[variables('automation')]",
  "apiVersion": "2015-10-31",
  "location": "[resourceGroup().location]",
  "properties": {
    "sku": {
      "name": "OMS"
    }
  }
},
{
  "type": "Microsoft.OperationsManagement/solutions", # this installs the update management solution, which you probably need for update management
  "name": "[concat(variables('solutions')[copyIndex()],'(', variables('namespace'), ')')]",
  "apiVersion": "2015-11-01-preview",
  "location": "[resourceGroup().location]",
  "copy": {
    "name": "solutions",
    "count": "[length(variables('solutions'))]"
  },
  "plan": {
    "name": "[concat(variables('solutions')[copyIndex()], '(', variables('namespace'), ')')]",
    "promotionCode": "",
    "product": "[concat('OMSGallery/', variables('solutions')[copyIndex()])]",
    "publisher": "Microsoft"
  },
  "properties": {
    "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces', variables('namespace'))]"
  },
  "dependsOn": [
    "[variables('namespace')]"
  ]
}
Here's the variable I'm using to define the solutions to be installed:
"solutions": [
"AlertManagement",
"Updates",
"Security"
]
Basically, you can map this to API calls one-to-one.