I need to implement a VM extension using Terraform and Azure DevOps. I am trying to pass the fileUris value from .tfvars, or to build it dynamically from the storage account details as ["https://${var.Storageaccountname}.blob.core.windows.net/${var.containername}/test.sh"]. Neither scenario is working.
resource "azurerm_virtual_machine_extension" "main" {
name = "${var.vm_name}"
location ="${azurerm_resource_group.resource_group.location}"
resource_group_name = "${azurerm_resource_group.resource_group.name}"
virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"fileUris" :"${var.fileUris}",
"commandToExecute": "sh <name of file> --ExecutionPolicy Unrestricted\""
}
SETTINGS
}
Any tips on fixing this issue? Or is there some other way to achieve zero hardcoding in main.tf/variable.tf?
You could refer to this working sample, which deploys the extension on a Linux VM with the script file stored in a Storage account.
resource "azurerm_virtual_machine_extension" "test" {
name = "test-LinuxExtension"
virtual_machine_id = "/subscriptions/xxx/virtualMachines/www"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.1"
auto_upgrade_minor_version = true
protected_settings = <<PROTECTED_SETTINGS
{
"commandToExecute": "sh aptupdate.sh",
"storageAccountName": "xxxxx",
"storageAccountKey": "xxxxx",
"fileUris": [
"${var.fileUris}"
]
}
PROTECTED_SETTINGS
}
If we store the script in Azure Blob Storage, we need to provide the storage account key so the extension has permission to access the script. For more details, please refer to the documentation. Add the following settings to your configuration:
...
  protected_settings = <<PROTECTED_SETTINGS
    {
      "storageAccountName": "mystorageaccountname",
      "storageAccountKey": "myStorageAccountKey"
    }
PROTECTED_SETTINGS
...
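As for the original goal of passing fileUris from .tfvars or building it dynamically: the value has to end up as a JSON array in the rendered settings, and interpolating a list variable straight into a quoted string will not produce one. Below is a minimal sketch (variable and resource names are illustrative, and it uses the newer virtual_machine_id argument) that builds the URI from the storage account details and lets jsonencode render the whole object:

locals {
  script_url = "https://${var.storageaccountname}.blob.core.windows.net/${var.containername}/${var.script_name}"
}

resource "azurerm_virtual_machine_extension" "main" {
  name                 = var.vm_name
  virtual_machine_id   = azurerm_virtual_machine.vm.id
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"

  # jsonencode renders the list as a proper JSON array, so nothing is hardcoded
  # in main.tf; the URL is assembled from the storage account variables above.
  settings = jsonencode({
    fileUris         = [local.script_url]
    commandToExecute = "sh ${var.script_name}"
  })
}

If the URIs come from .tfvars instead, declare fileUris as list(string) and use fileUris = var.fileUris inside the jsonencode call rather than interpolating the list into a string.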
Related
I am currently looking to deploy the SentinelOne agent via Terraform. There does not appear to be much documentation online for VM extension usage with Terraform. Has anyone successfully deployed the S1 agent via a Terraform extension? I am unclear on what to add to the settings/protected_settings blocks. Any help is appreciated.
"azurerm_virtual_machine_extension" "example" {
name = "hostname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "SentinelOne.LinuxExtension"
type = "LinuxExtension"
type_handler_version = "1.0"
To add to the settings/protected_settings blocks in Terraform:
resource "azurerm_virtual_machine_extension" "example" {
name = "hostname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "SentinelOne.LinuxExtension"
type = "LinuxExtension"
type_handler_version = "1.0"
settings = <<SETTINGS
{
"commandToExecute": "powershell.exe -Command \"${local.powershell_command}\""
}
SETTINGS
tags = {
environment = "Production"
}
depends_on = [
azurerm_virtual_machine.example
]
}
settings - The extension's settings are provided as a string-encoded JSON object.
protected_settings - In the same way that settings are supplied as a JSON object in a string, so are the protected settings passed to the extension.
Some VM extensions treat the keys in settings and protected_settings as case sensitive. Make sure they match what Azure expects (for example, the JsonADDomainExtension extension expects its keys in TitleCase).
Reference: azurerm_virtual_machine_extension
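A related tip: building settings with jsonencode() instead of a hand-written heredoc avoids most quoting mistakes and lets Terraform validate the JSON structure at plan time (key casing still has to match what the extension handler expects). A minimal sketch using the same keys that appear in the JSON below:

  # jsonencode produces the string-encoded JSON object the extension expects.
  settings = jsonencode({
    LinuxAgentVersion = "22.4.1.2"
    SiteToken         = "<your_site_token_here>"
  })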
Installing the extension manually and checking the JSON output gives the following settings block:
{
  "LinuxAgentVersion": "22.4.1.2",
  "SiteToken": "<your_site_token_here>"
}
Unfortunately, this leaves out the one critical field required for installation, since it is a protected setting: the field name for the SentinelOne Console API token.
UPDATE:
Working extension example after finding the correct JSON key value:
resource "azurerm_virtual_machine_extension" "testserver-sentinelone-extension" {
name = "SentinelOneLinuxExtension"
virtual_machine_id = azurerm_linux_virtual_machine.testserver.id
publisher = "SentinelOne.LinuxExtension"
type = "LinuxExtension"
type_handler_version = "1.2"
automatic_upgrade_enabled = false
settings = <<SETTINGS
{
"LinuxAgentVersion": "22.4.1.2",
"SiteToken": "<your_site_token_here>"
}
SETTINGS
protected_settings = <<PROTECTEDSETTINGS
{
"SentinelOneConsoleAPIKey": "${var.sentinel_one_api_token}"
}
PROTECTEDSETTINGS
}
EDIT: Figured it out by once again manually installing the extension on another test system, and then digging into the waagent logs on that VM to see what value was being queried by the enable.sh script.
# cat /var/lib/waagent/SentinelOne.LinuxExtension.LinuxExtension-1.2.0/scripts/enable.sh | grep Console
api_token=$(echo "$protected_settings_decrypted" | jq -r ".SentinelOneConsoleAPIKey")
I'm creating Terraform configuration files to rapidly create and destroy demo environments for our prospective customers. These environments are pretty simple, containing a few VMs in a single vnet with a single subnet: some for management, some for apps, and one as an AVD session host.
I have seen this work perfectly well a handful of times, but most of the time it fails during the VM domain join. When I troubleshoot the issue, it's always because the account being used for the domain join is locked out. I have confirmed this by connecting to the VM via the bastion and manually attempting the domain join.
Here's my config to create the admin account used for domain join:
resource "azuread_group" "dc_admins" {
display_name = "AAD DC Administrators"
security_enabled = true
}
resource "azuread_user" "admin" {
user_principal_name = join("#", [var.admin_username, var.onmicrosoft_domain])
display_name = var.admin_username
password = var.admin_password
depends_on = [
azuread_group.dc_admins
]
}
resource "azuread_group_member" "admin" {
group_object_id = azuread_group.dc_admins.object_id
member_object_id = azuread_user.admin.object_id
depends_on = [
azuread_group.dc_admins,
azuread_user.admin
]
}
Here's my domain-join config:
resource "azurerm_virtual_machine_extension" "domain_join_mgmt_devices" {
name = "join-domain"
virtual_machine_id = azurerm_windows_virtual_machine.mgmtvm[count.index].id
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.0"
depends_on = [
azurerm_windows_virtual_machine.mgmtvm,
azurerm_virtual_machine_extension.install_rsat_tools
]
count = "${var.vm_count}"
settings = <<SETTINGS
{
"Name": "${var.onmicrosoft_domain}",
"OUPath": "OU=AADDC Computers,DC=hidden,DC=onmicrosoft,DC=com",
"User": "${var.onmicrosoft_domain}\\${var.admin_username}",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"Password": "${var.admin_password}"
}
PROTECTED_SETTINGS
}
Here's the console output for the plan:
Terraform will perform the following actions:
# azurerm_virtual_machine_extension.domain_join_mgmt_devices[0] will be created
+ resource "azurerm_virtual_machine_extension" "domain_join_mgmt_devices" {
+ id = (known after apply)
+ name = "join-domain"
+ protected_settings = (sensitive value)
+ publisher = "Microsoft.Compute"
+ settings = jsonencode(
{
+ Name = "hidden.onmicrosoft.com"
+ OUPath = "OU=AADDC Computers,DC=hidden,DC=onmicrosoft,DC=com"
+ Options = "3"
+ Restart = "true"
+ User = "hidden.onmicrosoft.com\\admin_username"
}
)
+ type = "JsonADDomainExtension"
+ type_handler_version = "1.0"
+ virtual_machine_id = "/subscriptions/hiddensubid/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachines/mgmt-vm1"
}
Plan: 1 to add, 0 to change, 0 to destroy.
I cannot for the life of me figure out what is locking the account. It's a fresh account, in a fresh subscription, created by the Terraform configuration prior to being used for the domain-join action.
Has anyone else seen anything like this?
Am I missing some knowledge about AAD, AD DS, or cloud-only managed domains?
I create the AAD DC Administrative user before creating the AAD DS managed domain. Could this be an issue?
Should I wait X minutes before creating an admin account and using it for administrative actions?
Is it possible, using Terraform and AzureRM, to prevent an AAD account from being locked out?
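For what it's worth, if the timing question above turns out to be the culprit, one way to test it is to put an explicit delay between creating the admin account and running the domain join, e.g. with the hashicorp/time provider. This is only a sketch; the 15m duration is an arbitrary illustration, not a known-good value.

# Give the freshly created admin account time to sync into the AAD DS managed
# domain before the domain-join extension tries to use it.
resource "time_sleep" "wait_for_aad_sync" {
  create_duration = "15m"

  depends_on = [
    azuread_group_member.admin
  ]
}

Then add time_sleep.wait_for_aad_sync to the depends_on list of azurerm_virtual_machine_extension.domain_join_mgmt_devices so the join only runs after the delay.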
I have a VM template with which I'm deploying an Azure Virtual Desktop environment to Azure with Terraform (via Octopus Deploy). On top of the virtual machines, I'm installing a number of extensions, culminating with a VM extension to register the VM with the host pool.
I'd like to rebuild the VM each time the custom script extension is applied (Extension #2, after domain join). But in rebuilding the VM, I'd like to build out a new VM, complete with the host pool registration before any part of the existing VM is destroyed.
Please accept the cut down version below to understand what I am trying to do.
I expect the largest number of machine recreations to come from enhancements to the configuration scripts that configure the server on creation. Not all of the commands are expected to be idempotent and we want the AVD vms to be ephemeral. If an issue is encountered, the support team is expected to be able to drain a server and destroy it once empty to get a replacement by terraform apply. In a case where the script gets updated though, we want to be able to replace all VMs quickly in an emergency, or at the very least minimize the nightly maintenance window.
Script Process: parameterized script > gets filled out as a template file > gets stored as an az blob > called by custom script extension > executed on the machine.
VM build process: VM is provisioned > currently 8 extensions get applied one at a time, starting with the domain join, then the custom script extension, followed by several Azure monitoring extensions, and finally the host pool registration extension.
I've been trying to use the create_before_destroy lifecycle feature, but I can't get it to spin up the new VM and apply all extensions before it begins removing the host pool registration from the existing VMs. I assume there's a way to do it using triggers, but I'm not sure how to do it in such a way that it always keeps at least the current number of VMs.
It would also need to be able to stop if it encounters an error on the new vm, before destroying the existing vm (or better yet, be authorized to rebuild VMs if an extension fails part way through).
resource "random_pet" "avd_vm" {
prefix = var.client_name
length = 1
keepers = {
# Generate a new pet name each time we update the setup_host script
source_content = "${data.template_file.setup_host.rendered}"
}
}
data "template_file" "setup_host" {
template = file("${path.module}\\scripts\\setup-host.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
domain = var.domain
aad_group_name = var.aad_group_name
}
}
resource "azurerm_storage_blob" "setup_host" {
name = "setup-host.ps1"
storage_account_name = azurerm_storage_account.scripts.name
storage_container_name = time_sleep.container_rbac.triggers["name"]
type = "Block"
source_content = data.template_file.setup_host.rendered #"${path.module}\\scripts\\setup-host.ps1"
depends_on = [
azurerm_role_assignment.account1_write,
data.template_file.setup_host,
time_sleep.container_rbac
]
}
data "template_file" "client_r_drive_mapping" {
template = file("${path.module}\\scripts\\client_r_drive_mapping.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
}
}
resource "azurerm_windows_virtual_machine" "example" {
count = length(random_pet.avd_vm)
name = "${random_pet.avd_vm[count.index].id}"
...
lifecycle {
ignore_changes = [
boot_diagnostics,
identity
]
}
}
resource "azurerm_virtual_machine_extension" "first-domain_join_extension" {
count = var.rdsh_count
name = "${var.client_name}-avd-${random_pet.avd_vm[count.index].id}-domainJoin"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"Name": "${var.domain_name}",
"OUPath": "${var.ou_path}",
"User": "${var.domain_user_upn}#${var.domain_name}",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"Password": "${var.admin_password}"
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_network_peering.out-primary,
azurerm_virtual_network_peering.in-primary,
azurerm_virtual_network_peering.in-secondary
]
}
# Multiple scripts called by ./<scriptname referencing them in follow-up scripts
# https://web.archive.org/web/20220127015539/https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows
# https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows#using-multiple-scripts
resource "azurerm_virtual_machine_extension" "second-custom_scripts" {
count = var.rdsh_count
name = "${random_pet.avd_vm[count.index].id}-setup-host"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.10"
auto_upgrade_minor_version = "true"
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "${azurerm_storage_account.scripts.name}",
"storageAccountKey": "${azurerm_storage_account.scripts.primary_access_key}"
}
PROTECTED_SETTINGS
settings = <<SETTINGS
{
"fileUris": ["https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/setup-host.ps1","https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/client_r_drive_mapping.ps1"],
"commandToExecute": "powershell -ExecutionPolicy Unrestricted -file setup-host.ps1"
}
SETTINGS
depends_on = [
azurerm_virtual_machine_extension.first-domain_join_extension,
azurerm_storage_blob.setup_host
]
}
resource "azurerm_virtual_machine_extension" "last_host_extension_hp_registration" {
count = var.rdsh_count
name = "${var.client_name}-${random_pet.avd_vm[count.index].id}-avd_dsc"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.73"
auto_upgrade_minor_version = true
settings = <<-SETTINGS
{
"modulesUrl": "https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts/Configuration_3-10-2021.zip",
"configurationFunction": "Configuration.ps1\\AddSessionHost",
"properties": {
"HostPoolName":"${azurerm_virtual_desktop_host_pool.pooleddepthfirst.name}"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"properties": {
"registrationInfoToken": "${azurerm_virtual_desktop_host_pool_registration_info.pooleddepthfirst.token}"
}
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_machine_extension.second-custom_scripts
]
}
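To make the attempt concrete, here is roughly the shape of the lifecycle change being described (a sketch only; resource addresses and names are illustrative, and this by itself has not solved the ordering problem described above):

resource "azurerm_windows_virtual_machine" "avd_vm" {
  count = var.rdsh_count
  name  = "${var.client_name}-${random_pet.avd_vm.id}-${count.index}"
  # ... rest of the VM arguments ...

  lifecycle {
    # Because the name includes the pet id, and the pet's keepers track the
    # rendered setup-host script, a script change forces a replacement with a
    # new, non-conflicting name; create_before_destroy builds it first.
    create_before_destroy = true

    ignore_changes = [
      boot_diagnostics,
      identity
    ]
  }
}

The extensions would then attach to the replacement VMs because their virtual_machine_id changes, but nothing here guarantees that the host pool registration on the new VMs completes before the old VMs are destroyed, which is the part still missing.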
Enabling VM diagnostics in Azure is such a pain. I've gotten it working using ARM templates, the Azure PowerShell SDK, and the Azure CLI. But I've been trying for days now to enable VM diagnostics for both Windows and Linux VMs using Terraform and the azurerm_virtual_machine_extension resource. Still not working, ugh!
Here's what I have so far (I've tweaked this a bit to simplify it for this post, so I hope I didn't break anything with my manual edits):
resource "azurerm_virtual_machine_extension" "vm-linux" {
count = "${local.is_windows_vm == "false" ? 1 : 0}"
depends_on = ["azurerm_virtual_machine_data_disk_attachment.vm"]
name = "LinuxDiagnostic"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${local.vm_name}"
publisher = "Microsoft.Azure.Diagnostics"
type = "LinuxDiagnostic"
type_handler_version = "3.0"
auto_upgrade_minor_version = "true"
# The JSON file referenced below was created by running "az vm diagnostics get-default-config", and adding/verifying the "__DIAGNOSTIC_STORAGE_ACCOUNT__" and "__VM_RESOURCE_ID__" placeholders.
settings = <<SETTINGS
{
"ladCfg": "${base64encode(replace(replace(file("${path.module}/.diag-settings/linux_diag_config.json"), "__DIAGNOSTIC_STORAGE_ACCOUNT__", "${module.vm_storage_account.name}"), "__VM_RESOURCE_ID__", "${local.metricsresourceid}"))}",
"storageAccount": "${module.vm_storage_account.name}"
}
SETTINGS
# SAS token below: Do not include the leading question mark, as per https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/diagnostics-linux.
protected_settings = <<SETTINGS
{
"storageAccountName": "${module.vm_storage_account.name}",
"storageAccountSasToken": "${replace(data.azurerm_storage_account_sas.current.sas, "/^\\?/", "")}",
"storageAccountEndPoint": "https://core.windows.net/"
}
SETTINGS
}
resource "azurerm_virtual_machine_extension" "vm-win" {
count = "${local.is_windows_vm == "true" ? 1 : 0}"
depends_on = ["azurerm_virtual_machine_data_disk_attachment.vm"]
name = "Microsoft.Insights.VMDiagnosticsSettings"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${local.vm_name}"
publisher = "Microsoft.Azure.Diagnostics"
type = "IaaSDiagnostics"
type_handler_version = "1.9"
auto_upgrade_minor_version = "true"
# The JSON file referenced below was created by running "az vm diagnostics get-default-config --is-windows-os", and adding/verifying the "__DIAGNOSTIC_STORAGE_ACCOUNT__" and "__VM_RESOURCE_ID__" placeholders.
settings = <<SETTINGS
{
"wadCfg": "${base64encode(replace(replace(file("${path.module}/.diag-settings/windows_diag_config.json"), "__DIAGNOSTIC_STORAGE_ACCOUNT__", "${module.vm_storage_account.name}"), "__VM_RESOURCE_ID__", "${local.metricsresourceid}"))}",
"storageAccount": "${module.vm_storage_account.name}"
}
SETTINGS
protected_settings = <<SETTINGS
{
"storageAccountName": "${module.vm_storage_account.name}",
"storageAccountSasToken": "${data.azurerm_storage_account_sas.current.sas}",
"storageAccountEndPoint": "https://core.windows.net/"
}
SETTINGS
}
Notice that for both Linux and Windows I'm loading the diagnostics details from a JSON file within the code base, as per the comments. These are the default configs provided by Azure, so they should be valid.
When I deploy these, the Linux VM extension deploys successfully, but in the Azure portal the extension says "Problems detected in generated mdsd configuration". And if I look at the VM's "Diagnostic settings" it says "Error encountered: TypeError: Object doesn't support property or method 'diagnosticMonitorConfiguration'".
The Windows VM extension fails to deploy altogether, saying that it "Failed to read configuration". If I view the extension in the portal it displays the following error:
"code": "ComponentStatus//failed/-3",
"level": "Error",
"displayStatus": "Provisioning failed",
"message": "Error starting the diagnostics extension"
And if I look at the "Diagnostics settings" pane it just hangs with a never-ending ". . ." animation.
However, if I look at the "terraform apply" output for both VM extensions, the decoded settings look exactly as intended, matching the config files with the placeholders correctly replaced.
Any suggestions on how to get this working?
Thanks in advance!
I've gotten the Windows diagnostics to work 100% so far in our environment. It seems the AzureRM API is very picky about the config being sent. We had been using PowerShell to enable it, and the same xmlCfg used in PowerShell DID NOT WORK with Terraform.
So far this has worked for us (the settings/protected_settings names are case sensitive! e.g. xmlCfg works, while xmlcfg does not):
main.tf
#########################################################
# VM Extensions - Windows In-Guest Monitoring/Diagnostics
#########################################################
resource "azurerm_virtual_machine_extension" "InGuestDiagnostics" {
  name                       = var.compute["InGuestDiagnostics"]["name"]
  location                   = azurerm_resource_group.VMResourceGroup.location
  resource_group_name        = azurerm_resource_group.VMResourceGroup.name
  virtual_machine_name       = azurerm_virtual_machine.Compute.name
  publisher                  = var.compute["InGuestDiagnostics"]["publisher"]
  type                       = var.compute["InGuestDiagnostics"]["type"]
  type_handler_version       = var.compute["InGuestDiagnostics"]["type_handler_version"]
  auto_upgrade_minor_version = var.compute["InGuestDiagnostics"]["auto_upgrade_minor_version"]

  settings = <<SETTINGS
    {
      "xmlCfg": "${base64encode(templatefile("${path.module}/templates/wadcfgxml.tmpl", { vmid = azurerm_virtual_machine.Compute.id }))}",
      "storageAccount": "${data.azurerm_storage_account.InGuestDiagStorageAccount.name}"
    }
SETTINGS

  protected_settings = <<PROTECTEDSETTINGS
    {
      "storageAccountName": "${data.azurerm_storage_account.InGuestDiagStorageAccount.name}",
      "storageAccountKey": "${data.azurerm_storage_account.InGuestDiagStorageAccount.primary_access_key}",
      "storageAccountEndPoint": "https://core.windows.net"
    }
PROTECTEDSETTINGS
}
tfvars
InGuestDiagnostics = {
  name                       = "WindowsDiagnostics"
  publisher                  = "Microsoft.Azure.Diagnostics"
  type                       = "IaaSDiagnostics"
  type_handler_version       = "1.16"
  auto_upgrade_minor_version = "true"
}
wadcfgxml.tmpl (I cut out some of the Perf counters for brevity)
<WadCfg>
  <DiagnosticMonitorConfiguration overallQuotaInMB="5120">
    <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Error"/>
    <Metrics resourceId="${vmid}">
      <MetricAggregation scheduledTransferPeriod="PT1H"/>
      <MetricAggregation scheduledTransferPeriod="PT1M"/>
    </Metrics>
    <PerformanceCounters scheduledTransferPeriod="PT1M">
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% Processor Time" sampleRate="PT60S" unit="Percent" />
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% Privileged Time" sampleRate="PT60S" unit="Percent" />
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\% User Time" sampleRate="PT60S" unit="Percent" />
      <PerformanceCounterConfiguration counterSpecifier="\Processor Information(_Total)\Processor Frequency" sampleRate="PT60S" unit="Count" />
      <PerformanceCounterConfiguration counterSpecifier="\System\Processes" sampleRate="PT60S" unit="Count" />
      <PerformanceCounterConfiguration counterSpecifier="\SQLServer:SQL Statistics\SQL Re-Compilations/sec" sampleRate="PT60S" unit="Count" />
    </PerformanceCounters>
    <WindowsEventLog scheduledTransferPeriod="PT1M">
      <DataSource name="Application!*[System[(Level = 1 or Level = 2)]]"/>
      <DataSource name="Security!*[System[(Level = 1 or Level = 2)]]"/>
      <DataSource name="System!*[System[(Level = 1 or Level = 2)]]"/>
    </WindowsEventLog>
  </DiagnosticMonitorConfiguration>
</WadCfg>
I finally got the Linux In-Guest Diagnostics (LAD) to work. A few notable facts: unlike the Windows diagnostics, the settings need to be transmitted as JSON, with no base64 encoding. Additionally, LAD seems to require a SAS token for the storage account. The usual caveats remain: the AzureRM API is picky about the config, and the settings are case sensitive. Here is what is working for me so far.
# Locals
locals {
  env = var.workspace[terraform.workspace]

  # Use a set/static time to avoid TF from recreating the SAS token every apply, which would then cause it to
  # modify/recreate anything that uses it. Not ideal, but the token is for a VERY long time, so it will do for now.
  sas_begintime = "2019-11-22T00:00:00Z"
  sas_endtime   = timeadd(local.sas_begintime, "873600h")
}
#########################################################
# VM Extensions - In-Guest Diagnostics
#########################################################
# We need a SAS token for the In-Guest Metrics
data "azurerm_storage_account_sas" "inguestdiagnostics" {
  count             = (contains(keys(local.env), "InGuestDiagnostics") ? 1 : 0)
  connection_string = data.azurerm_storage_account.BootDiagStorageAccount.primary_connection_string
  https_only        = true

  resource_types {
    service   = true
    container = true
    object    = true
  }

  services {
    blob  = true
    queue = true
    table = true
    file  = true
  }

  start  = local.sas_begintime
  expiry = local.sas_endtime

  permissions {
    read    = true
    write   = true
    delete  = true
    list    = true
    add     = true
    create  = true
    update  = true
    process = true
  }
}
resource "azurerm_virtual_machine_extension" "inguestdiagnostics" {
for_each = contains(keys(local.env), "InGuestDiagnostics") ? local.env["InGuestDiagnostics"] : {}
depends_on = [azurerm_virtual_machine_extension.dependencyagent]
name = each.value["name"]
location = azurerm_resource_group.resourcegroup.location
resource_group_name = azurerm_resource_group.resourcegroup.name
virtual_machine_name = azurerm_virtual_machine.compute["${each.key}"].name
publisher = each.value["publisher"]
type = each.value["type"]
type_handler_version = each.value["type_handler_version"]
auto_upgrade_minor_version = each.value["auto_upgrade_minor_version"]
settings = templatefile("${path.module}/templates/ladcfg2json.tmpl", { vmid = azurerm_virtual_machine.compute["${each.key}"].id, storageAccountName = data.azurerm_storage_account.BootDiagStorageAccount.name })
protected_settings = <<PROTECTEDSETTINGS
{
"storageAccountName": "${data.azurerm_storage_account.BootDiagStorageAccount.name}",
"storageAccountSasToken": "${replace(data.azurerm_storage_account_sas.inguestdiagnostics.0.sas, "/^\\?/", "")}"
}
PROTECTEDSETTINGS
}
# These variations didn't work for me:
# "ladCfg": "${templatefile("${path.module}/templates/ladcfgjson.tmpl", { vmid = azurerm_virtual_machine.compute["${each.key}"].id, storageAccountName = data.azurerm_storage_account.BootDiagStorageAccount.name })}",
#   - This one gets you: Error: "settings" contains an invalid JSON: invalid character '\n' in string literal, or Error: "settings" contains an invalid JSON: invalid character 'S' after object key:value pair
# "ladCfg": "${replace(data.local_file.ladcfgjson["${each.key}"].content, "/\\n/", "")}",
#   - This one gets you: Error: "settings" contains an invalid JSON: invalid character 'S' after object key:value pair
tfvars
workspace = {
  TerraformWorkSpaceName = {
    compute = {
      # Add additional key/objects for additional Compute
      computer01 = {
        name = "computer01"
      }
    }
    InGuestDiagnostics = {
      # Add additional key/objects for each Compute you want to install the InGuestDiagnostics on
      computer01 = {
        name                       = "LinuxDiagnostic"
        publisher                  = "Microsoft.Azure.Diagnostics"
        type                       = "LinuxDiagnostic"
        type_handler_version       = "3.0"
        auto_upgrade_minor_version = "true"
      }
    }
  }
}
I couldn't get a template file to work without wrapping the WHOLE thing in jsonencode.
ladcfg2json.tmpl
${jsonencode({
  "StorageAccount": "${storageAccountName}",
  "ladCfg": {
    "sampleRateInSeconds": 15,
    "diagnosticMonitorConfiguration": {
      "metrics": {
        "metricAggregation": [
          {
            "scheduledTransferPeriod": "PT1M"
          },
          {
            "scheduledTransferPeriod": "PT1H"
          }
        ],
        "resourceId": "${vmid}"
      },
      "eventVolume": "Medium",
      "performanceCounters": {
        "sinks": "",
        "performanceCounterConfiguration": [
          {
            "counterSpecifier": "/builtin/processor/percentiowaittime",
            "condition": "IsAggregate=TRUE",
            "sampleRate": "PT15S",
            "annotation": [
              {
                "locale": "en-us",
                "displayName": "CPU IO wait time"
              }
            ],
            "unit": "Percent",
            "class": "processor",
            "counter": "percentiowaittime",
            "type": "builtin"
          }
        ]
      },
      "syslogEvents": {
        "syslogEventConfiguration": {
          "LOG_LOCAL0": "LOG_DEBUG"
        }
      }
    }
  }
})}
I hope this helps..
As the question was asked more than a year ago, this is more for people like me who are trying this for the first time.
We only use Linux VMs, so this advice applies to those:
protected settings should use PROTECTED_SETTINGS, not SETTINGS (which you can see in rv23's answer above)
From the documentation I am following, https://learn.microsoft.com/en-gb/azure/virtual-machines/extensions/diagnostics-linux#protected-settings, you can see you need to specify storageAccountSasToken, not storageAccountKey.
Here is my redacted version of the config (replace all the bits in all caps with your own settings):
resource "azurerm_virtual_machine_extension" "vm_linux_diagnostics" {
count = "1"
name = "NAME"
resource_group_name = "YOUR RESOURCE GROUP NAME"
location = "YOUR LOCATION"
virtual_machine_name = "TARGET MACHINE NAME"
publisher = "Microsoft.Azure.Diagnostics"
type = "LinuxDiagnostic"
type_handler_version = "3.0"
auto_upgrade_minor_version = "true"
settings = <<SETTINGS
{
"StorageAccount": "tfnpfsnhsuk",
"ladCfg": {
"sampleRateInSeconds": 15,
"diagnosticMonitorConfiguration": {
"metrics": {
"metricAggregation": [
{
"scheduledTransferPeriod": "PT1M"
},
{
"scheduledTransferPeriod": "PT1H"
}
],
"resourceId": "VM ID"
},
"eventVolume": "Medium",
"performanceCounters": {
"sinks": "",
.... MORE METRICS - THAT YOU REQUIRE
}
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "YOUR_ACCOUNT_NAME",
"storageAccountSasToken": "YOUR SAS TOKEN"
}
PROTECTED_SETTINGS
tags = "YOUR TAG"
}
Just got this working on a similar question:
Trying to add LinuxDiagnostic Azure VM Extension through terraform and getting errors
This includes getting the SAS token and reading the config from JSON files.
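For completeness, a sketch of the "read the config from a JSON file" part (the file name, resource names, and the assumption that the JSON has a top-level ladCfg object, like the default config mentioned earlier, are all illustrative):

# Load a LAD 3.0 config from a JSON file kept in the module and hand it to the
# extension as plain JSON (no base64), injecting the per-deployment values.
locals {
  lad_config = jsondecode(file("${path.module}/lad_config.json"))
}

resource "azurerm_virtual_machine_extension" "linux_diag" {
  name                       = "LinuxDiagnostic"
  virtual_machine_id         = azurerm_linux_virtual_machine.example.id
  publisher                  = "Microsoft.Azure.Diagnostics"
  type                       = "LinuxDiagnostic"
  type_handler_version       = "3.0"
  auto_upgrade_minor_version = true

  settings = jsonencode({
    StorageAccount = azurerm_storage_account.diag.name
    ladCfg         = local.lad_config.ladCfg
  })

  protected_settings = jsonencode({
    storageAccountName     = azurerm_storage_account.diag.name
    # Strip the leading "?" from the SAS token, as the LAD docs require.
    storageAccountSasToken = trimprefix(data.azurerm_storage_account_sas.diag.sas, "?")
  })
}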
I am using the Terraform .tf file below. It always shows the following error:
DownloadFileAsync: Error in downloading file from
https://github.dxc.com/raw/gist/jmathews4/2095e2436571715f94e05e5ac5400a67/raw/f554d018ae4fee12979b2ee6f5ac4abb3ff509aa/Terraform.ps1?token=AAAMrnoBPpBc7C8kN2_haQueWaqDhth-ks5bUGqhwA%3D%3D,
retry count 25, exception: System.Net.WebException: The request was
aborted: Could not create SSL/TLS secure channel. at
System.Net.WebClient.DownloadFile(Uri address, String fileName)
Any idea what the issue with Git could be?
provider "azurerm" {
}
variable "location" { default = "Southeast Asia" }
variable "resourceGroup" { default = "TerraformResearchResourceGroup" }
variable "virtualMachine" { default = "terraformrvm" }
resource "azurerm_virtual_machine_extension" "ext" {
name = "eagpocvmexxt"
location = "Southeast Asia"
resource_group_name = "terraformrg"
virtual_machine_name = "terraformvm"
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.8"
settings = <<SETTINGS
{
"fileUris": ["https://github.dxc.com/raw/gist/jmathews4/2095e2436571715f94e05e5ac5400a67/raw/f554d018ae4fee12979b2ee6f5ac4abb3ff509aa/Terraform.ps1?token=AAAMrnoBPpBc7C8kN2_haQueWaqDhth-ks5bUGqhwA%3D%3D"],
"commandToExecute": "powershell.exe -ExecutionPolicy unrestricted -NoProfile -NonInteractive -File Terraform.ps1"
}
SETTINGS
tags {
environment = "Production"
}
}
Yeah, the issue is that Git is enforcing TLS 1.2 and the extension doesn't like that. The workaround is to put the file somewhere the extension can download it without intervention, such as Azure Storage.
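A sketch of that workaround (resource names are illustrative, and it assumes an existing storage account and container): upload the script as a blob and point fileUris at it, passing the storage account key through protected_settings so the extension can fetch it.

# Upload the script to blob storage so the extension can download it directly.
resource "azurerm_storage_blob" "script" {
  name                   = "Terraform.ps1"
  storage_account_name   = azurerm_storage_account.scripts.name
  storage_container_name = azurerm_storage_container.scripts.name
  type                   = "Block"
  source                 = "${path.module}/scripts/Terraform.ps1"
}

resource "azurerm_virtual_machine_extension" "ext" {
  name                 = "eagpocvmexxt"
  location             = "Southeast Asia"
  resource_group_name  = "terraformrg"
  virtual_machine_name = "terraformvm"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.8"

  settings = <<SETTINGS
    {
      "fileUris": ["${azurerm_storage_blob.script.url}"],
      "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -NoProfile -NonInteractive -File Terraform.ps1"
    }
SETTINGS

  protected_settings = <<PROTECTED_SETTINGS
    {
      "storageAccountName": "${azurerm_storage_account.scripts.name}",
      "storageAccountKey": "${azurerm_storage_account.scripts.primary_access_key}"
    }
PROTECTED_SETTINGS
}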