SentinelOne LinuxExtension - Azure - linux

I am currently looking to deploy the SentinelOne agent via Terraform. There does not appear to be much documentation online for using this VM extension with Terraform. Has anyone successfully deployed the S1 agent via a Terraform extension resource? I am unclear on what to add to the settings/protected_settings blocks. Any help is appreciated.
"azurerm_virtual_machine_extension" "example" {
name = "hostname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "SentinelOne.LinuxExtension"
type = "LinuxExtension"
type_handler_version = "1.0"

To add the settings/protected_settings blocks in Terraform:
resource "azurerm_virtual_machine_extension" "example" {
name = "hostname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "SentinelOne.LinuxExtension"
type = "LinuxExtension"
type_handler_version = "1.0"
settings = <<SETTINGS
{
"commandToExecute": "powershell.exe -Command \"${local.powershell_command}\""
}
SETTINGS
tags = {
environment = "Production"
}
depends_on = [
azurerm_virtual_machine.example
]
}
settings - The extension's settings are provided as a string-encoded JSON object.
protected_settings - Like settings, the protected settings passed to the extension are supplied as a string-encoded JSON object.
The keys in the settings and protected_settings blocks are case sensitive for some VM extensions, so make sure they are written exactly how Azure expects them (for example, the JsonADDomainExtension extension expects its keys in TitleCase).
Reference: azurerm_virtual_machine_extension
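Because both blocks are just string-encoded JSON, one way to avoid quoting and escaping mistakes is to build them with Terraform's jsonencode() function instead of a heredoc. A minimal sketch of the same example (not SentinelOne-specific; the command string is unchanged from above):
resource "azurerm_virtual_machine_extension" "example" {
  name                 = "hostname"
  virtual_machine_id   = azurerm_virtual_machine.example.id
  publisher            = "SentinelOne.LinuxExtension"
  type                 = "LinuxExtension"
  type_handler_version = "1.0"

  # jsonencode() emits the string-encoded JSON the extension expects and
  # preserves key casing exactly as written here.
  settings = jsonencode({
    commandToExecute = "powershell.exe -Command \"${local.powershell_command}\""
  })
}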

Installing the plugin manually and checking the JSON output gives the following settings block:
{
  "LinuxAgentVersion": "22.4.1.2",
  "SiteToken": "<your_site_token_here>"
}
Unfortunately, this leaves out the one critical field required for installation, since it's a protected setting: the field name for the SentinelOne Console API token.
UPDATE:
Working extension example after finding the correct JSON key value:
resource "azurerm_virtual_machine_extension" "testserver-sentinelone-extension" {
name = "SentinelOneLinuxExtension"
virtual_machine_id = azurerm_linux_virtual_machine.testserver.id
publisher = "SentinelOne.LinuxExtension"
type = "LinuxExtension"
type_handler_version = "1.2"
automatic_upgrade_enabled = false
settings = <<SETTINGS
{
"LinuxAgentVersion": "22.4.1.2",
"SiteToken": "<your_site_token_here>"
}
SETTINGS
protected_settings = <<PROTECTEDSETTINGS
{
"SentinelOneConsoleAPIKey": "${var.sentinel_one_api_token}"
}
PROTECTEDSETTINGS
}
EDIT: Figured it out by once again manually installing the extension on another test system, and then digging into the waagent logs on that VM to see what value was being queried by the enable.sh script.
# cat /var/lib/waagent/SentinelOne.LinuxExtension.LinuxExtension-1.2.0/scripts/enable.sh | grep Console
api_token=$(echo "$protected_settings_decrypted" | jq -r ".SentinelOneConsoleAPIKey")
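Since the working example interpolates var.sentinel_one_api_token into protected_settings, it's worth declaring that variable as sensitive so the token stays out of plan output. A minimal sketch (the variable name matches the example above; the rest is an assumption):
variable "sentinel_one_api_token" {
  description = "SentinelOne console API token passed to the LinuxExtension protected settings"
  type        = string
  sensitive   = true # keeps the token out of `terraform plan` / `apply` output
}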

Related

AzureWebJobsDashboard no longer supported, but added automatically to Azure Function App

In our Application Insights logs for Azure Functions there are a lot of warnings with the message:
The Dashboard setting is no longer supported. See https://aka.ms/functions-dashboard for details.
We build our Azure resources using Terraform, and since our Function Apps target the "~4" runtime version we don't add the AzureWebJobsDashboard setting to our Function's Application settings. (According to the docs: The AzureWebJobsDashboard setting is only valid for apps that target version 1.x of the Azure Functions runtime.)
I was therefore surprised to find the AzureWebJobsDashboard setting with a value in the Azure portal. Any idea how it got there?
I deleted the setting manually in the portal for four of the apps we have running, and the logged warnings went away - however, the setting reappeared in one of them after a little while 🤯 Is there any way to make sure the deletion is permanent?
Edit: I tried deleting the setting manually for four new apps - making sure to save the changes, and the setting reappeared in two of them after some hours.
Edit2: After 1-2 days the setting is back in all eight apps.
There's a special setting, builtin_logging_enabled, in the Terraform resource for Azure Functions:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app#enable_builtin_logging
Setting it to false should disable AzureWebJobsDashboard.
Just add it in your azurerm_windows_function_app resource like this:
resource "azurerm_windows_function_app" "func" {
name = "sample-function-app"
builtin_logging_enabled = false
...
}
We tried the same in our environment to check whether AzureWebJobsDashboard appears when deploying an Azure Function with Terraform.
Yes, it was there, and the document you followed is correct. To resolve the issue we made the changes below manually.
After deleting AzureWebJobsDashboard, make sure APPINSIGHTS_INSTRUMENTATIONKEY is applied and Application Insights is enabled for the Function App; the value is stored automatically once it is enabled.
In your case the setting reappeared automatically after some time or days, but with the above enabled it did not come back for us; we checked several times and it has not reappeared.
NOTE: we used Python 3.9 with Functions runtime v4 in a Linux environment.
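For reference, a sketch of how those two settings could be pinned in Terraform, assuming an azurerm_linux_function_app and an azurerm_application_insights resource both named example (placeholder names):
resource "azurerm_linux_function_app" "example" {
  # ... name, location, service plan and storage settings as in the repro below ...

  builtin_logging_enabled = false # prevents AzureWebJobsDashboard from being added

  app_settings = {
    # Wires the app to Application Insights instead of the legacy dashboard logging.
    "APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.example.instrumentation_key
  }
}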
Below is the Terraform code that we used to reproduce the issue:
main.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "ajayXXXX"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "exatst"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_service_plan" "example" {
name = "example-service-plan1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
os_type = "Linux"
sku_name = "S1"
}
resource "azurerm_linux_function_app" "example" {
name = "funterraform"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
service_plan_id = azurerm_service_plan.example.id
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
site_config {
application_stack {
python_version = "3.9"
}
}
}
resource "azurerm_function_app_function" "example" {
name = "example-function-app-function"
function_app_id = azurerm_linux_function_app.example.id
language = "Python"
test_data = jsonencode({
"name" = "Azure"
})
config_json = jsonencode({
"bindings" = [
{
"authLevel" = "function"
"direction" = "in"
"methods" = [
"get",
"post",
]
"name" = "req"
"type" = "httpTrigger"
},
{
"direction" = "out"
"name" = "$return"
"type" = "http"
},
]
})
}
Source code taken from: HashiCorp Terraform Registry | azurerm_function_app_function
For more information, please refer to the links below:
GitHub Issue | Remove support for AzureWebJobsDashboard
Microsoft Docs | App settings reference for Azure Functions

Is there a way in terraform to create a replacement group of related resources before destroying the original group?

I have a VM template I'm using to deploy an Azure Virtual Desktop environment to Azure with Terraform (via Octopus Deploy). On top of the virtual machines, I'm installing a number of extensions, culminating in a VM extension that registers the VM with the host pool.
I'd like to rebuild the VM each time the custom script extension is applied (Extension #2, after domain join). But in rebuilding the VM, I'd like to build out a new VM, complete with the host pool registration before any part of the existing VM is destroyed.
Please accept the cut down version below to understand what I am trying to do.
I expect the largest number of machine recreations to come from enhancements to the configuration scripts that configure the server on creation. Not all of the commands are expected to be idempotent, and we want the AVD VMs to be ephemeral. If an issue is encountered, the support team is expected to be able to drain a server and destroy it once empty to get a replacement via terraform apply. In a case where the script gets updated, though, we want to be able to replace all VMs quickly in an emergency, or at the very least minimize the nightly maintenance window.
Script Process: parameterized script > gets filled out as a template file > gets stored as an az blob > called by custom script extension > executed on the machine.
VM build process: VM is provisioned > currently 8 extensions get applied one at a time, starting with the domain join, then the custom script extension, followed by several Azure monitoring extensions, and finally the host pool registration extension.
I've been trying to use the create_before_destroy lifecycle feature, but I can't get it to spin up the VM and apply all extensions before it begins removing the host pool registration from the existing VMs. I assume there's a way to do it using triggers, but I'm not sure how to do it in such a way that it always has at least the current number of VMs.
It would also need to be able to stop if it encounters an error on the new VM before destroying the existing VM (or, better yet, be authorized to rebuild VMs if an extension fails part way through).
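For reference, a minimal sketch of the create_before_destroy pattern being attempted, with the pet name keyed to the rendered script so a script change forces a replacement (a hypothetical outline, not the full configuration):
resource "random_pet" "avd_vm" {
  count  = var.rdsh_count
  prefix = var.client_name
  length = 1

  keepers = {
    # A change to the rendered setup script produces a new pet name,
    # which in turn forces the VM that uses it to be replaced.
    source_content = data.template_file.setup_host.rendered
  }
}

resource "azurerm_windows_virtual_machine" "avd_vm" {
  count = var.rdsh_count
  name  = random_pet.avd_vm[count.index].id
  # ... size, image, network interface, admin credentials ...

  lifecycle {
    # Ask Terraform to build the replacement before destroying the original.
    create_before_destroy = true
  }
}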
resource "random_pet" "avd_vm" {
prefix = var.client_name
length = 1
keepers = {
# Generate a new pet name each time we update the setup_host script
source_content = "${data.template_file.setup_host.rendered}"
}
}
data "template_file" "setup_host" {
template = file("${path.module}\\scripts\\setup-host.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
domain = var.domain
aad_group_name = var.aad_group_name
}
}
resource "azurerm_storage_blob" "setup_host" {
name = "setup-host.ps1"
storage_account_name = azurerm_storage_account.scripts.name
storage_container_name = time_sleep.container_rbac.triggers["name"]
type = "Block"
source_content = data.template_file.setup_host.rendered #"${path.module}\\scripts\\setup-host.ps1"
depends_on = [
azurerm_role_assignment.account1_write,
data.template_file.setup_host,
time_sleep.container_rbac
]
}
data "template_file" "client_r_drive_mapping" {
template = file("${path.module}\\scripts\\client_r_drive_mapping.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
}
}
resource "azurerm_windows_virtual_machine" "example" {
count = length(random_pet.avd_vm)
name = "${random_pet.avd_vm[count.index].id}"
...
lifecycle {
ignore_changes = [
boot_diagnostics,
identity
]
}
}
resource "azurerm_virtual_machine_extension" "first-domain_join_extension" {
count = var.rdsh_count
name = "${var.client_name}-avd-${random_pet.avd_vm[count.index].id}-domainJoin"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"Name": "${var.domain_name}",
"OUPath": "${var.ou_path}",
"User": "${var.domain_user_upn}#${var.domain_name}",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"Password": "${var.admin_password}"
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_network_peering.out-primary,
azurerm_virtual_network_peering.in-primary,
azurerm_virtual_network_peering.in-secondary
]
}
# Multiple scripts called by ./<scriptname referencing them in follow-up scripts
# https://web.archive.org/web/20220127015539/https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows
# https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows#using-multiple-scripts
resource "azurerm_virtual_machine_extension" "second-custom_scripts" {
count = var.rdsh_count
name = "${random_pet.avd_vm[count.index].id}-setup-host"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.10"
auto_upgrade_minor_version = "true"
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "${azurerm_storage_account.scripts.name}",
"storageAccountKey": "${azurerm_storage_account.scripts.primary_access_key}"
}
PROTECTED_SETTINGS
settings = <<SETTINGS
{
"fileUris": ["https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/setup-host.ps1","https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/client_r_drive_mapping.ps1"],
"commandToExecute": "powershell -ExecutionPolicy Unrestricted -file setup-host.ps1"
}
SETTINGS
depends_on = [
azurerm_virtual_machine_extension.first-domain_join_extension,
azurerm_storage_blob.setup_host
]
}
resource "azurerm_virtual_machine_extension" "last_host_extension_hp_registration" {
count = var.rdsh_count
name = "${var.client_name}-${random_pet.avd_vm[count.index].id}-avd_dsc"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.73"
auto_upgrade_minor_version = true
settings = <<-SETTINGS
{
"modulesUrl": "https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts/Configuration_3-10-2021.zip",
"configurationFunction": "Configuration.ps1\\AddSessionHost",
"properties": {
"HostPoolName":"${azurerm_virtual_desktop_host_pool.pooleddepthfirst.name}"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"properties": {
"registrationInfoToken": "${azurerm_virtual_desktop_host_pool_registration_info.pooleddepthfirst.token}"
}
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_machine_extension.second-custom_scripts
]
}

Creating an Azure Automation DSC configuration and DSC node configuration using Terraform doesn't seem to be working

As the very first step of my release process I run the following Terraform code:
resource "azurerm_automation_account" "automation_account" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "${local.automation_account_prefix}-${each.key}"
location = each.key
resource_group_name = each.value.name
sku_name = "Basic"
tags = {
environment = "development"
}
}
The automation accounts are created as expected and I can see them in the Azure portal.
I also have Terraform code that creates a couple of Windows VMs; each VM creation is accompanied by the following:
resource "azurerm_virtual_machine_extension" "dsc" {
name = "DevOpsDSC"
virtual_machine_id = var.vm_id
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.83"
settings = <<SETTINGS_JSON
{
"configurationArguments": {
"RegistrationUrl": "${var.dsc_server_endpoint}",
"NodeConfigurationName": "${var.dsc_config}",
"ConfigurationMode": "${var.dsc_mode}",
"ConfigurationModeFrequencyMins": 15,
"RefreshFrequencyMins": 30,
"RebootNodeIfNeeded": false,
"ActionAfterReboot": "continueConfiguration",
"AllowModuleOverwrite": true
}
}
SETTINGS_JSON
protected_settings = <<PROTECTED_SETTINGS_JSON
{
"configurationArguments": {
"RegistrationKey": {
"UserName": "PLACEHOLDER_DONOTUSE",
"Password": "${var.dsc_primary_access_key}"
}
}
}
PROTECTED_SETTINGS_JSON
}
The result: the VM extension is created for each VM and its status says that provisioning succeeded.
For the next step I run the following Terraform code:
resource "azurerm_automation_dsc_configuration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
location = each.key
content_embedded = "configuration iswebserver {}"
}
resource "azurerm_automation_dsc_nodeconfiguration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver.localhost"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
depends_on = [azurerm_automation_dsc_configuration.iswebserver]
content_embedded = file("${path.cwd}/iswebserver.mof")
}
The MOF file content is the following:
/*
#TargetNode='IsWebServer'
#GeneratedBy=P120bd0
#GenerationDate=02/25/2021 17:33:16
#GenerationHost=D-MJ05UA54
*/
instance of MSFT_RoleResource as $MSFT_RoleResource1ref
{
ResourceID = "[WindowsFeature]IIS";
IncludeAllSubFeature = True;
Ensure = "Present";
SourceInfo = "D:\\DSC\\testconfig.ps1::5::9::WindowsFeature";
Name = "Web-Server";
ModuleName = "PsDesiredStateConfiguration";
ModuleVersion = "1.0";
ConfigurationName = "TestConfig";
};
instance of OMI_ConfigurationDocument
{
Version="2.0.0";
MinimumCompatibleVersion = "1.0.0";
CompatibleVersionAdditionalProperties= {"Omi_BaseResource:ConfigurationName"};
Author="P120bd0";
GenerationDate="02/25/2021 17:33:16";
GenerationHost="D-MJ05UA54";
Name="TestConfig";
};
After running the code, the configuration is created as expected. Clicking on the configuration entry in the portal shows that the node configuration is created as well. My expectation was that for each VM I would see a node configured to run the configuration provided in the MOF file, but the Nodes view shows no nodes.
So I tried to configure a node manually to connect all the pieces together, and that fails.
I am totally confused. On the one hand, there's azurerm_virtual_machine_extension, which allows you to create the extension and bind it to the Automation account. In addition, there are azurerm_automation_dsc_configuration and azurerm_automation_dsc_nodeconfiguration, which allow you to create the configuration and node configuration. But the bottom line is that you cannot connect all those dots to actually create the node.
Just to confirm that the configuration is valid, I created an additional VM without using azurerm_virtual_machine_extension and was able to successfully add this VM to the created node configuration.
The problem was in the azurerm_virtual_machine_extension DSC configuration argument (the NodeConfigurationName value, fed by var.dsc_config above). The value needs to be the same as the name property of the azurerm_automation_dsc_nodeconfiguration resource.
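A sketch of that alignment using the resources from the question (assuming the extension sits in a module that takes dsc_config as an input, as the variables above suggest):
# The node configuration registered in the Automation account (from the question):
#   azurerm_automation_dsc_nodeconfiguration.iswebserver -> name = "iswebserver.localhost"
#
# Whatever feeds the extension's "NodeConfigurationName" setting (var.dsc_config above)
# must carry exactly the same value, e.g.:
module "windows_vm" {
  source     = "./modules/windows_vm" # hypothetical module wrapping the VM and its DSC extension
  # ... other VM inputs ...
  dsc_config = "iswebserver.localhost"
}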

Terraform Azure VM extension type and type_handler_version parameter values for AADLoginForWindows

I'm trying to add the AADLoginForWindows VM extension to an Azure Windows Server VM using version 1.21.0 of the terraform azurerm provider.
The install fails with the message:
Failure sending request: StatusCode=404 -- Original Error: Code="ArtifactNotFound" Message="Extension with publisher 'Microsoft.Azure.ActiveDirectory', type 'AADLoginForWindows', and type handler version '1.0' could not be found in the extension repository.
The Terraform config to apply AADLoginForLinux (which works):
resource "azurerm_virtual_machine_extension" "AADLoginForLinux" {
name = "AADLoginForLinux"
location = "${azurerm_virtual_machine.vm-linux-bastion.location}"
resource_group_name = "${azurerm_virtual_machine.vm-linux-bastion.resource_group_name}"
virtual_machine_name = "${azurerm_virtual_machine.vm-linux-bastion.name}"
publisher = "Microsoft.Azure.ActiveDirectory.LinuxSSH"
type = "AADLoginForLinux"
type_handler_version = "1.0"
auto_upgrade_minor_version = true
}
I suspect there is something wrong with either the type or type_handler_version parameter values but I don't understand what these values relate to (and some Googling has not provided enlightenment).
There is no documentation available for AADLoginForWindows (perhaps that should be a warning! ;) ) but I'm hoping that it works much the same way as AADLoginForLinux, which allows us to log in to Linux VMs using credentials managed directly in Azure AD.
My Terraform configuration is:
resource "azurerm_virtual_machine_extension" "AADLoginForWindows" {
name = "AADLoginForWindows"
location = "${azurerm_resource_group.rg-dataaq-prd-neu-ftps.location}"
resource_group_name = "${azurerm_resource_group.rg-dataaq-prd-neu-ftps.name}"
virtual_machine_name = "${azurerm_virtual_machine.vm-windows.name}"
publisher = "Microsoft.Azure.ActiveDirectory"
type = "AADLoginForWindows"
type_handler_version = "1.0"
auto_upgrade_minor_version = true
depends_on = ["azurerm_virtual_machine_extension.antimal"]
}
Using the az cli I can find the following info about versions of the extension:
az vm extension image list --name AADLoginForWindows
[
  {
    "name": "AADLoginForWindows",
    "publisher": "Microsoft.Azure.ActiveDirectory",
    "version": "0.3.0.0"
  },
  {
    "name": "AADLoginForWindows",
    "publisher": "Microsoft.Azure.ActiveDirectory",
    "version": "0.3.1.0"
  }
]
Inquiring about a specific version of the extension:
az vm extension image show --name AADLoginForWindows --publisher "Microsoft.Azure.ActiveDirectory" --location northeurope --version "0.3.1.0"
{
  "computeRole": "IaaS",
  "handlerSchema": null,
  "id": "/Subscriptions/.../Providers/Microsoft.Compute/Locations/northeurope/Publishers/Microsoft.Azure.ActiveDirectory/ArtifactTypes/VMExtension/Types/AADLoginForWindows/Versions/0.3.1.0",
  "location": "northeurope",
  "name": "0.3.1.0",
  "operatingSystem": "Windows",
  "supportsMultipleExtensions": false,
  "tags": null,
  "type": null,
  "vmScaleSetEnabled": false
}
I think the "publisher" Terraform parameter must equate to the publisher value in the first query.
The fact that type comes back as null in the second query makes me wonder if that really does map to the "type" Terraform param.
There doesn't seem to be anything related to a type_handler_version.
Does anyone know what config I should be using to get this VM extension installed?
Can anyone describe the Terraform type and type_handler_version parameters in a bit more detail (and describe how to find valid values)?
To test whether this is a Terraform bug I tried applying the extension using the az cli tool:
az vm extension set -n AADLoginForWindows --publisher "Microsoft.Azure.ActiveDirectory" --vm vmname --resource-group rg-name
This gives the below error:
Handler 'Microsoft.Azure.ActiveDirectory.AADLoginForWindows' has reported failure for VM Extension 'AADLoginForWindows' with terminal error code '1007' and error message: 'Install failed for plugin (name: Microsoft.Azure.ActiveDirectory.AADLoginForWindows, version 0.3.1.0) with exception Command C:\Packages\Plugins\Microsoft.Azure.ActiveDirectory.AADLoginForWindows\0.3.1.0\AADLoginForWindowsHandler.exe of Microsoft.Azure.ActiveDirectory.AADLoginForWindows has exited with Exit code: 51'
Change your type_handler_version to match the actual one (0.3.1.0 according to your findings):
type_handler_version = "0.3.1.0"
It cannot downgrade the version, only upgrade, and only the minor version.
The Linux version works because it is already above 1.0.0.0, while the Windows version is still not at 1.0.
I use the following code with AzureRM 2.x and Terraform v0.12.x to add the AD login for Linux:
resource "azurerm_virtual_machine_extension" "ad-extenstion-linux" {
depends_on=[azurerm_linux_virtual_machine.ubuntu-linux-vm]
name = "AADLoginForLinux"
publisher = "Microsoft.Azure.ActiveDirectory.LinuxSSH"
type = "AADLoginForLinux"
type_handler_version = "1.0"
auto_upgrade_minor_version = true
virtual_machine_id = azurerm_linux_virtual_machine.ubuntu-linux-vm.id
}
Maybe a bit late to the party, but the issue is still ongoing. I managed to successfully deploy the extension to a Windows Server 2019 VM using type_handler_version = "1.0".
The version of azurerm that I am using is > 2.50, with Terraform > 1.0.
First: your virtual machine is Windows, so use the Windows login extension.
Second: the virtual machine must be passed as its ID (virtual_machine_id).
Third: the resource group is not needed.
The correct extension and code are:
resource "azurerm_virtual_machine_extension" "example" {
name = "AADLoginForWindows"
virtual_machine_id = azurerm_windows_virtual_machine.example.id
type = "AADLoginForWindows"
type_handler_version = "1.0"
auto_upgrade_minor_version = true
publisher = "Microsoft.Azure.ActiveDirectory"
}
I quickly gave up trying to get the resource to work. Instead I just used a gross workaround of calling a local provisioner and adding the extension through the Azure CLI:
provisioner "local-exec" {
command = "az vm extension set --publisher Microsoft.Azure.ActiveDirectory --name AADLoginForWindows --resource-group ${azurerm_resource_group.managementRG.name} --vm-name myWinVm"
}
The provisioner block is inside my azurerm_virtual_machine resource. I may take another look at getting it to work properly at some point, but this gets the issue unblocked for now.
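For context, a sketch of that placement (the resource label and the omitted VM arguments are placeholders):
resource "azurerm_virtual_machine" "winvm" {
  name = "myWinVm"
  # ... size, storage, OS profile, network interface, etc. ...

  # Runs once on the machine running Terraform after the VM is created and
  # adds the extension via the Azure CLI.
  provisioner "local-exec" {
    command = "az vm extension set --publisher Microsoft.Azure.ActiveDirectory --name AADLoginForWindows --resource-group ${azurerm_resource_group.managementRG.name} --vm-name myWinVm"
  }
}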

Issue installing the DSC extension on an Azure VM during deployment using Terraform

I am trying to use the information in this article:
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-template#default-configuration-script
to onboard a VM to Azure Automation at deployment time and apply a configuration.
I am using Terraform to do the deployment; below is the code I am using for the extension:
resource "azurerm_virtual_machine_extension" "cse-dscconfig" {
name = "${var.vm_name}-dscconfig-cse"
location = "${azurerm_resource_group.my_rg.location}"
resource_group_name = "${azurerm_resource_group.my_rg.name}"
virtual_machine_name = "${azurerm_virtual_machine.my_vm.name}"
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.76"
depends_on = ["azurerm_virtual_machine.my_vm"]
settings = <<SETTINGS
{
"configurationArguments": {
"RegistrationUrl": "${var.endpoint}",
"NodeConfigurationName": "VMConfig"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"configurationArguments": {
"registrationKey": {
"userName": "NOT_USED",
"Password": "${var.key}"
}
}
}
PROTECTED_SETTINGS
}
I am getting the RegistrationURL value at execution time by running the command below and passing the value into Terraform:
$endpoint = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).Endpoint
I am getting the Password value at execution time by running the command below and passing the value into Terraform:
$key = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).PrimaryKey
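If the Automation account is also managed in the same Terraform configuration, these two values can instead be read from its exported attributes rather than passed in from PowerShell. A sketch, assuming a resource named azurerm_automation_account.example (the names and account settings are placeholders):
resource "azurerm_automation_account" "example" {
  name                = "example-automation"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku_name            = "Basic"
}

# The DSC registration endpoint and key are exported by the resource:
#   RegistrationUrl -> azurerm_automation_account.example.dsc_server_endpoint
#   Password        -> azurerm_automation_account.example.dsc_primary_access_key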
I can tell from the logs on the VM that the extension is getting installed but never registers with the Automation Account.
Figured out what the problem was. The documentation is thin on details in some areas, so it really was by trial and error that I discovered what was causing the problem. I had the wrong value in the NodeConfigurationName property. What the documentation says about this property: "Specifies the node configuration in the Automation account to assign to the node." Not having much experience with DSC, I interpreted this to mean the name of the configuration as seen in the Configurations section of the State configuration (DSC) blade of the Automation account in the Azure portal.
What the NodeConfigurationName property is really referring to is the node definition inside the configuration, and it should be in the format ConfigurationName.NodeName. As an example, the name of my configuration is VMConfig, and in the configuration source I have a Node block defined called localhost. So the value of the NodeConfigurationName property should be VMConfig.localhost.
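A sketch of the corrected settings block from the question, with the rest of the resource unchanged:
  settings = <<SETTINGS
    {
      "configurationArguments": {
        "RegistrationUrl": "${var.endpoint}",
        "NodeConfigurationName": "VMConfig.localhost"
      }
    }
SETTINGS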
