Getting computername in Azure Automation DSC - dsc

I'm trying to use an Azure Automation pull server to apply a DSC configuration to a VM. Normally you can get the name of the current machine from the environment variable $env:COMPUTERNAME, for example:
xComputer JoinDomain
{
    Name       = $env:COMPUTERNAME
    DomainName = $ConfigurationData.NonNodeData.DomainDetails.DomainName
    Credential = $domainAdminCredential
}
But when using Azure Automation, $env:COMPUTERNAME always seems to return CLIENT regardless of the actual machine name. What is the recommended approach to dynamically get the name of the current VM within the DSC configuration when using Azure Automation?
Thanks in advance.
Best regards,
Thomas

In summary, a DSC configuration is applied as follows:
Author the configuration in PowerShell
Compile the configuration - this generates a .mof file
Deliver the .mof file to the Local Configuration Manager (LCM), which enacts it
The code you specified actually resolves the computer name in step #2. Note that step #2 can happen on a different computer from the one where you want to apply the configuration; in this case it happens on the Azure Automation service side.
Unfortunately, at the moment there is no way to obtain the name of the computer on which the configuration is executing unless you choose to use the Script resource. So, in summary, you can either specify the computer name via configuration data or do the domain join using a Script resource.
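For illustration only (this is not part of the answer above), here is a minimal Script resource sketch showing why that approach works: the script blocks are only evaluated on the node when the LCM enacts the configuration, so $env:COMPUTERNAME resolves to the real VM name there rather than on the compilation service. The file path is just a placeholder.
configuration ShowLocalName
{
    Node 'localhost'
    {
        Script WriteLocalName
        {
            GetScript  = { @{ Result = $env:COMPUTERNAME } }
            TestScript = { Test-Path 'C:\DscNodeName.txt' }
            SetScript  = {
                # Evaluated on the node at apply time, not at compile time in Azure Automation.
                Set-Content -Path 'C:\DscNodeName.txt' -Value $env:COMPUTERNAME
            }
        }
    }
}
A real domain-join Script resource would follow the same pattern, with the join logic (e.g. Add-Computer) inside SetScript.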

To piggy-back on @NanaLakshaman's answer, let's parameterize this configuration.
For the sake of ease of understanding, I'm going to pretend that you're only configuring the computer name and domain join.
configuration DomainJoin
{
    param
    (
        [string[]]$NodeName = 'localhost',
        [string]$DomainName,
        [PSCredential]$Credential
    )

    # Import the required DSC resources
    Import-DscResource -ModuleName xComputerManagement

    Node $NodeName
    {
        # Configuration block
        xComputer JoinDomain
        {
            Name       = $NodeName
            DomainName = $DomainName
            Credential = $Credential
        }
    }
}
Now, compile it into memory by hitting F5 or running the script (in Azure Automation, you'd be running the script). Then call the configuration like you would a function to generate the new desired state document. This is where you can specify the local computer name.
DomainJoin -NodeName $env:COMPUTERNAME -DomainName SomeDomain -Credential (Get-Credential)
This will create a new .mof file under a folder named after the configuration (.\DomainJoin), which you can then apply using:
Start-DscConfiguration -Path .\DomainJoin -Wait -Force -Verbose

Another option is to provide the computer name in the configuration data.
An example of how to supply configuration data in Azure Automation is here, along with an example of how to use the NodeName in a resource:
https://azure.microsoft.com/en-us/documentation/articles/automation-dsc-compile/
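A rough sketch of that approach (the resource group, Automation account, and DomainJoin configuration names are placeholders, and the cmdlet shown is the current Az module equivalent of the one in the linked article):
$configData = @{
    AllNodes = @(
        @{ NodeName = 'MyVmName' }   # the real VM name, not 'localhost'
    )
}

# Inside the configuration, resources can then refer to $Node.NodeName, e.g.
#   Node $AllNodes.NodeName { xComputer JoinDomain { Name = $Node.NodeName; ... } }

Start-AzAutomationDscCompilationJob -ResourceGroupName 'MyRg' `
    -AutomationAccountName 'MyAutomationAccount' `
    -ConfigurationName 'DomainJoin' `
    -ConfigurationData $configData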
Also, the particular use you mention should not require the name to be supplied. I have filed an issue for that here:
https://github.com/PowerShell/xComputerManagement/issues/29

Related

Microsoft Azure: New-AzResourceGroupDeployment : A parameter cannot be found that matches parameter name 'location'

I'm a beginner at Microsoft Azure so please bear with me. I'm following this tutorial on deploying Bicep templates with parameters, and my Bicep file is exactly the same as the one in the tutorial. However, when I attempt to deploy it I get the following error:
New-AzResourceGroupDeployment : A parameter cannot be found that matches parameter name 'location'.
The location parameter definitely exists. I'm deploying with the following command:
New-AzResourceGroupDeployment -ResourceGroupName ResourceGroup -TemplateFile c:\Users\Name\Desktop\files\azure\testing\test.bicep -location region -storagename storageaccountname -storagetype Standard_LRS -WhatIf
Any help would be appreciated!
It looks like the tutorial contains an error. In the official documentation, there is no location parameter in the New-AzResourceGroupDeployment cmdlet.
Also, you have already specified a resource group, and the resources you describe with bicep contain a location. So the location parameter makes no sense here - just leave it out!
Note that you can also deploy your bicep files using the Azure CLI. See Deploy local Bicep file
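For example, a local Bicep deployment with the CLI could look like this (the resource group and parameter values mirror the question; the parameter names are assumptions about the template):
az deployment group create `
    --resource-group ResourceGroup `
    --template-file .\test.bicep `
    --parameters location=westeurope storagename=mystorageacct001 storagetype=Standard_LRS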
A way to avoid having to pass in the location for a resource, if all of your resources live in the same location, is to make use of a CLI default (see the sketch below).
As long as you have run that command, the default applies to all Bicep files you deploy with the CLI configured that way.
This means you don't need to provide the location with every CLI command for Bicep.
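The command itself is not shown above; presumably it is the Azure CLI defaults setting. A minimal sketch, assuming az configure and example values:
# Set once; later deployments run from this CLI profile can omit these values.
az configure --defaults location=westeurope group=ResourceGroup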
I followed this tutorial series and found it fantastic:
Beginner
Intermediate
Advanced
What the New-AzResourceGroupDeployment command allows you to do is supply template parameter values via PowerShell parameters. So while the command has its own parameters, and location is not one of them, any extra parameters supplied to the command are passed through as template parameters. If the template does not have a parameter named location (for example), you'll see that error.
That error is usually pretty accurate, so the template may very well not have a parameter named location. Check to make sure the file c:\Users\Name\Desktop\files\azure\testing\test.bicep has a location param, and that you've saved the file since changing it (I forget that often).
If that doesn't unblock, share your file/code and that may help debug.
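As an illustration of the dynamic-parameter behaviour described above (the template content shown here is an assumption, not the poster's actual file):
# test.bicep must declare the parameters being passed on the command line, e.g.:
#   param location string
#   param storagename string
#   param storagetype string
# Any parameters beyond the cmdlet's own (like -location below) are forwarded to the template.
New-AzResourceGroupDeployment -ResourceGroupName ResourceGroup `
    -TemplateFile c:\Users\Name\Desktop\files\azure\testing\test.bicep `
    -location westeurope -storagename mystorageaccount -storagetype Standard_LRS -WhatIf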

[Azure Terraform]: Create Start/Stop VM Solution

I am using Terraform to create an Automation Account in Azure.
The following resource in Azure provider does the job: azurerm_automation_account.
Ok. So I got my AA created... here is when problems arise.
"Run As" account: there seems to be a way to create it from Terraform... but the process is cumbersome. I have lost hope, and will probably resort to enable it manually from Azure portal (it is just one click)... but it will brake my automation pipeline :(
"Start/Stop VM Solution": I need the powershell runbooks in this solution to start-stop VMs according to a given schedule. There is a resource in Azure provider called "azurerm_automation_runbook". It has 2 useful arguments to reference runbook scripts:
"content": with it I could "load" a local powershell script content. I know this would work (I could manually download the .ps1 script used by "Start/Stop VM Solution" and use "content" to load it), but I would be missing any fixes/updates made by Microsoft in its code)
"publish_content_link": by which I could point to the URI of a given powershell runbook. I have looked in the "Runbook Gallery" for the runbooks contained in the "Start/Stop VM Solution" (not found them). Anyone had any luck with this? A different approach could be to "create" the "Start/Stop VM Solution" from a Terraform script (this will automatically populate the desired runbooks in my Automation Account)... but not sure if this would be possible.
Thanks in advance.
For point 1: I also found it very challenging, and while things have improved lately, there still doesn't seem to be an easy, straightforward way of creating the Run As account. I eventually resorted to creating it manually from the Azure portal, but below are potential areas you can explore:
I'm not sure if you've considered using the external data source from Terraform to execute the PowerShell script from Microsoft. It's still a pain because of the last step, where you have to authenticate manually, but it brings you closer to having a blueprint of your environment. I'm also not sure how it would behave if the Terraform script were run a second time.
For point 2: Could you confirm that the script you want to use is a Powershell script and not a Powershell Workflow script? Also could you please elaborate on this approach (I have a feeling that might be the best approach):
A different approach could be to "create" the "Start/Stop VM Solution" from a Terraform script (this will automatically populate the desired runbooks in my Automation Account)
If you look at the Runbooks Gallery, you'll see most of these PowerShell scripts have not been updated for many years and are still working fine. If this will be used in a production environment, it would be better if you have control over the changes and can update them at your convenience. If you want to get the URI, you can just click on 'View Source Project' and it will lead you to the GitHub repo, e.g. for the runbook Stop-Start-AzureVM (Scheduled VM Shutdown/Startup).
You'll also notice most of the scripts are submitted by external parties. If you link to a URI that's maintained by someone else and that person publishes malicious code, or even accidentally breaks the code, that's not desirable. But again, I'm not sure about the extent of your automation (e.g. whether you expect to execute the Terraform script once a month to ensure the runbook is up to date).
If I get the scripts from somewhere, I'll validate them prior to using them in my environment.
data "local_file" "start_vm_parallel" {
filename = "./scripts/start-vm-parallel.ps1"
}
resource "azurerm_automation_runbook" "start_vm_parallel" {
name = local.NAME
location = local.REGION
resource_group_name = local.RG
automation_account_name = azurerm_automation_account.automation_prod.name
log_verbose = "true"
log_progress = "true"
description = "This runbook starts VMs in parallel based on a matching tag value"
runbook_type = "PowerShellWorkflow"
content = data.local_file.start_vm_parallel.content
publish_content_link {
uri = "https://path.to.script/script.ps1"
}
}
If you're using a PowerShell Workflow, you need to make sure that the runbook name matches the workflow name inside the script.
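For example, if the runbook's name (local.NAME above) were Start-VM-Parallel, the script it loads would need to declare a workflow with that exact name. A minimal sketch, not the actual gallery script:
# The workflow name must match the Azure Automation runbook name exactly.
workflow Start-VM-Parallel
{
    param
    (
        [string]$TagName,
        [string]$TagValue
    )

    # Start all VMs carrying the matching tag, in parallel.
    # (Body omitted; the real gallery script contains the implementation.)
}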
One last thing to remember before you even start using your runbooks is to update the modules, by creating a 'module update' runbook (the Azure Automation team provides one) and running it on a schedule, once a month.

Passing customdata to Operating system option of azure vmss - Terraform

When we create a virtual machine scale set in Azure, there is an option for passing custom data under Operating System, like below.
How can I pass the script there using Terraform? There is an option custom_data, which seems to be used for newly created machines from Terraform, but the script is not getting stored there. How do I fill this with the script I have, using Terraform? Any help on this would be appreciated.
According to the official documentation, custom_data can only be passed to the Azure VM at provisioning time.
Custom data is only made available to the VM during first boot/initial
setup, we call this 'provisioning'. Provisioning is the process where
VM Create parameters (for example, hostname, username, password,
certificates, custom data, keys etc.) are made available to the VM and
a provisioning agent processes them, such as the Linux Agent and
cloud-init.
Where the custom data is saved differs depending on the OS.
Windows
Custom data is placed in %SYSTEMDRIVE%\AzureData\CustomData.bin as a binary file, but it is not processed.
Linux
Custom data is passed to the VM via the ovf-env.xml file, which is copied to the /var/lib/waagent directory during provisioning. Newer versions of the Microsoft Azure Linux Agent will also copy the base64-encoded data to /var/lib/waagent/CustomData as well for convenience.
To upload custom_data from your local path to your Azure VM with terraform, you can use filebase64 Function.
For example, there is a test.sh script or cloud-init.txt file under the path where your main.tf or terraform.exe file exists.
custom_data = filebase64("${path.module}/test.sh")
If you are looking to execute scripts after the VMSS is created, you could look at the custom script extension and this sample.

How to run a remote command (powershell/bash) against an existing Azure VM in Azure Data Factory V2?

I've been trying to find a way to run a simple command against one of my existing Azure VMs using Azure Data Factory V2.
Options so far:
Custom Activity/Azure Batch won't let me add existing VMs to the pool
Azure Functions - I have not played with this, and I have not found any documentation on doing this with Azure Functions.
Azure Cloud Shell - I've tried this using the browser UI and it works; however, I cannot find a way of doing this via ADF V2.
The use case is the following:
There are a few tasks running locally (on an Azure VM) in Task Scheduler that I'd like to orchestrate using ADF, as everything else is in ADF; these tasks are usually Python applications that restore a SQL backup and/or purge some folders.
i.e. sqldb-restore -r myDatabase
where sqldb-restore is a command that is recognized locally after installing my local Python library. Unfortunately the Python app needs to live locally on the VM.
Any suggestions? Thanks.
Thanks to @martin-esteban-zurita; his answer helped me get to what I needed, and this was a beautiful and fun experiment.
It is important to understand that Azure Automation is used for many things regarding resource orchestration in Azure (VMs, services, DevOps), and this automation can be done with PowerShell and/or Python.
In this particular case I did not need to modify/maintain/orchestrate any Azure resource; I needed to actually run a Bash/PowerShell command remotely on one of my existing VMs, where I have multiple PowerShell/Bash commands running recurrently in Task Scheduler.
"Task Scheduler" was adding unnecessary overhead to my data pipelines because it was unable to talk to ADF.
In addition, Azure Automation natively only runs PowerShell/Python commands in the Azure cloud, which is very useful for orchestrating resources (turning Azure VMs on/off, adding/removing permissions from other Azure services, running maintenance or purge processes, etc.), but I was still unable to run commands locally on an existing VM. This is where the Hybrid Runbook Worker came into the picture: a Hybrid Worker group lets a runbook execute directly on the machine where the worker is installed.
These are the steps to accomplish this use case.
1. Create an Azure Automation Account
2. Install the Windows Hybrid Worker on my existing VM. In my case it was tricky because my proxy was giving me some errors; I ended up downloading the NuGet package and installing it manually.
.\New-OnPremiseHybridWorker.ps1 -AutomationAccountName <NameofAutomationAccount> -AAResourceGroupName <NameofResourceGroup> `
    -OMSResourceGroupName <NameofOMSResourceGroup> -HybridGroupName <NameofHRWGroup> `
    -SubscriptionId <AzureSubscriptionId> -WorkspaceName <NameOfLogAnalyticsWorkspace>
Keep in mind that in the above command you will need to supply your own parameter values. The only parameter that does not have to exist already, and will be created, is HybridGroupName; this defines the name of the Hybrid Worker group.
3. Create a PowerShell Runbook
[CmdletBinding()]
Param
(
    [object]$WebhookData # this parameter name needs to be WebhookData, otherwise the webhook does not work as expected.
)

$VerbosePreference = 'continue'

#region Verify whether the runbook was started from the webhook.
# If the runbook was called from a webhook, WebhookData will not be null.
if ($WebhookData) {
    # Collect properties of WebhookData
    $WebhookName = $WebhookData.WebhookName
    # $WebhookHeaders = $WebhookData.RequestHeader
    $WebhookBody = $WebhookData.RequestBody

    # Collect individual headers. Input converted from JSON.
    $Input = (ConvertFrom-Json -InputObject $WebhookBody)
    # Write-Verbose "WebhookBody: $Input"
    # Write-Output -InputObject ('Runbook started from webhook {0}.' -f $WebhookName)
}
else {
    Write-Error -Message 'Runbook was not started from Webhook' -ErrorAction Stop
}
#endregion

# This is where I run the commands that were in Task Scheduler

# This callback is extremely important for ADF
$callBackUri = $Input.callBackUri
Invoke-WebRequest -Uri $callBackUri -Method POST
4. Create a runbook webhook pointing to the Hybrid Worker group on the VM
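A sketch of creating such a webhook with PowerShell (all names are placeholders; -RunOn is what targets the Hybrid Worker group instead of an Azure sandbox):
$webhook = New-AzAutomationWebhook -ResourceGroupName 'MyRg' `
    -AutomationAccountName 'MyAutomationAccount' `
    -RunbookName 'MyTaskRunbook' `
    -Name 'AdfWebhook' `
    -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) `
    -RunOn 'NameofHRWGroup' `
    -Force

# Copy the URI now; it cannot be retrieved again later. Paste it into the ADF webhook activity.
$webhook.WebhookURI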
5. Create a webhook activity in ADF where the above PowerShell runbook will be called via a POST method
Important note: when I created the webhook activity, it was timing out after 10 minutes (the default). I then noticed in the Azure Automation account that I was actually getting input data (WEBHOOKDATA) containing a JSON structure with the following elements:
WebhookName
RequestBody (This one contains whatever you add in the Body plus a default element called callBackUri)
All I had to do was to invoke the callBackUri from Azure Automation. And this is why in the PowerShell runbook code I added Invoke-WebRequest -Uri $callBackUri -Method POST. With this, ADF was succeeding/failing instead of timing out.
There are many other details that I struggled with when installing the hybrid worker in my VM but those are more specific to your environment/company.
This looks like a use case that is supported with Azure Automation, using a hybrid worker. Try reading here: https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker
You can call runbooks with webhooks in ADFv2, using the web activity.
Hope this helped!

Can a script update the Identity tab fields of Application Pool properties in IIS 6.0+

I am a developer and I have arrived at a solution to a webservice authentication problem that involved ensuring Kerberos was maintained because of multiple network hops. In short:
A separate application pool for the virtual directory hosting the webservice was established
The identity of this application pool is set to a configurable account (a DOMAIN\username that will remain constant, but whose strong password is changed every 90 days or so; at any given point in time, the password is known or obtainable by our system admin).
Is there a scripting language that could be used to set up a new application pool for this application and then set the identity as described (rather than manual data entry into property pages in IIS)?
I think our system admin knows a little about PowerShell, but can someone help me offer him something to use? (He will need to repeat this on 2 more servers as the app is rolled out.) Thanks.
You can use a PowerShell script such as this:
# Requires the IIS WebAdministration module
Import-Module WebAdministration

# Create the application pool, then configure it to run under a specific domain account
$appPool = New-WebAppPool -Name "MyAppPool"
$appPool.processModel.userName = "domain\username"
$appPool.processModel.password = "ReallyStrongPassword"
$appPool.processModel.identityType = "SpecificUser"
$appPool | Set-Item
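If the web application or virtual directory also needs to run under the new pool, something along these lines would follow (the site and application names here are examples, not from the question):
# Point the application at the new pool (adjust the IIS path to your site/application).
Set-ItemProperty 'IIS:\Sites\Default Web Site\MyWebService' -Name applicationPool -Value 'MyAppPool'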
