How to run a remote command (powershell/bash) against an existing Azure VM in Azure Data Factory V2? - azure

I've been trying to find a way to run a simple command against one of my existing Azure VMs using Azure Data Factory V2.
Options so far:
Custom Activity/Azure Batch won't let me add existing VMs to the pool
Azure Functions - I have not played with this, and I have not found any documentation on doing this with Azure Functions.
Azure Cloud Shell - I've tried this using the browser UI and it works; however, I cannot find a way to do this from ADF V2.
The use case is the following:
There are a few tasks running locally (on an Azure VM) in Task Scheduler that I'd like to orchestrate with ADF, since everything else is already in ADF. These tasks are usually Python applications that restore a SQL backup and/or purge some folders.
e.g. sqldb-restore -r myDatabase
where sqldb-restore is a command that is recognized locally after installing my local Python library. Unfortunately the Python app needs to live locally on the VM.
Any suggestions? Thanks.

Thanks to @martin-esteban-zurita; his answer helped me get to what I needed, and this was a beautiful and fun experiment.
It is important to understand that Azure Automation is used for many resource orchestration tasks in Azure (VMs, services, DevOps), and this automation can be done with PowerShell and/or Python.
In this particular case I did not need to modify/maintain/orchestrate any Azure resource; I needed to actually run a Bash/PowerShell command remotely on one of my existing VMs, where I have multiple PowerShell/Bash commands running recurrently in "Task Scheduler".
"Task Scheduler" was adding unnecessary overhead to my data pipelines because it was unable to talk to ADF.
In addition, Azure Automation natively runs PowerShell/Python runbooks only in the Azure cloud, which is very useful for orchestrating resources (turning Azure VMs on/off, adding/removing permissions on other Azure services, running maintenance or purge processes, etc.), but I was still unable to run commands locally on an existing VM. This is where the Hybrid Runbook Worker came into the picture: a Hybrid Worker group lets runbooks execute directly on machines you register with it, such as my existing VM.
These are the steps to accomplish this use case.
1. Create an Azure Automation Account
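If you prefer to script this step as well, here is a minimal sketch with the Az module (the resource group, account name and location below are placeholders):
# Assumes Az.Automation is installed and you are logged in with Connect-AzAccount
New-AzAutomationAccount -ResourceGroupName 'my-rg' `
                        -Name 'my-automation-account' `
                        -Location 'eastus'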
2. Install the Windows Hybrid Worker on my existing VM. In my case it was tricky because my proxy was giving me some errors. I ended up downloading the NuGet package and manually installing it.
.\New-OnPremiseHybridWorker.ps1 -AutomationAccountName <NameofAutomationAccount> `
                                -AAResourceGroupName <NameofResourceGroup> `
                                -OMSResourceGroupName <NameofOMSResourceGroup> `
                                -HybridGroupName <NameofHRWGroup> `
                                -SubscriptionId <AzureSubscriptionId> `
                                -WorkspaceName <NameOfLogAnalyticsWorkspace>
Keep in mind that in the above command you will need to supply your own parameter values; the only parameter that does not have to exist beforehand is HybridGroupName, which will be created and defines the name of the Hybrid Worker group.
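Once the script finishes, you can verify that the worker group was registered with something like the following (account and resource group names are placeholders):
# Lists the Hybrid Worker groups known to the Automation account
Get-AzAutomationHybridWorkerGroup -ResourceGroupName 'my-rg' `
                                  -AutomationAccountName 'my-automation-account'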
3. Create a PowerShell Runbook
[CmdletBinding()]
Param
(
    [object]$WebhookData  # this parameter needs to be called WebhookData, otherwise the webhook does not work as expected
)

$VerbosePreference = 'continue'

#region Verify if Runbook is started from Webhook.
# If runbook was called from Webhook, WebhookData will not be null.
if ($WebhookData) {
    # Collect properties of WebhookData
    $WebhookName = $WebhookData.WebhookName
    # $WebhookHeaders = $WebhookData.RequestHeader
    $WebhookBody = $WebhookData.RequestBody

    # Collect individual headers. Input converted from JSON.
    $Input = (ConvertFrom-Json -InputObject $WebhookBody)
    # Write-Verbose "WebhookBody: $Input"
    # Write-Output -InputObject ('Runbook started from webhook {0} by {1}.' -f $WebhookName, $From)
}
else {
    Write-Error -Message 'Runbook was not started from Webhook' -ErrorAction Stop
}
#endregion

# This is where I run the commands that were in Task Scheduler

# This is extremely important for ADF: call back so the Webhook activity completes
$callBackUri = $Input.callBackUri
Invoke-WebRequest -Uri $callBackUri -Method POST
4. Create a Runbook Webhook that runs on the Hybrid Worker group (i.e. on the Hybrid Worker's VM)
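This can be done in the portal, or with roughly the following sketch (all names are placeholders; the important part is -RunOn, which makes the webhook execute the runbook on the Hybrid Worker group instead of in Azure):
# Create a webhook for the runbook and target the Hybrid Worker group.
# Save the returned URI immediately - it cannot be retrieved later.
$webhook = New-AzAutomationWebhook -Name 'adf-webhook' `
                                   -RunbookName 'My-Runbook' `
                                   -IsEnabled $true `
                                   -ExpiryTime (Get-Date).AddYears(1) `
                                   -RunOn 'NameofHRWGroup' `
                                   -ResourceGroupName 'my-rg' `
                                   -AutomationAccountName 'my-automation-account' `
                                   -Force
$webhook.WebhookURI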
5. Create a webhook activity in ADF where the above PowerShell runbook will be called via a POST method
Important note: when I first created the webhook activity it kept timing out after 10 minutes (the default). I then noticed in the Azure Automation account that I was actually getting INPUT data (WEBHOOKDATA) containing a JSON structure with the following elements:
WebhookName
RequestBody (This one contains whatever you add in the Body plus a default element called callBackUri)
All I had to do was invoke the callBackUri from Azure Automation; this is why I added Invoke-WebRequest -Uri $callBackUri -Method POST in the PowerShell runbook code. With this, the ADF activity succeeds/fails instead of timing out.
There are many other details that I struggled with when installing the hybrid worker in my VM but those are more specific to your environment/company.

This looks like a use case that is supported with Azure Automation, using a hybrid worker. Try reading here: https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker
You can call runbooks with webhooks in ADFv2, using the web activity.
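For a quick smoke test outside of ADF, the webhook can also be called with a plain POST (the URL is a placeholder; whatever you put in the body shows up as $WebhookData.RequestBody in the runbook):
Invoke-RestMethod -Method Post `
                  -Uri 'https://<your-webhook-url>' `
                  -Body (@{ message = 'test' } | ConvertTo-Json) `
                  -ContentType 'application/json'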
Hope this helped!

Related

[Azure Terraform]: Create Start/Stop VM Solution

I am using Terraform to create an Automation Account in Azure.
The following resource in Azure provider does the job: azurerm_automation_account.
Ok. So I got my AA created... here is when problems arise.
"Run As" account: there seems to be a way to create it from Terraform... but the process is cumbersome. I have lost hope, and will probably resort to enable it manually from Azure portal (it is just one click)... but it will brake my automation pipeline :(
"Start/Stop VM Solution": I need the powershell runbooks in this solution to start-stop VMs according to a given schedule. There is a resource in Azure provider called "azurerm_automation_runbook". It has 2 useful arguments to reference runbook scripts:
"content": with it I could "load" a local powershell script content. I know this would work (I could manually download the .ps1 script used by "Start/Stop VM Solution" and use "content" to load it), but I would be missing any fixes/updates made by Microsoft in its code)
"publish_content_link": by which I could point to the URI of a given powershell runbook. I have looked in the "Runbook Gallery" for the runbooks contained in the "Start/Stop VM Solution" (not found them). Anyone had any luck with this? A different approach could be to "create" the "Start/Stop VM Solution" from a Terraform script (this will automatically populate the desired runbooks in my Automation Account)... but not sure if this would be possible.
Thanks in advance.
For point 1: I also found it very challenging, and while things have improved lately, there still doesn't seem to be an easy, straightforward way of creating the Run As account. I eventually resorted to creating it manually from the Azure Portal, but below are potential areas you can explore:
I'm not sure if you've considered using the external data source from terraform to execute the Powershell script from Microsoft. It's still a pain because of the last step where you have to authenticate manually, but it still brings you closer to having a blueprint of your environment. Although I'm not sure how it would behave if running this Terraform script a second time.
For point 2: Could you confirm that the script you want to use is a Powershell script and not a Powershell Workflow script? Also could you please elaborate on this approach (I have a feeling that might be the best approach):
A different approach could be to "create" the "Start/Stop VM Solution" from a Terraform script (this will automatically populate the desired runbooks in my Automation Account)
If you look at the Runbooks Gallery, you'll see most of these PowerShell scripts have not been updated for many years and are still working fine. If this will be used in a production environment, it would be better if you have control over the changes and update them at your convenience. If you want to get the URI, you can just click on 'View Source Project' and it will lead you to the GitHub repo. E.g. for the Runbook Stop-Start-AzureVM (Scheduled VM Shutdown/Startup).
You'll also notice most of the scripts are submitted by external parties. If you link to a URI that's maintained by someone else and that person publishes malicious code in there, or even accidentally messes up the code, that's not desirable. But again, I'm not sure as to the extent of your automation (e.g. whether you expect to execute the Terraform script once a month to ensure the Runbook is up to date).
If I get the scripts from somewhere, I'll validate them prior to using them in my environment.
data "local_file" "start_vm_parallel" {
filename = "./scripts/start-vm-parallel.ps1"
}
resource "azurerm_automation_runbook" "start_vm_parallel" {
name = local.NAME
location = local.REGION
resource_group_name = local.RG
automation_account_name = azurerm_automation_account.automation_prod.name
log_verbose = "true"
log_progress = "true"
description = "This runbook starts VMs in parallel based on a matching tag value"
runbook_type = "PowerShellWorkflow"
content = data.local_file.start_vm_parallel.content
publish_content_link {
uri = "https://path.to.script/script.ps1"
}
}
If you're using a Powershell Workflow, you need to make sure that the Runbook name matches the workflow name inside the script.
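For example, if the runbook is (hypothetically) named Start-VmParallel, the script it loads would have to declare a workflow with exactly that name:
# The workflow name must match the runbook name in the Automation account
workflow Start-VmParallel {
    Write-Output "Starting tagged VMs in parallel..."
}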
One last thing to remember before you even start using your Runbooks is to update the Automation account's modules, by creating a 'modules update' Runbook from the one provided by the Azure Automation team and running it on a schedule, e.g. once a month.
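A rough sketch of wiring that monthly schedule up with the Az cmdlets (the runbook, schedule, account and resource group names are hypothetical):
# Create a monthly schedule and link it to the module-update runbook
New-AzAutomationSchedule -Name 'monthly-module-update' `
                         -StartTime (Get-Date).AddDays(1) `
                         -MonthInterval 1 `
                         -DaysOfMonth One `
                         -ResourceGroupName 'my-rg' `
                         -AutomationAccountName 'my-automation-account'

Register-AzAutomationScheduledRunbook -RunbookName 'Update-AutomationModules' `
                                      -ScheduleName 'monthly-module-update' `
                                      -ResourceGroupName 'my-rg' `
                                      -AutomationAccountName 'my-automation-account'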

Request timing out when exporting Azure resource group in powershell

I want to export an ARM template for a resource group in Windows Azure. I'm using the Azure Powershell module.
Whenever I try to export the resource group using Export-AzResourceGroup, the cmdlet fails with the following error:
Export-AzResourceGroup : Operation failed because a request timed out.
I have inspected the web traffic with Fiddler and I can see that the actual HTTP call to Azure is completing successfully, it's just taking a long time. So it's not a matter of e.g. extending the timeout on the ServicePointManager.
Is there any solution or work-around to stop this cmdlet timing out?
This issue may be occurring due to an older version of the Az PowerShell module. Please update your Az module to the latest version and try again.
Also make sure you have write permissions in the directory where you are trying to create the JSON file.
Meanwhile, you can create the template from the portal just to check if the json file is getting created successfully:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/export-template-portal#export-template-from-a-resource-group
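A minimal update-and-retry sketch, with placeholder names and paths:
# Update the Az module, then retry the export to a directory you can write to
Update-Module -Name Az -Force
Export-AzResourceGroup -ResourceGroupName 'my-resource-group' `
                       -Path 'C:\temp\my-resource-group.json' `
                       -Force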

Is there a way to trigger an "http trigger" Azure Function after deploying an ARM template?

I have an ARM template that deploys a Kubernetes cluster and an HTTP-trigger function app. Inside the HTTP-trigger function I have a Kubernetes client that does some work; if I trigger the function manually, it works fine. But I need this trigger to run automatically after the ARM deployment has finished.
The HTTP request that triggers the Azure Function may be sent either by ARM itself or by whatever orchestrator you use to execute the template (e.g. Azure DevOps pipeline). Terraform can execute scripts directly; unless you really want to use ARM, it might be an option.
If you want to go with ARM, there are at least three options:
Make the Azure Function return an "empty" ARM template and trigger it by a request for a nested deployment template. https://blog.cloudtrooper.net/2017/04/04/run-azure-functions-from-your-quickstart-arm-templates/
Use Azure Container Instances to launch an instance of a container image as a stand-alone container in Azure and execute an arbitrary command inside. https://samcogan.com/run-scripts-in-arm-deployments-with-aci/
Use the deployment scripts resource (Microsoft.Resources/deploymentScripts). It is basically built-in support for the approach using Azure Container Instances. See the official docs or an older article from the time the feature was still in preview that I still like: https://dev.to/omiossec/arm-template-what-s-new-for-2020-4kli#deployementsscripts-resource-provider
In any case, you will need to properly set up the dependsOn references so that the request is sent at the right time. Or better, use Bicep that mostly takes care of the dependencies implicitly, if used right.
Assuming you are using PowerShell to deploy your ARM template, you can use PowerShell to trigger your Azure HTTP-trigger function right after the deployment:
Invoke-WebRequest -Uri <function_uri> -Method POST
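For example, here is a rough end-to-end sketch with placeholder names, running the deployment and then calling the function once it completes:
# Deploy the ARM template, then trigger the HTTP function when the deployment is done
New-AzResourceGroupDeployment -ResourceGroupName 'my-rg' `
                              -TemplateFile '.\azuredeploy.json'

# Append ?code=<function key> if the function does not allow anonymous access
Invoke-WebRequest -Uri 'https://<your-function-app>.azurewebsites.net/api/<function-name>' `
                  -Method POST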
Hope this helps!

Can custom console application be executed in Azure (PowerShell) Runbook?

Can I use my own/custom console application in Azure Runbook (PowerShell or any other)?
When I try to run my own console application (after retrieving it from Azure Storage), it fails.
When I try to execute it simply by:
.\myapp.exe
I get:
Program 'myapp.exe' failed to run: The operation was canceled by the user
When I use System.Diagnostics.Process:
$process = New-Object System.Diagnostics.Process
$process.StartInfo.FileName = $path
$process.StartInfo.UseShellExecute = $False
$process.Start()
I get a more meaningful error:
Exception calling "Start" with "0" argument(s): "This program is blocked by group policy. For more information, contact
your system administrator"
Is there any settings in Azure Automation, I can toggle, to allow executing custom applications? Or is it simply not possible in the restricted Runbook environment?
Thanks for reaching out! Unfortunately, running an .exe inside an Azure Automation runbook is currently not supported. And yes, you can go with Azure WebJobs. One other customer recently reached out with a similar ask and solved their issue by leveraging Azure WebJobs. For clarification, you may refer to this MSDN thread. Hope this helps you.
Cheers!
Unfortunately, it is not supported by Azure Automation runbooks for now. There is a feedback item that the Automation PG team replied to, and there is no update on it.
You can, however:
Run the console app as an Azure WebJob on App Service and call it remotely via the SCM (Kudu) endpoint, or
Compile the console application as a PowerShell cmdlet:
using System.Management.Automation;

namespace MyApp
{
    [Cmdlet(VerbsCommunications.Send, "RunApplication")]
    public class RunApplicationCommand : Cmdlet
    {
        protected override void ProcessRecord()
        {
            // App code goes here
        }
    }
}
Attach the compiled DLL to a PowerShell module
Deploy the PowerShell module to the Automation account using Modules > Import
Call the cmdlet from a runbook
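The last step would then look something like this inside the runbook (the module name follows the sample above; Send-RunApplication is the cmdlet name produced by the [Cmdlet(VerbsCommunications.Send, "RunApplication")] attribute):
# Import the custom module deployed to the Automation account and call the cmdlet
Import-Module MyApp
Send-RunApplication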

What is a method for checking where an Azure Automation runbook (PowerShell) is running?

I'm writing a set of PowerShell runbooks in Azure Automation. Some of them run on-premises (ala Hybrid Runbook Worker) and some in Azure directly.
I'd like to immediately error and exit any hybrid scripts if they are accidentally kicked off in Azure (since it's the default selection when using the portal).
I thought I'd check by getting the results of Get-AutomationConnection -Name AzureRunAsConnection, but it takes about 4 seconds to respond and it also returns values when run via a Hybrid Worker. Does anyone know of a better/quicker method?
Thanks!
Update: A one-liner that is crude but seems to work is:
Try {$AmIInAzure = Get-AzureRmEnvironment AzureCloud -ErrorAction Stop;Throw "This runbook must be run on-premises via Hybrid Runbook Worker. Exiting."} Catch {}
The variable $AmIInAzure is simply used to hide the output of Get-AzureRMEnvironment, while the Try..Catch is to hide any errors. If this code is run in Azure, it will throw the specified text and the runbook will error out (as desired). If it is run on a hybrid worker, it doesn't do anything (allowing the rest of the runbook to run).
I'm curious if anyone might have a better method.
Update 2: That one-liner doesn't seem to work, as neither throw, exit, nor break will cause the runbook to exit. Still looking for a working method...
You could test using $PSPrivateMetadata
begin {
    if ($null -eq $PSPrivateMetadata) {
        throw "This command can only be run within the context of an Azure Automation Runbook Worker"
    }
}
I had the exact same problem and did not get it to work.
I ended up with another solution: I'm just running this at the top of my runbook, or directly after the param list if you have input parameters.
$checkHybridWorker = hostname
if ($checkHybridWorker -ne "myhybridworkerhostname") {
    Write-Warning "Job must be started from Hybrid worker, exiting."
    Exit 1
}
Not pretty but it works fine.
