Call API when Azure Automation runbook has completed - azure

I want to call an API to trigger an Azure Automation runbook. I believe this can be done with webhooks. When I do so, I get back a 202 response code, which suggests that the request was successfully queued.
Now I'm trying to find out how I can specify a call-back API call that Azure Automation should trigger once it has finished the execution, including the result status (completed, failed). Is this call-back something I should code into the Azure Automation job myself, or is there default functionality available that would allow an API call-back when the runbook completes?
I'm trying to avoid having my client application, which triggered the automation job, continuously poll to see whether the automation job is still running.

First, there is no default functionality that allows an API call-back when the runbook completes.
As you may know, you can approximate this behavior by writing code that checks the job status, or by setting up an alert for when the job completes, but both approaches add latency or require periodic polling.
The best solution I can think of is to put the call-back API request in the runbook itself. For example, wrap your code in a try/catch/finally block and make the call-back request in the finally section, so it runs regardless of the outcome.
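To illustrate the shape of that pattern: an actual Azure Automation runbook would typically be PowerShell or Python, but the control flow looks like this (sketched in C# to match the other examples in this thread; the callback URL and DoWorkAsync are placeholders):
// Sketch only: call-back-in-finally pattern, inside an async method.
// A real runbook would express this in PowerShell or Python.
var status = "Completed";
try
{
    await DoWorkAsync();                 // the runbook's actual work (placeholder)
}
catch (Exception)
{
    status = "Failed";
    throw;                               // keep the job marked as failed
}
finally
{
    using (var http = new HttpClient())
    {
        // the call-back fires whether the work succeeded or failed
        await http.PostAsync($"https://example.com/automation-callback?status={status}", null);
    }
}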
Hope it can help.

Related

Azure Databricks - Use Event Grid

I am currently using an Azure Durable Function (Orchestrator) to orchestrate a process that involves creating a job in Azure Databricks. The Durable Function creates the job using the REST API of Azure Databricks and provides a callback URL. Once the job is created, the orchestrator waits indefinitely for the external event (callback) to be triggered, indicating the completion of the job (callback pattern). The job in Azure Databricks is wrapped in a try:except block to ensure that a status (success/failure) is reported back to the orchestrator no matter the outcome.
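A minimal sketch of that callback pattern, assuming a C# orchestrator; the activity and event names below are illustrative:
// requires Microsoft.Azure.WebJobs.Extensions.DurableTask
[FunctionName("DatabricksOrchestrator")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Activity calls the Databricks REST API to create/run the job and hands it a callback URL
    await context.CallActivityAsync("CreateDatabricksJob", context.InstanceId);

    // Wait for the job to report back; the callback raises this external event on the instance
    string jobStatus = await context.WaitForExternalEvent<string>("DatabricksJobCompleted");

    // React to the reported success/failure
    await context.CallActivityAsync("HandleJobResult", jobStatus);
}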
However, I am concerned about the scenario where the job status turns to Internal Error, and the piece of code is not executed, leaving the orchestrator waiting indefinitely. To ensure reliability, I am considering several solutions:
Setting a timeout on the orchestrator (see the sketch after this list)
Polling: Checking the status of the job every x minutes
Using an event-driven architecture by writing an event to a topic (e.g. Azure Event Grid) and having a separate service subscribe to it
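To illustrate the first option, the wait can be bounded so the orchestrator is never left hanging if the callback never arrives; the timeout value and activity names are illustrative:
try
{
    // Throws TimeoutException if no callback arrives within two hours
    string jobStatus = await context.WaitForExternalEvent<string>(
        "DatabricksJobCompleted", TimeSpan.FromHours(2));
    await context.CallActivityAsync("HandleJobResult", jobStatus);
}
catch (TimeoutException)
{
    // Fall back: poll the Databricks Runs API, raise an alert, or fail the orchestration
    await context.CallActivityAsync("HandleJobTimeout", context.InstanceId);
}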
My question is: can I send events to a topic (Azure Event Grid) when the Databricks job completes (succeeds, fails, errors, every possible outcome) to ensure that the orchestrator is notified and can take appropriate action? Looking at the REST API Jobs 2.1 docs, I can get notified via email or specify a webhook on start, success and failure (a preview feature). Can I enter the topic URL of Event Grid here so that Databricks writes events to it? Docs to manage notification destinations. It's not clear to me. Is there another way in Azure to achieve the same result?
Edit: I've looked into the documentation to find how to manage notification destinations and created a new system notification:
However, when testing the connection:
The request fails:
401: Request must contain one of the following authorization signature: aeg-sas-token, aeg-sas-key. Report '12345678-7678-4ab9-b90f-37aabf1b10b8:7:1/23/2023 6:17:09 PM (UTC)' to our forums for assistance or raise a support ticket.
The same would happen with a POST request from another client (e.g. Postman). Now the question is: how can I provide a token so that Databricks can write events to the topic?
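For reference, Event Grid expects a publisher to send an aeg-sas-key (or aeg-sas-token) header with each POST, which is exactly what the 401 above is complaining about. A minimal sketch of a valid publish call, with a placeholder topic endpoint and key:
// needs System.Net.Http and System.Text; the topic endpoint and access key are placeholders
using (var http = new HttpClient())
{
    http.DefaultRequestHeaders.Add("aeg-sas-key", "<topic-access-key>");

    // Event Grid schema: id, eventType, subject, eventTime, data, dataVersion
    var eventsJson = "[{"
        + "\"id\":\"1\","
        + "\"eventType\":\"Databricks.JobCompleted\","
        + "\"subject\":\"jobs/123\","
        + "\"eventTime\":\"" + DateTime.UtcNow.ToString("o") + "\","
        + "\"data\":{\"result\":\"SUCCESS\"},"
        + "\"dataVersion\":\"1.0\"}]";

    var response = await http.PostAsync(
        "https://<topic-name>.<region>.eventgrid.azure.net/api/events",
        new StringContent(eventsJson, Encoding.UTF8, "application/json"));
    response.EnsureSuccessStatusCode();
}
So whichever client posts to the topic (Databricks or Postman) needs a way to attach that header or a SAS token to the request.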
I've posted this question also here: Webhook Security (Bearer Auth)

Is there a way to get the live logs of a running pipeline using Azure REST API?

I am running a build pipeline in Azure that has multiple tasks, and I have a requirement to get the logs via REST API calls after triggering the pipeline. I used Builds - Get Build Logs, but it lists only the logs of completed tasks, not the log of the task that is currently running. Is there any mechanism available to get ongoing task logs/live logs?
Is there a way to get the live logs of a running pipeline using Azure REST API?
I am afraid there is no such mechanism available to get ongoing task logs/live logs.
As we know, the Representational State Transfer (REST) APIs are service endpoints that support sets of HTTP operations (methods), which provide create, retrieve, update, or delete access to the service's resources.
The task is executed on the agent, and the result of the execution is passed back to Azure DevOps only after the task has completed. So the HTTP operations (methods) only have results to return once a task is finished, and only then can we use the REST API to get them.
So we cannot use the Azure REST API to get ongoing task logs/live logs; this is a limitation of the Azure DevOps design.
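For reference, a minimal sketch of what is possible today: polling the Builds - Get Build Logs endpoint, which returns only the logs of tasks that have already completed. The organization, project, build ID, and PAT are placeholders, and the api-version may differ for your organization:
// needs System.Net.Http, System.Net.Http.Headers and System.Text; inside an async method
using (var http = new HttpClient())
{
    var pat = "<personal-access-token>";
    http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
        "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

    var url = "https://dev.azure.com/<organization>/<project>/_apis/build/builds/<buildId>/logs?api-version=7.0";
    var logsJson = await http.GetStringAsync(url);     // one entry per completed log only
    Console.WriteLine(logsJson);
}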
Hope this helps.

Is it possible to use Control-M to orchestrate Azure Data Factory jobs

Is it possible to use Control-M to orchestrate Azure Data Factory jobs?
I found this agent that can be installed on a VM:
https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bmc-software.ctm-agent-linux-arm
But I didn't find any documentation about it.
Can Control-M call a REST API to run and monitor a job? I could use Azure Functions and Blobs to control it.
All Control-M components can be installed and operated on Azure (and most other cloud infrastructure). Either use the link you quoted or alternatively deploy Agents using the Control-M Automation API (AAPI), or a combination of the two.
As long as you are on a fairly recent version of Control-M you can do most operational tasks, for example you can monitor a job like so -
ctm run jobs:status::get -s "jobid=controlm:00001"
The Control-M API is developing quickly; check out the documentation linked from here -
https://docs.bmc.com/docs/automation-api/9019100monthly/services-872868740.html#Control-MAutomationAPI-Services-ctmrunjob:status::get
Also see -
https://github.com/controlm/automation-api-quickstart
http://controlm.github.io
https://docs.bmc.com/docs/display/public/workloadautomation/Control-M+Automation+API+-+Services
https://52.32.170.215:8443/automation-api/swagger-ui.html
At this time, I don't believe you will find any out of the box connectors for Control-M to Azure Data Factory integration. You do have some other options, though!
Proxy ADF Yourself
You can write the glue code for this yourself, essentially acting as the mediator between the two.
Write a program that will invoke the ADF REST API to run a pipeline.
Details Here
After triggering the pipeline, write the code to monitor its status.
Details Here
Have Control-M call your code via an Agent that has access to it.
I've done this with a C# console app running on a local server, and a Control-M Agent that invokes the glue code.
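A rough sketch of that glue code, assuming you already have an Azure AD bearer token for https://management.azure.com; the subscription, resource group, factory and pipeline names are placeholders, and the calls are the documented createRun / pipelineruns endpoints (verify the api-version against the current ADF REST docs):
// needs System.Net.Http, System.Net.Http.Headers, System.Text and Newtonsoft.Json; token acquisition is omitted
var baseUrl = "https://management.azure.com/subscriptions/<sub>/resourceGroups/<rg>"
            + "/providers/Microsoft.DataFactory/factories/<factory>";

using (var http = new HttpClient())
{
    http.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", "<aad-access-token>");

    // 1. Trigger the pipeline
    var runResponse = await http.PostAsync(
        baseUrl + "/pipelines/<pipelineName>/createRun?api-version=2018-06-01",
        new StringContent("{}", Encoding.UTF8, "application/json"));
    var runId = (string)JObject.Parse(await runResponse.Content.ReadAsStringAsync())["runId"];

    // 2. Poll the run status until it reaches a terminal state
    string status;
    do
    {
        await Task.Delay(TimeSpan.FromSeconds(30));
        var statusJson = await http.GetStringAsync(
            baseUrl + "/pipelineruns/" + runId + "?api-version=2018-06-01");
        status = (string)JObject.Parse(statusJson)["status"];
    } while (status == "InProgress" || status == "Queued");

    Console.WriteLine("Pipeline finished with status: " + status);
}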
The Control-M documentation here also describes a way to execute an Azure Function directly from Control-M. This means you could put your code in an Azure Function.
Details Here
ALTERNATIVE METHOD
For a "no code" way, check out this Logic App connector.
Write a Logic App that runs the pipeline and then calls Get Pipeline Run in a loop to monitor its status.
Next, Control-M should be able to use a plugin to invoke the Logic App.
Notes
Note that Control-M requires an HTTP trigger for Azure Functions and Logic Apps.
You might also be able to take advantage of the Control-M Web Services plugin, though in my experience its support for different authentication methods is lacking.
Hope this helps!
I just came across this post so a bit late to the party.
Control-M includes Application Integrator, which enables you to use integrations created by others and to either enhance them or build your own. You can use REST or the CLI to tell Control-M which requests should be sent to an application when a job is started, how the job should be monitored during execution, and how to analyze the results and collect output.
A public repository accessible from Application Integrator shows existing job types, and there is one for Data Factory. I have extended it a bit so that the Data Factory pipeline is started and monitored to completion via REST, and then a PowerShell script is invoked to retrieve the pipeline run information for each activity within the pipeline.
I've posted that job and script in https://github.com/JoeGoldberg/automation-api-community-solutions/tree/master/4-ai-job-type-examples/CTM4AzureDataFactory but the README is coming later.

Queue triggers not working directly after deployment

I have a (C#-based) Function App on a consumption plan that only contains queue triggers. When I deploy it (via Azure DevOps) and have something written to the queue, the trigger does not fire unless I go to the Azure Console and visit the Function App. It also works to add an HTTP trigger to the App and call that. After that, all other triggers work.
The same phenomenon is observed with Timer triggers.
My hypothesis is that these triggers only work when the runtime is active but not directly after deployment when no runtime was created. Is that true? If so, what is the suggested way around this?
My only workaround idea is to add an HTTP trigger and fire regular keepalive pings to that trigger. But that sounds wrong.

Programmatically Schedule one-time execution of Azure function

I have looked through the documentation for WebJobs, Functions, and Logic Apps in Azure, but I cannot find a way to schedule a one-time execution of a process through code. My users need to be able to schedule notifications to go out at a specific time in the future (usually within a few hours or a day of being scheduled). Everything I am reading about those services uses CRON expressions, which are not designed for one-time executions. I realize that I could schedule the job to run at intervals and check the database to see if the rest of the job needs to run, but I would like to avoid running the jobs unnecessarily if possible. Any help is appreciated.
If it is relevant, I am using C#, ASP.NET MVC Core, App Services and a SQL database all hosted in Azure. My plan was to use Logic apps to check the database for a scheduled event and send notifications through Twilio, SendGrid, and iOS/Android push notifications.
One option is to create Azure Service Bus messages in your app using the ScheduledEnqueueTimeUtc property. This creates the message in the queue, but it only becomes consumable at that time.
A Logic App could then be listening to that Service Bus queue and do the further processing, e.g. SendGrid, Twilio, etc...
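A minimal sketch, assuming the Microsoft.Azure.ServiceBus SDK; the connection string, queue name, and notification payload are placeholders:
// needs the Microsoft.Azure.ServiceBus and Newtonsoft.Json packages; inside an async method
var queueClient = new QueueClient("<service-bus-connection-string>", "scheduled-notifications");

var body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(notification)); // your payload
var message = new Message(body)
{
    // Enqueued now, but receivers (e.g. the Logic App) only see it at this time
    ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddHours(3)
};

await queueClient.SendAsync(message);
await queueClient.CloseAsync();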
HTH
You could use an Azure Queue trigger with deferred visibility. This keeps the message invisible for a specified timeout, which conveniently acts as a timer.
// requires the Microsoft.Azure.Storage.Queue package; connection string and queue name are placeholders
var account = CloudStorageAccount.Parse("<storage-connection-string>");
CloudQueue queueOutput = account.CreateCloudQueueClient().GetQueueReference("my-queue"); // same queue as the trigger listens on
var strjson = JsonConvert.SerializeObject(message); // message is your payload
var cloudMsg = new CloudQueueMessage(strjson);
var delay = TimeSpan.FromHours(1);
queueOutput.AddMessage(cloudMsg, initialVisibilityDelay: delay); // invisible to consumers for 1 hour
See https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.storage.queue.cloudqueue.addmessage?view=azure-dotnet for more details on this overload of AddMessage.
You can use Azure Automation to schedule tasks programmatically using the REST API. Learn about it here.
You can also use Azure Event Grid. Based on this article, you can "Extend existing workflows by triggering a Logic App once there is a new record in your database".
Hope this helps.
The other answers are all valid options, but there are some others as well.
For Logic Apps you can build this behavior into the app, as described in the Scheduler migration guide. The solution described there is to create a Logic App with an HTTP trigger and pass the desired execution time to that trigger (in the POST data or query parameters). The 'Delay until' action can then be used to postpone the execution of the following steps until the time passed to the trigger.
You'd have to change the logic app to support this, but depending on the use case that may not be an issue.
For Azure Functions, a similar pattern can be achieved using Durable Functions, which support durable timers.
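A minimal sketch of that, assuming a C# orchestration; the activity name is illustrative:
// requires Microsoft.Azure.WebJobs.Extensions.DurableTask
[FunctionName("ScheduleNotificationOrchestrator")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // The client that starts the orchestration passes the desired send time as input
    DateTime sendAtUtc = context.GetInput<DateTime>();

    // Durable timer: the orchestrator is unloaded and resumed at the scheduled time,
    // so nothing is actively running in the meantime
    await context.CreateTimer(sendAtUtc, CancellationToken.None);

    // The one-time work, e.g. sending the Twilio/SendGrid/push notification
    await context.CallActivityAsync("SendNotification", context.InstanceId);
}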
