I have a published and scheduled pipeline running at regular intervals. Sometimes the pipeline may fail (for example, if the datastore is offline for maintenance). Is there a way to specify that the scheduled pipeline should perform a certain action if it fails for any reason? Actions could be to send me an email, retry a few hours later, or invoke a webhook. As it is now, I have to manually check the status of our production pipeline at regular intervals, and this is sub-optimal for obvious reasons. I could of course instruct every script in my pipeline to perform certain actions if it fails for whatever reason, but it would be cleaner and easier to specify this globally for the pipeline schedule (or the pipeline itself).
Possible sub-optimal solutions could be:
Setting up an Azure Logic App to invoke the pipeline
Setting up a cron job or Azure Scheduler
Setting up a second Azure Machine Learning pipeline on a schedule that triggers the pipeline, monitors the output and performs relevant actions if errors are encountered
All the solutions above suffer from being convoluted and not very clean - surely there must be a simple, clean solution to this problem?
This solution reads from the logs of your pipeline and lets you do anything within a Logic App's capabilities; I used it to email the team when a scheduled pipeline failed.
Steps:
Create an Event Hubs Namespace and an Event Hub
Create a Service Bus Namespace and a Service Bus Queue
Create a Stream Analytics Job using the Event Hub as input and the Service Bus Queue as output
Create a Logic App triggered by any event arriving in the Service Bus Queue, then add an Office 365 Outlook "Send an email (V2)" step
Create an Event Subscription inside the Azure Machine Learning workspace that sends filtered events to the Event Hub
Start the Stream Analytics Job
Two steps are fundamental when creating the Event Subscription:
Subscribe to the 'Run Status Changed' event to get the log when a pipeline fails
Use the advanced filters section to specify which pipeline you want to monitor (change 'deal-UAT' to the name of your specific ML experiment).
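As an illustration only (the exact filter key is an assumption and its casing may differ; check the event schema your workspace actually emits), the advanced filter might look like:

Key: data.experimentName
Operator: String contains
Value: deal-UAT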
It looks like a lot of setup, but it is quick and easy to do.
Related
Is it possible to get the name, type, associated resource group, trigger time, trigger type, and trigger source for all Azure Functions within a subscription? We have set up a number of Azure Functions over time but they are not cleanly organized or properly documented, so we only know that we have a lot of Functions executing but we don't know when they're scheduled to execute, what triggers their execution, etc. Having this information would help us better balance jobs and more effectively plan how to add and schedule other automated tasks.
Yes, you can find out which trigger runs at which time by enabling Application Insights when creating your function app.
Then, when a trigger runs, you can check the Application Insights logs of the function app to see which trigger is executing.
You can use a Kusto query in Application Insights to get the details of the function executions, and the Logs blade of the function app shows the same telemetry.
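If you prefer to run such a query from code rather than the portal, here is a minimal sketch using the azure-monitor-query SDK; the workspace ID, function app name, and table/column names are assumptions and depend on whether your Application Insights resource is workspace-based:

# A minimal sketch (workspace ID and function app name are placeholders; for a
# workspace-based Application Insights resource, requests land in AppRequests).
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
workspace_id = "<log-analytics-workspace-id>"  # placeholder

query = """
AppRequests
| where AppRoleName == 'my-function-app'        // assumed function app name
| summarize Invocations = count() by Name, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)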
I have a pipeline that needs to run every hour.
I am using a tumbling window trigger.
If the pipeline is successful, it should continue to run every hour.
If the pipeline fails for some reason, the next instance should not run; that is, the pipeline should not run for the next hour.
How can we stop the trigger if the pipeline fails?
Currently, this feature is not available in Azure Data Factory. You can raise a feature request on the Azure Data Factory feedback forum.
Alternatively, you can add a Web activity to your pipeline that runs upon failure of the previous activity. In the Web activity, you can make an HTTP request to stop the trigger.
Refer to this document to Stop a trigger.
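For reference, a minimal sketch of what that stop call looks like (all names are placeholders and a recent azure-mgmt-datafactory is assumed); the Web activity in the pipeline would POST to the same management REST endpoint that the SDK wraps:

# A minimal sketch (all names are placeholders): stop the trigger with the
# azure-mgmt-datafactory SDK. A Web activity would POST to the equivalent
# management endpoint:
#   https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}/providers
#   /Microsoft.DataFactory/factories/{factory}/triggers/{trigger}/stop?api-version=2018-06-01
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# begin_stop is a long-running operation; .result() waits for completion
client.triggers.begin_stop("<resource-group>", "<factory-name>", "<trigger-name>").result()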
I need to set up an alert system for when my Azure Data Factory pipeline runs for more than 20 minutes. The alert should come while the pipeline is running, once its duration passes 20 minutes, not after the pipeline completes. How can I do this? I think this can be done using an Azure Function, but I am not familiar with it, so I'm looking for a script that does this.
Yes, an Azure Function is one way to achieve this requirement.
For example, if you are using Python: you need an Azure Function that runs periodically and monitors the status of the pipeline. The key metric is the pipeline's duration. A pipeline is made up of activities, so you can monitor every activity.
In Python, this is how to query the activity runs you want:
https://learn.microsoft.com/en-us/python/api/azure-mgmt-datafactory/azure.mgmt.datafactory.operations.activityrunsoperations?view=azure-python#query-by-pipeline-run-resource-group-name--factory-name--run-id--filter-parameters--custom-headers-none--raw-false----operation-config-
The page below shows how to get the duration of an Azure Data Factory activity run:
https://learn.microsoft.com/en-us/python/api/azure-mgmt-datafactory/azure.mgmt.datafactory.models.activityrun?view=azure-python#variables
(There is a variable named duration_in_ms that you can use to get the duration of the activity run.)
This shows how to use Python to monitor a pipeline:
https://learn.microsoft.com/en-us/azure/data-factory/monitor-programmatically#python
You can create an Azure Function app with a timer trigger to monitor the Azure Data Factory activity. This is the documentation for the Azure Functions timer trigger:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer?tabs=python
The basic idea is to put the code that checks whether the pipeline has been running for more than N minutes in the body of the timer-triggered Azure Function, and then use the status (or logs) of the function to reflect whether the pipeline's running time has exceeded the threshold. A sketch of this is shown below.
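A minimal sketch of that idea, assuming the Python v1 programming model; the 20-minute threshold and all resource names are placeholders. It queries pipeline runs directly rather than per-activity, but client.activity_runs.query_by_pipeline_run and duration_in_ms from the links above can be used in the same way:

import datetime
import logging

import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
FACTORY_NAME = "<factory-name>"         # placeholder
PIPELINE_NAME = "<pipeline-name>"       # placeholder
LIMIT = datetime.timedelta(minutes=20)

def main(mytimer: func.TimerRequest) -> None:
    client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    now = datetime.datetime.now(datetime.timezone.utc)
    filters = RunFilterParameters(
        last_updated_after=now - datetime.timedelta(days=1),
        last_updated_before=now,
    )
    runs = client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY_NAME, filters)

    for run in runs.value:
        if run.pipeline_name != PIPELINE_NAME or run.status != "InProgress":
            continue
        start = run.run_start
        if start.tzinfo is None:
            start = start.replace(tzinfo=datetime.timezone.utc)
        elapsed = now - start
        if elapsed > LIMIT:
            # Logging an error (or raising) lets an alert rule on the function
            # app pick this up and send the email/SMS notification.
            logging.error("Pipeline run %s has been running for %s", run.run_id, elapsed)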
Then use an alert on the Azure Function; Azure Monitor supports several alert signals for function apps (you can also set an output binding on your function).
In the Azure portal, you can configure the alert on the function app; select Email/SMS message as the action type and give it your email address.
Is Azure Functions a good alternative to Azure Data Factory to use as a scheduler? It has a blob trigger for monitoring, and C# can be used to trigger Databricks jobs via the API. But is it a viable alternative?
Edited to add more information: I want to trigger a Databricks job based on a trigger file, but I do not want to use Azure Data Factory or Databricks' own job scheduling.
I would probably use a simple Logic App with an Event Grid trigger on the blob storage "Blob Created" event. Based on the trigger data, I would call the Databricks Jobs REST API.
I got the entire demo below working in under 10 minutes, so it is fast to set up.
For this demo I used a storage account and a Logic App, with the Logic App set up to trigger on the Event Grid blob created event.
I strongly suggest adding a prefix filter like
/blobServices/default/containers/<container_name>
so that you don't fire too many Logic App runs from different containers, since Event Grid reacts to all events in the entire storage account.
Then add an HTTP action that calls the Databricks REST API (in the demo I simply called the clusters list endpoint). Of course, at this point simply change the clusters list call to the job submission REST call, as sketched below.
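For reference, a hedged sketch of that job-submission call (the workspace URL, token, and job id are placeholders); the Logic App HTTP action would send the same request:

import requests

workspace_url = "https://<databricks-workspace>.azuredatabricks.net"  # placeholder
token = "<databricks-personal-access-token>"                          # placeholder

resp = requests.post(
    f"{workspace_url}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": 123},  # placeholder job id
)
resp.raise_for_status()
print(resp.json()["run_id"])  # id of the run that was just started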
You can then see the execution in the Logic App run history.
Just make sure that the Microsoft.EventGrid resource provider is registered in your subscription, or the Logic App will never fire.
I have looked through the documentation for WebJobs, Functions and Logic Apps in Azure, but I cannot find a way to schedule a one-time execution of a process through code. My users need to be able to schedule notifications to go out at a specific time in the future (usually within a few hours or a day of being scheduled). Everything I am reading on those services uses CRON expressions, which are not designed for one-time executions. I realize that I could schedule the job to run on an interval and check the database to see if the rest of the job needs to run, but I would like to avoid running the jobs unnecessarily if possible. Any help is appreciated.
If it is relevant, I am using C#, ASP.NET MVC Core, App Services and a SQL database all hosted in Azure. My plan was to use Logic apps to check the database for a scheduled event and send notifications through Twilio, SendGrid, and iOS/Android push notifications.
One option is to create Azure Service Bus messages in your app using the ScheduledEnqueueTimeUtc property. This creates the message in the queue, but it only becomes consumable at that time.
Then a Logic App could be listening to that Service Bus Queue and doing the further processing, e.g. SendGrid, Twilio, etc...
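A minimal sketch of that idea (the question uses C#, where the property is ScheduledEnqueueTimeUtc; this sketch uses the Python Service Bus SDK, and the connection string, queue name, and payload are placeholders):

import datetime
import json

from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"  # placeholder
queue_name = "scheduled-notifications"        # placeholder
payload = {"user_id": 42, "channel": "sms"}   # placeholder notification payload

# Deliver three hours from now; the message sits in the queue but is not
# consumable until this time.
send_at = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=3)

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_sender(queue_name) as sender:
        sender.schedule_messages(ServiceBusMessage(json.dumps(payload)), send_at)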
HTH
You could use an Azure Queue trigger with deferred visibility. This keeps the message invisible for a specified timeout, which conveniently acts as a timer.
CloudQueue queueOutput; // same queue as trigger listens on
var strjson = JsonConvert.SerializeObject(message); // message is your payload
var cloudMsg = new CloudQueueMessage(strjson);
var delay = TimeSpan.FromHours(1);
queueOutput.AddMessage(cloudMsg, initialVisibilityDelay: delay);
See https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.storage.queue.cloudqueue.addmessage?view=azure-dotnet for more details on this overload of AddMessage.
You can use Azure Automation to schedule tasks programmatically using REST API. Learn about it here.
You can also use Azure Event Grid. Based on this article, you can "extend existing workflows by triggering a Logic App once there is a new record in your database".
Hope this helps.
The other answers are all valid options, but there are some others as well.
For Logic Apps you can build this behavior into the app as described in the Scheduler migration guide. The solution described there is to create a Logic App with an HTTP trigger and pass the desired execution time to that trigger (in the POST data or query parameters). The 'Delay Until' block can then be used to postpone the execution of the following steps until the time passed to the trigger.
You'd have to change the logic app to support this, but depending on the use case that may not be an issue.
For Azure functions a similar pattern could be achieved using Durable Functions which has support for Timers.
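A minimal sketch of that pattern with Python Durable Functions (the activity name and input shape are assumptions):

from datetime import datetime

import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # The desired send time is passed in by the starter (e.g. an HTTP trigger)
    payload = context.get_input()
    send_at = datetime.fromisoformat(payload["send_at"])

    # Durable timer: the orchestration is suspended until send_at
    yield context.create_timer(send_at)

    # Hypothetical activity function that actually sends the notification
    yield context.call_activity("SendNotification", payload)

main = df.Orchestrator.create(orchestrator_function)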