I have multiple pipelines in Azure Data Factory that get data from APIs and then push it to a data lake. I get alerts in case one of the pipelines fails. I then go to the ADF instance and rerun the failed pipeline manually. I am trying to come up with an automated way of rerunning a pipeline in case it fails. Any suggestions or guidance would be helpful. I thought of Azure Logic Apps or Power Automate, but it turns out they don't have the right actions to trigger a failed pipeline.
If the pipeline design can be modified, one approach is to:
1. Add a parameter pMax_rerun_count (this is to ensure the pipeline doesn't go into an indefinite loop).
2. Set 2 variables:
(2.a) Pipeline_status, default value: Fail
(2.b) Max_loop_count, default value: 0. This also guards against indefinite looping; during the run it is compared against the maximum permissible retry count (i.e. pMax_rerun_count) passed in as the pipeline parameter.
3. Place all activities inside an Until activity whose expression is @or(equals(variables('Pipeline_status'),'Success'), equals(pipeline().parameters.pMax_rerun_count, variables('Max_loop_count')))
4. The first activity inside the Until activity is a Set Variable activity that increments Max_loop_count by 1 (note that ADF does not allow a Set Variable activity to reference the variable it is setting, so you may need a helper variable for the increment).
5. The final activity inside the Until activity is a Set Variable activity that sets Pipeline_status to "Success".
The purpose is to keep running all the intended activities inside the Until block until they complete successfully, while pMax_rerun_count ensures the pipeline doesn't loop indefinitely.
This setup can be considered a framework if all pipelines need to be rerun in case of failure.
I came up with a streamlined way of rerunning failed pipelines. I decided to use the Azure Data Factory REST API alongside Azure Logic Apps to solve the problem.
I run the Logic App on a schedule and then use the following API calls:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelineruns/?api-version=2018-06-01
This API call returns all the pipeline runs. If we want to filter it down to failed runs, we can add the following body to it:
{
    "lastUpdatedAfter": "2018-06-16T00:36:44.3345758Z",
    "lastUpdatedBefore": "2018-06-16T00:49:48.3686473Z",
    "filters": [
        {
            "operand": "status",
            "operator": "Equals",
            "values": [
                "failed"
            ]
        }
    ]
}
After getting the failed pipeline runs, we can then invoke the following API for each failed pipeline to rerun it:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelines/{pipelineName}/createRun?api-version=2018-06-01
This solution can be implemented with a scripting language, a Power Automate workflow, or Azure Logic Apps.
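As a rough illustration of the scripting-language route, here is a minimal PowerShell sketch of the same flow. Note it is an assumption-laden sketch, not the answer's exact method: it uses the Az.DataFactory module instead of the raw REST calls quoted above, and the resource group name, factory name and one-hour window are placeholders.

$rg      = "my-resource-group"   # placeholder: your resource group
$factory = "my-data-factory"     # placeholder: your data factory

# Find runs that failed in the last hour (same idea as the filtered pipeline-runs query above)
$failedRuns = Get-AzDataFactoryV2PipelineRun -ResourceGroupName $rg -DataFactoryName $factory -LastUpdatedAfter (Get-Date).AddHours(-1) -LastUpdatedBefore (Get-Date) | Where-Object { $_.Status -eq "Failed" }

# Rerun each failed pipeline (equivalent to the createRun call above)
foreach ($run in $failedRuns) {
    Invoke-AzDataFactoryV2Pipeline -ResourceGroupName $rg -DataFactoryName $factory -PipelineName $run.PipelineName | Out-Null
}

The same loop could be scheduled from an Automation runbook or any other scheduler, which is essentially what the Logic App recurrence trigger does here.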
As of now there is no built-in method in ADF to automate the process of "rerunning from the failed activity", but each activity has a Retry option that you should certainly employ. With it, any activity in the pipeline can be retried as many times as necessary if it fails.
Alternatively, point the trigger to a new pipeline with an Execute Pipeline activity that calls the current pipeline containing the copy activity, and choose the Advanced -> Wait for completion option.
After the executed pipeline completes, the webhook action should contain the logic to halt the DW.
I have a web app where clients can press a button that calls a durable function. As long as that function is executing, a spinning wheel is animated. When the function finishes executing, I stop/hide the loading wheel, and show a green check-mark.
Currently, I'm just hitting the "statusQueryGetUri" returned by the durable function, on a regular interval, to see if the function is running/stopped.
Is there a way for me to 'listen' for changes in status so I'm not hitting that URL repeatedly?
You can follow this link to configure email or other notification alerts for your Azure Functions.
You may also check these links for a step-by-step implementation:
http://www.mattruma.com/adventures-with-azure-functions-create-an-alert-from-app-insight-to-send-an-email-notification/
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/alerts-log
While creating the alert rule, you have the option to select a condition:
There you get "Custom Log Search" as an option:
You can configure the type of trigger or resource on which you want to perform this action while creating the action group:
You can customize the rule according to your requirements, e.g. you can check the traces log for any execution, or just check the status of the function (this is only a reference).
You may also look into this document for reference:
https://www.fourmoo.com/2020/02/19/how-to-configure-azure-function-app-notifications-for-errors/
You can set up a timer-triggered logic app that will do this for you on a recurrent schedule.
I'm trying to load some Excel data using an ADF pipeline via Logic Apps. However, when triggering it through Logic Apps, the task triggers and the workflow immediately moves on to the next step. I'm looking for a solution where the next step waits for the "Execute Data Factory Pipeline" action to complete before proceeding.
Adding an image for clarity.
-Thanks
For this requirement, I provide a sample of my logic app below for your reference:
1. Add a "Create a pipeline run" action and initialize a variable named status(set its value as "InProgerss").
2. Then add a "Until" action, set the break condition as status is equal to "Succeeded". Add a "Get a pipeline run" action and set the variable status as the value of Status comes from "Get a pipeline run" in the "Until" action. Shown as below screenshot:
3. After that, run your logic app. The steps will run after the "Until" action(also after your pipeline complete).
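The "Until" loop above is essentially polling the pipeline run status until it reaches a terminal state. If you ever need the same wait behaviour from a script instead of a Logic App, a rough PowerShell sketch could look like the following; it assumes the Az.DataFactory module, and the resource group, factory and pipeline names are placeholders.

$rg      = "my-resource-group"   # placeholder: your resource group
$factory = "my-data-factory"     # placeholder: your data factory

# Start the pipeline and capture its run ID (mirrors the "Create a pipeline run" action)
$runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName $rg -DataFactoryName $factory -PipelineName "LoadExcelPipeline"   # "LoadExcelPipeline" is a placeholder name

# Poll until the run reaches a terminal state (mirrors the "Until" loop around "Get a pipeline run")
do {
    Start-Sleep -Seconds 30
    $run = Get-AzDataFactoryV2PipelineRun -ResourceGroupName $rg -DataFactoryName $factory -PipelineRunId $runId
} until ($run.Status -in "Succeeded", "Failed", "Cancelled")

Unlike the break condition above, this sketch also stops on Failed or Cancelled so the loop cannot spin forever.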
By the way:
You can also do it in Data Factory itself: you can delete the data after completion. Please refer to this document.
I have created 2 pipelines in Azure Data Factory. We have a custom activity created to run a Python script inside the pipeline. When the pipeline is executed manually, it runs successfully every time. But I have created a scheduled trigger with an interval of 15 minutes to run the 2 pipelines. The first execution runs successfully, but in the next interval I get the error "Operation on target PyScript failed: Hit unexpected exception and execution failed." We are blocked with this; any input on this would be really helpful.
The ADF troubleshooting guide states:
Custom Activity :
The following table applies to Azure Batch.
Error code: 2500
Message: Hit unexpected exception and execution failed.
Cause: Can't launch command, or the program returned an error code.
Recommendation: Ensure that the executable file exists. If the program started, make sure stdout.txt and stderr.txt were uploaded to the storage account. It's a good practice to emit copious logs in your code for debugging.
Related helpful doc: Tutorial: Run Python scripts through Azure Data Factory using Azure Batch
Hope this helps.
If you are still blocked, please share the failed pipeline run ID and failed activity run ID for further analysis.
I want to automatically re-trigger a failed pipeline using the If Condition activity (dynamic content).
Process :
Pipeline 1 running at a schedule time with trigger 1 - works
If pipeline 1 fails, scheduled trigger 2 will run pipeline 2 - works
Pipeline 2 should contain an If Condition activity to check whether pipeline 1 failed - this is the issue
If pipeline 1 failed, then rerun it, else do nothing - this is what I need to fix
How can this be done?
All help is appreciated.
Thank you.
I can give you an idea.
For example, say your pipeline1 fails for some reason. At that point, you can create a file in Azure Blob Storage.
(Here is an example; you can use whichever activities you want.)
Then create trigger2 as an event trigger that fires when a blob is created.
Can't you do it with the "Execute Pipeline" activity?
For example:
You create a simple pipeline named "Rerun main pipeline", use an "Execute Pipeline" activity inside it, and link it to "main pipeline". Then, in the main pipeline, you add a failure output and link it to "Rerun main pipeline".
I had a question regarding Azure DevOps pipelines and tasks and was wondering if anyone could help.
I have a pipeline with a task that runs a PowerShell script. This script kicks off a separate pipeline, but once that script is run, the original task returns a "pass" (as expected) and the next task in the original pipeline begins to run.
Ideally, I would like the next task in pipeline 1 to wait until the pipeline that was kicked off by the script is complete (and returns a pass). Does anyone know of a way this can be achieved? The steps are using YAML. So far I have seen conditions to wait for other steps in the same pipeline, but nothing to stop a step from running until a completely separate pipeline is completed (and passes successfully).
Hopefully I am making sense. I can provide screenshots if that would help as well!
Instead of triggering the build with your PowerShell script, you can install the Trigger Build Task extension and use it. There you have the option "Wait till the triggered builds are finished before build continues":
In YAML:
- task: TriggerBuild@3
  displayName: 'Trigger a new build of Test'
  inputs:
    buildDefinition: Test
    waitForQueuedBuildsToFinish: true
    waitForQueuedBuildsToFinishRefreshTime: 60
If this option is enabled, the task will wait until all the queued builds are finished. Note: this can take a while depending on your builds, and your build will not continue in the meantime. If you only have one build agent, you will even end up in a deadlock situation!
According to the description, the whole process could be separated into four parts:
1. Task1 in Pipeline1 should trigger the task in Pipeline2; if the task in Pipeline2 is not editable, you might need to add a new task at the end of Pipeline2 for the next step to use.
2. The last task in Pipeline2 should do something like create a txt file in a specific folder, or anything else that can be detected by Task2 in Pipeline1.
3. Task2 in Pipeline1 should wait and listen for the txt file being created in the folder, which means Pipeline2 has completed successfully.
4. Run Task2.
The main problem you face here is that all variables are evaluated on queue with YAML.
One option is to utilise an external service such as an Azure Storage Account: the second pipeline writes comments or statuses into a text file there, and the first pipeline reads the values back.
At the end of script 1 in pipeline 1:
Do {
    Start-Sleep -Seconds 30
    # Check whether the marker blob exists yet (assumes the Az.Storage module, a storage context in $ctx,
    # and hypothetical container/blob names "signals" / "pipeline2-done.txt")
    $File = Get-AzStorageBlob -Container "signals" -Blob "pipeline2-done.txt" -Context $ctx -ErrorAction SilentlyContinue
    if ($File) {
        $FileExists = $true
    }
} until ($FileExists)
At the end of your other pipeline
Do stuff
Write stuff to file
Upload file to storage account
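A minimal sketch of those last two steps, assuming the Az.Storage module, a storage context already in $ctx, and the same hypothetical "signals" container and marker blob name used in the polling loop above:

# Write a completion marker and upload it so pipeline 1 can detect it
Set-Content -Path .\pipeline2-done.txt -Value "succeeded"
Set-AzStorageBlobContent -File .\pipeline2-done.txt -Container "signals" -Blob "pipeline2-done.txt" -Context $ctx -Force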
If you just want to wait for completion, you could use the Azure DevOps CLI at the end of your first PowerShell step. This is a good link: https://techcommunity.microsoft.com/t5/ITOps-Talk-Blog/Get-Azure-Pipeline-Build-Status-with-the-Azure-CLI/ba-p/472104 . You could then run your logic on the returned status or result.
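For instance, the end of the first PowerShell step could look roughly like this. It assumes the azure-devops CLI extension is installed and signed in with its organization/project defaults configured, and that $buildId is a hypothetical variable holding the run ID captured when you queued the second pipeline:

# Poll the triggered run until it finishes, then act on the result
do {
    Start-Sleep -Seconds 30
    $run = az pipelines runs show --id $buildId --output json | ConvertFrom-Json
} until ($run.status -eq "completed")

if ($run.result -ne "succeeded") {
    throw "Triggered pipeline finished with result '$($run.result)'"
}

Failing the step with throw here makes the first pipeline stop instead of carrying on when the triggered pipeline did not pass.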