Azure Data Factory: automatically re-trigger a failed pipeline

I want to automatically re-trigger a failed pipeline using the If Condition activity (dynamic content).
Process:
Pipeline 1 runs at a scheduled time with trigger 1 - works
If pipeline 1 fails, scheduled trigger 2 runs pipeline 2 - works
Pipeline 2 should contain an If Condition to check whether pipeline 1 failed - this is the issue
If pipeline 1 failed, then rerun it; otherwise ignore - this needs fixing
How can this be done?
All help is appreciated.
Thank you.

I can give you an idea.
For example: your pipeline1 fails for some reason. At that point, you can create a file in Azure Blob Storage.
(Here is an example; you can use whichever activities you want.)
Then create trigger2 as a trigger that fires when a blob is created.
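A rough sketch of what that "create a file" step could do if you drive it from a small script (for example, from an Azure Function or runbook called on pipeline1's failure path); the storage account, container, and blob names here are only placeholders:

# Placeholder names - drop a marker blob that the blob-created trigger (trigger2) watches for
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
"pipeline1 failed at $(Get-Date -Format o)" | Out-File -FilePath pipeline1-failed.txt
Set-AzStorageBlobContent -File pipeline1-failed.txt -Container "pipeline-failures" -Blob "pipeline1-failed.txt" -Context $ctx -Force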

Can't you do it with the "Execute Pipeline" activity?
For example:
You create a simple pipeline named "Rerun main pipeline", add an "Execute Pipeline" activity inside it, and link it to "main pipeline". Then, in the main pipeline, you add a failure output and link it to "Rerun main pipeline".

Related

A pull request against master with a specific source branch doesn't trigger the Azure pipeline

I am trying to write a pipeline to build an image and deploy it to the test environment, which is hosted on Azure. My codebase is on GitHub. While trying to trigger the pipeline on a pull request from the source branch against the target branch, I am facing an issue where the pipeline doesn't trigger for the PR but runs fine for my other conditions, such as a push to develop or master.
The condition used for the PR trigger is as follows:
and(succeeded(), eq(variables['Build.Reason'], 'PullRequest'), startsWith(variables['System.PullRequest.SourceBranch'], 'release/'), eq(variables['System.PullRequest.TargetBranch'], 'master'))
The triggers in the yaml file can be seen below:
trigger:
  branches:
    include:
    - develop
    - master
  paths:
    exclude:
    - k8s/*
    - src/VERSION
    - src/package.json
pr:
- master
Am I missing something here?
There are two scenarios:
Scenario 1: The pipeline was triggered when the pull request was created, but the stages/jobs/tasks with the condition you showed didn't run.
Then the issue should be related to condition, not trigger.
I have tested and confirmed that your condition is right. So, it's probably not the condition notation but something else that's causing your task not to run.
Here is a troubleshooting advice:
Go to the build log, click on the stages/jobs/tasks that were skipped. You will find a comparison between the condition and the real value. From here, you can tell which part of the condition is keeping your tasks from running.
Scenario 2: The pipeline wasn't triggered when the pull request was created.
Then the issue should be related to trigger, not condition.
Please see the documents below for detailed troubleshooting advice based on your case:
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
My CI or PR triggers have been working fine. But, they stopped working now.
I'm not sure if this will fix it, but according to the documentation the System.PullRequest.TargetBranch variable has the format refs/heads/main, which would mean your condition needs updating to add refs/heads in front of the branch names.
As such, I would add a step to echo these variables just to confirm whether they have the refs/heads prefix, and if so adjust your logic accordingly.
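As a rough sketch, such a step could be a PowerShell task containing nothing more than the following (the predefined variables are exposed to scripts as environment variables):

# Print the PR variables to check whether they carry the refs/heads/ prefix
Write-Host "SourceBranch: $env:SYSTEM_PULLREQUEST_SOURCEBRANCH"
Write-Host "TargetBranch: $env:SYSTEM_PULLREQUEST_TARGETBRANCH"

If they do show the prefix, compare the target branch against 'refs/heads/master' and start the source-branch check with 'refs/heads/release/'.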

Rerunning a failed ADF pipeline automatically

I have multiple pipelines in Azure Data Factory that get data from APIs and then push it to a data lake. I get alerts in case one of the pipelines fails. I then go to the ADF instance and rerun the failed pipeline manually. I am trying to come up with an automated way of rerunning a pipeline in case it fails. Any suggestions or guidance would be helpful. I thought of Azure Logic Apps or Power Automate, but it turns out they don't have the right actions to trigger a failed pipeline.
If the pipeline design can be modified, then one method is:
1. Set a parameter pMax_rerun_count (this is to ensure the pipeline doesn't go into an indefinite loop).
2. Set 2 variables:
(2.a) Pipeline_status, default value: Fail
(2.b) Max_loop_count, default value: 0. This is to ensure the pipeline doesn't run in endless loops; the maximum permissible retry count (i.e. pMax_rerun_count) is passed as a parameter to the pipeline.
3. All activities should be inside an Until activity whose expression is or(equals(Pipeline_status, 'Success'), equals(pMax_rerun_count, Max_loop_count)).
4. The first activity inside the Until activity should be a Set Variable activity that increments the value of the variable Max_loop_count by 1.
5. The final activity inside the Until activity should be a Set Variable activity that sets Pipeline_status to "Success".
The purpose here is to keep running the intended activities inside the Until block until they complete successfully; pMax_rerun_count ensures the pipeline doesn't go into indefinite loops.
This setup can be considered a framework if all pipelines need to be rerun in case of failure.
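In ADF dynamic-content syntax, the Until expression from step 3 would look roughly like this (assuming pMax_rerun_count is an integer parameter and Max_loop_count is kept as a string variable, since ADF variables have no integer type):

@or(
    equals(variables('Pipeline_status'), 'Success'),
    equals(pipeline().parameters.pMax_rerun_count, int(variables('Max_loop_count')))
)

Also note that a Set Variable activity historically could not reference the variable it is setting, so the increment in step 4 may need to go through a second helper variable.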
I came up with a streamlined way of rerunning failed pipelines. I decided to use the Azure Data Factory REST API alongside Azure Logic Apps to solve the problem.
I run the Logic App on a schedule and then use the following API calls:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/queryPipelineRuns?api-version=2018-06-01
This API query returns the pipeline runs in the factory. If we want to filter it down to failed runs, we can add the following body to it:
{
    "lastUpdatedAfter": "2018-06-16T00:36:44.3345758Z",
    "lastUpdatedBefore": "2018-06-16T00:49:48.3686473Z",
    "filters": [
        {
            "operand": "status",
            "operator": "Equals",
            "values": [
                "failed"
            ]
        }
    ]
}
After getting the failed pipeline runs, we can then invoke the following API on each failed pipeline to rerun it:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelines/{pipelineName}/createRun?api-version=2018-06-01
This solution can be built with a scripting language, a Power Automate workflow, or Azure Logic Apps.
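As one example of the scripting-language route, here is a rough PowerShell sketch of the same flow; the bearer token, the one-day time window, and the $subscriptionId/$resourceGroup/$factoryName values are assumptions:

# Query the last day's failed pipeline runs and create a new run for each of them
$base = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.DataFactory/factories/$factoryName"
$headers = @{ Authorization = "Bearer $token" }

$body = @{
    lastUpdatedAfter  = (Get-Date).AddDays(-1).ToUniversalTime().ToString("o")
    lastUpdatedBefore = (Get-Date).ToUniversalTime().ToString("o")
    filters           = @(@{ operand = "status"; operator = "Equals"; values = @("Failed") })
} | ConvertTo-Json -Depth 5

$runs = Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" `
    -Uri "$base/queryPipelineRuns?api-version=2018-06-01" -Body $body

foreach ($run in $runs.value) {
    # createRun starts a fresh run of the pipeline that failed
    Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" `
        -Uri "$base/pipelines/$($run.pipelineName)/createRun?api-version=2018-06-01" -Body "{}"
}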
As of now there is no built-in method in ADF to automate the process of "rerunning from the failed activity", but each activity has a Retry option that you should certainly employ. In the pipeline, you can retry any activity as many times as necessary if it fails.
Allow the trigger to point to a new pipeline with an Execute Pipeline activity that points to the existing Data Factory pipeline containing the copy activity.
Then choose the Advanced -> Wait for completion option.
After the executed pipeline is complete, the webhook action should contain the logic to halt the DW.

How to make partially succeeded stage to failed in Azure DevOps?

I'm running newman tests in the release pipeline, but when the tests have errors it just marks the stage as partially succeeded. I want it to be marked as failed so that it triggers the Auto-Redeploy Trigger. Is this possible?
If you are using PowerShell tasks to trigger the newman tests, you can set the task result state with the commands below.
Write-Host "##vso[task.complete result=Succeeded;]"
Write-Host "##vso[task.complete result=SucceededWithIssues;]"
Write-Host "##vso[task.complete result=Failed;]"
I want to make it failed to trigger the Auto-Redeploy Trigger. Is this possible?
To make the stage fail:
Yes, it's possible. When the Continue on error option is enabled for a test task, the test task and the following tasks continue to run even when there is an error in the test task, and the stage containing the test task is marked as partially succeeded. Unchecking/disabling Continue on error will make the stage fail instead of partially succeed.
To make the auto-redeploy work:
For more details about auto-redeploy, you can check this thread.

Is there a way to "wait" for "Azure Data Factory" Execution task to complete before executing next steps of Azure Logic Apps

Trying to load some Excel data using an ADF pipeline via Logic Apps. However, when triggering through Logic Apps, the task triggers and the workflow moves on to the next step immediately. Looking for a solution where the next step waits for the "Execute Data Factory Pipeline" action to complete before proceeding.
Adding an image for clarity.
-Thanks
For this requirement, I provide a sample of my logic app below for your reference:
1. Add a "Create a pipeline run" action and initialize a variable named status(set its value as "InProgerss").
2. Then add a "Until" action, set the break condition as status is equal to "Succeeded". Add a "Get a pipeline run" action and set the variable status as the value of Status comes from "Get a pipeline run" in the "Until" action. Shown as below screenshot:
3. After that, run your logic app. The steps will run after the "Until" action(also after your pipeline complete).
By the way:
You can also do it in Data Factory: you can delete the data after completion. Please refer to this document.

Azure DevOps pipeline task to wait to run for another pipeline to complete

I had a question regarding Azure DevOps pipelines and tasks and was wondering if anyone could help.
I have a pipeline with a task that runs a PowerShell script. This script kicks off a separate pipeline, but once that script is run, the original task returns a "pass" (as expected) and the next task in the original pipeline begins to run.
Ideally, I would like the next task in pipeline 1 to wait until the pipeline that was kicked off by the script is complete (and returns a pass). Does anyone know of a way this can be achieved? The steps are using YAML. So far I have seen conditions to wait for other steps in the same pipeline, but nothing to stop a step from running until a completely separate pipeline is completed (and passes successfully).
Hopefully I am making sense. I can provide screenshots if that would help as well!
Instead of triggering the build with your PowerShell script, you can install the Trigger Build Task extension and use it. There you have an option "Wait till the triggered builds are finished before build continues":
In YAML:
- task: TriggerBuild@3
  displayName: 'Trigger a new build of Test'
  inputs:
    buildDefinition: Test
    waitForQueuedBuildsToFinish: true
    waitForQueuedBuildsToFinishRefreshTime: 60
If this option is enabled, the script will wait until all the queued builds are finished. Note: this can take a while depending on your builds, and your build will not continue. If you only have one build agent, you will even end up in a deadlock situation!
According to the description, the whole process can be separated into four parts:
Task1 in Pipeline1 should trigger the task in Pipeline2; if the task in Pipeline2 is not editable, you might need to add a new task at the end of Pipeline2 for the next step to use.
The last task in Pipeline2 should do something like creating a txt file in a specific folder, or anything else that can be detected by task2 in Pipeline1.
Task2 in Pipeline1 should wait and listen for the txt file being created in that folder, which means Pipeline2 has completed successfully.
Run task2.
The main problem you face here is that all variables are evaluated on queue with YAML.
You could do a few things, like utilise an external service such as an Azure Storage Account. Doing this, you could write comments or statuses from the pipeline into a text file and read the values back in your first pipeline.
At the end of script 1 in pipeline 1:
Do {
    # Placeholder container/blob names and a prepared storage context ($ctx) - poll for the marker file
    $File = Get-AzStorageBlob -Container "pipeline-status" -Blob "pipeline2-done.txt" -Context $ctx -ErrorAction SilentlyContinue
    if ($File) {
        $FileExists = $true
    }
    Start-Sleep -Seconds 30
} until ($FileExists)
At the end of your other pipeline:
Do stuff
Write stuff to file
Upload the file to the storage account (a rough sketch of this is shown below)
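As a minimal sketch, reusing the same placeholder storage account, container, and blob names as above:

# Write a small status file and upload it as the marker blob
"Pipeline 2 finished at $(Get-Date -Format o)" | Out-File -FilePath pipeline2-done.txt
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
Set-AzStorageBlobContent -File pipeline2-done.txt -Container "pipeline-status" -Blob "pipeline2-done.txt" -Context $ctx -Force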
If you just want to wait for completion, you could use the Azure DevOps CLI at the end of your first PowerShell step. This is a good link: https://techcommunity.microsoft.com/t5/ITOps-Talk-Blog/Get-Azure-Pipeline-Build-Status-with-the-Azure-CLI/ba-p/472104 . You could then run your logic on the returned status or result.
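A rough sketch of that kind of polling, assuming the Azure DevOps CLI with the azure-devops extension is installed and signed in, and that $runId holds the ID of the run you queued:

# Poll the triggered run until it completes, then act on its result
do {
    Start-Sleep -Seconds 30
    $status = az pipelines runs show --id $runId --query "status" -o tsv
} while ($status -ne "completed")

$result = az pipelines runs show --id $runId --query "result" -o tsv
if ($result -ne "succeeded") {
    Write-Host "##vso[task.complete result=Failed;]Downstream pipeline failed"
}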
