"Build after the previous execution has succeeded" in Azure Devops - azure

I have an Azure Pipeline A that executes a deployment to my Salesforce org when a PR is merged.
My problem statement is:
I am not able to restrict the execution of this pipeline so that it runs only after the previous execution of the same pipeline has completed.
In other words, if this pipeline is triggered by multiple PRs, I want only one instance of the pipeline to run at a time. The next one should wait until the previous run has completed.
Is there a way to achieve this?

You can enable the "Batch changes while a build is in progress" option to execute one pipeline run at a time.
If your question was about Release Pipelines, you can achieve this by specifying the number of executions in the "Deployment queue settings" under Pre-deployment conditions for the particular stage.

If you are using YAML you should be able to use the following trigger:
trigger:
  batch: boolean # batch changes if true; start a new build for every push if false (default)
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#triggers
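As a concrete sketch (assuming the default branch is main; adjust the branch filter to your repository):

```yaml
trigger:
  batch: true          # queue at most one new run while a run is in progress
  branches:
    include:
      - main           # assumed default branch; change as needed
```

Note that batch applies to CI triggers: in the PR-merge scenario above, it is the merge commit landing on the target branch that triggers the run, so back-to-back merges get batched into a single queued run.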

Related

How to retrigger a successful gitlab child pipeline?

Is there a way of re-triggering a successful child pipeline in GitLab? I don't see any retry button on the trigger job, which we otherwise see on a normal pipeline job. Is there a way or workaround to get an option to do so?
I went through the GitLab docs; they only talk about retrying failed jobs in a child pipeline.
That is not supported yet (as of Q4 2022).
(And retry: is indeed only for failed jobs.)
It is requested by issue 29456:
Ability to rerun a successful pipeline via "Retry" button
Not only failed pipelines sometimes need a rerun but also successful ones:
If your tests are unreliable and you are sceptical that the test success is repeatable
If your jobs depend on outside factors
If your job depends on some predefined CI variable which can change without a code change
So in general, a pipeline should show the retry button even in case of success. All jobs should then be retried.
The currently suggested workaround of CI / CD -> Pipelines -> Run Pipeline does not always work, especially not for merge request pipelines.
In my case, I have all jobs defined as only: merge_requests and "Run Pipeline" responds with the error "No stages / jobs for this pipeline"
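One possible workaround (a sketch, not an official fix) is to replace only: merge_requests with rules: that also admit manually started pipelines, so that the "Run Pipeline" button produces a non-empty pipeline. The job name and script here are placeholders:

```yaml
test_job:
  script:
    - echo "running tests"
  rules:
    # run in merge request pipelines, as before
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # also allow pipelines started manually from the web UI
    - if: $CI_PIPELINE_SOURCE == "web"
```

Be aware that a web-triggered pipeline runs against a branch rather than a merge request, so merge-request-only variables such as CI_MERGE_REQUEST_IID will be unset in that case.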

Is there a way to test Azure pipeline changes on top of stages that have already run?

Is there a way to test pipeline changes on a stage that has finished running? It would be super useful when a stage has failed and instead of running the entire pipeline to test your changes, you could trigger a rerun of the stage with the updated yml file.
By definition, re-running a stage means running it the same way as it was run initially. When you think about it, this is exactly the desired behaviour.
For pipeline development purposes, I would recommend cloning the pipeline and removing all unnecessary steps. Then work in this minimal/debugging pipeline for faster feedback.

Azure Data Factory: skip activity in debug mode

Basic question
How can I skip an activity within a pipeline in Azure Data Factory if the pipeline runs in debug mode?
Background information
I have a complex pipeline setup (one master pipeline that triggers multiple sub-pipelines) which also sends failure messages if some activities fail. When testing things in debug mode, the failure messages are also triggered. This should not happen, to avoid spam.
Current approach
I could use the system variable @pipeline().TriggerType, which has the value Manual, and pass that information as a parameter from the master pipeline through every single sub-pipeline, checking the trigger type before sending the message (if TriggerType != Manual). But this would mean a lot of changes and more things to consider when creating new pipelines, because that parameter would always need to be there.
Does anyone have a better idea? Any idea how I can check in a sub-pipeline if the whole process was initially triggered via a scheduled trigger or as a debug run?
Currently we can't disable / skip an activity in ADF pipeline during its run
Please submit the feedback for this feature here:
https://feedback.azure.com/forums/270578-data-factory/suggestions/33111574-ability-to-disable-an-activity
You can follow one of these for now:
Manually delete the activity and click Debug for execution, but don't publish it
Clone the original pipeline, delete the activities that you need to skip, and save it with a DEBUG suffix so it is easy to identify; then run that pipeline whenever you need to debug
Perform the check using a parameter, as you mentioned
Thanks
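For the parameter-based approach, the gate can be a single expression, for example on an If Condition activity wrapping the activity that sends the failure message (a sketch; wiring the message activity into the If Condition is assumed):

```
@not(equals(pipeline().TriggerType, 'Manual'))
```

This evaluates to false for debug and manually triggered runs, so the failure-message branch is skipped, while scheduled runs (TriggerType = ScheduleTrigger) still send the message.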

How to re-trigger downstream pipelines via upstream pipeline

I have an upstream and a downstream pipeline. Unfortunately, I cannot easily re-trigger a downstream pipeline run via the upstream pipeline:
Expected situation:
Actual situation:
The upstream pipeline is responsible for creating the build, and then triggers the downstream pipeline for deployment or undeployment activities. Upstream sets a variable called ACTION in order to inform the downstream pipeline which stage is expected to run.
My goal is to be able to re-run deployments and undeployments as often as I want.
How can I re-run the downstream pipeline with either the "deploy" or the "undeploy" parameter set, triggered by the upstream pipeline?
You cannot "retry" bridge jobs from the upstream pipeline, unfortunately. This is a known limitation in GitLab.
The only way to work around this is to manually retry the pipeline/jobs from the downstream pipeline's interface. However, this only covers cases like the failure of a downstream job. Retrying can't change the pipeline structure itself (such as which jobs are included) because rules: are only evaluated at pipeline creation time. Even if you change variable settings and retry a pipeline, you won't see any difference due to rules:.
My advice would be to not use this - if: $ACTION rule. Instead, always include both jobs in the pipeline and set the undeploy job as the on_stop parameter for the deployment job.
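A sketch of that on_stop pattern (the deploy.sh/undeploy.sh scripts and the environment name are assumptions; adapt to your project):

```yaml
deploy:
  stage: deploy
  script:
    - ./deploy.sh            # hypothetical deployment script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: undeploy        # links the stop job to this environment

undeploy:
  stage: deploy
  script:
    - ./undeploy.sh          # hypothetical undeployment script
  when: manual               # run on demand, as often as needed
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```

With this layout both jobs always exist in the downstream pipeline, so no ACTION variable or rules: re-evaluation is needed; undeploy can be triggered any number of times from the environment page or the pipeline view.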

Setting for running pipelines in sequence - Azure Devops

Is there a parameter or a setting for running pipelines in sequence in Azure DevOps?
I currently have a single dev pipeline in my azure DevOps project. I use this for infrastructure because I build, test, and deploy using scripts in multiple stages in my pipeline.
My issue is that my stages are sequential, but my pipelines are not. If I run my pipeline multiple times back-to-back, agents will be assigned to every run and my deploy scripts will therefore run in parallel.
This is an issue if our developers commit close together because each commit kicks off a pipeline run.
You can reduce the number of parallel jobs to 1 in your project settings.
I swear there was a setting on the pipeline as well, but I can't find it. You could also do an API call as part of your build/release to pause and start the pipeline: pause as the first step and start as the last step. This will ensure the active pipeline is the only one running.
There is a new update to Azure DevOps that will allow sequential pipeline runs. All you need to do is add a lockBehavior parameter to your YAML.
https://learn.microsoft.com/en-us/azure/devops/release-notes/2021/sprint-190-update
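A sketch of what this can look like (the environment name, pool, and deploy script are assumptions; lockBehavior also relies on an "Exclusive lock" check being configured on the environment under Approvals and checks):

```yaml
lockBehavior: sequential     # queue runs one after another instead of cancelling older ones
stages:
- stage: Deploy
  jobs:
  - deployment: DeployInfra
    environment: dev         # assumed environment with an Exclusive lock check
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh   # hypothetical deployment script
```

With sequential, each run waits for the lock held by the previous run; the alternative value runLatest cancels queued runs and only deploys the most recent one.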
Bevan's solution can achieve what you want, but it has a disadvantage: you need to change the parallel-job number manually back and forth whenever you sometimes need parallel jobs and other times need sequential runs. This is a little inconvenient.
As of now, there is no direct configuration to prevent a pipeline from running. But there is a workaround: use a demand to limit the agents used. You can set the demand in the pipeline.
After setting it, you no longer need to change the parallel number back and forth. Just define the demand to limit the agents used; when the pipeline runs, it will pick up the matching agent to execute it.
However, this still has a disadvantage: it also limits job parallelism.
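As a sketch, a demand that pins the pipeline to a single self-hosted agent might look like this (the pool and agent names are assumptions; match them to your own setup):

```yaml
pool:
  name: Default                        # assumed self-hosted agent pool
  demands:
  - Agent.Name -equals MyBuildAgent    # only this one agent can run the jobs
```

Because only one agent satisfies the demand, concurrent runs queue behind each other, which is exactly what makes this a serialization workaround and also why it limits parallelism.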
I think this feature should be added to Azure DevOps so users can have a better experience. You can raise the suggestion in the official Suggestion forum and vote for it. The product group and PMs will review it and consider taking it into the next quarter's roadmap.