I have two pipelines (also called "build definitions") in Azure Pipelines: one executes system tests and the other executes performance tests. Both use the same test environment. I have to make sure that the performance pipeline is not triggered while the system test pipeline is running, and vice versa.
What I've tried so far: I can access the Azure DevOps REST API to check whether a build is running for a certain definition. So it would be possible for me to implement a job that executes a script before the actual pipeline runs. The script would poll the REST API every second for the build status of the other pipeline and time out after e.g. 1 hour.
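For illustration, a minimal YAML sketch of that gate job. The definition ID 42 is a placeholder for the other pipeline, the 10-second poll interval is illustrative, and python3 is assumed to be on the agent:

```yaml
# Hypothetical gate job: block until the other pipeline has no runs in progress.
jobs:
- job: WaitForOtherPipeline
  timeoutInMinutes: 60            # give up after an hour
  steps:
  - bash: |
      # Poll the Builds REST API until no build of definition 42 is in progress.
      while true; do
        count=$(curl -s -H "Authorization: Bearer ${SYSTEM_ACCESSTOKEN}" \
          "${SYSTEM_COLLECTIONURI}${SYSTEM_TEAMPROJECT}/_apis/build/builds?definitions=42&statusFilter=inProgress&api-version=6.0" \
          | python3 -c "import json,sys; print(json.load(sys.stdin)['count'])")
        [ "$count" -eq 0 ] && break
        sleep 10
      done
    displayName: Wait until the other pipeline is idle
    env:
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)
```

Note that polling only narrows the race window: two runs that start at the same moment can still both see the other pipeline as idle.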
However, this seems quite hacky to me. Is there a better way to block a build pipeline while another one is running?
If your project is private, the Microsoft-hosted CI/CD parallel job limit is one free parallel job that can run for up to 60 minutes each time, until you've used 1,800 minutes (30 hours) per month.
The self-hosted CI/CD parallel job limit is one self-hosted parallel job. Additionally, for each active Visual Studio Enterprise subscriber who is a member of your organization, you get one additional self-hosted parallel job.
Currently there is no setting to control a per-agent-pool parallel job limit. However, a similar problem has been raised on the community forum, and an answer there has been marked as accepted. I recommend checking whether that answer helps you. Here is the link.
I have a long-running Java/Gradle process and an Azure Pipelines job to run it.
It's perfectly fine and expected for the process to run for several days, potentially over a week. The Azure Pipelines job is run on a self-hosted agent (to rule out any timeout issues) and the timeout is set to 0, which in theory means that the job can run forever.
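For reference, that setting looks like this in YAML (job name and step are illustrative); per the docs, timeoutInMinutes: 0 on a self-hosted agent means the job has no upper time limit:

```yaml
jobs:
- job: LongRunningGradle          # illustrative name
  timeoutInMinutes: 0             # on self-hosted agents: no time limit
  steps:
  - script: ./gradlew run        # stand-in for the actual long-running process
```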
Sometimes the Azure Pipelines job fails after a day or two with an error message that says "We stopped hearing from agent". Even when this happens, the job may still be running, as evident when SSH-ing into the machine that hosts the agent.
When I discuss investigating these failures with DevOps, I often hear that Azure Pipelines is a CI tool that is not designed for long-running jobs. Is there evidence to support this claim? Does Microsoft commit to only support running jobs within a certain duration limit?
Based on the troubleshooting guide and timeout documentation page referenced above, there's a duration limit applicable to Microsoft-hosted agents, but I fail to see anything similar for self-hosted agents.
Agree with @Daniel Mann.
It's not common to run long-running jobs, but as per the docs, it should be supported.
"We stopped hearing from agent" can be caused by a network problem on the agent, or by an agent issue such as high CPU, storage, or RAM usage. You can check the agent diagnostic logs (the _diag folder in the agent installation directory) to troubleshoot.
We are currently in the process of migrating our CI/CD pipelines to GitLab. For testing purposes we are using a runner deployed on OpenShift 4 via the GitLab Runner operator. Using the default configuration for now (we are still at a very early stage), we are able to spin up runners normally and without issues; however, we are noticing long delays between jobs. For example, a build job in which the actual build takes about 2 minutes needs almost 8 minutes in total to finish. This happens even after the job has successfully finished (as evident from the logs). This in turn means that there are long delays between the jobs of a single pipeline.
I took a look at the configuration properties of the runner but I am unable to figure out whether we have something misconfigured. For reference we are using GitLab CE version 13.12.15 and the Runner in question is running version 15.0.0.
Does anyone know how to mitigate this problem?
I'm quite new to Azure DevOps, so sorry if these are obvious questions.
I have a release pipeline with 3 stages like this:
The first stage runs on Agent A on Machine A; stages 2 and 3 run on Agent B on Machine B.
Once stage 1 of the previous pipeline run is finished, stage 1 of the next scheduled pipeline run starts.
Is there a way to prevent this? I would like the next scheduled pipeline run to start only after all stages of the previous one have finished.
You can add dependencies. If you are using YAML:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml
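For example, a minimal sketch of sequential stages in YAML (stage and job names are illustrative):

```yaml
stages:
- stage: Stage1
  jobs:
  - job: JobOnAgentA
    steps:
    - script: echo "runs on machine A"
- stage: Stage2
  dependsOn: Stage1               # Stage2 starts only after Stage1 completes
  jobs:
  - job: JobOnAgentB
    steps:
    - script: echo "runs on machine B"
```

Note that dependsOn only orders stages within a single run; it does not by itself stop the next scheduled run from starting.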
If you are using classic pipelines, you can use a "Pre-deployment condition":
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/define-multistage-release-process?view=azure-devops
You can also use an approval gate if you want manual intervention:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/approvals/?view=azure-devops
According to Specify queuing policies, this is not currently possible except by using manual intervention.
YAML pipelines don't support queuing policies. Each run of a pipeline is independent from and unaware of other runs. In other words, your two successive commits may trigger two pipelines, and both of them will execute the same sequence of stages without waiting for each other. While we work to bring queuing policies to YAML pipelines, we recommend that you use manual approvals in order to manually sequence and control the order of execution if this is of importance.
You can use Post-deployment approvals or Pre-deployment approvals on the stages for this.
The deployment of the company product has several tasks to finish. For example:
task 1 copies some build files to server A
task 2 copies some build files to server B
Task 1 or 2 could fail, and we need to redeploy only the failed task, because each task takes a long time to finish.
I can split the tasks into different stages, but we have a long list of tasks, and if we include staging and production it will be difficult to manage.
So my questions are:
Is there an easy way to redeploy individual tasks without editing and disabling the tasks in the stage?
Or is there a better way to organize multiple stages into one group, like 'Staging' or 'Production', so I can get a better visualization of the release stages?
Thanks.
Update:
Thanks @jessehouwing. I found there is an option for this when I click redeploy.
You can give each stage one or more jobs. Jobs can easily be retried without having to run the whole stage. You will get the overhead of each job fetching sources or downloading artifacts, and to use the output of a previous job you need to publish the result. One advantage is that jobs can run in parallel, so your overall duration may actually be shorter that way.
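As a sketch, assuming illustrative stage and job names, that layout could look like this:

```yaml
stages:
- stage: Staging
  jobs:
  - job: CopyToServerA            # can be retried on its own
    steps:
    - script: echo "copy build files to server A"
  - job: CopyToServerB            # runs in parallel with CopyToServerA by default
    steps:
    - script: echo "copy build files to server B"
```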
I use Azure DevOps to schedule jobs on Azure Batch AI. Launching jobs works great; I have Python code that does the same.
What I am trying to achieve is that all jobs in the Batch AI experiment should be terminated when the build is cancelled. Currently, cancelling the build doesn't affect the run status of the Batch AI jobs.
Hence, is there a sort of "OnCancel" event in the build to hook into, in order to run a command (which will be Python code that terminates all jobs)?
There is no need to look for an event, as a pipeline task can be configured to execute when the build was cancelled.
Note: as far as I am aware, this applies to any task of the pipeline.
Specifically, the Run this task setting, under Control Options, lets you dictate when and under what conditions a task runs. For example, a task can be set to execute even if previous tasks fail, and even if the build was canceled.
In your case, I would make this the last task, and have it perform the cleanup that you want regardless of the outcome of the build.
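In YAML pipelines, the equivalent is a custom condition on the cleanup step; a minimal sketch, where launch_jobs.py and terminate_jobs.py are hypothetical scripts:

```yaml
steps:
- script: python launch_jobs.py       # hypothetical step that submits the Batch AI jobs
- script: python terminate_jobs.py    # hypothetical cleanup script
  displayName: Terminate Batch AI jobs
  condition: always()                 # run even if the build is canceled or a previous step failed
```

Use canceled() instead of always() if the cleanup should run only when the build was actually canceled.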