Limiting CI queue size to 1 to make Bitbucket's pipeline jobs blocking - bitbucket-pipelines

I'm having a hard time searching for this, as I get a lot of results about the parallelism of the steps inside the pipeline itself, which is not my problem (I'm concerned about parallelism one level above the pipeline steps). I've looked through Google/SO and the Atlassian documentation, but I'm probably searching under the wrong term.
I have two steps in my pipeline: build the HTML files and deploy them. The deployment just does a git push of the final HTML files to the final repository. This works very well, but my concern is what happens if I accidentally make multiple commits and pushes in quick succession. Depending on their content, the pipelines might finish in a different order than they started, producing an out-of-order deployment, which I want to avoid.
There might be more robust ways of deploying, but because this is a fairly simple project, I don't want to overcomplicate it and would like to keep the deployment as it is, and just limit my CI to running one job/task at a time; if I push faster than it can build, new jobs should simply block/wait for the previous one to finish.
In essence, I want my CI queue size to be just one job, so that incoming jobs triggered by commits are blocking instead of asynchronous. Is there some way or workaround to achieve something like that?
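As far as I know, Bitbucket Pipelines doesn't expose a queue-size setting, but its deployment environments are documented to prevent concurrent deployments to the same environment and to stop stale, out-of-order deployments. A blocked step halts rather than queues, so it isn't exactly a queue of size one, but it may address the underlying out-of-order concern. A minimal sketch, where the environment name and both scripts are hypothetical placeholders:

```yaml
# bitbucket-pipelines.yml -- a minimal sketch; the environment name and
# scripts stand in for the real build and deploy commands.
pipelines:
  default:
    - step:
        name: Build HTML
        script:
          - ./build.sh            # hypothetical build command
        artifacts:
          - public/**
    - step:
        name: Deploy
        deployment: production    # concurrency-controlled per environment
        script:
          - ./deploy.sh           # hypothetical script doing the final git push
```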

Related

How to optimise the issues with the luigi pipeline?

I have a pipeline built with luigi. One luigi task downloads data from an external service, based on a txt file describing what to fetch. As there are over 3,000 requests to the external service, the pipeline often fails because of how long the task takes to finish.
What could be done to improve the scalability of the pipeline and make sure it doesn't fail? Threading? Multiprocessing? What would be an optimal solution for big tasks that take a long time, so the pipeline doesn't fail on them?
I didn't provide a code example because I need a general approach, not one based on a specific example.

How to redeploy partial tasks without editing and disabling tasks in azure release pipeline

The deployment of the company product has several tasks to finish. For example,
Task 1 copies some build files to server A.
Task 2 copies some build files to server B.
Either task could fail, and we need to redeploy only the failed one, because each task takes a long time to finish.
I can split the tasks into different stages but we have a long tasks list and if we include staging and production it will be difficult to manage.
So my questions are:
Is there an easy way to redeploy partial tasks without editing and disabling the tasks in the stage?
Or is there a better way to organize multiple stages into one group, like 'Staging' or 'Production', so I can get a better visualization of the release stages?
Thanks.
Update:
Thanks @jessehouwing.
Found there is an option when I click redeploy. See screenshot below.
You can group each stage into one or more jobs. Jobs can easily be retried without having to rerun the whole stage. You do get the overhead of each job fetching sources or downloading artifacts, and to use the output of a previous job you need to publish the result. One advantage is that jobs can run in parallel, so your overall duration may actually be shorter that way.
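Expressed in YAML, a minimal sketch of that job-per-server layout; the stage, job names, and scripts here are hypothetical:

```yaml
# A minimal sketch; stage, job names, and scripts are placeholders.
stages:
- stage: Staging
  jobs:
  - job: CopyToServerA            # retryable on its own if only it fails
    steps:
    - script: ./copy-to-server-a.sh
  - job: CopyToServerB            # runs in parallel with CopyToServerA
    steps:
    - script: ./copy-to-server-b.sh
```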

Azure Yaml Schema Batch Trigger

Can anyone explain what batch in the Azure YAML schema trigger does?
The only explanation on the MSFT website is:
"batch changes if true; start a new build for every push if false (default)"
and this isn't really clear to me.
Batch changes or Batch trigger actually means batching your CI runs.
If you have many team members uploading changes often, you may want to reduce the number of runs you start. If you set batch to true, when a pipeline is running, the system waits until the run is completed, then starts another run with all changes that have not yet been built.
To clarify this example, let us say that a push A to master caused the above pipeline to run. While that pipeline is running, additional pushes B and C occur into the repository. These updates do not start new independent runs immediately. But after the first run is completed, all pushes until that point of time are batched together and a new run is started.
My interpretation of the MS documentation is that the batch boolean addresses frequent pushes to the same trigger branch or set of branches (and possibly tags): if the build pipeline is already running, any additional changes pushed to the listed branches are batched together and queued behind the current run. That does mean those subsequent pushes all end up in the same single follow-up run, which is a little strange, but since that's how Microsoft intended it to work, it should be fine.
Basically, for repos with demanding pipeline runs and a high potential for overlapping pushes, batching is great.
For reference, here is the documentation link: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/azure-repos-git?view=azure-devops&tabs=yaml#batching-ci-runs
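For concreteness, a trigger along the lines of the linked documentation's example:

```yaml
# With batch: true, pushes B and C that arrive while run A is in progress
# do not start runs of their own; one follow-up run covers both afterwards.
trigger:
  batch: true
  branches:
    include:
    - master
```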

Azure Pipelines: How to block pipeline A if pipeline B is running

I have two pipelines (also called "build definitions") in Azure Pipelines: one executes system tests and one executes performance tests. Both use the same test environment. I have to make sure that the performance pipeline is not triggered while the system test pipeline is running, and vice versa.
What I've tried so far: I can use the Azure DevOps REST API to check whether a build is running for a certain definition. So it would be possible to add a job that executes a script before the actual pipeline runs; the script polls the REST API for the other pipeline's build status every second and times out after e.g. 1 hour.
However, this seems quite hacky to me. Is there a better way to block a build pipeline while another one is running?
If your project is private, the Microsoft-hosted CI/CD parallel job limit is one free parallel job that can run for up to 60 minutes each time, until you've used 1,800 minutes (30 hours) per month.
The self-hosted CI/CD parallel job limit is one self-hosted parallel job. Additionally, for each active Visual Studio Enterprise subscriber who is a member of your organization, you get one additional self-hosted parallel job.
And currently, there is no setting to control the parallel job limit per agent pool. But there is a similar problem on the community forum, and an answer has been marked as accepted; I recommend checking whether that answer helps you. Here is the link.
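A common workaround, not mentioned in the answer above but consistent with it: point both definitions at a self-hosted pool that contains exactly one agent, so their runs queue behind each other and can never overlap. A minimal sketch, assuming a hypothetical pool named TestEnvironmentPool:

```yaml
# Used in both pipeline definitions: with only one agent in the pool,
# whichever run starts second simply waits in the queue.
pool:
  name: TestEnvironmentPool   # hypothetical self-hosted pool with one agent
```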

Multiple instances of continuous Webjob on single VM in Azure

I have a continuous Webjob running on my Azure Website. It is responsible for doing some work after retrieving items from a QueueTrigger. I am attempting to increase the rate in which the items are processed off the Queue. As I scale out my App Service Plan, the processing rate increases as expected.
My concern is that it seems wasteful to pay for additional VMs just to run additional instances of my Webjob. I am looking for options/best practices to run multiple instances of the same Webjob on a single server.
I've tried starting multiple JobHosts in individual threads within Main(), but either that doesn't work or I was doing something wrong... the Webjob would fail to run due to what looks like each thread trying to access 'WebJobSdk.marker'. My current solution is to publish my Webjob multiple times, each time modifying 'webJobName' slightly in 'webjob-publish-settings.json' so that the same project is considered a different Webjob at publish time. This works great so far, except that it creates a lot of additional work each time I need to make any update.
Ultimately, I'm looking for some advice on what the recommended way of accomplishing this would be. Ideally, I would like to get the multiple instances running via code, and only have to publish once when I need to update the code.
Any thoughts out there?
You can use the JobHostConfiguration.QueuesConfiguration.BatchSize and NewBatchThreshold settings to control the concurrency level of your queue processing. The latter setting, NewBatchThreshold, is new in the current in-progress beta1 release; by enabling "prerelease" packages in your NuGet package manager, you'll see the new release if you'd like to try it. Raising the NewBatchThreshold setting increases the concurrency level: for example, setting it to 100 means that once the number of currently running queue functions drops below 100, a new batch of messages is fetched for concurrent processing.
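A minimal sketch of wiring this up in a WebJobs SDK 1.x host; note that in the released 1.x surface these settings are exposed via config.Queues, and the values below are placeholders:

```csharp
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // How many queue messages are fetched per batch.
        config.Queues.BatchSize = 16;

        // Once the number of in-flight queue functions drops below this,
        // the next batch is fetched, raising the overall concurrency.
        config.Queues.NewBatchThreshold = 100;

        // One host per process; see the marker-file note below.
        var host = new JobHost(config);
        host.RunAndBlock();
    }
}
```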
The marker file bug was fixed in this commit a while back, and the fix is also part of the current in-progress v1.1.0 release.
