Pipelines - Average build time - bitbucket-pipelines

I wonder if there is a way to get the average build time from Bitbucket Pipelines (via the API?).
We have several projects in parallel, and build times can range from 20 seconds to 4-5 minutes depending on what each project needs.
Since this build process reports to Slack, I would like to open the report with something like "The average build time for this project is 3.2 minutes", so people know when to expect the end or when to worry about a failure.
Does anyone have a lead on meta information about Bitbucket Pipelines that could be accessed from within the pipeline itself?
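For context, what I had in mind is something along these lines: a rough sketch that computes the average from the Pipelines REST API, assuming the paginated /2.0/repositories/{workspace}/{repo_slug}/pipelines/ endpoint, its build_seconds_used field, and sorting by -created_on; the workspace, repo slug, and app-password credentials below are placeholders.

```python
# Sketch: average Bitbucket Pipelines build time from the REST API.
# Assumptions: the paginated /pipelines/ endpoint, the build_seconds_used field on
# each pipeline object, and state.name == "COMPLETED" for finished runs.
# WORKSPACE, REPO_SLUG and the app-password credentials are placeholders.
import requests

WORKSPACE = "my-workspace"                  # placeholder
REPO_SLUG = "my-repo"                       # placeholder
AUTH = ("bitbucket-user", "app-password")   # placeholder credentials

url = f"https://api.bitbucket.org/2.0/repositories/{WORKSPACE}/{REPO_SLUG}/pipelines/"
resp = requests.get(url, params={"sort": "-created_on", "pagelen": 30}, auth=AUTH)
resp.raise_for_status()

# Keep only completed runs and average their recorded build seconds.
durations = [
    p.get("build_seconds_used", 0)
    for p in resp.json().get("values", [])
    if p.get("state", {}).get("name") == "COMPLETED"
]

if durations:
    avg_minutes = sum(durations) / len(durations) / 60
    print(f"The average build time for this project is {avg_minutes:.1f} minutes")
```

Something like this could run as the first script of the pipeline and feed the figure into the Slack message.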

Related

GitLab runners hanging after job is finished

We are currently in the process of migrating our CI/CD pipelines to GitLab. For testing purposes we are using a runner deployed on OpenShift 4 via the GitLab Runner operator. Using the default configuration for now (we are still at a very early stage), we are able to spin up runners normally and without issues; however, we are noticing long delays between jobs. For example, a build job in which the actual build takes about 2 minutes needs almost 8 minutes in total to finish as a job. This happens even after the job has successfully finished (as evidenced by the logs). This in turn means that there are long delays between the jobs of a single pipeline.
I took a look at the configuration properties of the runner, but I am unable to figure out whether we have something misconfigured. For reference, we are using GitLab CE version 13.12.15 and the runner in question is running version 15.0.0.
Does anyone know how to mitigate this problem?

Limiting CI queue size to 1 to make Bitbucket's pipeline jobs blocking

I have a hard time searching for this, as I'm getting a lot of results about the parallelism of the steps inside the pipeline itself, which is not my problem (I'm concerned about parallelism one level above the pipeline steps). I was looking through Google/SO and the Atlassian documentation, but I'm probably searching under the wrong term.
I have two steps in my pipeline: build the HTML files and deploy them. The deployment just does a git push of the final HTML files to the final repository. This works very well, but my concern is what happens if I accidentally make multiple commits and pushes quickly one after the other. Depending on their content, the resulting builds might finish in a different order than they started, producing an out-of-order deployment, which I want to avoid.
There might be more robust ways of deploying, but because this is a fairly simple project I don't want to overcomplicate it. I would like to keep the deployment as it is and just limit my CI to running one job/task at a time; if I push faster than it can build, the new job should simply block/wait for the previous one to finish.
In essence, I want my CI queue size to be just one job, so that incoming jobs triggered by commits are blocking instead of asynchronous. Is there some way or workaround to achieve something like that and make the jobs blocking?
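The kind of workaround I could imagine (though I'd prefer a built-in setting) is a guard script at the start of the deploy step that waits until all earlier pipelines have finished. A rough sketch, assuming the /2.0/.../pipelines/ endpoint, the PENDING/IN_PROGRESS state names, the build_number field on pipeline objects, and the BITBUCKET_BUILD_NUMBER variable set for the current run; workspace, repo slug, and credentials are placeholders.

```python
# Sketch of a "wait for earlier runs" guard to run at the start of the deploy step.
# Assumptions: the /pipelines/ REST endpoint, state names PENDING/IN_PROGRESS,
# a build_number field per pipeline, and the BITBUCKET_BUILD_NUMBER variable.
import os
import time
import requests

WORKSPACE = "my-workspace"                  # placeholder
REPO_SLUG = "my-repo"                       # placeholder
AUTH = ("bitbucket-user", "app-password")   # placeholder credentials
URL = f"https://api.bitbucket.org/2.0/repositories/{WORKSPACE}/{REPO_SLUG}/pipelines/"

def earlier_runs_active() -> bool:
    """True while any pipeline started before the current one is still running."""
    resp = requests.get(URL, params={"sort": "-created_on", "pagelen": 20}, auth=AUTH)
    resp.raise_for_status()
    current_no = int(os.environ.get("BITBUCKET_BUILD_NUMBER", "0"))
    return any(
        p.get("state", {}).get("name") in ("PENDING", "IN_PROGRESS")
        and p.get("build_number", 0) < current_no
        for p in resp.json().get("values", [])
    )

# Block until earlier pipelines have finished, with a crude overall timeout.
deadline = time.time() + 30 * 60
while earlier_runs_active():
    if time.time() > deadline:
        raise SystemExit("Timed out waiting for earlier pipelines to finish")
    time.sleep(15)
print("No earlier pipelines running, safe to deploy")
```

Note that this burns build minutes while waiting and is bounded by the step's maximum run time, so it is a stopgap rather than a real queue.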

Azure DevOps build using Docker becoming progressively slower

I'm building multiple projects using a single Docker build, generating an image and pushing it to AWS ECR. I've recently noticed that builds that used to take 6-7 minutes are now taking on the order of 25 minutes. The portion of the Docker build that checks out git repos and does the project builds takes ~5 minutes, but what is really slow are the individual Docker build commands such as COPY, ARG, RUN, ENV, LABEL, etc. Each one takes a very long time, resulting in an additional 18 minutes or so. The timings vary quite a bit, even though the build remains generally the same.
When I first noticed this degradation, Azure was reporting that their pipelines were impacted by "abuse", which I took to mean a DDoS against the platform (early April 2021). That issue has since apparently been resolved, but the slow performance continues.
Are Azure DevOps builds assigned random agents? Should we be running some kind of cleanup process such as docker system prune etc?
Based on your description:
The timings vary quite a bit, even though the build remains generally the same.
This issue is most likely a performance problem with the hosted agent.
Based on how Azure DevOps assigns agents, every time you run the pipeline with a hosted agent the system randomly matches a new qualified agent. Because Azure DevOps builds are assigned a fresh, random agent each time, you do not need to run any kind of cleanup process.
To verify this, you could set up a private (self-hosted) agent and check whether the build time differs much from run to run (the first build may take a bit longer because there is no local cache yet).
By the way, if you still want to determine whether a decline in hosted-agent performance is causing your problem, you should contact the product team directly; they can check the region where your organization is located to determine whether there is degradation there.
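If you want to quantify the slowdown before (or alongside) trying a private agent, you could also pull recent build durations from the Azure DevOps REST API. A rough sketch, assuming the _apis/build/builds endpoint with PAT authentication; the organization, project, definition ID, and token are placeholders.

```python
# Sketch: list recent completed build durations for one pipeline definition,
# to check whether they are actually trending upward.
# ORG, PROJECT, DEFINITION_ID and the personal access token (PAT) are placeholders.
from datetime import datetime
import requests

ORG = "my-org"                 # placeholder
PROJECT = "my-project"         # placeholder
DEFINITION_ID = 42             # placeholder build definition id
PAT = "personal-access-token"  # placeholder

def parse_ts(ts: str) -> datetime:
    # Azure DevOps timestamps look like 2021-04-01T12:34:56.1234567Z;
    # keep only the whole-second part for simple parsing.
    return datetime.strptime(ts.split(".")[0].rstrip("Z"), "%Y-%m-%dT%H:%M:%S")

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds"
params = {
    "definitions": DEFINITION_ID,
    "statusFilter": "completed",
    "$top": 20,
    "api-version": "6.0",
}
resp = requests.get(url, params=params, auth=("", PAT))
resp.raise_for_status()

for build in resp.json().get("value", []):
    minutes = (parse_ts(build["finishTime"]) - parse_ts(build["startTime"])).total_seconds() / 60
    print(f"{build['buildNumber']}: {minutes:.1f} min")
```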

Gitlab: How to create a badge for the percent of jobs passing in latest pipeline

I have a GitLab pipeline running regression tests on one of our projects. The pipeline runs a few jobs in the same stage. We usually have a few failures each night, and I want a badge that shows the percentage of jobs that are passing. It seems like coverage reports and their badges are designed more for a single job, whereas I want a coverage-style report across multiple jobs based on how many of them pass.
How would I go about this? Is a coverage report the right path to investigate?
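Computing the number itself seems straightforward; it's the badge part I'm unsure about. A rough sketch of the calculation via the GitLab API, assuming the /projects/:id/pipelines and /projects/:id/pipelines/:id/jobs endpoints and a job status of "success" for passing jobs; the GitLab URL, project ID, and token are placeholders.

```python
# Sketch: percentage of passing jobs in the latest pipeline, via the GitLab API.
# GITLAB_URL, PROJECT_ID and the private token are placeholders.
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder
PROJECT_ID = 123                            # placeholder
HEADERS = {"PRIVATE-TOKEN": "api-token"}    # placeholder token

api = f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}"

# Most recent pipeline (the pipelines list is returned newest first by default).
pipelines = requests.get(f"{api}/pipelines", headers=HEADERS,
                         params={"per_page": 1}).json()
if not pipelines:
    raise SystemExit("No pipelines found")
pipeline_id = pipelines[0]["id"]

# All jobs of that pipeline, and how many of them succeeded.
jobs = requests.get(f"{api}/pipelines/{pipeline_id}/jobs", headers=HEADERS,
                    params={"per_page": 100}).json()
passed = sum(1 for j in jobs if j["status"] == "success")
percent = 100.0 * passed / len(jobs) if jobs else 0.0

print(f"jobs passing: {percent:.0f}%")
```

If the badge has to be the built-in coverage badge, one idea would be to print this percentage from a dedicated job and capture it with that job's coverage regex, though I don't know whether that is the intended use of coverage reports.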

Azure Pipelines: How to block pipeline A if pipeline B is running

I have two pipelines (also called "build definitions") in Azure Pipelines, one executing system tests and one executing performance tests. Both use the same test environment. I have to make sure that the performance pipeline is not triggered while the system test pipeline is running, and vice versa.
What I've tried so far: I can use the Azure DevOps REST API to check whether a build is running for a certain definition. So it would be possible to add a job that executes a script before the actual pipeline runs; the script would poll the REST API for the build status of the other pipeline (say, once a second) and time out after, e.g., 1 hour.
However, this seems quite hacky to me. Is there a better way to block a build pipeline while another one is running?
If your project is private, the Microsoft-hosted CI/CD parallel job limit is one free parallel job that can run for up to 60 minutes each time, until you've used 1,800 minutes (30 hours) per month.
The self-hosted CI/CD parallel job limit is one self-hosted parallel job. Additionally, for each active Visual Studio Enterprise subscriber who is a member of your organization, you get one additional self-hosted parallel job.
Currently there is no setting to control the parallel job limit per agent pool. However, there is a similar problem on the community forum, and an answer there has been accepted; I recommend checking whether that answer helps you. Here is the link.
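For completeness, the polling gate described in the question could look roughly like this: a sketch assuming the _apis/build/builds endpoint, the inProgress/notStarted status filters, and PAT authentication; the organization, project, and the other pipeline's definition ID are placeholders.

```python
# Sketch of the gate script from the question: poll the other build definition
# and only continue when it has no queued or running builds.
# ORG, PROJECT, OTHER_DEFINITION_ID and the PAT are placeholders.
import time
import requests

ORG = "my-org"                 # placeholder
PROJECT = "my-project"         # placeholder
OTHER_DEFINITION_ID = 17       # placeholder: the other pipeline's definition id
PAT = "personal-access-token"  # placeholder

URL = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds"

def other_pipeline_busy() -> bool:
    """True if the other definition has any running or queued builds."""
    for status in ("inProgress", "notStarted"):
        resp = requests.get(URL, params={
            "definitions": OTHER_DEFINITION_ID,
            "statusFilter": status,
            "api-version": "6.0",
        }, auth=("", PAT))
        resp.raise_for_status()
        if resp.json().get("count", 0) > 0:
            return True
    return False

deadline = time.time() + 60 * 60   # give up after one hour
while other_pipeline_busy():
    if time.time() > deadline:
        raise SystemExit("Timed out waiting for the other pipeline")
    time.sleep(30)                 # poll every 30 seconds rather than every second
print("Other pipeline idle, continuing")
```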
