I have a build pipeline for a .NET application called Master Site, and three other build definitions that use different repos. Every time, I have to build the Master Site first and then run the subsequent builds. I need to know if we can run multiple builds by triggering a single build request (all the builds are built from different repos).
For this issue, the answer is yes. You can do this by adding multiple Trigger Build tasks to the agent job in a build pipeline.
This task allows you to trigger a new build (add it to the queue) as part of a build definition.
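A minimal sketch of what that can look like in a YAML pipeline, assuming the marketplace "Trigger Build" extension is installed (the same task can also be added to the agent job in the classic editor). The task version, input names, and the child definition names below are taken from that extension's documentation and are assumptions to verify against your installed version; authentication settings are omitted:

steps:
- task: TriggerBuild@3
  displayName: 'Queue build of repo A'
  inputs:
    definitionIsInCurrentTeamProject: true
    buildDefinition: 'RepoA-Build'        # placeholder: name of the other build definition
    queueBuildForUserThatTriggeredBuild: true
    waitForQueuedBuildsToFinish: false
- task: TriggerBuild@3
  displayName: 'Queue build of repo B'
  inputs:
    definitionIsInCurrentTeamProject: true
    buildDefinition: 'RepoB-Build'        # placeholder
    queueBuildForUserThatTriggeredBuild: true
    waitForQueuedBuildsToFinish: false

Repeat the task once per downstream definition; if one child build must wait for another, the extension also exposes a wait option.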
Context
I'm using CI for developing a solution in a monorepo. It has two custom Python libraries, plus high-level orchestration scripts that use these packages.
The CI is split into 3 parts: build, test and deploy.
During the build stage, I create an image in ECR (staging repository) with kaniko; the tag is a fixed name pattern followed by the pipeline id.
This "base" image is then used in the test stage to run unit tests and integration tests.
These two stages always run for any merge request into the default branch (main). If both of them pass, I'll merge the development branch into main, and a new pipeline gets triggered post-merge.
This repeats the build and test stages, and then the deploy stage starts.
In the deploy stage, some very high-level wrapper scripts are added to the "base" image created in the build stage, and the result is published to ECR (deploy repository) with a versioned tag (a fixed name pattern followed by the commit id) and also as the latest tag.
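For reference, a minimal sketch of the build job described above, assuming GitLab CI (the merge-request/branch pipeline terminology suggests it) with the kaniko executor image; the ECR repository variable and Dockerfile path are placeholders:

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # tag = fixed name pattern followed by the pipeline id, pushed to the staging repository
    - /kaniko/executor
        --context "$CI_PROJECT_DIR"
        --dockerfile "$CI_PROJECT_DIR/Dockerfile"
        --destination "$ECR_STAGING_REPO:base-$CI_PIPELINE_ID"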
Question
Everything works in this approach, but I don't like the fact that the build and test stages are repeated between the pre-merge pipeline (triggered as a merge request pipeline) and the post-merge pipeline (triggered as a branch pipeline). These are identical, time-consuming steps, and they will always be identical, since we only allow fast-forward merges and each merge creates a merge commit. Is there a way to handle this scenario?
Issues
The main challenge I'm facing is how to identify the "base" image created during the build stage (identified by pipeline id). If I can do that, the rest becomes simple, but I don't know how to get that pipeline id. I have to use an identifier in the image tag because a lot of people are working on this and there can be simultaneous pipelines from other people, but as soon as I use the pipeline id or commit id, it becomes impossible to track those images after the merge.
For my team, partner teams provide us SW pieces that need to be integrated on HW systems and tested together. Our code footprint is small and hence churn is small, while changes from the partner teams are frequent. In such a scenario, I see the need to trigger the release part of the YAML many more times than the build part. Are multi-stage pipelines the way to go? I want to trigger new release instances that invoke only the Release stages of the YAML file, using the Azure DevOps REST API.
You don't have to use multi-stage pipelines to be able to trigger repeated releases; it just makes the management of the pipeline cleaner.
It's possible to create a pipeline that includes a build stage and release stages for each of your environments, trigger the build stage (manually or based on a CI trigger), and then, from that pipeline "run", deploy as many times as you see fit to whatever environments you like. That can be done from the API or the portal.
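A minimal sketch of that shape, assuming YAML multi-stage pipelines; the stage names, environment names, and steps are placeholders:

trigger:
- main

stages:
- stage: Build
  jobs:
  - job: Build
    steps:
    - script: echo "build and publish artifacts here"

- stage: Deploy_Test
  dependsOn: Build
  jobs:
  - deployment: DeployTest
    environment: test            # placeholder environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the Build artifacts to test"

- stage: Deploy_Prod
  dependsOn: Deploy_Test
  jobs:
  - deployment: DeployProd
    environment: prod            # placeholder environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the same artifacts to prod"

From such a run you can redeploy individual stages from the portal, and, as far as I know, the preview Runs REST API (POST .../_apis/pipelines/{pipelineId}/runs) accepts a stagesToSkip list, which is one way to queue release-only runs from the API; treat the exact endpoint and api-version as something to verify.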
It's also possible to create a pipeline that is "release-only" - that is, it gets triggered manually, or as a result of a specified build having been run.
Personally, I like the multi-stage pipeline because it's a little easier to see which build created the release you're deploying. It's not a requirement, though.
So I have a project set up as a polyrepo, meaning all services and libraries are in different repos.
Now, I've updated the service projects to reference the library project directly when using Debug as the build config in Visual Studio, and the NuGet package when building in Release on Azure DevOps.
The problem I'm trying to solve is that my service builds sometimes fail when I've updated the library, because the Library build hasn't completed and updated NuGet yet.
What are my options here? Is there a way to ensure that the Library build completes before the service builds start, or something similar?
Since you are using the Release feature in Azure DevOps, I suggest splitting the whole process into two stages (a Library Build stage and a Service Build stage).
For example, connect the two stages with an "After stage" trigger. Then, when the Library Build stage completes, the Service Build stage will start.
If the two stages need to build different files, you can select the target files in the Visual Studio Build task -> Solution field.
By the way, if the Service Build stage needs to use the build results of the Library Build stage, I suggest using the same self-hosted agent to run the release.
You can define it in Tasks -> Agent job -> Demands.
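If you express the same idea in YAML instead of the classic editor, a demand that pins the stage to one self-hosted agent looks roughly like this (pool and agent names are placeholders):

pool:
  name: MySelfHostedPool               # placeholder agent pool
  demands:
  - Agent.Name -equals MyBuildAgent    # placeholder agent name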
Updates:
Since the Service Build and the Library Build are two separate pipelines, you could try the following steps to order the build runs.
Step 1: Set the CI (continuous integration) trigger in the Library Build. This makes sure that commits trigger the Library Build.
Step 2: Disable the CI trigger in the Service Build, then set a pipeline trigger in the Service Build so that the Library Build triggers the Service Build.
In this case, you can make sure that the Service Build runs only after the Library Build.
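If the Service Build is a YAML pipeline, Step 2 can be expressed roughly like this in its YAML (the names are placeholders); classic pipelines expose the same option under Triggers -> Build completion:

trigger: none                      # Step 2: disable the CI trigger on the Service Build

resources:
  pipelines:
  - pipeline: libraryBuild         # local alias for the resource
    source: 'Library Build'        # name of the Library Build pipeline
    trigger: true                  # queue the Service Build when the Library Build completes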
I have created an Azure build pipeline using the 'classic editor' (i.e. not YAML). The build consists of two agent jobs:
Job 1 - Build the code and deploy it to the test environment using a single agent.
Job 2 - Run tests against the test environment in parallel (using a maximum of 3 agents at a time).
My problem with this setup: if a build is triggered and the tests are mid-run, and a second build is then triggered, the code deployed to the test environment is overwritten by the subsequent build, causing the test run in Job 2 of the first build to fail.
Is it possible to tell the build pipeline to only trigger builds sequentially?
I have figured out how to use the Azure DevOps API to check whether the latest build has completed; however, I'm not sure how I can use it in the pipeline. Is it possible to do something like:
1 - Invoke the REST API to check the status of the latest build.
2 - Success criteria met (i.e. the build has completed)? If yes, continue the build; if not, wait a minute and check again.
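One way to sketch that idea is a PowerShell step at the start of Job 1 that polls the Builds REST API and sleeps until no other run of the same definition is still in progress. This is a rough sketch, not a definitive implementation: the api-version and status filter are assumptions to verify, the job needs access to the OAuth token (in the classic editor, enable "Allow scripts to access the OAuth token"), and in the classic editor the inner script would go into a PowerShell task:

steps:
- powershell: |
    # Poll the Builds REST API until no other build of this definition is still in progress.
    $url = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds" +
           "?definitions=$(System.DefinitionId)&statusFilter=inProgress&api-version=6.0"
    do {
      $builds = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
      # Ignore the build that is currently running this script.
      $others = @($builds.value | Where-Object { $_.id -ne $(Build.BuildId) })
      if ($others.Count -gt 0) {
        Write-Host "Waiting for $($others.Count) earlier build(s) to finish..."
        Start-Sleep -Seconds 60
      }
    } while ($others.Count -gt 0)
  displayName: 'Wait for earlier builds of this definition'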
You have the option to control this in the build options; it should work based on what you described.
Edit:
After looking at your question again, I noticed that you are running your tests after you deploy your app to the test environment, which means you run your tests during the release. You therefore need to control the flow in your release and not in your build. To do that, you should limit the maximum number of parallel deployments.
I'm trying to set up Azure Pipelines for CI and I'm using the YAML syntax to get started. However, I was wondering whether it is possible to script the flow at "runtime", like you can in a Jenkins script: spawn builds, etc.
Depending on the commit I want to have a vastly different flow.
This is because I currently have a monorepo setup with Conan libraries, and I want to rebuild only the libraries that are necessary depending on the commit, so the build flow is not the same for each commit. I want to spawn jobs so I can take advantage of parallel building on several agents.
For your issue, do you mean triggering builds based on specific commits? If so, you can trigger builds by adding a tag trigger in the YAML. You can create tags on the commits; if the created tag meets the trigger condition of the tag trigger in the YAML, then the build will be triggered. For example:
trigger:
  tags:
    include:
    - v2.*
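With this trigger, pushing a tag such as v2.0 or v2.1.3 to the repository queues the build, while tags like v1.9 (or branch commits without a matching tag) do not.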