I am using Git as my repository and GitHub Actions to execute my flow.
As shown in the image, I have four files.
InfrastructureMain.yml => execute the ARM template related stuff
ResourcesMain.yml => execute the application related stuff.
My question is: when I make a PR to a different environment, both InfrastructureMain.yml and ResourcesMain.yml start to execute, but I want InfrastructureMain.yml to execute first and, only when it succeeds, ResourcesMain.yml to execute.
Is there any configuration I can do to fulfil this requirement?
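For reference, one configuration that can enforce this kind of ordering is GitHub Actions' workflow_run trigger, which starts one workflow when another completes. A minimal, hypothetical sketch for ResourcesMain.yml (it assumes the infrastructure workflow's name: field is "InfrastructureMain"; the job and steps are placeholders):

# ResourcesMain.yml
name: ResourcesMain
on:
  workflow_run:
    workflows: ["InfrastructureMain"]   # must match the name: of the infrastructure workflow
    types: [completed]

jobs:
  resources:
    runs-on: ubuntu-latest
    # only continue when the infrastructure run finished successfully
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - uses: actions/checkout@v4
      - run: echo "deploy application resources here"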
My team uses pre-commit in our repositories to run various code checks and formatters. Most of my teammates use it, but some skip it entirely by committing with git commit --no-verify. Is there any way to run something in CI/CD to ensure all pre-commit hooks pass (we're using GitHub Actions)? If at least one hook fails, it should throw an error.
There are several choices:
an ad hoc job which runs pre-commit run --all-files, as demonstrated in the pre-commit.com docs (you probably also want --show-diff-on-failure if you're using formatters); a minimal sketch is included at the end of this answer
pre-commit.ci, a CI system specifically built for this purpose
pre-commit/action, a GitHub Action which is in maintenance-only mode (not recommended for new use cases)
Disclaimer: I created pre-commit, pre-commit.ci, and the GitHub Action.
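For the first option, a minimal sketch of such a workflow, assuming a GitHub-hosted Ubuntu runner and that pre-commit is installed with pip (the file name is hypothetical):

# .github/workflows/pre-commit.yml (hypothetical file name)
name: pre-commit
on: [push, pull_request]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - run: pip install pre-commit
      # fails the job if any hook fails, even for commits made with --no-verify
      - run: pre-commit run --all-files --show-diff-on-failure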
To give you a bit of context, we have a service which has been split into two services, i.e. one for the read-side and one for the write-side operations. The read side is called ProductStore and the write side is called ProductCatalog. The issue we're facing is on the write side: the load tests create 100 products in the write-side web app, which are then transferred to the read side for the load test to read x number of times. If a build is launched in ProductCatalog because something new was merged to master, it causes issues in the ProductStore pipeline if the two run concurrently.
The question I want to ask is: is there a way, in the ProductStore YAML file, to directly query (via a specific Azure task or an Azure PowerShell script) whether a build is currently running in the ProductCatalog pipeline?
The second part would be to loop/wait until that pipeline has successfully finished before resuming the ProductStore pipeline.
I hope this is clear; I'm not sure how best to ask this question as I'm very new to DevOps pipelines, but it would massively help if there were a good way of checking this sort of thing.
As a workaround, you can set a pipeline completion trigger in the ProductStore pipeline.
To trigger a pipeline upon the completion of another, specify the triggering pipeline as a pipeline resource.
Or configure build completion triggers in the UI, choose Triggers from the settings menu, and navigate to the YAML pane.
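A minimal sketch of such a pipeline resource in the ProductStore YAML (the alias and the assumption that the triggering pipeline is named ProductCatalog are mine):

# ProductStore pipeline
resources:
  pipelines:
  - pipeline: productCatalog      # alias used within this pipeline
    source: ProductCatalog        # name of the pipeline whose completion triggers this one
    trigger: true                 # run ProductStore only after ProductCatalog completes

With this in place, the two pipelines run one after the other instead of concurrently.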
Is there any possibility to test created Azure Pipelines, either from the UI or from your YAML pipeline definitions?
I mean that I have some YAML pipelines, or pipelines defined from the UI, and I want to ensure by some tests (unit tests, for example) that each pipeline has defined variables, build, test, and package parts, or something else.
And to verify the pipeline configurations after changes to them, or after adding new repos/branches if required.
Thanks...
Is there any possibility to test created Azure Pipelines, either from the UI or from your YAML pipeline definitions?
If you want an out-of-the-box feature to achieve this, sorry to say: no, there isn't.
BUT, the workaround is to use the API to check them.
Client API.
You could write a simple script to get the build definitions with the Client API.
In this simple script, you first get all the build definitions:
// 'buildClient' is assumed to be a BuildHttpClient obtained from an authenticated VssConnection; "MyProject" is a placeholder project name
List<BuildDefinitionReference> buildDefinitions = new List<BuildDefinitionReference>(await buildClient.GetDefinitionsAsync(project: "MyProject"));
Then you could apply your customized checks/tests to these definitions with scripts; in other words, write some test classes/methods. Once the script is complete, you can import it into VSTS and then use a task to run those tests. Only if the tests succeed can your builds be executed.
So, at this point, you need to add 2 agent jobs to your pipeline: the first one is used to run your script tests (name it the test agent job), and the second agent job is the one you want to check. In the second agent job, set its condition so that it runs only when the previous (test) job has succeeded.
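A rough YAML sketch of those two agent jobs (the job names and placeholder scripts are hypothetical):

jobs:
- job: TestAgentJob               # runs the definition checks/tests
  steps:
  - script: echo "run the pipeline tests here"
- job: BuildAgentJob              # the job you actually want to guard
  dependsOn: TestAgentJob
  condition: succeeded()          # runs only when TestAgentJob succeeded
  steps:
  - script: echo "original build steps here"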
This way, the current job can run only if the test job succeeds.
Or, if you don't want the builds you want to check to be broken because of the test, consider using a build completion trigger: set up a separate pipeline to run the test, and configure the pipeline you want to check so that it can only run when the test pipeline has finished.
REST API
You could use the REST API with PowerShell, which is very similar to the description above: use the API to get the build definitions, and then write some PowerShell check script.
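A minimal sketch of such a check as an inline PowerShell step, assuming it runs inside a pipeline so that the predefined variables and $(System.AccessToken) are available (the actual checks are left as a placeholder):

steps:
- powershell: |
    # list all build definitions in the current project; add your own checks on the result
    $url = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/build/definitions?api-version=6.0"
    $definitions = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
    $definitions.value | ForEach-Object { Write-Host $_.name }
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  displayName: Check build definitions via the REST API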
I would rather recommend putting the test in a separate pipeline; then the API only gets the part you want to check, not the test part.
I'm trying to set up Azure Pipelines for CI and I'm using the YAML syntax to get started. However, I was wondering if it is possible to script the flow at "runtime", like you can do in a Jenkins script: spawn builds, etc.
Depending on the commit, I want to have a vastly different flow.
This is because I currently have a mono-repo setup with Conan libraries and I want to rebuild only the libraries that are necessary for a given commit, so the build flow is not the same for each commit. I want to spawn jobs so I can take advantage of parallel building on several agents.
For your issue, do you mean triggering builds based on specified commits? If so, you can trigger builds by adding a tag trigger in YAML. You can create tags on the commits; if the created tag meets the trigger condition of the tag trigger in YAML, then the build will be triggered.
trigger:
  tags:
    include:
    - v2.*
We have two servers in our organisation.
1) a server with GitLab
2) a build server
I would like to have an automated build happen on the second machine (the build server) for the source code hosted on the GitLab server.
How can I achieve this using GitLab?
Thanks,
siva
If you are moving from a "pull" continuous integration system (e.g. using a kind of crontab that regularly checks whether the source code on the versioning system has changed and starts the configure/build/test/deploy stages if it has), then know that GitLab has a much better way of doing this.
The GitLab approach is to configure a "push" system: every time the code is updated (in any branch) on the Git repository, the script defined in your .gitlab-ci.yml is read to see whether continuous integration jobs have to be launched. Jobs are sent to your configured GitLab runners. GitLab runners are defined on your build server(s) and take the jobs as they come in.
The definition of what to do is also described in the .gitlab-ci.yml.
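For illustration, a minimal .gitlab-ci.yml sketch (the stage names and make targets are assumptions about your project, not requirements):

stages:
  - build
  - test

build-job:
  stage: build
  script:
    - make build        # replace with your actual build command

test-job:
  stage: test
  script:
    - make test         # replace with your actual test command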
Here is a list of documentation to start learning about gitlab CI:
the official documentation can be helpful
A general introduction to GitLab CI using Docker can be found in this blog article (the first slides are great). If your build server or your intended build is on Linux, I would recommend using the "Docker executor" (i.e. GitLab runner jobs are executed inside Docker containers on your build server). It is easy and quick to set up.
Hope this helps you get started...