How to add automated tests in an Azure build pipeline

I want to add some automation tests (they can be Selenium C#/Java tests) to my existing Azure pipeline. My pipeline is working correctly, i.e. when I push changes to my master branch (GitHub), it triggers the build and it gets deployed to the live environment. But I want to integrate automation tests which will decide whether to deploy the build or not, for example:
After committing to the master branch:
run the automation tests
if the tests pass > deploy
if the tests fail > stop deployment
Any suggestions or reference material would be helpful.
Thanks,

It depends on what kind of tests you are going to run. If they are unit tests, it is easy, but for Selenium tests you would need to deploy the app to a web server before running the tests.
When editing the pipeline, click "Show assistant" if it's not already open; there you can see the available tasks, including the test tasks, the test dashboard etc.
For example, if you add a test task, it will add something like this to the YAML file:
- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Tests/*.csproj'
    arguments: '--no-build --configuration $(buildConfiguration)'
The above just executes the 'dotnet test' command, which will look for tests in the project and run them.
It is executing a command line, so you can run any tests as long as you can provide a command line, using batch/PowerShell commands.
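For instance, a minimal sketch of running an arbitrary test command from a script step (the command below is just a placeholder for whatever runner you use); the step fails the build if the command exits with a non-zero code:

- powershell: |
    # Any command-line test runner can be invoked here;
    # a non-zero exit code fails the step (and so the build)
    npm test
  displayName: 'Run tests from the command line'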
The Microsoft tutorial at https://learn.microsoft.com/en-us/learn/modules/run-quality-tests-build-pipeline/ explains how to use the test widgets etc.
I would suggest starting with unit tests, which are simpler, before trying Selenium.
Krish

You can specify the conditions under which each stage, job, or step runs. By default, a job or stage runs if it does not depend on any other job or stage, or if all of the jobs or stages that it depends on have completed and succeeded. By default, a step runs if nothing in its job has failed yet and the step immediately preceding it has finished.
In other words, if the test step/job/stage passes, the next step/job/stage can proceed.
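A minimal sketch of that gating for the scenario in the question (stage and job names are illustrative, and the deploy step is a placeholder): the Deploy stage depends on the Test stage and only runs if it succeeds.

stages:
- stage: Test
  jobs:
  - job: RunTests
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: test
        projects: '**/*Tests/*.csproj'
- stage: Deploy
  dependsOn: Test
  condition: succeeded()
  jobs:
  - job: DeployApp
    steps:
    - script: echo "deploy to live here"
      displayName: 'Deploy'

If any test fails, the Test stage fails and the Deploy stage is skipped.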

Related

GitLab CI: Why the next stage is allowed to run

I have a GitLab CI file like this:
stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the code..."
    - echo "Compile complete."
  when: manual

deploy-bridge:
  stage: deploy
  trigger:
    project: tests/ci-downstream
My understanding is that the deploy-bridge job should not run unless the manual build-job has run successfully, but that is not the case here. Is this normal?
Jobs in the same stage run in parallel. Jobs in the next stage run after the jobs from the previous stage complete successfully.
You're not defining your deploy-bridge job as a dependent job, or that it needs another job to finish first, so it can run right away as soon as it reaches the stage. Since the previous stage is all manual jobs, GitLab CI/CD sort of interprets it as 'done', at least enough so that other stages can start.
Since it doesn't look like you're uploading the compiled code from build-job as an artifact, we can't use the dependencies keyword here. All that keyword does is control which jobs' artifacts this job needs, but if it needs the artifacts of a prior job, that job will need to run and finish successfully for this job to start. Also, by default all available artifacts from all prior jobs will be downloaded and available for all jobs in the pipeline; the dependencies keyword can also be used to limit which artifacts this job actually needs. However, if there are no artifacts available in the job we "depend" on, it will throw an error. Luckily there's another keyword we can use.
The needs keyword controls the "flow" of the pipeline, so much so that if a job anywhere in the pipeline (even in the last of, say, 1,000 stages) had needs: [], it would run as soon as the pipeline starts (and as soon as there is an available runner). We can use needs here to make the pipeline flow the way you need.
...
deploy-bridge:
  stage: deploy
  needs: ['build-job']
  trigger:
    project: tests/ci-downstream
Now, the deploy-bridge job won't run until the build-job has finished successfully. If build-job fails, deploy-bridge will be skipped.
One other use for needs is that it has the same functionality as dependencies, in that it can control what artifacts are downloaded in which jobs, but it won't fail if the "needed" jobs don't have artifacts at all.
Both dependencies and needs accept an empty array, which equates to 'don't download any artifacts'; for needs, it also means the job runs as soon as a runner is available.
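As a quick illustration of that last point (the job name and script are made up), a job like this runs immediately, regardless of which stage it is assigned to:

fast-job:
  stage: deploy
  needs: []
  script:
    - echo "Runs as soon as a runner is free, without waiting for earlier stages"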

How to create a bug automatically in DevOps when the automated test fails in Azure Test Plans

I am running my Selenium tests from Azure Test Plans in DevOps. When any of my tests fails, I have the option in DevOps to create a bug for it (a screenshot of the option to create a bug manually once the test case fails was attached), but I want the bug to be created automatically as soon as the test case fails. Is there a way to configure it so that my work item (bug) gets created automatically?
There is an option in the build pipeline to create a work item on failure, as below, so you could enable it to create a bug when the build fails.
In addition, if you run a test plan in a release pipeline, this option is not available there, so you need to use the API to create a work item. You could use a PowerShell task to call the REST API Work Items - Create to create the work item.
Set the condition on the task so that it only runs when a previous step has failed.
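A rough sketch of that approach in a YAML pipeline (the field values, the Bug work item type, and the token handling are assumptions; adjust them to your process template):

- task: PowerShell@2
  displayName: 'Create bug on failure'
  condition: failed()
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    targetType: 'inline'
    script: |
      # Call the Work Items - Create REST API with a JSON Patch body
      $uri = "$(System.CollectionUri)$(System.TeamProject)/_apis/wit/workitems/`$Bug?api-version=7.0"
      $body = ConvertTo-Json @(
        @{ op = 'add'; path = '/fields/System.Title'; value = "Automated test failed in build $(Build.BuildNumber)" },
        @{ op = 'add'; path = '/fields/Microsoft.VSTS.TCM.ReproSteps'; value = "See the failing run: build $(Build.BuildId)" }
      )
      Invoke-RestMethod -Uri $uri -Method Post -Body $body `
        -ContentType 'application/json-patch+json' `
        -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }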
Or you could directly use this extension: Create Bug on Release failure.
For the YAML pipeline, you can use the CreateBug task:
Example task:
- task: CreateBug@2
  inputs:
    isyamlpipeline: true
    custompaths: false
    customrequestor: false
Link: https://marketplace.visualstudio.com/items?itemName=AmanBedi18.CreateBugTask

How to Run Azure Pipeline Builds Sequentially

I have created an Azure build pipeline using the 'classic editor' (i.e. not yaml). The build consists of two agent jobs:
Job 1 - Build code and deploy to the test environment using a single agent.
Job 2 - Run tests against the test environment in parallel (using max 3 agents at a time).
My problem with this setup is that if a build is triggered and the tests are mid-run, and a second build is triggered, the code deployed to the test environment will be overwritten by the second build, causing the test run in Job 2 of the first build to fail.
Is it possible to tell the build pipeline to only trigger builds sequentially?
I have figured out how to use the Azure DevOps API to check if the latest build has completed, however I'm not sure how I can use it in the pipeline. Is it possible to do something like:
1 - Invoke the REST API to check the status of the latest build.
2 - Success Criteria met (i.e. the build has completed)? If yes continue build, if not wait a minute and check again.
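A rough sketch of that check-and-wait step in YAML form (it uses the Builds - List REST endpoint; the predefined variables and the 60-second interval are assumptions, and the same inline script could go into a PowerShell task in the classic editor):

- task: PowerShell@2
  displayName: 'Wait for earlier builds to finish'
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    targetType: 'inline'
    script: |
      # List in-progress builds of this pipeline definition, excluding this build itself
      $uri = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds" +
             "?definitions=$(System.DefinitionId)&statusFilter=inProgress&api-version=7.0"
      do {
        $builds = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
        $others = @($builds.value | Where-Object { $_.id -ne $(Build.BuildId) })
        if ($others.Count -gt 0) { Start-Sleep -Seconds 60 }   # wait a minute and check again
      } until ($others.Count -eq 0)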
You have the option to control this in the build Options; it should work based on what you described.
Edit:
After looking at your question again, I noticed that you are running your tests after you deploy your app to the test environment, which means you run your tests during the release, so you need to control the flow in your release and not in your build. To do that, you should limit the maximum number of parallel deployments:

How to run the Mocha test parser on deployments with Bamboo

I'm running Mocha tests as part of a deployment process and need to use the Mocha Test Parser for Bamboo to know what failed (using the mocha-bamboo-reporter reporter).
The Mocha Test Parser task is only able to run during the build process (it can't be added as part of a deployment process). Is there a way to run it from a command, Node.js or npm task?
Currently, when tests fail, Bamboo still says that the deployment was OK.
Test run configuration (screenshot not included).
Typically you would only want to run your tests as part of the build process and not the deployment process. This is why you do not see these options as part of the deployment. Generally, Bamboo deployment failures are the result of files not copying, connection errors, or errors in scripts.
Because you are running tests in the deployment, the test runner will return "0" indicating that the task to execute the tests ran fine. Bamboo allows you to do this so that you can run tests as a deployment and still deploy.
Instead of failing the deploy, add two tasks to the build to run the tests and parse the results. If the tests pass you can have the deploy trigger on the success of the build. This gives you the following advantages that you are currently missing:
Deployments only will start if tests pass.
Bamboo has a nice summary page for tests and will provide useful metrics such as how many times a specific test has failed.
It separates the deployment from the integration (i.e. build/test).
However, if you are dead set on running tests and parsing in a deployment, you could have a node.js, command, or script task that parses the results and then returns -1 (or a non-zero number) if tests failed.
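A minimal sketch of such a script task body (assuming the project's dependencies are already installed on the deployment agent):

# Run Mocha with the Bamboo reporter and capture its exit code
set +e
npx mocha --reporter mocha-bamboo-reporter
STATUS=$?
set -e
# Fail the deployment explicitly if any test failed
if [ "$STATUS" -ne 0 ]; then
  echo "Mocha reported failing tests - failing the deployment"
  exit 1
fi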

Azure DevOps pipeline - how to catch error

I am using an Azure DevOps build pipeline to run my Selenium web automation tests (run by Maven, inside a Docker container).
Now, even if one of my test scenarios fails, the pipeline job is still considered successful. How can I specify a certain log entry to look for in order to fail it?
The 'pipeline job/task/phase' I am using to run my tests is 'docker compose'.
My second question is: is there a possibility to filter the pipeline output logs? Currently they are flooded with output from a few services running in containers.
The only thing I found is the possibility to search through the logs, but no filtering. Regards.
If your objective is to fail the build when one or more of your tests has failed, I advise you to add one more step to your build process: the Publish Test Results task.
This is a necessary step for tests run with a task other than the default Visual Studio Test task. It publishes your test results file to Azure DevOps and makes your build aware of your test results (and then lets you decide what to do if one or more tests fail).
In your case, you will also probably have to find a way to extract the test results file from your containers, as your test results are probably produced and stored inside the containers (and unavailable to the Publish Test Results task).
For your second question, I am not aware of any way to filter the output logs directly from the web interface, sorry :(
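A rough sketch of what that could look like (the container name and results path are assumptions; Maven's Surefire JUnit output is used as the example format):

# Copy the JUnit result files produced inside the test container onto the agent
- script: docker cp selenium-tests:/app/target/surefire-reports ./test-results
  displayName: 'Copy test results out of the container'
  condition: always()
# Publish them; failTaskOnFailedTests fails the pipeline when any test failed
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: 'test-results/**/*.xml'
    failTaskOnFailedTests: true
  condition: always()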
We ran into this with our Cypress tests (you should ditch Selenium for Cypress, it's sweet) and solved it by grabbing the exit code manually. We also found that Azure DevOps will hang if there's a background process running, even if there's an error, so be sure to look out for that as well if you start up your web server like we do.
- bash: |
    yarn test-ci:e2e 2> /dev/null
    if [ $? -eq 0 ]
    then
      yarn stop
      exit 0
    else
      yarn stop
      exit 1
    fi
  displayName: 'Run Cypress Tests'
For anyone looking for a way to filter logs when there are multiple services running: you can create a new Azure build pipeline task (Docker) that runs the docker command:
docker logs -f NAME_OF_THE_SERVICE
This way you will only see logs from the desired service.
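As a sketch, the same thing can be done with a plain script step in a YAML pipeline (the service name is a placeholder):

# Dump the logs of one service (omit -f so the step does not block)
- script: docker logs NAME_OF_THE_SERVICE
  displayName: 'Show logs from one service only'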
