Running Postman in Azure DevOps

I'm running into a weird issue when running Newman on Azure DevOps Pipeline. Here's a summary of what's happening:
Postman tests run fine locally
Pipeline tests fail only on the first test
Post
Test A
POST XXXXX [500 Internal Server Error, 442B, 8.6s]
1⠄ JSONError in test-script
Test A Copy
POST XXX [200 OK, 692B, 8.9s]
√ Is Successful
√ Status Code
√ Status Message
---
# failure detail
1. JSONError
No data, empty input at 1:1
^
at test-script
It doesn't seem to matter what the exact test is; it always fails if it's the first. As a way to demonstrate this I've copied the test that was failing, so that now I had
Test A
Test A Copy
Test B
Test ...
And suddenly Test A Copy works. So it's not the contents of the test but rather the first test to be run. All of these tests are POSTs.
Test A Contents:
var jsonData = pm.response.json();

pm.test("Is Successful", function () {
    pm.expect(jsonData.IsSuccessful).to.be.true;
});

pm.test("Status Code", function () {
    pm.response.to.have.status(200);
});

pm.test("Status Message", function () {
    pm.expect(jsonData.StatusMessage).eql("Document insert successful.");
});
Nothing too fancy, so why would this fail on the first run (TEST A) but not the second (TEST A Copy)? It doesn't matter which test it is; if I were to run TEST B first, that would be the one to fail.
It almost looks like the first request is what's waking up the server and then everything is okay.
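If the server-side cold start can't be fixed, the opaque JSONError can at least be turned into a readable assertion failure by guarding the body parse. This is a sketch, not the asker's code; `safeParseJson` is a hypothetical helper:

```javascript
// Hypothetical helper (not part of the asker's collection): parse the body
// only when it is non-empty, valid JSON, so an empty cold-start 500 response
// produces a clear null instead of throwing a JSONError in the test script.
function safeParseJson(body) {
    if (!body || body.trim() === '') return null;
    try {
        return JSON.parse(body);
    } catch (e) {
        return null;
    }
}

// In a Postman test script you would feed it pm.response.text() and assert
// on the status code before touching the parsed body.
```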

I run the Azure DevOps REST API in Postman and use the exported JSON file to run the Postman tests in the pipeline.
Here are my steps to run Newman in an Azure pipeline; you can refer to them.
Step 1: Export the collection in Postman.
Step 2: Upload the JSON file (e.g. APITEST.postman_collection.json) to the Azure Repo.
Step 3: Create a pipeline and add an install-Newman step and a run-Postman-tests step.
Example:
steps:
- script: |
    npm install -g newman@5.1.2
  workingDirectory: '$(System.DefaultWorkingDirectory)'
  displayName: 'Command Line Script'
- script: 'newman run TEST.postman_collection.json --reporters cli,junit --reporter-junit-export Results\junitReport.xml'
  workingDirectory: '$(build.sourcesdirectory)'
  displayName: 'Command Line Script'
Or run with the Newman CLI Companion for Postman task (this is an extension task).
steps:
- script: |
    npm install -g newman@4.6.1
  workingDirectory: '$(System.DefaultWorkingDirectory)'
  displayName: 'Command Line Script'
- task: NewmanPostman@4
  displayName: 'Newman - Postman'
  inputs:
    collectionFileSource: 'TEST.postman_collection.json'
    environmentSourceType: none
    ignoreRedirect: false
    bail: false
    sslInsecure: false
    htmlExtraDarkTheme: false
    htmlExtraLogs: false
    htmlExtraTestPaging: false
Step 4: Run the pipeline; it displays the same API results as in Postman.
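The JUnit report that Newman exports is not surfaced in the pipeline UI on its own; a PublishTestResults task can pick it up. A minimal sketch, assuming the export path used in the newman run step:

```yaml
- task: PublishTestResults@2
  displayName: 'Publish Newman results'
  condition: succeededOrFailed()  # publish even when some tests fail
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/junitReport.xml'
```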


Error publishing test results to Azure Pipeline

I build and test a .NET 6 app in Docker, which works well; I see the test output in the console. When I try to publish the test results I encounter an error. Full task output:
Starting: Publish test results
==============================================================================
Task         : Publish Test Results
Description  : Publish test results to Azure Pipelines
Version      : 2.170.1
Author       : Microsoft Corporation
Help         : https://learn.microsoft.com/azure/devops/pipelines/tasks/test/publish-test-results
==============================================================================
/usr/bin/dotnet --version
3.1.418
Async Command Start: Publish test results
Publishing test results to test run '22493988'
Test results remaining: 65. Test run id: 22493988
##[warning]Failed to publish test results: Cannot insert duplicate key row in object 'TestResult.tbl_TestCaseReference' with unique index 'ix_TestCaseReference2'. The duplicate key value is (1, 3559, 0, 0, 0, 0x2b12334e709060df7748bee70b2472ff22c0b671ae3e3b997395858711e5eef5, 0x89d7d75814a51ce876ec351fcb9aec22313feeece07fb1d5a3023a3e7a1f634d, UnitTest, 0, 0, ).
Async Command End: Publish test results
Finishing: Publish test results
Copy test results and publish steps
- pwsh: |
    $id=docker images --filter "label=test=$(Build.BuildId)" -q | Select-Object -First 1
    docker create --name testcontainer $id
    docker cp testcontainer:/testresults ./testresults
    docker rm testcontainer
  displayName: 'Copy test results'
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(System.DefaultWorkingDirectory)/testresults'
  displayName: 'Publish test results'
The Copy test results step runs without errors:
Starting: Copy test results
==============================================================================
Task         : PowerShell
Description  : Run a PowerShell script on Linux, macOS, or Windows
Version      : 2.170.1
Author       : Microsoft Corporation
Help         : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/powershell
==============================================================================
Generating script.
========================== Starting Command Output ===========================
/usr/bin/pwsh -NoLogo -NoProfile -NonInteractive -Command . '/vsts/agent/_work/_temp/20d795dd-e081-4216-b9b1-327d257bbcb2.ps1'
69ba144bfd471794f0e11047a2b214bd7190b96b5b47616957927125c740f266
testcontainer
Finishing: Copy test results
I took code from tutorial: https://www.programmingwithwolfgang.com/run-xUnit-inside-docker-during-ci-build
The XUnit test publisher may be an issue here. Try resolving the error by changing XUnit to VSTest.
And also, you don't necessarily need a separate Publish Test Results job in the pipeline because built-in tasks like the Visual Studio Test task automatically publish test results to the pipeline.
Please refer to the MSFT doc Publish Test Results task for more information.
I contacted my colleagues and it turns out it's a bug in Azure. I replaced the VSTest publisher and format with the XUnit publisher and XML format; the XML was successfully uploaded. Another issue was that the XML covered only one project in the solution, so I ended up dropping that step altogether because I don't really need this information anyway (if a test fails, the build fails).
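For reference, the change described (the VSTest format swapped for XUnit) could look roughly like this in the publish task; the file pattern is an assumption about where the XUnit XML lands:

```yaml
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'XUnit'
    testResultsFiles: '**/*.xml'  # assumed location of the xunit XML output
    searchFolder: '$(System.DefaultWorkingDirectory)/testresults'
  displayName: 'Publish test results (XUnit)'
```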

Unable to run Postman Newman command in Azure pipeline

I am trying to run a series of Postman tests in my Azure build pipeline but keep getting errors that Newman is not installed. I have checked by going to the exact location and running the Newman commands without any issue. My screenshots show how I have implemented them and the errors.
I see you have used the command call newman run.
Instead use:
newman run collection.json -e environment_file.json --reporters cli,junit,htmlextra --reporter-junit-export junitReport.xml
It works for me from Azure Pipelines.
This is what mine looks like in YAML - if you click on 'View YAML' on the top right you can see the difference. I think you have this 'call' word that shouldn't be there.
- task: CmdLine@2
  displayName: Run newman tests
  inputs:
    script: 'newman run "$(System.DefaultWorkingDirectory)/${{ parameters.e2eCollectionPath }}" -e "$(System.DefaultWorkingDirectory)/${{ parameters.e2eEnvironmentPath }}" --reporters cli,junit --reporter-junit-export $(System.DefaultWorkingDirectory)/report.xml'

Gitlab CI/CD how to catch curl response in pipeline

I have a pipeline which starts a Maven/Java app. Now I want to add a test stage where I check whether the app starts successfully; for example, when the build stage finishes, I check with curl that 127.0.0.1:8080 responds 200 OK, otherwise the stage fails.
How can I create a GitLab pipeline for this use case?
stages:
  - build
  - deploy
  - test

build:
  stage: build
  script:
    - echo Build Stage 1
  tags:
    - java-run

deploy:
  stage: deploy
  tags:
    - java-run
  script:
    - "some script"

test:
  stage: test
  tags:
    - java-run
  script:
I'm making some assumptions around your use case here, so let me know if they aren't right. I'm assuming:
You're starting the Java app remotely (i.e., your pipeline is deploying it to a cloud provider or a non-CI/CD server)
Your server running CI/CD has access to the application via the internet
If so, assuming that you want your job to fail if the service is not accessible, you can simply curl the URL using the -f flag, and it will fail if it receives an error status such as a 404. Example:
test:
  image: alpine:latest
  script:
    - apk add curl
    - curl -o /dev/null -s -w "%{http_code}\n" https://httpstat.us/404 -f
The above job will fail, as curl returns exit code 22 when it receives an error code >= 400 and the -f flag is used.
Now, if you're attempting to run the app in your CI/CD (which is why you're referring to 127.0.0.1 in your question), then you can't run the app locally in one job and test in another. The job would only exist and run within the context of the container that's running it, and test is in a separate container because it's a separate job. You have two options if you're attempting to run your app within the context of CI/CD and test it:
You can run your tests in the same job where you start the app (you may need to run the app using nohup to run it in the background)
You can package your app into a docker container, then run it as a service in your test job.
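The first option above (start the app in the background, then probe it in the same job) could be sketched as a single GitLab job; the jar name, port, and retry count are placeholders:

```yaml
test:
  stage: test
  tags:
    - java-run
  script:
    # start the app in the background so the job can keep running
    - nohup java -jar app.jar &
    # poll until the app answers, or give up after 30 tries
    - |
      for i in $(seq 1 30); do
        curl -sf http://127.0.0.1:8080 && exit 0
        sleep 2
      done
      exit 1
```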

Why is test automation only partially successful with Cypress and Azure DevOps?

I am using Cypress.io (Version 5.1.0) for testing my project.
My project is in Azure DevOps. Now I want to include my Cypress tests in Azure DevOps so my tests will run automatically.
I set up the JUnit reporter on my Cypress project:
Into my “package.json” file I added
"cypress-multi-reporters": "^1.2.4",
"mocha-junit-reporter": "^1.23.3"
then run
npm install
then added
"scripts": {
  "scripts": "cypress run",
  "test": "npm run scripts"
}
Into my “cypress.json” file i added
"reporter": "mocha-junit-reporter",
"reporterOptions": {
  "mochaFile": "cypress/reports/junit/test-results.[hash].xml",
  "testsuitesTitle": false
}
After this I created a new Pipeline using Azure Repos in Azure DevOps.
For the pipeline configuration I selected Node.js.
Now I have a YAML file. Here I removed npm build from the first script.
Then I picked npm from the assistant. In the npm configuration, I selected custom and wrote the command run test. I selected the result format “JUnit” and set the test results files to “*.xml”.
At last I selected the option "Merge test results".
Now I saved and run the pipeline.
This is what my Job does:
Pool: Azure Pipelines
Image: ubuntu-latest
Agent: Hosted Agent
Started: Yesterday at 17:31
Expanded: Object
Result: True
Evaluating: not(containsValue(job['steps']['*']['task']['id'], '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Expanded: not(containsValue(Object, '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Result: True
Evaluating: resources['repositories']['self']['checkoutOptions']
Result: Object
Finished evaluating template 'system-pre-steps.yml'
********************************************************************************
Template and static variable resolution complete. Final runtime YAML document:
steps:
- task: 6d15af64-176c-496d-b583-fd2ae21d4df4@1
  inputs:
    repository: self
MaxConcurrency: 0
What is wrong with my automation? How can I fix this?
Update:
That's my YAML file:
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'

- script: |
    npm install
  displayName: 'npm install'

- task: Npm@1
  inputs:
    command: 'custom'
    customCommand: 'run test'
  continueOnError: true

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)/cypress/reports/junit'
    mergeTestResults: true
    testRunTitle: 'Publish Test Results'
I got an email with these details:
Job 1 error(s), 1 warning(s) Error: Npm failed with return code: 254
The issue may be due to the agent rather than your code and scripts.
You can try the following solutions:
Change your agent image. As you are currently using ubuntu-latest, it is recommended to try ubuntu-20.04 or ubuntu-16.04.
Use a self-hosted agent. If you don't have a self-hosted agent, click Self-hosted Linux agent for detailed steps.
Change the organization. Choose another organization that can run the build correctly; just in case, it is better to create a new organization. Then create a new project and try your tests.
As stated already, the problem most likely lies with the Azure environment. Cypress has a dependency on a browser (electron, chrome) in order to execute. For example, if you are using docker, they provide an official image called cypress/browsers:node14.7.0-chrome84 that has everything you need out of the box. The Dockerfile also has useful info on the environment needed. Make sure to provide a headless configuration as well, something like:
cypress run --headless --browser chrome --spec yourSpecHere.js
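One way to apply that in Azure DevOps is to run the steps inside the Cypress image as a container job. This is a sketch, assuming the hosted Ubuntu agent and that the image tag mentioned above is still published:

```yaml
pool:
  vmImage: 'ubuntu-latest'
container: cypress/browsers:node14.7.0-chrome84  # node + chrome preinstalled
steps:
- script: npm ci
  displayName: 'Install dependencies'
- script: npx cypress run --headless --browser chrome
  displayName: 'Run Cypress headless'
```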

How do I generate a VSCode TypeScript extension coverage report

It seems that a coverage report with Coveralls is not possible for a VSCode extension built with TypeScript.
Currently, I am adding test cases to our project https://github.com/PicGo/vs-picgo/pull/42, I have found several ways to report coverages, but none of them work for me.
Using custom TestRunner
The official documentation says little about custom test runners, but I found a post here. It works when I use F5 to launch an Extension Test, but it does not work when I run npm run test in the console (I got no coverage output at all).
I have also tried to understand the custom runner (source code) in the blog post, but I found I have nothing to do because I do not know why it works.
Using nyc
nyc with mocha is very powerful, but we cannot take advantage of it. When I run nyc ./node_modules/vscode/bin/test, I get 0% coverage:
I have searched the issue page of nyc; lots of the same 0% coverage problems with TS projects exist, but none of them match our environment. The main difference is that they use mocha directly for testing, unlike VSCode's ./node_modules/vscode/bin/test script, which creates a new process to run the test JS files. I don't know how to deal with this.
I searched all the issues (mocha, nyc, istanbul, vscode, etc.), and there are few (I did not find any 😭) VSCode TypeScript extensions using coverage reporting for me to copy from. So my question is: how do I get the coverage report for my VSCode TS extension?
I have struggled with this myself for some time until I got it working properly. There were three main challenges in getting it working:
Proper setup of the nyc instance
Preventing race conditions on startup
Capturing nyc output and displaying it in the debug console
You can find my working test runner here. I'm also sharing additional insights on my blog.
I got everything working with Mocha, NYC, and VSCode!
You can see my solution to this in https://github.com/jedwards1211/vscode-extension-skeleton.
Basically, I use Babel to transpile my .ts code with @babel/preset-typescript and babel-plugin-istanbul before running the tests. This allows me to skip the convoluted extra steps of instrumenting the tsc output and using remap-istanbul.
Then in the test runner, I use the (not really documented) NYC API to write the coverage to disk after tests finish.
Finally, in my package scripts, I run nyc report after the test command finishes.
UPDATE: you need to delete the .nyc_output folder before each test run too.
src/test/index.js
import path from 'path'
import glob from 'glob'
import Mocha from 'mocha'
import NYC from 'nyc'

export async function run(): Promise<void> {
  const nyc = new NYC()
  await nyc.createTempDirectory()

  // Create the mocha test
  const mocha = new Mocha({
    ui: 'tdd',
  })
  mocha.useColors(true)

  const testsRoot = path.resolve(__dirname, '..')
  const files: Array<string> = await new Promise((resolve, reject) =>
    glob(
      '**/**.test.js',
      {
        cwd: testsRoot,
      },
      (err, files) => {
        if (err) reject(err)
        else resolve(files)
      }
    )
  )

  // Add files to the test suite
  files.forEach(f => mocha.addFile(path.resolve(testsRoot, f)))

  const failures: number = await new Promise(resolve => mocha.run(resolve))
  await nyc.writeCoverageFile()

  if (failures > 0) {
    throw new Error(`${failures} tests failed.`)
  }
}
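To wire the runner into the flow described above (delete .nyc_output, run the tests, then report), the package scripts could look roughly like this; the script names and the rimraf dependency are illustrative, not taken from the project:

```json
{
  "scripts": {
    "pretest": "rimraf .nyc_output",
    "test": "node ./node_modules/vscode/bin/test",
    "posttest": "nyc report"
  }
}
```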
Add custom test runner
See this post for more information, you can just copy the test runner code to your project's test/index.ts file.
Demo azure pipeline configurations
variables:
  system.debug: true

jobs:
  - job: Windows
    pool:
      name: Hosted VS2017
      demands: npm
    steps:
      - task: NodeTool@0
        displayName: 'Use Node 12.3.1'
        inputs:
          versionSpec: 12.3.1
      - task: Npm@1
        displayName: 'Install dependencies'
        inputs:
          verbose: false
      - task: Npm@1
        displayName: 'Compile sources and run tests'
        inputs:
          command: custom
          verbose: false
          customCommand: 'test'
      # https://stackoverflow.com/questions/45602358/lcov-info-has-absolute-path-for-sf
      - script: "sed -i -- 's/..\\..\\//g' coverage/lcov.info && npm run coveralls"
        displayName: 'Publish code coverage'
        env:
          COVERALLS_SERVICE_NAME: $(COVERALLS_SERVICE_NAME)
          COVERALLS_REPO_TOKEN: $(COVERALLS_REPO_TOKEN)
      - script: 'npm install -g vsce && vsce package'
        displayName: 'Build artifact'
      - task: CopyFiles@2
        inputs:
          contents: '*.vsix'
          TargetFolder: '$(Build.ArtifactStagingDirectory)'
      - task: PublishBuildArtifacts@1
        inputs:
          pathtoPublish: '$(Build.ArtifactStagingDirectory)'
          artifactName: vs-picgo-dev-build

trigger:
  branches:
    include:
      - '*' # must quote since "*" is a YAML reserved character; we want a string

pr:
  - dev*
Note that you have to use sed to delete the ..\..\ prefix of SF paths in lcov.info:
Before:
SF:..\..\src\vs-picgo\index.ts
After:
SF:src\vs-picgo\index.ts
Demo project: https://github.com/PicGo/vs-picgo
