I have my e2e tests (using WebDriverIO) running on a GitLab pipeline. I have already set up a hook to connect to Microsoft Teams, so whenever the tests fail, I receive this message in Teams:
Then I click on the pipeline ID and check the specs to see which tests failed, and so on.
My question: is it possible (and how) to also include the specs in this message? For example, I would like the list of failed specs to be part of the message sent to Teams.
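One possible approach (a minimal sketch, not the existing setup): run a small Node script after the test job that collects the failed spec names and posts them to the Teams incoming webhook. The TEAMS_WEBHOOK_URL variable, the FAILED_SPECS environment variable, and the way failed specs are gathered from the WebDriverIO results are all assumptions here.

// post-failed-specs.js — hypothetical helper, run after the WebDriverIO tests.
// Assumes TEAMS_WEBHOOK_URL holds a Teams incoming-webhook URL (CI variable)
// and FAILED_SPECS is a newline-separated list of failed spec files produced earlier.
const https = require('https');

const failedSpecs = (process.env.FAILED_SPECS || '').split('\n').filter(Boolean);
const payload = JSON.stringify({
  text: 'Pipeline failed. Failed specs:\n' + failedSpecs.map(s => '- ' + s).join('\n'),
});

const req = https.request(process.env.TEAMS_WEBHOOK_URL, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload),
  },
}, (res) => {
  console.log('Teams webhook responded with status', res.statusCode);
});
req.on('error', (err) => console.error('Failed to post to Teams:', err));
req.write(payload);
req.end();

In .gitlab-ci.yml this could run in an after_script section, or in a dedicated job with when: on_failure, so it only fires when the tests fail.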
Related
I have a Node.js application that already has unit tests using the Mocha framework; they check the functions individually. These tests are integrated into the CI/CD pipeline in Bamboo, so if there is an error, it stops the build job and alerts the user who pushed the change.
Now I have a requirement to validate a JSON file that is available in one of the S3 buckets. The application downloads the file once it is started in the local environment. I have unit tests that check whether the downloading functionality works, and it is working fine. For the validation itself, I am a little confused about whether I should add it as a unit test or an integration test. I am new to QA and would like to do it the right way. As of now, there are no integration tests in place (no tests are checking the API endpoints). It would be helpful if someone could point me in the right direction, and also suggest which framework to use with Node.js for writing integration tests.
I have the following code that is used for testing the download functionality.
it('Download file from S3', (done) => {
  s3Service.getJSONFile('', '', Date.now())
    .then((data) => {
      assert.equal(data, "JSON File Download Success");
      done();
    })
    .catch((error) => {
      // Fail the test instead of letting it time out
      done(new Error("Error in getJSONFileFromS3: " + JSON.stringify(error)));
    });
});
I have a function validateJSON for validating the JSON file and its contents. I am not sure whether I should call this function from a unit test so that it returns true or false; I think in that case the unit test would only check whether the validation function itself works, not the validity of the file. What I need is for my test to succeed if the JSON file is valid and to fail if it is not, so that the build is stopped. By the way, I don't have an API endpoint for the JSON validation.
It will be helpful if someone can show me an example of how these types of scenarios should be addressed in testing.
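For illustration, here is a minimal sketch of how this could be exercised as an integration-style test: download the real file, then assert that it is valid. The module paths, the local download path, and a boolean-returning validateJSON are assumptions based on the description above, not the actual project code.

// Integration-style test: uses the real S3 download plus the real validator,
// so the build fails when the file in the bucket is invalid.
const assert = require('assert');
const fs = require('fs');
// Hypothetical module paths; adjust to the project's actual layout.
const s3Service = require('../services/s3Service');
const { validateJSON } = require('../utils/validateJSON');

describe('S3 JSON file validation', function () {
  this.timeout(10000); // real network call, allow extra time

  it('downloaded JSON file passes validation', async () => {
    await s3Service.getJSONFile('', '', Date.now());
    // Assumption: the download step writes the file to a known local path.
    const contents = fs.readFileSync('./downloads/config.json', 'utf8');
    assert.strictEqual(validateJSON(contents), true, 'JSON file failed validation');
  });
});

Because this touches the network and a real bucket, it belongs with integration tests rather than unit tests; Mocha itself is enough to run it, ideally as a separate test stage in the Bamboo plan.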
I'm trying to get a list of all tests run from an Azure Pipeline build through the Azure REST API. I'm trying to use this API call but get an empty list every time:
GET https://dev.azure.com/{organization}/{project}/_apis/test/runs?buildUri={buildUri}&$top={$top}&api-version=6.0
where I'm using the BuildID as the buildUri. Should I be using something else as the buildUri? I only want the test runs from one build pipeline, not from all pipelines in the project. Even better would be getting only the tests from a single run of the pipeline.
Currently, the only way I've found to solve this is to use the above call to get all runs (none of them have a Build property with data), then use the IDs from that call to make the following API call to find out which build each run belongs to, and filter manually from there. However, this means far more API calls and downloaded data than I wanted.
GET https://dev.azure.com/{organization}/{project}/_apis/test/runs/{runId}?api-version=6.0
I want a simple way to get all test runs from a build ID.
As suggested by Xue Meng:
To get the results of each test run:
GET https://vstmr.dev.azure.com/{organization name}/{project name}/_apis/testresults/resultsbypipeline?pipelineId={buildID}&%24top=20000&api-version=5.2-preview.1
To get the result details for the build, grouped by test run (this example filters for failed outcomes):
GET https://vstmr.dev.azure.com/{organization name}/{project name}/_apis/testresults/resultdetailsbybuild?buildId={buildID}&publishContext=CI&groupBy=TestRun&%24filter=Outcome%20eq%20Failed&%24orderby=&shouldIncludeResults=true&queryRunSummaryForInProgress=false&api-version=5.2-preview.1
According to Chubsdad, this also seems to work for getting the test results by buildID:
GET https://vstmr.dev.azure.com/{org}/{proj}/_apis/testresults/resultdetailsbybuild?buildId={id}&groupBy=TestRun
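For completeness, a rough sketch of calling that endpoint from Node with a personal access token (PAT): the organization, project, and build ID values are placeholders, and the api-version is taken from the call above.

// Node 18+ (global fetch). Azure DevOps accepts a PAT via Basic auth with an empty username.
const org = 'myorg';          // placeholder
const project = 'myproject';  // placeholder
const buildId = 12345;        // placeholder
const pat = process.env.AZURE_DEVOPS_PAT;

const url = `https://vstmr.dev.azure.com/${org}/${project}/_apis/testresults/resultdetailsbybuild` +
            `?buildId=${buildId}&groupBy=TestRun&api-version=5.2-preview.1`;

fetch(url, {
  headers: {
    Authorization: 'Basic ' + Buffer.from(':' + pat).toString('base64'),
  },
})
  .then((res) => res.json())
  .then((data) => console.log(JSON.stringify(data, null, 2)))
  .catch((err) => console.error('Request failed:', err));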
I have a unit test written in Visual Studio with a login flow, and it runs perfectly when run locally. But when I push the code to the Azure repo and create a new build in an Azure pipeline, the "VSTest" task gives this warning:
"Invalid results file. Make sure the result format of the file
'C:\Agents\Build\Agent1_work\45\s\TestResults\MS$_MS_2019-04-21_12_24_59.trx'
matches 'VSTest' test results format."
And after the build completes, no tests are available in the Tests tab.
So I created a Zapier app with an action that uses the Node package mongoose to update a document in a remote DB. When I test it locally with zapier test, it completes the update, but when I test it on zapier.com in a Zap, I get the error "Cannot find module './drivers/node-mongodb-native/connection'".
Clippings from zapier.com
I am using CruiseControl.NET version 1.5.7256.1 and I have integrated the project and unit tests. The current behaviour is:
1) If the code is checked in successfully and all unit test functions pass, then the email notification says the build was successful.
2) If the code is checked in successfully and any unit test function fails, then the email notification says the build failed.
The expected behaviour is: if the code is checked in successfully and any unit test function fails, then the email notification should say that the build passed but the unit tests failed.
In other words, I want to customize the build emails. Is that possible?
Please help.
Thanks,
Nilesh
Hi, I made the changes accordingly and added a condition to check for test failures, but it is not showing up in the email notification. Am I forgetting something?
Yes, it's possible to customize email output by modifying ccnet.exe.config.
In this file, XSL stylesheets are applied to build the email.
The dashboard.config file customizes the web dashboard.
Usually a successful build means every action succeeded, from compilation through tests and deployment. Maybe you should reconsider the idea of a warning-only email when tests fail.