I'm confused: I see people using both, and they're both code coverage reporting tools. Is it just that people use the Istanbul functionality but want the Coveralls UI instead of the Istanbul HTML output files, i.e. as a nicer coverage front end? Is that the reason to use both?
Istanbul generates coverage information and coveralls provides historical coverage reporting. Istanbul provides a snapshot of where you are; coveralls tells you where you have been.
Typically, you use coveralls as part of a CI/CD pipeline: local build, push to Git, Travis build, push results to coveralls, ...
When you build your project, you look at your LCOV HTML report to review coverage. How do you know whether your coverage has increased or decreased? Look at coveralls for the history.
Shields.io provides badges for Coveralls coverage that you can display in your GitHub README.md, which also shows up on npmjs.com if you publish there. It is a nice quality indicator for people using your product, and equally nice as a note-to-self when your coverage is slipping (badges are colored and show a coverage percentage).
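To make the pipeline concrete, here is a rough sketch of the usual wiring (assuming the node-coveralls package and istanbul's default lcov output path; your CI config will differ in the details):
# generate coverage on the CI machine; the default lcov reporter writes coverage/lcov.info
istanbul cover ./node_modules/.bin/_mocha -- -R spec
# feed the LCOV data to coveralls (needs COVERALLS_REPO_TOKEN or a supported CI environment)
cat ./coverage/lcov.info | ./node_modules/coveralls/bin/coveralls.js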
Related
Is there a code coverage tool that detects how much code is covered without any testing framework involved? Just from the way users use the application, it should be able to give real-time code coverage details. Is that possible?
Well, yes – just run your application under a suitable coverage runner such as nyc:
E.g. if you'd normally start your app with npm start, install nyc and run:
nyc --reporter=lcov npm start
Of course you'll need to let it run for a while (so your users actually exercise your app), and then capture the generated LCOV/HTML report.
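As a rough follow-up sketch (assuming nyc's default .nyc_output data directory), once the app has been stopped you can rebuild the reports from the collected data without re-running it:
# after stopping the app (e.g. Ctrl-C), regenerate LCOV and HTML reports from the raw coverage data
nyc report --reporter=lcov --reporter=html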
I have a node.js project running mocha tests, and I'm generating a coverage report using blanket. I've managed to get the coverage report generated, but I'm not sure how to generate a report that can be consumed by and viewed in Jenkins. Any suggestions? I'm looking for a result similar to the Cobertura plugin (https://wiki.jenkins-ci.org/display/JENKINS/Cobertura+Plugin).
Edit: Sorry, I misread your question; I don't know whether the coverage report gets published along with the xunit report, so the following might not help you.
The XUnit reporter should create a report that can be parsed by Jenkins.
Check out this blog post.
Also, have a look at the XUnit Plugin; it allows you to specify the parser for various kinds of report formats.
Quote for persistence:
Source https://blog.dylants.com/2013/06/21/jenkins-and-node/
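For reference, a minimal sketch of that approach (assuming mocha's built-in xunit reporter and a test-results.xml path your Jenkins job is configured to publish):
# write xUnit-style XML that the Jenkins JUnit/xUnit plugins can parse
./node_modules/.bin/mocha -R xunit test/ > test-results.xml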
Documentation is pretty sparse on doing coverage with istanbul for integration tests. When I run through my mocha tests, I get "No coverage information was collected, exit without writing coverage information".
The first thing I do is instrument all my source code:
✗ istanbul instrument . -o .instrument
In my case, this is a Dockerized REST microservice, and I have written Mocha tests that run against it to validate it once it is deployed. My expectation is that istanbul will give me code coverage for the source of that Node service.
As the second step, I run this command to start node on my instrumented code:
✗ istanbul cover --report none .instrument/server.js
After that, I run my tests from my main src directory as follows (with results):
✗ istanbul cover --report none --dir coverage/unit node_modules/.bin/_mocha -- -R spec ./.instrument/test/** --recursive
swagger-tests
#createPet
✓ should add a new pet (15226ms)
#getPets
✓ should exist and return an Array (2378ms)
✓ should have at least 1 pet in list (2500ms)
✓ should return error if search not name or id
✓ should be sorted by ID (3041ms)
✓ should be sorted by ID even if no parameter (2715ms)
✓ should be only available pets (2647ms)
#getPetsSortedByName
✓ should be sorted by name (85822ms)
#deletePet
✓ should delete a pet (159ms)
9 passing (2m)
No coverage information was collected, exit without writing coverage information
When I run istanbul report, it obviously has nothing to report on.
What am I missing?
See develop branch of this project to reproduce issue.
The owner of istanbul helped me to resolve this. I was able to get things going by performing the following steps:
Skip instrumenting the code; it's not needed
Call istanbul with --handle-sigint as #heckj recommended, and remove the --report none flag
Once your server is up, just run tests as normal: ./node_modules/.bin/_mocha -R spec ./test/** --recursive
Shutdown the server from step 2 to output the coverage
View the HTML report with open coverage/lcov-report/index.html (a full end-to-end sketch follows below)
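Putting those steps together, a rough end-to-end sketch (assuming server.js from the question and a Bourne-style shell):
# 1. start the unmodified server under istanbul; --handle-sigint makes it flush coverage on SIGINT
istanbul cover --handle-sigint server.js &
SERVER_PID=$!
sleep 2   # give the server a moment to come up
# 2. run the tests against the running server
./node_modules/.bin/_mocha -R spec ./test/** --recursive
# 3. shut the server down so coverage is written, then open the report
kill -INT "$SERVER_PID"
open coverage/lcov-report/index.html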
It looks like you were following the blog post I was just looking at while trying to figure out how to attack this same problem:
Javascript Integration Tests Coverage with Istanbul
I don't know specifically what is different between what you've posted above and what that blog post identifies. One thing to check is whether coverage*.json files are getting generated when the code is executed. I'm not sure exactly when Istanbul writes those files, so you may need to terminate the running instrumented code. There's also mention of a --handle-sigint option on the cover command in the README that hints at needing to send a manual SIGINT to get coverage information out of a long-running process.
Looking at one of the bugs, there's obviously been some pain with this in the past, and some versions of istanbul had problems with "use strict" mode in the NodeJS code.
So my recommendation is to run all the tests, then make sure all the processes have terminated before running the report command, and check whether the coverage*.json files are written somewhere. Beyond that, it might make sense to open an issue in the GitHub repo, where there appears to be good activity and good answers.
We would like to be able to deploy our code to azure and then run integration/acceptance tests on the deployed instances to validate the functionality, as using the emulator does not always give realistic results.
We would also like to have these tests generate code coverage reports which we could then merge in with the code coverage from our unit tests. We are using TeamCity as our build server with the built in dotcover as our code coverage tool.
Can we do this? Does anyone have any pointers on where to start?
Check out this video
Kudu can be extended to run Unit Tests and much more using Custom Deployment Scripts.
http://www.windowsazure.com/en-us/documentation/videos/custom-web-site-deployment-scripts-with-kudu/
Are there any extensions to HUnit or QuickCheck that allow a continuous integration system like Bamboo to do detailed reporting of test results?
So far, my best idea is to simply trigger the tests as part of a build script, and rely on the tests to fail with a non-zero exit code. This is effective for getting attention when a test fails, but confuses build failures with test failures and requires wading through console output to determine the problem's source.
If this is the best option with current tools, my thought is to write a reporting module for HUnit that would produce output in the JUnit XML format, then point the CI tool at it as though it were reporting on a Java project. This seems somewhat hackish, though, so I'd appreciate your thoughts both on existing options and directions for new development.
The test-framework package provides tools for integrating tests using different testing paradigms, including HUnit and QuickCheck, and its console test runner can be passed a flag that makes it produce JUnit-compatible XML. We use it with Jenkins for continuous integration.
Invocation example:
$ ./test --jxml=test-results.xml
I've just released a package that generates test suites based on modules containing QuickCheck properties: http://hackage.haskell.org/package/tasty-integrate
This is one step above test-framework/tasty at the moment, as it forcefully pulls/aggregates the properties off the filesystem instead of relying on per-file record keeping. I hope this helps your CI process.