Documentation is pretty sparse on collecting coverage with Istanbul for integration tests. When I run my Mocha tests, I get "No coverage information was collected, exit without writing coverage information".
The first thing I do is instrument all my source code:
✗ istanbul instrument . -o .instrument
In my case, this is a Dockerized REST microservice, and I have written Mocha tests that run against it to validate it once it is deployed. My expectation is that Istanbul will give me code coverage against the source of that Node service.
As the second step, I run this command to start Node on my instrumented code:
✗ istanbul cover --report none .instrument/server.js
After that, I run my tests from my main src directory as follows (with results):
✗ istanbul cover --report none --dir coverage/unit node_modules/.bin/_mocha -- -R spec ./.instrument/test/** --recursive
swagger-tests
#createPet
✓ should add a new pet (15226ms)
#getPets
✓ should exist and return an Array (2378ms)
✓ should have at least 1 pet in list (2500ms)
✓ should return error if search not name or id
✓ should be sorted by ID (3041ms)
✓ should be sorted by ID even if no parameter (2715ms)
✓ should be only available pets (2647ms)
#getPetsSortedByName
✓ should be sorted by name (85822ms)
#deletePet
✓ should delete a pet (159ms)
9 passing (2m)
No coverage information was collected, exit without writing coverage information
When I run istanbul report, it obviously has nothing to report on.
What am I missing?
See develop branch of this project to reproduce issue.
The owner of istanbul helped me resolve this. I was able to get things going by performing the following steps (a full command sequence is sketched after the list):
Skip instrumenting the code; it's not needed
Call istanbul with --handle-sigint as heckj recommended, and remove the --report none flag
Once your server is up, just run the tests as normal: ./node_modules/.bin/_mocha -R spec ./test/** --recursive
Shut down the server from step 2 to output the coverage
View the HTML report with open coverage/lcov-report/index.html
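Putting those steps together, the whole flow looks roughly like this (a sketch; it assumes your entry point is server.js, as in the question, and istanbul's default coverage output directory):
✗ istanbul cover --handle-sigint server.js
✗ ./node_modules/.bin/_mocha -R spec ./test/** --recursive
✗ kill -SIGINT <server pid>        # or Ctrl+C the server; istanbul writes coverage on shutdown
✗ open coverage/lcov-report/index.html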
It looks like you were following the blog post I was just looking at when trying to figure out how to attack this same problem:
Javascript Integration Tests Coverage with Istanbul
I don't know what specifically is different between what you've posted above and what that blog post identifies. One thing to check is whether coverage*.json files are getting generated when the code is executed. I'm not sure exactly when Istanbul generates those files, so you may need to terminate the running instrumented code first. There's also a mention of a --handle-sigint option on the cover command in the README that hints at needing to send a manual SIGINT to get coverage information out of a long-running process.
Looking at one of the bugs, there's obviously been some pain with this in the past, and some versions of istanbul had problems with "use strict" mode in the NodeJS code.
So my recommendation is to run all the tests, make sure the processes have all terminated before running the report command, and check whether the coverage*.json files were written somewhere. Beyond that, it might make sense to take this as an issue to the GitHub repo, where there appears to be good activity and answers.
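For instance, right after the processes exit you can sanity-check whether Istanbul wrote anything (a sketch, assuming the default output directory):
✗ ls coverage/coverage*.json
If nothing is listed there, no coverage was collected and the report command will have nothing to work with.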
Related
We use Cypress for thorough e2e testing on our site.
The Tech stack is React + Node(koa.js).
We have high test coverage, since we tend to mock most of the user actions (most of the CRUD methods as well).
It sometimes happens that a test suite fails during execution (or is interrupted by something), so we end up with a duplicate entry on the next run (the create test fails). Then I need to manually delete the testing entries from the site and re-run the pipeline.
We want to make sure that we have a clean database for testing on each run. I could use some advice. Thanks in advance!
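One common approach (a sketch, not specific to your stack) is to reset the test data before the suite runs, e.g. from a Cypress before hook that calls a test-only cleanup endpoint on the Node server; the /test/reset route here is hypothetical:

// Runs once before the spec; /test/reset is a hypothetical, test-only endpoint
before(() => {
  // Ask the server to wipe (or re-seed) the test database so every run starts clean
  cy.request('POST', '/test/reset');
});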
I really love cargo and how easy it is to write unit tests.
However, it seems like its testing functionality is fairly basic. What I'd like to be able to do is have named groups of tests somehow. What I am trying to accomplish is to have a default set of tests that executes when you run a basic cargo test. However, some of my tests take much longer to run, so I'd like to be able to move these into a group of extended tests that I can run with some command like cargo test --extended, plus the ability to run all the tests at once easily. I also have a third group of tests that I have currently implemented as ignored tests so that I can run them separately.
Even though all my tests are effectively unit tests, I tried to accomplish this by creating a tests directory as you would for integration tests. However, it seems that the basic cargo test command wants to run all of these tests, i.e. the normal tests that are part of my crate as well as the extended tests in the tests directory.
Does anyone know how to accomplish this or whether there is some crate that provides this functionality?
You could use a combination of feature flags and the #[ignore] attribute, as mentioned here: https://www.reddit.com/r/rust/comments/3i1nki/how_to_skip_expensive_tests_with_cargo_test/
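For example (a sketch): mark the slow tests with #[ignore] so the default cargo test skips them, then select them explicitly via libtest's flags (--include-ignored needs a reasonably recent toolchain):
✗ cargo test                        # default: only the fast tests
✗ cargo test -- --ignored           # only the #[ignore]d (extended) tests
✗ cargo test -- --include-ignored   # everything at once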
Is there a code coverage tool that detects how much code is covered? I don't want to use any testing framework here; just from the way users use the app, it should be able to give real-time code coverage details. Is that possible?
Well, yes – just run your application under a suitable coverage runner such as nyc:
E.g., if you'd normally start your app with npm start, install nyc and run:
nyc --reporter=lcov npm start
Of course, you'll need to let it run for a while (so your users get to cover your app), and then capture the generated LCOV/HTML report.
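Since nyc keeps its raw data in .nyc_output, you can also regenerate a report later without re-running the app:
✗ nyc report --reporter=html
✗ open coverage/index.html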
I have a test suite and because it contains some expensive tests, I disable some of them for our CI. However once a day, I'd like to run the whole test suite.
The issue is that the two runs execute different sets of test files against the same snapshots, which causes failures: the whole-suite run is missing some snapshots. If I generate them, the regular CI run fails because it complains about snapshots being removed (i.e. the ones from the whole-suite run that are not checked on CI).
What would be the proper way to handle this with jest?
Thanks!
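One way to express the two runs (a sketch; the flags are standard Jest options, but the "expensive" path pattern is hypothetical and depends on how your slow tests are named):
✗ jest --testPathIgnorePatterns=expensive   # regular CI run, slow tests skipped
✗ jest                                      # daily run of the whole suite
With both snapshot sets committed, and as long as the partial run isn't invoked with -u, the extra snapshots should only be reported as obsolete rather than deleted.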
I have a setup basically described here - http://karma-runner.github.io/0.8/plus/RequireJS.html
The problem is that I can't see the source files of my tests in Chrome DevTools, so I can't debug them. Adding debugger; works, but it is very awkward, almost unusable, since I can't browse any file other than the one where debugger; is currently paused.
It seems like Karma loads the files, parses them, wraps each test, and then unloads the files before the run.
ng-boilerplate has a grunt build that will put all your plain js files into a build directory for testing and debugging.
Take a look at the Gruntfile and karma/karma-unit.tpl.js for how this is done.
Running grunt watch will leave your browser in a state where you can debug all your tests. Just click the debug button, set your breakpoint(s), and reload the page.
Suddenly, you are debugging any or all of your JS files.
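As a concrete example of that workflow: with the Karma server running, clicking DEBUG opens http://localhost:9876/debug.html (assuming the default port), which loads your source and test files directly into a single page so Chrome DevTools can see them and breakpoints work.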
If you need to debug your tests deeply, that is generally an indicator of badly organized code or badly written unit tests. If you follow a TDD workflow, taking small steps will help you prevent major issues with your code. I warmly recommend you watch this video: http://blog.testdouble.com/posts/2013-10-03-javascript-testing-tactics.html?utm_source=javascriptweekly&utm_medium=email (it doesn't use Karma, but you should watch it for the workflow/principles presented)
Then, if you really want to debug your test code, nothing beats the browser. As such, you should set up your tests so that they can be run both in Karma and in the browser. We implemented this for QUnit, Jasmine, and Mocha in the Backbone-Boilerplate. Feel free to base your own environment on these settings.