Code coverage with Mocha - node.js

I am using Mocha to test my Node.js application, and I cannot figure out how to get code coverage for it. I tried googling but did not find a proper tutorial. Please help.

You need an additional library for code coverage, and you are going to be blown away by how powerful and easy istanbul is. Try the following, after you get your mocha tests to pass:
npm install nyc
Now, simply place the command nyc in front of your existing test command, for example:
{
  "scripts": {
    "test": "nyc mocha"
  }
}

Now (2023) the preferred way to use istanbul is via its "state of the art command line interface" nyc.
Setup
First, install it in your project with
npm i nyc --save-dev
Then, if you have an npm-based project, just change the test script inside the scripts object of your package.json file to run your mocha tests with code coverage:
{
  "scripts": {
    "test": "nyc --reporter=text mocha"
  }
}
Run
Now run your tests
npm test
and you will see a coverage table in your console, just after your test output.
Customization
HTML report
Just use
nyc --reporter=html
instead of the text reporter. It will produce a report at ./coverage/index.html.
Report formats
Istanbul supports a wide range of report formats. Just look at its reports library to find the ones most useful for you.
Add a --reporter=REPORTER_NAME option for each format you want.
For example, with
nyc --reporter=html --reporter=text
you will have both the console and the html report.
Don't run coverage with npm test
Just add another script in your package.json and leave the test script with only your test runner (e.g. mocha):
{
  "scripts": {
    "test": "mocha",
    "test-with-coverage": "nyc --reporter=text mocha"
  }
}
Now run this custom script
npm run test-with-coverage
to run tests with code coverage.
Force tests to fail if code coverage is low
Fail if the total code coverage is below 90%:
nyc --check-coverage --lines 90
Fail if the code coverage of at least one file is below 90%:
nyc --check-coverage --lines 90 --per-file
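These thresholds don't have to live on the command line. As a minimal sketch (the numbers are only illustrative), you could put them in a .nycrc in your project root and then run plain nyc mocha:
{
  "check-coverage": true,
  "lines": 90,
  "branches": 80,
  "functions": 80,
  "statements": 90,
  "reporter": ["text", "html"]
}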

Blanket.js works perfectly too.
npm install --save-dev blanket
Then add the following at the top of your test/tests.js:
require('blanket')({
  pattern: function (filename) {
    return !/node_modules/.test(filename);
  }
});
Then run mocha -R html-cov > coverage.html
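If you don't want to retype the redirect every time, you could wrap it in an npm script; a small sketch (the coverage script name is just a suggestion):
"scripts": {
  "test": "mocha",
  "coverage": "mocha -R html-cov > coverage.html"
}
Then npm run coverage writes the HTML report, assuming the require('blanket') hook above is loaded by your tests.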

The accepted answer (nyc) does not work if you are using ES modules (ESM).
c8 appears to be the best solution now; it leverages built-in Node.js capabilities and uses istanbul (like nyc, and shares the same config files).
npm install -g c8
c8 mocha
It will use .nycrc for configuration. A sample configuration I'm using is:
{
  "all": true,
  "exclude": ["test"],
  "output": "reports",
  "reporter": [
    "html",
    "text"
  ]
}
(Note: I was pointed to c8 by an answer to another question https://stackoverflow.com/a/69846825/1949430)
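If you'd rather not install c8 globally (so CI can pick it up from devDependencies), something like the following should also work; this is only a sketch, and the script names are my own:
npm install --save-dev c8
"scripts": {
  "test": "mocha",
  "coverage": "c8 --reporter=text --reporter=html mocha"
}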

Related

Get combined code coverage for cucumber-js and mochaJS test cases using NYC

In our codebase, we have acceptance tests written in cucumber-js and unit tests written in Mocha. We run them individually with:
Mocha - "test:mocha-coverage": "nyc npm run test"
Cucumber - "test:cucumber-coverage": "nyc npm run test:cucumber"
Now we want combined code coverage from these two. I have tried running both test suites together under nyc, but that did not work. Has anyone done this?
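One approach worth trying (this is a sketch, not something from the original thread): run each suite under nyc with --silent and --no-clean so both write their raw coverage into the same .nyc_output directory, then generate a single report at the end. The coverage:combined script name is just a suggestion:
"scripts": {
  "test:mocha-coverage": "nyc --silent npm run test",
  "test:cucumber-coverage": "nyc --silent --no-clean npm run test:cucumber",
  "coverage:combined": "npm run test:mocha-coverage && npm run test:cucumber-coverage && nyc report --reporter=text --reporter=html"
}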

Testing via npm command line

I have two types of test suites: normal and coverage.
At present, I run one of them via npm test and the other via npm start:
"scripts": {
"test": "node scripts/run-truffle-tests.js",
"start": "node scripts/run-sol-coverage.js"
}
I have a feeling that npm start was not originally designated for this purpose.
Is there a better way to implement this?
I was thinking of passing an argument to npm test, but I'm not sure that npm would pass it on to the script which it is set to invoke.
Add more scripts.
I usually reserve test for the actual, full, single-run unit tests so it works with CI, and add other scripts for variations:
{
  "scripts": {
    "test": "node scripts/run-truffle-tests.js && npm run test:coverage",
    "test:continuous": "karma start some.config --run-continuous",
    "test:coverage": "node scripts/run-sol-coverage.js",
    "start": "node index.js"
  }
}
You can also chain commands with &&, which runs them in sequence and propagates the "total" error code. In other words, the test script above runs both the unit tests and the coverage tests; if either of them returns a non-zero exit code, npm will consider the whole test process to have failed.
Bear in mind that custom scripts, i.e. scripts not named exactly start, test, or one of the other designated names listed in the npm docs (npm#scripts), must be run with
npm run scriptname
instead of just
npm scriptname
So in my above example, you would test coverage with:
npm run test:coverage
Also, the : is just a convention. As far as I know it's not special.
Additionally, you can
pass [argument] on to the script which it is set to invoke
Whenever you use npm test, what basically happens is that npm runs whatever string is set in the package.json's scripts.test as a process, as though you had typed that string into the shell yourself. It then looks at the return code: if it's 0, it reports that everything is OK; if it's non-zero, it prints an error.
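Concretely, npm forwards anything you put after a bare -- to the script's command line, so you can pass arguments without defining extra scripts. For example, assuming a test script that simply runs mocha (not the asker's exact setup):
npm test -- --grep login    # npm runs: mocha --grep login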

jest running tests appending s to the command

I want to ask the community if they have ever seen this issue before.
When running a simple "npm t" command, I am getting the following error:
GTaylor#slayer MINGW64 /d/slayer/packages/components (feature/HUBBLU-54)
$ npm t
> fl-components#1.0.0 test D:\slayer\packages\components
> jest --config=jest.json
s" --config=jest.json was unexpected at this time.
D:\slayer\packages\components>)s" --config=jest.json
npm ERR! Test failed. See above for more details.
the script is
"scripts": {
"build": "tsc -d",
"test": "jest --config=jest.json"
}
If I run the command directly, it works. I also have another project (in another solution) that works.
This is a Lerna project with multiple packages, and they all suffer from the same issue. Note that this was working at one point.
I have no idea where the "s" is coming from. It feels like it's trying to run a follow-on command... but where are the "s" and the new lines coming from?
"jest --config=jest.json\n\ns -config=jest.json"
Please help before I go mad... :)

How to separate unit and integration/long running tests on Jest?

Currently, I have two folders: __tests__ for unit (fast) tests and __integration__ for slow tests.
Then, in package.json:
{
  "scripts": {
    "test": "jest",
    "test:integration": "jest -c '{}'",
    ...
  },
  "jest": {
    "testPathIgnorePatterns": ["/node_modules/", "__integration__"]
  }
}
So, when I want to do TDD I run just npm test, and when I want to test the entire project, npm run test:integration.
As Jest is offered as a "no configuration" test framework, I was wondering whether there's a better (or more proper) way to configure this.
Thank you.
Quoting from this post.
You can try naming files like:
index.unit.test.js and api.int.test.js
And with Jest’s pattern matching feature, it makes it simple to run
them separately as well. For unit testing run jest unit and for
integration testing run jest int.
You can define the file structure/location based on your preferences, since pattern matching on the file name is how Jest knows what to run.
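To make those patterns a one-word command for everyone on the team, you could expose them as npm scripts; a sketch (the script names are just a suggestion):
"scripts": {
  "test": "jest unit",
  "test:integration": "jest int"
}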
Also see jest cli documentation about npm scripts:
If you run Jest via npm test, you can still use the command line
arguments by inserting a -- between npm test and the Jest arguments
Have you tried jest --watch for TDD? It runs only files related to your git changes, runs failing tests first, and heavily utilises the cache for speed.
Other than that, jest -c accepts a path, not a string. You should be good with jest -c jest-integration-config.json, provided that jest-integration-config.json sits in your project's root.
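For completeness, a minimal sketch of such a jest-integration-config.json that only picks up the slow suite; the testMatch glob assumes the __integration__ folder and *.test.js naming described in the question:
{
  "testMatch": ["**/__integration__/**/*.test.js"]
}
Note that when you pass -c, Jest uses only that config, so the testPathIgnorePatterns from package.json no longer applies.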

Send data to Coveralls only from Travis, not when testing locally

I have an app (https://github.com/idmillington/dendry) that uses Travis CI to monitor build status. I use Istanbul to generate a coverage report, and I'd like to send it to Coveralls to generate a coverage badge for the README.
All of this I can get working. But...
When I run npm test locally, I don't want to send Coveralls the coverage data; I'm typically running npm test dozens of times for each commit. But when I push and Travis does its thing, I'd like Travis to update the coverage for me.
I could have something like this in my package.json:
"scripts": {
"test": "./node_modules/.bin/istanbul test ./node_modules/.bin/_mocha",
}
That's fine locally and doesn't update Coveralls, but then Travis won't update Coveralls either. Or I could do:
"scripts": {
"test": "./node_modules/.bin/istanbul test ./node_modules/.bin/_mocha && ./node_modules/coveralls/bin/coveralls.js < ./coverage/lcov.info",
}
Which is perfect for Travis, but tries to push data to Coveralls every time I run npm test locally.
As far as I can tell, I can't ask Travis to run something other than npm test.
I am unwilling to ask any potential users or contributors to remember to test using
$ npm run-script test-local
or some such, especially as running npm test would generate an upload error without the correct private key for Coveralls.
Is there a way to get the right behavior here?
The answer, as it turns out, was frighteningly simple. Travis does allow you to call whatever script you like as it runs, so I added this to my .travis.yml file:
script: npm run-script test-on-travis
so in package.json I could define:
"scripts": {
"test": "./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha",
"test-on-travis": "./node_modules/.bin/istanbul cover --report lcovonly ./node_modules/.bin/_mocha && cat ./coverage/lcov.info | ./node_modules/coveralls/bin/coveralls.js"
}
and everything works fine.
