Code Coverage for Istanbul Wrong when using Sandbox in Nodeunit - node.js

I have written a bunch of tests using nodeunit to test my code. In doing so I wanted to mock out the modules being required by the code under test. Rather than changing the code to make it more easily testable (mocks, inversion of control) when that wasn't needed, I used nodeunit's sandbox function.
Example
var nodeunit = require("nodeunit");

exports.MyTest = {
    test1: function(test) {
        var fakeGlobals = {
            require: function(filename) {
                if (filename == "CoolUtil.js") {
                    return { doit: function wasCool() { return true; } };
                } else {
                    return require(filename);
                }
            }
        };
        var testSubject = nodeunit.utils.sandbox("ModuleUnderTest.js", fakeGlobals);
        test.equals(42, testSubject.doSomethingCoolUsingCoolUtil(), "Worked");
        test.done();
    }
};
Istanbul is giving me the wrong coverage numbers. I tried the --post-require-hook flag, which is said to be meant for RequireJS; I'm fine with switching to RequireJS, but I haven't learned it yet.
test/node_modules/.bin/istanbul cover --v --hook-run-in-context --root test/node_modules/.bin/nodeunit -- --reporter junit --output target/results/unit_tests test
Has anybody been successful with nodeunit, istanbul and using the sandbox feature in nodeunit?

Related

Jest Run All Tests (include only/skip) in CI

During development we occasionally use skip or only to debug a particular test or test suite. We might then forget to revert those cases and push the code for a PR. I am looking for a way to detect this, or to automatically run all tests including skip and only tests, in our CI pipeline (using GitHub Actions). Either of the following would work:
Fail the test when there are skip or only tests.
Run all tests even for skip and only.
Very much appreciate any help.
I came up with a solution for the second part of the question, running all tests even for skip and only. I don't think it's an elegant solution, but it works and it's easy to implement.
First of all, you need to change the test runner to jest-circus if you are on a Jest version below 27.x. We need it so our custom test environment can use the handleTestEvent function to watch for setup events. To do so, install jest-circus with npm i jest-circus and then set the testRunner property in your jest.config.js:
//jest.config.js
module.exports = {
    testRunner: 'jest-circus/runner',
    ...
}
From Jest 27.0 the default test runner is jest-circus, so you can skip this step if you are on that version or higher.
Then you have to write a custom test environment. I suggest basing it on jsdom so that, for example, you also have access to the window object in your tests. To do so, run npm i jest-environment-jsdom in your terminal and then create the custom environment like so:
//custom-jsdom-environment.js
const JsDomEnvironment = require('jest-environment-jsdom')

class CustomJsDomEnvironment extends JsDomEnvironment {
    async handleTestEvent(event, state) {
        if(process.env.IS_CI === 'true' && event.name === 'setup') {
            this.global.describe.only = this.global.describe
            this.global.describe.skip = this.global.describe
            this.global.fdescribe = this.global.describe
            this.global.xdescribe = this.global.describe

            this.global.it.only = this.global.it
            this.global.it.skip = this.global.it
            this.global.fit = this.global.it
            this.global.xit = this.global.it

            this.global.test.only = this.global.test
            this.global.test.skip = this.global.test
            this.global.ftest = this.global.test
            this.global.xtest = this.global.test
        }
    }
}

module.exports = CustomJsDomEnvironment
And tell Jest to use it:
//jest.config.js
module.exports = {
    testRunner: 'jest-circus/runner',
    testEnvironment: 'path/to/custom/jsdom/environment.js',
    ...
}
Then you just have to set the IS_CI environment variable in your CI pipeline, and from then on all your skipped tests will run.
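For illustration, assuming a Unix shell in the CI runner, the variable could be set in an npm script that the pipeline calls (the test:ci script name is just an example, not part of the original setup):
//package.json
"scripts": {
    "test": "jest",
    "test:ci": "IS_CI=true jest"
}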
Also, in the custom test environment you could watch for skipped tests and throw an error when the runner finds skip/only. Unfortunately, throwing an error in this place won't fail a test; you would need to find a way to fail a test outside of a test.
//custom-jsdom-environment.js
const JsDomEnvironment = require('jest-environment-jsdom')
const path = require('path')

class CustomJsDomEnvironment extends JsDomEnvironment {
    constructor(config, context) {
        super(config, context)
        const testPath = context.testPath
        this.testFile = path.basename(testPath)
    }

    async handleTestEvent(event, state) {
        if(process.env.IS_CI === 'true' && event.name === 'add_test') {
            if(event.mode === 'skip' || event.mode === 'only') {
                const msg = `Run ${event.mode} test: '${event.testName}' in ${this.testFile}`
                throw new Error(msg)
            }
        }
    }
}

module.exports = CustomJsDomEnvironment

Run ava test.before() just once for all tests

I would like to use test.before() to bootstrap my tests. The setup I have tried does not work:
// bootstrap.js
const test = require('ava')

test.before(t => {
    // do this exactly once for all tests
})

module.exports = { test }

// test1.js
const { test } = require('../bootstrap')

test(t => { ... })
AVA will run the before() function before each test file. I could add a check within the before call to see whether it has already run, but I'd like to find a cleaner approach. I have tried using the require parameter with:
"ava": {
"require": [
"./test/run.js"
]
}
With:
// bootstrap.js
const test = require('ava')
module.exports = { test }

// run.js
const { test } = require('./bootstrap')
test.before(t => { })

// test1.js
const { test } = require('../bootstrap')
test(t => { ... })
But that just breaks with worker.setRunner is not a function. I'm not sure what it expects there.
AVA runs each test file in its own process. test.before() should be used to set up fixtures that are used just by the process it's called in.
It sounds like you want to do setup that is reused across your test files / processes. Ideally that's avoided since you can end up creating hard-to-detect dependencies between the execution of different tests.
Still, if this is what you need then I'd suggest using a pretest npm script, which is run automatically when you do npm test.
In your package.json you could run a setup script first...
"scripts": {
"test": "node setup-test-database.js && ava '*.test.js'"
}
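Since the suggestion above mentions a pretest script, an equivalent sketch would let npm run the setup automatically before the test script:
"scripts": {
    "pretest": "node setup-test-database.js",
    "test": "ava '*.test.js'"
}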
Then...
In that setup-test-database.js file, have it do all your bootstrappy needs, and save a test-config.json file with whatever you need to pass to the tests.
In each test you just need to add const config = require('./test-config.json'); and you'll have access to the data you need.
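A rough sketch of what setup-test-database.js might look like; the database-seeding part is a placeholder, only the test-config.json handoff comes from the answer:
// setup-test-database.js
const fs = require('fs');

// ...do all the bootstrapping here, e.g. create and seed a test database (placeholder)...
const config = { dbName: 'test_db_' + Date.now() };

// save whatever the tests need; they pick it up via require('./test-config.json')
fs.writeFileSync('test-config.json', JSON.stringify(config, null, 2));
console.log('test-config.json written');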

How to generate multiple reports with mocha?

I want to have the following reports:
coverage
spec
xunit
all running in a single mocha execution from my Grunt setup.
Currently I have to run the tests 3 times, each time to generate a different report(!).
So I use grunt-mocha-test with 2 configurations where only the reporter is different (once xunit-file and once spec).
And then I have grunt-mocha-istanbul that runs the tests yet again, and generates the coverage report.
I tried using
{
options: {
reporters : ['xunit-file', 'spec']
}
}
for grunt-mocha-test, at least to bring it down to 2, but that doesn't work either.
Reading the grunt-mocha-istanbul documentation, I can't seem to find any info about reporter configuration.
How can I resolve this?
Maybe this can help:
https://github.com/glenjamin/mocha-multi
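For illustration only (not part of the original answer): according to the mocha-multi README, it is driven by a multi environment variable mapping each reporter to a destination, with '-' meaning stdout, roughly like:
multi='spec=- xunit=results.xml' mocha -R mocha-multi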
AFAIK this is not supported in Mocha yet, but it is on its way:
https://github.com/mochajs/mocha/pull/1360
Hope this helps,
György
I ran into the same problem recently, and found nothing after looking around SO as well as GH issues. It seems that the topic of officially supporting multiple reporters keeps getting postponed over and over.
Having said that, a custom solution is quite easy, assuming the reporters you want to combine already exist. What I did was create a small and naive custom reporter, and use that reporter in the .mocharc.js config.
// junit-spec-reporter.js
const mocha = require("mocha");
const JUnit = require("mocha-junit-reporter");
const Spec = mocha.reporters.Spec;
const Base = mocha.reporters.Base;

function JunitSpecReporter(runner, options) {
    Base.call(this, runner, options);
    this._junitReporter = new JUnit(runner, options);
    this._specReporter = new Spec(runner, options);
    return this;
}
JunitSpecReporter.prototype.__proto__ = Base.prototype;

module.exports = JunitSpecReporter;

// .mocharc.js
module.exports = {
    reporter: './junit-spec-reporter.js',
    reporterOptions: {
        mochaFile: './tests-results/results.xml'
    }
};
The example above shows how to use both the spec and junit reporters.
More info on custom reporter: https://mochajs.org/api/tutorial-custom-reporter.html
Note that this is just a proof of concept and can be made prettier and more robust using a more generic approach (and TypeScript).
Update 14.9.2021
I have created a utility package for this: https://www.npmjs.com/package/@netatwork/mocha-utils
For reporting spec and xunit simultaneously, there's also an NPM package called spec-xunit-file.
In grunt:
grunt.initConfig({
mochaTest: {
test: {
options: {
reporter: 'spec-xunit-file',
...
},
...
}
}
...
});

Is there a way to know that nodeunit has finished all tests?

I need to run some code after nodeunit has successfully passed all tests.
I'm testing some Firebase wrappers, and the open Firebase reference keeps nodeunit from exiting after all tests are run.
I am looking for some hook or callback to run after all unit tests have passed, so I can terminate the Firebase process and nodeunit can exit.
I haven't found a right way to do it.
Here is my temporary solution:
//Put a *LAST* test to clear all if needed:
exports.last_test = function(test){
    //do_clear_all_things_if_needed();
    setTimeout(process.exit, 500); // exit in 500 milliseconds
    test.done();
};
In my case, this is used to make sure the DB connection or any network connections get killed either way. The reason it works is that nodeunit runs tests in series.
It's not the best way, maybe not even a good one, but it lets the tests exit.
For nodeunit 0.9.0
For a recent project, we counted the tests by iterating exports, then used tearDown to count the completions. After the last test exits, we call process.exit().
See the spec below for full details. Note that it goes at the end of the file (after all the tests have been added onto exports):
(function(exports) {
    // firebase is holding open a socket connection
    // this just ends the process to terminate it
    var total = 0, expectCount = countTests(exports);

    exports.tearDown = function(done) {
        if( ++total === expectCount ) {
            setTimeout(function() {
                process.exit();
            }, 500);
        }
        done();
    };

    function countTests(exports) {
        var count = 0;
        for(var key in exports) {
            if( key.match(/^test/) ) {
                count++;
            }
        }
        return count;
    }
})(exports);
As per the nodeunit docs, I can't seem to find a way to provide a callback after all tests have run.
I suggest that you use Grunt so you can create a test workflow with tasks, for example:
Install the command line tool: npm install -g grunt-cli
Install Grunt in your project: npm install grunt --save-dev
Install the nodeunit Grunt plugin: npm install grunt-contrib-nodeunit --save-dev
Create a Gruntfile.js like the following:
module.exports = function(grunt) {
    grunt.initConfig({
        nodeunit : {
            all : ['tests/*.js'] //point to where your tests are
        }
    });

    grunt.loadNpmTasks('grunt-contrib-nodeunit');

    grunt.registerTask('test', [
        'nodeunit'
    ]);
};
Create your custom task that will be run after the tests by changing your grunt file to the following:
var fs = require('fs');
var os = require('os');

module.exports = function(grunt) {
    grunt.initConfig({
        nodeunit : {
            all : ['tests/*.js'] //point to where your tests are
        }
    });

    grunt.loadNpmTasks('grunt-contrib-nodeunit');

    //this is just an example, you can do whatever you want
    grunt.registerTask('generate-build-json', 'Generates a build.json file containing date and time info of the build', function() {
        fs.writeFileSync('build.json', JSON.stringify({
            platform: os.platform(),
            arch: os.arch(),
            timestamp: new Date().toISOString()
        }, null, 4));

        grunt.log.writeln('File build.json created.');
    });

    grunt.registerTask('test', [
        'nodeunit',
        'generate-build-json'
    ]);
};
Run your test tasks with grunt test
I came across another way to deal with this. I have to say that all the answers here are correct. However, when inspecting Grunt I found out that it runs nodeunit tests via a reporter, and the reporter offers a callback that is invoked when all tests are finished. It can be done something like this:
In a folder test_scripts/, create some_test.js; it can contain something like this:
//loads the default reporter, but any other can be used
var reporter = require('nodeunit').reporters.default;
// safer exit, but process.exit(0) will do the same in most cases
var exit = require('exit');

reporter.run(['test/basic.js'], null, function(){
    console.log(' now the tests are finished');
    exit(0);
});
The script can be added to, let's say, package.json in the scripts object:
"scripts": {
"nodeunit": "node scripts/some_test.js",
},
Now it can be run with
npm run nodeunit
The tests in some_test.js can be chained, or they can be run one by one using npm.

Can Blanket.js work with Jasmine tests if the tests themselves are loaded with RequireJS?

We've been using Jasmine and RequireJS successfully together for unit testing, and are now looking to add code coverage, and I've been investigating Blanket.js for that purpose. I know that it nominally supports Jasmine and RequireJS, and I'm able to successfully use the "jasmine-requirejs" runner on GitHub, but this runner is using a slightly different approach than our model -- namely, it loads the test specs using a script tag in runner.html, whereas our approach has been to load the specs through RequireJS, like the following (which is the callback for a requirejs call in our runner):
var jasmineEnv = jasmine.getEnv();
jasmineEnv.updateInterval = 1000;

var htmlReporter = new jasmine.TrivialReporter();
var jUnitReporter = new jasmine.JUnitXmlReporter('../JasmineTests/');

jasmineEnv.addReporter(htmlReporter);
jasmineEnv.addReporter(jUnitReporter);

jasmineEnv.specFilter = function (spec) {
    return htmlReporter.specFilter(spec);
};

var specs = [];
specs.push('spec/models/MyModel');
specs.push('spec/views/MyModelView');

$(function () {
    require(specs, function () {
        jasmineEnv.execute();
    });
});
This approach works fine for simply doing unit testing, if I don't have blanket or jasmine-blanket as dependencies for the function above. If I add them (with require.config paths and shim), I can verify that they're successfully fetched, but all that appears to happen is that I get jasmine-blanket's overload of jasmine.getEnv().execute, which simply prints "waiting for blanket..." to the console. Nothing is triggering the tests themselves to be run anymore.
I do know that in our approach there's no way to provide the usual data-cover attributes, since RequireJS is doing the script loading rather than script tags, but I would have expected in this case that Blanket would at least calculate coverage for everything, not nothing. Is there a non-attribute-based way to specify the coverage pattern, and is there something else I need to do to trigger the actual test execution once jasmine-blanket is in the mix? Can Blanket be made to work with RequireJS loading the test specs?
I have gotten this working by requiring blanket-jasmine and then setting the options:
require.config({
    paths: {
        'jasmine': '...',
        'jasmine-html': '...',
        'blanket-jasmine': '...',
    },
    shim: {
        'jasmine': {
            exports: 'jasmine'
        },
        'jasmine-html': {
            exports: 'jasmine',
            deps: ['jasmine']
        },
        'blanket-jasmine': {
            exports: 'blanket',
            deps: ['jasmine']
        }
    }
});

require([
    'blanket-jasmine',
    'jasmine-html',
], function (blanket, jasmine) {
    blanket.options('filter', '...'); // data-cover-only
    blanket.options('branchTracking', true); // one of the data-cover-flags

    require(['myspec'], function() {
        var jasmineEnv = jasmine.getEnv();
        jasmineEnv.updateInterval = 250;

        var htmlReporter = new jasmine.HtmlReporter();
        jasmineEnv.addReporter(htmlReporter);
        jasmineEnv.specFilter = function (spec) {
            return htmlReporter.specFilter(spec);
        };

        jasmineEnv.addReporter(new jasmine.BlanketReporter());
        jasmineEnv.currentRunner().execute();
    });
});
The key lines are the addition of the BlanketReporter and the currentRunner().execute() call. The Blanket jasmine adapter overrides jasmine.execute with a no-op that just logs a line, because it needs to halt execution until it has instrumented the code and is ready to begin.
Typically the BlanketReporter addition and the currentRunner execute would be done by the Blanket jasmine adapter itself, but if you load blanket-jasmine through require, the event that starts the Blanket test runner never fires: the adapter subscribes to the window.load event, which has already fired by the time blanket-jasmine is loaded. Therefore we need to add the reporter and execute the "currentRunner" ourselves, as the adapter would usually do.
This should probably be raised as a bug, but for now this workaround works well.
