Is it possible to run Mocha programmatically, yet run it in a "verbose" mode and use the results programmatically?
Right now I am using it in Node.js via the mocha module (using chai internally in the suite for the assertions).
What I want is to get more data about the failed tests than generic errors like "expected true and got false".
Is there for instance a way to check which assertion failed and why, in case of multiple assertions within a test, or receive more information about a specific test, and if yes, how?
When you run Mocha programmatically, the mocha.run() method returns a Runner object. If you listen for its fail events, you'll be notified of every test failure. Here's an example adapted from Mocha's documentation on running it programmatically:
var Mocha = require('mocha');
var fs = require('fs');
var path = require('path');

var mocha = new Mocha();
var testDir = '.';

fs.readdirSync(testDir).filter(function (file) {
    // This gets all files that end with the string test.js (so
    // footest.js, bar_test.js, baz.test.js, etc.).
    return file.substr(-7) === 'test.js';
}).forEach(function (file) {
    mocha.addFile(path.join(testDir, file));
});

var runner = mocha.run(function (failures) {
    process.on('exit', function () {
        process.exit(failures);
    });
});

// This is how we get results.
runner.on('fail', function (test, err) {
    console.log(err);
});
Is there for instance a way to check which assertion failed and why, in case of multiple assertions within a test, or receive more information about a specific test, and if yes, how?
Mocha does not provide any facility for relating the exception you get through err in the fail handler to a specific assertion. This is because Mocha is designed to be used with whatever assertion library you want: it only detects that an assertion has failed when it catches the exception raised by the failing assertion. It knows nothing about the assertions that were successful, or about the assertions that never executed because the test ended early on the failing one. You may be able to reverse-engineer something from the stack trace, but Mocha won't help you toward this goal.
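That said, if your assertion library attaches structured data to the thrown error, you can read it in the fail handler to get more detail than the generic message. Chai's AssertionError, for instance, carries actual and expected properties alongside message and stack. A minimal sketch, assuming chai is the assertion library in use:

// Sketch: pull extra detail out of chai's AssertionError in the fail handler.
runner.on('fail', function (test, err) {
    console.log('Failed:', test.fullTitle());
    console.log('Message:', err.message);
    // These properties are set by chai when the assertion has comparable values;
    // other assertion libraries may or may not provide them.
    if ('actual' in err) console.log('Actual:', err.actual);
    if ('expected' in err) console.log('Expected:', err.expected);
    console.log(err.stack);
});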
Related
While writing integration tests in jest I would like to reproduce the same behaviour I have achieved in mocha by:
mocha -r ts-node/register tests/integration/topLevelTest.test.ts 'tests/integration/**/*.test.ts'.
topLevelTest.test.ts:

let importantVariable;

describe("should do something with my variable", () => {
    importantVariable = returnSomethingImportant();

    it("should important variable exists", () => {
        should.exist(importantVariable);
    });
});

after(() => {
    importantVariable.cleanUp();
});
The behaviour was simple: first topLevelTest executed its describe block, then the other test suites ran, and at the end the after hook within topLevelTest was executed.
In my attempt at rewriting this for Jest I wrote something very similar; the only difference is that I used afterAll instead of after. The result: first topLevelTest executed its describe block, then afterAll ran, and only then the other test suites. Is it possible to make afterAll run after the other test suites?
This is what setup files are for, more specifically setupFilesAfterEnv, because the Jest environment is already initialized there and globals are available.
A top-level afterAll that isn't grouped inside a describe applies to all tests within the current test suite (file). Since Jest runs test suites in parallel in separate worker processes (unless the runInBand option is specified), it obviously won't affect other test suites.
If the tests must not proceed when setup fails, and the data from setup does not need to be accessed inside the tests, the globalSetup and globalTeardown configuration options should be used instead. A global setup module is not a test; the main difference is that describe and separate test blocks are unavailable there. The global expect is not available either, but it can be imported, which results in meaningful errors in case a setup fails:
// setup.js
let expect = require('expect');

module.exports = async () => {
    let server = ...;
    expect(server)...;
    global.__MYSERVER__ = server;
};

// teardown.js
module.exports = async function () {
    // close __MYSERVER__
};
Since global setup and teardown run in parent process, __MYSERVER__ cannot be accessed in tests.
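For completeness, here is a minimal sketch of the configuration that wires these modules up; the file paths are placeholders for wherever you keep the two modules above:

// jest.config.js -- paths below are placeholders
module.exports = {
    globalSetup: './setup.js',
    globalTeardown: './teardown.js',
};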
Most of Jest's expect(arg1).xxxx() methods will throw an exception if the comparison fails to match expectations. One exception to this pattern seems to be the toMatchSnapshot() method. It seems to never throw an exception and instead stores the failure information for later Jest code to process.
How can we cause toMatchSnapshot() to throw an exception? If that's not possible, is there another way that our tests can detect when the snapshot comparison failed?
This will work! After running your toMatchSnapshot assertion, check the global state: expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
Just spent the last hour trying to figure it out for our own tests. This doesn't feel hacky to me either, though a maintainer of Jest may be able to tell me whether accessing Symbol.for('$$jest-matchers-object') is a good idea or not. Here's a full code snippet for context:
const GLOBAL_STATE = Symbol.for('$$jest-matchers-object');

describe('Describe test', () => {
    it('should test something', () => {
        try {
            expect({}).toMatchSnapshot(); // replace with whatever you're trying to test
            expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
        } catch (e) {
            console.log(`\x1b[31mWARNING!!! Catch snapshot failure here and print some message about it...`);
            throw e;
        }
    });
});
If you run a test (e.g. /Foobar.test.js) which contains a toMatchSnapshot matcher, Jest by default will create a snapshot file on the first run (e.g. /__snapshots__/Foobar.test.js.snap).
This first run, which creates the snapshot, will pass.
If you want the test to be able to fail, you need to commit the snapshot alongside your test.
Subsequent test runs will compare their output against the committed snapshot, and if they differ the test will fail.
Here is the official link to the Documentation on 'Snapshot Testing' with Jest.
One, less than ideal, way to cause toMatchSnapshot to throw an exception when there is a snapshot mismatch is to edit the implementation of toMatchSnapshot itself. Experienced Node developers will consider this bad practice, but if you are very strongly motivated to have that method throw, this approach is easy and, depending on how often you update your tooling, only somewhat error-prone.
The file of interest will be named something like "node_modules/jest-snapshot/build/index.js".
The line of interest is the first line in the method:
const toMatchSnapshot = function (received, testName) {
    this.dontThrow && this.dontThrow();
    const currentTestName = ....
You'll want to split that first line and omit the call to this.dontThrow(). The resulting code should look similar to this:

const toMatchSnapshot = function (received, testName) {
    // this.dontThrow && this.dontThrow();
    const currentTestName = ....
A final step you might want to take is to send a feature request to the Jest team, or to support an existing feature request to your liking, like the following: link
I've got a problem where sometimes my tests fail because the asserts run before the response comes back in my integration tests:
it('should find all countries when no id is specified', co.wrap(function* () {
    var uri = '/countries';
    var response = yield request.get(uri);
    should.exist(response.body);
    response.status.should.equal(200);
}));
I'm not sure how to tweak mocha or supertest to wait, or whether I should use a promise here, or what. I'm using generators with mocha here.
Adding a longer timeout to mocha does absolutely nothing. The query behind this particular test takes a bit longer than my other tests, which all run just fine.
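For reference, if I were to drop the generators and use a promise instead, I believe the test would look something like this (assuming request.get returns a then-able, as superagent requests do; Mocha waits on a promise returned from a test):

// Sketch: return the promise so Mocha itself waits for the response.
it('should find all countries when no id is specified', function () {
    return request.get('/countries').then(function (response) {
        should.exist(response.body);
        response.status.should.equal(200);
    });
});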
I'm trying to understand how Sinon is used properly in a node project. I have gone through examples, and the docs, but I'm still not getting it. I have setup a directory with the following structure to try and work through the various Sinon features and understand where they fit in
|--lib
|--index.js
|--test
|--test.js
index.js is
var myFuncs = {};

myFuncs.func1 = function () {
    myFuncs.func2();
    return 200;
};

myFuncs.func2 = function (data) {
};

module.exports = myFuncs;
test.js begins with the following
var assert = require('assert');
var sinon = require('sinon');
var myFuncs = require('../lib/index.js');
var spyFunc1 = sinon.spy(myFuncs.func1);
var spyFunc2 = sinon.spy(myFuncs.func2);
Admittedly this is very contrived, but as it stands I would want to test that any call to func1 causes func2 to be called, so I'd use
describe('Function 2', function () {
    it('should be called by Function 1', function () {
        myFuncs.func1();
        assert(spyFunc2.calledOnce);
    });
});
I would also want to test that func1 will return 200 so I could use
describe('Function 1', function () {
    it('should return 200', function () {
        assert.equal(myFuncs.func1(), 200);
    });
});
but I have also seen examples where stubs are used in this sort of instance, such as
describe('Function 1', function () {
    it('should return 200', function () {
        var test = sinon.stub().returns(200);
        assert.equal(myFuncs.func1(test), 200);
    });
});
How are these different? What does the stub give that a simple assertion test doesn't?
What I am having the most trouble getting my head around is how these simple testing approaches would evolve once my program gets more complex. Say I start using mysql and add a new function
myFuncs.func3 = function (data, callback) {
    connection.query('SELECT name FROM users WHERE name IN (?)', [data], function (err, rows) {
        if (err) throw err;
        var names = _.pluck(rows, 'name');
        return callback(null, names);
    });
};
I know when it comes to databases some advise having a test db for this purpose, but my end-goal might be a db with many tables, and it could be messy to duplicate this for testing. I have seen references to mocking a db with sinon, and tried following this answer but I can't figure out what's the best approach.
You have asked several different questions in one post... I will try to sort them out.
1. Testing myFuncs with two functions.
Sinon is a mocking library with a wide range of features. "Mocking" means you are supposed to replace some part of what is going to be tested with mocks or stubs. There is a good article in the Sinon documentation that describes the difference well.
When you created a spy in this case...
var spyFunc1 = sinon.spy(myFuncs.func1);
var spyFunc2 = sinon.spy(myFuncs.func2);
...you've just created watchers. Note, though, that sinon.spy(myFuncs.func1) returns a new spy function without replacing myFuncs.func1 itself; to substitute the method on the object, so that calls made through myFuncs are recorded, you need the sinon.spy(object, 'method') form. The spy then records the calling arguments and calls the real function after that. This is a possible scenario, but mind that all the possibly complicated logic of myFuncs.func1/func2 will still run when called in the test (e.g. a database query).
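A minimal sketch of that form against the index.js above (note the restore() call that puts the original back):

var assert = require('assert');
var sinon = require('sinon');
var myFuncs = require('../lib/index.js');

describe('Function 2', function () {
    it('should be called by Function 1', function () {
        // Replaces myFuncs.func2 in place with a wrapper that records calls
        // and then delegates to the real function.
        var spy = sinon.spy(myFuncs, 'func2');
        myFuncs.func1();
        assert(spy.calledOnce);
        spy.restore(); // put the original func2 back
    });
});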
2.1. The describe('Function 1', ...) test suite looks too contrived to me; it is not obvious what you actually want to verify.
A function that returns a constant value is not a real-life example. In most cases there will be parameters, and the function under test will implement some algorithm transforming the input arguments. So in your test you're going to partly re-implement the same algorithm to check that the function works correctly. That is where TDD comes into play: it supposes you start the implementation from a test and take parts of the unit-test code to implement the method being tested.
2.2. Stub. The second version of the unit test looks useless in the given example, since func1 does not accept any parameter.
var test = sinon.stub().returns(200);
assert.equal(myFuncs.func1(test), 200);
Even if you change the stub to return 100, the test will still pass.
The version that makes sense is, for example, replacing func2 with a stub or spy to avoid a heavy calculation or remote request (DB query, HTTP or other API request) being launched in a test:

myFuncs.func2 = sinon.spy();
assert.equal(myFuncs.func1(), 200);
assert(myFuncs.func2.calledOnce);
The basic rule of unit testing is that a unit test should be kept as simple as possible, checking the smallest possible fragment of code. In this test func1 is being tested, so we can neglect the logic of func2, which should be covered by its own unit test.
Mind that the following attempt is useless:

myFuncs.func1 = sinon.stub().returns(200);
assert.equal(myFuncs.func1(), 200);

Because in this case you have masked the real func1 logic with a stub, so you're actually testing sinon.stub().returns(). Believe me, it works well! :D
3. Mocking database queries.
Mocking a database has always been a hurdle. I can offer some advice.
3.1. Have a well-fragmented environment. Even for a small project there should be completely independent development, staging and production environments, including databases. That means having an automated way of creating the DB: scripts or an ORM.
In this scenario you can easily maintain the test DB from your test engine, using before()/beforeEach() to get a clean structure for your tests, as sketched below.
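A minimal sketch of that pattern with Mocha hooks (createSchema and truncateAll are hypothetical helpers standing in for your own DB scripts or ORM calls):

// Sketch: build the schema once, wipe the data before every test.
// createSchema/truncateAll are hypothetical helpers over your DB scripts.
describe('users DAL', function () {
    before(function (done) {
        createSchema(done); // run the schema scripts against the test DB
    });
    beforeEach(function (done) {
        truncateAll(done); // start each test from a clean, known state
    });
    // ...tests that can rely on a predictable database...
});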
3.2. Have well-fragmented code. There should be several layers; the lowest one, the data access layer (DAL), should be separated from the business logic. You can then write tests for a business class while simply mocking the DAL. To test the DAL itself, you can take the approach you mentioned (sinon.mock the whole module) or use a more specialized technique (e.g. replacing the DB engine with SQLite for tests, as described here).
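For the DAL-mocking route, sinon can stub the driver call directly, so a test of func3 never touches a real database; stub.yields(...) makes the stub invoke the callback passed to it. A sketch, assuming the connection object is reachable from the test (an assumption about how your module shares it):

var sinon = require('sinon');
var assert = require('assert');

it('func3 plucks names from the query results', function (done) {
    var fakeRows = [{ name: 'alice' }, { name: 'bob' }];
    // yields(null, fakeRows) makes the stub call query's callback with (null, fakeRows)
    var stub = sinon.stub(connection, 'query').yields(null, fakeRows);

    myFuncs.func3(['alice', 'bob'], function (err, names) {
        assert.deepEqual(names, ['alice', 'bob']);
        stub.restore(); // put the real query method back
        done();
    });
});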
Conclusion, on "how these simple testing approaches would evolve once my program gets more complex".
It is hard to maintain unit tests unless you develop your application with tests in mind and thus keep it well fragmented. Stick to the main rule: keep each unit test as small as possible. Otherwise you're right, it will eventually get messy, because the constantly evolving logic of your application will be dragged into your test code.
I'm writing unit-tests with the Jasmine-framework.
I use Grunt and Karma for running the Jasmine testfiles.
I simply want to load the content of a file on my local file-system (e.g. example.xml).
I thought I could do this:
var fs = require('fs');
var fileContent = fs.readFileSync("test/resources/example.xml").toString();
console.log(fileContent);
This works well in my Gruntfile.js and even in my karma.conf.js file, but not in my Jasmine test file. My test file looks like this:
describe('Some tests', function () {
    it('load xml file', function () {
        var fs = require("fs");
        var fileContent = fs.readFileSync("test/resources/example.xml").toString();
        console.log(fileContent);
    });
});
The first error I get is:
'ReferenceError: require is not defined'
I don't know why I can't use require here, when I can use it in Gruntfile.js and even in karma.conf.js.
Okay, but when I manually add require.js to the files property in the karma.conf.js file, I get the following message:
Module name "fs" has not been loaded yet for context: _. Use require([])
With the array syntax of RequireJS, nothing happens.
I guess it is not possible to access Node.js functionality in Jasmine when running the test files with Karma. But since Karma itself runs on Node.js, why is it not possible to access Node's 'fs' module?
Any comment/advice is welcome.
Thanks.
Your test does not work because Karma is a test runner for client-side JavaScript (JavaScript that runs in the browser), but you are trying to test Node.js code with it (code that runs on the server). Karma simply can't run server-side tests. You need a different test runner; for example, take a look at jasmine-node.
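If what you really need is the fixture's contents inside a browser-run test, one common workaround is to let Karma serve the file and fetch it over HTTP: files matched by a served pattern are exposed under the /base/ prefix. A sketch, with paths assumed to match your layout:

// In karma.conf.js, serve the fixture without executing it as a script:
// files: [ ..., { pattern: 'test/resources/*.xml', included: false, served: true } ]

describe('Some tests', function () {
    it('load xml file', function () {
        var xhr = new XMLHttpRequest();
        // Karma serves project files under /base/<path-from-project-root>
        xhr.open('GET', '/base/test/resources/example.xml', false); // synchronous for brevity
        xhr.send(null);
        console.log(xhr.responseText);
    });
});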
Since this comes up first in a Google search: I received a similar error but wasn't using any Node.js-style code in my project. It turned out one of my Bower components had a full copy of Jasmine in it, including its Node.js-style code, and I had
{ pattern: 'src/**/*.js', included: false },
in my karma.conf.js.
Unfortunately, Karma doesn't provide the best debugging for this sort of thing; it dumps you out without telling you which file caused the issue. I had to narrow that pattern down to individual directories to find the offender.
Anyway, just be wary of Bower installs; they bring a lot of code into your project directory that you might not really want.
I think you're missing the point of unit testing here, because it seems you're copying application logic into your test suite. That defeats the purpose of a unit test: it is supposed to run your existing functions through a test suite, not to prove that fs can load an XML file. In your scenario, if the XML-handling code in the source file were changed (and a bug introduced), the unit test would still pass.
Think of unit testing as a way to run your function against lots of sample data to make sure it doesn't break. Set up your file reader to accept input, and then simply, in the Jasmine test:
describe('My XML reader', function () {
    beforeEach(function () {
        this.xmlreader = new XMLReader();
    });

    it('can load some xml', function () {
        var xmldump = this.xmlreader.loadXML('inputFile.xml');
        expect(xmldump).toBeTruthy();
    });
});
Test the methods that are exposed on the object you are testing. Don't make more work for yourself. :-)