Cleaning out test database before running tests - node.js

What is the best way to clean out a database before running a test suite? Is there an npm library or a recommended method for doing this?
I know about the before() function.
I'm using node/express, mocha and sequelize.

The before function is about as good as you will do for cleaning out your database. If you only need to clean out the database once, i.e. before you run all your tests, you can have a global before function in a separate file:
globalBefore.js
before(function(done) {
  // remove database data here
  done();
});
single-test-1.js
require('./globalBefore')
// actual test 1 here
single-test-2.js
require('./globalBefore')
// actual test 2 here
Note that the globalBefore will only run once, even though it has been required twice.
Testing Principles
Try to limit the use of external dependencies such as databases in your tests. The fewer external dependencies you have, the easier the testing. You want to be able to run all your unit tests in parallel, and a shared resource such as a database makes this difficult.
Take a look at this Google Tech talk about writing testable javascript
http://www.youtube.com/watch?v=JjqKQ8ezwKQ
Also take a look at the rewire module. It works quite well for stubbing out functions.
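For instance, a minimal sketch of stubbing out a database dependency with rewire might look like this (the user-service module, its private db dependency, and getUser are hypothetical, just for illustration):
var rewire = require('rewire');
var assert = require('assert');

// load the module under test with rewire instead of require
var userService = rewire('../lib/user-service'); // hypothetical module path

describe('userService', function() {
  it('returns the stubbed user without touching the database', function(done) {
    // replace the module-private db dependency with a stub
    userService.__set__('db', {
      findUser: function(id, cb) { cb(null, { id: id, name: 'stubbed' }); }
    });
    userService.getUser(1, function(err, user) {
      assert.equal(user.name, 'stubbed');
      done();
    });
  });
});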

I usually do it like this (say for a User model):
describe('User', function() {
  before(function(done) {
    User.sync({ force : true }) // drops table and re-creates it
      .success(function() {
        done(null);
      })
      .error(function(error) {
        done(error);
      });
  });
  describe('#create', function() {
    ...
  });
});
There's also sequelize.sync({force: true}) which will drop and re-create all tables (.sync() is described here).
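Combined with the global before file from the first answer, a minimal sketch of that (assuming a newer Sequelize version where sync() returns a promise instead of the .success()/.error() API shown above, and a hypothetical models module that exports the sequelize instance) could look like:
// globalBefore.js
var models = require('../models'); // hypothetical path to where sequelize is initialised

before(function(done) {
  // drop and re-create every table before the whole suite runs
  models.sequelize.sync({ force: true })
    .then(function() { done(); })
    .catch(done);
});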

Add this to your test file. It undoes all the migrations, runs them again to re-create the tables, and then seeds the database so you can test against the seeded data.
const { exec } = require('child_process');

const SCRIPT_TO_TRUNCATE_AND_SEED_DATABASE = 'cd apps/backend && npx sequelize-cli db:migrate:undo:all && npx sequelize-cli db:migrate && cd ../.. && npx sequelize-cli db:seed:all';

test(
  'TRUNCATE_AND_SEED_DATABASE',
  done => {
    exec(SCRIPT_TO_TRUNCATE_AND_SEED_DATABASE, (err, out) => {
      try {
        console.log(out);
        expect(err).toBe(null);
        done();
      } catch (e) {
        done(e);
      }
    });
  },
  TIME_CONSTANT.ONE_MINUTE, // per-test timeout, a constant defined elsewhere in the project
);

I made this lib to clean and import fixtures for your tests.
This way, you can import fixtures, run your tests, and then clean your database.
Take a look at the following:
before(function (done) {
  prepare.start(['people'], function () {
    done();
  });
});

after(function () {
  prepare.end();
});
https://github.com/diogolmenezes/test_prepare

Related

Jest extremely slow on failed tests

I have a jest test file that looks like this:
import Client from "socket.io-client"

describe("my awesome project", () => {
  let clientSocket;

  beforeAll((done) => {
    clientSocket = new Client(`http://localhost:3000`);
    clientSocket.on("connect", done);
  });

  afterAll(() => {
    clientSocket.close();
  });

  it("should work", (done) => {
    clientSocket.on("redacted", (message) => {
      expect(2 + 2).toBe(56);
      //expect(message === "foobar").toEqual(true);
      done();
    });
    clientSocket.emit("redacted", "world");
  });
});
This is a POC and this is currently the entire implementation.
The jest.config looks like this:
export default {
  // Automatically clear mock calls, instances, contexts and results before every test
  clearMocks: true,
  // Indicates whether the coverage information should be collected while executing the test
  collectCoverage: true,
  // The directory where Jest should output its coverage files
  coverageDirectory: "coverage",
  // Indicates which provider should be used to instrument code for coverage
  coverageProvider: "v8",
  // A preset that is used as a base for Jest's configuration
  preset: "ts-jest",
};
Which is just the file the --init command generated.
The core of my problem is that any expect I use that results in a failed test, no matter how trivial, takes an absurd amount of time to complete. I accidentally left it running as above overnight and it eventually completed in 14 hours.
But with a passing test Jest is absolutely fine and completes rapidly. expect(2 + 2).toBe(4); for example runs perfectly. On the failed tests I see the data come back from the socket as expected, in the time expected. It's only when the expect is hit that it stalls out. So I don't believe the problem is in the socket setup or some sort of communication problem.
I've tried the config-based solutions I've found to no effect, e.g. Jest - Simple tests are slow.
This is being run on my local Windows machine, from a terminal in my IDE that I fully start and stop for each test run.
OK, so now I see the problem: I needed a try/catch. The failing expect throws inside the socket callback where Jest never sees it, so done() is never called and the test just hangs until it times out.
Can't quite believe none of the examples or docs I looked at so much as hinted that this is necessary to handle something so basic.
test("should work", (done) => {
clientSocket.on("redacted", (message: string) => {
try {
expect(2 + 2).toBe(56);
//expect(message === "foobar").toEqual(true);
done();
} catch (err) {
done(err)
}
});
clientSocket.emit("redacted", "world");
});

Populating mongodb in one unit test interferes with another unit test

I'm trying to run all of my unit tests asynchronously, but calling a function to populate the database with some dummy data interferes with the other unit tests that run at the same time and that make use of the same data.
collectionSeed.js file:
const {ObjectID} = require('mongodb');
import { CollectionModel } from "../../models/collection";

const collectionOneId = new ObjectID();
const collectionTwoId = new ObjectID();

const collections = [{
  _id: collectionOneId
}, {
  _id: collectionTwoId
}];

const populateCollections = (done) => {
  CollectionModel.remove({}).then(() => {
    var collectionOne = new CollectionModel(collections[0]);
    collectionOne.save(() => {
      var collectionTwo = new CollectionModel(collections[1]);
      collectionTwo.save(() => {
        done();
      });
    });
  });
};
unitTest1 file:
beforeEach(populateCollections);
it('Should run', (done) => {
  //do something with collection[0]
});
unitTest2 file:
beforeEach(populateCollections);
it('Should run', (done) => {
  //do something with collection[0]
});
I'm running unit tests that change, delete, and add data to the database, so using beforeEach is preferable to keep all of the data consistent. However, the CollectionModel.remove({}) call often runs in between an it function from one file and an it function from the other test file, so one unit test works fine while the other is trying to use data that doesn't exist.
Is there any way to prevent the different unit test files from interfering with each other?
I recommend you create a database per test file, for example by adding the file name to the DB name. That way you only have to make sure tests don't interfere with each other inside the same file, and you can forget about tests in other files.
I think that managing fixtures is one of the most troublesome parts of unit testing, so with this, creating and fixing unit tests becomes smoother.
As a trade-off, each test file will take more execution time, but in my opinion it is worth it in most cases.
Ideally each test should be independent of the rest, but in general that would take way too much overhead, so I recommend the one-database-per-test-file approach.
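A minimal sketch of that idea, assuming Mongoose and a local MongoDB instance (the helper file, connection URL, and naming scheme are made up for illustration):
// test/connect.js (hypothetical helper)
const path = require('path');
const mongoose = require('mongoose');

module.exports = function connectForTestFile(testFile) {
  // e.g. unitTest1.js -> test_unitTest1_js, so each test file gets its own database
  const dbName = 'test_' + path.basename(testFile).replace(/\W/g, '_');
  return mongoose.connect('mongodb://localhost:27017/' + dbName);
};

// in unitTest1 file:
// const connect = require('./connect');
// before(() => connect(__filename));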

mocha test failing with " MongoError: server sockets closed"

My mocha tests are failing with:
MongoError: server XXXX sockets closed
I have a workaround how to fix them:
const https = require('https');
const server = https.createServer(..);

close() {
  mongoose.disconnect(); // <-------- I comment out this line
  this.server.close();
};
I comment out the line mongoose.disconnect(); and my test suite starts working. But I would like to clean up after my tests too. Each of my test files recreates the server and starts from scratch. It seems like the error appears because there needs to be some 'waiting' before the next test file executes.
How can I correct this error?
Solution - Captain Hook to the rescue!
If I understand correctly, you wish to start up and clean up your server around the tests. You also have a series of repetitive tasks you need to do before and after each test.
Mocha has the perfect solution for you: Say hello to Mr. Hook!
Mocha hooks are functions that you can run before all tests, after all tests, before each test, or after each test:
https://mochajs.org/#hooks
The documentation is pretty complete and I really do recommend it. In your case, however, since you are dealing with databases, you will probably be dealing with async hooks.
Sounds complex? Don't worry!
This is how normal sync hooks work:
describe('hooks', function() {
  before(function() {
    // runs before all tests in this block
  });
  after(function() {
    // runs after all tests in this block
  });
  beforeEach(function() {
    // runs before each test in this block
  });
  afterEach(function() {
    // runs after each test in this block
  });

  //tests
  it("This is a test", () => {
    assert.equal(1, 1);
  });
});
Async hooks have only one difference: they take a parameter, done, which you call once your task is finished. Let's assume that we are setting up a DB that takes 1.5 seconds to set up. We want to do this before all the tests, and we only want to do it once.
Let's assume this is our listen function from our DB:
const listen = callback => {
  setTimeout(callback, 1500);
};
So after 1.5 seconds, it calls the callback function, signaling it is ready for action.
Now let's see how we would make an async hook:
describe('hooks', function() {
  let myDB;

  before(done => {
    // newDB() gives us our listen function; done is called once the DB is ready
    myDB = newDB();
    myDB(done);
  });

  //tests
});
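Applied to your case, a minimal sketch (assuming an https server and Mongoose as in your snippet; the file and module names are illustrative) keeps the disconnect but moves it into a root after hook, so it only runs once everything has finished:
// test/global-teardown.js, required from each test file
const mongoose = require('mongoose');
const server = require('../server'); // whatever module exports your https server

after(function(done) {
  // close the server first, then drop the mongoose connection,
  // so the next test file starts with a clean slate
  server.close(() => {
    mongoose.disconnect(done);
  });
});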
And that's it! Hope it helps!

How can I build my test suite asynchronously?

I'm trying to create mocha tests for my controllers using a config that has to be loaded async. Below is my code. However, when the mocha test is run, it doesn't run any tests, displaying 0 passing. The console.logs are never even called. I tried doing before(next => config.build().then(next)) inside of the describe, but even though the tests run, before is never called. Is there a way to have the config be loaded one time before any tests are run?
'use strict';

const common = require('./common');
const config = require('../config');

config
  .build()
  .then(test);

function test() {
  console.log(1);
  describe('Unit Testing', () => {
    console.log(2);
    require('./auth');
  });
}
You should run Mocha with the --delay option, and then use run() once you are done building your test suite. Here is an example derived from the code you show in the question:
'use strict';

function test() {
  console.log(1);
  describe('Unit Testing', () => {
    console.log(2);
    it("test", () => {
      console.log(3);
    });
  });

  // You must use --delay for `run()` to be available to you.
  run();
}

setTimeout(test, 1000);
I'm using setTimeout to simulate an asynchronous operation. Using --delay and run() allows you to build a suite that is the result of an asynchronous computation. Note, however, that the suite must be built in one shot. (You cannot have an asynchronous process inside describe that makes further calls to it(); this won't work.)
One thing you should definitely not do is what rob3c suggests: calling describe or it (or both) from inside a hook. This is a mistake that people make every now and then, so it is worth addressing in detail. The problem is that it is just not supported by Mocha, and therefore there are no established semantics associated with calling describe or it from inside a hook. Oh, it is possible to write simple examples that work as one might expect, but:
When the suite becomes more complex, the suite's behavior no longer corresponds to anything sensible.
Since there are no semantics associated with this approach, newer Mocha releases may handle the erroneous usage differently and break your suite.
Consider this simple example:
const assert = require("assert");

const p = Promise.resolve(["foo", "bar", "baz"]);

describe("top", () => {
  let flag;

  before(() => {
    flag = true;
    return p.then((names) => {
      describe("embedded", () => {
        for (const name of names) {
          it(name, () => {
            assert(flag);
          });
        }
      });
    });
  });

  after(() => {
    flag = false;
  });

  it("regular test", () => {
    assert(flag);
  });
});
When we run it, we get:
top
✓ regular test
embedded
1) foo
2) bar
3) baz
1 passing (32ms)
3 failing
// [stack traces omitted for brevity]
What's going on here? Shouldn't all the tests pass? We set flag to true in the before hook for the top describe. All tests we create in it should see flag as true, no? The clue is in the output above: when we create tests inside a hook, Mocha will put the tests somewhere, but it may not be in a location that reflects the structure of the describe blocks in the code. What happens in this case is that Mocha just appends the tests created in the hook to the very end of the suite, outside the top describe, so the after hook runs before the dynamically created tests, and we get a counter-intuitive result.
Using --delay and run(), we can write a suite that behaves in a way concordant with intuition:
const assert = require("assert");

const p = Promise.resolve(["foo", "bar", "baz"]).then((names) => {
  describe("top", () => {
    let flag;

    before(() => {
      flag = true;
    });

    after(() => {
      flag = false;
    });

    describe("embedded", () => {
      for (const name of names) {
        it(name, () => {
          assert(flag);
        });
      }
    });

    it("regular test", () => {
      assert(flag);
    });
  });

  run();
});
Output:
top
✓ regular test
embedded
✓ foo
✓ bar
✓ baz
4 passing (19ms)
In modern environments, you can use top-level await to fetch your data up front. This is a documented approach for mocha: https://mochajs.org/#dynamically-generating-tests
Slightly adapting the example from the mocha docs to show the general idea:
function fetchData() {
  return new Promise((resolve) => setTimeout(resolve, 5000, [1, 2, 3]));
}

// top-level await: Node >= v14.8.0 with ESM test file
const data = await fetchData();

describe("dynamic tests", function () {
  data.forEach((value) => {
    it(`can use async data: ${value}`, function () {
      // do something with data here
    });
  });
});
This is nice as it is on a per-file basis, and doesn't involve you taking on management responsibility of the test runner as you do with --delay.
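One assumption worth calling out: top-level await only works when the test file is loaded as an ES module, so as far as I know you either need an .mjs extension or something like this in your package.json:
{
  "type": "module"
}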
The problem with using the --delay command-line flag and run() callback that @Louis mentioned in his accepted answer is that run() is a single global hook that delays the root test suite. Therefore, you have to build the tests all at once (as he mentioned), which can make organizing tests a hassle (to say the least).
However, I prefer to avoid magic flags whenever possible, and I certainly don't want to have to manage my entire test suite in a single global run() callback. Fortunately, there's a way to dynamically create the tests on a per-file basis, and it doesn't require any special flags, either :-)
To dynamically create it() tests in any test source file using data obtained asynchronously, you can (ab)use the before() hook with a placeholder it() test to ensure Mocha waits until before() is run. Here's the example from my answer to a related question, for convenience:
before(function () {
  console.log('Let the abuse begin...');
  return promiseFn().then(function (testSuite) {
    describe('here are some dynamic It() tests', function () {
      testSuite.specs.forEach(function (spec) {
        it(spec.description, function () {
          var actualResult = runMyTest(spec);
          assert.equal(actualResult, spec.expectedResult);
        });
      });
    });
  });
});

it('This is a required placeholder to allow before() to work', function () {
  console.log('Mocha should not require this hack IMHO');
});

Unit testing unavailable global function (couchapp, mocha)

I'm trying to unit test a CouchDB design doc (written using couchapp.js), example:
var ddoc = {
  _id: '_design/example',
  views: {
    example: {
      map: function(doc) {
        emit(doc.owner.id, doc);
      }
    }
  }
};

module.exports = ddoc;
I can then require this file into a mocha test very easily.
The problem is CouchDB exposes a few global functions that the map functions use ("emit" function above) which are unavailable outside of CouchDB (i.e. in these unit tests).
I attempted to declare a global function in each test, for example:
var ddoc = require('../example.js');

describe('views', function() {
  describe('example', function() {
    it('should return the id and same doc', function() {
      var doc = {
        owner: {
          id: 'a123456789'
        }
      };

      // Globally-scoped mock of unavailable couchdb 'emit' function
      emit = function(id, emittedDoc) {
        assert.equal(doc.owner.id, id);
        assert.equal(doc, emittedDoc);
      };

      ddoc.views.example.map(doc);
    });
  });
});
But Mocha fails with complaints about a global leak.
All of this together started to "smell wrong", so I'm wondering if there's a better/simpler approach via any libraries, even outside of Mocha?
Basically I'd like to make mock implementations available per test, from which I can call asserts.
Any ideas?
I'd use sinon to stub and spy in the tests. http://sinonjs.org/ and https://github.com/domenic/sinon-chai
Globals are, well, undesirable, but hard to eliminate. I'm doing some jQuery-related testing right now and have to use --globals window,document,navigator,jQuery,$ at the end of my mocha command line so... yeah.
You aren't testing CouchDb's emit, so you should stub it since a) you assume that it works and b) you know what it will return
global.emit = sinon.stub().returns(42);
// run your tests etc
// assert that the emit was called
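Put together with the design doc from the question, a sketch of that idea might look like this (the assertions are standard sinon stub inspection; cleaning up the global afterwards is just one way to avoid the leak warning):
var sinon = require('sinon');
var assert = require('assert');
var ddoc = require('../example.js');

describe('views/example', function() {
  it('emits the owner id as the key', function() {
    // stub the CouchDB-only global before exercising the map function
    global.emit = sinon.stub();

    var doc = { owner: { id: 'a123456789' } };
    ddoc.views.example.map(doc);

    assert(global.emit.calledOnce);
    assert(global.emit.calledWith('a123456789'));

    delete global.emit; // avoid leaking the global into other tests
  });
});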
This part of the sinon docs might be helpful:
it("makes a GET request for todo items", function () {
sinon.stub(jQuery, "ajax");
getTodos(42, sinon.spy());
assert(jQuery.ajax.calledWithMatch({ url: "/todo/42/items" }));
});
Hope that helps.
