Populating mongodb in one unit test interferes with another unit test - node.js

I'm trying to run all of my unit tests asynchronously, but calling a function to populate the database with some dummy data interferes with the other unit tests that run at the same time and that make use of the same data.
collectionSeed.js file:
const {ObjectID} = require('mongodb');
import { CollectionModel } from "../../models/collection";
const collectionOneId = new ObjectID();
const collectionTwoId = new ObjectID();
const collections = [{
  _id: collectionOneId
}, {
  _id: collectionTwoId
}];

const populateCollections = (done) => {
  CollectionModel.remove({}).then(() => {
    var collectionOne = new CollectionModel(collections[0]);
    collectionOne.save(() => {
      var collectionTwo = new CollectionModel(collections[1]);
      collectionTwo.save(() => {
        done();
      });
    });
  });
};
unitTest1 file:
beforeEach(populateCollections);
it('Should run', (done) => {
  // do something with collections[0]
})
unitTest2 file:
beforeEach(populateCollections);
it('Should run', (done) => {
  // do something with collections[0]
})
I'm running unit tests that change, delete, and add data to the database, so beforeEach is preferable for keeping all of the data consistent. However, the CollectionModel.remove({}) call often runs in between an it function from one file and an it function inside the other unit test file, so one unit test works fine while the second it is trying to use data that no longer exists.
Is there any way to prevent the different unit test files from interfering with each other?

I recommend you create a database per test file, for example by adding the name of the file to the DB name. That way you only have to take care that tests don't interfere within the same file, and you can forget about tests in other files.
I think that managing fixtures is one of the most troublesome parts of unit testing, so with this approach, creating and fixing unit tests becomes much smoother.
As a trade-off, each test file will take more execution time, but in my opinion it is worth it in most cases.
Ideally each test should be independent of the rest, but in general that would add too much overhead, so I recommend the one-database-per-test-file approach.

Related

How to persist() when using nockBack with jest and Node?

I am currently working on some unit tests for an express app.
I am using "jest": "^29.4.1" and "nock": "^13.3.0".
The tests I am writing use nockBack.
Imagine I have 3 separate test files that run the code below. The first 2 run properly, save a nock fixture in the proper directory, and then re-run just fine. As soon as I introduce a 3rd test, it runs and passes the first time (and saves a fixture, etc.), but if I re-run the 3rd test it fails with this error: error: Error [NetworkingError]: Nock: No match for request.... I read in the docs that a way to alleviate this is to use the persist() method, BUT this is documented only for interceptors created with nock directly against pseudo endpoints, not for nockBack. I am testing 3rd-party API calls that need to go out over the network initially; subsequent calls should then be pulled from the fixtures.
I tried clearing interceptor use by adding these to all my tests:
beforeEach(() => nock.cleanAll());
afterEach(() => nock.cleanAll());
But this does not help to make the 3rd test pass when re-running.
I also tried adding persist() like so: const { nockDone } = await nockBack('post-data.json').persist(); but this fails since persist() is not a recognized method there.
Is there a way to make this work when using nockBack?
Test 1
const nockBack = require('nock').back;
const path = require('path');
const { getPosts } = require('./post');

nockBack.fixtures = path.join(__dirname, '__nock-fixtures__');
nockBack.setMode('record');

test('return a list of posts by a user', async () => {
  const userId = 1;
  const { nockDone } = await nockBack('post-data.json');
  const data = await getPosts(userId);
  expect(data.length).toBeGreaterThan(0);
  data.forEach((post) => {
    expect(post).toEqual(
      expect.objectContaining({
        userId,
      })
    );
  });
  nockDone();
});

NestJS Jest Unit Testing: How to write unit tests for a flow with functions needed to be called in a sequence?

For example, I need to write unit tests for a function publishBook() from publish.service.ts. publishBook(book, author) takes two parameters: the book object, which is saved in the DB after writeBook() from book.service.ts is called, and the author, who is the logged-in user of the system.
In order to test publishBook(), it is essential to have instances of the book and the author (user) saved in the DB. What would be the best practice to test this flow?
Keep in mind all these functions are complex, i.e. they call other functions internally.
I am using MongoMemoryServer. So this is what came to mind: provide correct data and write one unit test each to create the author and the book, and then write multiple tests for publishBook.
describe("publishBook", () => {
  it("Author should be created", async () => {
    const author = await usersService.createUser(Author());
    expect(author.data.emailAddress).toBe(Author().emailAddress);
  });

  it("Book should be created", async () => {
    const book = await booksService.writeBook(Book());
    expect(book.title).toBe(Book().title);
  });

  it("Book should be published", async () => {
    const publishedBook = await publishService.publishBook(Book(), Author());
    expect(publishedBook).toBeDefined();
  });
});

How to pass data from a test to a reporter in jest?

I'm using a custom jest reporter to populate data in testrail (a test case management software) and would like my jest tests to be the source of truth for all data being fed into the test case management software.
I've been struggling a bit to understand how I could pass additional data from the test to the reporter. I'm testing a GraphQL API, and would like the actual API payload to make its way, from the test to testrail, plus eventually additional metadata later on.
The only data elements I'm able to use are:
ancestorTitles: [Array],
duration: 52,
failureMessages: [],
fullName: 'Test suite - test case',
location: null,
numPassingAsserts: 0,
status: 'passed',
title: 'test case'
For example, a test case looks like this:
describe('My Test Suite', () => {
  test('My test case', async (done) => {
    const query = `
      {
        query {
          documents {
            totalCount
          }
        }
      }`
    const response = await graphQL(query, global.apiConfig)
    const hits = response.data.documents.totalCount
    expect(hits).toHaveLength(4)
    done()
  })
})
How could I have query passed down to the reporter?
@FrancoiG Until I find something better, I did the following.
I am using TestRail as well, so each of my test names starts with the case number from TestRail.
import testCustomData from './config/testCustomData.json'

describe('My Test Suite', () => {
  test('C111: My test case', async (done) => {
    const customData = testCustomData.C111;
    ...
  })
})
With this same code I can access the test data from the Jest TestRail reporter, since the case id is present in the test name. In my situation the data was not dynamically generated, so I could use this approach.
In your case, if the query is dynamically created, you can generate the file on the fly and store the query under the TestRail case id (or any other unique identifier) as the key. In my case that key was C111.
Then you can access it from the Jest reporter.
It worked for me, but I hope there is a better solution, like extending testResult with custom data, etc.
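A sketch of the lookup this answer describes: inside the reporter, pull the case id out of the test title and use it as the key into the JSON file. The file contents and field names below are illustrative assumptions.

```javascript
// Illustrative stand-in for ./config/testCustomData.json.
const testCustomData = {
  C111: { query: '{ documents { totalCount } }' },
};

// In the reporter, each testResult.title starts with the case id,
// e.g. "C111: My test case"; extract it and look up the custom data.
function customDataForTest(testResult) {
  const match = /^(C\d+):/.exec(testResult.title);
  return match ? testCustomData[match[1]] : undefined;
}

console.log(customDataForTest({ title: 'C111: My test case' }));
```

Since title is one of the fields the reporter does receive, this gives the reporter a path back to arbitrary per-test data without extending testResult itself.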

Mocha ignores some tests although they should be run

I'm currently working on refactoring my clone of the express-decorator NPM package. This includes refactoring the unit tests, which were previously written using AVA. I decided to rewrite them using Mocha and Chai because I like the way they define tests a lot more.
So, what is my issue? Take a look at this code (I broke it down to illustrate the problem):
test('express', (t) => {
  @web.basePath('/test')
  class Test {
    @web.get('/foo/:id')
    foo(request, response) {
      /* The test in question. */
      t.is(parseInt(request.params.id), 5);
      response.send();
    }
  }

  let app = express();
  let controller = new Test();
  web.register(app, controller);

  t.plan(1);

  return supertest(app)
    .get('/test/foo/5')
    .expect(200);
});
This code works.
Here's (basically) the same code, now using Mocha and Chai and multiple tests:
describe('The test express server', () => {
  @web.basePath('/test')
  class Test {
    @web.get('/foo/:id')
    foo(request, response) {
      /* The test in question. */
      it('should pass TEST #1',
        () => expect(toInteger(request.params.id)).to.equal(5))
      response.send()
    }
  }

  const app = express()
  const controller = new Test()
  web.register(app, controller)

  it('should pass TEST #2', (done) => {
    chai.request(app)
      .get('/test/foo/5')
      .end((err, res) => {
        expect(err).to.be.null
        expect(res).to.have.status(200)
        done()
      })
  })
})
The problem is that TEST #1 is ignored by Mocha, although that part of the code does run during the tests. I tried to console.log something there, and it appeared in the Mocha output where I expected it to.
So how do I get that test to work? My idea would be to somehow pass down the context (the test suite) to the it function, but that's not possible with Mocha, or is it?
It looks like you are moving from tape or some similar test runner to Mocha. You're going to need to significantly change your approach, because Mocha works quite differently.
tape and similar runners don't need to know ahead of time which tests exist in the suite. They discover tests as they go along executing your test code, and a test can contain another test. Mocha, on the other hand, requires that the entire suite be discoverable before running any test: it needs to know each and every test that will exist in your suite. This has some disadvantages, namely that you cannot add tests while Mocha is running the suite. You could not, for instance, have a before hook query a database and create tests from the results; you'd have to perform the query before the suite starts. However, this way of doing things also has advantages. You can use the --grep option to select only a subset of tests, and Mocha will do it without any trouble. You can also use it.only to select a single test. Last I checked, tape and its siblings have trouble doing this.
So the reason your Mocha code is not working is that you are creating a test after Mocha has started running the tests. Mocha won't outright crash on you, but when you do this the behavior you get is undefined. I've seen cases where Mocha ignored the new test, and I've seen cases where it executed it in an unexpected order.
If this were my test what I'd do is:
Remove the call to it from foo.
Modify foo to simply record the request parameters I care about on the controller instance.
foo(request, response) {
  // Remember to initialize this.requests in the constructor...
  this.requests.push(request);
  response.send()
}
Have the it('should pass TEST #2') test check the requests recorded on the controller:
it('should pass TEST #2', (done) => {
  chai.request(app)
    .get('/test/foo/5')
    .end((err, res) => {
      expect(err).to.be.null
      expect(res).to.have.status(200)
      expect(controller.requests).to.have.lengthOf(1);
      // etc...
      done()
    })
})
And I would use a beforeEach hook to reset the controller between tests so that the tests stay isolated.

Unit testing with Bookshelf.js and knex.js

I'm relatively new to Node and am working on a project using knex and bookshelf. I'm having a little bit of trouble unit testing my code and I'm not sure what I'm doing wrong.
Basically I have a model (called VorcuProduct) that looks like this:
var VorcuProduct = bs.Model.extend({
  tableName: 'vorcu_products'
});

module.exports.VorcuProduct = VorcuProduct;
And a function that saves a VorcuProduct if it does not exist on the DB. Quite simple. The function doing this looks like this:
function subscribeToUpdates(productInformation, callback) {
  model.VorcuProduct
    .where({product_id: productInformation.product_id, store_id: productInformation.store_id})
    .fetch()
    .then(function(existing_model) {
      if (existing_model == undefined) {
        new model.VorcuProduct(productInformation)
          .save()
          .then(function(new_model) { callback(null, new_model); })
          .catch(callback);
      } else {
        callback(null, existing_model);
      }
    });
}
What is the correct way to test this without hitting the DB? Do I need to mock fetch to return a model or undefined (depending on the test) and then do the same with save? Should I use rewire for this?
As you can see I'm a little bit lost, so any help will be appreciated.
Thanks!
I have been using in-memory Sqlite3 databases for automated testing with great success. My tests take 10 to 15 minutes to run against MySQL, but only 30 seconds or so with an in-memory sqlite3 database. Use :memory: for your connection string to utilize this technique.
A note about unit testing - this is not true unit testing, since we're still running queries against a database. Technically it is integration testing; however, it runs within a reasonable time period, and if you have a query-heavy application (like mine) then this technique is going to prove more effective at catching bugs than unit testing anyway.
Gotchas - Knex/Bookshelf initializes the connection at the start of the application, which means that you keep the context between tests. I would recommend writing a schema create/destroy script so that you can build and destroy the tables for each test. Also, sqlite3 is less sensitive about foreign key constraints than MySQL or PostgreSQL, so make sure you run your app against one of those every now and then to ensure that your constraints will work properly.
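A sketch of the knex configuration this answer describes, assuming the sqlite3 driver is installed; the createSchema/destroySchema helper names are placeholders for the schema script the answer recommends:

```javascript
// knex config for an in-memory sqlite3 test database.
const testConfig = {
  client: 'sqlite3',
  connection: { filename: ':memory:' },
  useNullAsDefault: true, // sqlite3 needs this for default column values
};

// In the test setup you would then do something like:
//   const knex = require('knex')(testConfig);
//   beforeEach(() => createSchema(knex));   // placeholder helpers
//   afterEach(() => destroySchema(knex));
```

Since the whole database lives in memory, tearing down and rebuilding the schema per test stays fast enough to run on every test.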
This is actually a great question which brings up both the value and limitations of unit testing.
In this particular case the non-stubbed logic is pretty simple -- just an if block -- so it's arguable whether it is worth the unit-testing effort; the accepted answer is a good one and points out the value of small-scale integration testing.
On the other hand, the exercise of doing unit testing is still valuable in that it points out opportunities for code improvements. In general, if the tests are too complicated, the underlying code can probably use some refactoring. In this case a doesProductExist function could likely be factored out. Returning the promises from knex/bookshelf instead of converting to callbacks would also be a helpful simplification.
But for comparison here's my take on what true unit-testing of the existing code would look like:
var rewire = require('rewire');
var sinon = require('sinon');
var expect = require('chai').expect;
var Promise = require('bluebird');

var subscribeToUpdatesModule = rewire('./service/subscribe_to_updates_module');
var subscribeToUpdates = subscribeToUpdatesModule.__get__('subscribeToUpdates');

describe('subscribeToUpdates', function () {
  beforeEach(function () {
    var self = this;
    this.sandbox = sinon.sandbox.create();
    var VorcuProduct = subscribeToUpdatesModule.__get__('model').VorcuProduct;
    // Resolve the promises lazily so each test can set them up first.
    this.saveStub = this.sandbox.stub(VorcuProduct.prototype, 'save', function () {
      return self.saveResultPromise;
    });
    this.fetchStub = this.sandbox.spy(function () {
      return self.fetchResultPromise;
    });
    this.sandbox.stub(VorcuProduct, 'where', function () {
      return { fetch: self.fetchStub };
    });
  });

  afterEach(function () {
    this.sandbox.restore();
  });

  it('calls save when fetch finds no existing model', function (done) {
    var self = this;
    this.fetchResultPromise = Promise.resolve(undefined);
    this.saveResultPromise = Promise.resolve('save result');
    var callback = function (err, result) {
      expect(err).to.be.null;
      expect(self.saveStub).to.be.called;
      expect(result).to.equal('save result');
      done();
    };
    subscribeToUpdates({}, callback);
  });

  // ... more it(...) blocks
});