How to run 'After' code on Mocha only on specific tests? - node.js

Edit: This question was answered, but I have another, similar question which I didn't want to open a new thread for.
I'm using Mocha and Chai to test my project.
As part of my code, I create a new user and save him in our DB (so the test user can perform various methods on our app).
Generally, after every test I would like to run a code block that deletes the user from the DB, which I did using the afterEach hook.
My problem is that I have 1 test (might be more in the future) which doesn't create a user (e.g., 'try to login without signing up'), so my afterEach code receives an error (you can't delete something that doesn't exist).
Does Mocha supply a way to disable the afterEach hook on some tests? Or is there some other solution to my problem?
Edit: Added question: my afterEach hook involves an async method which returns a promise. In the Mocha documentation I only saw examples of async hooks that work with callbacks. How am I supposed to use an afterEach hook that returns a promise?

You can nest describe blocks, so you can group user interaction tests and also group the "with user" and "without user" tests:
describe('user interaction', () => {
  describe('with user in database', () => {
    // these will run only for the tests in this `describe` block:
    beforeEach(() => createUser(...));
    afterEach(() => deleteUser(...));

    it(...);
  });

  describe('without user in database', () => {
    it(...);
  });
});
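As for the edited question: Mocha also waits for a promise returned from a hook (or from an async function) before moving on, so no done callback is needed. A minimal sketch, assuming deleteUser returns a promise:
describe('with user in database', () => {
  beforeEach(() => createUser());

  // return the promise and Mocha waits for it before running the next test
  afterEach(() => deleteUser());

  // ...or, equivalently, use an async function:
  // afterEach(async () => {
  //   await deleteUser();
  // });

  it('fails to log in with a wrong password', () => {
    // ...
  });
});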

Related

"top level" test in jest

While writing integration tests in jest I would like to reproduce the same behaviour I have achieved in mocha by:
mocha -r ts-node/register tests/integration/topLevelTest.test.ts 'tests/integration/**/*.test.ts'.
topLevelTest.test.ts :
let importantVariable;

describe("should do something with my variable", () => {
  importantVariable = returnSomethingImportant();

  it("should important variable exists", () => {
    should.exist(importantVariable);
  });
});

after(() => {
  importantVariable.cleanUp();
});
Behaviour was simple: first topLevelTest executed its describe, then the other test suites executed themselves, and in the end the after within topLevelTest was executed.
In my attempt at rewriting it for Jest I wrote something very similar. The only difference is that I used afterAll instead of after. The result is: first topLevelTest executed its describe, then afterAll, and then the other test suites. Is it possible to make afterAll run after the other test suites?
This is what setup files are for, more specifically setupFilesAfterEnv, because the Jest environment is already initialized there and globals are available.
A top-level afterAll that isn't grouped under a describe applies to all tests within the current test suite. Since Jest test files run in parallel (unless the runInBand option is specified) in separate worker processes, it obviously won't affect other test suites.
In case tests should not proceed when a setup fails, and data from the setup doesn't need to be accessed inside tests, the globalSetup and globalTeardown configuration options should be used for that. A global setup file is not a test, so describe and separate test blocks are unavailable there. The global expect is not available either, but it can be imported; this results in meaningful errors in case a setup fails:
// setup.js
let expect = require('expect');

module.exports = async () => {
  let server = ...;
  expect(server)...;
  global.__MYSERVER__ = server;
};

// teardown.js
module.exports = async function () {
  // close __MYSERVER__
};
Since global setup and teardown run in the parent process, __MYSERVER__ cannot be accessed in tests.
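For completeness, a minimal sketch of how those two files could be wired up in the Jest config (the file names are just an example):
// jest.config.js
module.exports = {
  globalSetup: '<rootDir>/setup.js',
  globalTeardown: '<rootDir>/teardown.js',
};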

Is there any way to add callbacks to Jest when all tests succeed or fail?

I'd like to run a callback when all the tests in a describe block pass (or fail). Is there some hook or something in the Jest API to do this? I could not find anything applicable in the docs.
I'm making several API requests to collect data and compare it to the data in a CSV file, in order to diff the contents. When the tests have all passed, I would like to save all the API responses in a file, therefore I need some sort of 'all tests passed' callback.
You can run Jest programmatically. Note that this approach is a "hack", because there is no official support for running Jest like this.
see: https://medium.com/web-developers-path/how-to-run-jest-programmatically-in-node-js-jest-javascript-api-492a8bc250de
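A rough sketch of that approach (runCLI is Jest's internal entry point and is not officially supported, so take the exact options as an assumption that may change between versions):
// runTests.js
const { runCLI } = require('jest');

const projectRoot = process.cwd();

runCLI({ runInBand: true }, [projectRoot])
  .then(({ results }) => {
    if (results.success) {
      // all tests passed - run your "all tests passed" callback here
      console.log(`${results.numPassedTests} tests passed`);
    } else {
      console.error(`${results.numFailedTests} tests failed`);
    }
  })
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });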
There is afterAll, which is aware of describe but runs regardless of test results. It can be used as part of a helper function that aggregates data from tests:
let responses;

const testAndSaveResponses = (name, fn) => {
  if (!responses) {
    responses = [];

    // register the aggregating hook only once, on the first call
    afterAll(async () => {
      if (!responses.includes(null)) {
        // no errors, proceed with response processing
      }
    });
  }

  test(name, async () => {
    try {
      responses.push(await fn());
    } catch (err) {
      responses.push(null);
      throw err;
    }
  });
};
It's supposed to be used instead of Jest's test and can be enhanced to support multiple describe scopes.
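For illustration, usage could look something like this (the endpoint and the global fetch are assumptions, not part of the original answer):
describe('API diff', () => {
  testAndSaveResponses('fetch users', async () => {
    // hypothetical endpoint; any promise-returning request helper works here
    const res = await fetch('https://example.com/api/users');
    return res.json();
  });

  testAndSaveResponses('fetch posts', async () => {
    const res = await fetch('https://example.com/api/posts');
    return res.json();
  });
});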
There is also a custom environment. The Circus runner allows hooking into test events, finish_describe_definition in particular. It applies to all tests, is unaware of custom data (e.g. responses that need to be saved), and has to interact with it through global variables.
Finally, there is a custom reporter, which receives a list of passed and failed tests. It applies to all tests, is unaware of custom data defined in tests, and doesn't have access to globals from the test scope, so it cannot be used to collect responses.

Jest clean up after all tests have run

Is it possible in Jest to run cleanup or teardown tasks after all other tests have completed? Similar to how setupFiles allows one to set up tasks before any test has run. Bonus points if this can also run regardless of whether the tests had any errors.
Putting afterAll(() => {}) at the top level of a file (outside any describe function) appears only to run after tests from that particular file have finished.
The use case is I have many test files that will create users in a development database, and I don't want to make each test file responsible for cleaning up and removing the user afterwards. Errors can also happen while writing tests, so if the cleanup happens regardless of errors that would be preferable.
There's a sibling option to setupFiles that also fires before every test suite, but right after your test runner (by default Jasmine 2) has initialised the global environment.
It's called setupFilesAfterEnv. Use it like this:
{
  "setupFilesAfterEnv": ["<rootDir>/setup.js"]
}
Example setup.js:
beforeAll(() => console.log('beforeAll'));
afterAll(() => console.log('afterAll'));
setup.js doesn't need to export anything. It will be executed before every test suite (every test file). Because the test runner is already initialised, global functions like beforeAll and afterAll are in scope just like in a regular test file, so you can call them as you like.
In jest.config.js:
module.exports = {
  // ...
  setupFilesAfterEnv: [
    "./test/setup.js",
    // can have more setup files here
  ],
};
In ./test/setup.js:
afterAll(() => { // or: afterAll(async () => { }); to support await calls
  // Cleanup logic
});
Note:
I am using Jest 24.8
Reference:
setupFilesAfterEnv
To do some tasks after all test suites finish, use globalTeardown. Example:
In package.json:
{
  "jest": {
    "globalTeardown": "<rootDir>/teardownJest.js"
  }
}
In teardownJest.js:
const teardown = async () => {
  console.log('called after all test suites');
};

module.exports = teardown;
Keep in mind that Jest imports every module from scratch for each test suite and for the teardown file. From the official documentation:
By default, each test file gets its own independent module registry
So, you cannot share the same DB module instance between test suites or with the teardown file. Therefore, if you wanted to close a DB connection that was opened inside a test suite after all test suites finish, this method would not work.
It looks like there is a feature called a custom reporter that does exactly this:
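A hedged sketch of that idea: a custom reporter's onRunComplete hook fires once after every test file has finished, regardless of pass or fail (the file names below are just examples):
// cleanupReporter.js
class CleanupReporter {
  onRunComplete(contexts, results) {
    console.log(`${results.numPassedTests} passed, ${results.numFailedTests} failed`);
    // put global cleanup here; note that this runs in the main process,
    // so it cannot reach objects created inside the test files
  }
}

module.exports = CleanupReporter;

// jest.config.js
module.exports = {
  reporters: ['default', '<rootDir>/cleanupReporter.js'],
};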

Frisby Functional Test standards

I'm new to this and I have been searching for ways (or standards) to write proper functional tests, but I still have many unanswered questions. I'm using FrisbyJS to write functional tests for my NodeJS API application and jasmine-node to run them.
I have gone through Frisby's documentation, but it wasn't fruitful for me.
Here is a scenario:
A guest can create a User. (No username duplication allowed, obviously)
After creating a User, he can login. On successful login, he gets an Access-Token.
A User can create a Post. Then a Post can have Comment, and so on...
A User cannot be deleted once created. (Not from my NodeJS Application)
What the Frisby documentation says is that I should write a test within a test.
For example (full-test.spec.js):
// Create User Test
frisby.create('Create a `User`')
  .post('http://localhost/users', { ... }, {json: true})
  .expectStatus(200)
  .afterJSON(function (json) {
    // User Login Test
    frisby.create('Login `User`')
      .post('http://localhost/users/login', { ... }, {json: true})
      .expectStatus(200)
      .afterJSON(function (json) {
        // Another Test (For example, Create a post, and then comment)
      })
      .toss();
  })
  .toss();
Is this the right way to write a functional test? I don't think so... It looks dirty.
I want my tests to be modular. Separate files for each test.
If I create separate files for each test, then while writing a test for Create Post, I'll need a User's Access-Token.
To summarize, the question is: How should I write tests if things are dependent on each other?
Comment is dependent on Post. Post is dependent on User.
This is a by-product of using NodeJS. It is a large part of the reason I regret deciding on Frisby, that and the fact that I can't find a good way to load expected results out of a database in time to use them in the tests.
From what I understand, you basically want to execute your test cases in a sequence, one after the other.
But since this is JavaScript, the Frisby test cases are asynchronous. Hence, to make them run in order, the documentation suggests you nest the test cases. That is probably OK for a couple of test cases, but nesting would become chaotic if there are hundreds of them.
Hence, we use sequenty, another Node.js module, which uses callbacks to execute functions (and the test cases wrapped in these functions) in sequence. In the afterJSON block, after all the assertions, you have to invoke the callback cb() [which is passed to your function by sequenty].
https://github.com/AndyShin/sequenty
var sequenty = require('sequenty');

function f1(cb) {
  frisby.create('Create a `User`')
    .post('http://localhost/users', { ... }, {json: true})
    .expectStatus(200)
    .afterJSON(function (json) {
      // Do some assertions
      cb(); // call back at the end to signify you are OK to execute the next test case
    })
    .toss();
}

function f2(cb) {
  // User Login Test
  frisby.create('Login `User`')
    .post('http://localhost/users/login', { ... }, {json: true})
    .expectStatus(200)
    .afterJSON(function (json) {
      // Some assertions
      cb();
    })
    .toss();
}

sequenty.run(f1, f2);

How do you instruct mocha/nodejs to wait till all db operations are over

I'm trying to test if some of my db operations are executed properly. The flow is as follows (I'm using mocha for testing)
Call code which loops through data and saves it to redis
Get data from redis (in my testcase) to see if it saves the right data.
I'm noticing that the "get data from DB" step gets executed well before anything is saved. I was looking at the done() option in Mocha; however, that seems to work only if data is saved through Mocha (setup etc.).
So how do I instruct Mocha to wait until all the DB writes have finished before trying to retrieve from the DB?
Thanks for any help
dankohn is correct. Here's what you need to do, a bit more fleshed out:
describe('Your test', function () {
  before(function (done) {
    Your.redis.db.call.here(your, parameters, function (err) {
      // ...you may want to check for errors first...
      done();
    });
  });

  it('should do what you wanted...', function (done) {
    // ...your test case...
    done();
  });
});
Your redis call most likely provides a callback function as a parameter. That callback function is executed when the redis call is completed. Within that callback function, call done(). The data you wrote will be there throughout your tests.
You just need to write a before function with the parameter done in mocha to load the data into the database. As a callback for when your data is loaded, call done(). Now, all your data will load before your first test.
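If your Redis client exposes a promise API instead of callbacks (for example node-redis v4+), the same idea works by returning the promise or using async/await in the hook. A sketch, with client standing in for whatever client your test already creates:
describe('Your test', function () {
  before(async function () {
    // Mocha waits for this promise before running the tests below
    await client.set('some:key', JSON.stringify({ loaded: true }));
  });

  it('should do what you wanted...', async function () {
    const stored = await client.get('some:key');
    // ...assert on `stored`...
  });
});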
