Node.js: Refreshing the nodemailer-mock after each test in Jest

I am building a Node.js application using the Express.js framework. My application sends email using the nodemailer package, https://nodemailer.com/about/. I am writing tests using supertest and Jest, and I am using the nodemailer-mock package, https://www.npmjs.com/package/nodemailer-mock, to mock nodemailer in my tests.
I can mock the logic for sending email in Jest as described in the nodemailer-mock documentation. The only issue is that the sent-mail data/count is not reset after each test. For example, I have two tests that both send email, and the following assertion fails:
const sentEmails = mock.getSentMail();
// there should be one
expect(sentEmails.length).toBe(1);
It is failing because an email was sent in the previous test. How can I reset it?

Assuming your mock variable is set (or even imported) at the module level, this should cover it:
afterEach(() => {
  mock.reset();
});
Keep in mind that this method also seems to reset anything else you have configured on the mock (success response, fail response, etc.), so you may need to perform that setup each time too. That would be a good fit for a beforeEach block.
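For example, a minimal sketch, assuming the Jest setup from the nodemailer-mock docs where jest.mock() swaps nodemailer for the mock (the '250 OK' success response is just an illustrative value):
jest.mock('nodemailer', () => require('nodemailer-mock'));
const nodemailer = require('nodemailer');

beforeEach(() => {
  // re-apply whatever configuration reset() wiped out after the last test
  nodemailer.mock.setSuccessResponse('250 OK');
});

afterEach(() => {
  // clears getSentMail() and any configured success/fail responses
  nodemailer.mock.reset();
});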

Related

How to run tests in certain cases?

I'm new to TDD and I wrote a few test functions that check user sign-up and deletion. Before each run I go to the database and delete the user before testing sign-up, and insert dummy user data before testing deletion. So my question is: how does this work in an actual production environment? Do I go to the database and make all these modifications every time I want to run the tests? And what if a real user signed up with the credentials below, so the test would get a 200 instead of a 404? (I use Jest with Node.js, e2e.)
describe("given user is not found", () => {
it("should return 404", async () => {
await request(app)
.post("/api/v1/auth/signIn")
.send({
email: "s#gmail.com",
password: "s",
})
.expect(404);
});
});```
Other opinions are available but here's what I'd do.
Let's assume your backend is in Java. Before I wrote any code, I would have a test that called the API endpoint with the same input and expected a 404.
Some will tell you to mock the database with a library like Mockito, but I don't think that's necessary. For our purposes a database is nothing but a map: it takes an id and returns an object. So make an interface that describes interactions with your database (saveUser(), loadUser(), that kind of thing). Implement the interface with your real implementation, and also implement it with a test version, which is just a class with those same methods and a map for actually doing the work. This is called a fake; see the sketch below.
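For instance, a minimal JavaScript sketch of such a fake (the class and method names are illustrative, not from any particular library):
// a fake user store: same interface as the real one, backed by a Map
class FakeUserStore {
  constructor() {
    this.users = new Map();
  }
  saveUser(user) {
    this.users.set(user.email, user);
  }
  loadUser(email) {
    // undefined here is what the API layer turns into a 404
    return this.users.get(email);
  }
}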
Your API could return a 404 if the user is not found, or a 200 if it is. Your test stays in the backend, and it's quicker than it was before because you're not hitting a real database.
As far as your example goes, I really don't think you need to set anything up in the database at all. But if you absolutely must, you could have an endpoint that tears down the database and sets it up again, executed at the start of the test. Or a database image that you stand up just for the test. Both are probably overkill.
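If you did go the endpoint route anyway, a hypothetical test-only Express route might look like this (resetAndSeed is an assumed helper, not part of any library):
// mount only in test environments, never in production
if (process.env.NODE_ENV === 'test') {
  app.post('/api/v1/test/reset', async (req, res) => {
    await resetAndSeed(); // assumed helper: drops the test DB and reseeds it
    res.sendStatus(204);
  });
}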

How to run multiple integration tests using mocha recursively without getting their hooks all combined?

I have made a few integration tests using mocha which run fine when run independently, but not when I try to run them all using: mocha test --recursive.
The behaviour I noticed is that all the after hooks (and probably the before hooks too) are getting combined.
I drop my db in the after hook of each test file, but when I check in between tests I can still find data from the previous tests.
Somehow it only gets cleared after the last test.
I have already tried importing them all into one file, but even that didn't help.
Here are my hooks.
before(async () => {
  app.set('port', SERVER_PORT);
  server = http.createServer(app);
  server.listen(SERVER_PORT, () => console.log(`API running on localhost:${SERVER_PORT}`));
  // Initial feeding of the database
  await dookie.push('mongodb://localhost:27017/tests', SEEDDATA);
});

after(async () => {
  await mongoose.connection.db.dropDatabase();
  server.close();
  process.exit(0);
});
THANKS
Use Jest instead: it provides the functionality you're looking for built in.
It's hard to tell what is wrong with your tests without a closer look at the code, so I'll drop a few ideas that come to mind, with no guarantee that any of them will help.
Possibility 1
Use beforeEach and afterEach rather than before and after. This ensures that your DB cleanup code is executed after each test, rather than once after the last test in a describe block.
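Using the hooks from your question, that would look roughly like this (same dookie and mongoose calls, just moved into per-test hooks; the server setup can stay in a one-time before):
beforeEach(async () => {
  // re-seed a clean dataset before every test
  await dookie.push('mongodb://localhost:27017/tests', SEEDDATA);
});

afterEach(async () => {
  // drop whatever the test wrote
  await mongoose.connection.db.dropDatabase();
});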
Possibility 2
You are running your tests in multiple threads with mocha-parallel-tests or some other tool. Make sure that the tests where the DB is being accessed are not being parallelized.
Possibility 3
Your db.dropDatabase call returns before the database is actually dropped, while the request is still pending. You'll have to check your connection or database settings.
If nothing helps, try inserting log statements at the start of each unit test and each before/after hook; this will help you see when the code is actually being run and what is happening in the wrong order.

How to check in nightwatchjs if an E2E test correctly saved to the database?

For a meteor project with typescript and react I use nightwatch testing, which works great:
https://github.com/arichter83/meteor-react-typescript-nightwatch
a.) Checking database results via Client
Now I want to check in the database whether the end2end test successfully added the data, and that turned out to be surprisingly difficult. I can go via the client and look in the Mongo.Collection (on github):
browser
  .execute(function() {
    return (Meteor as any).connection._stores['links']._getCollection()
      .insert({title: "new link"})
  }, [], (result) => {
    const newid = result.value
    browser
      .assert.containsText('#' + newid, 'new link')
      .execute(function(newid) {
        return (Meteor as any).connection._stores['links']._getCollection()
          .remove({_id: newid})
      }, [newid], () => {
        browser
          .assert.elementNotPresent('#' + newid)
      })
  })
With this approach it is quite difficult to use my existing models and to interact with nightwatch.
b.) Checking database results in test
But I would rather use nightwatch's unit test capability in between; however, from the docs it seems that E2E and unit tests can't be mixed.
Furthermore, when importing my models in the test on the server:
import { Links } from '../../imports/api/links'
console.log(Links.findOne())
TypeScript throws an error that it can't resolve the atmosphere package meteor/mongo, so @types/meteor seems not to be loaded (probably meteor-specific):
Cannot find module 'meteor/mongo'
Questions
Is it generally advisable to check database results for E2E tests?
What is the most elegant way to do this with nightwatch (+ meteor)? (I also created a Feature Request there)
How to use meteor libraries in nightwatch tests?

Unit testing for loopback model

I have a Loopback API with a model Student.
How do I write unit tests for the node API methods of the Student model without calling the REST API? I can't find any documentation or examples for testing the model through the node API itself.
Can anyone please help?
Example with testing the count method
// With this test file located in ./test/thistest.js
var assert = require('assert');
var app = require('../server');

describe('Student node api', function(){
  it('counts initially 0 student', function(cb){
    app.models.Student.count({}, function(err, count){
      if (err) return cb(err);
      assert.deepEqual(count, 0);
      cb(); // signal mocha that the async test is done
    });
  });
});
This way you can test the node API without calling the REST API.
However, the built-in methods are already tested by StrongLoop, so it is pretty pointless to test those through the node API yourself. But for remote (i.e. custom) methods it can still be interesting.
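For example, supposing Student had a custom remote method called findTopStudents (a hypothetical name), the test would follow the same pattern:
it('returns the top students', function(cb){
  // findTopStudents is a hypothetical custom remote method on Student
  app.models.Student.findTopStudents(function(err, students){
    if (err) return cb(err);
    assert(Array.isArray(students));
    cb();
  });
});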
EDIT:
The reason this way of doing things is not spelled out anywhere is that ultimately you will need to test your complete REST API anyway, to ensure not only that the node API works as expected, but also that ACLs are properly configured, return codes are correct, etc. So in the end you would be writing 2 different tests for the same thing, which is a waste of time. (Unless you like writing tests :)

Adding a default before() function for all the test cases (Mocha)

I'm writing functions for my node.js server using TDD (Mocha). For connecting to the database I'm doing:
before(function(done){
  db.connect(function(){
    done();
  });
});
and I'm running the test cases using make test, having configured my makefile to run all the js files in that particular folder using mocha *.js.
But each js file has to make a separate connection to the database; otherwise my test cases fail, since they do not share a common scope with the other test files.
So the question is: is there anything like beforeAll() that would simply connect once to the database and then run all the test cases? Any help/suggestion appreciated.
You can setup your db connection as a module that each of the Mocha test modules imports.
var db = require('./db');
A good database interface will queue commands you send to it before it has finished connecting. You can use that to your advantage here.
In your before call, simply do something that amounts to a no op. In SQL that would be something simple like a raw query of SELECT 1. You don't care about the result. The return of the query just signifies that the database is ready.
Since each Mocha module uses the same database module, it'll only connect once.
Use this in each of your test modules:
before(function(done) {
  db.no_op(done);
});
Then define db.no_op to be a function that performs the no op and takes a callback function.
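A minimal sketch of such a db module, assuming the mysql driver (which queues queries issued before the connection is established; the connection settings are illustrative):
// db.js - every test file that requires this shares one connection
var mysql = require('mysql');
var connection = mysql.createConnection({ host: 'localhost', database: 'test' });
connection.connect();

exports.no_op = function(done) {
  // the result is irrelevant; the callback firing means the DB is ready
  connection.query('SELECT 1', function(err) {
    done(err);
  });
};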
