Retries in mocha do not work - node.js

I'm trying to use retries in mocha. Here is my code:
'use strict';
const assert = require('chai').assert;

describe('retries', function() {
    it('should not be equal', function () {
        this.retries(10);
        assert.equal(1, 2);
    });
});
I expected the test to be retried 10 times, but it wasn't.
The Mocha version is 3.3.0.

Mocha does in fact retry your code. However, Mocha does not show you each individual attempt. It only reports the final result as to whether the test passed at some point (after some number of tries), or failed (because all the tries failed). If you add console.log("something") to your test, you'll see it retries your test as you specified.
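For instance, here is a minimal sketch of the same test with a log statement added; running it prints one line per attempt, even though the reporter only shows the single final failure:

'use strict';
const assert = require('chai').assert;

describe('retries', function() {
    it('should not be equal', function () {
        this.retries(10);
        console.log('attempt');   // printed on the first run and on each of the 10 retries
        assert.equal(1, 2);       // always fails, so every retry is used
    });
});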

Related

mocha test failing with " MongoError: server sockets closed"

My mocha tests are failing with:
MongoError: server XXXX sockets closed
I have a workaround to fix them:
const https = require('https');
const server = https.createServer(..);

close() {
    mongoose.disconnect(); // <-------- I will comment this line
    this.server.close();
};
If I comment out the line mongoose.disconnect();, my test suite starts working. However, I would still like to clean up after my tests. Each of my test files recreates the server and starts from scratch. It seems the error appears because there needs to be some 'waiting' before the next test file executes.
How can I correct this error?
Solution - Captain Hook to the rescue!
If I understand correctly, you wish to startup and cleanup your server after the tests. You also have a series of repetitive tasks you need to do before and after each test.
Mocha has the perfect solution for you: Say hello to Mr. Hook!
Mocha hooks are functions that you can run before all tests, after all tests, before each test, or after each test:
https://mochajs.org/#hooks
The documentation is pretty complete and I really do recommend it. In your case, however, since you are dealing with databases, you will probably be dealing with async hooks.
Sounds complex? Don't worry!
This is how normal sync hooks work:
const assert = require('chai').assert;

describe('hooks', function() {
    before(function() {
        // runs before all tests in this block
    });

    after(function() {
        // runs after all tests in this block
    });

    beforeEach(function() {
        // runs before each test in this block
    });

    afterEach(function() {
        // runs after each test in this block
    });

    // tests
    it("This is a test", () => {
        assert.equal(1, 1);
    });
});
Async hooks have only one difference: they take a parameter, done, which you call once your task is finished. Let's assume we are setting up a DB that takes 1.5 seconds to set up. We want to do this before all the tests, and we only want to do it once.
Let's assume this is our listen function from our DB:
const listen = callback => {
    setTimeout(callback, 1500);
};
So after 1.5 seconds, it calls the callback function, signaling that it is ready for action.
Now let's see how we would make an async hook:
describe('hooks', function() {
    let myDB;

    before(done => {
        myDB = newDB();       // create the DB
        myDB.listen(done);    // done() is called once the DB is ready, like the `listen` function above
    });

    // tests
});
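Applied to the original question, a rough sketch of the startup and teardown hooks might look like the following (the tlsOptions and app arguments are assumptions, since the question's createServer call was elided):

const https = require('https');
const mongoose = require('mongoose');

describe('API', function() {
    let server;

    before(done => {
        server = https.createServer(tlsOptions, app); // tlsOptions (key/cert) and app are placeholders
        server.listen(0, done);                       // done() fires once the server is listening
    });

    after(done => {
        server.close(() => {
            // disconnect only after the server has finished closing,
            // then tell Mocha that the cleanup is done
            mongoose.disconnect(done);
        });
    });

    // tests
});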
And that's it! Hope it helps!

Can I use before in between tests as shown in the following code?

I want to compute some values before some tests. What is the best way to do that: use "before" or call functions directly?
// Way 1
var expect = require('chai').expect;

describe("a test", function() {
    var val1, val2;

    before(function() {
        val1 = computeVal1();
    });

    it("should return hi1", function() {
        expect(val1).to.equal('hi1');
    });

    before(function() {
        val2 = computeVal2();
    });

    it("should return hi2", function() {
        expect(val2).to.equal('hi2');
    });
});
Is the way above better, or the way below?
// Way 2
var expect = require('chai').expect;

describe("a test", function() {
    var val1, val2;

    val1 = computeVal1();
    it("should return hi1", function() {
        expect(val1).to.equal('hi1');
    });

    val2 = computeVal2();
    it("should return hi2", function() {
        expect(val2).to.equal('hi2');
    });
});
It really makes no difference in this case because ultimately in the second example computeVal1() will run before the tests do. However, given this is exactly what the before / beforeEach hooks were designed for, I would be inclined to stay consistent with the framework and use them.
This is from the mocha documentation:
describe('hooks', function() {
    before(function() {
        // runs before all tests in this block
    });

    after(function() {
        // runs after all tests in this block
    });

    beforeEach(function() {
        // runs before each test in this block
    });

    afterEach(function() {
        // runs after each test in this block
    });

    // test cases
});
So there is not much difference between using before and calling functions directly under the describe. Note that placing a before hook between two it calls does not make it run between the tests: Mocha runs every before hook in a block ahead of all of that block's tests, so in both examples val2 is already computed by the time the first test runs.
However, it is better practice, IMO, to use beforeEach. That way, state from one test does not leak into another.
Using the hooks (before and beforeEach) to initialize the data that your test uses is usually the best thing to do. Consider this:
In your 2nd snippet, if computeVal1 or computeVal2 fails, then Mocha won't run any tests whatsoever. In the 1st snippet, Mocha may run some tests before it tries to call these functions and they fail. If Mocha loads many test files, it may matter to you that it runs as many tests as it can before a failure occurs.
In your 2nd snippet, Mocha will almost always run computeVal1 and computeVal2 even if they are not needed. If they are computationally costly, you'll pay this cost every time. If you run mocha --grep=foo so that it does not select any of your tests, both functions will be called.
In your first snippet, Mocha will run them only if they are needed. If you run mocha --grep=foo, Mocha won't call your functions.
In your 2nd snippet, even using describe.skip for your describe call won't make Mocha skip calling the functions.
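To make that last point concrete, here is a quick sketch (the console.log calls are only there to show what actually executes):

describe.skip("a test", function() {
    console.log("runs anyway");       // the describe callback still executes, even for a skipped suite

    before(function() {
        console.log("never runs");    // hooks of a skipped suite are not executed
    });

    it("is skipped", function() {
        // reported as pending
    });
});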

How to make empty placeholder tests intentionally fail in Mocha?

I'm writing an API in NodeJS and testing using Mocha, Chai and SuperTest. I'm using a typical test-driven approach of writing the tests first then satisfying those tests with working code. However, because of the number of tests for all the different permutations, I've started writing empty placeholder tests so that I have all the it('should...') descriptions in place to remind me what to test when I get to that feature. For example:
it 'should not retrieve documents without an authorized user', (done) ->
    done()
The problem with this is that done() is called without any assertion, so the test is considered passing. I've therefore added the following assertion:
false.should.equal true # force failure
but it's a hack, and the failure reason Mocha displays can be confusing, especially when other, fully written tests might be failing at the same time.
Is there any official way to intentionally fail placeholder tests like this in Mocha?
As of 05/19/2018 this is the official way: https://mochajs.org/#pending-tests
An unimplemented test shouldn't fail, it should be marked as pending.
A succinct method of marking a mocha test as not yet implemented is to not pass a callback function to the it handler.
describe("Traverse", function() {
    describe("calls the visitor function", function() {
        it("at every element inside an array");
        it("at every level of a tree");
    });
});
Running mocha test will display your unimplemented tests as pending.
$ mocha test

  Traverse
    calls the visitor function
      - at every element inside an array
      - at every level of a tree

  0 passing (13ms)
  2 pending
The official way to mark tests as not yet ready is to use skip, which is available as a method on both describe and it. Here's an example:
describe("not skipped", function () {
    it("bar", function () {
        throw new Error("fail");
    });

    it.skip("blah", function () {
        throw new Error("fail");
    });
});

describe.skip("skipped", function () {
    it("something", function () {
        throw new Error("fail");
    });
});
The above code, when put in a test.js file and run with $ mocha --reporter=spec test.js, produces:
  not skipped
    1) bar
    - blah

  skipped
    - something

  0 passing (4ms)
  2 pending
  1 failing

  1) not skipped bar:
     Error: fail
      at Context.<anonymous> (/tmp/t33/test.js:3:15)
      at callFn (/home/ldd/local/lib/node_modules/mocha/lib/runnable.js:223:21)
      at Test.Runnable.run (/home/ldd/local/lib/node_modules/mocha/lib/runnable.js:216:7)
      at Runner.runTest (/home/ldd/local/lib/node_modules/mocha/lib/runner.js:374:10)
      at /home/ldd/local/lib/node_modules/mocha/lib/runner.js:452:12
      at next (/home/ldd/local/lib/node_modules/mocha/lib/runner.js:299:14)
      at /home/ldd/local/lib/node_modules/mocha/lib/runner.js:309:7
      at next (/home/ldd/local/lib/node_modules/mocha/lib/runner.js:247:23)
      at Object._onImmediate (/home/ldd/local/lib/node_modules/mocha/lib/runner.js:276:5)
      at processImmediate [as _immediateCallback] (timers.js:354:15)
The test names that are preceded by - are skipped. Also, in a terminal that supports colors, the skipped tests appear in blue (as opposed to red for failed tests and green for passing ones). A skipped test is said to be "pending", so Mocha reports the number of skipped tests as "2 pending".
If you pass a string or an Error to done(), Mocha will report it as an error. So:
it 'should not retrieve documents without an authorized user', (done) ->
    done('not implemented')
would cause the test to fail with the output:
done() invoked with non-Error: not implemented
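In plain JavaScript, the same placeholder would look something like the sketch below; passing a real Error instead of a string avoids the "non-Error" warning and gives a cleaner failure message:

it('should not retrieve documents without an authorized user', function (done) {
    // fail the test until the real assertions are written
    done(new Error('not implemented'));
});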
I like #Canyon's solution of just not passing a callback to mark the tests "pending", but in my case I want these placeholders to fail my CI builds, so making them actual failing tests like this was easier.
This is actually a good question because I would also find this super useful. After looking into it, my suggestion is to create your own wrapper "todo" or "fail" function that you can reuse throughout your codebase.
The examples below use a function called todo, which fails with the message "not yet implemented". It might even be useful as a separate module, so you can import and use it in all your tests. You might need to modify the code a bit...
Example using chai assert
var assert = require('chai').assert;

function todo() {
    assert(false, "method not yet implemented");
}

describe("some code", function() {
    it("should fail something", function() {
        todo();
    });
});
Example using chai assert, with the fail option (although it looks unnecessary)
var assert = require('chai').assert;

function todo() {
    assert.fail(true, true, "method not yet implemented"); // the first 2 args can be anything, really
}

describe("some code", function() {
    it("should fail something", function() {
        todo();
    });
});

Mocha before hook not executing correctly

I'm setting up a test suite with Mocha on an ExpressJS app. In the test, I want to drop all models in a collection before the suite runs, and I'm doing the following:
var Users = require("../models/Users").model;

before(function(done){
    Users.remove({}, function(){
        console.log("removed");
        done();
    });
});

//rest of the test suite here
The trouble is, this before hook is timing out. What am I missing here? BTW, if I change this to be an after hook, the result is the same - it never drops the models, and times out.
By default, Mocha times out any test (or hook) that takes more than 2 seconds.
Try this:
var Users = require("../models/Users").model;

before(function(done){
    this.timeout(5000); // sets the timeout for this hook to 5 seconds
    Users.remove({}, function(){
        console.log("removed");
        done();
    });
});
Alternatively, disable the timeout entirely, as described in the Mocha documentation.
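For example, a quick sketch of turning the timeout off for just this hook; this.timeout(0) disables Mocha's timeout for the hook it is called in:

before(function(done){
    this.timeout(0); // no timeout: the hook may take as long as it needs
    Users.remove({}, function(){
        console.log("removed");
        done();
    });
});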

Mocha tests timeout if more than 4 tests are run at once

I've got a node.js + express web server that I'm testing with Mocha. I start the web server within the test harness, and also connect to a mongodb to look for output:
describe("Api", function() {
    before(function(done) {
        // start server using function exported from another js file
        // connect to mongo db
    });

    after(function(done) {
        // shut down server
        // close mongo connection
    });

    beforeEach(function(done) {
        // empty mongo collection
    });

    describe("Events", function() {
        it("Test1", ...);
        it("Test2", ...);
        it("Test3", ...);
        it("Test4", ...);
        it("Test5", ...);
    });
});
If Mocha runs more than 4 tests at a time, it times out:
  4 passing (2s)
  1 failing

  1) Api Test5:
     Error: timeout of 2000ms exceeded
      at null.<anonymous> (C:\Users\<username>\AppData\Roaming\npm\node_modules\moch\lib\runnable.js:165:14)
      at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
If I skip any one of the 5 tests, it passes successfully. The same problem occurs if I reorder the tests (it's always the last one that times out). Splitting the tests into groups also doesn't change things.
From poking at it, the request for the final test is being sent to the web server (using http module), but it's not being received by express. Some of the tests make one request, some more than one. It doesn't affect the outcome which ones I skip. I haven't been able to replicate this behaviour outside mocha.
What on earth is going on?
With Mocha, if your test function's callback declares a parameter (usually called done), you must call it; otherwise Mocha will wait until it's called and eventually time out. If you're not going to need it in a test, don't declare it:
it('test1', function(done) {
    // ...
    // 'done' is declared, so call it (eventually, usually when some async action is done)
    done();
});

it('test2', function() {
    // not an async test, 'done' is not declared, so obviously no need to call it
});
And since you're using http, try increasing http.globalAgent.maxSockets (which defaults to 5):
var http = require('http');
http.globalAgent.maxSockets = 100;
(I believe 0 turns it off completely but I haven't tried).
