Nodeunit - Explicit ending of tests - node.js

I have two nodeunit test cases, and if the first one fails I don't want the second one to run (the process should skip the second test case and report it accordingly). How do I do that?
I have written sample code in which the first test case specifies that the number of expected assertions is one. Even when the first test case fails, the second test case still executes. What could be the issue here?
exports.testSomething = function(test){
    test.expect(1);
    test.ok(false, "this assertion should fail");
    test.done();
};
exports.testSomethingElse = function(test){
    test.ok(true, "this assertion should pass");
    test.done();
};

Generally it is a bad idea to make separate tests dependent on each other, and testing frameworks don't support this kind of thing for that reason. Nodeunit tests are declared and processed for execution before any one test has started, so there is no way to change which tests run.
For that specific example, if the add fails, then it makes sense that delete fails. It might be slightly annoying to get more output, but it's doing just what it is supposed to do.
It may not be applicable in this case, but one option is to have a separate section for your delete tests where you use setUp to set up everything that delete needs without having to actually call add.
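A minimal sketch of that idea using nodeunit's setUp/tearDown hooks; createRecordDirectly, removeRecordDirectly, and deleteRecord are hypothetical helpers standing in for whatever your delete tests actually need:
const { createRecordDirectly, removeRecordDirectly, deleteRecord } = require('./testHelpers'); // hypothetical helpers
exports.deleteTests = {
    setUp: function (callback) {
        // Runs before each test in this group; anything put on `this` is visible to the tests
        this.record = createRecordDirectly(); // seed data without going through add()
        callback();
    },
    tearDown: function (callback) {
        // Runs after each test in this group
        removeRecordDirectly(this.record);
        callback();
    },
    testDelete: function (test) {
        test.expect(1);
        test.ok(deleteRecord(this.record), "delete should succeed on a seeded record");
        test.done();
    }
};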
Update
test.expect is used to specify how many assertions are expected in a single test. You only really need it if your test has asynchronous logic. For example:
'test': function(test){
    test.expect(5);
    test.ok(true, 'should pass');
    for (var i = 0; i < 4; i++){
        setTimeout(function(){
            test.ok(false, 'should fail');
        }, 5);
    }
    test.done();
},
Without the test.expect(5), the test would pass because done is called before the line in the setTimeout has run, so the test has no way of knowing that it has missed something. By having the expect, it makes sure the test will properly fail.

Is there a good way to print the time after each run with `mocha -w`?

I like letting mocha -w run in a terminal while I work on tests so I get immediate feedback, but when the status doesn't change I can't always tell at a glance whether it actually re-ran - did it run, or did it get stuck (it's happened)?
I'd like to have a way to append a timestamp to the end of each test run, but ideally only when run in 'watch' mode - if I'm running it manually, of course I know if it ran or not.
For now, I'm appending an asynchronous console log to the last test that runs:
it('description', function () {
    // real test
    parts.should.test.things();
    // Trick - schedule the time to be printed to the log - so I can see when it was run last
    setTimeout(() => console.log(new Date().toDateString() + " # " + new Date().toTimeString()), 5);
});
Obviously this is ugly and bad for several reasons:
It's manually added to the last test - have to know which that is
It is added every time that test is run, but never others - so if I run a different file or test -> no log; if I run only that test manually -> log
It's just kind of an affront to the purpose of the tests - subverting it to serve my will
I have seen some references to mocha adding a global.it object with the command line args, which could be searched for the '-w' flag, but that is even uglier, and still doesn't solve most of the problems.
Is there some other mocha add-in module which provides this? Or perhaps I've overlooked something in the options? Or perhaps I really shouldn't need this and I'm doing it all wrong to begin with?
Mocha supports root level hooks. If you place an after hook (for example) outside any describe block, it should run at the end of all tests. It won't run only in watch mode, of course, but should otherwise be fit for purpose.
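A minimal sketch of that idea (the message format is just an example); put it in any file Mocha loads, outside of every describe block:
// Root-level hook: registered outside any describe, so it runs once after the
// entire suite, on every re-run that `mocha -w` triggers.
after(function () {
    console.log('Test run finished at ' + new Date().toISOString());
});
If you really only want it in watch mode, one low-tech workaround could be to set an environment variable of your own when launching the watcher (e.g. WATCH=1 mocha -w) and check process.env inside the hook.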

Jest toMatchSnapshot not throwing an exception

Most of Jest's expect(arg1).xxxx() methods will throw an exception if the comparison fails to match expectations. One exception to this pattern seems to be the toMatchSnapshot() method. It seems to never throw an exception and instead stores the failure information for later Jest code to process.
How can we cause toMatchSnapshot() to throw an exception? If that's not possible, is there another way that our tests can detect when the snapshot comparison failed?
This will work! After running your toMatchSnapshot assertion, check the global state: expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
Just spent the last hour trying to figure it out for our own tests. This doesn't feel hacky to me either, though a maintainer of Jest may be able to tell me whether accessing Symbol.for('$$jest-matchers-object') is a good idea or not. Here's a full code snippet for context:
const GLOBAL_STATE = Symbol.for('$$jest-matchers-object');

describe('Describe test', () => {
    it('should test something', () => {
        try {
            expect({}).toMatchSnapshot(); // replace with whatever you're trying to test
            expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
        } catch (e) {
            console.log(`\x1b[31mWARNING!!! Catch snapshot failure here and print some message about it...`);
            throw e;
        }
    });
});
If you run a test (e.g. /Foobar.test.js) which contains a toMatchSnapshot matcher, Jest will by default create a snapshot file on the first run (e.g. /__snapshots__/Foobar.test.js.snap).
This first run, which creates the snapshot, will pass.
If you want the test to fail, you need to commit the snapshot alongside your test.
Subsequent test runs will compare the output against the committed snapshot, and if they differ the test will fail.
Here is the official link to the Documentation on 'Snapshot Testing' with Jest.
One, less than ideal, way to cause toMatchSnapshot to throw an exception when there is a snapshot mismatch is to edit the implementation of toMatchSnapshot. Experienced Node developers will consider this to be bad practice, but if you are very strongly motivated to have that method throw an exception, this approach is actually easy and depending on how you periodically update your tooling, only somewhat error-prone.
The file of interest will be named something like "node_modules/jest-snapshot/build/index.js".
The line of interest is the first line in the method:
const toMatchSnapshot = function (received, testName) {
    this.dontThrow && this.dontThrow(); const currentTestName = ....
You'll want to split that first line and omit the calling of this.dontThrow(). The resulting code should look similar to this:
const toMatchSnapshot = function (received, testName) {
    //this.dontThrow && this.dontThrow();
    const currentTestName = ....
A final step you might want to take is to send a feature request to the Jest team, or support an existing feature request that is to your liking, like the following: link

Sinon with multiple mocha test files

I have multiple Mocha test files using one shared base file known as testBase.js. It's responsible for setting up all stubs and spies.
If I run an individual file through Mocha, all test cases pass, but when I run the tests with mocha *.js, the test cases begin to fail with the error
TypeError: Attempted to wrap send which is already wrapped
Here are my beforeEach and afterEach blocks
beforeEach(function (done) {
    context.alexaSpy = sinon.spy(alexa, "send");
    done();
});

afterEach(function (done) {
    context.alexaSpy.restore();
    done();
});
I actually printed logs in both blocks and noticed something strange. The logs look like this:
-- BeforeEach Fired Test1
-- BeforeEach Fired Test1
-- AfterEach Fired Test1
-- AfterEach Fired Test1
I don't know why it's being called twice, and it's the root cause of the issue. beforeEach must not be called twice for one Mocha test.
Does importing multiple files cause beforeEach to be called twice? Can someone suggest a possible solution? I tried sinon.sandbox too, but it didn't work.
We need to see how you require in the base file to be certain.
My guess is simply that you require the file from multiple files, and each time you do this you add the setup and teardown functions. That happens because all the tests share the same outer scope. Requiring the Base file ten times will add the beforeEach ten times too.
The right way to do this would be using sinon.sandbox or sinon-test. Much easier to avoid one test interfering with the next.
But no matter what you do, you would need to export the setup function and run it in a beforeEach in each file.
Typically like this
const base = require('./base');

describe('module one', () => {
    beforeEach(base.commonStubs);
    it('should.... ', ..);
});
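And a sketch of what that shared base module could export, using a sinon sandbox so restore is uniform (the alexa require path and the export names here are just illustrative):
// base.js
const sinon = require('sinon');
const alexa = require('./alexa'); // whatever module owns send()

exports.commonStubs = function () {
    // A fresh sandbox per test means send() is never wrapped twice
    // (sinon.createSandbox() is sinon 5+; older versions use sinon.sandbox.create())
    this.sandbox = sinon.createSandbox();
    this.alexaSpy = this.sandbox.spy(alexa, 'send');
};

exports.restoreStubs = function () {
    this.sandbox.restore();
};
Each test file would then register beforeEach(base.commonStubs) and afterEach(base.restoreStubs), so the spy is created and torn down exactly once per test no matter how many files require the base module.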

How to Completely End a Test in Node Mocha Without Continuing

How do I force a Mocha test run to end completely without continuing on to the next tests? A scenario would be preventing any further tests from running if the environment was accidentally set to production.
I've tried throwing errors, but those don't stop the entire run because the tests run asynchronously.
The kind of "test" you are talking about --- namely checking whether the environment is properly set for the test suite to run --- should be done in a before hook. (Or perhaps in a beforeEach hook but before seems more appropriate to do what you are describing.)
However, it would be better to use this before hook to set up an isolated environment in which to run your test suite. It would take the form:
describe("suite", function () {
before(function () {
// Set the environment for testing here.
// e.g. Connect to a test database, etc.
});
it("blah", ...
});
If there is some overriding reason that makes it so that you cannot create a test environment with a hook and you must perform a check instead you could do it like this:
describe("suite", function () {
before(function () {
if (production_environment)
throw new Error("production environment! Aborting!");
});
it("blah", ...
});
A failure in the before hook will prevent the execution of any of the tests (the callbacks given to it). At most, Mocha will run the after hook (if you specify one) to perform cleanup after the failure.
Note that whether the before hook is asynchronous or not does not matter. (Nor does it matter whether your tests are asynchronous.) If you write it correctly (and call done when you are done, if it is asynchronous), Mocha will detect that an error occurred in the hook and won't execute tests.
And the fact that Mocha continues testing after you have a failure in a test (in a callback to it) is not dependent on whether the tests are asynchronous. Mocha does not interpret a failure of a test as a reason to stop the whole suite. It will continue trying to execute tests even if an earlier test has failed. (As I said above, a failure in a hook is a different matter.)
I generally agree with Glen, but since you have a decent use case, you should be able to trigger node to exit with the process.exit() command. See http://nodejs.org/api/process.html#process_process_exit_code. You can use it like so:
process.exit(1);
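For instance, a sketch of that approach combined with a root-level before hook (the NODE_ENV check is just an illustration of "environment accidentally set to production"):
before(function () {
    if (process.env.NODE_ENV === 'production') { // illustrative check, adapt to your setup
        console.error('Refusing to run the suite against production. Aborting.');
        // Unlike a thrown error, this kills the whole Mocha process immediately,
        // so no further tests in this or any other file will run.
        process.exit(1);
    }
});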
Per the Mocha documentation, you can add the --exit flag when executing the tests.
It forces the process to exit once the run has finished, whether the tests passed or not.
ex:
mocha **/*.spec.js --exit

How to create a data driven test in Node.js

In Node.js unit tests, what is the way to create data-driven unit tests?
For example, I have a common function/method which I want to reuse in multiple unit tests with different sets of data. I tried looking into nodeunit, vows, whiskey, qunit, and expresso, but I wasn't able to figure out a way to achieve this.
I am not looking to call the function literally in multiple tests, but rather to use the common method in a loop that picks up the data in each iteration and executes it as a unit test.
The reason for this is that I have at least 1000 rows of parameterized data for which I want to write unit tests. Obviously I cannot write 1000 unit tests by hand.
Could anyone please point me to a way to achieve this?
There is a QUnit addon which allows you to run parameterized QUnit tests:
https://github.com/AStepaniuk/qunit-parameterize
It lets you separate the test data from the test method and run the same test method against different data sets.
This is a pretty old post, but I just hit this problem myself and wasn't able to find a clean solution for QUnit without using the plugin referenced by the other comment (qunit-parameterize). Honestly, I couldn't figure out how to integrate the plugin with my company's project and gave up after about an hour.
This is how I ended up solving it:
Just define an array with your inputs (and expected outputs, if needed), iterate over your array, and define the QUnit test in the callback! Super simple, really, but worked quite well.
const testCases = [
    { input: "01/01/2015", expected: "2015-01-01" },
    { input: "09/25/2015", expected: "2015-09-01" },
    { input: "12/31/2015", expected: "2015-12-01" }
];

testCases.forEach(testCase => {
    QUnit.test("gets first of month", () => {
        const actual = new classUnderTest().getFirstOfMonth(testCase.input);
        strictEqual(actual, testCase.expected);
    });
});
I wasn't sure that QUnit would discover the test if it were nested as such, but it does just fine.
Enjoy!
