How to retest same URL using Mocha and Nock? - node.js

I am using Mocha, Chai, Sinon, Proxyquire and Nock.
For this particular test scenario (the one this question is about), I want to test the exact same URL several times, each in a separate test that expects a different response.
For example, a response with no merchant feeds, one with 1 merchant feed, and again with 2 merchant feeds.
The existing code all works, and if I run the tests individually they pass.
However, if I run them together in a single Mocha suite they fail. I believe the issue is that Nock hijacks the global http object for a given URL, and each test (running asynchronously at the same time) is competing for the same global response reference.
In the scenario above, a response prepared with a canned reply of 1 merchant is being overwritten by the setup that responds with 2 merchants, and so on.
Is there a mechanism to avoid this happening, for instance a guarantee of serial execution of async Mocha test cases (which I believed was the default behaviour)?

In response to your point 2), you can use nock.cleanAll() to clean up all prepared nocks:
https://github.com/pgte/nock#cleanall
Put this in an afterEach block to make sure you don't have leftover nocks between tests.
afterEach ->
  nock.cleanAll()

Ok, so this works (sample code):
beforeEach(function (done) {
  nock(apiUrl)
    .get('/dfm/api/v1/feeds?all=false')
    .reply(200, [
      {'merchantId': 2, 'id': 2, 'disabled': false}
    ], {
      server: 'Apache-Coyote/1.1',
      'set-cookie': [ 'JSESSIONID=513B77F04A3A3FCA7B0AE1E99B57F237; Path=/dfm/; HttpOnly' ],
      'content-type': 'application/json;charset=UTF-8',
      'transfer-encoding': 'chunked',
      date: 'Thu, 03 Jul 2014 08:46:53 GMT'
    });

  batchProcess = proxyquire('./batchProcess', {
    './errorHandler': errorHandler.stub,
    './batchTask': batchTask.stub
  });

  winston.info('single valid feed beforeEach completed');
  done();
});
There were lots of complicating factors. Two things to be aware of:
1) I had async test cases but was using beforeEach() without the done parameter. This was causing the URL collisions. By explicitly declaring each beforeEach(done) and invoking done(), Mocha runs the hooks in serial order and the issue goes away.
2) If you have more than one test in the same test-suite file, make sure any Nock fixtures you set up in an earlier test actually get executed when a later test declares the same URL with a different response. If the earlier fixture is never invoked, Nock still retains the response from the wrong (previous) test. This was my primary problem. You could argue that tests should not declare fixtures they never use, but you could also argue this is a bug in the way Nock works. In my case the fixtures were each isolated in their own describe / beforeEach(done) functions.
Update, two days later: point 2) just bit me again, and I was glad I wrote the note above to remind myself about this hard-to-debug issue. If you are using Mocha and Nock together, be aware of it!
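One way to surface this early (a sketch, assuming a nock version that exposes nock.isDone() and nock.pendingMocks()) is to fail any test that finishes with unused interceptors, so a leftover fixture cannot silently answer the next test's request:

var nock = require('nock');

afterEach(function () {
  // If an interceptor declared for this test was never hit, flag it now
  // rather than letting it feed the wrong response to a later test.
  if (!nock.isDone()) {
    var pending = nock.pendingMocks();
    nock.cleanAll(); // clear the leftovers so the next test starts clean
    throw new Error('Unused nock interceptors: ' + pending.join(', '));
  }
});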
I did eventually implement a nock helper function too, to assist with this (CoffeeScript here):
global.resetNock = ->
  global.nock.cleanAll()
  global.nock.disableNetConnect()
Then at the start of each beforeEach, just call resetNock().
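In plain JavaScript the same helper and its use in a hook might look like this (a minimal sketch mirroring the CoffeeScript above; the done callback ties in with point 1):

var nock = require('nock');

// Reset nock to a known state: drop all prepared interceptors and
// block real network access so a forgotten mock fails fast.
function resetNock() {
  nock.cleanAll();
  nock.disableNetConnect();
}

beforeEach(function (done) {
  resetNock();
  // ...set up the interceptors this particular test expects...
  done();
});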

Related

Testing Involving Database

Before specifying my problem, I want to say that I'm new to the field of testing. Here is my problem:
I developed a REST API using Express + Sequelize (MySQL), and I want to write some tests for it. I chose the Jasmine library for testing.
Right now I want to test the create and update endpoints, so I need access to a database, but the problem is that the test cases run in parallel and there is only one database. If one test case wants to delete all items from a table while another test case has created a row in that table, there will be a problem.
const request = require('superagent');
const models = require('../../src/models');
const Station = models.Station;

describe("station testing", function () {
  before(() => {
    // delete and recreate all database tables
    // before running any test
  });

  describe("crud station", function () {
    it('should create model', () => {
      Station.create({
        'name': 'test',
        lat: 12,
        long: 123,
      }).then(model => {
        expect(model).toBeTruthy();
      });
    });

    it('should delete everything', () => {
      Station.deleteAll().then(() => {
        // problem here if this runs after the first model is created
        // but before its expect is executed
        expect(Station.Count()).toEqual(0);
      });
    });
  });
});
Your problem is that you are not writing unit tests here.
You need to understand the most important rule of unit testing - only test one unit at a time. A unit can be thought of as an area of your code. In a traditional desktop project (Java, C#, etc), a unit would be one class. In the case of Javascript, a unit is harder to define, but it certainly only includes the Javascript. If you are including any server code (for example, the database) in your tests, then you are not unit testing, you are doing integration testing (which is also very important, but much harder).
Your Javascript will have dependencies (ie other code that it calls, say via Ajax calls), which in your case will include the server code that is called. In order to unit test, you need to make sure that you are only testing the Javascript, which means that when running the tests, you don't want the server code to be called at all. That way, you isolate any errors in that unit of code, and can be confident that any problems found are indeed in that unit. If you include other units, then it could be the other units that have the problem.
In a strongly-typed language (like Java, C#, etc), there are frameworks that allow you to set up a mock for each dependency. Whilst I haven't tried any myself (that's this week's job), there are mocking frameworks for Javascript, and you would probably need to use one of them to do real unit testing. You mock out the server code, so when you run the test, it doesn't actually hit the database at all. Apart from solving your problem, it avoids a whole load of other issues that you will likely hit at some point with your current approach.
If you don't want to use a mocking framework, one other way to do it is to change your Javascript so that the function you are testing takes an extra parameter, which is a function that does the actual server call. So, instead of...
deleteCustomer(42);

function deleteCustomer(id) {
  validate(id);
  $.ajax(...);
}
...your code would look like this...
deleteCustomer(42, callServer);

function deleteCustomer(id, serverCall) {
  validate(id);
  serverCall(id);
}
...where serverCall() contains the Ajax call.
Then, to unit test, you would test something like this...
deleteCustomer(42, function(){});
...so that instead of calling the server, nothing is actually done.
This is obviously going to require some rewriting of your code, which could be avoided by mocking, but would work. My advice would be to spend some time learning how to use a mocking framework. It will pay off in the long run.
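For illustration only, Jasmine's built-in spyOn (since the question already uses Jasmine) can take the mocking route without changing deleteCustomer's signature; this sketch assumes a jQuery-style $ object as in the snippet above:

describe('deleteCustomer', function () {
  it('issues the Ajax call without hitting the server', function () {
    // Replace $.ajax for the duration of this spec; Jasmine restores it afterwards.
    spyOn($, 'ajax');

    deleteCustomer(42);

    expect($.ajax).toHaveBeenCalled();
  });
});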
Sorry this has been a bit long. Unfortunately, you're getting into a complex area of unit testing, and it's important to understand what you're doing. I strongly recommend you read up about unit testing before you go any further, as a good understanding of the basics will save you a lot of trouble later on. Anything by Robert Martin (aka Uncle Bob) on the subject will be good, but there are plenty of resources around the web.
Hope this helps. If you want any more info, or clarification, ask away.
Jasmine supports a beforeEach function which runs before each spec in a describe block.
You can use that.
describe("A spec using beforeEach and afterEach", function() {
var foo = 0;
beforeEach(function() { foo += 1; });
afterEach(function() { foo = 0; });
it("is just a function, so it can contain any code", function() {
expect(foo).toEqual(1);
});
it("can have more than one expectation", function() {
expect(foo).toEqual(1)
expect(true).toEqual(true);
});
});
So you could let the beforeEach take care of the delete operation.
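A sketch of that idea, assuming the Sequelize Station model from the question and Jasmine 2's done callback (and its done.fail helper), so each spec finishes its database work before the next one starts:

describe('crud station', function () {
  beforeEach(function (done) {
    // Start every spec from an empty table instead of relying on
    // another spec to have cleaned up.
    Station.destroy({ where: {}, truncate: true })
      .then(function () { done(); })
      .catch(done.fail);
  });

  it('should create a model', function (done) {
    Station.create({ name: 'test', lat: 12, long: 123 })
      .then(function (model) {
        expect(model).toBeTruthy();
        done();
      })
      .catch(done.fail);
  });

  it('should count zero rows after cleanup', function (done) {
    Station.count()
      .then(function (count) {
        expect(count).toEqual(0);
        done();
      })
      .catch(done.fail);
  });
});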

Frisby Functional Test standards

I'm new to this and I have been searching for ways (or standards) to write proper functional tests, but I still have many unanswered questions. I'm using FrisbyJS to write functional tests for my NodeJS API application and jasmine-node to run them.
I have gone through Frisby's documentation, but it wasn't fruitful for me.
Here is a scenario:
A guest can create a User. (No username duplication allowed, obviously)
After creating a User, he can login. On successful login, he gets an Access-Token.
A User can create a Post. Then a Post can have Comment, and so on...
A User cannot be deleted once created. (Not from my NodeJS Application)
What the Frisby documentation says is that I should write a test within a test.
For example (full-test.spec.js):
// Create User Test
frisby.create('Create a `User`')
  .post('http://localhost/users', { ... }, {json: true})
  .expectStatus(200)
  .afterJSON(function (json) {
    // User Login Test
    frisby.create('Login `User`')
      .post('http://localhost/users/login', { ... }, {json: true})
      .expectStatus(200)
      .afterJSON(function (json) {
        // Another Test (For example, Create a post, and then comment)
      })
      .toss();
  })
  .toss();
Is this the right way to write a functional test? I don't think so... It looks dirty.
I want my tests to be modular. Separate files for each test.
If I create separate files for each test, then while writing a test for Create Post, I'll need a User's Access-Token.
To summarize, the question is: How should I write tests if things are dependent on each other?
Comment is dependent on Post. Post is dependent on User.
This is a by-product of using NodeJS, and a large reason I regret deciding on Frisby - that, and the fact that I can't find a good way to load expected results out of a database in time to use them in the tests.
From what I understand, you basically want to execute your test cases in a sequence, one after the other.
But since this is JavaScript, the Frisby test cases are asynchronous. Hence, to make them run in order, the documentation suggests nesting the test cases. That is probably OK for a couple of test cases, but nesting would become chaotic with hundreds of them.
Hence, we use sequenty - another Node.js module, which uses callbacks to execute functions (and the test cases wrapped in those functions) in sequence. In the afterJSON block, after all the assertions, you have to call back - cb() [which sequenty passes to your function].
https://github.com/AndyShin/sequenty
var sequenty = require('sequenty');

function f1(cb) {
  frisby.create('Create a `User`')
    .post('http://localhost/users', { ... }, {json: true})
    .expectStatus(200)
    .afterJSON(function (json) {
      // Do some assertions
      cb(); // call back at the end to signify you are OK to execute the next test case
    })
    .toss();
}

function f2(cb) {
  // User Login Test
  frisby.create('Login `User`')
    .post('http://localhost/users/login', { ... }, {json: true})
    .expectStatus(200)
    .afterJSON(function (json) {
      // Some assertions
      cb();
    })
    .toss();
}

sequenty.run(f1, f2);
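To pass state (such as the access token) between those sequential functions, one option is a module-scoped variable. This is only a sketch: it assumes Frisby's addHeader helper, the token field and header name are assumptions about your API, and the sequenty.run call mirrors the answer above.

var frisby = require('frisby');
var sequenty = require('sequenty');

var accessToken; // shared between the sequential test functions

function login(cb) {
  frisby.create('Login `User`')
    .post('http://localhost/users/login', { /* credentials */ }, {json: true})
    .expectStatus(200)
    .afterJSON(function (json) {
      accessToken = json.token; // field name is an assumption about your API
      cb();
    })
    .toss();
}

function createPost(cb) {
  frisby.create('Create a `Post`')
    .addHeader('Access-Token', accessToken) // header name is an assumption
    .post('http://localhost/posts', { /* post body */ }, {json: true})
    .expectStatus(200)
    .afterJSON(function (json) {
      cb();
    })
    .toss();
}

sequenty.run(login, createPost);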

Mocha & SuperTest Tests Timing out

I've got a problem where sometimes my tests fail because the asserts run before the response comes back in my integration tests:
it('should find all countries when no id is specified', co.wrap(function *() {
  var uri = '/countries';
  var response = yield request.get(uri);
  should.exist(response.body);
  response.status.should.equal(200);
}));
I'm not sure how to tweak Mocha or SuperTest to wait, or whether I should use a promise here, or what. I'm using generators with Mocha here.
Adding a longer timeout to Mocha does absolutely nothing. The query behind this particular test takes a bit longer than my other tests; my other tests run just fine.

Sinon - when to use spies/mocks/stubs or just plain assertions?

I'm trying to understand how Sinon is used properly in a node project. I have gone through examples, and the docs, but I'm still not getting it. I have set up a directory with the following structure to try and work through the various Sinon features and understand where they fit in:
|-- lib
|   |-- index.js
|-- test
|   |-- test.js
index.js is
var myFuncs = {};

myFuncs.func1 = function () {
  myFuncs.func2();
  return 200;
};

myFuncs.func2 = function (data) {
};

module.exports = myFuncs;
test.js begins with the following
var assert = require('assert');
var sinon = require('sinon');
var myFuncs = require('../lib/index.js');
var spyFunc1 = sinon.spy(myFuncs.func1);
var spyFunc2 = sinon.spy(myFuncs.func2);
Admittedly this is very contrived, but as it stands I would want to test that any call to func1 causes func2 to be called, so I'd use
describe('Function 2', function () {
  it('should be called by Function 1', function () {
    myFuncs.func1();
    assert(spyFunc2.calledOnce);
  });
});
I would also want to test that func1 will return 200 so I could use
describe('Function 1', function () {
  it('should return 200', function () {
    assert.equal(myFuncs.func1(), 200);
  });
});
but I have also seen examples where stubs are used in this sort of instance, such as
describe('Function 1', function () {
  it('should return 200', function () {
    var test = sinon.stub().returns(200);
    assert.equal(myFuncs.func1(test), 200);
  });
});
How are these different? What does the stub give that a simple assertion test doesn't?
What I am having the most trouble getting my head around is how these simple testing approaches would evolve once my program gets more complex. Say I start using mysql and add a new function
myFuncs.func3 = function (data, callback) {
  connection.query('SELECT name FROM users WHERE name IN (?)', [data], function (err, rows) {
    if (err) throw err;
    var names = _.pluck(rows, 'name');
    return callback(null, names);
  });
};
I know when it comes to databases some advise having a test db for this purpose, but my end-goal might be a db with many tables, and it could be messy to duplicate this for testing. I have seen references to mocking a db with sinon, and tried following this answer but I can't figure out what's the best approach.
You have asked several different questions in one post... I will try to sort them out.
1. Testing myFuncs with two functions.
Sinon is a mocking library with a wide feature set. "Mocking" means you are supposed to replace some part of what is going to be tested with mocks or stubs. There is a good article in the Sinon documentation which describes the difference well.
When you created a spy in this case...
var spyFunc1 = sinon.spy(myFuncs.func1);
var spyFunc2 = sinon.spy(myFuncs.func2);
...you've just created watchers: each spy records its calling arguments and then calls the real function. (Note that sinon.spy(myFuncs.func2) returns a new wrapped function rather than replacing the method on myFuncs; to substitute it in place you would write sinon.spy(myFuncs, 'func2').) This is a possible scenario, but bear in mind that all of the possibly complicated logic of myFuncs.func1/func2 (ex.: a database query) will still run when called in the test.
2.1. The describe('Function 1', ...) test suite looks too contrived to me.
It is not obvious what you actually mean to test.
A function that returns a constant value is not a real-life example. In most cases there will be some parameters, and the function under test will implement some algorithm for transforming the input arguments. So in your test you're going to partly re-implement the same algorithm to check that the function works correctly. That is where TDD comes in, which supposes you start the implementation from a test and take parts of the unit-test code to implement the method being tested.
2.2. Stub. The second version of the unit test looks useless in the given example: func1 does not accept any parameter.
var test = sinon.stub().returns(200);
assert.equal(myFuncs.func1(test), 200);
Even if you replace the returned value with 100, the test will still pass.
The one that makes sense is, for example, replacing func2 with a stub to avoid a heavy calculation or remote request (DB query, HTTP or other API request) being launched in the test.
myFuncs.func2 = sinon.spy();
assert.equal(myFuncs.func1(test), 200);
assert(myFuncs.func2.calledOnce);
The basic rule of unit testing is that a unit test should be kept as simple as possible, checking the smallest possible fragment of code. In this test func1 is being tested, so we can neglect the logic of func2, which should be tested in another unit test.
Mind that the following attempt is useless:
myFuncs.func1 = sinon.stub().returns(200);
assert.equal(myFuncs.func1(test), 200);
Because in this case you have masked the real func1 logic with a stub, and you're actually testing sinon.stub().returns(). Believe me, it works well! :D
3. Mocking database queries.
Mocking a database has always been a hurdle. I can offer some advice.
3.1. Have a well-fragmented environment. Even for a small project there should be completely independent development, staging and production environments, including their databases. That means you need an automated way of creating the DB: scripts or an ORM.
In this scenario you can easily maintain a test DB in your test engine, using before()/beforeEach() to get a clean structure for your tests.
3.2. Have well-fragmented code. There should be several layers; at the least, the lowest one (the DAL) should be separated from the business logic. In that case you write tests for the business classes while simply mocking the DAL. To test the DAL itself you can take the approach you mentioned (sinon.mock the whole module) or use a specific library (ex.: replace the DB engine with SQLite for tests, as described here).
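As a sketch of that separation (the names here are hypothetical, not taken from the question): the business function receives the DAL as a dependency, so the test can stub the query with Sinon and never touch MySQL.

var sinon = require('sinon');
var assert = require('assert');

// Business logic: depends on a DAL object instead of a concrete connection.
function findUserNames(dal, ids, callback) {
  dal.query('SELECT name FROM users WHERE id IN (?)', [ids], function (err, rows) {
    if (err) return callback(err);
    callback(null, rows.map(function (row) { return row.name; }));
  });
}

describe('findUserNames', function () {
  it('maps rows to names without hitting the database', function (done) {
    // yields() makes the stub invoke the callback it receives with these
    // arguments, simulating a successful query.
    var dal = { query: sinon.stub().yields(null, [{ name: 'alice' }, { name: 'bob' }]) };

    findUserNames(dal, [1, 2], function (err, names) {
      assert.ifError(err);
      assert.deepEqual(names, ['alice', 'bob']);
      assert(dal.query.calledOnce);
      done();
    });
  });
});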
Conclusion, on "how these simple testing approaches would evolve once my program gets more complex":
It is hard to maintain unit tests unless you develop your application with tests in mind and, thus, keep it well fragmented. Stick to the main rule - keep each unit test as small as possible. Otherwise you're right, it will eventually get messy, because the constantly evolving logic of your application will be pulled into your test code.

Intern.io Single Page Application Functional Testing

I have a single page application that uses Dojo to navigate between pages.
I am writing some functional tests using Intern, and there are some niggling issues I am trying to weed out.
Specifically, I am having trouble getting Intern to behave with timeouts - none of them seem to have any effect for me. I am trying to set the initial load timeout using setPageLoadTimeout(30000), but this seems to be ignored. I also call setImplicitWaitTimeout(10000), but again this seems to have no effect.
The main problem is that it may take a couple of seconds in my test environment for the request to be sent and the response parsed and injected into the DOM. The only way I have been able to get around this is by explicitly calling sleep(3000), for example, but this can be a bit hit and miss, and sometimes the DOM elements are not ready by the time I query them (as mentioned, setImplicitWaitTimeout(10000) doesn't seem to have an effect for me).
Within the application I fire an event when the DOM has been updated, and I use dojo.subscribe to hook into this in the application. Is it possible to use dojo.subscribe within Intern to control the execution of my tests?
Here's a sample of my code. I should also mention that I use Dijit, so there is a slight delay when the response comes back and the widgets are being created (via data-dojo-type declarations)...
define([
  'intern!object',
  'intern/chai!assert',
  'require',
  'intern/node_modules/dojo/topic'
], function (registerSuite, assert, require, topic) {
  registerSuite({
    name: 'Flow1',

    // login to the application
    'Login': function (remote) {
      return remote
        .setPageLoadTimeout(30000)
        .setImplicitWaitTimeout(10000)
        .get(require.toUrl('https://localhost:8080/'))
        .elementById('username').clickElement().type('user').end()
        .elementById('password').clickElement().type('password').end()
        .elementByCssSelector('submit_button').clickElement().end();
    },

    // check the first page
    'Page1': function () {
      return this.remote
        .setPageLoadTimeout(300000)    // I've tried these calls in various places...
        .setImplicitWaitTimeout(10000) // I've tried these calls in various places...
        .title()
        .then(function (text) {
          assert.strictEqual(text, 'Page Title');
        })
        .end()
        .active().type('test').end()
        .elementByCssSelector("[title='Click Here for Help']").clickElement().end()
        .elementById('next_button').clickElement().end()
        .elementByCssSelector("[title='First Name']").clear().type('test').end()
        .elementByCssSelector("[title='Gender']").clear().type('Female').end()
        .elementByCssSelector("[title='Date Of Birth']").type('1/1/1980').end()
        .elementById('next_button').clickElement().end();
    },

    // check the second page
    'Page2': function () {
      return this.remote
        .setImplicitWaitTimeout(10000)
        .sleep(2000) // need to sleep here to wait for request & response injection and DOM parsing etc...
        .source().then(function (source) {
          assert.isTrue(source.indexOf('test') > -1, 'Should contain First Name: "test"');
        }).end();
        // more tests etc...
    }
  });
});
I'm importing the relevant Dojo module from the intern dojo node module but I'm unsure of how to use it.
Thanks
Your test is timing out because Intern tests have a default timeout of 30s that is not accessible through the public API. It can be changed by adding 'intern/lib/Test' to your define array and then overwriting the timeout on the Test prototype, e.g. Test.prototype.timeout = 60000;.
For example:
define([
  'intern!object',
  'intern/chai!assert',
  'require',
  'intern/node_modules/dojo/topic',
  'intern/lib/Test'
], function (registerSuite, assert, require, topic, Test) {
  Test.prototype.timeout = 60000;
  ...
}
This should change the timeout to one minute instead of 30s, to prevent your test timing out.
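Separately, for the sleep(2000) workaround in the question: a hedged alternative is to have the application set a flag on window whenever it publishes its "DOM updated" event, and poll that flag from the test instead of sleeping. The sketch below assumes the underlying WD.js waitForCondition helper is exposed on the remote object (the elementById-style calls in the question come from the same library); the topic name and flag name are made up for illustration.

// In the application (not the test): mark the page whenever the
// hypothetical 'app/dom-updated' topic fires, so tests can poll for it.
require(['dojo/topic'], function (topic) {
  topic.subscribe('app/dom-updated', function () {
    window.__domUpdated = true;
  });
});

// In the functional test: wait for the flag instead of sleep(2000).
// (Clear it first, e.g. .execute('window.__domUpdated = false'), before
// triggering the navigation you are waiting for.)
'Page2': function () {
  return this.remote
    .waitForCondition('window.__domUpdated === true', 10000, 500)
    .source().then(function (source) {
      assert.isTrue(source.indexOf('test') > -1, 'Should contain First Name: "test"');
    }).end();
}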
