Sinon not stubbing due to multiple test cases - node.js

When I run my test cases with Mocha, Sinon sets up the stubs based on whichever test requires app.js first. Looking at other people with the same problem, it seems that require('../../../app') pulls in a cached version, so every test ends up with the same stubs as whatever was set up the first time it was required.
What I've tried in the beforeEach of both test cases:
decache('../../../app'); app = require('../../../app')
Using a Sinon sandbox and restoring it
delete require.cache[require.resolve('../../../app')]; app = require('../../../app')
Using mockery to reset the cache
I don't know if it's calling the cached version of the required module or if I'm not stubbing it out correctly.
Thanks in advance for any help that can be provided.

By default, Node imports modules with singleton-like behaviour, e.g.
const app = require('./app');
app.someProperty = 'x';
If I require app again in another file after this, you can expect someProperty to still be 'x' because, as you have worked out, Node caches the result and returns the same object every time.
In your case this is less an issue with Node and more with your usage of Sinon: when you stub something, the common practice is to restore the original value after the test is finished, e.g.
const app = require('...');
before(() => sinon.stub(app,'someFunction'));
after(() => app.someFunction.restore());
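If you'd rather use the sandbox approach mentioned in the question, a minimal sketch might look like this (assuming app exposes a someFunction to stub; the key point is restoring in afterEach so stubs never leak between test files that share the cached app module):
const sinon = require('sinon');
const app = require('../../../app');

describe('app', () => {
  let sandbox;

  beforeEach(() => {
    // fresh sandbox per test
    sandbox = sinon.createSandbox();
    sandbox.stub(app, 'someFunction').returns('stubbed value');
  });

  afterEach(() => {
    // put the original implementations back on the shared (cached) module
    sandbox.restore();
  });

  it('uses the stubbed function', () => {
    // assertions against the stubbed behaviour go here
  });
});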

Related

Assign/remove values from process.env several times during Jest tests

I have read and tried the options described in every Stack Overflow thread related to this issue, but I'm tempted to believe they're all out of date and no longer reflect Jest's behaviour.
I have a configuration service which returns a default value or a value from the environment.
During tests, I need to overwrite process.env values such as:
process.env.config_CORS_ENABLED = overwrittenAllConfig;
// expecting them to be overwritten
const corsEnabled = allConfigs.get('CORS_ENABLED');
expect(corsEnabled).toStrictEqual(overwrittenAllConfig);
Everything works fine on Windows, but on WSL and Linux workers during pipelines the value from the environment is never set.
I have beforeEach and afterEach hooks:
afterEach(async () => {
  process.env = env;
});
beforeEach(async () => {
  jest.resetModules();
  process.env = { ...env };
});
and at the beginning of the describe block:
const env = process.env;
I have also tried the Object.assign() strategy for the whole process.env object, but that didn't work either; logging process.env after assigning it shows a ton of values unrelated to what I assigned.
I've also tried the --runInBand and --maxWorkers 1 options to make sure there are no conflicts between workers, but that didn't do anything.
I can't set up the env variables with dotenv, as in some cases I need to assign multiple different values between expectations.
This is very reasonable real-world usage, and I'm shocked at the mountain of issues I've hit trying to get it working.
Happy to try any suggestions. An unreasonable amount of time has already been spent reading threads, blogs and documentation attempting to get this working.
Dynamic imports might be the answer, as described in Dynamically import module in TypeScript.
When you import the module to be tested (allConfigs) at the top of the file, it is evaluated (and reads process.env) before you've overridden the values. Therefore remove that top-level import and do something like this:
const allConfigs = await import('<correctPath>/allConfigs');
const corsEnabled = allConfigs.get('CORS_ENABLED');
Please note that dynamic imports require an await, so you'll need to mark your test with async.
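Put together, a sketch of the whole test might look like this (the module path and the get export are assumptions based on the names in the question):
describe('CORS_ENABLED', () => {
  beforeEach(() => {
    jest.resetModules(); // drop the module cache so the next import re-reads process.env
    process.env = { ...env };
  });

  it('returns the overridden value', async () => {
    process.env.config_CORS_ENABLED = overwrittenAllConfig;
    // import only after process.env has been prepared
    const allConfigs = await import('<correctPath>/allConfigs');
    const corsEnabled = allConfigs.get('CORS_ENABLED');
    expect(corsEnabled).toStrictEqual(overwrittenAllConfig);
  });
});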

Jest mock knex fn.now(), but keep rest of implementation

I am writing tests for a service that uses Knex; however, since the Knex calls make several uses of knex.fn.now(), my tests produce varied results over time. I'm wondering if it's possible to mock/spy/hijack the inner calls to knex.fn.now() with something I can control, while letting the rest of the code keep its real implementation. I can only find examples of mocking Knex completely, which would defeat the purpose of my testing.
So I'm wondering if it's possible to have Jest listen for a specific function call and insert another value in its stead.
You can mock the Knex package by creating the file __mocks__/knex/index.js, with the __mocks__ folder adjacent to node_modules.
Inside this file you can require the real Knex implementation, change it, and export it.
It should look something like this:
// __mocks__/knex/index.js
// Load the real implementation, then pin fn.now to a fixed point in time.
const knex = jest.requireActual('knex');
const fixedTime = new Date();
knex.fn.now = () => fixedTime;
module.exports = knex;
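With that file in place, any test that requires knex gets the patched copy automatically (Jest picks up manual mocks for Node modules without an explicit jest.mock call). A quick, hypothetical sanity check:
// some.test.js
const knex = require('knex'); // resolves to __mocks__/knex/index.js

test('knex.fn.now() is deterministic', () => {
  expect(knex.fn.now()).toBe(knex.fn.now()); // same fixed Date on every call
});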

Using Lucid outside of AdonisJS controller

I'm building a project with AdonisJS, and I want to build it as a modular, two-part application: The AdonisJS server runs a control panel, and a custom script outside that server runs an IRC bot. I've been trying to load Lucid into the second script so that I can interface with my database, but it only ever returns an empty object, {}. Some things to note:
I've made sure my database is populated.
I've tested code in my controllers that works and fetches results as expected.
The secondary script boots up all the same parts of Adonis as server.js, sans the actual HTTP server.
I have tried attaching this script to an HTTP server but it made no difference.
I have also tried creating raw QueryBuilder objects with the same results.
Here's the least amount of code I can put together as an example:
#!/usr/bin/node
'use strict'
const fs = require('fs')
const bootstrap = require('./bootstrap/bot')
bootstrap(() => {
  const AppConfig = use('AppConfig')
  const Settings = use('App/Model/Settings')
  const get_settings = function * () {
    yield Settings.all()
  }
  console.log(get_settings())
})
console.log() prints {}, even though the same code called within a controller prints all entries from the settings table. bootstrap/bot.js is almost an exact replica of bootstrap/http.js. The only difference is that it doesn't start an HTTP server.
I've scoured the source code looking for things that might happen between starting the server and running controller code to see if there's something critical I'm missing, but I'm lost.
Does anyone know how I can use my Lucid models outside the confines of AdonisJS controllers?
It's because your function is a generator: calling it only creates an iterator, it doesn't run the body. Inside a co-driven generator you have to drive it with the yield keyword.
So your console.log() should look like console.log(yield get_settings()).
You can use the co package to create the root generator; co.wrap() turns a generator into a plain function that you can hand to bootstrap:
const co = require('co')

bootstrap(co.wrap(function * () {
  // ...
}))
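Applied to the script from the question, a sketch could look like this (assuming bootstrap simply invokes the function it is given once Adonis has booted):
#!/usr/bin/node
'use strict'

const co = require('co')
const bootstrap = require('./bootstrap/bot')

bootstrap(co.wrap(function * () {
  const Settings = use('App/Model/Settings')
  // yield resolves the Lucid query before logging it
  const settings = yield Settings.all()
  console.log(settings)
}))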

require.main.require works but not inside Mocha test

I have written a global function for requiring certain files of my app/framework:
global.coRequireModel = function(name) {
  // CRASH happens here
  return require.main.require('./api/_co' + name + '/_co' + name + '.model');
}
This module is in /components/coGlobalFunctions.
It is required in my main app app.js like this:
require('./components/coGlobalFunctions');
Then in other modules using "something" from the framework I use:
var baseScheme = coRequireModel('Base');
This works, but not in the Mocha tests, which give me an "Error: Cannot find module" at the require.main.require call.
It seems that the tests run from another source folder, but I thought require.main.require would remove the need to link to modules relatively.
EDIT:
An example test file living in api/user:
var should = require('should');
var app = require('../../app');
var User = require('./user.model');
...
require.main points to the module that was run directly from node. So, if you run node app.js, then require.main will point to app.js. If, on the other hand, you ran it using mocha, then require.main will point to mocha. This is likely why your tests are failing.
See the Node docs for more details.
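A quick way to see this for yourself (a throwaway check, not part of the fix):
// drop this in any file that both `node app.js` and `mocha` end up loading:
// under node it prints the path of app.js, under mocha it prints mocha's own entry file
console.log(require.main.filename);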
Because require.main was not index.html in my node-webkit app when running Mocha tests, it threw errors left and right about not being able to resolve modules. A hacky fix in my test-helper.js (required first thing in all tests) sorted it out:
var path = require('path')

require.main.require = function (name) {
  // navigate to the main directory
  var newPath = path.join(__dirname, '../', name)
  return require(newPath)
}
This feels wrong, though it worked. Is there a better way to fix this? It's like combining some of the above solutions with #7 to get mocha testing working, but modifying main's require just to make everything work when testing feels really wrong.
For other avoid-the-".."-mess solutions, see here:
https://gist.github.com/branneman/8048520
This is pretty old, but here is my solution.
I needed a test harness module to be published to a private registry and required by the mocha test suite. I wanted the calling test code to pass the code under test to the harness rather than requiring it directly:
var harness = require('test-harness');
var codeUnderTest = harness('../myCode');
Inside the harness (which lived in the project's node_modules directory), I used the following code to make require find the correct file:
var path = require('path');

if (!path.isAbsolute(target)) {
  // module.parent.paths[0] is <calling dir>/node_modules; strip it to get the caller's directory
  target = path.join(path.dirname(module.parent.paths[0]), target);
}
var codeUnderTest = require(target);
...
return codeUnderTest;
This relies on require's path resolution, which always starts by looking for a node_modules subdirectory relative to the calling file. Couple that with module.parent and you get access to that search path. Then just remove the trailing node_modules part and join on the relative filename.
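To make the resolution concrete, here is what the pieces evaluate to for a hypothetical test file (paths are illustrative only):
// calling test file: /project/test/user.test.js, target: '../myCode'
// module.parent.paths[0]                   -> '/project/test/node_modules'
// path.dirname(module.parent.paths[0])     -> '/project/test'
// path.join('/project/test', '../myCode')  -> '/project/myCode'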
For scenarios not using relative paths, the same thing can be accomplished with require.resolve() and its paths option (plain require() does not accept options):
var codeUnderTest = require(require.resolve(target, { paths: module.parent.paths }));
...
return codeUnderTest;
And the two could be combined as well. I used the first form because I was actually using proxyquire which does not offer the paths option.

Grunt-Karma: Use Node.js fs-framework in Jasmine Testfile

I'm writing unit-tests with the Jasmine-framework.
I use Grunt and Karma for running the Jasmine testfiles.
I simply want to load the content of a file on my local file-system (e.g. example.xml).
I thought I can do this:
var fs = require('fs');
var fileContent = fs.readFileSync("test/resources/example.xml").toString();
console.log(fileContent);
This works well in my Gruntfile.js and even in my karma.conf.js, but not in my Jasmine spec file. My test file looks like this:
describe('Some tests', function() {
  it('load xml file', function() {
    var fs = require("fs");
    fileContent = fs.readFileSync("test/resources/example.xml").toString();
    console.log(fileContent);
  });
});
The first error I get is:
'ReferenceError: require is not defined'.
I don't know why I cannot use RequireJS here, because I can use it in Gruntfile.js and even in karma.conf.js?!
Okay, but when I manually add require.js to the files property in karma.conf.js, I get the following message:
Module name "fs" has not been loaded yet for context: _. Use require([])
With the array syntax of RequireJS, nothing happens.
I guess it is not possible to access Node.js functionality in Jasmine when running the test files with Karma. But since Karma itself runs on Node.js, why is it not possible to access Node's fs module?
Any comment/advice is welcome.
Thanks.
Your test does not work because Karma is a test runner for client-side JavaScript (code that runs in the browser), but you are trying to test Node.js code with it (code that runs on the server). Karma simply can't run server-side tests. You need a different test runner; take a look at jasmine-node, for example.
Since this comes up first in a Google search: I received a similar error but wasn't using any Node.js-style code in my project. It turned out one of my Bower components had a full copy of Jasmine in it, including its Node.js-style code, and I had
{ pattern: 'src/**/*.js', included: false },
in my karma.conf.js.
Unfortunately, Karma doesn't provide the best debugging for this sort of thing, dumping you out without telling you which file caused the issue. I had to narrow that pattern down to individual directories to find the offender.
Anyway, just be wary of Bower installs; they bring a lot of code into your project directory that you might not really want.
I think you're missing the point of unit testing here, because it seems you're copying application logic into your test suite. That defeats the purpose of a unit test, which is to run your existing functions through a test suite, not to verify that fs can load an XML file. In your scenario, if the XML-handling code in the source file changed (and introduced a bug), the unit test would still pass.
Think of unit testing as a way to run your function through lots of sample data to make sure it doesn't break. Set up your file reader to accept input, and then in the Jasmine test simply:
describe('My XML reader', function() {
  beforeEach(function() {
    this.xmlreader = new XMLReader();
  });

  it('can load some xml', function() {
    var xmldump = this.xmlreader.loadXML('inputFile.xml');
    expect(xmldump).toBeTruthy();
  });
});
Test the methods that are exposed on the object you are testing. Don't make more work for yourself. :-)
