Unit testing Mongoose models in separate files causes issues (using Mockgoose & Lab) - node.js

Whenever a Mongoose model is loaded after it has already been loaded, an error is thrown, such as:
error: uncaughtException: Cannot overwrite Account model once compiled. date=Fri Feb 26 2016 10:13:40 GMT-0700 (MST), pid=19231, uid=502, gid=20, cwd=/Users/me/PhpstormProjects/project, execPath=/usr/local/Cellar/node/0.12.4/bin/node, version=v5.2.0, argv=[/usr/local/Cellar/node/0.12.4/bin/node, /usr/local/Cellar/node/0.12.4/bin/lab], rss=73306112, heapTotal=62168096, heapUsed=29534752, loadavg=[1.6005859375, 1.84716796875, 1.8701171875], uptime=648559
OverwriteModelError: Cannot overwrite Account model once compiled.
That behavior by itself is fine, but now that I'm writing unit tests for my models, I'm running into an issue.
Just some basic info about the file structure...
I have all the Mongoose models in separate files inside the src/models/ folder. To load them, one simply requires the folder, passing a Mongoose object to it, and the src/models/index.js file loads all the models and returns an object of them. The index.js file can be seen here (and, not that it's relevant, but the model names are basically the filenames without the .js extension).
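For reference, here's a minimal sketch of what such a src/models/index.js loader might look like (the actual file was only linked above, so the details here are assumptions):
const fs = require('fs')
const path = require('path')

module.exports = Mongoose => {
    const models = {}
    fs.readdirSync(__dirname)
        .filter(file => file !== 'index.js' && file.endsWith('.js'))
        .forEach(file => {
            // Each model file exports a factory that takes the Mongoose object
            const model = require(path.join(__dirname, file))(Mongoose)
            models[model.modelName] = model
        })
    return models
}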
The unit tests for the models are also split up into separate files; there's one test file for each model. And even though each unit test file focuses on a specific model, some of them use other models as well (for before/after tasks).
Initial Problem
I just created the 2nd unit test file, and when I execute each one independently, they work just fine. But when I execute all of them, I receive the above error, stating that I'm attempting to load the models more than once. Since I require ./models in each unit test file, I am indeed loading them more than once.
First Resolution Attempt
I thought that maybe I could clear all of the loaded models via after() in each separate unit test file, like so:
after(function(done) {
    mongoose.connection.close(function() {
        mongoose.connection.models = {}
        done()
    })
})
That didn't work at all (no new errors, but the same Cannot overwrite Account model once compiled errors persisted).
Second Resolution Attempt (semi-successful)
Instead of letting each model throw an error on its last line, where it returns Mongoose.model(), I insert some logic at the top of the model to check whether the model is already loaded, and if so, return that existing model object:
const thisFile = path.basename( __filename ).match( /(.*)\.js$/ )[ 1 ]
const modelName = _.chain( thisFile ).toLower().upperFirst().value()

module.exports = Mongoose => {
    // Return this model, if it already exists
    if( ! _.isUndefined( Mongoose.models[ modelName ] ) )
        return Mongoose.models[ modelName ]

    const Schema = Mongoose.Schema
    const appSchema = new Schema( /* ..schema.. */ )

    return Mongoose.model( modelName, appSchema )
}
I'm trying that out in my models right now, and it seems to work alright (alright meaning I don't get the errors listed above saying I'm loading models multiple times).
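With that guard in place, a test file can require the loader repeatedly; a minimal usage sketch (the model name Account is taken from the error above, and the relative path is an assumption):
const mongoose = require('mongoose')
// Safe to call from every test file now; already-compiled models are returned from the cache
const models = require('../src/models')(mongoose)
const Account = models.Account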
New Problem
Now whenever the unit tests execute, I receive an error; it displays once per model, but it's the same error:
$ lab
..................................................
...
Test script errors:
Cannot set property '0' of undefined
at emitOne (events.js:83:20)
at EventEmitter.emit (events.js:170:7)
at EventEmitter.g (events.js:261:16)
at emitNone (events.js:68:13)
at EventEmitter.emit (events.js:167:7)
Cannot set property '0' of undefined
at emitOne (events.js:83:20)
at EventEmitter.emit (events.js:170:7)
at EventEmitter.g (events.js:261:16)
at emitNone (events.js:68:13)
at EventEmitter.emit (events.js:167:7)
Cannot set property '0' of undefined
at emitOne (events.js:83:20)
at EventEmitter.emit (events.js:170:7)
at EventEmitter.g (events.js:261:16)
at emitNone (events.js:68:13)
at EventEmitter.emit (events.js:167:7)
There were 3 test script error(s).
53 tests complete
Test duration: 1028 ms
No global variable leaks detected
There isn't much detail to go on in that stack trace...
I'm not sure if it's caused by the code I added to each model to check whether it's already loaded; if it were, I'd expect it to either show up when I execute a single unit test, or to show Cannot set property '0' of undefined only twice (the initial model load succeeds, then the next two loads would error... I would think).
If anyone has any input, I would very much appreciate it! Thanks
Updates
I tried running lab --debug to get more info, and while it doesn't show any stack traces around the errors, it doubles them... which is odd. So if there were 2 when executing just lab, lab --debug shows 4.
Also, I use Winston to do my logging. If I change the log level to debug, which shows a lot of debug entries in the console, it doesn't show any entries around these errors... That makes me think it may not be caused by my scripts, but rather by something in the unit testing dependencies?
The errors say they originate from the events.js file, but don't say much else. I tried to find an events.js via find . -name 'events.js', with no results... odd (though events.js is a Node core module, so it wouldn't show up in the project tree).

I think the code you placed in each model is a hack. During normal execution, require has a "global" effect: once you import a module, it will not be imported a second time.
This normal flow is probably changed during the tests, but that means it's better to find a solution that can be implemented locally, inside the tests.
It also looks like you have a problem similar to what is discussed in this issue - OverwriteModelError with mocha 'watch'.
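To illustrate the point about require's cache:
// The module body runs only once per process; later requires return the cached export
const first = require('./models')
const second = require('./models')
console.log(first === second) // true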
There are some solutions to try:
1) Create new mongoose connection each time:
var db = mongoose.createConnection()
2) Run mocha via nodemon. This one looks puzzling to me, but it's still worth trying; maybe it makes each test run completely independently. I also assume you use mocha for tests:
nodemon --exec "mocha -R min" test
3) Clear mongoose models and schemes after each test:
after(function(done) {
    mongoose.models = {};
    mongoose.modelSchemas = {};
    mongoose.connection.close();
    done();
});
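For the Lab setup in the question, option 3 could be wired into each test file roughly like this (a sketch; the callback-style Lab API is assumed from the era of the question):
const Lab = require('lab')
const lab = exports.lab = Lab.script()
const mongoose = require('mongoose')

lab.after(done => {
    // Clear the compiled models and schemas so the next test file
    // can require ./models again without OverwriteModelError
    mongoose.models = {}
    mongoose.modelSchemas = {}
    mongoose.connection.close(() => done())
})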

Related

Jest ToMatchSnapshot fails on unit test after aws-cdk-lib update from 2.27.0 to 2.28.0

I am working on some code that uses Jest's toMatchSnapshot to compare an AWS stack created from the current unit test with one that was created prior to any code changes. The code that the unit test runs looks like this:
test('adds lambda function widgets', () => {
    const stack = testStack()
    new lambda.Function(stack, 'Function', {
        // -- some code
    })
    const template = Template.fromStack(stack)
    expect(template.toJSON()).toMatchSnapshot()
})
When I update the reference for aws-cdk-lib from 2.27 to 2.28, the unit test fails. The error is a mismatch of the S3Key property of an S3Bucket. I looked in the AWS account and I can find the S3Bucket name that is in the template, but not the S3Key (neither the old one nor the new one). I need to know whether this is a substantial change that somehow needs to be fixed so the snapshot pre-update matches the post-update, or whether it is something trivial and I can just update the snapshot.
I updated the version of aws-cdk-lib
I am expecting no errors in the unit test
I am getting an error of a mismatched S3Key property
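One way to check whether the mismatch is only the asset hash is to mask hash-shaped S3Key values before snapshotting; a sketch, assuming CDK asset keys look like a 64-hex-character hash plus .zip:
const json = template.toJSON()
// Normalize hash-shaped asset keys so a library bump that only changes
// asset hashes doesn't fail the snapshot (the regex is an assumption)
const masked = JSON.parse(
    JSON.stringify(json).replace(/[a-f0-9]{64}\.zip/g, '<asset-hash>.zip')
)
expect(masked).toMatchSnapshot()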

How to debug Storyshots test generation issues

I'm working on a project that uses Storybook with the Storyshots addon. The Jest tests contain a crawler that generates tests based on Storybook stories. When the test generation process goes wrong, Jest tells me Your test suite must contain at least one test. Is there any way to get more accurate information about what went wrong? At one point I might have a substantial number of working tests, and in the next moment one problematic story might take that back to zero.
See full error with stack trace below
FAIL ./storyshots.test.ts
● Test suite failed to run
Your test suite must contain at least one test.
at onResult (node_modules/@jest/core/build/TestScheduler.js:175:18)
at node_modules/@jest/core/build/TestScheduler.js:304:17
at node_modules/emittery/index.js:260:13
at Array.map (<anonymous>)
at Emittery.Typed.emit (node_modules/emittery/index.js:258:23)
The initStoryshots call looks as follows
initStoryshots({
    framework: 'react',
    configPath: path.join(__dirname, '.storybook'),
    integrityOptions: { cwd: path.join(__dirname, 'src') },
    test: multiSnapshotWithOptions(),
});
That message can be thrown for many reasons; try the --verbose option of jest for more feedback.
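For example:
jest --verbose storyshots.test.ts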

Assign/remove values from process.env several times during Jest tests

I have read and tried the options described in every Stack Overflow thread related to this issue, but I'm tempted to believe they're all out of date and no longer reflect Jest behaviour.
I have a configuration service which returns a default value or a value from the environment.
During tests, I need to overwrite process.env values such as:
process.env.config_CORS_ENABLED = overwrittenAllConfig;
// expecting them to be overwritten
const corsEnabled = allConfigs.get('CORS_ENABLED');
expect(corsEnabled).toStrictEqual(overwrittenAllConfig);
Everything works fine on windows but on WSL and linux workers during pipelines, the value from the environment is never set.
I have beforeEach and afterEach hooks:
afterEach(async () => {
    process.env = env;
});
beforeEach(async () => {
    jest.resetModules();
    process.env = { ...env };
});
and at the beginning of the describe block:
const env = process.env;
I have also tried the Object.assign() strategy for the whole process.env object, but that didn't work either; upon logging the process.env object after assigning, it has a ton of values unrelated to what I've assigned to it.
I've also tried the --runInBand and the --maxWorkers 1 option to make sure that there aren't conflicts, but that didn't do anything.
I can't be setting up env variables using .dotEnv() as I need to assign multiple different values between expectations in some cases.
This is a very reasonable real-world usage and I'm just shocked at the mountain of issues I've had trying to get this working so far.
Happy to try any suggestions. An unreasonable amount of time has already been spent reading threads, blogs, and documentation attempting to get this working.
Dynamic imports might be the answer, as described in Dynamically import module in TypeScript.
When you import the module to be tested (allConfigs) at the top of the file, it will read process.env before you've intercepted it. Therefore remove that import at the top, and do something like this:
const allConfigs = await import('<correctPath>/allConfigs');
const corsEnabled = allConfigs.get('CORS_ENABLED');
Please note that dynamic imports require an await, so you'll need to mark your test with async.
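Putting it together, a minimal sketch (the path and export shape of allConfigs are assumptions):
describe('config service', () => {
    const env = process.env;

    beforeEach(() => {
        jest.resetModules();      // clear the module registry so the re-import re-reads env
        process.env = { ...env }; // work on a copy so mutations don't leak between tests
    });

    afterEach(() => {
        process.env = env; // restore the original object
    });

    it('reads the overridden value', async () => {
        process.env.config_CORS_ENABLED = 'true';
        // Import AFTER the env var is set
        const { allConfigs } = await import('./allConfigs');
        expect(allConfigs.get('CORS_ENABLED')).toStrictEqual('true');
    });
});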

gulp-eslint not outputting to file - unable to properly configure writableStream

Issue - User cannot get output to print to file for gulp-lint process
Documentation Reference - The documentation notes that a writableStream is a valid configuration, but regrettably it does not clarify how to set one up... I have tried the solution below, along with others, to no avail, so I am seeking any insight / support that can be provided.
Observations - Other users have published guidance suggesting a stream similar to this, but when attempting it, two things are observed:
The IntelliJ IDE notes that the parameter "writable" should be updated to "writableStream"
The build output generates the file, but the file is empty, so I am obviously missing something with respect to configuring / establishing the stream properly
Sample Code Block
'use strict';
const { src, task } = require('gulp');
const eslint = require('gulp-eslint');
const fs = require('fs');

task('lint', () => {
    return src(['**/*.js', '!**/node_modules/**', '!**/handlebars.runtime-v4.1.2.js', '!**/parsley.js', '!**/slick.js', '!*SampleTests.js'])
        // Runs eslint
        .pipe(eslint())
        // Sets the format of the output and writes it to the given stream
        .pipe(eslint.format('table', fs.createWriteStream('eslint-result.xml')))
        // Called once for all ESLint results
        .pipe(eslint.results(results => {
            console.log(`Total Results: ${results.length}`);
            console.log(`Total Warnings: ${results.warningCount}`);
            console.log(`Total Errors: ${results.errorCount}`);
        }))
        // To have the process exit with an error code (1) on lint error, pipe to failOnError last
        .pipe(eslint.failOnError());
});
A new day brings new results, I guess... after running a build with this configuration again, it worked, much to my surprise.
Note, I am using Maven as the build process, invoking this via the maven-frontend plugin... and it is important to note that the result file will NOT appear until AFTER the build process has finished.

Jest toMatchSnapshot not throwing an exception

Most of Jest's expect(arg1).xxxx() methods will throw an exception if the comparison fails to match expectations. One exception to this pattern seems to be the toMatchSnapshot() method. It seems to never throw an exception and instead stores the failure information for later Jest code to process.
How can we cause toMatchSnapshot() to throw an exception? If that's not possible, is there another way that our tests can detect when the snapshot comparison failed?
This will work! After running your toMatchSnapshot assertion, check the global state: expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
Just spent the last hour trying to figure it out for our own tests. This doesn't feel hacky to me either, though a maintainer of Jest may be able to tell me whether accessing Symbol.for('$$jest-matchers-object') is a good idea or not. Here's a full code snippet for context:
const GLOBAL_STATE = Symbol.for('$$jest-matchers-object');

describe('Describe test', () => {
    it('should test something', () => {
        try {
            expect({}).toMatchSnapshot(); // replace with whatever you're trying to test
            expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);
        } catch (e) {
            console.log(`\x1b[31mWARNING!!! Catch snapshot failure here and print some message about it...`);
            throw e;
        }
    });
});
If you run a test (e.g. /Foobar.test.js) which contains a toMatchSnapshot matcher, Jest by default will create a snapshot file on the first run (e.g. /__snapshots__/Foobar.test.js.snap).
This first run that creates the snapshot will pass.
If you want the test to fail, you need to commit the snapshot alongside your test.
Subsequent test runs will compare your changes against the committed snapshot, and if they differ the test will fail.
Here is the official link to the Documentation on 'Snapshot Testing' with Jest.
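Related to that first-run behaviour: running Jest with the --ci flag makes it fail instead of silently writing a brand-new snapshot:
jest --ci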
One, less than ideal, way to cause toMatchSnapshot to throw an exception when there is a snapshot mismatch is to edit the implementation of toMatchSnapshot. Experienced Node developers will consider this to be bad practice, but if you are very strongly motivated to have that method throw an exception, this approach is actually easy and depending on how you periodically update your tooling, only somewhat error-prone.
The file of interest will be named something like "node_modules/jest-snapshot/build/index.js".
The line of interest is the first line in the method:
const toMatchSnapshot = function (received, testName) {
    this.dontThrow && this.dontThrow(); const currentTestName = ....
You'll want to split that first line and omit the call to this.dontThrow(). The resulting code should look similar to this:
const toMatchSnapshot = function (received, testName) {
    //this.dontThrow && this.dontThrow();
    const currentTestName = ....
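If you do take this route, a tool like patch-package can persist the node_modules edit across installs:
npx patch-package jest-snapshot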
A final step you might want to take is to send a feature request to the Jest team or support an existing feature request that is of your liking like the following: link
