How to debug Storyshots test generation issues - jestjs

I'm working on a project that uses Storybook with the Storyshots addon. The Jest tests contain a crawler that generates tests based on Storybook stories. When the test generation process goes wrong, Jest tells me Your test suite must contain at least one test. Is there any way to get more accurate information about what went wrong? At one point I might have a substantial number of working tests, and in the next moment one problematic story might take that back to zero.
See the full error with stack trace below:
FAIL ./storyshots.test.ts
● Test suite failed to run
Your test suite must contain at least one test.
at onResult (node_modules/@jest/core/build/TestScheduler.js:175:18)
at node_modules/@jest/core/build/TestScheduler.js:304:17
at node_modules/emittery/index.js:260:13
at Array.map (<anonymous>)
at Emittery.Typed.emit (node_modules/emittery/index.js:258:23)
The initStoryshots call looks as follows:
initStoryshots({
  framework: 'react',
  configPath: path.join(__dirname, '.storybook'),
  integrityOptions: { cwd: path.join(__dirname, 'src') },
  test: multiSnapshotWithOptions(),
});

That message can be thrown for many reasons; try the --verbose option of Jest for more feedback.
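For example, running only the generated suite with the flag (the file name is taken from the error output above) usually surfaces which story the crawler choked on:
npx jest storyshots.test.ts --verbose
Depending on your Storyshots version, the storyKindRegex and storyNameRegex options of initStoryshots can also help bisect a problematic story by limiting which stories get turned into tests.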

Related

BeforeEach step is repeated with cy.session using cypress-cucumber-preprocessor

I have a Cypress project where I use the Cypress session API to maintain a session throughout features.
Now I am trying to switch from the deprecated Klaveness Cypress Cucumber Preprocessor to its replacement, Badeball's Cypress Cucumber Preprocessor. But I am running into an issue: the beforeEach() step where my authentication takes place gets repeated several times before the tests start. Eventually, Cypress "snaps out of it" and starts running the actual tests, but obviously this is very resource- and time-intensive; something is going wrong.
My setup:
Dependencies:
"cypress": "^9.6.1",
"@badeball/cypress-cucumber-preprocessor": "^9.1.3",
index.ts:
beforeEach(() => {
  let isAuthInitialized = false;

  function spyOnAuthInitialized(window: Window) {
    window.addEventListener('react:authIsInitialized', () => {
      isAuthInitialized = true;
    });
  }

  login();
  cy.visit('/', { onBeforeLoad: spyOnAuthInitialized });
  cy.waitUntil(() => isAuthInitialized, { timeout: 30000 });
});
login() function:
export function login() {
  cy.session('auth', () => {
    cy.authenticate();
  });
}
As far as I can see, I follow the docs for cy.session almost literally.
My authenticate command contains only application-specific steps; it does include a cy.visit('/'), after which my application is redirected to a login service (different domain) and then continues.
The problem
cy.session works OK: it creates a session on the first try, then each subsequent time it logs a successful restore of a valid session. But this happens a number of times; it seems to get stuck in a loop.
Screenshot: (omitted)
It looks to me like cy.visit() is somehow triggering the beforeEach() again. Perhaps it clears some session data (localStorage?) that causes my authentication redirect to happen again, or it somehow makes Cypress think the test starts fresh. But of course beforeEach() should only happen once per feature.
I am looking at a diff of my code changes, and the only differences apart from the preprocessor change are:
my .cypress-cucumber-preprocessorrc.json (which I set up according to the docs)
typing changes; this preprocessor is stricter about typings
plugins/index.ts file, also set up according to the docs
Am I looking at a bug in the preprocessor? Did I make a mistake? Or something else?
There are two aspects of Cypress + Cucumber with the preprocessor that make this potentially confusing:
Cypress < 10 "Run all specs" behaviour
As demonstrated in Gleb Bahmutov PhD's great blog post, if you don't configure Cypress to do otherwise, running all specs runs each hook before each test. His proposed solution is not to use the "run all specs" button, which I find excessive, because there are ways around this; see below for a working solution with the Cucumber preprocessor.
Note: as of Cypress 10, "run all specs" is no longer supported (for reasons related to this confusion).
Cucumber preprocessor config
The Cypress Cucumber preprocessor recommends not using the config option nonGlobalStepDefinitions, but instead configuring specific paths, for example (source):
{
  "stepDefinitions": [
    "cypress/integration/[filepath]/**/*.{js,ts}",
    "cypress/integration/[filepath].{js,ts}",
    "cypress/support/step_definitions/**/*.{js,ts}"
  ]
}
What it doesn't explicitly state, though, is that the file which contains your hooks (in my case index.ts) should be excluded from these paths if you don't want them to run for every test! I can see how one might think this is obvious, but it's easy to accidentally include your hooks file in this filepath config.
TLDR: If I exclude my index.ts file (which contains my hooks) from my stepDefinitions config, I can use "run all specs" as intended, with beforeEach() running only once before each test.
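To make the pitfall concrete (the paths here are illustrative assumptions, with the hooks living in cypress/support/index.ts): a broad glob such as
"stepDefinitions": ["cypress/support/**/*.{js,ts}"]
also matches cypress/support/index.ts and pulls the hooks file in as a step definition, whereas a narrower glob keeps it out:
"stepDefinitions": ["cypress/support/step_definitions/**/*.{js,ts}"]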

How to prevent Jest from running tests when an error occurred

I wonder if there is any way to prevent tests from running when we have an error.
For example, an error in the beforeAll() function. I have tried to return or throw an error, but after that Jest still runs all of my tests.
So when my code in the beforeAll() function has an error that can affect other test results, I would like to be able to prevent Jest from running all the tests.
Jest tries to run all the tests even though we already know they would fail.
You can try to use bail in the config:
bail: 2 // finish after 2 failed tests
or
bail: true // finish after the first failure
https://jestjs.io/docs/cli#--bail
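A minimal sketch, assuming a standard jest.config.js at the project root (not shown in the question), would be:
// jest.config.js
module.exports = {
  bail: true, // stop the run after the first failure
};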
To fail your test use:
fail('something wrong');

How to remove console.warn messages when I run npm run test?

I am new to React. When I run npm run test I get a flood of the warnings below. Please give your inputs.
console.warn node_modules/react-intl-universal/lib/index.js:101
react-intl-universal locales data "null" not exists
You likely need to initialize react-intl-universal somewhere in your test setup.
A good place to start is how react-intl-universal tests their own code.
Basically you'll need this executed before the tests that depend on it run (sounds like a lot of them?):
import intl from 'react-intl-universal';
// common locale data
require('intl/locale-data/jsonp/en.js');
// app locale data
const locales = {
  "en-US": require('./locales/en-US.js'),
};
intl.init({ locales, currentLocale: "en-US" });
If it's just a few places then I'd do it how they do, but if it's your entire codebase you're better off moving it into your test configs/setup.
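For example, a setup along these lines, assuming a plain Jest config (the setup file name is illustrative, not from the question), runs the initialization once for the whole suite:
// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};

// jest.setup.js
import intl from 'react-intl-universal';
require('intl/locale-data/jsonp/en.js');
intl.init({
  locales: { 'en-US': require('./locales/en-US.js') },
  currentLocale: 'en-US',
});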

Unit testing Mongoose models in separate files causes issues (using Mockgoose & Lab)

Whenever a Mongoose model is attempted to load after it's already loaded, an error is thrown, such as:
error: uncaughtException: Cannot overwrite Account model once compiled. date=Fri Feb 26 2016 10:13:40 GMT-0700 (MST), pid=19231, uid=502, gid=20, cwd=/Users/me/PhpstormProjects/project, execPath=/usr/local/Cellar/node/0.12.4/bin/node, version=v5.2.0, argv=[/usr/local/Cellar/node/0.12.4/bin/node, /usr/local/Cellar/node/0.12.4/bin/lab], rss=73306112, heapTotal=62168096, heapUsed=29534752, loadavg=[1.6005859375, 1.84716796875, 1.8701171875], uptime=648559
OverwriteModelError: Cannot overwrite Account model once compiled.
Which I'm fine with, but now that I'm writing unit tests for my models, I'm running into an issue.
Just some basic info about the file structure...
I have all the Mongoose models in separate files, located inside the src/models/ folder. To load these models, one simply has to require the folder, passing a Mongoose object to it, and the src/models/index.js file will load all the models and return an object of the models. The index.js file can be seen here (and, not that it's relevant, but the model names are basically the filenames, without the .js).
Now the unit tests for the models are also split up into separate files. There's one test file for each model. And even though each unit test file focuses on a specific model, some of them use other models as well (for before/after tasks).
Initial Problem
I just created the 2nd unit test file, and when I execute each one independently, they work just fine. But when I execute all of them, I receive the above error, stating that I'm attempting to load the models more than once. Which makes sense: since I require ./models in each unit test file, I am loading them more than once.
First Resolution Attempt
I thought that maybe I could clear all of the loaded models via after() in each separate unit test file, like so:
after(function(done) {
  mongoose.connection.close(function() {
    mongoose.connection.models = {}
    done()
  })
})
This didn't work at all (no new errors, but the same Cannot overwrite Account model once compiled error(s) persisted).
Second Resolution Attempt (semi-successful)
Instead of letting the models throw an error on the last line, where they attempt to return Mongoose.model(), I insert some logic at the top of each model to check if the model is already loaded, and if so, return that model object:
const path = require( 'path' )
const _ = require( 'lodash' )

const thisFile = path.basename( __filename ).match( /(.*)\.js$/ )[ 1 ]
const modelName = _.chain( thisFile ).toLower().upperFirst().value()

module.exports = Mongoose => {
  // Return this model, if it already exists
  if( ! _.isUndefined( Mongoose.models[ modelName ] ) )
    return Mongoose.models[ modelName ]

  const Schema = Mongoose.Schema
  const appSchema = new Schema( /* ..schema.. */ )

  return Mongoose.model( modelName, appSchema )
}
I'm trying that out in my models right now, and it seems to work alright (alright meaning I don't get the errors listed above saying I'm loading models multiple times).
New Problem
Now whenever the unit tests execute, I receive an error; the error displays once per model, but it's the same error:
$ lab
..................................................
...
Test script errors:
Cannot set property '0' of undefined
at emitOne (events.js:83:20)
at EventEmitter.emit (events.js:170:7)
at EventEmitter.g (events.js:261:16)
at emitNone (events.js:68:13)
at EventEmitter.emit (events.js:167:7)
Cannot set property '0' of undefined
at emitOne (events.js:83:20)
at EventEmitter.emit (events.js:170:7)
at EventEmitter.g (events.js:261:16)
at emitNone (events.js:68:13)
at EventEmitter.emit (events.js:167:7)
Cannot set property '0' of undefined
at emitOne (events.js:83:20)
at EventEmitter.emit (events.js:170:7)
at EventEmitter.g (events.js:261:16)
at emitNone (events.js:68:13)
at EventEmitter.emit (events.js:167:7)
There were 3 test script error(s).
53 tests complete
Test duration: 1028 ms
No global variable leaks detected
There aren't too many details to go off of in that stack trace...
I'm not sure if it's caused by the code I added into each model to check if it's already loaded; if it were, it would either show up when I execute a single unit test, or it would only show that Cannot set property '0' of undefined twice (once for a successful initial model load, then twice for the next two... I would think).
If anyone has any input, I would very much appreciate it! Thanks
Updates
I tried running lab --debug to get more info, and while it doesn't show any stack traces around the errors showing up, it doubles them... which is odd. So if there were 2 when executing just lab, lab --debug shows 4.
Also, I use Winston to do my logging. If I change the log level to debug, which shows a lot of debug entries in the console, it doesn't show any entries around these errors... So that makes me think it may not be caused by my scripts, but rather something in the unit testing dependencies?
The errors say they originate from the events.js file, but don't say much else. I tried to find it via find . -name 'events.js', with no results... Odd.
I think the code you placed into each model is a hack. During normal execution, require has a "global" effect: once you import the module, it will not be imported a second time.
This normal flow is probably changed during the tests, but that means it is better to find a solution that can be implemented locally inside the tests.
It also looks like you have a problem similar to what is discussed in this issue: OverwriteModelError with mocha 'watch'.
There are some solutions to try:
1) Create a new mongoose connection each time (see the sketch after this list):
var db = mongoose.createConnection()
2) Run mocha via nodemon. This one looks puzzling to me, but it's still worth trying; maybe it makes each test run completely independently. I also assume you use mocha for your tests:
nodemon --exec "mocha -R min" test
3) Clear mongoose models and schemas after each test:
after(function(done){
  mongoose.models = {};
  mongoose.modelSchemas = {};
  mongoose.connection.close();
  done();
});
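A minimal sketch of option 1, with an illustrative schema, connection URI, and helper name (none of these come from the question): each test file compiles its models on its own connection instead of on the shared mongoose instance, so the OverwriteModelError cannot occur across files.
// test/helpers/db.js (sketch)
const mongoose = require('mongoose');

exports.connect = function () {
  // A dedicated connection for this test file; models are compiled on it,
  // not on the global mongoose object.
  const db = mongoose.createConnection('mongodb://localhost/test');
  const accountSchema = new mongoose.Schema({ name: String });
  const Account = db.models.Account || db.model('Account', accountSchema);
  return { db, Account };
};
Each test file would then call connect() in a before hook and db.close() in an after hook.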

How to Completely End a Test in Node Mocha Without Continuing

How do I force a Mocha test run to end completely without continuing on to the next tests? A scenario could be preventing any further tests from running if the environment was accidentally set to production and I need to stop the tests from continuing.
I've tried throwing Errors, but those don't stop the entire test run because it's running asynchronously.
The kind of "test" you are talking about, namely checking whether the environment is properly set for the test suite to run, should be done in a before hook. (Or perhaps in a beforeEach hook, but before seems more appropriate for what you are describing.)
However, it would be better to use this before hook to set an isolated environment to run your test suite with. It would take the form:
describe("suite", function () {
  before(function () {
    // Set the environment for testing here.
    // e.g. Connect to a test database, etc.
  });

  it("blah", ...
});
If there is some overriding reason why you cannot create a test environment with a hook and you must perform a check instead, you could do it like this:
describe("suite", function () {
  before(function () {
    if (production_environment)
      throw new Error("production environment! Aborting!");
  });

  it("blah", ...
});
A failure in the before hook will prevent the execution of any callbacks given to it. At most, Mocha will perform the after hook (if you specify one) to perform cleanup after the failure.
Note that whether the before hook is asynchronous or not does not matter. (Nor does it matter whether your tests are asynchronous.) If you write it correctly (and call done when you are done, if it is asynchronous), Mocha will detect that an error occurred in the hook and won't execute tests.
And the fact that Mocha continues testing after you have a failure in a test (in a callback to it) is not dependent on whether the tests are asynchronous. Mocha does not interpret a failure of a test as a reason to stop the whole suite. It will continue trying to execute tests even if an earlier test has failed. (As I said above, a failure in a hook is a different matter.)
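For illustration, an asynchronous variant might look like the sketch below (the process.nextTick call is just an illustrative stand-in for real async setup work); passing an error to done fails the hook and Mocha skips the suite's tests:
describe("suite", function () {
  before(function (done) {
    // Stand-in for some asynchronous setup or check.
    process.nextTick(function () {
      if (process.env.NODE_ENV === "production") {
        return done(new Error("production environment! Aborting!"));
      }
      done();
    });
  });

  it("blah", function () {
    // Never runs if the before hook failed.
  });
});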
I generally agree with Glen, but since you have a decent use case, you should be able to trigger node to exit with the process.exit() command. See http://nodejs.org/api/process.html#process_process_exit_code. You can use it like so:
process.exit(1);
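For example, a guard along these lines (a sketch based on the production scenario described in the question; the environment variable check is an assumption) hard-stops the whole process before any further tests run:
before(function () {
  if (process.env.NODE_ENV === "production") {
    console.error("Refusing to run tests against production");
    process.exit(1); // ends the Node process immediately, skipping all remaining tests
  }
});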
As per the Mocha documentation, you can add the --exit flag when you are executing the tests.
It will stop the process once all the tests have been executed, whether successfully or not.
ex:
mocha **/*.spec.js --exit
