I read the documentation and got confused about the difference between these two. I know the code in setupFiles is executed before the code in setupTestFrameworkScriptFile. What other differences do they have?
I guess the code in both is executed before each test. Does that mean that if I have 10 it() blocks, they are executed 10 times?
setupTestFrameworkScriptFile and setupFiles are executed once before each file containing tests. If you have 10 tests in one file - no matter how many describes - they will run once. If the tests are in 2 separate files - they will run twice.
In both setupTestFrameworkScriptFile and setupFiles you can initialize globals, like this:
global.MY_GLOBAL = 42
setupFiles runs before the test framework is installed in the environment.
In setupTestFrameworkScriptFile you also have access to the installed test environment - methods like describe, expect and other globals. For example, you can add your custom matchers there:
expect.extend({
  toHaveLength(received, argument) {
    // a custom matcher must return an object of the form { pass, message }
    // ...
  }
})
... or set a new maximum timeout interval:
jest.setTimeout(12000)
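For reference, this is roughly how the two options sit in the Jest config; a minimal sketch (the setup file names are arbitrary examples):
// jest.config.js - the setup file paths below are arbitrary examples
module.exports = {
  // runs once per test file, before the test framework is installed
  setupFiles: ['./setup-globals.js'],
  // runs once per test file, after the framework is installed,
  // so describe/expect/jest are available inside it
  // (newer Jest versions call this option setupFilesAfterEnv)
  setupTestFrameworkScriptFile: './setup-framework.js',
};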
I have a Cypress project where I use the Cypress session API to maintain a session throughout features.
Now I am trying to switch from the deprecated Klaveness Cypress Cucumber Preprocessor to its replacement, Badeball's Cypress Cucumber Preprocessor. But I am running into an issue: the beforeEach() step where my authentication takes place gets repeated several times before the tests start. Eventually, Cypress "snaps out of it" and starts running the actual tests - but obviously this is very resource- and time-intensive; something is going wrong.
My setup:
Dependencies:
"cypress": "^9.6.1",
"#badeball/cypress-cucumber-preprocessor": "^9.1.3",
index.ts:
beforeEach(() => {
  let isAuthInitialized = false;

  function spyOnAuthInitialized(window: Window) {
    window.addEventListener('react:authIsInitialized', () => {
      isAuthInitialized = true;
    });
  }

  login();
  cy.visit('/', { onBeforeLoad: spyOnAuthInitialized });
  cy.waitUntil(() => isAuthInitialized, { timeout: 30000 });
});
login() function:
export function login() {
  cy.session('auth', () => {
    cy.authenticate();
  });
}
As far as I can see, I follow the docs for cy.session almost literally.
My authenticate command contains only application-specific steps; it does include a cy.visit('/') - after which my application is redirected to a login service (different domain) and then continues.
The problem
cy.session works OK: it creates a session on the first try - then each subsequent time it logs a successful restore of a valid session. But this happens a number of times; it seems to get stuck in a loop.
Screenshot:
It looks to me like cy.visit() is somehow triggering the beforeEach() again. Perhaps it is clearing some session data (localStorage?) that causes my authentication redirect to happen again - or it somehow makes Cypress think the test starts fresh. But of course beforeEach() should only happen once per feature.
I am looking at a diff of my code changes, and the only differences apart from the preprocessor change are:
my .cypress-cucumber-preprocessorrc.json (which I set up according to the docs)
typing changes, since this preprocessor is stricter about typings
the plugins/index.ts file, also set up according to the docs
Am I looking at a bug in the preprocessor? Did I make a mistake? Or something else?
There are two aspects of Cypress + Cucumber with the preprocessor that make this potentially confusing:
Cypress < 10 "Run all specs" behaviour
As demonstrated in Gleb Bahmutov PhD's great blog post, if you don't configure Cypress to do otherwise, running all specs runs each hook before each test. His proposed solution is to not use the "run all specs" button, which I find excessive - because there are ways around this; see below for a working solution with the Cucumber preprocessor.
Note: as of Cypress 10, "run all specs" is no longer supported (for reasons related to this confusion).
Cucumber preprocessor config
The Cypress Cucumber preprocessor recommends not using the config option nonGlobalStepDefinitions, but instead configuring specific paths, like this (source):
"stepDefinitions": [
"cypress/integration/[filepath]/**/*.{js,ts}",
"cypress/integration/[filepath].{js,ts}",
"cypress/support/step_definitions/**/*.{js,ts}",
]
}
What it doesn't explicitly state, though, is that the file which contains your hooks (in my case index.ts) should be excluded from these paths if you don't want the hooks to run for each test! I could see how one might think this is obvious, but it's easy to accidentally include your hooks file in this filepath config.
TLDR: If I exclude my index.ts file, which contains my hooks, from my stepDefinitions config, I can use "run all specs" as intended - with beforeEach() running only once before each test.
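For illustration (the paths here are hypothetical, not taken from my project), one way to keep the hooks out of the matched paths is to load the hooks file from the Cypress support file instead, and let stepDefinitions only point at real step definitions:
// cypress/support/index.js - loaded once per spec by Cypress itself
require('./hooks'); // the beforeEach()/login() hooks live here, outside the stepDefinitions globs

// .cypress-cucumber-preprocessorrc.json then only lists step definition folders, e.g.
// "stepDefinitions": ["cypress/support/step_definitions/**/*.{js,ts}"]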
I have a WDIO project that has many tests. Some tests need to be run consecutively while other tests can run in parallel.
I cannot run all tests in parallel because the tests that need to be run consecutively will fail, and I cannot run all tests consecutively because it would take far too long for the execution to finish.
For these reasons I need to find a way to run these tests both consecutively and in parallel. Is it possible to configure this WDIO project to accomplish this?
I run these tests through SauceLabs and understand that I can set the maxInstances variable to as many VMs as I'd like to run in parallel. Is it possible to set certain tests to use a high maxInstances while other tests have a maxInstances of 1?
Or perhaps there is a way to use logic based on the test directories to run certain tests in parallel and others consecutively?
For example, if I have these tests:
'./tests/parallel/one.js',
'./tests/parallel/two.js',
'./tests/consecutive/three.js',
'./tests/consecutive/four.js',
Could I create some logic such as:
if (spec.includes('/consecutive/')) {
  // Do not run until other '/consecutive/' tests finish execution
} else {
  // Run in parallel
}
How can I configure this WDIO project to run tests both consecutively and in parallel? Thank you!
You could create 2 separate conf.js files.
//concurrent.conf.js
exports.config = {
  // ==================
  // Specify Test Files
  // ==================
  specs: [
    './test/concurrent/**/*.js'
  ],
  maxInstances: 1,
  // ...the rest of your options
};
and have another one for parallel. To reduce duplication, create a shared conf.js and then simply override the appropriate values in each.
//parallel.conf.js
const { config } = require('./shared.conf');

config.specs = [
  './test/parallel/**/*.js'
];
config.maxInstances = 100;

exports.config = config;
And then when you run your tests you can do:
//parallel
wdio test/configs/parallel.conf.js
//concurrent
wdio test/configs/concurrent.conf.js
Here's an example of how to have a shared config file, with other config files building on the shared one.
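As a rough sketch (the option values are placeholders rather than a real project's settings), the shared file could look something like this:
// shared.conf.js - options common to both run modes
exports.config = {
  runner: 'local',
  framework: 'mocha',   // placeholder - whichever framework the project uses
  logLevel: 'info',
  specs: [],            // overridden by parallel.conf.js / concurrent.conf.js
  maxInstances: 10,     // overridden by parallel.conf.js / concurrent.conf.js
  // ...reporters, capabilities, hooks, etc.
};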
I am writing some tests for a Node/MongoDB project that runs various modules via command line entries. My question is, for the tests, is there a way I can simulate a command line entry? For instance, if what I write in the command line is:
TASK=run-comparison node server
... is there a way I can effectively simulate that within my tests?
The common practice here, as far as I know, is to wrap as much of your app as you can in a function/class that receives the arguments, so you can easily test it with unit tests:
function myApp(args, env) {
  // My app code with given args and env variables
}
In your test file:
// Run app with given env variable
myApp("", { TASK: "run-comparison"});
In your particular case, if all your tasks are set through env variables, then by editing process.env, using mocks, or using .env files you may be able to test this without modifying your code.
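For instance, a minimal sketch (assuming the server code reads the task from process.env.TASK; the require path is illustrative):
// in the test file, before loading the code under test
process.env.TASK = 'run-comparison';

// the module now sees TASK exactly as if it had been set on the command line
const server = require('../server');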
If that is not enough for your case (i.e. you really need to exactly simulate command line execution) I wrote a small library to solve this exact issue some time ago: https://github.com/angrykoala/yerbamate (I'm not sure if there are other alternatives available now).
With the example you provided, a test case could be something like this:
const yerbamate = require('yerbamate');

// Gets the package.json information
const pkg = yerbamate.loadPackage(module);

// Test the given command in the package.json root dir
yerbamate.run("TASK=run-comparison node server", pkg.dir, {}, function(code, out, errs) {
  // This callback will be called once the script finished
  if (!yerbamate.successCode(code)) console.log("Process exited with error code!");
  if (errs.length > 0) console.log("Errors in process:" + errs.length);
  console.log("Output: " + out[0]); // Stdoutput
});
In the end, this is a fairly simple wrapper around the native child_process module, which you could also use directly to solve your problem by executing subprocesses yourself.
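If you'd rather avoid an extra dependency, a rough equivalent using Node's built-in child_process could look like this (the command and env variable mirror the question; error handling is kept minimal):
const { exec } = require('child_process');

// run the same command the question uses, passing TASK through the environment
exec('node server', { env: { ...process.env, TASK: 'run-comparison' } }, (error, stdout, stderr) => {
  if (error) console.log('Process exited with error code ' + error.code);
  if (stderr.length > 0) console.log('Errors in process: ' + stderr);
  console.log('Output: ' + stdout);
});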
I like letting mocha -w run in a terminal while I work on tests so I get immediate feedback, but when the status doesn't change I can't always tell at a glance whether anything happened - did it run, or did it get stuck (it's happened)?
I'd like to have a way to append a timestamp to the end of each test run, but ideally only when run in 'watch' mode - if I'm running it manually, of course I know if it ran or not.
For now, I'm appending an asynchronous console log to the last test that runs:
it('description', function () {
  // real test
  parts.should.test.things();

  // Trick - schedule the time to be printed to the log - so I can see when it was run last
  setTimeout(() => console.log(new Date().toDateString() + " # " + new Date().toTimeString()), 5);
});
Obviously this is ugly and bad for several reasons:
It's manually added to the last test - have to know which that is
It is added every time that test is run, but never others - so if I run a different file or test -> no log; if I run only that test manually -> log
It's just kind of an affront to the purpose of the tests - subverting it to serve my will
I have seen some references to mocha adding a global.it object with the command line args, which could be searched for the '-w' flag, but that is even uglier, and still doesn't solve most of the problems.
Is there some other mocha add-in module which provides this? Or perhaps I've overlooked something in the options? Or perhaps I really shouldn't need this and I'm doing it all wrong to begin with?
Mocha supports root-level hooks. If you place an after hook (for example) outside any describe block, it will run at the end of all tests. It won't run only in watch mode, of course, but it should otherwise be fit for purpose.
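A minimal sketch of that idea (the log format is just an example):
// placed outside any describe block, e.g. at the bottom of a file mocha loads
after(function () {
  // runs once after the whole run - a "last run at" heartbeat for watch mode
  console.log('Tests finished at ' + new Date().toISOString());
});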
I have multiple Mocha test files using one shared base file known as testBase.js. It's responsible for setting up all stubs and spies.
If I run an individual file through mocha, all test cases pass, but when I run the tests through mocha *.js, test cases begin to fail and raise this error:
TypeError: Attempted to wrap send which is already wrapped
Here are my beforeEach and afterEach blocks
beforeEach(function (done) {
  context.alexaSpy = sinon.spy(alexa, "send");
  done();
});

afterEach(function (done) {
  context.alexaSpy.restore();
  done();
});
I actually printed out logs in both blocks and noticed something strange. I see the logs in this order:
-- BeforeEach Fired Test1
-- BeforeEach Fired Test1
-- AfterEach Fired Test1
-- AfterEach Fired Test1
I don't know why it's being called twice, and that is the root cause of the issue. beforeEach must not be called twice for one mocha test.
Does importing multiple files cause beforeEach to be called twice? Can someone suggest a possible solution to this? I tried sinon.sandbox too, but it does not work.
We need to see how you require the base file to be certain.
My guess is simply that you require the file from multiple files, and each time you do this you add the setup and teardown functions. That happens because all the tests share the same outer scope. Requiring the base file ten times will add the beforeEach ten times too.
The right way to do this would be to use sinon.sandbox or sinon-test, which makes it much easier to avoid one test interfering with the next.
But no matter what you do, you would need to export the setup function and run it in a beforeEach in each file.
Typically like this:
const base = require('./base')

describe('module one', () => {
  beforeEach(base.commonStubs);

  it('should ...', () => {
    // ...
  });
})
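For completeness, here is a minimal sketch of what such a base file could export, using a sinon sandbox as suggested above (the alexa require path and property names are hypothetical):
// base.js - illustrative only
const sinon = require('sinon');
const alexa = require('./alexa'); // hypothetical path to the module whose send() is spied on

const sandbox = sinon.createSandbox();

function commonStubs() {
  // re-create the spy for every test through the shared sandbox
  this.alexaSpy = sandbox.spy(alexa, 'send');
}

function restoreStubs() {
  // undo everything wrapped through the sandbox, so nothing is left wrapped for the next test
  sandbox.restore();
}

module.exports = { commonStubs, restoreStubs };
Each test file would then also register afterEach(base.restoreStubs), so the spy is unwrapped before the next beforeEach runs.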