After reading through the docs, it seems there is no option to set maxWorkers within the configuration file; I need to specify it on the CLI.
Is there something that I'm missing here?
Using jest@26.1.0, npx jest --init generates a maxWorkers entry in jest.config.js. It's just not documented on the website.
// ...
module.exports = {
  // ...
  // The maximum amount of workers used to run your tests. Can be specified as % or a number.
  // E.g. maxWorkers: 10% will use 10% of your CPU amount + 1 as the maximum worker number.
  // maxWorkers: 2 will use a maximum of 2 workers.
  // maxWorkers: "50%",
  // ...
};
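For reference, here is a minimal sketch of the same file with the option actually set (the value is just an example):
// jest.config.js
module.exports = {
  // Use at most half of the available CPU cores;
  // a plain number such as maxWorkers: 2 also works.
  maxWorkers: '50%',
};
The equivalent CLI flag would be jest --maxWorkers=50%.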
Related
I have a WDIO project that has many tests. Some tests need to be run consecutively while other tests can run in parallel.
I cannot run all tests in parallel because the tests that need to be run consecutively will fail, and I cannot run all tests consecutively because it would take far too long for the execution to finish.
For these reasons I need to find a way to run these tests both consecutively and in parallel. Is it possible to configure this WDIO project to accomplish this?
I run these tests through SauceLabs and understand that I can set the maxInstances variable to as many VMs as I'd like to run in parallel. Is it possible to set a high maxInstances for certain tests while other tests have a maxInstances of 1?
Or perhaps there is a way to use logic based on the test directories to run certain tests in parallel and others consecutively?
For example, if I have these tests:
'./tests/parallel/one.js',
'./tests/parallel/two.js',
'./tests/consecutive/three.js',
'./tests/consecutive/four.js',
Could I create some logic such as:
if (spec.includes('/consecutive/')) {
  // Do not run until the other '/consecutive/' tests finish execution
} else {
  // Run in parallel
}
How can I configure this WDIO project to run tests both consecutively and in parallel? Thank you!
You could create 2 separate conf.js files.
//concurrent.conf.js
exports.config = {
  // ==================
  // Specify Test Files
  // ==================
  specs: [
    './test/concurrent/**/*.js'
  ],
  maxInstances: 1,
  // ...
};
and have one for parallel. To reduce duplication, create a shared conf.js and then simply override the appropriate values.
//parallel.conf.js
const { config } = require('./shared.conf');

config.specs = [
  './test/parallel/**/*.js'
];
config.maxInstances = 100;

exports.config = config;
And then when you run your tests you can do:
//parallel
wdio test/configs/parallel.conf.js
//concurrent
wdio test/configs/concurrent.conf.js
Here's an example of how to have a shared config file, with the other config files building on top of it:
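A minimal sketch of what that shared.conf.js could look like (the framework, baseUrl, and capability values below are assumptions purely for illustration):
//shared.conf.js
exports.config = {
  // Options common to both runs live here once; the per-mode
  // config files require this file and override what differs.
  framework: 'mocha',
  baseUrl: 'http://localhost:8080',
  waitforTimeout: 10000,
  capabilities: [{ browserName: 'chrome' }],
  // specs and maxInstances are intentionally left to
  // parallel.conf.js and concurrent.conf.js
};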
Is there a way to fail a jest test when it exceeds x number of seconds?
There is this property: https://jestjs.io/docs/configuration#slowtestthreshold-number, but that's only for reporting, right?
Global-level configuration:
Create a jest.config.js file and add the testTimeout option there; Jest will read the config file before executing.
Add the --testTimeout option when you are using the Jest CLI, e.g. jest --testTimeout 2000
File-level:
Use jest.setTimeout(timeout)
Set the default timeout interval for tests and before/after hooks in milliseconds. This only affects the test file from which this function is called.
Test case level:
Use test(name, fn, timeout)
The third argument (optional) is a timeout (in milliseconds) specifying how long to wait before aborting. Note: the default timeout is 5 seconds. A combined sketch of all three levels follows below.
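Putting the three levels together, a minimal sketch (the timeout values are arbitrary):
// jest.config.js (global level: applies to every test)
module.exports = {
  testTimeout: 15000,
};

// some.test.js (file level: applies to all tests in this file)
jest.setTimeout(30000);

// test-case level: the optional third argument overrides the above
test('slow endpoint responds', async () => {
  // ...call the slow API here...
}, 20000);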
According to the docs, one can increase the default async timeout from 5000 ms using the Jest object.
More specifically, by using jest.setTimeout(timeout).
The issue I am facing is that I am running a series of tests against an API that is very slow (5-15 second response times), and configuring this Jest object at the top of each test file is painfully annoying.
Is it possible to declare these settings once before all test files are run?
Jest offers a testTimeout configuration option you can add to your package.json:
"jest": {
"testTimeout": 15000,
}
OK, putting bits together:
Option "setupTestFrameworkScriptFile" was replaced by configuration "setupFilesAfterEnv", which supports multiple paths
https://jestjs.io/docs/en/jest-object#jestsettimeouttimeout
https://jestjs.io/docs/en/jest-object#jestdisableautomock
The Jest search box doesn't actually return anything when you search for: setupFilesAfterEnv
And the docs talk about setupTestFrameworkScriptFile (which also doesn't return anything in the search :/ )
Anyway, the docs leave you scratching your head but this works:
jest.config.js:
module.exports = {
  setupFilesAfterEnv: ['./setup.js'],
};
setup.js:
jest.setTimeout(10000); // in milliseconds
The jest folks should make it easier to find this information.
Use testTimeout. In your jest.config.js (or similar), add the following:
const SECONDS = 1000;

module.exports = {
  testTimeout: 60 * SECONDS,
};
If you are working with React and initializing your app using create-react-app, then under your src/ directory you should have a file named setupTests.js. Here you can set up a global timeout for all of your tests just by inserting this line after the import statement for @testing-library:
jest.setTimeout(15000); // in milliseconds
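For reference, a sketch of what src/setupTests.js could look like afterwards (the import shown is the one a recent create-react-app template generates; the timeout value is just an example):
// src/setupTests.js
import '@testing-library/jest-dom';

jest.setTimeout(15000); // in milliseconds, applies to every test file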
When running a test locally it succeeds, but when configured against a remote grid it fails with:
1) Scenario: Login - features/api.feature:10
Step: When he enters his credentials - features/api.feature:13
Step Definition: node_modules/serenity-js/src/serenity-cucumber/webdriver_synchroniser.ts:46
Message:
function timed out after 5000 milliseconds
How can I increase the timeout value?
Thanks & Ciao
Stefan
Hi Stefan and thanks for giving Serenity/JS a try!
You have a couple of options here, depending on what is timing out.
As it's Protractor that's in charge of the timeouts, you'll need to look into your protractor.conf.js file.
Let's assume that your protractor.conf.js file looks more or less like the snippet below. I omit the Serenity/JS and Cucumber.js config for brevity as they're described at serenity-js.org:
exports.config = {
  baseUrl: 'http://your.webapp.com',

  // Serenity/JS config
  framework: ...

  specs: [ 'features/**/*.feature' ],

  cucumberOpts: {
    // ...
  },
};
0. Increasing the overall timeout
To start with, you might want to increase the overall timeout of all the tests (for Protractor 5.0.0 the default value is set to 11s).
To do this, add the allScriptsTimeout entry to your config:
exports.config = {
  allScriptsTimeout: <appropriate_timeout_in_millis>,
  // ... rest of the config file
}
1. Loading the page
If the webapp under test is slow to load, you can tweak the getPageTimeout property (default set to 10s):
exports.config = {
  getPageTimeout: <appropriate_timeout_in_millis>,
  // ... rest of the config file
}
2. A specific Cucumber step
If a specific Cucumber step is timing out (which is most likely the case here, as Cucumber.js sets the default value of the cucumber step timeout to 5s), you can increase the timeout by changing the step definition (value in millis):
this.When(/^he enters his credentials$/, { timeout: 10 * 1000 }, () => {
  return stage.theActorInTheSpotlight().attemptsTo(
    Login.withTheirCredentials()
  );
});
Please note that in the above answer I'm assuming that you're using Serenity/JS with Cucumber to test an Angular app. If you're using a different web framework (like React), the test might also time out when Protractor is waiting for Angular to load.
If this describes your scenario, you might want to ignoreSynchronization:
exports.config = {
  onPrepare: function() {
    browser.ignoreSynchronization = true; // stop waiting for Angular
  },
  // ... rest of the config file
}
To find out more, check out the Protractor documentation and the already mentioned Cucumber docs. I'll also add an article on serenity-js.org shortly to describe the different options so everything is in one place :-)
Hope this helps!
Jan
I'm building an application using Sails, and every time I leave the server running for more than a few minutes my CPU jumps to a solid 100% usage. I'm including a large number of LESS files in my assets and I believe my issue lies there. Are there any other reasons this may happen?
It could be grunt-watch; when you have a lot of files it squeezes your CPU. Try disabling it and check whether your CPU returns to normal usage (6-30% depending on your CPU and overall load).
To do that, go to tasks/register/default.js and remove 'watch' from the task array shown below.
module.exports = function (grunt) {
  grunt.registerTask('default', ['compileAssets', 'linkAssets', 'watch']);
};
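For clarity, here is a sketch of tasks/register/default.js after 'watch' has been removed:
// tasks/register/default.js with 'watch' dropped, so grunt-watch
// no longer runs as part of the default task
module.exports = function (grunt) {
  grunt.registerTask('default', ['compileAssets', 'linkAssets']);
};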
If you don't want to completely disable the grunt watcher, then go to tasks/config/watch.js and try excluding the folder that has most of your files, or exclude them all if they are not in a particular folder.
I'll give you an example of how to exclude a folder for this task. Just add a ! before the path you want to exclude.
module.exports = function(grunt) {
  grunt.config.set('watch', {
    // Some config you can ignore in this case
    assets: {
      // Assets to watch:
      files: [
        'assets/**/*',
        'tasks/pipeline.js',
        '!**/node_modules/**',
        '!assets/folder-to-exclude/**' // <-- HERE IS THE EXCLUDED PATH
      ],
      // More code
    }
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
};
I had a similar issue and this worked for me, let me know if it works.