Running subsets of Jest tests in parallel - node.js

We're using Jest to power our Node.js tests, which interact with a Postgres database to test CRUD operations. We're currently passing the --runInBand CLI option to ensure our tests run serially; this works fine but is obviously slower than we'd like.
Now, from reading around (and previous experience) I've found it useful to be able to mark groups of tests as parallelisable. This is possible with nose in Python, but I cannot seem to find the syntax in Jest. Is this possible? Or is there another approach Jest advocates for speeding up database tests (or, more generally, state-constrained tests)?
Thanks,
Alex

Put your tests in separate files (in a new subfolder if you want to keep them organized). That way Jest runs the files in parallel.
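One way to make that file-per-suite split safe for Postgres-backed tests is to give each Jest worker its own database, keyed off the JEST_WORKER_ID environment variable that Jest sets for every worker process. Below is a minimal sketch of a setup file loaded via Jest's setupFilesAfterEnv option; the file name, the app_test_N database naming, and the use of the pg driver are my assumptions, not part of the answer above.

// tests/dbSetup.ts - hedged sketch; database layout and names are assumptions
import { Pool } from 'pg';

// Jest sets JEST_WORKER_ID to "1", "2", ... one value per worker process.
const workerId = process.env.JEST_WORKER_ID ?? '1';

// Each worker talks to its own database (app_test_1, app_test_2, ...),
// assumed to have been created and migrated ahead of time.
export const pool = new Pool({ database: `app_test_${workerId}` });

// Close the pool when the test file finishes so workers exit cleanly.
afterAll(async () => {
  await pool.end();
});

With something like this in place, each test file stays independent of the others, so Jest's default per-file parallelism works without --runInBand.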

Related

Is it possible to run some steps within one JUnit test in parallel with the failsafe library?

I don't want to run separate tests in parallel; I want to run steps within one test in parallel. For example, if three clusters need to be created, I want them to be created in parallel. I don't fully understand maven-failsafe, but I was wondering whether it will help me achieve this easily or whether I should use normal Java threads.

VSTest: Order the execution of test assemblies

Our codebase has more than 100 projects, each with tests. Some test assemblies take much longer to execute than others.
The Azure DevOps Server is running our whole test suite in parallel, which makes it really fast.
But the problem is that the long-running tests are started in the middle of the test run, which makes the whole test run longer than it needs to be.
Is there a way to influence the order in which the test assemblies are started? I want to start the long-running test assemblies first and the fast ones after that.
Since you are running the tests in parallel, you could try the Based on past running time of tests option in the Visual Studio Test task.
According to this doc about parallel testing:
This setting considers past running times to create slices of tests so that each slice has approximately the same running time. Short-running tests will be batched together, while long-running tests will be allocated to separate slices.
This option groups tests by running time, so each group finishes in roughly the same amount of time.
Hope this helps.
We have achieved this by arranging the project folders so that they sort with the longest-running test assemblies first. You can see the order in which VSTest finds the assemblies in the Azure DevOps output. From there you can rename folders to affect the order.
It would be nice if there were another way to achieve this.

Pytest running a certain test suite in a certain thread

The requirement is to run all the tests belonging to a suite in a certain thread.
For example, there are suite1, suite2, suite3 and so on.
I'd like all the tests belonging to suite1 to be executed in one thread (thread1), tests belonging to suite2 in another thread (thread2), and tests from suite3 in yet another thread (thread3).
As far as xdist is concerned, there are only two options:
- --dist=loadscope (by module)
- --dist=loadfile (by file name)
What I really need is something like a --dist=loaddirectory that would group all the tests in a certain directory (e.g. one group for each directory under the path containing all the test suites).
Of course I could launch pytest several times, each time passing a suite directory, but I'm afraid that would hurt performance, since there would then be several pytest instances running rather than several threads.
So could anybody please advise if you are familiar with something that can help tackle this requirement?
Any help, ideas, or even directions to dig in are much appreciated.

Speed up Meteor test refresh

When testing a Meteor app, I notice that Meteor repeats a lot of tasks, such as:
- Downloading missing packages (several seconds)
It seems that it would be more efficient to check for updated package.json and .meteor/versions before "downloading missing packages", especially when the only changes are within unit tests or other application-specific code (i.e. no new imports).
- Building web.cordova (which I do not use)
I am pretty sure that specifying the target for testing should be possible, so the project is only built for web.browser, for example.
- If there are errors, everything is built and executed three times (and fails three times).
When a test fails, why does it have to try again with the exact same code? Is there any use case where this makes sense?
Right now, every time a test module changes, it takes several seconds before the tests are run again because of all these tasks. Is there any way to optimize this and make it faster?

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying the specific individual test
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenarios are run individually, the system is in that state, so the scenario passes. But when you run all the scenarios, the scenarios that ran previously change this state, so it fails.
You should solve it by making your scenarios completely independent. The work done by one scenario shouldn't influence the results of other scenarios. This is strongly encouraged in The Cucumber Book and Specification by Example.
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that was sending a POST request. I had a bug in the JavaScript and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), so Cucumber reported every step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right. I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.
