I'm running a project that has several test suites (Jest), all passing locally.
I have a step in my CircleCI job that simply runs yarn test.
The problem is that the entire build hangs during the yarn test step. The tests all pass, but the step just hangs and times out after 10 minutes.
To make things more interesting: if I remove one test suite file from my tests folder, the step passes. I can remove any test file and it will still pass, which leads me to believe there is nothing wrong with the tests themselves. But if I remove one file and add a new file with just one dummy test, the step hangs again.
What am I overlooking?
Thanks!
Problem
I was not able to find the RIGHT solution, and it looks like no one has been. When the tests are run in sequence (or run alone) and a test fails, Jest can show a readable error message.
The problem happens when you run Jest in parallel mode (which is the default). That's why adding -w=2 helped: it reduced the parallelism to only 2 workers.
Jest probably has a bug that prevents it from reporting failing tests correctly when they run in parallel. So the workaround is to always use a configuration that forces serial execution.
Solution
You basically have these options:
npx jest --maxWorkers=1 # run tests serially
npx jest --runInBand # equivalent of the above
npx jest --detectOpenHandles # implies --runInBand, plus extra checks for open handles that keep the process alive
Performance
It'll slow the test run down a little, but in most cases the difference won't be big. In my case, it was about half a second:
Running with --detectOpenHandles:
Test Suites: 26 passed, 26 total
Tests: 145 passed, 145 total
Snapshots: 0 total
Time: 24.093 s
Running without --detectOpenHandles:
Test Suites: 26 passed, 26 total
Tests: 145 passed, 145 total
Snapshots: 0 total
Time: 23.602 s
Also, I found this article, in which the author also states he hadn't noticed a big performance issue, even with 600+ tests.
In my case I also had to use an earlier version of the circleci/node image, because the build kept failing no matter how many workers were enabled. Something with yarn and jest: https://github.com/facebook/jest/issues/5989.
Image that worked: circleci/node@sha256:6a10e853547cd5b7480ca27eac13f58505493cb375652dd432084fa07903fa7d
Related
I am using Cucumber 4.4.0 with parallel execution through cucumber.api.cli.Main, launched from mvn with --threads for the parallel run:
<mainClass>cucumber.api.cli.Main</mainClass>
<arguments>
<argument>--threads</argument>
<argument>5</argument>
</arguments>
I need to extend this to rerun the failed tests and report the result of the very last run when a rerun happens (say test1 fails the first time and passes the second time; the report should then show test1 as passed).
This should happen as part of a single build. Otherwise I have to do one mvn run to create the rerun.txt file, and then feed that rerun.txt into another mvn run in Jenkins.
I know of one library, https://github.com/prashant-ramcharan/courgette-jvm, which does all of the above in a single go (parallel run, rerun of the failed tests, report of the latest run result). I have used this library before as well.
The only problem is that during a parallel run, this library starts with, say, 5 threads and waits until all 5 threads finish. Only then does it start the next set of 5 threads, and so on, which increases the execution time of the test suite. For example: if test1 takes 1 minute and test5 takes 5 minutes, the threads that have already finished their tests still wait until test5 finishes. Only after that does the next set of 5 threads start.
But with cucumber.api.cli.Main --threads 5, the moment a thread finishes it picks up the next test, so the suite's execution time is shorter.
Is anyone using another library that does all of this, but with faster execution?
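The scheduling difference described above is easy to quantify. Here is a toy simulation (the durations and thread count are made up for illustration) comparing Courgette-style fixed batches with a shared work queue:

```javascript
// Hypothetical durations (minutes) for 10 feature files.
const durations = [1, 1, 1, 1, 5, 1, 1, 1, 1, 5];
const THREADS = 5;

// Batch mode (as described above): start 5, wait for the slowest,
// then start the next 5. Makespan = sum of per-batch maxima.
function batchMakespan(times, n) {
  let total = 0;
  for (let i = 0; i < times.length; i += n) {
    total += Math.max(...times.slice(i, i + n));
  }
  return total;
}

// Pool mode (cucumber --threads): each thread grabs the next test as
// soon as it is free. Greedy assignment to the least-loaded worker
// models this scheduling.
function poolMakespan(times, n) {
  const workers = new Array(n).fill(0);
  for (const t of times) {
    const idx = workers.indexOf(Math.min(...workers));
    workers[idx] += t;
  }
  return Math.max(...workers);
}

console.log(batchMakespan(durations, THREADS)); // 10
console.log(poolMakespan(durations, THREADS)); // 7
```

With these made-up numbers, batching takes 10 minutes while the shared queue takes 7, which matches the behaviour described in the question.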
In my Jest test suite I use
jest.retryTimes(4)
because of the particular and unstable architecture of the software, and this works as expected.
Some tests must pass on the first attempt, so for those particular tests I need to set
jest.retryTimes(1)
at the beginning of the test, restoring
jest.retryTimes(4)
at the end.
There are two problems:
The first is that this configuration is global and the tests are executed in parallel, so when this test starts, it sets the retry count to 1 for every test running at that moment. I would like the limit of 1 to apply only to this particular test.
The second is that jest-circus ignores the updates to jest.retryTimes at the beginning and end of the test; it keeps allowing 4 attempts before reporting the failure.
I have read the documentation, but I don't think I can achieve this. Any suggestions?
Thanks
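One thing that may help with the first problem: per the Jest docs, jest.retryTimes only works with the jest-circus runner (the default since Jest 27) and may be called either at the top level of a test file or inside a describe block, which should limit the setting to that block rather than changing it for the rest of the file. Also note that the argument counts retries, not attempts, so "must pass on the first try" would be retryTimes(0), not retryTimes(1). A sketch under those assumptions (criticalOperation is a hypothetical helper):

```javascript
// File-level default for the flaky parts of the suite.
jest.retryTimes(4);

describe('must pass on the first attempt', () => {
  // Applies to the tests in this block: 0 retries = a single attempt.
  jest.retryTimes(0);

  test('no retries here', () => {
    expect(criticalOperation()).toBe(true); // hypothetical helper
  });
});
```

Since each test file runs in its own worker, a setting changed in one file should not leak into files running in parallel; whether this fully addresses the second problem would need verifying against your Jest version.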
We are using CodeceptJS for e2e tests at my work. For every code submission, we run these tests (we use webpack-dev-server to mock the backend).
In the beginning, the time spent running these tests was acceptable. However, after a year we have around 900 tests (i.e. CodeceptJS Scenarios) and it takes around an hour to finish. Basically, after finishing a feature we add some tests, and for bugs we also add an e2e test. We realize this is not sustainable if we keep adding e2e tests, since the run takes too long. Do you have any suggestions for how to improve this (we are using CodeceptJS)?
I am thinking about running only the e2e tests for important features on each submission, and running the rest separately (maybe once per day). Thanks.
Our codebase has more than 100 projects, each with tests. Some test assemblies take more time to execute than others.
The Azure DevOps Server runs our whole test suite in parallel, which makes it really fast.
The problem is that the long-running tests are started in the middle of the test run, which makes the whole run take longer.
Is there a way to influence the order in which the test assemblies are started? I want to start the long-running test assemblies first and the fast ones after that.
Since you are running the tests in parallel, you could try the Based on past running time of tests option in the Visual Studio Test task.
According to this doc about parallel testing:
This setting considers past running times to create slices of tests so that each slice has approximately the same running time. Short-running tests will be batched together, while long-running tests will be allocated to separate slices.
This option groups tests by running time, so each group finishes in roughly the same time.
Hope this helps.
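The idea behind that option can be sketched as a greedy longest-first packing: sort assemblies by their historical duration, then repeatedly assign the next-longest one to the currently lightest slice. The assembly names and times below are invented for illustration:

```javascript
// Sketch of time-based test slicing (longest-processing-time-first greedy).
function sliceByPastRunningTime(assemblies, sliceCount) {
  const slices = Array.from({ length: sliceCount }, () => ({ total: 0, items: [] }));
  // Longest assemblies first, so they start at the beginning of the run...
  for (const a of [...assemblies].sort((x, y) => y.seconds - x.seconds)) {
    // ...and always go to the slice with the least work so far.
    const lightest = slices.reduce((min, s) => (s.total < min.total ? s : min));
    lightest.items.push(a.name);
    lightest.total += a.seconds;
  }
  return slices;
}

const slices = sliceByPastRunningTime(
  [
    { name: 'Slow.Tests', seconds: 300 },
    { name: 'Medium.Tests', seconds: 120 },
    { name: 'Fast.Tests.A', seconds: 60 },
    { name: 'Fast.Tests.B', seconds: 60 },
  ],
  2
);
console.log(slices.map((s) => s.total)); // [ 300, 240 ]
```

Note that this scheme also answers the ordering concern from the question: because the longest assemblies are placed first, they naturally start at the beginning of the run.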
We have achieved this by arranging the project folders so that they sort with the longest-running test assemblies first. You can see the order in which VSTest finds the assemblies in the Azure DevOps output; from there, you can rename folders to affect the order.
It would be nice if there were another way to influence this.
When testing a Meteor app, I notice that Meteor repeats a lot of tasks, such as:
Downloading missing packages (several seconds)
It seems it would be more efficient to check for an updated package.json and .meteor/versions before "downloading missing packages", especially when the only changes are within unit tests or other application-specific code (i.e. no new imports).
Building web.cordova (which I do not use)
I am pretty sure specifying the build target for testing should be possible, so the project is only built for web.browser, for example.
If there are errors, everything is built and executed three times (and fails three times).
When a test fails, why does it retry with the exact same code? Is there any use case where this makes sense?
Right now, every time a test module changes, it takes several seconds before the tests are run again because of all these tasks. Is there any way to optimize this and make it faster?