We are using CodeceptJS for e2e tests at my work. For every code submission we run these tests (we use webpack-dev-server to mock the backend).
In the beginning, the time spent running these tests was acceptable. However, after a year we have around 900 tests (i.e. CodeceptJS Scenarios) and a run takes around an hour to finish. Basically, after finishing a feature we add some tests, and for every bug we also add an e2e test. We realize this is not sustainable if we keep adding more e2e tests, since the suite takes too long to run. Do you have any suggestions for how I can improve this (we are using CodeceptJS)?
I am thinking about running only the e2e tests for important features on each submission; the rest would run separately (maybe once per day). Thanks.
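In case it helps to sketch that idea: CodeceptJS lets you tag Scenarios and filter runs with --grep, so the per-submission job can run a tagged smoke subset while a nightly job runs everything. A minimal sketch (test names and selectors are hypothetical):

// login_test.js — tag the critical Scenario so it lands in the smoke subset
Feature('login');

Scenario('user can log in @smoke', ({ I }) => {
  I.amOnPage('/login');
  I.fillField('email', 'user@example.com');
  I.fillField('password', 'secret');
  I.click('Sign in');
  I.see('Welcome');
});

// Per submission, run only the tagged subset:
//   npx codeceptjs run --grep "@smoke"
// Nightly, run the full suite, optionally across parallel workers:
//   npx codeceptjs run-workers 4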
In my Jest test suite I use jest.retryTimes(4) because of the particular and unstable architecture of the software, and this works as expected. However, some tests must pass on the first attempt, so for those particular tests I need to set jest.retryTimes(1) at the beginning of the test and restore jest.retryTimes(4) at the end.
There are two problems:
1. This configuration is global, and tests are executed in parallel, so when such a test starts, it sets the retry count to 1 for all tests running at that moment. I would like the stricter setting to apply only to this particular test.
2. Jest Circus ignores the updates to jest.retryTimes at the beginning and end of the test; it keeps allowing 4 attempts before reporting the failure.
I have read the documentation, but I don't think I can achieve this. Any suggestions?
Thanks
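For reference, a hedged workaround based on the documented scoping of jest.retryTimes (it must be declared at the top level of a test file or in a describe block, and each test file runs in its own worker): move the must-pass-first-time tests into their own file, so the retry setting never has to change mid-run.

// strict.test.js — hedged sketch: isolate tests that must pass on the
// first attempt, and disable retries for this whole file.
jest.retryTimes(0); // no retries for anything defined in this file

test('must pass on the first attempt', () => {
  expect(1 + 1).toBe(2); // stand-in for the real assertion
});

The flaky tests stay in their own files with jest.retryTimes(4) at the top, and neither file's setting affects the other.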
Our codebase has more than 100 projects, each with its own tests. Some test assemblies take more time to execute than others.
The Azure DevOps Server runs our whole test suite in parallel, which makes it really fast.
The problem is that the long-running tests are started in the middle of the test run, which makes the whole run take longer.
Is there a way to influence the order in which the test assemblies are started? I want to start the long-running test assemblies first, followed by the fast ones.
Since you are running the tests in parallel, you could try the "Based on past running time of tests" option in the Visual Studio Test task.
According to the documentation on parallel testing:
This setting considers past running times to create slices of tests so that each slice has approximately the same running time. Short-running tests will be batched together, while long-running tests will be allocated to separate slices.
This option lets tests run in groups based on running time, so each group completes in roughly the same time.
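If you are using YAML pipelines, this setting maps, as far as I know, to the distributionBatchType input of the VSTest@2 task. A hedged sketch (the assembly pattern is an assumption about your layout):

# VSTest@2 step slicing tests by historical running time
- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    distributionBatchType: 'basedOnExecutionTime'  # = "Based on past running time of tests"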
Hope this helps.
We have achieved this by arranging the project folders so that they sort with the longest-running test assemblies first. You can see the order in which VSTest finds the assemblies in the Azure DevOps output; from there you can rename folders to affect the order.
It would be nice if there were another way to achieve this.
We have an existing application whose tests are written in Cypress. We now want to integrate Cucumber-style feature files that run internally via Cypress, so we used cypress-cucumber-preprocessor and followed the steps on its GitHub page. The problem I'm facing now is that while running tests, it shows both scenarios but runs only one. It shows a green tick mark next to the first, but never starts the second one, and the clock keeps on ticking. On clicking the second scenario in the Cypress launcher, it says "no commands were issued in this test".
What I have tried:
I duplicated the same scenario twice in the same feature file. It still runs only the first one and does not move on to the next.
I moved the two scenarios into two different feature files. It runs both of them successfully.
I ran the example repo (cypress-cucumber-example) locally with any number of scenarios. That works seamlessly.
Some observations:
While the first test was running, I opened the Chrome console and saw some errors from failing network calls. But those same calls failed (with the same errors) even before I integrated Cucumber, when I was using plain Cypress, and all tests passed then. Is it because of some magic Cucumber brings along with it? I read somewhere that Cucumber's default wait for a test is 60 seconds; I waited up to 170 seconds and then stopped the suite. In the end, all I get is one scenario green and the other not even started.
It took me quite a long time, but I finally figured out the issue: I had pressed Enter right after "Feature:" in my feature file. The IDE didn't flag it as a problem, and everything else looked fine. Comparing successful runs against this one, I noticed the feature name was not appearing in the UI, so I removed the \n. It works like a charm now. Amazing what a single Enter key can do.
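For anyone hitting the same symptom, this is roughly what the broken file looked like (the feature name here is hypothetical):

# Broken — the line break right after the keyword leaves the feature unnamed:
Feature:
Login flows

# Fixed — keep the name on the same line as the keyword:
Feature: Login flows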
When testing a Meteor app, I notice that Meteor repeats a lot of tasks, such as:
Downloading missing packages (several seconds)
It seems it would be more efficient to check for an updated package.json and .meteor/versions before "downloading missing packages", especially when the only changes are within unit tests or other application-specific code (i.e. no new imports).
Building web.cordova (which I do not use)
I am pretty sure that specifying the target for testing is possible, so the project is only built for web.browser, for example.
If there are errors, everything is built and executed three times (and fails three times).
When a test fails, why does it have to try again with the exact same code? Is there any use case where this makes sense?
Right now, every time a test module changes, it takes several seconds before the tests run again because of all these tasks. Is there any way to optimize this and make it more efficient and faster?
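For what it's worth, two hedged starting points; availability depends on your Meteor release, so check meteor help test first (the driver package name is also an assumption about your setup):

# If a mobile platform was added at some point, removing unused platforms
# should avoid the web.cordova build step entirely:
meteor remove-platform ios android

# Newer Meteor releases expose an --exclude-archs flag; if yours does,
# you can skip architectures you never test against:
meteor test --driver-package meteortesting:mocha --exclude-archs web.cordova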
Pretty new to Mocha and testing here, so I'm hoping somebody can help me out or point me in the right direction.
What I am trying to do is have a Mocha test run every 5 minutes for an entire day. However, I don't want to type the command manually each time, and I'm hoping I can write some code to carry this out.
So far, I have tried adjusting this.timeout in the Mocha test, setting JavaScript intervals (setInterval(function(){}, time)), and using while loops with setTimeout.
Is it possible to set an interval like this in Mocha?
Or is there some other way, say through the command line, to execute the Mocha test every 5 minutes?
Thank you for your advice and expertise!
Cheers
This really sounds like a task that should be managed by a Continuous Integration server such as Jenkins.
If you install Jenkins, you can create a new job that runs the tests you want on a given interval, e.g. every 5 minutes. Better yet, you can connect the job to your source code repository and run it on every code push.
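If a full CI server is more than you need for a one-day experiment, a hedged minimal alternative is a wrapper script outside Mocha (or a cron entry) that re-runs the suite on an interval; driving the loop from outside Mocha avoids fighting its timeouts:

// run-every-5-min.js — a sketch; assumes mocha is installed in the project
const { execSync } = require('child_process');

const runOnce = () => {
  try {
    execSync('npx mocha', { stdio: 'inherit' }); // run the whole suite
  } catch (err) {
    console.error('Test run failed; trying again in 5 minutes.');
  }
};

runOnce();
setInterval(runOnce, 5 * 60 * 1000); // repeat every 5 minutes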