We are using Cucumber in our Golang project, but I find that it always takes several seconds to minutes for GoLand to compile these *.feature files. Is there any way to improve the execution efficiency?
Related
I have an automation package (pytest-based) with the following structure:
tests\
    test_1.py
    test_2.py
    test_3.py
Currently all 3 tests are executed sequentially, but it takes a lot of time to execute them.
I've read about pytest-xdist, but I don't see in its documentation where I can specify which scripts can be run in parallel through its invocation.
We are using CodeceptJS for e2e tests at my work. For every code submission we run these tests (we use webpack-dev-server to mock the backend).
In the beginning, the time spent running these tests was acceptable. However, after a year we have around 900 tests (i.e. CodeceptJS Scenarios) and it takes around an hour to finish. Basically, after finishing a feature we add some tests, and for bugs we also add an e2e test. We realize this is not sustainable if we keep adding more e2e tests, since the suite takes too long to run. Do you have any suggestions for how I can improve this (we are using CodeceptJS)?
I am thinking about running only the e2e tests for important features on each submission; the rest would run separately (maybe once per day). Thanks.
My organization uses a Selenium automation framework built with JBehave, Thucydides, and Maven. The framework consists of 1500+ tests, but not all of them need to be executed every time. Whenever we execute a small batch (say 10 scripts or so), all 1500+ scripts are loaded and then filtered on "Meta" tags (passed at execution time) to select the 10 scripts to run. This drives up the overall execution time: the actual script execution takes only 10 minutes, while loading and filtering the scripts takes 15+ minutes. Below is a snapshot of the Maven POM used to trigger the multi-threaded execution: [screenshot of the Maven POM]. Could you please advise what changes are required so that only the required 10 scripts are loaded rather than all 1500+?
I'm trying to run cucumber scenarios in parallel from inside my gem. From other answers, I've found I can execute cucumber scenarios with the following:
runtime = Cucumber::Runtime.new
runtime.load_programming_language('rb')
result = Cucumber::Cli::Main.new(['features\my_feature:20']).execute!(runtime)
The above code works fine when I run one scenario at a time, but when I run them in parallel using something like Celluloid or Peach, I get Ambiguous Step errors. It seems like my step definitions are being loaded once per parallel test, and Cucumber thinks I have multiple step definitions of the same kind.
Any ideas how I can run these things in parallel?
Cucumber is not thread-safe. Each scenario must be run in a separate thread with its own Cucumber runtime. Celluloid may try to run multiple scenarios on the same actor at the same time.
There is a project called cukeforker that can run scenarios in parallel, but it only supports MRI on Linux and OS X. It forks a subprocess per scenario.
I've created a fork of cukeforker called jcukeforker that supports both MRI and JRuby on Linux. jcukeforker distributes scenarios to subprocesses, and the subprocesses are reused. Subprocesses are used instead of threads to guarantee that each test has its own global variables. This is important when running the subprocess on a VNC server, which requires the DISPLAY variable to be set.
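If you just want the shape of the approach without pulling in a forking library, here is a minimal sketch of the same idea, assuming MRI on a Unix-like system: fork one subprocess per feature file so each run gets its own Cucumber runtime and its own copy of global state (such as DISPLAY). This is not the cukeforker/jcukeforker API, only an illustration; the feature glob and the cucumber command line are assumptions about your layout.

    # One subprocess per feature: each child has its own Cucumber runtime
    # and its own globals, so step definitions are only loaded once per run.
    features = Dir['features/**/*.feature']

    pids = features.map do |feature|
      Process.fork do
        # Shelling out keeps the child fully isolated from the parent process.
        exit(system('cucumber', feature) ? 0 : 1)
      end
    end

    statuses = pids.map { |pid| Process.wait2(pid).last }
    abort('Some features failed') unless statuses.all?(&:success?)

In practice you would cap the number of concurrent children with a small worker pool (which is what cukeforker/jcukeforker do); this sketch forks everything at once only to keep the isolation idea visible.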
I have simple Watir tests.
Each test is self-contained, with no shared state or dependencies of any kind. Each test opens and closes the browser.
Is it possible to run the tests in parallel to reduce the total run time?
Even running only 2 or 3 tests in parallel could reduce the time dramatically.
Take a look at the parallel_tests Ruby gem. Depending on your setup, running the tests in parallel could be as simple as this:
parallel_cucumber features/
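If the gem isn't already in the project, a minimal setup could look like the following; the Gemfile group is an assumption, adjust it to your project's conventions.

    # Gemfile: installs the parallel_test / parallel_cucumber runner binaries
    gem 'parallel_tests', group: [:development, :test]

By default parallel_tests starts one process per CPU core; if you only want the 2 or 3 parallel runs mentioned in the question, the -n option (e.g. parallel_cucumber -n 3 features/) should cap it, as far as I recall from its documentation.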