Parallel execution of Pytest Scripts - python-3.x

I have an automation package (Pytest Based) that has the following structure:
tests\
    test_1.py
    test_2.py
    test_3.py
Currently all 3 tests are executed sequentially, but it takes a lot of time to execute them.
I've read about pytest-xdist, but I don't see in its documentation where I can specify which scripts should be run in parallel through its invocation.
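For reference, a minimal pytest-xdist invocation for this layout might look like the following sketch (-n sets the number of worker processes and --dist loadfile keeps each file's tests on a single worker; the worker count of 3 is just an example):

pip install pytest-xdist

# spread all tests under tests/ across 3 worker processes
pytest -n 3 tests/

# run the three files in parallel, keeping each file's tests
# on the same worker
pytest -n 3 --dist loadfile tests/test_1.py tests/test_2.py tests/test_3.py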

Related

In AzureML, start_logging will start asynchronous execution or synchronous execution?

The Microsoft AzureML documentation says: "A run represents a single trial of an experiment. Runs are used to monitor the asynchronous execution of a trial" and "A Run object is also created when you submit or start_logging with the Experiment class."
As far as I know, when we start a run with start_logging, we have to stop it with the complete method once the run is finished. That suggests start_logging is a synchronous way of creating a run. However, the Run object created by start_logging is supposed to monitor the asynchronous execution of a trial.
Can anyone clarify whether start_logging will start asynchronous execution or synchronous execution?
start_logging is considered asynchronous execution, because it creates interactive run sessions. Within a specific experiment there can be multiple interactive sessions working in parallel; there is no scenario in which they must run sequentially.
Individual operations can be performed and identified through parameters such as args and kwargs.
When start_logging is called, an interactive run is created, for example from a Jupyter notebook session. All the metrics and components created while that interactive run is active are captured by it. When an output directory is specified for an interactive run, the output folder is resolved seamlessly based on the args values.
The following code block demonstrates the use of start_logging:
from azureml.core import Experiment

# your_workspace is assumed to be an existing azureml.core Workspace object
experiment = Experiment(your_workspace, "your_experiment_name")
run = experiment.start_logging(outputs=None, snapshot_directory=".", display_name="test")
...
run.log("Accuracy_Value", accuracy)  # record a metric against the interactive run
run.complete()                       # mark the run as finished
The code below shows the basic signature of start_logging:
start_logging(*args, **kwargs)
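To illustrate the claim about parallel interactive sessions, here is a minimal sketch (the run names and metric values are made up): two Run objects obtained from the same Experiment can be active and logging at the same time.

# two interactive runs created from the same experiment object as above
run_a = experiment.start_logging(display_name="trial_a")
run_b = experiment.start_logging(display_name="trial_b")
run_a.log("Accuracy_Value", 0.91)  # each run records its own metrics independently
run_b.log("Accuracy_Value", 0.88)
run_a.complete()
run_b.complete()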

Cucumber jvm library needed for parallel run with rerun failed test and collect the latest result

I am using Cucumber 4.4.0 with parallel runs through cucumber.api.cli.Main, invoked from Maven with --threads:
<mainClass>cucumber.api.cli.Main</mainClass>
<arguments>
  <argument>--threads</argument>
  <argument>5</argument>
</arguments>
I need to extend this to rerun the failed tests and to collect the report of the very last run when a rerun happens (say test1 fails the first time and passes the second time; the report should then show test1 as passed).
This should be done as part of a single build.
Otherwise I have to do one mvn run to create the rerun.txt file, and then feed that rerun.txt into a second mvn run in Jenkins.
I know of one library, https://github.com/prashant-ramcharan/courgette-jvm, which does all of the above in a single go (parallel run, rerun of the failed tests, report of the latest run result). I have used this library before as well.
The only problem is that during a parallel run this library executes tests in fixed batches: say it starts with 5 threads, it waits until all 5 threads finish, and only then starts another set of 5 threads, which increases the execution time of the test suite. For example, if test1 takes 1 minute and test5 takes 5 minutes, the threads that have already finished still wait until test5 finishes; only after that does the next set of 5 threads start.
With cucumber.api.cli.Main --threads 5, by contrast, the moment a thread finishes it picks up the next test, so the suite executes more quickly.
Is anyone using another library that does all of this, but with the faster execution?
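To illustrate the scheduling difference described above, here is a small language-independent sketch in Python (the test names and durations are made up): the fixed-batch scheduler waits for the slowest test in every batch, while the work-queue scheduler hands the next pending test to whichever worker frees up first.

from concurrent.futures import ThreadPoolExecutor
import time

# hypothetical tests with their durations in seconds
tests = [("test1", 1), ("test2", 1), ("test3", 1), ("test4", 1),
         ("test5", 5), ("test6", 1), ("test7", 1), ("test8", 1)]

def run_test(item):
    name, duration = item
    time.sleep(duration)  # stand-in for actually executing the test
    return name

# fixed-batch model (the courgette-jvm behaviour described above):
# run tests in groups of n and wait for the whole group to finish
def run_in_batches(items, n):
    for i in range(0, len(items), n):
        with ThreadPoolExecutor(max_workers=n) as pool:
            list(pool.map(run_test, items[i:i + n]))

# work-queue model (the cucumber --threads behaviour described above):
# a free worker immediately picks up the next pending test
def run_with_queue(items, n):
    with ThreadPoolExecutor(max_workers=n) as pool:
        list(pool.map(run_test, items))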

How do I run multiple behave+python tests simultaneously without errors?

I have a python web application that uses behave for behavioral testing. I have 5 *.feature files that each take a few minutes to run, both locally and on our Jenkins build server. I would like to run the five files in parallel rather than sequentially to save time. I can do this locally, but not on my build server. Here are the details:
Locally (runs on Windows):
I can run all 5 files in separate command windows using these commands:
behave.exe --include "file_01.feature"
behave.exe --include "file_02.feature"
behave.exe --include "file_03.feature"
behave.exe --include "file_04.feature"
behave.exe --include "file_05.feature"
I can also run a python script that spins off 5 separate processes using the same commands.
Both of these approaches work; I have no problems locally.
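For reference, a minimal sketch of such a launcher script (assuming behave is on the PATH; the feature file names come from the commands above):

import subprocess

features = ["file_01.feature", "file_02.feature", "file_03.feature",
            "file_04.feature", "file_05.feature"]

# start one behave process per feature file, then wait for all of them
procs = [subprocess.Popen(["behave", "--include", feature]) for feature in features]
for proc in procs:
    proc.wait()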
Build server (runs on Linux):
When I try to run all five files using a similar command, some of the behave scenarios give me errors. The errors are one of these three:
Message: unknown error: cannot determine loading status from disconnected: Unable to receive message from renderer
Message: chrome not reachable
Message: no such session
The behave scenarios that throw these errors seem to change with every test run.
Oddly, if I rearrange the 5 *.feature files into 3, it works. This is not an ideal solution though. Our application is growing. We'll have more feature files as it grows.
I suspect that there is some shared resource between the chrome drivers in the running behave tests, but I'm not sure. I can't explain why this works for me locally, but not on my build server. Nor can I explain why 3 files work, but not 5.
Has anyone seen errors like this when trying to run multiple behave tests simultaneously? Or do you know what I should be looking for? My project is big enough that it'd be difficult to put together a minimal example of my problem. That's why I haven't posted any code. I'm just wondering what I should be looking for, because I'm at a loss.
This is how I run multiple features in parallel.
import os
import threading

from behave import step
from behave.__main__ import main as behave_main

import testenv  # project-specific module that provides PARALLEACTIONS_PATH

@step(u'run in parallel "{feature}" "{scenario}"')
def step_impl(context, feature, scenario):
    t = threading.Thread(
        name='run test parallel',
        target=parallel_executor,
        args=[context, feature, scenario])
        # args=[context, 'parallel_actions.feature', 'Make Cab-Cab communication']
    t.start()

def parallel_executor(context, feature_name, scenario):
    os.chdir(testenv.PARALLEACTIONS_PATH)
    behave_main('-i "{}" -n "{}" --no-capture --no-skipped'.format(feature_name, scenario))
And the feature file:
Feature: testing parallel

  Scenario: parallel run
    When run in parallel "parallel_actions-1.feature" "Make Cab-Cab communication"
    And run in parallel "parallel_actions-1.feature" "Another Scenario"
    And run in parallel "another_parallel.feature" "Another Scenario 2"
I just create a new thread and call the behave runner directly, so you don't need to call the behave.exe process 5 times separately, only once. All the features execute at the same time, in parallel.
I can't answer your error messages, but you can try this alternative, more behave-like approach to running behave features in parallel.
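On the ChromeDriver errors in the question: one common cause of such failures in parallel runs (an assumption here, not something the question confirms) is concurrent Chrome instances colliding on a shared profile directory. A minimal Selenium sketch that gives each browser instance its own profile:

import tempfile

from selenium import webdriver

options = webdriver.ChromeOptions()
# give every parallel browser instance its own throwaway profile directory
options.add_argument("--user-data-dir={}".format(tempfile.mkdtemp()))
driver = webdriver.Chrome(options=options)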

Running Cukes in Parallel with JRuby

I'm trying to run cucumber scenarios in parallel from inside my gem. From other answers, I've found I can execute cucumber scenarios with the following:
runtime = Cucumber::Runtime.new
runtime.load_programming_language('rb')
result = Cucumber::Cli::Main.new(['features/my_feature:20']).execute!(runtime)
The above code works fine when I run one scenario at a time, but when I run scenarios in parallel using something like Celluloid or Peach, I get Ambiguous Step errors. It seems my step definitions are being loaded once per parallel test, and cucumber thinks I have multiple step definitions of the same kind.
Any ideas how I can run these things in parallel?
Cucumber is not thread safe. Each scenario must be run in a separate thread with its own cucumber runtime. Celluloid may try to run multiple scenarios on the same actor at the same time.
There is a project called cukeforker that can run scenarios in parallel, but it only supports MRI on Linux and OS X. It forks a subprocess per scenario.
I've created a fork of cukeforker called jcukeforker that supports both MRI and JRuby on Linux. Jcukeforker distributes scenarios to subprocesses, and the subprocesses are reused. Subprocesses are used instead of threads to guarantee that each test has its own global variables. This is important when running the subprocess on a vncserver, which requires the DISPLAY variable to be set.
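To illustrate the subprocess-per-test idea in a language-independent way, here is a small Python sketch (the scenario locations and display numbers are made up): each test runs in its own OS process with its own copy of the environment, so per-process settings such as DISPLAY cannot interfere with one another.

import os
import subprocess

# hypothetical scenario locations
scenarios = ["features/a.feature:12", "features/b.feature:7"]

procs = []
for i, scenario in enumerate(scenarios):
    env = os.environ.copy()
    env["DISPLAY"] = ":{}".format(10 + i)  # each subprocess gets its own X display
    procs.append(subprocess.Popen(["cucumber", scenario], env=env))

for proc in procs:
    proc.wait()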

Is it possible to run Watir test in parallel?

I have simple Watir tests.
Each test is self-contained, with no shared state or dependencies of any kind. Each test opens and closes the browser.
Is it possible to run the tests in parallel to reduce the total run time?
Even only 2 or 3 tests in parallel can reduce the time dramatically.
Take a look at the parallel_tests Ruby gem. Depending on your setup, running the tests in parallel could be as simple as this:
parallel_cucumber features/
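By default the parallel_tests gem runs one process per CPU core; the process count can also be set explicitly with its -n option, e.g. parallel_cucumber -n 4 features/.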
