How do I run multiple behave+python tests simultaneously without errors? - linux

I have a python web application that uses behave for behavioral testing. I have 5 *.feature files that each take a few minutes to run, both locally and on our Jenkins build server. I would like to run the five files in parallel rather than sequentially to save time. I can do this locally, but not on my build server. Here are the details:
Running locally on Windows:
I can run all 5 files in separate command windows using these commands:
behave.exe --include "file_01.feature"
behave.exe --include "file_02.feature"
behave.exe --include "file_03.feature"
behave.exe --include "file_04.feature"
behave.exe --include "file_05.feature"
I can also run a python script that spins off 5 separate processes using the same commands; roughly, that launcher looks like the sketch below.
Both of these approaches work; I have no problems.
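A simplified version of that launcher (same behave invocation as above):
import subprocess

# Spin off one behave process per feature file, then wait for all of them.
features = ["file_01.feature", "file_02.feature", "file_03.feature",
            "file_04.feature", "file_05.feature"]

procs = [subprocess.Popen(["behave", "--include", feature]) for feature in features]
for proc in procs:
    proc.wait()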
Running on the Linux build server:
When I try to run all five files using similar commands, some of the behave scenarios give me errors. The errors are one of these three:
Message: unknown error: cannot determine loading status from disconnected: Unable to receive message from renderer
Message: chrome not reachable
Message: no such session
The behave scenarios that throw these errors seem to change with every test run.
Oddly, if I rearrange the 5 *.feature files into 3, it works. This is not an ideal solution though. Our application is growing. We'll have more feature files as it grows.
I suspect that there is some shared resource between the chrome drivers in the running behave tests, but I'm not sure. I can't explain why this works for me locally, but not on my build server. Nor can I explain why 3 files work, but not 5.
Has anyone seen errors like this when trying to run multiple behave tests simultaneously? Or do you know what I should be looking for? My project is big enough that it'd be difficult to put together a minimal example of my problem, which is why I haven't posted any of the application code. I'm just wondering what I should be looking for, because I'm at a loss.

This is how I run multiple features in parallel.
import os
import threading

import testenv  # project settings module that holds the features directory path
from behave import step
from behave.__main__ import main as behave_main

@step(u'run in parallel "{feature}" "{scenario}"')
def step_impl(context, feature, scenario):
    t = threading.Thread(
        name='run test parallel',
        target=parallel_executor,
        args=[context, feature, scenario])
        # e.g. args=[context, 'parallel_actions.feature', 'Make Cab-Cab communication']
    t.start()

def parallel_executor(context, feature_name, scenario):
    os.chdir(testenv.PARALLEACTIONS_PATH)
    behave_main('-i "{}" -n "{}" --no-capture --no-skipped'.format(feature_name, scenario))
And the feature file:
Feature: testing parallel

  Scenario: parallel run
    When run in parallel "parallel_actions-1.feature" "Make Cab-Cab communication"
    And run in parallel "parallel_actions-1.feature" "Another Scenario"
    And run in parallel "another_parallel.feature" "Another Scenario 2"
I just create a new thread and call the behave runner directly, so you don't need to launch the behave.exe process 5 separate times, only once. All the features are executed at the same time, in parallel.
I can't answer your error messages, but you can try another approach (more the behave way) to run behave features in parallel.
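If threads end up sharing too much state in your setup, the same idea also works with one OS process per feature file. A minimal sketch (untested against your project; the feature names are placeholders):
import multiprocessing
from behave.__main__ import main as behave_main

# Run each feature file in its own process so the runs share no interpreter state.
# The feature names below are placeholders.
FEATURES = ["file_01.feature", "file_02.feature"]

def run_feature(feature):
    behave_main('-i "{}" --no-capture'.format(feature))

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=run_feature, args=(f,)) for f in FEATURES]
    for w in workers:
        w.start()
    for w in workers:
        w.join()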

Related

Best way to program flow through a job loop

I see that Origen supports passing jobs to the program command in this video. What would be the preferred method to run the program command in a job loop (i.e. job == 'ws', then job == 'ft', etc.)?
Thanks
The job is a runtime concept, not a compile/generate time concept, so it doesn't really make sense to run the program command (i.e. generate the program) against different settings of job.
Origen doesn't currently provide any mechanism to pass define-type arguments through to the program generator from the command line, though you could implement that in your app easily enough by overriding the program command - i.e. capture and store them somewhere in your app and then continue with the regular command.
The 'Origen way' of doing things like this is to set up different target files with different variables set within them, then execute the program command for the different targets.
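Purely as an illustration, a small wrapper script could drive that loop from the outside; the flow file name and the -t target switch below are assumptions on my part, so check origen p -h for the exact options in your app:
import subprocess

# Hypothetical wrapper: regenerate the test program once per target file.
# The flow path and the "-t" option are assumptions; verify against your Origen version.
targets = ["ws_target.rb", "ft_target.rb"]

for target in targets:
    subprocess.run(["origen", "p", "program/prog_flow.rb", "-t", target], check=True)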

How to run parallel fork as single thread in perl?

I was trying to check response messages in a Perl program that takes requests through the Amazon API and returns responses. How do I run a parallel fork as a single thread in Perl? I'm using the LWP::UserAgent module and I want to debug the HTTP requests.
As a word of warning - threads and forks are different things in perl. Very different.
However the long and short of it is - you can't, at least not trivially - a fork is a separate process. It actually happens when you run -any- external command in perl, it's just by default perl sits and waits for that command to finish and return output.
However, if you've got access to the code, you can amend it to run single threaded - sometimes that's as simple as reducing the parallelism with a config parameter. (In fact quite often - debugging parallel code is a much more complicated task than sequential, so getting it working before running parallel is really important.)
You might be able to embed a waitpid into your primary code so you've only got one thing running at once. Without a code example though, it's impossible to say for sure.
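To make the waitpid idea concrete (sketched in Python only because there's no code in the question to adapt; Perl's fork and waitpid builtins behave the same way):
import os

# Serialize the work: fork a child, then wait for it before starting the next,
# so only one child ever runs at a time. The job list is a placeholder.
jobs = ["job-a", "job-b", "job-c"]

for job in jobs:
    pid = os.fork()
    if pid == 0:            # child process
        print("processing", job)
        os._exit(0)
    os.waitpid(pid, 0)      # parent blocks until the child finishes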

Run multiple copies of Speedy or PersistentPerl to be called from Tomcat

I have a modern webapp running under Tomcat, which often needs to call some legacy perl code to get some results. Right now, we wrap these in a call to Runtime.getRuntime().exec() which is working fine.
However, as the webapp gets busier we are noticing that often the perl is timing out and we need to control this.
I am using commons-pool to ensure that only X number of copies can be run at a time, and threads will queue up nicely for a perl instance when they need one, timing out after Y seconds and returning an error (this is fine, the client will just retry).
However we still have the problem that Perl takes a long time to start up, interpret the script, execute and return. At busy times we are doing this 30-50 times per second. It's a beefy machine but it's starting to struggle.
I have read up on Speedy and PersistentPerl and am considering holding open a copy of this in memory for each object in my pool, so that we do not need to open and close the Perl each time.
Is this a good idea? Any tips for how to go about doing this?
Those approaches should reduce the overhead from the start-up time of your script. If the script is something that can be run as a CGI program then you might be better off making it work with Plack and running it with a PSGI server. Your Tomcat application could collect and send the request parameters to your script and/or "web application" running in the background.
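To illustrate the shape of that setup (a Python stand-in for the pattern, not actual Plack/PSGI code): the expensive start-up happens once in a long-lived process, and Tomcat just POSTs each request's parameters to it instead of exec'ing a fresh interpreter.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

# Hypothetical stand-in for the slow start-up work; it runs once per process.
EXPENSIVE_STATE = {"loaded": True}

class Worker(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the form parameters Tomcat sends and answer from the warm process.
        length = int(self.headers.get("Content-Length", 0))
        params = parse_qs(self.rfile.read(length).decode())
        body = repr(params).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8081), Worker).serve_forever()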

Running Cukes in Parallel with JRuby

I'm trying to run cucumber scenarios in parallel from inside my gem. From other answers, I've found I can execute cucumber scenarios with the following:
runtime = Cucumber::Runtime.new
runtime.load_programming_language('rb')
@result = Cucumber::Cli::Main.new(['features\my_feature:20']).execute!(runtime)
The above code works fine when I run one scenario at a time, but when I run them in parallel using something like Celluloid or Peach, I get Ambiguous Step errors. It seems like my step definitions are being loaded for each parallel test and cucumber thinks I have multiple step definitions of the same kind.
Any ideas how I can run these things in parallel?
Cucumber is not thread safe. Each scenario must be run in a separate thread with its own cucumber runtime. Celluloid may try to run multiple scenarios on the same actor at the same time.
There is a project called cukeforker that can run scenarios in parallel but it only supports mri on linux and osx. It forks a subprocess per scenario.
I've created a fork of cukeforker called jcukeforker that supports both mri and jruby on linux. Jcukeforker will distribute scenarios to subprocesses. The subprocesses are reused. Subprocesses are used instead of threads to guarantee that each test has its own global variables. This is important when running the subprocess on a vncserver which requires the DISPLAY variable to be set.
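As an aside, the per-process environment point is easy to picture (a Python sketch of the general idea, not jcukeforker's actual code): each worker subprocess can be handed its own DISPLAY before its tests start.
import os
import subprocess

# Sketch only: one X display per worker so parallel GUI tests don't collide.
# The feature paths are examples.
feature_files = ["features/my_feature.feature", "features/other_feature.feature"]

procs = []
for display_num, feature in enumerate(feature_files, start=1):
    env = dict(os.environ, DISPLAY=":{}".format(display_num))
    procs.append(subprocess.Popen(["cucumber", feature], env=env))

for proc in procs:
    proc.wait()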

Automatic Background Perl Execution on Ubuntu

I've been troubleshooting this issue for about a week and I am nowhere, so I wanted to reach out for some help.
I have a perl script that I execute via the command line, usually in a manner of
nohup ./script.pl --param arg --param2 arg2 &
I usually have about ten of these running at once to process the same type of data from different sources (that is specified through parameters). The script works fine and I can see logs for everything in nohup.out and monitor status via ps output. This script also uses a sql database to track status of various tasks, so I can track finishes of certain sources.
However, that was too much work, so I wrote a wrapper script to execute the script automatically and that is where I am running into problems. I want something exactly the same as I have, but automatic.
The getwork.pl script runs ps and parses the output to find out how many other processes are running; if that is below the configured threshold it will query the database for the most out-of-date source and kick off the script.
The problem is that the kicked off jobs aren't running properly, sometimes they terminate without any error messages and sometimes they just hang and sit idle until I kill them.
The getwork script queries sql and gets the entire execution command via SQL concatenation, so in the sql query I am doing something like CONCAT('nohup ./script.pl --arg ',param1,' --arg2 ',param2,' &') to get the command string.
I've tried everything to get these kicked off. I've tried using system(), but again, some jobs kick off, some don't, sometimes it gets stuck, sometimes jobs start and then die within a minute. If I take the exact command I used to start the job and run it in bash, it works fine.
I've tried to also open a pipe to the command like
open my $ca, "| $command" or die ($!);
print $ca $command;
close $ca;
That works just about as well as everything else I've tried. The getwork script used to be executed through cron every 30 minutes, but I scrapped that because I needed another shell wrapper script, so now there is an infinite loop in the getwork script that executes a function every 30 minutes.
I've also tried many variations of the execution command, including redirecting output to different files, etc... nothing seems to be consistent. Any help would be much appreciated, because I am truly stuck here....
EDIT:
Also, I've tried to add separate logging within each script; it would start a new log file with its PID ($$). There was a bunch of weirdness there too: all log files would get created, but then some of the processes would be running and writing to the file, others would just have an empty text file and some would just have one or two log entries. Sometimes the process would still be running and just not doing anything, other times it would die with nothing in the log. Running the command in the shell directly always works for me, though.
Thanks in advance
You need a kind of job-managing framework.
One of the biggest ones is Gearman: http://www.slideshare.net/andy.sh/gearman-and-perl
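Not Gearman itself, but the basic shape of what a job manager gives you looks like this (a generic Python sketch; the commands are placeholders): a bounded pool runs each job exactly once and caps how many run at a time.
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder job commands; a real manager would pull these from your database.
jobs = [
    ["./script.pl", "--param", "source1"],
    ["./script.pl", "--param", "source2"],
]

def run_job(cmd):
    # Run one job to completion and report its exit status.
    return subprocess.run(cmd, capture_output=True).returncode

with ThreadPoolExecutor(max_workers=2) as pool:
    for cmd, rc in zip(jobs, pool.map(run_job, jobs)):
        print(cmd, "exited with", rc)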
