How to restart child process with Parallel::ForkManager on finish - multithreading

I'd like to have Parallel::ForkManager use a callback to get something back from a child process and then also restart it. Is that possible? The following is from the Parallel::ForkManager docs:
use strict;
use Parallel::ForkManager;

my $max_procs = 5;
my @names = qw( Fred Jim Lily Steve Jessica Bob Dave Christine Rico Sara );
# hash to resolve PID's back to child specific information

my $pm = new Parallel::ForkManager($max_procs);

# Setup a callback for when a child finishes up so we can
# get its exit code
$pm->run_on_finish(
    sub { my ($pid, $exit_code, $ident) = @_;
        print "** $ident just got out of the pool ".
              "with PID $pid and exit code: $exit_code\n";
    }
);

$pm->run_on_start(
    sub { my ($pid, $ident) = @_;
        print "** $ident started, pid: $pid\n";
    }
);

$pm->run_on_wait(
    sub {
        print "** Have to wait for one children ...\n"
    },
    0.5
);

foreach my $child ( 0 .. $#names ) {
    my $pid = $pm->start($names[$child]) and next;

    # This code is the child process
    print "This is $names[$child], Child number $child\n";
    sleep ( 2 * $child );
    print "$names[$child], Child $child is about to get out...\n";
    sleep 1;
    $pm->finish($child); # pass an exit code to finish

    ##### here is where I'd like each child process to restart
}
So when $pm->finish happens, the callback confirms the "child" is "out of the pool." How can I both get the callback to fire and immediately put each child back in the pool as it comes out, so that the whole thing runs forever?

I think you're misunderstanding what's happening. Under the covers, what Parallel::ForkManager is doing is calling a fork(). Two processes exist at this point, with only a single difference - different PIDs.
Your child process goes and runs some stuff, then exits, generating an exit status, which your parent then reaps.
Restarting the process... well, you just need to fork again and run your code.
Now, what you're doing is a foreach loop that forks once per array element, and each forked child then exits.
So really, all you need to do is call $pm->start again (see the sketch below). How you figure out which one exited (and thus which child name to restart) is more difficult, though - your callback runs in the parent process, so nothing is passed back apart from the exit status of your child. You'll need some sort of IPC to communicate the necessary details.
Although - I'd point out that @names isn't a hash, so treating it like one is going to have strange behaviour :).
Have you considered threading as an alternative? Threads are good at shared-memory operations, and keeping track of keyed workers is the sort of thing they handle well.
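As a concrete illustration of "just call $pm->start again", here is a minimal sketch (my own, not from the original post) in which an endless outer loop keeps re-forking every name; the worker body and the sleep are placeholders for real work:

use strict;
use warnings;
use Parallel::ForkManager;

my @names = qw( Fred Jim Lily Steve Jessica );
my $pm    = Parallel::ForkManager->new( scalar @names );

$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident) = @_;
    print "** $ident left the pool (pid $pid, exit code $exit_code)\n";
});

# Endless outer loop: every pass forks each name again. With max_procs
# equal to the number of names, start() blocks until a slot frees up,
# reaping the finished child and firing run_on_finish along the way.
while (1) {
    for my $name (@names) {
        $pm->start($name) and next;   # parent: move on to the next name

        # ----- child process -----
        print "This is $name, doing some work\n";
        sleep 2;
        $pm->finish(0);               # exit code picked up by the callback
    }
}

Because the name is passed to start() as the $ident, run_on_finish can tell which worker just left the pool without any extra IPC; anything richer than that (results, status data) still needs a pipe, a file, or the data-passing support that newer Parallel::ForkManager versions offer via finish().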

Related

How to run parallel process with limit?

How can I run processes in parallel in Node.js but limit the number of processes that exist at any one time?
For example, I have an array with 200 items ['word.doc', 'foo.pdf', 'a.txt', ...] and I need to run a Work process for each item, like so:
Work.exe word.doc
Work.exe foo.pdf
Work.exe a.txt
Work.exe ....
What I did is:
forEach over the array..
call exec from the child_process lib for each item.
But I want only 5 processes at a time. When a process ends, the next item should start, so that at any given moment only 5 processes are running until all of them have completed.
Not sure how it can be done.
The child_process lib has a "close" event;
you could create a counter and add a listener to this event. Each time you call exec, you increment the counter; once the number of running processes reaches the threshold (here, 5), you stop calling the function.
(you could just wrap the increment and the exec call in one function and call it multiple times)
And when you receive a "close" event, you decrement the counter; if the counter hits 0, call the function again to spawn the next batch of N child processes.
You would have to keep a global variable for the array index though.
hope this helps.
exec accepts a callback that will be invoked when the child process exits. You can easily combine this with async's eachLimit:
const async = require('async');
const child_process = require('child_process');

// items = ['word.doc', 'foo.pdf', 'a.txt', ...]
async.eachLimit(items, 5, (item, done) => {
    child_process.exec(`Work.exe ${item}`, done);
});

Make Perl find "wanted" function multithreaded

I have the following code:
use File::Find;

find(\&jpegCompress, @search_paths);

sub jpegCompress()
{
    # do processing
}
Currently it steps through each file one by one in series, which is quite slow. Is there any way to have the jpegCompress function create a thread (if the thread count is < maxThreads) and return to the find function quickly?
The Parallel::ForkManager module provides simple parallel processing. Example:
use Parallel::ForkManager;

$pm = new Parallel::ForkManager($MAX_PROCESSES);

foreach $file (@jpeg_files) {
    # Forks and returns the pid for the child:
    my $pid = $pm->start and next;

    jpegCompress($file);

    $pm->finish; # Terminates the child process
}
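The loop above assumes you already have the list of files in @jpeg_files. With File::Find, a common pattern is to let find() collect the paths first (serially, which is cheap) and then fork over the collected list. A rough sketch under that assumption - the path filter, the $MAX_PROCESSES value and the jpegCompress() body are placeholders:

use strict;
use warnings;
use File::Find;
use Parallel::ForkManager;

my $MAX_PROCESSES = 4;                 # tune to your CPU / disk
my @search_paths  = @ARGV;

# First pass: collect the candidate files serially.
my @jpeg_files;
find(sub { push @jpeg_files, $File::Find::name if /\.jpe?g$/i }, @search_paths);

# Second pass: compress the collected files in parallel.
my $pm = Parallel::ForkManager->new($MAX_PROCESSES);
foreach my $file (@jpeg_files) {
    $pm->start and next;               # parent continues the loop
    jpegCompress($file);               # child does the work
    $pm->finish;
}
$pm->wait_all_children;

sub jpegCompress {
    my ($file) = @_;
    print "compressing $file\n";       # placeholder for the real compression
}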

How to deal with multiple threads in perl which turn into zombie

It seems that using pipes in threads can cause zombies. In fact, it is the commands in the pipe that turn into zombies, not the threads themselves. This does not happen every time, which is annoying, since that makes it hard to find the real problem. How should I deal with this issue? What causes it? Is it related to the pipe? How can I avoid it?
The following code creates the sample files.
# buildTest.pl
use strict;
use warnings;

sub generateChrs {
    my ($outfile, $num, $range) = @_;
    open OUTPUT, "|gzip>$outfile";
    my @set = ('A','T','C','G');
    my $cnt = 0;
    while ($cnt < $num) {
        # body...
        my $pos = int(rand($range));
        my $str = join '' => map $set[rand @set], 1 .. rand(200)+1;
        print OUTPUT "$cnt\t$pos\t$str\n";
        $cnt++;
    }
    close OUTPUT;
}

sub new_chr {
    my @chrs = 1..22;
    push @chrs, ("X","Y","M", "Other");
    return @chrs;
}

for my $chr (&new_chr) {
    generateChrs("$chr.gz", 50000, 100000);
}
The following code occasionally creates zombies. The reason or trigger remains unknown.
# paralRM.pl
use strict;
use threads;
use Thread::Semaphore;

my $s = Thread::Semaphore->new(10);

sub rmDup {
    my $reads_chr = $_[0];
    print "remove duplication $reads_chr START TIME: ", `date`;
    return 0 if (!-s $reads_chr);

    my $dup_removed_file = $reads_chr . ".rm.gz";
    $s->down();
    open READCHR, "gunzip -c $reads_chr |sort -n -k2 |" or die "Error: cannot open $reads_chr";
    open OUTPUT, "|sort -k4 -n|gzip>$dup_removed_file";

    my ($last_id, $last_pos, $last_reads) = split('\t', <READCHR>);
    chomp($last_reads);
    my $last_length  = length($last_reads);
    my $removalCnts  = 0;

    while (<READCHR>) {
        chomp;
        my @line = split('\t', $_);
        my ($id, $pos, $reads) = @line;
        my $cur_length = length($reads);

        if ($last_pos == $pos) {
            # may dup
            if ($cur_length > $last_length) {
                ($last_id, $last_pos, $last_reads) = @line;
                $last_length = $cur_length;
            }
            $removalCnts++;
            next;
        } else {
            # not dup
        }
        print OUTPUT join("\t", $last_id, $last_pos, $last_reads, $last_length, "\n");
        ($last_id, $last_pos, $last_reads) = @line;
        $last_length = $cur_length;
    }
    print OUTPUT join("\t", $last_id, $last_pos, $last_reads, $last_length, "\n");
    close OUTPUT;
    close READCHR;
    $s->up();
    print "remove duplication $reads_chr END TIME: ", `date`;
    #unlink("$reads_chr")
    return $removalCnts;
}

sub parallelRMdup {
    my @chrs = @_;
    my %jobs;
    my @removedCnts;
    my @processing;

    foreach my $chr (@chrs) {
        while (${$s} <= 0) {
            # body...
            sleep 10;
        }
        $jobs{$chr} = async {
            return &rmDup("$chr.gz");
        };
        push @processing, $chr;
    }

    # wait for all threads to finish
    foreach my $chr (@processing) {
        push @removedCnts, $jobs{$chr}->join();
    }
}

sub new_chr {
    my @chrs = 1..22;
    push @chrs, ("X","Y","M", "Other");
    return @chrs;
}

&parallelRMdup(&new_chr);
As the comments on your originating post suggest - there isn't anything obviously wrong with your code here. What might be helpful to understand is what a zombie process is.
Specifically, it's a spawned process (created by your open) which has exited, but whose return code the parent hasn't collected yet.
For short-running code, that's not all that significant - when your main program exits, the zombies will be 'reparented' to init, which will clean them up automatically.
For longer-running code, you can use waitpid to clean them up and collect their return codes.
Now, in this specific case I can't see a particular problem, but I would guess it's to do with how you're opening your filehandles. The downside of opening filehandles the way you are is that they're globally scoped - and that's just generally bad news when you're doing threaded work.
I would imagine if you changed your open calls to:
my $pid = open ( my $exec_fh, "|-", "executable" );
And then called waitpid on that $pid after your close, your zombies would be reaped. Test the return from waitpid to get an idea of which of your commands errored (if any), which should help you track down why.
Alternatively, set $SIG{CHLD} = "IGNORE";, which effectively tells your child processes to 'just go away immediately' - but you won't be able to get a return code from them if they die.
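To make that concrete, here is a rough sketch of rmDup reshaped along those lines; the commands and filenames are from the question, the handle names and warnings are mine. Note that close() on a pipe handle already waits for the command and puts its exit status into $?, so checking $? after the close is usually enough; an explicit waitpid($pid, 0) is only needed if you bypass the close:

use strict;
use warnings;

sub rmDup_sketch {
    my ($reads_chr) = @_;
    my $dup_removed_file = "$reads_chr.rm.gz";

    # Lexical filehandles and 3-argument pipe opens; open returns the
    # pid of the spawned command, so you could also waitpid it explicitly.
    my $in_pid  = open(my $readchr_fh, "-|", "gunzip -c $reads_chr | sort -n -k2")
        or die "Error: cannot open $reads_chr: $!";
    my $out_pid = open(my $output_fh, "|-", "sort -k4 -n | gzip > $dup_removed_file")
        or die "Error: cannot open output pipe: $!";

    while (my $line = <$readchr_fh>) {
        # ... de-duplication logic as in the original, writing to $output_fh ...
        print {$output_fh} $line;
    }

    # close() reaps the pipe command; a false return with $! == 0 means
    # the command itself exited non-zero, with the status in $?.
    close $readchr_fh or warn "input pipe (pid $in_pid) exited with status $?\n";
    close $output_fh  or warn "output pipe (pid $out_pid) exited with status $?\n";
    return;
}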

Perl Reading from Thread::Queue with timeout

I am working in a boss worker crew multithreaded scenario with Thread::Queue in Perl.
The boss enqueues tasks and the workers dequeue from the queue.
I need the worker crew to send downstream ping messages if the boss has not sent a task via the queue for x seconds.
Unfortunately there seems to be no dequeue method with a timeout.
Have I missed something or would you recommend a different approach/different data structure?
You can add the functionality yourself, knowing that a Thread::Queue object is a blessed reference to a shared array (which I believe is the implementation from 5.8 through 5.16):
package Thread::Queue::TimedDequeue;
use parent 'Thread::Queue';
use threads::shared qw(cond_timedwait);  # lock() is a Perl builtin, no import needed

sub timed_dequeue {
    my ($q, $patience) = @_;  # XXX revert to $q->dequeue() if $patience is negative?
                              #     $q->dequeue_nb() if $patience is zero?
    my $timelimit = time() + $patience;

    lock(@$q);
    until (@$q) {
        last if !cond_timedwait(@$q, $timelimit);
    }
    return shift(@$q) if @$q;  # We got an element
    # else we timed out.
}

1;
Then you'd do something like:
# main.pl
use threads;
use strict; use warnings;
use Thread::Queue::TimedDequeue;

use constant WORKER_PATIENCE => 10; # 10 seconds

my $queue = Thread::Queue::TimedDequeue->new();
...

sub worker {
    my $item = $queue->timed_dequeue(WORKER_PATIENCE);
    timedout() unless $item;
    ...
}
Note that the above approach assumes you do not enqueue undef or an otherwise false value.
There is nothing wrong with your approach/structure, you just need to add some timeout control over your "Thread::Queue". That is either:
create some "yield"-based polling loop that checks your queue from the worker side while using a time reference to detect the timeout (see the sketch below), or
use the "Thread::Queue::Duplex" or "Thread::Queue::Multiplex" modules, which might be a bit overkill but do implement timeout controls.

Perl Thread Safe Modules

I am trying to take a Perl program I wrote and thread it. The problem is I read that some modules aren't "thread safe". How do I know if a module is thread safe? I've looked around for a list and cannot locate one.
To test out one module I use frequently (Text::CSV_XS) I tried the following code out:
use strict;
use warnings;
use threads;
use threads::shared;
require Text::CSV_XS;

my $CSV = Text::CSV_XS->new({ binary => 1, eol => "\n" })
    or die("Cannot use CSV: " . Text::CSV_XS->error_diag());
open my $OUTPUT, ">:encoding(utf8)", "test.csv" or die("test.csv: $!");

share($CSV);

my $thr1 = threads->create(\&sayHello('1'));
my $thr2 = threads->create(\&sayHello('2'));
my $thr3 = threads->create(\&sayHello('3'));

sub sayHello
{
    my ($num) = @_;
    print("Hello thread number: $num\n");
    my @row = ($num);
    lock($CSV);
    {
        $CSV->print($OUTPUT, \@row);
        $OUTPUT->autoflush(1);
    } # lock
} # sayHello
The output I receive is the following:
Hello thread number: 1
Segmentation fault
Does this mean the module is not thread safe, or is it another problem?
Thanks
Generally speaking, core and high-visibility modules are thread-safe unless their documentation says otherwise.
That said, there are a few missteps in your post:
share($CSV)
This clears $CSV (a blessed hashref), just as documented in threads::shared. Generally, you want to share() complex objects prior to initialization or, perhaps in this case, share() some dumb $lock variable between threads.
Since $CSV holds state for the underlying XS, this might lead to undefined behavior.
But this isn't your segfault.
threads->create(\&sayHello('1'));
You are mistakenly invoking sayHello(1) in the main thread and passing a reference to its return value to threads->create() as a (bogus) start routine.
You meant to say:
threads->create(\&sayHello, '1');
But this isn't your segfault.
(EDIT Just to clarify -- a bad start routine here doesn't risk a SEGV in any case. threads::create properly complains if an unrecognized subroutine name or non-CODE ref is passed in. In your case, however, you are segfaulting too quickly to reach this error handling.)
Encodings are not thread-safe.
Again, as documented, the encoding module is not thread-safe.
Here's the smallest possible code I could get to reproduce your symptoms:
use threads;
open my $OUTPUT , ">:encoding(utf8)", "/dev/null" or die $!;
threads->create( sub {} )->join;
That's perl 5.12.1 with threads-1.77 on i686-linux-thread-multi, if you're interested. Drop the "utf8" magic, and it works just fine.
This is your segfault.
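Putting the fixes together - a shared lock variable instead of share()-ing the object, sub-and-args passed to create() separately, and no :encoding(utf8) layer on the shared handle - a corrected sketch might look like this (the per-thread CSV object and the lock name are my own choices, not from the original post):

use strict;
use warnings;
use threads;
use threads::shared;
use IO::Handle;
use Text::CSV_XS;

# Share a plain lock variable rather than the Text::CSV_XS object,
# which holds XS-level state and should not be share()d.
my $output_lock :shared;

# No ":encoding(utf8)" layer, since that is what was segfaulting here.
open my $OUTPUT, ">", "test.csv" or die "test.csv: $!";
$OUTPUT->autoflush(1);

# Code ref and its arguments are passed to create() separately.
my @threads = map { threads->create(\&sayHello, $_) } 1 .. 3;
$_->join for @threads;

sub sayHello {
    my ($num) = @_;
    print "Hello thread number: $num\n";

    # Each thread builds its own CSV object, so no object state is shared.
    my $csv = Text::CSV_XS->new({ binary => 1, eol => "\n" })
        or die "Cannot use CSV: " . Text::CSV_XS->error_diag();

    lock($output_lock);               # serialize writes to the shared file
    $csv->print($OUTPUT, [$num]);
}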
