I have a Perl script that I need to stop every two hours on a Linux machine. I was going to write a separate script and add it to cron.d to accomplish this, but I wanted an easier way. It has to do a graceful exit, because after a CTRL+C it writes a log file, and killing it outright won't write the file.
You can set up an alarm at the beginning of the script, and provide a handler for it:
alarm 60 * 60 * 2;
local $SIG{ALRM} = sub {
    warn "Time over!\n";
    # Do the logging here...
    exit;
};
The question is how you would restart the script again.
A wrapper keeps things simple.
#!/usr/bin/perl
# usage:
#    restarter program [arg [...]]

use strict;
use warnings;

use IPC::Open3 qw( open3 );
use POSIX      qw( WNOHANG );

use constant RESTART_AFTER  => 2*60*60;
use constant KILL_INT_WAIT  => 30;
use constant KILL_TERM_WAIT => 30;
use constant WAIT_POLL      => 15;

sub start_it {
    open(local *NULL, '<', '/dev/null')
        or die($!);
    return open3('<&NULL', '>&STDOUT', '>&STDERR', @_);
}

sub wait_for_it {
    my ($pid, $max_wait) = @_;
    my $end_time = time + $max_wait;
    while (1) {
        if (waitpid($pid, WNOHANG) > 0) {
            return 1;
        }

        my $time = time;
        if ($time >= $end_time) {
            return 0;
        }

        sleep(1);
    }
}

sub end_it {
    my ($pid) = @_;

    kill(INT => $pid)
        or die($!);
    return if wait_for_it($pid, KILL_INT_WAIT);

    kill(TERM => $pid)
        or die($!);
    return if wait_for_it($pid, KILL_TERM_WAIT);

    kill(KILL => $pid)
        or die($!);
    waitpid($pid, 0);
}

sub run_it {
    my $end_time = time + RESTART_AFTER;
    my $pid = start_it(@_);
    while (1) {
        if (waitpid($pid, WNOHANG) > 0) {
            last;
        }

        my $time = time;
        if ($time >= $end_time) {
            end_it($pid);
            last;
        }

        my $sleep_time = $end_time - $time;
        $sleep_time = WAIT_POLL if $sleep_time > WAIT_POLL;  # Workaround for race condition.
        sleep($sleep_time);
    }

    my $status = $?;
    if    ($status & 0x7F) { warn("Child killed by signal ".($status & 0x7F)."\n"); }
    elsif ($status >> 8)   { warn("Child exited with error ".($status >> 8)."\n"); }
    else                   { warn("Child exited successfully.\n"); }
}

run_it(@ARGV) while 1;
You might want to forward signals sent to the wrapper to the child.
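A minimal sketch of that forwarding (this fragment assumes it sits inside run_it(), right after start_it() returns, so $pid is in scope; local() undoes the handlers when run_it() exits):

# Relay Ctrl+C and TERM from the wrapper to the child instead of
# letting them kill the wrapper while the child keeps running.
local $SIG{INT}  = sub { kill(INT  => $pid) };
local $SIG{TERM} = sub { kill(TERM => $pid) };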
You can catch the signal sent by Ctrl-C by setting a subroutine in $SIG{INT}:
$ perl -e '$SIG{INT} = sub { print "Caught signal, cleaning up\n"; exit 0 }; while(1) {}'
Do your cleanup within the sub, and there you go.
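Putting the two ideas together, here is a minimal sketch of a script that exits cleanly either after two hours or on CTRL+C; write_log() and its log path are hypothetical stand-ins for whatever your script already does before exiting:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical placeholder for the existing end-of-run logging.
sub write_log {
    open my $fh, '>', '/tmp/myscript.log' or die $!;
    print {$fh} "Finished at " . localtime() . "\n";
    close $fh;
}

# One exit path for both CTRL+C and the two-hour timeout.
$SIG{INT}  = sub { write_log(); exit 0 };
$SIG{ALRM} = sub { write_log(); exit 0 };
alarm 2 * 60 * 60;

while (1) {
    # ... the script's real work goes here ...
    sleep 1;
}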
I'm trying to work through the following code as an example:
use Data::Dumper;

my $milon;
my $pid = fork();
die if not defined $pid;

if (not $pid)
{
    # Child process (fork() returned 0 here)
    $milon->{$pid} = $pid;
    exit;
}

$milon->{3} = 4;
my $finished = wait();
print( "debug10: TEST = ", Dumper($milon) );
output:
debug10: TEST = $VAR1 = {
'3' => 4
};
How do I make the hash keep both 3 => 4 and also $pid => $pid?
It doesn't have to be forking; it could be multithreading or non-blocking IO, whichever you think is better.
This is just an example, of course; I want to apply whatever I conclude from it to my real code.
You need some memory that is shared between your threads/processes. The easiest is probably to use interpreter-based threads and threads::shared. For instance:
use threads;
use threads::shared;
use Data::Dumper;

my %milon :shared;

for (1 .. 2) {
    threads->create(sub {
        my $tid = threads->tid();
        $milon{$tid} = $tid;
    });
}

$milon{3} = 4;

$_->join for threads->list;  # Wait for all threads to be done

print Dumper \%milon;
This outputs:
$VAR1 = {
'1' => 1,
'2' => 2,
'3' => 4
};
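If you would rather stick with fork() than threads, the child must send its data back explicitly, because a forked child only modifies its own copy of the hash. Here is a minimal sketch using a pipe; the "key value" line format is just a convention made up for this example:

use strict;
use warnings;
use Data::Dumper;

my %milon;

pipe(my $reader, my $writer) or die "pipe: $!";

my $pid = fork();
die "fork: $!" if not defined $pid;

if (not $pid) {
    # Child: send its contribution as "key value" and exit.
    # $$ is the child's own PID, which is what the parent sees as $pid.
    close $reader;
    print {$writer} "$$ $$\n";
    close $writer;
    exit;
}

# Parent: read what the child sent and merge it into the hash.
close $writer;
while (my $line = <$reader>) {
    my ($k, $v) = split ' ', $line;
    $milon{$k} = $v;
}
close $reader;
waitpid($pid, 0);

$milon{3} = 4;
print Dumper \%milon;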
The following sample code demonstrates the use of fork() to compute the square of each number in an array @numbers (100 numbers in total) in parallel.
The REAPER function assigned to the $SIG{CHLD} handler cleans up completed child processes so that zombies do not hang around in the process table.
Investigate whether a fork() approach fits your problem/task.
use strict;
use warnings;

use POSIX qw(strftime :sys_wait_h);
use Time::HiRes qw(usleep);

my $limit   = 10;
my $threads = $limit;

my @numbers = map { int(rand(100)) } 1..100;

sub REAPER {
    local $!;
    while ( (my $pid = waitpid(-1, WNOHANG)) > 0 && WIFEXITED($?) ) {
        $threads++;
    }
    $SIG{CHLD} = \&REAPER;
}

$SIG{CHLD} = \&REAPER;

for ( @numbers ) {
    while ( $threads == 0 or $threads > $limit ) { usleep(1) }

    my $pid = fork();
    die $! unless defined $pid;

    if ( $pid ) {
        # parent
        $threads--;
    } else {
        # child
        my $n = compute_square($_);
        printf "Process %6d: %3d => %d\n", $$, $_, $n;
        exit 0;
    }
}

sub compute_square {
    my $num = shift;
    return $num*$num;
}
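Note that the loop above can fall through while some children are still running. A minimal sketch of a final wait at the end of the parent (restoring the default CHLD handler first so the blocking waitpid() is not raced by REAPER):

# After the for loop: block until every remaining child has been reaped.
$SIG{CHLD} = 'DEFAULT';
1 while waitpid(-1, 0) > 0;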
At the end of my multi-threaded Perl script, I get errors like the ones below. The number changes from time to time.
Scalars leaked: -2
Scalars leaked:2
What could be the cause of this problem? Are they just warnings?
I have created my threads in the following way:
our $threads1 = 3;
our $threads2 = 3;

for (my $i = 0; $i < $threads1; $i++)
{
    $threadpool1[$i] = threads->create( \&sub1, $arg1, $arg2, $arg2, $threads1, $threads2 );
}

# Add work to queue1
foreach my $work (keys %{$workobj})
{
    $queue1->enqueue( $work );
}

for (my $i = 0; $i < $threads2; $i++)
{
    $threadpool2[$i] = threads->create( \&sub2, $arg1, $arg2 );
}

# Wait until worker threads complete the work
$_->join for @threadpool1;
$_->join for @threadpool2;

sub sub1($arg1, $arg2, $arg2, $threads1, $threads2)
{
    while ($queue1->dequeue)
    {
        # do some work
        # send work to queue 2
        $queue2->enqueue(work);
        $queue1->enqueue(undef x threads1);
    }
    # if all work has been sent to second queue, send undef to second set of threads
    $queue1->enqueue(undef x $threads2);
    return;
}

sub sub1($arg1, $arg2)
{
    while ($queue2->dequeue)
    {
        # do some work
    }
    return;
}
Any ideas on where I am going wrong?
I have a basic Perl script that runs a function with 20 threads.
use threads;
use threads::shared;
use Thread::Queue;
use File::Slurp;

$| = 1;

my $numthreads = 20;

my $fetch_q = Thread::Queue->new();

sub fetch {
    while ( my $target = $fetch_q->dequeue() ) {
        # Returns the HTTP status code of the request (i.e. 200, 302, 404, etc.)
        my $request = `curl "http://WEBSITE" -s -o /dev/null -w "%{http_code}"`;
        if ($request eq "200") {
            print "Success. Sleeping for 5 seconds.";
            sleep(5);
        }
        else {
            print "Fail. Will try again in 10 seconds.";
            sleep(10);
            redo;
        }
    }
}

my @workers = map { threads->create( \&fetch ) } 1 .. $numthreads;

$fetch_q->enqueue( 1 .. $max );
$fetch_q->end();

foreach my $thr (@workers) { $thr->join(); }
If a condition is true, I want the thread to sleep for 5 seconds but have all the other threads continue. Right now when I use sleep(5) it seems like the entire script sleeps for 5 seconds.
How can I use sleep() for individual threads?
sleep is the right tool. I don't know why you think it seems to block all threads, but it doesn't. The following illustrates this:
use threads;
use threads::shared;
use Thread::Queue;

my $fetch_q = Thread::Queue->new();

sub fetch {
    while (defined( my $job = $fetch_q->dequeue() )) {
        if ($job) {
            print threads->tid, " Starting to sleep\n";
            sleep(5);
            print threads->tid, " Finished sleeping\n";
        }
        else {
            print threads->tid, " Starting to sleep\n";
            sleep(10);
            print threads->tid, " Finished sleeping\n";
        }
    }
}

my @workers = map { threads->create( \&fetch ) } 1 .. 2;

$fetch_q->enqueue($_) for 0, 1, 1;
$fetch_q->end();

foreach my $thr (@workers) { $thr->join(); }
Output:
1 Starting to sleep
2 Starting to sleep
2 Finished sleeping
2 Starting to sleep <-- The second worker picked up more work while the first was still sleeping.
1 Finished sleeping
2 Finished sleeping
I have written a Perl script to pause and resume. When the user presses Ctrl+C it has to pause, and on pressing "c" it should resume. But it is not working as expected. Can anyone help me with the mistake I am making?
use strict;
use threads;
use threads::shared;
use Thread::Suspend;
use Lens;

$SIG{'INT'} = 'Pause';
#$| = 1;
print chr(7);

my $nthreads = 64;
my @thrs;

for (1..$nthreads)
{
    print "START $_ \n";
    my ($thr) = threads->create(\&worker, $_);
    push @thrs, $thr;
}

$_->join for @thrs;
exit;

sub worker
{
    my $id = shift;
    my $tmp;
    my $lens = Lens->new("172.16.1.65:2000");
    die "cannot create object" unless defined $lens;
    die "cannot connect to XRay at " unless defined $lens->open("172.16.1.65:2000");
    for (1..100000)
    {
        print "Thread $id \n";
    }
    print "$id>LOAD EXIT\n";
}

sub Pause
{
    sleep(1);
    print "\nCaught ^C\n";
    print "Press \"c\" to continue, \"e\" to exit: ";
    $_->suspend() for @thrs;
    while (1)
    {
        my $input = lc(getc());
        chomp($input);
        if ($input eq 'c') {
            #clock($hour,$min,$sec);
            $_->resume() for @thrs;
            return;
        }
        elsif ($input eq 'e') {
            exit 1;
        }
    }
}
Well, you haven't been too specific as to how it's "not working properly". But I would suggest looking at using Thread::Semaphore for a 'suspend' mechanism.
I would also suggest not using signals and instead doing something like:
#!/usr/bin/perl
use strict;
use warnings;

use threads;
use Thread::Semaphore;
use Term::ReadKey;

my $nthreads = 64;
my $thread_semaphore = Thread::Semaphore->new($nthreads);

sub worker {
    for ( 1 .. 10 ) {
        $thread_semaphore->down();
        print threads->self->tid(), "\n";
        sleep 1;
        $thread_semaphore->up();
    }
}

for ( 1 .. $nthreads ) {
    threads->create( \&worker );
}

my $keypress;
ReadMode 4;
while ( threads->list(threads::running) ) {
    while ( not defined( $keypress = ReadKey(-1) )
        and threads->list(threads::running) )
    {
        print "Waiting\nRunning: " . threads->list(threads::running) . "\n";
        sleep 1;
    }
    print "Got $keypress\n";
    if ( $keypress eq "p" ) {
        print "Pausing...";
        $thread_semaphore->down_force($nthreads);
        print "All paused\n";
    }
    if ( $keypress eq "c" ) {
        print "Resuming...";
        $thread_semaphore->up($nthreads);
    }
}
ReadMode 0;

foreach my $thr ( threads->list ) {
    $thr->join();
}
It 'suspends' by forcing the semaphore down to zero (or negative) and relies on the threads checking whether they should stop at that point.
I think the root of your problem, though, will probably be signal propagation: your signal handler is global across your threads. You might find that configuring $SIG{'INT'} for your threads separately yields better results (e.g. set the signal handler to 'IGNORE' at the start of your code, and set specific ones in the thread/main once the threads have been spawned).
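For instance, a minimal sketch of that arrangement; the worker body here is just a stand-in:

use strict;
use warnings;
use threads;

# Ignore SIGINT before spawning, so the new threads inherit 'IGNORE'.
$SIG{INT} = 'IGNORE';

my @thrs = map {
    threads->create(sub {
        for (1 .. 10) { sleep 1 }   # hypothetical worker
    });
} 1 .. 4;

# Install the real handler in the main thread only, after spawning.
$SIG{INT} = sub {
    print "\nCaught ^C in the main thread\n";
    # pause/resume logic would go here
};

$_->join for @thrs;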
Is there a multiprocessing module for Perl? Something that has similar functionality to what's offered by Python's multiprocessing module.
I understand I could build similar functionality using Perl, but I'm looking for something already implemented.
forks provides the same awesome interface as threads, but uses processes instead of threads.
use forks;  # Or: use threads;
use Thread::Queue;

my $q = Thread::Queue->new();

my @workers;
for (1..NUM_WORKERS) {
    push @workers, async {
        while (defined(my $job = $q->dequeue())) {
            ...
        }
    };
}

$q->enqueue(...);
$q->enqueue(undef) for @workers;

$_->join for @workers;
Comparing forks with Forks::Super.
Keep in mind, these are supposed to be the cases where Forks::Super excels!
use Forks::Super;
sub do_something { my @args = @_; ... }
$process = fork { sub => \&do_something, args => [@args] };
$process->wait;
can be written as
use forks;
sub do_something { my @args = @_; ... }
$process = async { do_something(@args) };
$process->join;
---
use Forks::Super;
my $x = 42;
my @y = ();
my %z = ();
sub do_something_else {
    $x = 19;
    @y = qw(foo bar);
    %z = (foo => 'bar');
}
$process = fork { sub => 'do_something_else', share => [ \$x, \@y, \%z ] };
$process->wait;
can be written as
use forks;
use forks::shared;
my $x :shared = 42;
my @y :shared = ();
my %z :shared = ();
sub do_something_else {
    $x = 19;
    @y = qw(foo bar);
    %z = (foo => 'bar');
}
$process = async { do_something_else() };
$process->join;
---
use Forks::Super;
use IO::Handle;
pipe my $child_read,  my $parent_write;
pipe my $parent_read, my $child_write;
$parent_write->autoflush(1);
$child_write->autoflush(1);
sub square {
    while (my $x = <$child_read>) {
        chomp($x);
        print {$child_write} $x ** 2, "\n";
    }
    close $child_write;
}
$process = fork { sub => 'square' };
print {$parent_write} "9\n";
chomp( my $result = <$parent_read> );  # 81
close $parent_write;
$process->wait;
can be written as
use forks;
use Thread::Queue;
my $req  = Thread::Queue->new();
my $resp = Thread::Queue->new();
sub square { $_[0] ** 2 }
$process = async {
    while (defined(my $x = $req->dequeue())) {
        $resp->enqueue( square($x) );
    }
};
$req->enqueue(9);
my $result = $resp->dequeue();  # 81
$req->enqueue(undef);
$process->join;
---
use Forks::Super;
sub square_root {
    sleep 1 && seek STDIN,0,1 while eof(STDIN);  # ok, this is a workaround for an existing bug :-(
    while (my $x = <STDIN>) {
        chomp($x);
        print sqrt($x), "\n";
    }
}
$process = fork { sub => 'square_root', child_fh => 'in,out,block' };
$process->write_stdin("81\n");
chomp( $result = $process->read_stdout() );  # 9
$process->close_fh('stdin');
$process->wait;
can be written as
use forks;
use Thread::Queue;
my $req  = Thread::Queue->new();
my $resp = Thread::Queue->new();
$process = async {
    while (defined(my $x = $req->dequeue())) {
        $resp->enqueue( sqrt($x) );
    }
};
$req->enqueue(81);
my $result = $resp->dequeue();  # 9
$req->enqueue(undef);
$process->join;
I think Forks::Super comes pretty close. It has a few features for running an arbitrary subroutine (or external command) in a background process, monitoring and signalling the background process, and making interprocess communication a little less painful.
use Forks::Super;

sub do_something { my @args = @_; ... }
$process = fork { sub => \&do_something, args => [@args] };
$process->wait;

my $x = 42;
my @y = ();
my %z = ();
sub do_something_else {
    $x = 19;
    @y = qw(foo bar);
    %z = (foo => 'bar');
}
$process = fork { sub => 'do_something_else', share => [ \$x, \@y, \%z ] };
$process->wait;
# $x, @y, and %z are now updated with changes made in background process

# create your own pipes to use for IPC
use IO::Handle;
pipe my $child_read,  my $parent_write;
pipe my $parent_read, my $child_write;
$parent_write->autoflush(1);
$child_write->autoflush(1);
sub square {
    while (my $x = <$child_read>) {
        print {$child_write} $x ** 2, "\n";
    }
    close $child_write;
}
$process = fork { sub => 'square' };
print {$parent_write} "9\n";
my $result = <$parent_read>;  # should be "81\n"
close $parent_write;

# or use the standard I/O handles for IPC
sub square_root {
    sleep 1 && seek STDIN,0,1 while eof(STDIN);  # ok, this is a workaround for an existing bug :-(
    while (my $x = <STDIN>) {
        print sqrt($x), "\n";
    }
}
$process = fork { sub => 'square_root', child_fh => 'in,out,block' };
$process->write_stdin("81\n");
$result = $process->read_stdout();  # => "9\n"
Both the multiprocessing module and Forks::Super have a lot of features. Which ones are you specifically interested in?
I am the author of Forks::Super and my goal is to include any features for parallel processing that people find useful, so if there's a feature in multiprocessing that you want in Perl, let me know.
What about POE: Perl Object Environment? It has support for asynchronous child processes.
You can use MCE (Many-Core Engine for Perl): https://github.com/marioroy/mce-perl
It is similar to Python's multiprocessing module.
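A minimal sketch of what that can look like, assuming MCE::Map's mce_map works as a parallel drop-in for map (worker count and chunking are configurable; see the MCE::Map documentation):

use strict;
use warnings;
use MCE::Map;

# Parallel map: the input list is split into chunks, squared by separate
# worker processes, and the results come back in the original order.
my @squares = mce_map { $_ * $_ } 1 .. 100;

print "@squares[0 .. 9]\n";   # 1 4 9 16 25 36 49 64 81 100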