In this program, POE::Wheel::FollowTail works well for following the tail of a file; it runs in a separate thread simply to monitor the progress of a compile job.
Inside the InputEvent handler there's a crude regex to extract compile results, and within that handler everything works fine, but I cannot get any result values to be accessible outside this sub. Even if I put the result variables in the global scope they are not modified.
The program consists of one process running the compile job, another watching the log, and the main loop waiting.
Global scope:
my $Pass = 0;
my $Done = 0;
Then to kick off the monitoring:
threads->create(\&StartWatcher);
Where the watch-log file sub looks like this:
sub StartWatcher
{
my $logfile = "filename.log";
# Create the logfile watcher
POE::Session->create
(
inline_states =>
{
_start => sub
{
$_[HEAP]{tailor} = POE::Wheel::FollowTail->new( Filename => $logfile, InputEvent => "got_log_line", );
},
got_log_line => sub
{
$Pass += () = $_[ARG0] =~ /^\d+.*vcproj \- 0 error\(s\), \d+ warning\(s\)/g;
$Done += () = $_[ARG0] =~ /^\d+.*vcproj \- \d+ error\(s\), \d+ warning\(s\)/g;
print "POE InputEvent Pass: $Pass, Done: $Done\n"; # Debug output
},
}
);
POE::Kernel->run();
}
The $logfile is being written by a Visual Studio compile job started with Win32::Process::Create. The main Perl execution sits in this loop waiting for the compiler to terminate, producing a status line every second.
while('true')
{
$ProcessObj->Wait(100); # milliseconds wait
$ProcessObj->GetExitCode($exitcode);
if ( $exitcode == STILL_ACTIVE )
{
print "Compiling... [$Done/$Count] Pass: $Pass Failed: $failed\n"
if($RunCounter++ % 10 == 0);
next;
}
last;
}
The output produced is similar to this:
POE InputEvent Pass: 1, Done: 1
Compiling... [0/91] Pass: 0 Failed: 0
i.e. in the InputEvent handler got_log_line the two global variables have been incremented, yet in the main Perl loop they are still at zero. I realise that I could do the print output from the InputEvent handler, but why doesn't it modify the global variables?
What is going wrong?
Threading in Perl doesn't work the same way as in other languages: the program space is not shared. When a thread is created, the current thread is copied into the new one, which is then separate from its parent (each thread has its own instance of the Perl interpreter). If you want to communicate between the threads, look at threads::shared, Thread::Queue and Thread::Semaphore.
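For illustration, here is a minimal, untested sketch of the threads::shared approach; the variable name and the loop are made up for the example:
use strict;
use warnings;
use threads;
use threads::shared;
# Declared before the thread starts, and marked :shared so every thread
# (each with its own interpreter clone) sees the same storage.
my $Pass :shared = 0;
threads->create(sub {
    for (1 .. 5) {
        { lock($Pass); $Pass++; }   # lock around each update
    }
})->join();
print "Pass seen from the main thread: $Pass\n";   # prints 5
Without the :shared attribute each thread would increment its own private copy of $Pass, which is exactly the symptom described in the question.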
Following on from anydot's suggestion above, here's the answer:
Before starting the thread, create a shared queue
use threads;
use Thread::Queue;
use threads::shared;
my $queue:shared = Thread::Queue->new();
In the input event, create some shared data to enqueue
my %data:shared = ();
$data{PASS_VCPRJ_COUNT} = () = $_[ARG0] =~ /^\d+.*vcproj.*0 error.*\d+ warning/g;
$data{DONE_VCPRJ_COUNT} = () = $_[ARG0] =~ /^\d+.*vcproj.*\d+ error.*\d+ warning/g;
$queue->enqueue(\%data) if($data{DONE_VCPRJ_COUNT} != 0 || $data{PASS_VCPRJ_COUNT} != 0);
Then, in the screen-update code, dequeue it (non-blocking here):
if (defined(my $item = $queue->dequeue_nb()))
{
foreach my $key (keys %$item)
{print "$key $item->{$key}\n";}
}
There are other ways, I'm sure but this works for me.
Many thanks.
Related
I have a threaded application and would like to set timeouts for the threads. The perldoc for alarm suggests using an eval-die pair and catching the ALRM signal. However, this fails with threads, producing the error Alarm clock:
use strict; use warnings;
require threads;
require threads::shared;
my $t = threads->create( sub {
eval {
$SIG{ALRM} = sub { die "alarm\n" };
alarm 2;
main();
alarm 0;
};
if ($@){
die $@ unless $@ eq "alarm\n";
print "timed out\n";
}
}
);
my @r = $t->join;
print "done\n";
sub main {
sleep 3;
}
This post suggests that alarm is called without a signal handler in the threads library. Another post is about this problem, and its answers suggest using fork and waitpid, but I would really like to use threads. Another post claims to come up with a solution, but this still gives the Alarm clock error for me. I tried to catch Alarm clock in the if ($@), but with no success. Any idea how I could make this work?
The whole idea of using alarm in threads is problematic.
Signals are sent to processes, not threads.
What if two threads want to use alarm?
You'll have to implement your own system. The following is an attempt at a general solution:
package Threads::Alarm;
use strict;
use warnings;
use threads;
use threads::shared;
use Exporter qw( import );
our @EXPORT_OK = qw( alarm thread_alarm );

# A list of "$time:$tid" strings sorted by ascending time.
my @alarms :shared;

sub thread_alarm {
    my ($wait) = @_;
    my $tid = threads->tid();
    lock @alarms;

    # Cancel existing alarm for this thread, if any.
    for my $i (0..$#alarms) {
        if ((split(/:/, $alarms[$i]))[1] == $tid) {
            splice(@alarms, $i, 1);
            last;
        }
    }

    # Create an alarm
    if ($wait) {
        my $when = time() + $wait;

        # A binary search would be better.
        my $i;
        for ($i=0; $i<@alarms; ++$i) {
            last if $when < (split(/:/, $alarms[$i]))[0];
        }

        splice(@alarms, $i, 0, "$when:$tid");
    }

    # Notify others of change to @alarms.
    cond_broadcast(@alarms);
}

{
    no warnings 'once';
    *alarm = \&thread_alarm;
}

threads->create(sub {
    while (1) {
        my $thread;
        {
            lock @alarms;
            while (1) {
                # Wait for an alarm request to come in.
                cond_wait(@alarms) while !@alarms;

                # Grab the soonest alarm.
                my ($when, $tid) = split(/:/, $alarms[0]);

                # Check if the thread still exists.
                $thread = threads->object($tid)
                    or last;

                # Wait for @alarms to change or for the alarm time.
                last if !cond_timedwait(@alarms, $when);
            }

            # Before releasing the lock, remove the alarm we're about to raise.
            shift(@alarms);

            # Notify others of change to @alarms.
            # Doesn't actually do anything at this time.
            cond_broadcast(@alarms);
        }

        $thread->kill('ALRM') if $thread;
    }
})->detach();
1;
Completely untested. Well, I made sure it compiles, but that's it.
Note that threads->kill doesn't send a real signal (since those are sent to processes, not threads), so the OS won't interrupt any operation (e.g. sleep, wait). Simple solution: Send a real signal to a handler that does nothing right after calling threads->kill. Maybe I should have written a solution that was based around the actual SIGALRM.
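A rough, untested sketch of that suggestion; the choice of USR1, and whether the real signal actually lands on the blocked thread, are assumptions that can vary by platform:
use strict;
use warnings;
use threads;
my $thr = threads->create(sub {
    local $SIG{ALRM} = sub { print "worker saw ALRM\n" };
    sleep 60;                 # blocking call that defers safe-signal delivery
    print "worker done\n";
});
sleep 1;
$SIG{USR1} = sub { };         # real handler that deliberately does nothing
$thr->kill('ALRM');           # pseudo-signal: marks ALRM pending in the worker
kill 'USR1', $$;              # real signal: may interrupt the blocking sleep
$thr->join();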
sub worker {
# Curse stuff
initscr();
cbreak();
noecho();
my $fh = FileHandle->new;
open $fh, q{-|},
"$myexe @pre_args make @after_args 2>&1"
or croak 'Cannot open';
process_output($fh);
my $err = close $fh;
endwin();
return;
}
sub process_output {
my ($fh) = @_;
while (my $line = <$fh>) {
#do stuff
}
}
ReadMode 3;
threads->create(\&worker);
while (threads->list(threads::running)) {
my $char = ReadKey -1, *STDIN;
if ($char) {
if ($char eq 'q') {
endwin();
kill('INT', $$);
threads->exit();
}
}
}
ReadMode 0;
foreach my $thr (threads->list) {
$thr->join();
}
When I press 'q':
Perl exited with active threads:
1 running and unjoined
0 finished and unjoined
0 running and detached
and then I did ps -fu myuserid
I saw that $myexe was still running
Q1) How can I force the child process to exit? threads->exit() didn't seem to work.
The most obvious problem with the sample program is that it is using multiple threads for the curses library. That won't work (curses is not thread-safe). If you have to do this, keep all of the curses work in the same thread.
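As an untested sketch of that arrangement - $myexe, @pre_args and @after_args are the question's variables, and the placeholder values are assumptions - the worker only pumps the command's output into a queue, while every curses call stays in the main thread:
use strict;
use warnings;
use threads;
use Thread::Queue;
use Curses;                       # assumed source of initscr()/endwin() etc.
my ($myexe, @pre_args, @after_args) = ('mybuild');   # placeholder values
my $q = Thread::Queue->new();
# The worker only runs the command and queues its output - no curses here.
my $worker = threads->create(sub {
    open my $fh, '-|', "$myexe @pre_args make @after_args 2>&1"
        or die "Cannot open: $!";
    $q->enqueue($_) while <$fh>;
    close $fh;
    $q->enqueue(undef);           # tell the main thread we are finished
});
# All curses work stays in the main thread.
initscr(); cbreak(); noecho();
while (defined(my $line = $q->dequeue())) {
    addstr($line);                # render the worker's output
    refresh();
}
endwin();
$worker->join();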
You call exit but don't detach or join the threads.
Stick:
foreach my $thr ( threads -> list() ) {
$thr -> join;
}
at the end, and your 'main' code will wait for your threads to (all) exit.
However - threads -> exit() is for exiting the current thread. See:
http://perldoc.perl.org/threads.html#EXITING-A-THREAD
In order to terminate another thread, you need something like threads -> kill. Either send a 'proper' kill signal, or use a signal handler for SIGUSR1 or similar.
I'd probably approach it a little differently though - define a shared variable:
my $done : shared;
And then test it within the while loop, so you've got a normal execution flow rather than a mid-flight kill.
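Something like this untested sketch; the sleeps and the loop body are placeholders:
use strict;
use warnings;
use threads;
use threads::shared;
my $done :shared = 0;
my $worker = threads->create(sub {
    until ($done) {
        # one unit of work per pass: read a line, update the screen, ...
        sleep 1;
    }
    # normal fall-through: any per-thread clean-up happens here
});
# Main flow: when the user asks to quit, set the flag and wait.
sleep 3;
$done = 1;
$worker->join();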
Your kill ( 'INT', $$ ) is going to kill your main process and just drop the threads on the floor. That's not good style, really.
So - to backtrack a bit - the problem you're having - I think - is because 'signals' in perl aren't what you're expecting them to be. Perl uses safe signals, which means a blocking call (such as a read from a filehandle) holds up delivery of the signal.
http://perldoc.perl.org/perlipc.html#Deferred-Signals-%28Safe-Signals%29
So I wouldn't normally suggest that using signals within threads is a good idea - they're a little erratic, which isn't good for program predictability.
You can 'throw' and 'catch' signals, by defining a signal handler within the thread:
$SIG{'USR1'} = sub { print "Caught USR1"; die };
And then 'call' it using:
$worker -> kill ( 'USR1' );
But in certain circumstances, that won't 'bail out' immediately in the way you expect.
For example - this will work:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Data::Dumper;
sub worker {
print Dumper \%SIG;
my $tail_pid = open ( my $tail_proc, "-|", "tail -f /var/log/messages" );
$SIG{'USR1'} = sub { print "Caught USR1\nKilling $tail_pid"; kill ( 'TERM', $tail_pid ); die; threads -> exit() };
print "Pre-loop\n";
while ( 1 ) {
print "Thread processing\n";
sleep 1;
}
print "Done";
return;
}
my $worker = threads -> create ( \&worker );
sleep 2;
print "Sending kill\n";
$worker -> kill ( 'SIGUSR1' );
sleep 2;
print "waiting for join\n";
$worker -> join();
But if your while loop is reading from the file handle - it's a blocking call, so the 'kill' will be held until the block lifts.
E.g.
while ( <$tail_proc> ) {
Will go into a block pending IO, and your thread won't 'get' the signal until IO occurs and the thread continues processing. That might be sufficient for your needs, though. Otherwise you're potentially looking at select or IO::Select to test whether the handle is readable.
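An untested sketch of that polling approach - the tail command, the half-second timeout and the $done flag are assumptions for the example:
use strict;
use warnings;
use IO::Select;
open my $tail_proc, '-|', 'tail -f /var/log/messages'
    or die "Cannot open tail: $!";
my $done = 0;                         # would be a shared "please stop" flag
my $sel = IO::Select->new($tail_proc);
until ($done) {
    if ($sel->can_read(0.5)) {        # block for at most half a second
        my $line = <$tail_proc>;      # note: mixing select with buffered
        last unless defined $line;    # reads can over-buffer; sysread is
        print $line;                  # the stricter alternative
    }
    # no data within the timeout: loop around and re-check $done
}
close $tail_proc;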
So what you may want to do instead is just kill the process that's 'feeding' your while loop - because by doing so, the read will unblock, the while condition will return undef, and the loop will exit.
E.g.:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;
my $kill_pid : shared;
sub worker {
$kill_pid = open ( my $tail_proc, "-|", "tail -f /var/log/messages" );
print "Pre-loop\n";
while ( <$tail_proc> ) {
print "Thread processing\n";
print;
sleep 1;
}
print "Done";
return;
}
my $worker = threads -> create ( \&worker );
sleep 2;
print "Sending kill\n";
if ( defined $kill_pid ) { print "killing tail, $kill_pid\n"; kill ( 'TERM', $kill_pid ); };
sleep 2;
print "waiting for join\n";
$worker -> join();
I've been writing a "checker" system that performs various "checks" on various services, systems, databases, files, etc. A "check" is generic in nature and can be anything. All checks are reported in a common format whether they pass or fail, whatever that may be.
It is written in a modular OO fashion so that developers can simply follow the framework and write checks independently of one another. Each object contains a shared reporting object; after they run a check they simply call $self->{'reporting'}->report(params). The params are defined and developers are assumed to report appropriately. The reporting object then indexes these reports. My main loader script has entries such as the following:
my $reportingObject = new Checks::Reporting(params);
my @checks;
push @checks, new Checks::Check_One($reportingObject, params);
push @checks, new Checks::Check_One($reportingObject, params);
.
.
push @checks, new Checks::Check_N($reportingObject, params);
To kick off the checks and finalize the report once they are done I have been doing:
foreach my $check (@checks) {
$check->run_stuff();
}
$reportingObject->finalize_report();
Now since these checks are totally independent (do not worry about the reporting object) they can be run in parallel. As an improvement I have done:
my @threads;
foreach my $check (@checks) {
push @threads, async { $check->run_stuff(); };
}
foreach my $thread (@threads) {
$thread->join;
}
#All threads are complete, and thus all checks are done
$reportingObject->finalize_report();
As I said earlier the developers will write Checks independently of each other. Some checks are simple and others are not. The simple checks may not have asynchronous code in them, but others might need to run asynchronously internally such as
sub do_check {
    my @threads;
    my @list = @{$self->{'list'}};
    foreach my $item (@list) {
        push @threads, async {
            #do_work_on_$item
            #return 1 or 0 for success or fail
        };
    }
    foreach my $thread (@threads) {
        my $res = $thread->join;
        if($res == 1) {
            $self->{'reporting'}->report(params_here);
        }
    }
}
As you can see the threading model allows me to do things in very vague terms. Each "Check" no matter what it is runs independently in its own thread. If an individual developer has asynchronous stuff to do, no matter what it is, he simply does it independently in its own thread. I want a model similar to this.
Unfortunately threads are slow and inefficient. All of the async libraries have specific watchers, such as IO, etc. I do not want anything specific. I would like an event-based model that allows me to simply kick off async tasks, no matter what they are, and be notified when they are all done so I can move on.
Hopefully that explains it and you can point me in the right direction.
This seems like a good fit for a boss-worker model:
Spawn a few workers at the beginning of the program. Make sure they all have access to a queue.
Enqueue as many checks as you like. The workers dequeue the checks, execute them, and enqueue the result in an output queue.
Your main thread looks at the results from the output queue, and does whatever it wants.
Join the workers in an END block
You probably want to look at Thread::Queue::Any if there is a chance you want to put coderefs into the queue.
Here is a fully runnable example:
use strict; use feature 'say';
use threads; use threads::shared; use Thread::Queue::Any;
use constant NUM_THREADS => 5;
local $Storable::Deparse = 1; local $Storable::Eval = 1; # needed to serialize code
my $check_q = Thread::Queue::Any->new;
my $result_q = Thread::Queue::Any->new;
# start the workers
{
my $running :shared = NUM_THREADS;
my @threads = map threads->new(\&worker, $check_q, $result_q, \$running), 1..NUM_THREADS;
END { $_->join for @threads }
}
# enqueue the checks
$check_q->enqueue($_) for sub {1}, sub{2}, sub{"hi"}, sub{ die };
$check_q->enqueue(undef) for 1..NUM_THREADS; # end the queue
while(defined( my $result = $result_q->dequeue )) {
report($$result);
}
sub report {
say shift // "FAILED";
}
sub worker {
my ($in, $out, $running_ref) = @_;
while (defined( my $check = $in->dequeue )) {
my $result = eval { $check->() };
$out->enqueue(\$result);
}
# last thread closes the door
lock $$running_ref;
--$$running_ref || $out->enqueue(undef);
}
This prints
1
2
hi
FAILED
in a slightly random order.
My Perl script needs to run multiple threads simultaneously...
use threads ('yield', 'exit' => 'threads_only');
use threads::shared;
use strict;
use warnings;
no warnings 'threads';
use LWP::UserAgent;
use HTTP::Request;
use HTTP::Async;
use ...
...and such threads need to obtain some information from web, so HTTP::Async is used.
my $request = HTTP::Request->new;
$request->protocol('HTTP/1.1');
$request->method('GET');
$request->header('User-Agent' => '...');
my $async = HTTP::Async->new( slots => 100,
timeout => REQUEST_TIMEOUT,
max_request_time => REQUEST_TIMEOUT );
But some threads need to access web only when other thread(s) says so.
my $start = [Time::HiRes::gettimeofday()];
my @threads = ();
foreach ... {
$thread = threads->create(
sub {
local $SIG{KILL} = sub { threads->exit };
my $url = shift;
if ($url ... ) {
# wait for "go" signal from other threads
}
my ($response, $data);
$request->url($url);
$data = '';
$async->add($request);
while ($response = $async->wait_for_next_response) {
threads->yield();
$data .= $response->as_string;
}
if ($data ... ) {
# send "go" signal to waiting threads
}
}, $_);
if (defined $thread) {
$thread->detach;
push (@threads, $thread);
}
}
There might be one or more threads waiting for the "go" signal, and one or more threads that can send such a "go" signal. At the beginning the status of the semaphore is "wait", and once it turns to "go", it will stay so.
Finally, the app checks the maximum running time. If threads are running too long, a self-termination signal is sent.
my $running;
do {
$running = 0;
foreach my $thread (@threads) {
$running++ if $thread->is_running();
}
threads->yield();
} until (($running == 0) ||
(Time::HiRes::tv_interval($start) > MAX_RUN_TIME));
$running = 0;
foreach my $thread (@threads) {
if ($thread->is_running()) {
$thread->kill('KILL');
$running++;
}
}
threads->yield();
Now to the point. My questions are:
How can I most effectively code the waiting "semaphore" in the script (see the comments in the script above)? Should I simply use a shared variable with some dummy sleep loop?
Do I need to add a sleep loop at the end of the app to give the threads time to self-destruct?
You might look at Thread::Queue to perform this work. You could setup a queue that would handle the signaling between the threads waiting for the 'go' signal and the threads sending the 'go' signal. Here's a quick mock-up that I haven't tested:
...
use Thread::Queue;
...
# In main body
my $q = Thread::Queue->new();
...
$thread = threads->create(
sub {
local $SIG{KILL} = sub { threads->exit };
my $url = shift;
if ($url ... ) {
# wait for "go" signal from other threads
my $mesg = $q->dequeue();
# you could put in some termination code if the $mesg isn't 'go'
if ($mesg ne 'go') { ... }
}
...
if ($data ... ) {
# send "go" signal to waiting threads
$q->enqueue('go');
}
}, $_);
...
The threads that need to wait for a 'go' signal will wait on the dequeue method until something enters the queue. Once a message enters the queue one thread and only one thread will grab the message and process it.
If you wish to stop the threads so that they won't run, you can insert a stop message to the head of the queue.
$q->insert(0, 'stop') foreach (@threads);
There are examples in Thread::Queue and threads CPAN distributions that show this in more detail.
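Since a queued message wakes only one consumer, and the question wants a "go" that reaches every waiting thread and then stays set, an alternative untested sketch using a shared flag with cond_wait/cond_broadcast could look like this:
use strict;
use warnings;
use threads;
use threads::shared;
my $go :shared = 0;
# Called by the threads that must hold off until the signal arrives,
# e.g. at the top of the worker sub: wait_for_go(); ...do the request...
sub wait_for_go {
    lock($go);
    cond_wait($go) until $go;   # sleep until broadcast, then re-check the flag
}
# Called by whichever thread decides it is time to proceed.
sub send_go {
    lock($go);
    $go = 1;                    # latch: once "go", always "go"
    cond_broadcast($go);        # wake every waiter, not just one
}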
In response to your second question, the answer is, unfortunately, it depends. When you proceed to terminate your threads, what kind of clean-up is required for a clean shutdown? What's the worst-case scenario that could occur if the rug was yanked out from beneath the thread? You would want to allow time for that clean-up to occur. The other option is to wait on each thread to actually complete.
The reason for my comment asking if you could remove the detach call is that this method allows the main thread to exit without caring what is happening to any child threads. Instead, if you remove this call, and add:
$_->join() foreach threads->list();
to the end of your main block, this will require the main application to wait for each thread to actually complete.
If you leave the detach method in place, then you will need to sleep at the end of your code if you require your threads to perform any sort of clean-up. When you call detach on a thread, what you are telling Perl is that you don't care what the thread is doing when your main thread exits. If the main thread exits and there are detached threads still running, then the program will finish with no warnings. However, if you don't require any clean-up, and you still call detach, feel free to exit whenever you like.
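A minimal, untested sketch of that detached ending; the sleep lengths are arbitrary placeholders:
use strict;
use warnings;
use threads;
# A worker that needs a moment of clean-up before it is really finished.
threads->create(sub {
    sleep 1;
    print "worker clean-up done\n";
})->detach();
# Detached ending: the main thread will not wait for the worker, so leave a
# grace period if that clean-up still matters to you.
sleep 2;
print "main thread exiting\n";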
Try out something like this....
#!/usr/bin/perl
use threads;
use threads::shared;
$|=1;
my ($global):shared;
my (@threads);
push(@threads, threads->new(\&mySub, 1));
push(@threads, threads->new(\&mySub, 2));
push(@threads, threads->new(\&mySub, 3));
my $i = 0;
foreach my $myThread (@threads)
{
my @ReturnData = $myThread->join;
print "Thread $i returned: @ReturnData\n";
$i++;
}
sub mySub
{
my ($threadID) = @_;
for(0..1000)
{
$global++;
print "Thread ID: $threadID >> $_ >> GLB: $global\n";
sleep(1);
}
return( $threadID );
}
So I recently wanted to thread one of my Perl programs to increase its speed. Taking in a list of websites, I wanted to start a thread for each URL, get the content of each website, and then look for a company description on the page. Once one thread found a result, or all threads failed to, I wanted to exit, write my result, and read in the URLs for my next company.
The problem that I see is that I use the Perl::Unsafe::Signals module inside the function that I call when creating a thread. I need the unsafe signals to interrupt regular expressions that get "stuck". However, this seems to cause all sorts of problems, mainly the program crashing with the error message "Alarm Clock" shown.
Therefore, is there a way to use Perl::Unsafe::Signals and threads safely? Or is there another way to time out a regular expression, for example by sending a signal to the function (like the 'KILL' signal I send below)? Thanks.
Note: I stripped down the code to all pertinent parts, let me know if you need more.
use threads ('exit' => 'threads_only');
use threads::shared;
my @descrip;
share(@descrip);
my $lock;
share($lock);
URL:foreach my $url(@unique_urls) {
#skip blank urls
if(!$url) { next URL; }#if
#find description
my $thread = threads->create(\&findCompanyDescription, $PREV_COMPANY, $PREV_BASE_URL, $url);
#while a description has not been found and there are still active threads, keep looking
#there may be a better way to do this, but this seems to work for me
while(!@descrip && threads->list() != 0) {;}
#kill all threads, write output, read in next batch of urls
my @threads = threads->list();
foreach(@threads) { print("detaching\n"); $_->kill('KILL')->detach(); }#foreach
#######SUBROUTINE CALLED BY THREAD CREATE
sub findCompanyDescription {
my($company_full, $base_url, $url) = @_;
my($descrip, $raw_meta, $raw) = '';
my @company;
$SIG{'KILL'} = sub { alarm(0); threads->exit(); };
eval {
local $SIG{ALRM} = sub { die("alarm\n") }; # NB: \n required
alarm(5);
use Perl::Unsafe::Signals;
UNSAFE_SIGNALS {
while($company) {
my @matches = ($content =~ m!.*<([\w\d]+).*?>\s*about\s+$company[\w\s\-_]*<.*?>(?:<.*?>|\s)*(.*?)</\1.*?>!sig);
MATCH:for(my $ndx=1; $ndx<@matches; $ndx+=2) {
($raw, $descrip) = &filterResult($matches[$ndx], $company_full);
if($descrip) {
$company = undef;
last(MATCH);
}#if
}#for
#reduce the company name and try again
$company = &reduceCompanyName($company);
}#while
alarm(0);
};#unsafe_signals
};#eval
if($@) {
if($@ eq "alarm\n" && $DEBUG) { print("\nWebpage Timeout [].\n"); }#if
}#if
if($descrip) { lock($lock); {
@descrip = ($PREV_ID, $company_full, $base_url, $url, 1, $raw, $descrip); }
}#if
In general, "unsafe" signals are unsafe for both single-threaded and multi-threaded programs. You've only increased your peril by using threads and unsafe signals. Perl's usual safe signal handler sets the signal_pending flag without meaningfully interrupting execution. The VM checks that flag when it's between opcodes.
Your regexp execution is a single, "atomic" opcode. Of course, the regexp itself is another VM with its own opcodes, but the perl signal handler doesn't currently have visibility into that.
Frankly, I've no good idea about how to interrupt the regexp engine. It's got some global C state which, prior to perl-5.10, prevented it from being reentrant. It might not be safe for the kind of universal interruptibility you're attempting. If you really want it to be fully interruptible, you might want to fork and have your child process do the regexp and communicate the results back over a pipe.
require JSON;
require IO::Select;
my $TIMEOUT_SECONDS = 2.5; # seconds
my ( $read, $write );
pipe $read, $write;
my @matches;
my $pid = fork;
if ( $pid ) {
my $select = IO::Select->new( $read );
if ( $select->can_read( $TIMEOUT_SECONDS ) ) {
local $/;
my $json = <$read>;
if ( $json ) {
my $matches_ref = JSON::from_json( $json );
if ( $matches_ref ) {
@matches = @$matches_ref;
}
}
}
waitpid $pid, 0;
}
else {
my @r = $content =~ m!.*<([\w\d]+).*?>\s*about\s+$company[\w\s\-_]*<.*?>(?:<.*?>|\s)*(.*?)</\1.*?>!sig;
my $json = JSON::to_json( \ #r );
print { $write } $json;
close $write;
exit;
}
IMHO, mixing signals and threads is a challenging task per se (i.e. even without Perl-specific issues).
Remember that even in a single-threaded program you can safely call only async-signal-safe functions from a signal handler, because the program may be interrupted at any point.
Perl adds another layer of abstraction, so I have no idea about the safety of calling "die" from a signal handler in the case of unsafe signals.
If I remember properly, SIGALRM is an asynchronous signal, so it must be handled synchronously. Your way of handling it is generally incorrect in multi-threaded programs.
Moreover, IMHO perl threads just do not work as most people expect.
Just avoid using them and use processes instead.
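For the original timeout goal, a minimal process-based sketch might look like this; run_check() and the 5-second limit are placeholders, and error handling is kept to a minimum:
use strict;
use warnings;
sub run_check { sleep 30 }      # placeholder for the work that might run away
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {                # child does the risky work in its own process
    run_check();
    exit 0;
}
local $SIG{ALRM} = sub { kill 'KILL', $pid };   # hard-kill the child on timeout
alarm 5;
waitpid $pid, 0;                # returns (or is interrupted) when the child ends
alarm 0;
waitpid $pid, 0 if kill 0, $pid;   # reap the killed child if it is still there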
P.S.
The following line doesn't make sense:
$SIG{'KILL'} = sub { alarm(0); threads->exit(); };
SIGKILL (as well as SIGSTOP) cannot be caught.
I'm not really a specialist on Perl-MT, but one thing you apparently are missing is that signals are global to the whole process - they are not thread-specific. On POSIX systems you can't set a signal handler for a single thread: signals are delivered to the whole process. IOW, an alarm() call affects the whole process, not only the thread which calls it. And even local %SIG in an MT context doesn't do what one might think it does - because local is a thing of syntax.