How do I queue perl subroutines to a thread queue instead of data?

Background:
In reading about how to multithread my Perl script, I read the following (from http://perldoc.perl.org/threads.html#BUGS-AND-LIMITATIONS):
On most systems, frequent and continual creation and destruction of
threads can lead to ever-increasing growth in the memory footprint of
the Perl interpreter. While it is simple to just launch threads and
then ->join() or ->detach() them, for long-lived applications, it is
better to maintain a pool of threads, and to reuse them for the work
needed, using queues to notify threads of pending work.
My script will be long-lived; it's a PKI LDAP directory monitoring daemon that will always be running. The enterprise monitoring solution will generate an alarm if it stops running for any reason. My script will check that I can reach another PKI LDAP directory, as well as validate revocation lists on both.
Problem: Everything I can find on Google shows passing variables (e.g. scalars) to the thread queue rather than the subroutine itself... I think I'm just not understanding how to implement a thread queue properly compared to how you implement a thread (without queues).
Question 1: How can I "maintain a pool of threads" to keep the Perl interpreter from slowly eating up more and more memory?
Question 2: (Unrelated but while I have this code posted) Is there a safe amount of sleep at the end of the main program so that I don't start a thread more than once in a minute? 60 seems obvious but could that ever cause it to run more than once if the loop is fast, or perhaps miss a minute because of processing time or something?
Thanks in advance!
#!/usr/bin/perl
use feature ":5.10";
use warnings;
use strict;
use threads;
use Proc::Daemon;
#
### Global Variables
use constant false => 0;
use constant true  => 1;
my $app = $0;
my $continue = true;
$SIG{TERM} = sub { $continue = false };
# Directory Server Agent (DSA) info
my @ListOfDSAs = (
    { name => "Myself (inbound)",
      host => "ldap.myco.ca",
      base => "ou=mydir,o=myco,c=ca",
    },
    { name => "Company 2",
      host => "ldap.comp2.ca",
      base => "ou=their-dir,o=comp2,c=ca",
    }
);
#
### Subroutines
sub checkConnections
{ # runs every 5 minutes
    my (@DSAs, $logfile) = @_;
    # Code to ldapsearch
    threads->detach();
}
sub validateRevocationLists
{ # runs every hour on minute xx:55
    my (@DSAs, $logfile) = @_;
    # Code to validate CRLs haven't expired, etc
    threads->detach();
}
#
### Main program
Proc::Daemon::Init;
while ($continue)
{
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
    # Question 1: Queues??
    if ($min % 5 == 0 || $min == 0)
    { threads->create(\&checkConnections, @ListOfDSAs, "/var/connect.log"); }
    if ($min == 55)
    { threads->create(\&validateRevocationLists, @ListOfDSAs, "/var/RLs.log"); }
    sleep 60; # Question 2: Safer/better way to prevent multiple threads being started for same check in one matching minute?
}
# TERM RECEIVED
exit 0;
__END__

use threads;
use Thread::Queue 3.01 qw( );

my $check_conn_q      = Thread::Queue->new();
my $validate_revoke_q = Thread::Queue->new();

my @threads;
push @threads, async {
    while (my $job = $check_conn_q->dequeue()) {
        check_conn(@$job);
    }
};
push @threads, async {
    while (my $job = $validate_revoke_q->dequeue()) {
        validate_revoke(@$job);
    }
};

while ($continue) {
    my ($S,$M,$H,$d,$m,$Y) = localtime; $m+=1; $Y+=1900;
    $check_conn_q->enqueue([ @ListOfDSAs, "/var/connect.log" ])
        if $M % 5 == 0;
    $validate_revoke_q->enqueue([ @ListOfDSAs, "/var/RLs.log" ])
        if $M == 55;
    sleep 30;
}

$check_conn_q->end();
$validate_revoke_q->end();
$_->join for @threads;
I'm not sure parallelisation is needed here. If it's not, you could simply use
use List::Util qw( min );

sub sleep_until {
    my ($until) = @_;
    my $time = time;
    return if $time >= $until;
    sleep($until - $time);
}

my $next_check_conn = my $next_validate_revoke = time;
while ($continue) {
    sleep_until min $next_check_conn, $next_validate_revoke;
    last if !$continue;
    my $time = time;
    if ($time >= $next_check_conn) {
        check_conn(@ListOfDSAs, "/var/connect.log");
        $next_check_conn = time + 5*60;
    }
    if ($time >= $next_validate_revoke) {
        validate_revoke(@ListOfDSAs, "/var/RLs.log");
        $next_validate_revoke = time + 60*60;
    }
}

I would recommend just running the checks one at a time, as there does not appear to be a compelling reason to use threads here, and you don't want to add unnecessary complexity to a program that will be running all the time.
If you do want to learn how to use a thread pool, there are examples included with the threads module. There is also a Thread::Pool module that may be useful. A pared-down sketch of the pool idea follows this paragraph.
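For a rough idea of what such a pool looks like, here is a minimal sketch built on threads and Thread::Queue rather than on Thread::Pool itself; run_check() and the worker count of two are illustrative placeholders, not part of the original code:
use threads;
use Thread::Queue;

my $work_q = Thread::Queue->new();

# Two long-lived workers; each blocks on dequeue() until work arrives,
# so the threads are created once and reused instead of respawned.
my @pool = map {
    threads->create(sub {
        while (defined(my $job = $work_q->dequeue())) {
            run_check(@$job);   # placeholder for the real check sub
        }
    });
} 1 .. 2;

# The scheduling loop then just hands work to the pool, e.g.:
# $work_q->enqueue([ @ListOfDSAs, "/var/connect.log" ]);

# At shutdown, end the queue so dequeue() returns undef and the workers exit:
$work_q->end();
$_->join() for @pool;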
As for ensuring you don't repeat the checks in the same minute, you are correct that sleeping for 60 seconds will be inadequate. No matter what value you choose to sleep, you will have edge cases in which it fails: either it will be slightly shorter than a minute, and you will occasionally have two checks in the same minute, or it will be slightly longer than a minute, and you will occasionally miss a check altogether.
Instead, use a variable to remember when the task was last done. You can then use a shorter sleep time without worrying about multiple checks per minute.
my $last_task_time = -1;
while ($continue)
{
    my $min = (localtime(time))[1];
    if ($last_task_time != $min &&
        ($min % 5 == 0 || $min > ($last_task_time+5)%60))
    {
        #Check connections here.
        if ($min == 55 || ($last_task_time < 55 && $min > 55))
        {
            #Validate revocation lists here.
        }
        $last_task_time = $min;
    }
    else
    {
        sleep 55; #Ensures there is at least one check per minute.
    }
}
Update: I fixed the code so that it will recover if the last task ran too long. This would be fine if it occasionally takes a long time. If the tasks are frequently taking longer than five minutes, though, you need a different solution (threads would probably make sense in that case; one sketch follows below).
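If you do go that route, one hedged sketch (the sub and queue names are illustrative) is to keep the minute-tracking loop above but hand the actual checking to a persistent worker over a queue, so a slow check never delays the next scheduled minute:
use threads;
use Thread::Queue;

my $check_q = Thread::Queue->new();
my $worker  = threads->create(sub {
    while (defined(my $job = $check_q->dequeue())) {
        check_connections(@$job);   # illustrative name for the real check
    }
});

# Inside the timing loop, enqueue instead of doing the work inline:
# $check_q->enqueue([ @ListOfDSAs, "/var/connect.log" ]) if $min % 5 == 0;

# At shutdown:
# $check_q->end();
# $worker->join();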

Related

Perl threads: How to make a producer?

I have a function that is very slow to run. I need input from that function in the main part of the program. So I would like to do something similar to the UNIX command yes, that produces as much input as is read, but only a little more than is needed. Unlike yes I do not want the values from STDIN but I want the values in a Perl queue.
In other words: This problem is not about selecting on file-handles, but on queues maintained by threads.
I imagine the meta code will look similar to this:
my $DataQueue = Thread::Queue->new();
my @producers;
my $no_of_threads = 10;
for (1..$no_of_threads) {
    push @producers, threads->create(\&producer);
}

for (<>) {
    # This should block until there is a value to dequeue
    # Maybe dequeue blocks by default - then this part is not a problem
    my $val = $DataQueue->dequeue();
    do_something($_, $val);
}

# We are done: The producers are no longer needed
kill @producers;

sub producer {
    while (1) {
        # How do I wait until the queue length is smaller than number of threads?
        wait_until(length of $DataQueue < $no_of_threads);
        $DataQueue->enqueue(compute_slow_value());
    }
}
But is there a more elegant way of doing this? I am especially unsure of how to do the wait_until part in an efficient way.
Something like this will probably work:
use threads;
use Thread::Queue;   # note: the 'limit' attribute needs a reasonably recent Thread::Queue

my $DataQueue = Thread::Queue->new();
my @producers;
my $no_of_threads = 10;
for (1..$no_of_threads) {
    push @producers, threads->create(\&producer);
}

$DataQueue->limit = 2 * $no_of_threads;

for (<>) {
    # This blocks until $DataQueue->pending > 0
    my $val = $DataQueue->dequeue();
    do_something($_, $val);
}

# We are done: The producers are no longer needed
kill @producers;

sub producer {
    while (1) {
        # enqueue will block until $DataQueue->pending < $DataQueue->limit
        $DataQueue->enqueue(compute_slow_value());
    }
}
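If your Thread::Queue is too old to have the limit attribute, a Thread::Semaphore initialized to the desired buffer size gives the same back-pressure. A sketch under that assumption (compute_slow_value() is the question's placeholder):
use threads;
use Thread::Queue;
use Thread::Semaphore;

my $no_of_threads = 10;
my $DataQueue = Thread::Queue->new();
my $slots     = Thread::Semaphore->new(2 * $no_of_threads);   # max items buffered ahead

sub producer {
    while (1) {
        $slots->down();                              # blocks while the buffer is full
        $DataQueue->enqueue(compute_slow_value());
    }
}

# Consumer side: free a slot for every value taken off the queue.
# my $val = $DataQueue->dequeue();
# $slots->up();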

perl threads self detach

I'm pretty new to Perl (and to programming in general) and have been toying around with threads for the last couple of weeks. So far I've understood that using them to perform lots of similar parallel tasks is discouraged: memory consumption is uncontrollable if the number of threads depends on some input values, and simply limiting that number and doing some interim joins seems pretty much silly.
So I've tried to trick threads to return me some values through queues followed by detaching those threads (and without actually joining them) - here's an example with parallel ping:
#!/usr/bin/perl
#
use strict;
use warnings;
use threads;
use NetAddr::IP;
use Net::Ping;
use Thread::Queue;
use Thread::Semaphore;

########## get my IPs from CIDR-notation #############
my @ips;
for my $cidr (@ARGV) {
    my $n = NetAddr::IP->new($cidr);
    foreach ( @{ $n->hostenumref } ) {
        push @ips, ( split( '/', $_ ) )[0];
    }
}

my $ping      = Net::Ping->new("icmp");
my $pq        = Thread::Queue->new( @ips, undef );   # ping-worker-queue
my $rq        = Thread::Queue->new();                # response queue
my $semaphore = Thread::Semaphore->new(100);         # I hoped this may be useful to limit # of concurrent threads

while ( my $phost = $pq->dequeue() ) {
    $semaphore->down();
    threads->create( { 'stack_size' => 32 * 4096 }, \&ping_th, $phost );
}

sub ping_th {
    $rq->enqueue( $_[0] ) if $ping->ping( $_[0], 1 );
    $semaphore->up();
    threads->detach();
}

$rq->enqueue(undef);
while ( my $alive_ip = $rq->dequeue() ) {
    print $alive_ip, "\n";
}
I couldn't find a fully comprehensive description of how threads->detach() should work from within a threaded subroutine, and thought that this might work... and it does, if I do something in the main program (thread) that stretches its lifetime (sleep does well), so all the detached threads finish up and enqueue their part to my $rq. Otherwise it will run some threads, collect their results into the queue, and exit with warnings like:
Perl exited with active threads:
5 running and unjoined
0 finished and unjoined
0 running and detached
Making the main program "sleep" for a while, once again, seems silly - is there no way to make threads do their stuff and detach ONLY after the actual threads->detach() call?
So far my guess is that threads->detach() inside a sub applies as soon as the thread is created and so this is not the way.
I tried this out with CentOS's good old v5.10.1. Should this change with modern v5.16 or v5.18 (usethreads-compiled)?
Detaching a thread isn't particularly useful, because you're effectively saying 'I don't care when they exit'.
This isn't typically what you want - your process is finishing with threads still running.
Generally though, creating threads has an overhead, because your process is cloned in memory. You want to avoid doing this. Thread::Queue is also good to use, because it's a thread-safe way of transferring information. In your code, you don't actually need it for $pq, because you're not actually threading at the point where you're using it.
Your semaphore is one approach to doing it, but may I suggest an alternative:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Net::Ping;
use Thread::Queue;

my $nthreads = 100;

my $ping_q   = Thread::Queue -> new();
my $result_q = Thread::Queue -> new();

sub ping_host {
    my $pinger = Net::Ping->new("icmp");
    while ( my $hostname = $ping_q -> dequeue() ) {
        if ( $pinger -> ping ( $hostname, 1 ) ) {
            $result_q -> enqueue ( $hostname );
        }
    }
}

#start the threads
for ( 1..$nthreads ) {
    threads -> create ( \&ping_host );
}

#queue the workload (@ip_list is the address list built in the question's code)
$ping_q -> enqueue ( @ip_list );

#close the queue, so '$ping_q -> dequeue' returns undef, breaking the while loop.
$ping_q -> end();

#wait for pingers to finish.
foreach my $thr ( threads -> list() ) {
    $thr -> join();
}
$result_q -> end();

#collate results
while ( my $successful_host = $result_q -> dequeue_nb() ) {
    print $successful_host, "\n";
}
This way you spawn the threads up front, queue the targets and then collate the results when you're done. You don't incur the overhead of repeatedly respawning threads, and your program will wait until all the threads are done. Which may take a while, because the ping timeout on a 'down' host is quite long.
Since detached threads can't be joined, you can wait for them to finish their jobs with:
sleep 1 while threads->list();

perl system call causing hang when using threads

I am a newbie to perl, so please excuse my ignorance. (I'm using windows 7)
I have borrowed echicken's threads example script and wanted to use it as a basis for a script to make a number of system calls, but I have run into an issue which is beyond my understanding. To illustrate the issue I am seeing, I am doing a simple ping command in the example code below.
$nb_process is the number of simultaneously running threads allowed.
$nb_compute is the number of times we want to run the subroutine (i.e. the total number of times we will issue the ping command).
When I set $nb_compute and $nb_process to be same value as each other, it works perfectly.
However when I reduce $nb_process (to restrict the number of running threads at any one time), it seems to lock once the number of threads defined in $nb_process have started.
It works fine if I remove the system call (ping command).
I see the same behaviour for other system calls (it's not just ping).
Please could someone help? I have provided the script below.
#!/opt/local/bin/perl -w
use threads;
use strict;
use warnings;

my @a = ();
my @b = ();

sub sleeping_sub ( $ $ $ );

print "Starting main program\n";

my $nb_process = 3;
my $nb_compute = 6;
my $i = 0;
my @running = ();
my @Threads;

while (scalar @Threads < $nb_compute) {
    @running = threads->list(threads::running);
    print "LOOP $i\n";
    print " - BEGIN LOOP >> NB running threads = ".(scalar @running)."\n";

    if (scalar @running < $nb_process) {
        my $thread = threads->new( sub { sleeping_sub($i, \@a, \@b) });
        push (@Threads, $thread);
        my $tid = $thread->tid;
        print " - starting thread $tid\n";
    }

    @running = threads->list(threads::running);
    print " - AFTER STARTING >> NB running Threads = ".(scalar @running)."\n";

    foreach my $thr (@Threads) {
        if ($thr->is_running()) {
            my $tid = $thr->tid;
            print " - Thread $tid running\n";
        }
        elsif ($thr->is_joinable()) {
            my $tid = $thr->tid;
            $thr->join;
            print " - Results for thread $tid:\n";
            print " - Thread $tid has been joined\n";
        }
    }

    @running = threads->list(threads::running);
    print " - END LOOP >> NB Threads = ".(scalar @running)."\n";
    $i++;
}

print "\nJOINING pending threads\n";
while (scalar @running != 0) {
    foreach my $thr (@Threads) {
        $thr->join if ($thr->is_joinable());
    }
    @running = threads->list(threads::running);
}

print "NB started threads = ".(scalar @Threads)."\n";
print "End of main program\n";

sub sleeping_sub ( $ $ $ ) {
    my @res2 = `ping 136.13.221.34`;
    print "\n@res2";
    sleep(3);
}
The main problem with your program is that you have a busy loop that tests whether a thread can be joined. This is wasteful. Furthermore, you could reduce the number of global variables to make your code easier to follow.
Other eyebrow-raisers:
Never ever use prototypes, unless you know exactly what they mean (a short example of why follows this list).
The sleeping_sub does not use any of its arguments.
You use the threads::running list a lot without contemplating whether this is actually correct.
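As a hypothetical illustration of that first point (this is not code from the question), a prototype silently changes how a call is parsed:
use strict;
use warnings;

sub takes_three ($$$) { return "got: @_\n" }   # prototype: exactly three scalar arguments

my @args = (1, 2, 3);
# takes_three(@args);   # compile-time error "Not enough arguments", because the
#                       # prototype forces @args into scalar context (one argument)
print takes_three($args[0], $args[1], $args[2]);   # must spell out three scalars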
It seems you only want to run N workers at once, but want to launch M workers in total. Here is a fairly elegant way to implement this. The main idea is that we have a queue between threads where threads that just finished can enqueue their thread ID. This thread will then be joined. To limit the number of threads, we use a semaphore:
use threads; use strict; use warnings;
use feature 'say'; # "say" works like "print", but appends newline.
use Thread::Queue;
use Thread::Semaphore;

my @pieces_of_work = 1..6;
my $num_threads = 3;

my $finished_threads = Thread::Queue->new;
my $semaphore = Thread::Semaphore->new($num_threads);

for my $task (@pieces_of_work) {
    $semaphore->down; # wait for permission to launch a thread
    say "Starting a new thread...";

    # create a new thread in scalar context
    threads->new({ scalar => 1 }, sub {
        my $result = worker($task);                # run actual task
        $finished_threads->enqueue(threads->tid);  # report as joinable "in a second"
        $semaphore->up;                            # allow another thread to be launched
        return $result;
    });

    # maybe join some threads
    while (defined( my $thr_id = $finished_threads->dequeue_nb )) {
        join_thread($thr_id);
    }
}

# wait for all threads to be finished, by "down"ing the semaphore:
$semaphore->down for 1..$num_threads;

# end the finished thread ID queue:
$finished_threads->enqueue(undef);

# join any threads that are left:
while (defined( my $thr_id = $finished_threads->dequeue )) {
    join_thread($thr_id);
}
With join_thread and worker defined as
sub worker {
    my ($task) = @_;
    sleep rand 2;        # sleep random amount of time
    return $task + rand; # return some number
}

sub join_thread {
    my ($tid) = @_;
    my $thr = threads->object($tid);
    my $result = $thr->join;
    say "Thread #$tid returned $result";
}
we could get the output:
Starting a new thread...
Starting a new thread...
Starting a new thread...
Starting a new thread...
Thread #3 returned 3.05652608754778
Starting a new thread...
Thread #1 returned 1.64777186731541
Thread #2 returned 2.18426146087901
Starting a new thread...
Thread #4 returned 4.59414651998983
Thread #6 returned 6.99852684265667
Thread #5 returned 5.2316971836585
(order and return values are not deterministic).
The usage of a queue makes it easy to tell which thread has finished. Semaphores make it easier to protect resources, or limit the amount of parallel somethings.
The main benefit of this pattern is that far less CPU is used, when contrasted to your busy loop. This also shortens general execution time.
While this is a very big improvement, we could do better! Spawning threads is expensive: This is basically a fork() without all the copy-on-write optimizations on Unix systems. The whole interpreter is copied, including all variables, all state etc. that you have already created.
Therefore, threads should be used sparingly and spawned as early as possible. I already introduced you to queues that can pass values between threads. We can extend this so that a few worker threads constantly pull work from an input queue, and return results via an output queue. The difficulty now is having the last thread to exit also end the output queue.
use threads; use strict; use warnings;
use feature 'say';
use Thread::Queue;
use Thread::Semaphore;

# define I/O queues
my $input_q  = Thread::Queue->new;
my $output_q = Thread::Queue->new;

# spawn the workers
my $num_threads = 3;
my $all_finished_s = Thread::Semaphore->new(1 - $num_threads); # a negative start value!
my @workers;
for (1 .. $num_threads) {
    push @workers, threads->new( { scalar => 1 }, sub {
        while (defined( my $task = $input_q->dequeue )) {
            my $result = worker($task);
            $output_q->enqueue([$task, $result]);
        }
        # we get here when the input queue is exhausted.
        $all_finished_s->up;
        # end the output queue if we are the last thread (the semaphore is > 0).
        if ($all_finished_s->down_nb) {
            $output_q->enqueue(undef);
        }
    });
}

# fill the input queue with tasks
my @pieces_of_work = 1 .. 6;
$input_q->enqueue($_) for @pieces_of_work;

# finish the input queue
$input_q->enqueue(undef) for 1 .. $num_threads;

# do something with the data
while (defined( my $result = $output_q->dequeue )) {
    my ($task, $answer) = @$result;
    say "Task $task produced $answer";
}

# join the workers:
$_->join for @workers;
With worker defined as before, we get:
Task 1 produced 1.15207098293783
Task 4 produced 4.31247785766295
Task 5 produced 5.96967474718984
Task 6 produced 6.2695013168678
Task 2 produced 2.02545636412421
Task 3 produced 3.22281619053999
(The three threads would get joined after all output is printed, so that output would be boring).
This second solution gets a bit simpler when we detach the threads: the main thread won't exit before all workers have finished, because it keeps reading from the output queue, which is only ended by the last worker to exit.
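That detached variant isn't spelled out above; here is a sketch of the parts that change (everything else, including the negative-start semaphore trick, stays as in the previous listing):
# Spawn and immediately detach the workers; no @workers array, no final join.
for (1 .. $num_threads) {
    threads->create(sub {
        while (defined( my $task = $input_q->dequeue )) {
            $output_q->enqueue([ $task, worker($task) ]);
        }
        $all_finished_s->up;
        $output_q->enqueue(undef) if $all_finished_s->down_nb;   # last worker ends the queue
    })->detach;
}

# The main thread still blocks on $output_q->dequeue until the last detached
# worker enqueues undef, so it cannot exit before the results are in.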

Perl: Correctly passing array for threads to work on

I'm learning how to do threading in Perl. I was going over the example code here and adapted the solution code slightly:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Semaphore;

my $sem = Thread::Semaphore->new(2); # max 2 threads
my @names = ("Kaku", "Tyson", "Dawkins", "Hawking", "Goswami", "Nye");

my @threads = map {
    # request a thread slot, waiting if none are available:
    foreach my $whiz (@names) {
        $sem->down;
        threads->create(\&mySubName, $whiz);
    }
} @names;

sub mySubName {
    return "Hello Dr. " . $_[0] . "\n";
    # release slot:
    $sem->up;
}

foreach my $t (@threads) {
    my $hello = $t->join();
    print "$hello";
}
Of course, this is now completely broken and does not work. It results in this error:
C:\scripts\perl\sandbox>threaded.pl
Can't call method "join" without a package or object reference at C:\scripts\perl\sandbox\threaded.pl line 24.
Perl exited with active threads:
0 running and unjoined
9 finished and unjoined
0 running and detached
My objective was two-fold:
Enforce max number of threads allowed at any given time
Provide the array of 'work' for the threads to consume
In the original solution, I noticed that the 0..100; code seems to specify the amount of 'work' given to the threads. However, in my case where I want to supply an array of work, do I still need to supply something similar?
Any guidance and corrections very welcome.
You're storing the result of foreach into @threads rather than the result of threads->create.
Even if you fix this, you collect completed threads too late. I'm not sure how big of a problem that is, but it might prevent more than 64 threads from being started on some systems. (64 is the max number of threads a program can have at a time on some systems.)
A better approach is to reuse your threads. This solves both of your problems and avoids the overhead of repeatedly creating threads.
use threads;
use Thread::Queue 3.01 qw( );

use constant NUM_WORKERS => 2;

sub work {
    my ($job) = @_;
    ...
}

{
    my $q = Thread::Queue->new();

    for (1..NUM_WORKERS) {
        async {
            while (my $job = $q->dequeue()) {
                work($job);
            }
        };
    }

    $q->enqueue(@names); # Can be done over time.
    $q->end();           # When you're done adding.
    $_->join() for threads->list();
}
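If you also want the workers' results back in the main thread (the original goal was printing the greetings), one hedged extension is a second queue for output; this sketch uses the same modules as above:
use threads;
use Thread::Queue 3.01 qw( );

use constant NUM_WORKERS => 2;

my @names = ("Kaku", "Tyson", "Dawkins", "Hawking", "Goswami", "Nye");

my $job_q    = Thread::Queue->new();
my $result_q = Thread::Queue->new();

for (1 .. NUM_WORKERS) {
    async {
        while (defined(my $name = $job_q->dequeue())) {
            # the "work" here is just building the greeting
            $result_q->enqueue("Hello Dr. $name\n");
        }
    };
}

$job_q->enqueue(@names);
$job_q->end();
$_->join() for threads->list();   # all workers are done, so every result is queued

$result_q->end();
while (defined(my $greeting = $result_q->dequeue())) {
    print $greeting;
}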

Strange variable behaviour using Perl ithreads

I'm trying to implement a multithreaded application based on a slightly altered boss/worker model. Basically the main thread creates several boss threads, which in turn spawn two worker threads each (possibly more). That's because the boss threads deal with one host or network device each, and the worker threads could take a while to complete their work.
I'm using Thread::Pool to realize this concept, and so far it works quite well; I also don't think my problem is related to Thread::Pool (see below). Very simplified pseudocode ahead:
use strict;
use warnings;

my $bosspool = create_bosspool(); # spawns all boss threads
my $taskpool = undef;             # created in each boss thread at
                                  # creation of each boss thread

# give device jobs to boss threads
while (1) {
    foreach my $device ( @devices ) {
        $bosspool->job($device);
    }
    sleep(1);
}

# This sub is called for jobs passed to the $bosspool
sub process_boss
{
    my $device = shift;
    foreach my $task ( $device->{tasks} ) {
        # process results as they become available
        process_result() while ( $taskpool->results );
        # give task jobs to task threads
        scalar $taskpool->job($device, $task);
        sleep(1); ### HACK ###
    }
    # process remaining results / wait for all tasks to finish
    process_result() while ( $taskpool->results || $taskpool->todo );
    # happy result processing
}

sub process_result
{
    my $result = $taskpool->result_any();
    # mangle $result
}

# This sub is called for jobs passed to the $taskpool of each boss thread
sub process_task
{
    # not so important stuff
    return $result;
}
By the way, the reason I'm not using the monitor()-routine is because I have to wait for all jobs in the $taskpool to finish. Now, this code works just wonderful, unless you remove the ### HACK ### line. Without sleeping, $taskpool->todo() won't deliver the right number of jobs still open if you add them or receive their results too "fast". Like, you add 4 jobs in total but $taskpool->todo() will only return 2 afterwards (with no pending results). This leads to all sorts of interesting effects.
OK, so Thread::Pool->todo() is crap, let's try a workaround:
sub process_boss
{
    my $device = shift;
    my $todo = 0;
    foreach my $task ( $device->{tasks} ) {
        # process results as they become available
        while ( $taskpool->results ) {
            process_result();
            $todo--;
        }
        # give task jobs to task threads
        scalar $taskpool->job($device, $task);
        $todo++;
    }
    # process remaining results / wait for all tasks to finish
    while ( $todo ) {
        process_result();
        sleep(1); ### HACK ###
        $todo--;
    }
}
This will also work fine, as long as I keep the ### HACK ### line. Without this line, this code will reproduce the problems of Thread::Pool->todo(), as $todo gets decremented not just by 1, but by 2 or even more.
I've tested this code with only one boss thread, so there was basically no multithreading involved (when it comes to this subroutine). $bosspool, $taskpool and especially $todo aren't :shared, no side effects possible, right? What's happening in this subroutine, which gets executed by only one boss thread, with no shared variables, semaphores, etc.?
I would suggest that the best way to implement a 'worker' threads model is with Thread::Queue. The problem with doing something like this is figuring out when queues are complete, or whether items are dequeued and pending processing.
With Thread::Queue you can use a while loop to fetch elements from the queue, and end the queue, such that the while loop returns undef and the threads exit.
So you don't always need multiple 'boss' threads, you can just use multiple different flavours of worker and input queues. I would question why you need a 'boss' thread model in that instance. It seems unnecessary.
With reference to:
Perl daemonize with child daemons
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $nthreads = 4;
my @targets = qw ( device1 device2 device3 device4 );

my $task_one_q = Thread::Queue->new();
my $task_two_q = Thread::Queue->new();
my $results_q  = Thread::Queue->new();

sub task_one_worker {
    while ( my $item = $task_one_q->dequeue ) {
        #do something with $item
        $results_q->enqueue("$item task_one complete");
    }
}

sub task_two_worker {
    while ( my $item = $task_two_q->dequeue ) {
        #do something with $item
        $results_q->enqueue("$item task_two complete");
    }
}

#start threads;
for ( 1 .. $nthreads ) {
    threads->create( \&task_one_worker );
    threads->create( \&task_two_worker );
}

foreach my $target (@targets) {
    $task_one_q->enqueue($target);
    $task_two_q->enqueue($target);
}
$task_one_q->end;
$task_two_q->end;

#Wait for threads to exit.
foreach my $thr ( threads->list() ) {
    $thr->join();
}

$results_q->end();
while ( my $item = $results_q->dequeue() ) {
    print $item, "\n";
}
You could do something similar with a boss thread if you really wanted to - you can create a queue per boss and pass it by reference to the workers (a sketch follows below). I'm not sure that it's necessary though.
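A sketch of that per-boss variant, with illustrative sub names (Thread::Queue objects are shared, so handing one to threads->create works):
use threads;
use Thread::Queue;

sub boss {
    my (@my_devices) = @_;
    my $work_q = Thread::Queue->new();   # this boss's private work queue

    # Hand the queue to this boss's workers at creation time.
    my @workers = map { threads->create(\&worker, $work_q) } 1 .. 2;

    $work_q->enqueue($_) for @my_devices;
    $work_q->end();
    $_->join() for @workers;
}

sub worker {
    my ($q) = @_;
    while (defined(my $device = $q->dequeue())) {
        # do the per-device task work here
    }
}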
