What exactly does Perl do with a thread that has completed its task? Does it let it idle, or does it kill it? I have a basic code structure below and I was wondering how best to optimize it.
use threads;
use Thread::Semaphore;

my $s = Thread::Semaphore->new($maxThreads);
my @threads;

foreach my $tasktodo (@tasktodo) {
    $s->down();
    my $thread = threads->new(\&doThis);
    push @threads, $thread;
}

foreach my $thr (@threads) {
    $thr->join();
}

sub doThis {
    # blah blah
    # completed, gonna let more threads run with $s->up()
    $s->up();
}
In this case, once a thread completes, I want to free up resources so more threads can run. I'm worried about joining the threads at the end of the loop. If over the program's whole life cycle 4 threads are created, will @threads still hold all 4 of them when it comes time to join?
Let's say $maxThreads is 2: will it run 2 threads, then once those 2 complete, kill them and run 2 more? And at the end, will it only join or wait for the 2 threads still running?
EDIT: I also don't care about the return values of these threads, which is why I want to free up resources. The only reason I'm joining is that I want all threads to complete before continuing with the script. Again, is this the best implementation?
The usual method for terminating a thread is to return EXPR from the entry point function with the appropriate return value(s).
The join method waits for this return value and cleans up the thread. So, in my opinion, your code is fine.
Another way to exit a thread is this:
threads->exit(status);
Also, you can get a list of joinable threads with:
threads->list(threads::joinable);
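If you want to reclaim each thread's resources as soon as it finishes, rather than only at the very end, one option (a rough sketch built on your own variables, not the only way to do it) is to reap joinable threads inside the submission loop:

use threads;
use Thread::Semaphore;

my $s = Thread::Semaphore->new($maxThreads);

foreach my $tasktodo (@tasktodo) {
    $s->down();
    threads->new(\&doThis);

    # clean up whatever has already completed before queuing more work
    $_->join() for threads->list(threads::joinable);
}

# wait for the stragglers
$_->join() for threads->list();

Alternatively, since you don't care about the return values, you could ->detach() each thread and finish with $s->down($maxThreads), which blocks until every worker has called $s->up().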
I have the following code:
foreach my $inst (sort keys %{ ... }) {
    next if (...);
    somefuntion($a, $b, $c, $inst);
}
I would like to run this function on all the $inst-s asynchronously.
I tried to make it multi-threaded, but I'm having trouble with the syntax or implementation.
*** EDIT: ***
Apparently (I hadn't noticed until now), the function updates a hash, and those updates get lost.
Should threads::shared help here? Is it relevant in this case, or should I just try forks?
Perl's got three major ways I'd suggest for doing parallel code:
Threads
Forks
Nonblocking IO
The latter isn't strictly speaking 'parallel' in all circumstances, but it does let you do multiple things at the same time, without waiting for each to finish, so it's beneficial in certain circumstances.
E.g. maybe you want to open 10 concurrent ssh sessions - you can just do an IO::Select to find which of them are 'ready' and process them as they come in.
The ssh shells themselves are of course, separate processes.
But when doing things in parallel, you need to be aware of a couple of pitfalls. One is 'self denial of service': you can generate huge resource consumption very easily. The other is that you now have some inherent race conditions and no longer a deterministic flow of program, which brings you a whole new class of exciting bugs.
Threads
I wouldn't advocate spawning a thread per instance, as that scales badly. Threads in perl are NOT lightweight, like you might be assuming - so implementing them as if they are gives you a denial-of-service condition.
What I'd typically suggest is running with Thread::Queue and some "worker" threads - using the queue to pass data to however many workers suit your resource availability, scaled to whatever your limiting factor is that's making you go parallel in the first place (e.g. disk, network, CPU, etc.).
So to use a simplistic example that I've posted previously:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $nthreads = 5;

my $process_q = Thread::Queue->new();
my $failed_q  = Thread::Queue->new();

#this is a subroutine, but that runs 'as a thread'.
#when it starts, it inherits the program state 'as is'. E.g.
#the variable declarations above all apply - but changes to
#values within the program are 'thread local' unless the
#variable is defined as 'shared'.
#Behind the scenes - Thread::Queue are 'shared' arrays.
sub worker {
    #NB - this will sit in a loop indefinitely, until you close the queue
    #using $process_q->end
    #we do this once we've queued all the things we want to process
    #and the sub completes and exits neatly.
    #however if you _don't_ end it, this will sit waiting forever.
    while ( my $server = $process_q->dequeue() ) {
        chomp($server);
        print threads->self()->tid() . ": pinging $server\n";
        my $result = `/bin/ping -c 1 $server`;
        if ($?) { $failed_q->enqueue($server) }
        print $result;
    }
}

#insert tasks into thread queue.
open( my $input_fh, "<", "server_list" ) or die $!;
$process_q->enqueue(<$input_fh>);
close($input_fh);

#we 'end' process_q - when we do, no more items may be inserted,
#and 'dequeue' returns 'undef' when the queue is emptied.
#this means our worker threads (in their 'while' loop) will then exit.
$process_q->end();

#start some threads
for ( 1 .. $nthreads ) {
    threads->create( \&worker );
}

#Wait for threads to all finish processing.
foreach my $thr ( threads->list() ) {
    $thr->join();
}

#collate results. ('synchronise' operation)
while ( my $server = $failed_q->dequeue_nb() ) {
    print "$server failed to ping\n";
}
This will start 5 threads, and queue up some number of jobs, such that 5 are running in parallel at any given time, and 'unwind' gracefully after.
Forking
Parallel::ForkManager is the tool for the job here.
Unlike threads, forks are quite efficient on a Unix system, as the native fork() system call is well optimised.
But what they're not so good at is passing data around - you've got to hand-roll any IPC between your forks in a way that you don't so much with threads.
A simple example of this would be:
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my $concurrent_fork_limit = 4;

my $fork_manager = Parallel::ForkManager->new($concurrent_fork_limit);

foreach my $thing ( "fork", "spoon", "knife", "plate" ) {
    my $pid = $fork_manager->start;
    if ($pid) {
        print "$$: Fork made a child with pid $pid\n";
    } else {
        print "$$: child process started, with a key of $thing ($pid)\n";
    }
    $fork_manager->finish;
}
$fork_manager->wait_all_children();
This does spawn off subprocesses, but cleans up after them fairly readily.
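If you do later need results back from the children, recent versions of Parallel::ForkManager can also ferry a data structure from each child to the parent (it serialises it via temporary files behind the scenes), so you don't have to hand-roll that particular bit of IPC. A rough sketch, with made-up result data:

use strict;
use warnings;
use Parallel::ForkManager;

my $fork_manager = Parallel::ForkManager->new(4);

# runs in the parent each time a child finishes
$fork_manager->run_on_finish(
    sub {
        my ( $pid, $exit_code, $ident, $exit_signal, $core_dump, $data ) = @_;
        print "child $pid returned: $data->{result}\n" if $data;
    }
);

foreach my $thing ( "fork", "spoon", "knife", "plate" ) {
    $fork_manager->start and next;    # parent keeps looping
    # child: do some work, then hand a hashref back to the parent
    $fork_manager->finish( 0, { result => uc $thing } );
}
$fork_manager->wait_all_children();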
Nonblocking IO
Using IO::Select you would open some number of filehandles to subprocesses, and then use the can_read method to process the ones that have output ready.
The perldoc IO::Select covers most of the detail here, which I'll reproduce for convenience:
use IO::Select;

$select = IO::Select->new();

$select->add(\*STDIN);
$select->add($some_handle);

@ready = $select->can_read($timeout);

@ready = IO::Select->new(@handles)->can_read(0);
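Tying that back to the 'several subprocesses at once' idea: a rough sketch (the host list is just a placeholder) that opens one read pipe per command and handles whichever output turns up first:

use strict;
use warnings;
use IO::Select;

my @hosts  = qw( hosta hostb hostc );    # placeholder host list
my $select = IO::Select->new();

# open a read pipe per subprocess and watch them all
for my $host (@hosts) {
    open( my $fh, '-|', 'ping', '-c', '1', $host ) or die $!;
    $select->add($fh);
}

while ( $select->count() ) {
    # block until at least one pipe has output ready
    for my $fh ( $select->can_read() ) {
        my $line = <$fh>;
        if ( defined $line ) {
            print $line;
        }
        else {
            # EOF - this subprocess has finished
            $select->remove($fh);
            close($fh);
        }
    }
}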
You could use threads.
Here's an example that should take about 5 seconds to finish although it calls sleep(5) twice:
#!/usr/bin/perl
use strict;
use warnings;
use threads;

my %data = (
    'foo' => 'bar',
    'apa' => 'bepa',
);

sub somefuntion {
    my $key = shift;
    print "$key\n";
    sleep(5);
    return $data{$key};
}

my @threads;
for my $inst (sort keys %data) {
    push @threads, threads->create('somefuntion', $inst);
}

print "running...\n";

for my $thr (@threads) {
    print $thr->join() . "\n";
}
print "done\n";
This answer was made to show how threads works in Perl because you mentioned threads. Just a word of caution:
The "interpreter-based threads" provided by Perl are not the fast, lightweight system for multitasking that one might expect or hope for. Threads are implemented in a way that makes them easy to misuse. Few people know how to use them correctly or will be able to provide help.
The use of interpreter-based threads in perl is officially discouraged.
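Regarding the edit about hash updates getting lost: each thread runs on its own copy of the program's data, so writes to an ordinary hash in a thread never reach the parent or the other threads. If you do want the threads to update a common hash, threads::shared is indeed the relevant module. A minimal sketch (hash name and values are placeholders; note that a shared hash can only hold plain scalars or other shared references):

use strict;
use warnings;
use threads;
use threads::shared;

my %results :shared;    # visible to all threads

sub somefuntion {
    my ($inst) = @_;
    my $value = "processed $inst";    # placeholder work
    {
        lock(%results);               # serialise writes to the shared hash
        $results{$inst} = $value;
    }
}

my @threads = map { threads->create( \&somefuntion, $_ ) } qw( a b c );
$_->join() for @threads;
print "$_ => $results{$_}\n" for sort keys %results;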
I have a perl program that takes over 13 hours to run. I think it could benefit from introducing multithreading but I have never done this before and I'm at a loss as to how to begin.
Here is my situation:
I have a directory of hundreds of text files. I loop through every file in the directory using a basic for loop and do some processing (text processing on the file itself, calling an outside program on the file, and compressing it). When complete I move on to the next file. I continue this way doing each file, one after the other, in a serial fashion. The files are completely independent from each other and the process returns no values (other than success/failure codes) so this seems like a good candidate for multithreading.
My questions:
How do I rewrite my basic loop to take advantage of threads? There appear to be several modules for threading out there.
How do I control how many threads are currently running? If I have N cores available, how do I limit the number of threads to N or N - n?
Do I need to manage the thread count manually or will Perl do that for me?
Any advice would be much appreciated.
Since your threads are simply going to launch a process and wait for it to end, best to bypass the middlemen and just use processes. Unless you're on a Windows system, I'd recommend Parallel::ForkManager for your scenario.
use Parallel::ForkManager qw( );

use constant MAX_PROCESSES => ...;

my $pm = Parallel::ForkManager->new(MAX_PROCESSES);

my @qfns = ...;

for my $qfn (@qfns) {
    my $pid = $pm->start and next;
    exec("extprog", $qfn)
        or die $!;
}

$pm->wait_all_children();
If you wanted to avoid using needless intermediary threads on Windows, you'd have to use something akin to the following:
use constant MAX_PROCESSES => ...;

my @qfns = ...;

my %children;
for my $qfn (@qfns) {
    while (keys(%children) >= MAX_PROCESSES) {
        my $pid = wait();
        delete $children{$pid};
    }

    my $pid = system(1, "extprog", $qfn);
    ++$children{$pid};
}

while (keys(%children)) {
    my $pid = wait();
    delete $children{$pid};
}
Someone's given you a forking example. Forks aren't native on Windows, so I'd tend to prefer threading.
For the sake of completeness - here's a rough idea of how threading works (and IMO is one of the better approaches, rather than respawning threads).
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $nthreads = 5;

my $process_q = Thread::Queue->new();
my $failed_q  = Thread::Queue->new();

#this is a subroutine, but that runs 'as a thread'.
#when it starts, it inherits the program state 'as is'. E.g.
#the variable declarations above all apply - but changes to
#values within the program are 'thread local' unless the
#variable is defined as 'shared'.
#Behind the scenes - Thread::Queue are 'shared' arrays.
sub worker {
    #NB - this will sit in a loop indefinitely, until you close the queue
    #using $process_q->end
    #we do this once we've queued all the things we want to process
    #and the sub completes and exits neatly.
    #however if you _don't_ end it, this will sit waiting forever.
    while ( my $server = $process_q->dequeue() ) {
        chomp($server);
        print threads->self()->tid() . ": pinging $server\n";
        my $result = `/bin/ping -c 1 $server`;
        if ($?) { $failed_q->enqueue($server) }
        print $result;
    }
}

#insert tasks into thread queue.
open( my $input_fh, "<", "server_list" ) or die $!;
$process_q->enqueue(<$input_fh>);
close($input_fh);

#we 'end' process_q - when we do, no more items may be inserted,
#and 'dequeue' returns 'undef' when the queue is emptied.
#this means our worker threads (in their 'while' loop) will then exit.
$process_q->end();

#start some threads
for ( 1 .. $nthreads ) {
    threads->create( \&worker );
}

#Wait for threads to all finish processing.
foreach my $thr ( threads->list() ) {
    $thr->join();
}

#collate results. ('synchronise' operation)
while ( my $server = $failed_q->dequeue_nb() ) {
    print "$server failed to ping\n";
}
If you need to move complicated data structures around, I'd recommend having a look at Storable - specifically freeze and thaw. These will let you shuffle around objects, hashes, arrays etc. easily in queues.
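A rough sketch of what that looks like in practice (the job hashref here is purely illustrative): freeze the structure on the way into the queue and thaw it on the way out.

use strict;
use warnings;
use threads;
use Thread::Queue;
use Storable qw( freeze thaw );

my $q = Thread::Queue->new();

my $worker = threads->create(
    sub {
        while ( defined( my $frozen = $q->dequeue() ) ) {
            my $job = thaw($frozen);    # back to a real hashref
            print "doing '$job->{action}' on $job->{file}\n";
        }
    }
);

# freeze each job hashref into a plain scalar the queue can carry
$q->enqueue( freeze( { file => $_, action => 'compress' } ) ) for ( 'a.txt', 'b.txt' );
$q->end();
$worker->join();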
Note though - for any parallel processing option, you get good CPU utilisation, but you don't get more disk IO - that's often a limiting factor.
I have a module that is running multiple threads and pushing them onto a list of threads.
ex:
#!/usr/bin/perl
#test_module.pm
package test_module;

use strict;
use warnings;
use threads;

sub main {
    my $max_threads = 10;
    my @threads = ();

    # create threads
    while (scalar @threads < $max_threads) {
        my $thread = threads->new(\&thread_sub);
        push @threads, $thread;
    }

    # join threads
    for my $thread (@threads) {
        $thread->join();
    }
}

sub thread_sub {
    my $id = threads->tid();
    print "I am in thread $id\n";
}

1;
The problem is that I am calling this module multiple times from one Perl script, and instead of eliminating the old threads and creating new ones, the thread IDs just keep incrementing. I have heard that if you don't properly get rid of old threads in Perl, this will cause a memory leak and slow your program down. Is this true? Is the data from my old threads just sitting in memory taking up space?
If so, this can become a large problem, since my script will be part of a much larger program that may generate hundreds or thousands of threads, all of which would just be taking up memory even after they are done being used. How can I stop this from happening? Can my threads be reused?
Here is an example script that calls the module and shows how the thread IDs continue to increment even though I joined the old threads (I thought "join" was how you cleaned up after them; am I doing something wrong?). The way this script will be used, I can't afford to have memory from old threads sitting there taking up space.
ex:
#!/usr/bin/perl
#testing.pl
use strict;
use warnings;
use test_module;
test_module::main();
test_module::main();
test_module::main();
system 'pause';
Thanks!
Don't worry about thread IDs incrementing - that doesn't mean the number of running threads is increasing. Once a thread is joined it has finished executing and been terminated.
However, continuously respawning threads isn't ideal either - creating a thread isn't a particularly lightweight operation in perl. So if you've got to do something like that, and are particularly focussing on efficiency - look to fork() instead.
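For reference, the bare-bones version of the fork() approach looks something like this (a sketch only; Parallel::ForkManager, mentioned below, wraps the same idea up with a concurrency cap and tidier bookkeeping):

use strict;
use warnings;

my @items = ( 'a', 'b', 'c' );    # placeholder work items

my @pids;
for my $item (@items) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        # child: do the work, then exit so it doesn't fall back into the loop
        print "$$ handling $item\n";
        exit 0;
    }
    push @pids, $pid;    # parent: remember the child
}

# parent: reap every child
waitpid( $_, 0 ) for @pids;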
I find I tend to use a 'worker thread' model, using Thread::Queue:
my $processing_q = Thread::Queue->new();

sub worker_thread {
    while ( my $item = $processing_q->dequeue() ) {
        # do stuff to $item
    }
}

for ( 1 .. $num_threads ) {
    my $thr = threads->create( \&worker_thread );
}

$processing_q->enqueue( @generic_list_of_things );
$processing_q->end;

foreach my $thread ( threads->list() ) {
    $thread->join();
}
This will feed a batch of items into a queue, and your worker threads will process them one at a time - meaning you can have a sensible number running without having to continuously respawn threads.
As an alternative though - take a look at Parallel::ForkManager - fork style parallel processing may seem counterintuitive initially, but fork() is a native system call on Unix systems, so it tends to be better optimised.
So this is what I have right now:
for my $action (@actionList) {
    $q->enqueue([$_, $action]) for @component_dirs;

    print "\nWaiting for prior actions to finish up...\n";
    until (!defined($q->peek())) {}
}
$q->end();
$_->join() for threads->list();
But this doesn't seem to work.. is there a better way to force the queue to wait for previous $action items to complete before allowing access again?
edit: Oddly enough, it's magically started working... maybe it was working all along and I just didn't make the output apparent enough. Either way, my question still stands - is there a better way?
Your code doesn't wait until the previous action has completed, it just wastes CPU until another thread starts working on the last job.
For things like “flags”, you should generally use semaphores instead. Semaphores are thread-safe counters with up and down methods. For example, we could pass a semaphore along with the job, which starts with count zero. Each thread increments the semaphore when it finishes a job. Our main thread tries to decrement the semaphore by the count of jobs, which will block until all threads have finished:
my $q = Thread::Queue->new;

my @workers = map { threads->create(\&worker, $q) } 1 .. $NUM_WORKERS;

for my $action (@actionList) {
    my $sem = Thread::Semaphore->new(0);
    $q->enqueue([$_, $action, $sem]) for @component_dirs;
    $sem->down(0+@component_dirs);  # wait for the threads
}

$q->end;
$_->join for @workers;

sub worker {
    my ($q) = @_;
    while (my $job = $q->dequeue) {
        my ($component, $action, $sem) = @$job;
        ...
        $sem->up;
    }
}
Actually, we could reuse the semaphore.
See the Thread::Semaphore docs for more details.
This usage is similar to barriers.
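The reuse mentioned above just means creating the semaphore once, outside the loop: each iteration's down() consumes exactly the up() calls made for that iteration's jobs, so the count returns to zero between batches. Sketched against the same variables as above:

my $q   = Thread::Queue->new;
my $sem = Thread::Semaphore->new(0);    # created once, reused for every batch
my @workers = map { threads->create( \&worker, $q ) } 1 .. $NUM_WORKERS;

for my $action (@actionList) {
    $q->enqueue( [ $_, $action, $sem ] ) for @component_dirs;
    $sem->down( 0 + @component_dirs );    # blocks until this batch is done
}
$q->end;
$_->join for @workers;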
I am quite new to Perl, especially Perl Threads.
I want to accomplish the following:
Have 5 threads that will enqueue data (random numbers) into a Thread::Queue.
Have 3 threads that will dequeue data from the Thread::Queue.
The complete code that I wrote in order to achieve the above:
#!/usr/bin/perl -w
use strict;
use threads;
use Thread::Queue;

my $queue = new Thread::Queue();
our @Enquing_threads;
our @Dequeuing_threads;

sub buildQueue
{
    my $TotalEntry = 1000;
    while ($TotalEntry-- > 0)
    {
        my $query = rand(10000);
        $queue->enqueue($query);
        print "Enqueue thread with TID " . threads->tid . " got $query,";
        print "Queue Size: " . $queue->pending . "\n";
    }
}

sub process_Queue
{
    my $query;
    while ($query = $queue->dequeue)
    {
        print "Dequeue thread with TID " . threads->tid . " got $query\n";
    }
}

push @Enquing_threads, threads->create(\&buildQueue) for 1..5;
push @Dequeuing_threads, threads->create(\&process_Queue) for 1..3;
Issues that I am Facing:
The threads are not running as concurrently as expected.
The entire program exits abnormally with the following console output:
Perl exited with active threads:
8 running and unjoined
0 finished and unjoined
0 running and detached
Enque thread with TID 5 got 6646.13585023883,Queue Size: 595
Enque thread with TID 1 got 3573.84104215917,Queue Size: 595
Any help on code-optimization is appreciated.
This behaviour is to be expected: When the main thread exits, all other threads exit as well. If you don't care, you can $thread->detach them. Otherwise, you have to manually $thread->join them, which we'll do.
The $thread->join waits for the thread to complete, and fetches the return value (threads can return values just like subroutines, although the context (list/void/scalar) has to be fixed at spawn time).
We will detach the threads that enqueue data:
threads->create(\&buildQueue)->detach for 1..5;
Now for the dequeueing threads, we put them into a lexical variable (why are you using globals?), so that we can join them later:
my @dequeue_threads = map threads->create(\&process_queue), 1 .. 3;
Then wait for them to complete:
$_->join for @dequeue_threads;
We know that the detached threads will finish execution before the program exits, because the only way for the dequeueing threads to exit is to exhaust the queue.
Except for one and a half bugs. You see, there is a difference between an empty queue and a finished queue. If the queue is just empty, the dequeueing threads will block on $queue->dequeue until they get some input. The traditional solution is to dequeue while the value they get is defined. We can break the loop by supplying as many undef values in the queue as there are threads reading from the queue. More modern versions of Thread::Queue have an end method, which makes dequeue return undef for all subsequent calls.
The problem is when to end the queue. We should do this after all enqueueing threads have exited. Which means we should wait for them manually. Sigh.
my @enqueueing = map threads->create(\&enqueue), 1..5;
my @dequeueing = map threads->create(\&dequeue), 1..3;
$_->join for @enqueueing;
$queue->enqueue(undef) for 1..3;
$_->join for @dequeueing;
And in sub dequeuing: while(defined( my $item = $queue->dequeue )) { ... }.
Using the defined test fixes another bug: rand can return zero, although this is quite unlikely and will slip through most tests. The contract of rand is that it returns a pseudo-random floating point number between including zero and excluding some upper bound: A number from the interval [0, x). The bound defaults to 1.
If you don't want to join the enqueueing threads manually, you could use a semaphore to signal completion. A semaphore is a multithreading primitive that can be incremented and decremented, but not below zero. If a decrement operation would let the count drop below zero, the call blocks until another thread raises the count. If the start count is 1, this can be used as a flag to block resources.
We can also start with a negative value, 1 - $NUM_THREADS, and have each thread increment the value, so that only once all threads have exited can it be decremented again.
use threads;    # make a habit of importing `threads` as the first thing
use strict; use warnings;
use feature 'say';
use Thread::Queue;
use Thread::Semaphore;

use constant {
    NUM_ENQUEUE_THREADS => 5,    # it's good to fix the thread counts early
    NUM_DEQUEUE_THREADS => 3,
};

sub enqueue {
    my ($out_queue, $finished_semaphore) = @_;
    my $tid = threads->tid;
    # iterate over ranges instead of using the while($maxval --> 0) idiom
    for (1 .. 1000) {
        $out_queue->enqueue(my $val = rand 10_000);
        say "Thread $tid enqueued $val";
    }
    $finished_semaphore->up;
    # try a non-blocking decrement. Returns true only for the last thread exiting.
    if ($finished_semaphore->down_nb) {
        $out_queue->end;    # for sufficiently modern versions of Thread::Queue
        # $out_queue->enqueue(undef) for 1 .. NUM_DEQUEUE_THREADS;
    }
}

sub dequeue {
    my ($in_queue) = @_;
    my $tid = threads->tid;
    while (defined( my $item = $in_queue->dequeue )) {
        say "thread $tid dequeued $item";
    }
}

# create the queue and the semaphore
my $queue = Thread::Queue->new;
my $enqueuers_ended_semaphore = Thread::Semaphore->new(1 - NUM_ENQUEUE_THREADS);

# kick off the enqueueing threads -- they handle themselves
threads->create(\&enqueue, $queue, $enqueuers_ended_semaphore)->detach for 1 .. NUM_ENQUEUE_THREADS;

# start and join the dequeuing threads
my @dequeuers = map threads->create(\&dequeue, $queue), 1 .. NUM_DEQUEUE_THREADS;
$_->join for @dequeuers;
Don't be surprised if the threads do not seem to run in parallel, but sequentially: this task (enqueuing a random number) is very fast, and is not well suited to multithreading (enqueueing is more expensive than creating a random number).
Here is a sample run where each enqueuer only creates two values:
Thread 1 enqueued 6.39390993005694
Thread 1 enqueued 0.337993319585337
Thread 2 enqueued 4.34504733960242
Thread 2 enqueued 2.89158054485114
Thread 3 enqueued 9.4947585773571
Thread 3 enqueued 3.17079715055542
Thread 4 enqueued 8.86408863197179
Thread 5 enqueued 5.13654995317669
Thread 5 enqueued 4.2210886147538
Thread 4 enqueued 6.94064174636395
thread 6 dequeued 6.39390993005694
thread 6 dequeued 0.337993319585337
thread 6 dequeued 4.34504733960242
thread 6 dequeued 2.89158054485114
thread 6 dequeued 9.4947585773571
thread 6 dequeued 3.17079715055542
thread 6 dequeued 8.86408863197179
thread 6 dequeued 5.13654995317669
thread 6 dequeued 4.2210886147538
thread 6 dequeued 6.94064174636395
You can see that 5 managed to enqueue a few things before 4. The threads 7 and 8 don't get to dequeue anything, 6 is too fast. Also, all enqueuers are finished before the dequeuers are spawned (for such a small number of inputs).