while ($thr->is_running()) {
# do something
}
vs
while (! $thr->is_joinable()) {
# do something
}
Is there any difference between the two?
When would a programmer use one over the other and vice versa?
I am assuming you can't join a thread if it's running, so aren't they basically the same thing?
If so, why does Perl provide two different ways to check the status of a thread?
is_joinable is not the same as !is_running.
is_joinable checks for
(thread->state & PERL_ITHR_FINISHED) &&
!(thread->state & PERL_ITHR_DETACHED) &&
!(thread->state & PERL_ITHR_JOINED)
is_running checks for
!(thread->state & PERL_ITHR_FINISHED)
So
A detached thread that finished is neither running nor joinable.
A thread that's already been joined is neither running nor joinable.
As per the documentation:
$thr->is_running()
Returns true if a thread is still running
(i.e., if its entry point function has not yet finished or exited).
$thr->is_joinable()
Returns true if the thread has finished running,
is not detached and has not yet been joined. In other words,
the thread is ready to be joined, and a call to $thr->join() will not block.
So the difference stems from the way detached threads are treated.
That is, $thread->is_running() returns true while the thread is running, whether or not it is detached,
but "not $thread->is_joinable()" also returns true for a thread that is detached, even after it has stopped running.
Example 1: a detached thread
use strict;
use warnings;
use threads;
sub do_nothing {
print("in thread\n");
sleep(30);
return;
}
my $t = threads->create(\&do_nothing);
$t->detach();
while ($t->is_running()) {
print("is running\n");
sleep(4);
}
if ($t->is_joinable()) {
print("is joinable\n");
}
else {
print("not joinable\n");
}
exit;
Example 2: a non-detached thread
use strict;
use warnings;
use threads;
sub do_nothing {
print("in thread\n");
sleep(30);
return;
}
my $t = threads->create(\&do_nothing);
while ($t->is_running()) {
print("is running\n");
sleep(4);
}
if ($t->is_joinable()) {
print("is joinable\n");
}
else {
print("not joinable\n");
}
exit;
They are not the same.
A thread is "joinable" if it has not been joined or detached, and is no longer running. That is to say, it provides a poll interface to the condition that joining the thread would block on.
Finished running, not yet joined, not detached == joinable
Not yet finished running == running (regardless of whether the thread is detached).
See Perl Threads.
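For example, is_joinable() gives you a non-blocking way to poll for that condition, so you only call join() once it is guaranteed not to block. A minimal sketch, assuming $thr is a non-detached thread:
until ($thr->is_joinable()) {
    # ...do other useful work here...
    sleep(1);
}
my @result = $thr->join();   # will not block: the thread has already finished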
Related
How do I return a value to a process which initiated a daemon from a signal handler inside a daemon?
sub _fork
{
my $pid = fork;
$pid;
}
sub daemonize_monitor_sigio
{
_fork and return;
$SIG{IO} = sub {
print "caught sigio";
$ret = {}
...#do some processing
#wants to return $ret here;
}
while(1)
{
;
}
}
daemonize_monitor_sigio();
The thing about signal handlers is that they're pretty simple. They're basically an interrupt from the kernel that's passed to the process.
The handler can alter state within the process. But because you've fork()ed beforehand, you have a parent process and a child process, and the two don't share any state. So 'signalling' from one to the other is a completely separate piece of IPC. At the simplest level, you can send another kill signal to the parent process, whose PID the child can obtain via getppid.
For more complicated IPC, though, you're looking at... well, reading the perlipc doc and figuring out what's most appropriate.
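At that simplest level, a minimal sketch might look like this (SIGUSR1 and the flag variable are illustrative choices, not anything from the question): the child signals the parent with kill() and getppid(), and the parent's handler only records that the event happened; passing back anything richer than that needs one of the perlipc mechanisms.
use strict;
use warnings;
my $got_signal = 0;
$SIG{USR1} = sub { $got_signal = 1 };   # parent-side handler: just set a flag
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # child: pretend the SIGIO work is done, then poke the parent;
    # getppid() gives the child its parent's PID to signal
    kill 'USR1' => getppid();
    exit 0;
}
sleep 1 until $got_signal;              # parent: wait for the notification
waitpid($pid, 0);
print "parent $$ was notified by child $pid\n";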
So my program spawns a number of child processes in response to certain events, and I'm doing something like this to keep track of them and kill them upon program exit (Perl syntax):
my %children = ();
# this will be called upon exit
sub kill_children {
kill 'INT' => keys %children;
exit;
}
# main code
while(1) {
...
my $child = fork();
if ($child > 0) {
$children{$child} = 1;
} elsif ($child == 0) {
# do child work ...
exit();
} else {
# handle the error
}
}
So the idea is as above. However, there's a blatant race condition there, in that a given child can start and terminate before the parent has a chance to run and record its pid in the %children hash. So the parent may end up thinking that a given pid belongs to an active child, even though that child has terminated.
Is there a way to do what I'm trying to accomplish in a safe way?
Edit: To better keep track of children, the code can be extended as follows (which, however, suffers from the exact same race condition, which is why I didn't write it out fully in the first place):
my %children = ();
sub reap {
my $child;
while (($child = waitpid(-1, WNOHANG)) > 0) {
#print "collecting dead child $child\n";
delete $children{$child};
}
}
$SIG{CHLD} = \&reap;
# this will be called upon exit
sub kill_children {
local $SIG{CHLD} = 'IGNORE';
kill 'INT' => keys %children;
exit;
}
# main code
while(1) {
...
my $child = fork();
if ($child > 0) {
$children{$child} = 1;
} elsif ($child == 0) {
# do child work ...
exit();
} else {
# handle the error
}
}
Even in this case, the contents of %children may not reflect the actual active children.
Edit 2: I found this question, which is exactly about the same problem. I like the solution suggested in there.
On UNIX it's not a race condition. This is the standard way to handle fork(). When the child process exits, its status is changed to "terminated"; it becomes a zombie. It still has an entry in the process table until the parent process calls one of the wait functions. Only after that is the dead process really removed.
Even if the parent sets itself up to ignore SIGCHLD, it still wouldn't qualify as a race condition; the parent would just have a PID that's not valid anymore. In that case, wait() would return ECHILD. But setting SIGCHLD would free up a child's PID, possibly leading to the parent trying to kill a process that is not a child.
On Windows, which doesn't have a fork call, fork is emulated by creating a thread inside the perl process. See perlfork. I'm not knowledgeable enough about Windows to say whether that could cause a race condition, but I suspect not.
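A minimal sketch of the point about zombies, assuming no SIGCHLD handler reaps the child first: the exited child keeps its process-table slot until it is waited on, so its PID cannot be recycled and signalling it is harmless.
use strict;
use warnings;
my $pid = fork();
die "fork failed: $!" unless defined $pid;
exit 0 if $pid == 0;   # child exits immediately
sleep 2;               # the child is long dead by now, but not yet reaped
# the zombie still occupies its process-table slot, so this cannot hit an
# unrelated process; kill 0 simply reports that the (dead) process exists
print "kill 0 says the PID is still ours: ", kill(0, $pid), "\n";
waitpid($pid, 0);      # reap; only now may the kernel reuse the PID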
My Perl script needs to run multiple threads simultaneously...
use threads ('yield', 'exit' => 'threads_only');
use threads::shared;
use strict;
use warnings;
no warnings 'threads';
use LWP::UserAgent;
use HTTP::Request;
use HTTP::Async;
use ...
...and such threads need to obtain some information from the web, so HTTP::Async is used.
my $request = HTTP::Request->new;
$request->protocol('HTTP/1.1');
$request->method('GET');
$request->header('User-Agent' => '...');
my $async = HTTP::Async->new( slots => 100,
timeout => REQUEST_TIMEOUT,
max_request_time => REQUEST_TIMEOUT );
But some threads need to access the web only when another thread (or threads) says so.
my $start = [Time::HiRes::gettimeofday()];
my @threads = ();
foreach ... {
$thread = threads->create(
sub {
local $SIG{KILL} = sub { threads->exit };
my $url = shift;
if ($url ... ) {
# wait for "go" signal from other threads
}
my ($response, $data);
$request->url($url);
$data = '';
$async->add($request);
while ($response = $async->wait_for_next_response) {
threads->yield();
$data .= $response->as_string;
}
if ($data ... ) {
# send "go" signal to waiting threads
}
}, $_);
if (defined $thread) {
$thread->detach;
push (@threads, $thread);
}
}
There might be one or more threads waiting for the "go" signal, and there might be one or more threads that can send such a "go" signal. At the beginning the status of the semaphore is "wait", and once it turns to "go", it stays that way.
Finally, the app checks the maximum running time. If threads have been running too long, a self-termination signal is sent.
my $running;
do {
$running = 0;
foreach my $thread (@threads) {
$running++ if $thread->is_running();
}
threads->yield();
} until (($running == 0) ||
(Time::HiRes::tv_interval($start) > MAX_RUN_TIME));
$running = 0;
foreach my $thread (@threads) {
if ($thread->is_running()) {
$thread->kill('KILL');
$running++;
}
}
threads->yield();
Now to the point. My questions are:
How can I most effectively code the waiting "semaphore" in the script (see the comments in the script above)? Should I simply use a shared variable with some dummy sleep loop?
Do I need to add some sleep loop at the end of the app to give the threads time to self-destruct?
You might look at Thread::Queue to perform this work. You could set up a queue that would handle the signaling between the threads waiting for the 'go' signal and the threads sending the 'go' signal. Here's a quick mock-up that I haven't tested:
...
use Thread::Queue;
...
# In main body
my $q = Thread::Queue->new();
...
$thread = threads->create(
sub {
local $SIG{KILL} = sub { threads->exit };
my $url = shift;
if ($url ... ) {
# wait for "go" signal from other threads
my $mesg = $q->dequeue();
# you could put in some termination code if the $mesg isn't 'go'
if ($mesg ne 'go') { ... }
}
...
if ($data ... ) {
# send "go" signal to waiting threads
$q->enqueue('go');
}
}, $_);
...
The threads that need to wait for a 'go' signal will block in the dequeue method until something enters the queue. Once a message enters the queue, one and only one thread will grab it and process it.
If you wish to stop the threads so that they won't run, you can insert a stop message at the head of the queue.
$q->insert(0, 'stop') foreach (@threads);
There are examples in Thread::Queue and threads CPAN distributions that show this in more detail.
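One consequence of the one-message-per-thread behaviour: if several threads may be blocked waiting for the "go" signal, the signalling thread has to enqueue one 'go' per waiter. A rough sketch, where $n_waiters is a hypothetical count your code would have to track:
# wake every thread currently (or soon to be) blocked in $q->dequeue()
$q->enqueue('go') for 1 .. $n_waiters;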
In response to your second question, the answer is, unfortunately, it depends. When you proceed to terminate your threads, what kind of clean-up is required for a clean shutdown? What's the worst-case scenario that could occur if the rug were yanked out from beneath a thread? You would want to allow time for that clean-up to occur. The other option is to wait for each thread to actually complete.
The reason for my comment asking if you could remove the detach call is that this method allows the main thread to exit without caring what happens to any child threads. Instead, if you remove this call and add:
$_->join() foreach threads->list();
to the end of your main block, this will require the main application to wait for each thread to actually complete.
If you leave the detach call in place, then you will need to sleep at the end of your code if you require your threads to perform any sort of clean-up. When you call detach on a thread, you are telling Perl that you don't care what the thread is doing when your main thread exits. If the main thread exits while detached threads are still running, the program finishes with no warnings. However, if you don't require any clean-up, you can keep the detach call and exit whenever you like.
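A rough, untested sketch of that alternative, reusing $start and MAX_RUN_TIME from the question: drop the detach() call, keep joining whatever has finished, and only signal the stragglers once the deadline passes.
while (threads->list(threads::running)
       && Time::HiRes::tv_interval($start) < MAX_RUN_TIME) {
    # join threads that have already finished so their resources are freed
    $_->join() for threads->list(threads::joinable);
    sleep(1);
}
# past the deadline: ask the remaining threads to self-terminate, then reap them
$_->kill('KILL') for threads->list(threads::running);
$_->join() for threads->list();
As with the kill loop in the question, the 'KILL' signal is only delivered once a thread returns to the Perl level, so a thread blocked inside an XS call may not react immediately.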
Try out something like this....
#!/usr/bin/perl
use threads;
use threads::shared;
$|=1;
my $global :shared;
my @threads;
push(@threads, threads->new(\&mySub, 1));
push(@threads, threads->new(\&mySub, 2));
push(@threads, threads->new(\&mySub, 3));
my $i = 0;
foreach my $myThread (@threads)
{
my @ReturnData = $myThread->join;
print "Thread $i returned: @ReturnData\n";
$i++;
}
sub mySub
{
my ($threadID) = @_;
for(0..1000)
{
$global++;
print "Thread ID: $threadID >> $_ >> GLB: $global\n";
sleep(1);
}
return $threadID;
}
I have a process that must create and close threads on demand.
Each thread forks a new process using open2. Sometimes, after the program has been running for a long time, open2 fails to fork the process and gives a "Cannot allocate memory" error; sometimes this happens for threads too. I know that Linux has soft and hard limits, but the number of concurrent threads and processes on my server does not exceed those values.
Is there something like a counter for the number of processes and threads that prevents thread and process creation after some time?
If so, how do servers like Postgres work for long periods of time?
The project has multiple processes that communicate using TCP, but the part that causes the error I described is a front end to mplayer, written in Perl. The code is as follows:
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;
use POSIX ":sys_wait_h";
use IPC::Open2;
use 5.010;
use Config;
BEGIN
{
if(!$Config{useithreads})
{
die "Your perl does not compiled with threading support.";
}
}
use threads;
use threads::shared;
use constant
{
SERVER_PORT=>5000,
#Remote request packet fields
PACKET_REQTYPE=>0,
PACKET_FILENAM=>1,
PACKET_VOLMLVL=>2,
PACKET_ENDPOSI=>3,
PACKET_SEEKPOS=>4,
#our request types
PLAY_REQUEST=>1,
STOP_REQUEST=>2,
INFO_REQUEST=>3,
VOCH_REQUEST=>4,
PAUS_REQUEST=>5,
PLPA_REQUEST=>6,
SEEK_REQUEST=>7,
#Play states
STATE_PAUS=>0,
STATE_PLAY=>1,
STATE_STOP=>2,
};
#The following line must be added because of a bad behavior in the perl thread library that causes a SIGPIPE to be generated under heavy usage of the threads.
$SIG{PIPE} = 'IGNORE';
#This variable holds the server socket object
my $server_socket;
#This array is used to hold the objects of all our threads
my @thread_objects;
#create the server socket
$server_socket=IO::Socket::INET->new(LocalPort=>SERVER_PORT,Listen=>20,Proto=>'tcp',Reuse=>1) or
die "Creating socket error ($#)";
#Now try to accept remote connections
print "Server socket created successfully now try to accept remote connections on port: ".SERVER_PORT."\n";
while(my $client_connection=$server_socket->accept())
{
push @thread_objects,threads->create(\&player_thread,$client_connection);
$thread_objects[$#thread_objects]->detach();
}
#This subroutine is used to play something using tcp-based commands
sub player_thread
{
my $client_socket=shift;
#create a new select object
my $selector=IO::Select->new($client_socket);
#this variable is used to parse our request
my @remote_request;
#getting the thread id of the current thread
my $tid=threads->self()->tid;
#This variable is used to hold the pid of mplayer child
my $mp_pid=-1;
#Mplayer stdin and stdout file descriptors
my ($MP_STDIN,$MP_STDOUT);
#This variable is used to check if we are playing something now or not
my $is_playing=STATE_STOP;
print "Client thread $tid created.\n";
while(1)
{
#check to see if we can read anything from our handler
#print "Before select\n";
#my @ready=$selector->can_read();
#print "After select: @ready\n";
#now the data is ready for reading so we read it here
my $data=<$client_socket>;
#This means if the connection is closed by the remote end
if(!defined($data))
{
print "Remote connection has been closed in thread $tid mplayer id is: $mp_pid and state is: $is_playing.\n";
#if we have an mplayer child when the remote connection is closed we must wait for it
#so that its work is done
if($mp_pid!=-1 and $is_playing ==STATE_PLAY)
{
waitpid $mp_pid,0;
$is_playing=STATE_STOP;
}
elsif($is_playing==STATE_PAUS and $mp_pid!=-1)
{
print "thread $tid is in the paused state, we must kill mplayer.\n";
print $MP_STDIN "quit\n";
waitpid $mp_pid,0;
$is_playing=STATE_STOP;
}
last;
}#if
#FIXME:: Here we must validate our argument
#Now we try to execute the command
chomp($data);
@remote_request=split ",",$data;
print "@remote_request\n";
#Trying to reap the dead child and change the state of the thread
my $dead_child=-1;
$dead_child=&reaper($mp_pid);
if($dead_child)
{
$is_playing=STATE_STOP;
$mp_pid=-1;
}
given($remote_request[PACKET_REQTYPE])
{
when($_==PLAY_REQUEST)
{
print "Play request\n";
if($is_playing==STATE_STOP)
{
eval{$mp_pid=open2($MP_STDOUT,$MP_STDIN,"mplayer -slave -really-quiet -softvol -volume ".$remote_request[PACKET_VOLMLVL]." -endpos ".$remote_request[PACKET_ENDPOSI]." ./".$remote_request[PACKET_FILENAM]);};
print "Some error occurred in open2 system call: $#\n" if $#;
$is_playing=STATE_PLAY;
print "Mplayer pid: $mp_pid.\n";
}
}
when($_==STOP_REQUEST)
{
print "Stop request\n";
if($is_playing != STATE_STOP)
{
print $MP_STDIN "pausing_keep stop\n";
#FIXME:: Maybe we should use WNOHANG here
my $id=waitpid $mp_pid,0;
print "Mplayer($id) stopped.\n";
$is_playing=STATE_STOP;
$mp_pid=-1;
}
}
when($_==PAUS_REQUEST)
{
print "pause request\n";
if($is_playing !=STATE_STOP)
{
print $MP_STDIN "pausing_keep pause\n";
$is_playing=STATE_PAUS;
}
}
when($_==VOCH_REQUEST)
{
print "volume change request\n";
if($is_playing !=STATE_STOP)
{
print $MP_STDIN "pausing_keep volume ".$remote_request[PACKET_VOLMLVL]." 1\n";
}
}
when($_==INFO_REQUEST)
{
my $id;
$id=&reaper($mp_pid);
if($id > 0)
{
print "Mplayer($id) stopped.\n";
$is_playing=STATE_STOP;
$mp_pid=-1;
}
given($is_playing)
{
when($_==STATE_STOP)
{
print $client_socket "Stopped\n";
}
when($_==STATE_PAUS)
{
print $client_socket "Paused\n";
}
when($_==STATE_PLAY)
{
print $client_socket "Playing\n";
}
}
}
when ($_==PLPA_REQUEST)
{
print "play paused request\n";
if($is_playing==STATE_STOP)
{
eval{$mp_pid=open2($MP_STDOUT,$MP_STDIN,"mplayer -slave -really-quiet -softvol -volume ".$remote_request[PACKET_VOLMLVL]." -endpos ".$remote_request[PACKET_ENDPOSI]." ./".$remote_request[PACKET_FILENAM]);};
print "Some error occurred in open2 system call: $#\n" if $#;
print $MP_STDIN "pausing_keep pause\n";
$is_playing=STATE_PAUS;
}
}
when ($_==SEEK_REQUEST)
{
print "Seek request\n";
if($is_playing != STATE_STOP)
{
my $seek_pos=abs $remote_request[PACKET_SEEKPOS];
print $MP_STDIN "seek $seek_pos 2\n";
$is_playing=STATE_PLAY;
}
}
default
{
warn "Invalid request($_)!!!";
next;
}
}#Given
}#while
$client_socket->close();
print "Thread $tid is exiting now, the child mplayer pid is: $mp_pid and state is: $is_playing.\n";
}
#The following subroutine takes a pid and, if that pid is greater than 0, it tries to reap it;
#if successful it returns the pid of the reaped process, otherwise 0
sub reaper
{
my $pid=shift;
if($pid > 0)
{
my $id=waitpid($pid,WNOHANG);
if($id > 0)
{
return $id;
}
}
return 0;
}
"Can not allocate memory error" is what it says, either the user exceeded its memory quota (check with ulimit -m, compare to ps ux) or you're really out of memory (free).
The limits for max user processes are only indirectly connected - if you fork() more processes then the user's memory quota permits, fork() will fail with ENOMEM.
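For completeness, a failed fork() in Perl returns undef with $! set, so you can at least confirm that it really is ENOMEM. A minimal sketch:
use Errno qw(ENOMEM);
my $pid = fork();
if (!defined $pid) {
    if ($! == ENOMEM) {
        warn "fork failed: out of memory or over the memory quota ($!)\n";
    }
    else {
        warn "fork failed: $!\n";
    }
}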
You also might want to see:
What are some conditions that may cause fork() or system() calls to fail on Linux?
I finally found the problem: it is caused by a memory leak in Perl's threads module that makes the process's memory usage grow over a long run. Eventually open2 cannot allocate memory and fails.
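If you need to confirm that kind of slow growth, a Linux-only sketch like the following (reading VmRSS from /proc; the sub name is just an illustration) can be called from the accept loop to log the process size over time:
# Linux-only: print the resident set size of the current process
sub report_rss
{
    open my $fh, '<', "/proc/$$/status" or return;
    while (my $line = <$fh>) {
        print $line if $line =~ /^VmRSS:/;
    }
    close $fh;
}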
I have the following scenario:
sub_1 can run immediately
sub_2 can run immediately
sub_3 can run only after sub_1 finishes
sub_4 can run only after sub_1 finishes
sub_5 can run only after sub_2 finishes
sub_6 can run only after sub_2 finishes
sub_7 can run only after both sub_1 and sub_2 finish
sub_8 can run only after both sub_1 and sub_2 finish
I would like each sub to start running as soon as possible, then wait for all of them to finish.
I would really appreciate your help in creating a clean solution for this simple scenario; I'm new to multi-threading.
I'm not sure if it makes a difference, but those subs are all in an object.
I'd suggest a "Boss/Worker" model, wherein one thread manages the subroutines to be executed in worker threads, who in turn report their status back to the boss upon completion.
In this model the boss is the only thread that needs to know how tasks are to be ordered. It might look something like this:
use threads;
use Thread::Queue;
use Thread::Pool;
our $done_queue = Thread::Queue->new;
our $work_pool = Thread::Pool->new;
sub sub_1 {
... do the work ...
$done_queue->enqueue('sub_1'); # tell the boss we're all done
}
sub sub_2 {
... do the work ...
$done_queue->enqueue('sub_2'); # tell boss we're done
}
...
# Main loop (boss thread)
$work_pool->enqueue(\&sub_1);
$work_pool->enqueue(\&sub_2);
while (my $sub_name = $done_queue->dequeue) {
# You, the boss thread, keep track of state and
# transitions however you like. You know what's
# just finished and what's finished in the past
...
}
Of course, abstraction can make that neater -- you could hide the Pool and the Queue behind a single object, one which didn't require sub_1() to know about the status queue at all:
$boss->enqueue( 'sub_1' => \&sub_1 ); # Will return 'sub_1' via await_completed()
$boss->enqueue( 'sub_2' => \&sub_2 ); # Will return 'sub_2'
while (my $sub_name = $boss->await_completed) {
...
}
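To make the boss's bookkeeping concrete for the exact dependencies in the question, here is a rough sketch using only threads and Thread::Queue (it assumes the eight sub_N routines exist as plain subs; if they are methods, you would pass the object along as well):
use strict;
use warnings;
use threads;
use Thread::Queue;
my $done_queue = Thread::Queue->new;
# helper: run a sub in a worker thread and report its name when it finishes
sub launch {
    my ($name, $code) = @_;
    threads->create(sub {
        my ($n, $c) = @_;
        $c->();                      # do the actual work
        $done_queue->enqueue($n);    # tell the boss this sub has finished
    }, $name, $code);
}
my (%done, $started_7_8, $finished);
launch('sub_1', \&sub_1);
launch('sub_2', \&sub_2);
while (my $name = $done_queue->dequeue) {
    $done{$name} = 1;
    $finished++;
    if ($name eq 'sub_1') {          # sub_1 finished: sub_3 and sub_4 may start
        launch('sub_3', \&sub_3);
        launch('sub_4', \&sub_4);
    }
    elsif ($name eq 'sub_2') {       # sub_2 finished: sub_5 and sub_6 may start
        launch('sub_5', \&sub_5);
        launch('sub_6', \&sub_6);
    }
    if ($done{'sub_1'} && $done{'sub_2'} && !$started_7_8) {
        $started_7_8 = 1;            # both prerequisites met: start sub_7 and sub_8
        launch('sub_7', \&sub_7);
        launch('sub_8', \&sub_8);
    }
    last if $finished == 8;          # all eight subs have reported back
}
$_->join() for threads->list();      # reap the worker threads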
Here's a possible solution using threads and thread sharing. Most of the code just mocks up the test and emulates threads that have to do "work" before they finish. In the example, the main thread spawns seven threads, each of which needs a random amount of time to do its "work". The threads cannot begin working until the threads they depend on (set in the dependencies array) have finished. You can change the thread dependencies and run the example a few times to verify that it works correctly.
Additionally, you can have each thread terminate after it runs, and have the main thread terminate after all of the subthreads have finished, by checking the status hash (see the sketch after the code).
use strict;
use warnings;
use threads;
use threads::shared;
my %status : shared;
my $dependencies = [
{3 => 1}, #three can only run after one has finished...
{4 => 1}, #four can only run after one has finished...
{5 => 2}, #five can only run after two has finished...
{6 => 1}, #etc...
{6 => 2},
{7 => 1},
{7 => 2}
];
main();
sub main{
foreach my $thread_number (1..7){
spawn_thread($thread_number);
}
while(1){
print "I am the main thread\n";
sleep(1);
}
}
sub spawn_thread{
my $thread_number = shift;
$status{$thread_number} = 'wait';
my $thr = threads->new(\&thread_routine, $thread_number);
}
sub thread_routine{
my $thread_number = shift;
my $working_time_left = int(rand(5)) + 1; #make a random time that this thread needs to "work"
while(1){
print "I am thread number $thread_number with status $status{$thread_number}\n";
{
lock(%status);
#see if this thread is active; if so, see if it has finished running
if ($status{$thread_number} eq 'active'){
if ($working_time_left <= 0){
$status{$thread_number} = 'ran';
}
}
else{
#see if we can activate
if ($status{$thread_number} eq 'wait'){
my $can_activate = 1;
foreach my $index (0..$#$dependencies){
if (exists $dependencies->[$index]->{$thread_number}){
if ($status{$dependencies->[$index]->{$thread_number}} ne 'ran'){
$can_activate = 0;
last;
}
}
}
if ($can_activate){
$status{$thread_number} = "active";
}
}
}
}
sleep(1);
if ($status{$thread_number} eq 'active'){ #do "work"
$working_time_left--;
}
}
}
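A rough sketch of the termination tweak mentioned above: break out of the worker loop once the thread's own status is 'ran', and have main() poll the shared %status hash instead of looping forever (this requires keeping the worker threads joinable, i.e. not detaching them).
# in thread_routine(), immediately after the lock(%status) block inside while(1):
last if $status{$thread_number} eq 'ran';   # our work is done, leave the loop
# in main(), replacing the endless while(1) loop:
until (7 == grep { $_ eq 'ran' } values %status) {
    print "I am the main thread\n";
    sleep(1);
}
$_->join() for threads->list();   # reap the finished worker threads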
Fork and create 2 processes:
In process 1:
sub_1; sub_3
In process 2:
sub_2; wait for sub_1 end; sub_4
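A minimal sketch of that scheme, covering sub_1 through sub_4 only, with a pipe standing in for the "wait for sub_1 end" notification (the sub_N routines are assumed to exist):
use strict;
use warnings;
pipe(my $read_end, my $write_end) or die "pipe failed: $!";
my $pid1 = fork();
die "fork failed: $!" unless defined $pid1;
if ($pid1 == 0) {                 # process 1
    close $read_end;
    sub_1();
    print {$write_end} "done\n";  # announce that sub_1 has finished
    close $write_end;             # flushes the message to process 2
    sub_3();
    exit 0;
}
my $pid2 = fork();
die "fork failed: $!" unless defined $pid2;
if ($pid2 == 0) {                 # process 2
    close $write_end;
    sub_2();
    my $msg = <$read_end>;        # block until process 1 says sub_1 is done
    sub_4();
    exit 0;
}
close $_ for $read_end, $write_end;
waitpid($_, 0) for $pid1, $pid2;  # wait for both processes to finish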