I have a script that, at some point, forks several processes to do a task while the main process waits for all children to complete.
So far, all OK.
Problem: I want to get the maximum time each child process spent working on what it had to do.
What I do now is look through the logs, where I print the time spent on each action a child process performed, and try to figure out the times more or less by hand.
I know that the only way to get something back from a child process is via some sort of shared memory, but I was wondering: for this specific problem, is there a "ready"/easy solution?
I mean, a way to get the times back so the parent process can print them nicely in one place.
I thought there might be a better way than digging through the logs...
Update based on comments:
I am not interested in the total time of each child process, i.e. which child took the longest to finish. Each child process works on X tasks, and each task takes at worst Y seconds to finish. I am looking for Y, i.e. the longest it took any child process to finish one of its X tasks.
The biggest limitation of fork() is that it doesn't do IPC as easily as threads do. Aside from trapping when a process starts and exits, inter-process communication has a whole section of the Perl documentation (perlipc) to itself.
What I would suggest is that what you probably want is a pipe, connected to each child.
Something like this (not tested yet, I'm on a Windows box!):
use strict;
use warnings;
use Parallel::ForkManager;

my $manager = Parallel::ForkManager->new(5);

pipe( my $read_handle, my $write_handle );

for ( 1 .. 10 ) {
    $manager->start and next;
    close($read_handle);    # children only write
    print {$write_handle} "$$ - child says hello!\n";
    $manager->finish;
}

close($write_handle);    # parent only reads; gives EOF once all children exit
while (<$read_handle>) { print; }
$manager->wait_all_children();
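Worth noting: Parallel::ForkManager can also hand a data structure back to the parent via finish (in versions recent enough to support data passing), which fits the original question without a hand-rolled pipe. A minimal sketch, where do_task is a hypothetical stand-in for the real work:

use strict;
use warnings;
use Time::HiRes qw(time sleep);
use Parallel::ForkManager;

my $manager = Parallel::ForkManager->new(5);

# Collect each child's slowest task time as it exits.
my %max_task_time;
$manager->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $signal, $core, $data) = @_;
    $max_task_time{$pid} = $data->{max} if $data;
});

for my $child (1 .. 10) {
    $manager->start and next;
    my $max = 0;
    for my $task (1 .. 4) {                # the X tasks per child
        my $start   = time();
        do_task($task);                    # hypothetical: the real work
        my $elapsed = time() - $start;
        $max = $elapsed if $elapsed > $max;
    }
    $manager->finish(0, { max => $max });  # ref is delivered to run_on_finish
}
$manager->wait_all_children;

printf "child %s: slowest task took %.3fs\n", $_, $max_task_time{$_}
    for sort { $a <=> $b } keys %max_task_time;

sub do_task { sleep rand() }               # stand-in for a real task

The overall Y is then just the largest value in %max_task_time.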
Related
I am trying to get into Perl's use of threads. Reading the documentation I came across the following code:
use threads;
my $thr = threads->create(\&sub1); # Spawn the thread
$thr->detach(); # Now we officially don't care any more
sleep(15); # Let thread run for awhile
sub sub1 {
    my $count = 0;
    while (1) {
        $count++;
        print("\$count is $count\n");
        sleep(1);
    }
}
The goal, it seems, is to create one thread running sub1 for 15 seconds, printing some strings in the meantime. However, I don't think I understand what's going on at the end of the program.
First of all, detach() is defined as follows:
Once a thread is detached, it'll run until it's finished; then Perl
will clean up after it automatically.
However, when does the subroutine finish? while(1) never finishes, and I can't find anything in sleep()'s documentation saying it would break a loop. On top of that, from the point we detach we are supposedly 'waiting for the thread to finish and then cleaning up after it', so if we are waiting for the subroutine to finish, why do we need sleep() in the main script? Its position is awkward to me; it suggests that the main program sleeps for 15 seconds. But what is the point of that? The main program (thread?) sleeps while the sub-thread keeps running, but how is the subroutine then terminated?
I guess the idea is that after the sleep is done, the subroutine ends, after which it can be detached/cleaned up. But where is this made syntactically clear? Where in the definition of sleep does it say that sleep terminates a subroutine (and why), and how does it know which one to terminate if there is more than one thread?
All threads end when the program ends. The program ends when the main thread ends. The sleep in the main thread is merely keeping the program running a short time, after which the main thread (therefore the program, therefore all created threads) also end.
So what's up with detach? It just says "I'm never going to bother joining to this thread, and I don't care what it returns". If you don't either detach a thread or join to it, you'd get a warning when the program ends.
Detaching a thread means "I don't care any more", and that does actually mean that when your process exits, the thread will error out and terminate.
Practically speaking, I don't think you ever want to detach a thread in Perl. Just add a join at the end of your code so it can exit cleanly, and signal the thread via a semaphore or Thread::Queue when it should terminate (a Thread::Queue variant is sketched after the rewritten code below).
$_->join for threads->list;
will do the trick.
That code example is, in my opinion, a bad example. It's just plain messy to sleep so a detached thread has a chance to complete when you could simply join and know that it's finished. This is especially true of Perl threads: it's deceptive to assume they're lightweight and can therefore be trivially started (and detached). If you're ever spawning so many that the overhead of joining them is too high, you're using Perl threads wrong and should probably fork instead.
You're quite right: the thread will never terminate, and so your code will always have a 'dirty' exit.
So instead I'd rewrite:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;
my $run : shared;
$run = 1;

sub sub1 {
    my $count = 0;
    while ($run) {
        $count++;
        print("\$count is $count\n");
        sleep(1);
    }
    print "Terminating\n";
}
my $thr = threads->create( \&sub1 ); # Spawn the thread
sleep(15); # Let thread run for awhile
$run = 0;
$thr->join;
That way your main thread signals the worker to say "I'm done" and waits for it to finish its current loop iteration.
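The same shutdown can be driven by the Thread::Queue mentioned earlier instead of a shared flag. A minimal sketch (dequeue_timed needs a non-ancient Thread::Queue):

use strict;
use warnings;
use threads;
use Thread::Queue;

my $commands = Thread::Queue->new;

my $thr = threads->create(sub {
    my $count = 0;
    # Wait up to 1s per tick; any message on the queue ends the loop.
    until (defined $commands->dequeue_timed(1)) {
        $count++;
        print "\$count is $count\n";
    }
    print "Terminating\n";
});

sleep(15);                  # let the thread run for a while
$commands->enqueue('stop'); # any defined value works as the signal
$thr->join;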
I've got a Perl program that tries to convert a bunch of files from one format to another (via a command-line tool). It works fine, but it's too slow because it converts the files one at a time.
I did some research and used the fork() mechanism, trying to spawn off all the conversions as child forks in the hope of utilizing the CPUs/cores.
The coding is done and tested; it does improve performance, but not to the extent I expected.
When looking at /proc/cpuinfo, I have this:
> egrep -e "core id" -e ^physical /proc/cpuinfo|xargs -l2 echo|sort -u
physical id : 0 core id : 0
physical id : 0 core id : 1
physical id : 0 core id : 2
physical id : 0 core id : 3
physical id : 1 core id : 0
physical id : 1 core id : 1
physical id : 1 core id : 2
physical id : 1 core id : 3
Does that mean I have 2 CPUs with four cores each? If so, I should be able to fork 8 children, and supposedly an 8-minute job (1 minute per file, 8 files) should finish in about 1 minute (8 forks, 1 file per fork).
However, when I test-run this, it still takes 4 minutes to finish. It appears it only utilized the 2 CPUs, but not the cores?
Hence, my question is:
Is it true that Perl's fork() only parallelizes across CPUs, but not cores? Or maybe I didn't do it right? I'm simply using fork() and wait(), nothing special.
I'd assume Perl's fork() should use all the cores; is there a simple bash/Perl test I can write to prove whether my OS (i.e. RedHat 4) or Perl is the culprit for this symptom?
To Add:
I even tried running the following command multiple times to simulate multiple processes while monitoring htop.
while true; do echo abc >>devnull; done &
Somehow htop tells me I've got 16 cores? And when I spawn 4 of the above while loops, I see 4 of them each utilizing ~100% CPU. When I spawn more, all of them start reducing their CPU utilization evenly (e.g. with 8 processes I see 8 bash entries in htop, each using ~50%). Does this mean something?
Thanks in advance. I tried googling around but was unable to find an obvious answer.
Edit: 2016-11-09
Here is an extract of the Perl code. I'm interested to see what I did wrong here.
my $maxForks = 50;
my $forks    = 0;

while (<CIFLIST>) {
    extractPDFByCIF($cifNumFromIndex, $acctTypeFromIndex, $startDate, $endDate);
}

for (1 .. $forks) {
    my $pid = wait();
    print "Child fork exited. PID=$pid\n";
}

sub extractPDFByCIF {
    # construct the SQL for the $stmt DB query
    $stmt->execute();
    while ($stmt->fetch()) {
        # fork the copy/afp2web process into a child process
        if ($forks >= $maxForks) {
            my $pid = wait();
            print "PARENTFORK: Child fork exited. PID=$pid\n";
            $forks--;
        }
        my $pid = fork;
        if (not defined $pid) {
            warn "PARENTFORK: Could not fork. Do it sequentially with parent thread\n";
        }
        if ($pid) {
            $forks++;
            print "PARENTFORK: Spawned child fork number $forks. PID=$pid\n";
        } else {
            print "CHILDFORK: Processing child fork. PID=$$\n";
            # prevent the child fork from destroying the dbh of the parent
            $dbh->{InactiveDestroy} = 1;
            undef $dbh;
            # perform the conversion as usual
            if ($fileName =~ m/.afp/) {
                system("file-conversion -parameter-list");
            } elsif ($fileName =~ m/.pdf/) {
                system("cp $from-file $to-file");
            } else {
                print ERRORLOG "Problem happened here\r\n";
            }
            exit;
        }
        # end forking
    }
    $stmt->finish();
    close(INDEX);
}
fork() spawns a new process - identical to, and with the same state as the existing one. No more, no less. The kernel schedules it and runs it wherever.
If you do not get the results you're expecting, I would suggest that a far more likely limiting factor is that you are reading files from your disk subsystem. Disks are slow, and contending for IO doesn't make them any faster; if anything the opposite, because it forces additional drive seeks and makes caching less effective.
So specifically:
1/ No, fork() does nothing more than clone your process.
2/ Largely meaningless unless you want to rewrite most of your algorithm as a shell script. There's no real reason to think that it'll be any different though.
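If you still want a quick sanity check that Perl processes can occupy every core, time something CPU-bound rather than IO-bound. A rough sketch (the work loop is arbitrary):

#!/usr/bin/env perl
# Fork N CPU-bound children and time them; wall time should stay roughly
# flat until N exceeds the number of cores.
use strict;
use warnings;
use Time::HiRes qw(time);

my $n     = shift // 8;    # number of workers, from the command line
my $start = time();

for (1 .. $n) {
    my $pid = fork;
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {       # child: pure computation, no IO
        my $x = 0;
        $x += sqrt($_) for 1 .. 5_000_000;
        exit 0;
    }
}
1 while wait() != -1;      # reap all children

printf "%d workers: %.2fs wall time\n", $n, time() - $start;

Run it with 1, 2, 4 and 8 workers; if the wall time only starts climbing past 8 workers, all eight cores are usable and the bottleneck in the real job is elsewhere (most likely the disk).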
To follow on from your edit:
system('file-conversion') looks an awful lot like an IO based process, which will be limited by your disk IO. As will your cp.
Have you considered Parallel::ForkManager which greatly simplifies the forking bit?
As a lesser style point, you should probably use the 3-arg form of open (a small example follows the code below).
#!/usr/bin/env perl
use strict;
use warnings;
use Parallel::ForkManager;

my $maxForks = 50;
my $manager  = Parallel::ForkManager->new($maxForks);

while ($ciflist) {
    ## do something with $_ to parse.
    ## instead of: extractPDFByCIF($cifNumFromIndex, $acctTypeFromIndex, $startDate, $endDate);

    # construct the SQL for the $stmt DB query
    $stmt->execute();

    while ( $stmt->fetch() ) {
        # fork the copy/afp2web process into a child process
        $manager->start and next;

        print "CHILDFORK: Processing child fork. PID=$$\n";
        # prevent the child fork from destroying the dbh of the parent
        $dbh->{InactiveDestroy} = 1;
        undef $dbh;

        # perform the conversion as usual
        if ( $fileName =~ m/.afp/ ) {
            system("file-conversion -parameter-list");
        } elsif ( $fileName =~ m/.pdf/ ) {
            system("cp $from-file $to-file");
        } else {
            print ERRORLOG "Problem happened here\r\n";
        }
        # end forking
        $manager->finish;
    }
    $stmt->finish();
}
$manager->wait_all_children;
$manager->wait_all_children;
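And the 3-arg open mentioned above would look like this (the file name is invented for illustration):

# 3-arg open with a lexical filehandle instead of a bareword CIFLIST
open my $ciflist_fh, '<', 'ciflist.txt'
    or die "Can't open ciflist.txt: $!";
while (my $line = <$ciflist_fh>) {
    chomp $line;
    # ... parse $line here ...
}
close $ciflist_fh;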
Your goal is to parallelize your application so that it uses multiple cores as independent resources. What you're describing is multi-threading, in particular Perl's ithreads, which copy the interpreter's data for each new thread much as a process fork does (and are heavy-weight for that reason). You can teach yourself the Perl way of multi-threading from perlthrtut. Quoting perlthrtut:
When a new Perl thread is created, all the data associated with the current thread is copied to the new thread, and is subsequently private to that new thread! This is similar in feel to what happens when a Unix process forks, except that in this case, the data is just copied to a different part of memory within the same process rather than a real fork taking place.
That being said, regarding your questions:
You're not doing it right (sorry) [see my comment...]. With multi-threading you don't need to call fork() yourself for this; Perl does the equivalent work for you.
You can check whether your Perl interpreter has been built with thread support, e.g. by running perl -V (note the capital V) and looking at the output. If there is nothing about threads, then your Perl interpreter is not capable of Perl multi-threading.
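For instance, the relevant configuration variable can be queried directly; on a threaded build this prints something like the following (exact output may vary by version):

$ perl -V:useithreads
useithreads='define';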
The likely reason your application is already faster even with only one CPU core in use is that, while one process waits for slow resources such as the file system, another process can use the core for computation in the meantime.
I'm creating a Perl application that runs in multiple threads, each of them performing a time-consuming task. This is what I have so far:
use strict;
use warnings;
use threads;
my @file_list = ("file1", "file2", "file3");
my @jobs;
my @failed_jobs;
my $timeout = 10;    # 10 second timeout

foreach my $s (@file_list) {
    push @jobs, threads->create(sub {
        # time-consuming task
    });
}
$_->join for @jobs;
The problem is that the time-consuming task may sometimes get stuck (or take more than $timeout seconds to run). When that happens, I want to get the name of the file, push it onto @failed_jobs, and kill that thread, while letting the other threads continue. When all threads have either been killed or have completed, I want to exit.
Can someone tell me how to modify my above code to achieve this?
Thanks
If you want the ability to kill the task, you don't want threads but processes.
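A minimal sketch of that process-based approach, under the assumption that each job can simply be SIGKILLed on timeout (do_task and the polling interval are placeholders):

#!/usr/bin/env perl
# Sketch: one process per file; anything still running after $timeout
# seconds is killed and recorded as failed.
use strict;
use warnings;
use POSIX ':sys_wait_h';    # WNOHANG for non-blocking waitpid

my @file_list = ("file1", "file2", "file3");
my $timeout   = 10;         # 10 second timeout

my %running;                # pid => { file => ..., started => ... }
my @failed_jobs;

for my $file (@file_list) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        do_task($file);     # hypothetical stand-in for the real work
        exit 0;
    }
    $running{$pid} = { file => $file, started => time() };
}

while (%running) {
    # Reap any children that finished on their own.
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        delete $running{$pid};
    }
    # Kill anything that has exceeded the timeout.
    for my $pid (keys %running) {
        next if time() - $running{$pid}{started} <= $timeout;
        kill 'KILL', $pid;
        push @failed_jobs, $running{$pid}{file};
        waitpid($pid, 0);   # reap the killed child
        delete $running{$pid};
    }
    sleep 1;                # polling interval (placeholder)
}

print "failed: @failed_jobs\n" if @failed_jobs;

sub do_task { my ($file) = @_; sleep 3 }    # pretend to work for a while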
I'd like to invoke multiple Perl instances/scripts from one Perl script. Please see the simple script below, which illustrates the problem nicely:
my @filenames = ("file1.xml", "file2.xml", "file3.xml", "file4.xml");
foreach my $file (@filenames)
{
    # Script which parses the XML file
    system("perl parse.pl $file");
    # Go on, don't wait till parse.pl has finished
}
As I'm on a quad-core CPU and parsing a single file takes a while, I want to split up the job. Could someone point me in a good direction?
Thanks and best,
Tim
There are many ways to take advantage of multiple cores for an implicitly parallel workload like this.
The most obvious is to suffix an ampersand to your system call, and it'll charge off and do the work in the background.
my @filenames = ("file1.xml", "file2.xml", "file3.xml", "file4.xml");
foreach my $file (@filenames)
{
    # Script which parses the XML file
    system("perl parse.pl $file &");
    # Go on, don't wait till parse.pl has finished
}
That's pretty simplistic, but it should do the trick. The downside of this approach is that it doesn't scale well: with a long list of files (say, 1000?) they'd all kick off at once, and you might drain system resources and cause problems.
So if you want a more controlled approach, you can use either forking or threading. Forking uses the C system call and starts duplicate process instances.
use Parallel::ForkManager;

my $manager = Parallel::ForkManager->new(4);    # number of CPUs

my @filenames = ("file1.xml", "file2.xml", "file3.xml", "file4.xml");
foreach my $file (@filenames)
{
    # Script which parses the XML file
    $manager->start and next;
    exec("perl", "parse.pl", $file) or die "exec: $!";
    $manager->finish;    # never reached: exec replaces the child process
    # Go on, don't wait till parse.pl has finished
}
# and if you want to wait:
$manager->wait_all_children();
And if you wanted to do something that involved capturing output and post-processing it, I'd suggest thinking in terms of threads and Thread::Queue (a sketch follows below). But that's unnecessary if there's no synchronisation required.
(If you're thinking that might be useful, I'll offer:
Perl daemonize with child daemons)
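A minimal sketch of that capture-and-post-process pattern, reusing the same illustrative parse.pl and file names:

use strict;
use warnings;
use threads;
use Thread::Queue;

my $results = Thread::Queue->new;
my @files   = ("file1.xml", "file2.xml", "file3.xml", "file4.xml");

# One worker per file; each captures its script's output and queues it.
my @workers = map {
    my $file = $_;
    threads->create(sub {
        my $output = `perl parse.pl $file`;
        $results->enqueue([ $file, $output ]);
    });
} @files;

$_->join for @workers;

# All workers have joined, so drain the queue without blocking.
while (defined(my $item = $results->dequeue_nb)) {
    my ($file, $output) = @$item;
    print "=== $file ===\n$output";
}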
Edit: Amended based on comments. Ikegami correctly points out:
system("perl parse.pl $file"); $manager->finish; is wasteful (three processes per worker). Use: exec("perl", "parse.pl", $file) or die "exec: $!"; (one process per worker).
I have code that spawns two threads. The first runs a system command which launches an application. The second monitors the program. I'm new to Perl threads, so I have a few questions...
my $thr1 = threads->new(system($cmd));
sleep(FIVEMINUTES);
my $thr2 = threads->new(\&check);
my $rth1 = $thr1->join();
my $rth2 = $thr2->join();
1) Do I need a second thread to monitor the program? You can think of my subroutine call to &check as an infinite while loop which checks a text file for stuff the application produces. Could I just do this:
my $thr1 = threads->new(system($cmd));
sleep(FIVEMINUTES);
2) I'm trying to figure out what the parent does after I run this code. After line 1 it will spawn the new thread, sleep, then spawn the second thread, and then sit at the first join and wait. It will not execute the rest of my code until that first join returns. Is this correct, or am I wrong? If I'm wrong, how does it work?
3) The first thread, the one that launches the application, can be killed unexpectedly. When this happens, I have nothing in place to catch that and kill the other threads. It just says:
"Thread 1 terminated abnormally: Undefined subroutine &main::65280 called at myScript.pl line 109." and then hangs there.
What could I do to get it to end the other threads? I also need it to send an email before the program ends, which I can do by calling &email (another subroutine I made).
Thanks
First of all,
my $thr1 = threads->new(system($cmd));
should be
my $thr1 = threads->new(sub { system($cmd) });
or simply
my $thr1 = async { system($cmd) };
You don't need to start a third thread. As you suspected, the main thread and the one executing system are sufficient.
What if the command finishes executing in less than five minutes? The following replaces sleep with a signal.
use threads;
use threads::shared;

my $done :shared = 0;

my $thr1 = async {
    system($cmd);
    lock($done);
    $done = 1;
    cond_signal($done);
};

{    # Wait up to $timeout for the thread to end.
    lock($done);
    my $timeout = time() + 5*60;
    1 while !$done && cond_timedwait($done, $timeout);
    if (!$done) {
        ... there was a timeout ...
    }
}

$thr1->join();
In 2004-2006 I had the same challenges with a Perl app running 24/7 on Windows. The only approach that worked was to use XML state files on disk to communicate the status of each component of the system, and to make sure that, where threads were used, all state-file handling occurred within a closure block {} (big gotcha). The app ran for at least 3 years on 100 machines, 24/7, without errors.
If you are on a Unix-like OS I would suggest using forks and interprocess communication.
Use CPAN modules; do not reinvent the wheel.
Multithreading in Perl is a little hard to deal with; I would suggest using fork() instead. I will attempt to answer your questions to the best of my ability.
1) It seems to me that two threads/processes are the way to go here, as you need to check your data asynchronously.
2) Your parent works exactly as you describe.
3) The reason for the hang could be that you never terminate your second thread. You said it was an infinite loop; is there any exit condition?