Track and kill a process on timeout using a Perl script - Linux

I want to write a Perl script that can monitor a running process. If the process runs for longer than expected, it should be killed.
I am trying to do this on a Linux machine (Linux_x8664).
I cannot use a cron job for this because I want to embed it in another Perl script that I have been using for a long time.
If you have any suggestions, please share them.
I have code that does this, but the problem is that my Perl script starts the process with the system command, and I want to track the PID of that invoked process and kill it on timeout.
=========================
#!/usr/pde/bin/perl
my $pid;
my $finish=0;
# actions after timeout to keep SIGHANDLER short
#
sub timeout {
print "Timed out pid $pid\n";
# kill the process group, but not the parent process
local $SIG{INT}='IGNORE';
local $SIG{TERM}='IGNORE';
kill 'INT' = -$$;
# eventually try also with TERM and KILL if necessary
die 'alarm';
}
eval {
local $SIG{ALRM} = sub { $finish=1 };
alarm 5;
die "Can't fork!" unless defined ($pid=fork); # check also this!
if ($pid) { # parent
warn "child pid: $pid\n";
# Here's the code that checks for the timeout and do the work:
while (1) {
$finish and timeout() and last;
sleep 1;
}
waitpid ($pid, 0);
}
else { # child
exec (q[perl -e 'while (1) {print 1}' tee test.txt]);
exit; # the child shouldn't execute code hereafter
}
alarm 0;
};
warn "\$#=$#\n";`enter code here`
die "Timeout Exit\n" if $# and $# =~ /alarm/;
print "Exited normally.\n";
__END__

Based on your code - there is a reason why use strict and use warnings are strongly recommended.
Specifically:
Can't modify constant item in scalar assignment at line 17, near "$$;"
You aren't doing what you think you're doing there.
If you set it to
kill ( 'INT', -$$ );
Then you will send a SIGINT to the current process group - parent and child. I'm not sure why you're doing this when you don't want to kill the parent.
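To make the difference concrete (here $pid is the child PID returned by fork, as in the code above):
kill 'INT', -$$;     # signals the parent's whole process group, parent included
kill 'INT', $pid;    # signals only the child
kill 'INT', -$pid;   # signals the child's process group (useful if the child called setpgrp)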
I'd suggest you can simplify this greatly by:
else { # child
alarm 5;
exec (q[perl -e 'while (1) {print 1}' tee test.txt]);
exit; # the child shouldn't execute code hereafter
}
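As an alternative to the child-side alarm, here is a minimal sketch of a parent-side timeout that signals only the child's process group, not the parent's. The command, the 5-second timeout, and the choice of SIGTERM are placeholders rather than the poster's actual values:
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder command; substitute the real work here.
my @cmd = ('perl', '-e', 'while (1) { print 1 }');

my $pid = fork;
die "Can't fork: $!" unless defined $pid;

if ($pid == 0) {              # child
    setpgrp(0, 0);            # own process group, so the parent is not signalled
    exec @cmd or die "exec failed: $!";
}

# parent: wait at most 5 seconds, then signal only the child's group
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };
    alarm 5;
    waitpid($pid, 0);
    alarm 0;
};
if ($@ && $@ eq "alarm\n") {
    kill 'TERM', -$pid;       # negative PID targets the child's process group
    waitpid($pid, 0);         # reap the killed child
    die "Timeout Exit\n";
}
print "Exited normally.\n";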

Related

perl multithreading: capturing stdio of subthread children with "mixed" results

I wrote a massively multithreaded application in Perl which basically scans a file or directory structure for changes (either using inotify or polling). When it detects changes, it launches subthreads that execute programs with the changed files as arguments, according to a configuration.
This works quite nicely so far, with the exception that my application also tries to capture stdout and stderr of the externally executed programs and write them to log files in a structured manner.
I am, however, experiencing an occasional but serious mixup of output here, in that every now and then (usually under heavy workload, of course, so that the normal tests always run fine) stdout from a program on thread A ends up in the stdout pipe FH of another program running on thread B at the very same time.
My in-thread code to run the externally executed programs and capture the output from them looks like this:
my $out;
$pid = open($out, "( stdbuf -oL ".$cmd." | stdbuf -oL sed -e 's/^/:::LOG:::/' ) 2>&1 |") or xlog('async execution failed for: '.$cmd, LOG_LEVEL_NORMAL, LOG_ERROR);
# catch all worker output here
while(<$out>)
{
if( $_ =~ /^:::LOG:::/ )
{
push(@log, $wname.':::'.$acnt.':::'.time().$_);
} else {
push(@log, $wname.':::'.$acnt.':::'.time().':::ERR:::'.$_);
}
if (time() - $last > 1)
{
mlog($acnt, @log);
$last = time();
@log = ();
}
}
close($out);
waitpid($pid, 0);
push(@log, $wname.':::'.$acnt.':::'.time().':::LOG:::--- thread finished ---');
stdbuf is used here to suppress buffering delays wherever possible, and the sed pipe is used to avoid handling multiple FDs in the reader while still being able to separate normal output from errors.
Captured log lines are stuffed into a local array by the while loop, and every other second the contents of that array are handed over to a thread-safe global logging method that uses semaphores to make sure nothing gets mixed up.
To avoid unnecessary back-and-forth: I have made sure (using debug output) that the output really is mixed up at the thread level already and is not the result of locking mistakes later in the output chain!
My question is: how can it be that the thread-locally defined $out FH of thread A receives output that definitely comes from a totally different program running in thread B, and therefore should end up in the separately defined thread-local $out FH of thread B? Did I make a grave mistake at some point, or is Perl threading just a mess? And, finally, what would be a recommended way to separate the data properly (and preferably in some elegant way)?
Update: due to popular demand I have added the full thread method here:
sub async_command {
my $wname = shift;
my $cmd = shift;
my $acnt = shift;
my $delay = shift;
my $errlog = shift;
my $last = time();
my $pid = 0;
my @log;
my $out;
push(@log, $wname.':::'.$acnt.':::'.$last.':::LOG:::--- thread started ---'.($delay ? ' (sleeping for '.$delay.' seconds)':''));
push(@log, $wname.':::'.$acnt.':::'.$last.':::ERR:::--- thread started ---') if ($errlog);
if ($delay) { sleep($delay); }
# Start worker with output pipe. stdbuf prevents unwanted buffering
# sed tags stdout vs stderr
$pid = open($out, "( stdbuf -oL ".$cmd." | stdbuf -oL sed -e 's/^/:::LOG:::/' ) 2>&1 |") or xlog('async execution failed for: '.$cmd, LOG_LEVEL_NORMAL, LOG_ERROR);
# catch all worker output here
while(<$out>)
{
if( $_ =~ /^:::LOG:::/ )
{
push(@log, $wname.':::'.$acnt.':::'.time().$_);
} else {
push(@log, $wname.':::'.$acnt.':::'.time().':::ERR:::'.$_);
}
if (time() - $last > 1)
{
mlog($acnt, @log);
$last = time();
@log = ();
}
}
close($out);
waitpid($pid, 0);
push(@log, $wname.':::'.$acnt.':::'.time().':::LOG:::--- thread finished ---');
push(@log, $wname.':::'.$acnt.':::'.time().':::ERR:::--- thread finished ---') if ($errlog);
mlog($acnt, @log);
byebye();
}
So... here you can see that @log as well as $out are thread-local variables. The xlog (global log) and mlog methods (worker logs) actually use Thread::Queue for further processing. I just don't want to use it more than once a second per thread, to avoid too much locking overhead.
I have duplicated the push(@log, ...) statements into xlog() calls for debugging. Since the worker name $wname is somewhat tied to the $cmd executed and $acnt is a number unique to each thread, I can see clearly that there is log output being read from the $out FH that definitely comes from a different $cmd than the one executed in this thread, while $acnt and $wname remain the ones that actually belong to the thread. I can also see that these log lines then do NOT appear on the $out FH of the other thread, where they should be.

Log message in perl every 90 seconds in the parent process as long as the child process still runs

I just moved over from PHP to Perl at my company's request, so even if this is a silly question it is kind of nerve-wracking right now.
I have one little Perl script deployed on a server through a Debian package. I have this all figured out, so that's all cool.
Now this script is called from another server through an SSH connection, and the script logs all its actions back to that server. I use Log::Log4perl for that.
One of the tasks takes a very long time and also runs some other scripts in the process. The SSH connection has a set timeout of 5 minutes unless I log something back. So I figured I would create a child process to run the task and let the parent process log back every 90 (or whatever) seconds. My issue is that I don't want to use sleep, because if the task finishes sooner it will mess up the log.
I have also tried using Time, Time::HiRes and alarm, but they all mess up my log one way or another.
This is my code:
$log->info("uid $uid: calling the configure script for operation $mode,on $dst_path");
my $pid = fork();
die "Could not fork\n" if not defined $pid;
if ( $pid == 0 ) {
configure( $script_dir, $mode, $node, $uid, $gid); # this also uses a parallel process in its execution, but we don't have a non blocking wait
}
while ( !waitpid( $pid, WNOHANG ) ) {
sleep(90);
if ( !$pid ) {
$log->info("Still waiting for the process to finish"); # this should come up every 90 seconds of so
}
}
$log->info("uid $uid: configure script executed"); # this should come up only once, now I get it every 90 seconds
# do other stuff here after the execution of the configure sub is done
Unfortunately I inherited this architecture as it is and cannot change it because there are a lot of services based on it.
If you don't want to sleep, you can call select with a timeout. To implement this reliably, you can employ the self-pipe trick which involves creating a pipe, writing to the pipe in a SIGCHLD handler, and making the select call wait on the pipe's read handle.
Here's a simple example:
#!/usr/bin/perl
use strict;
use warnings;
use Errno qw(EINTR);
use Fcntl qw(F_GETFL F_SETFL O_NONBLOCK);
use Symbol qw(gensym);
sub make_non_blocking {
my $handle = shift;
my $flags = fcntl($handle, F_GETFL, 0)
or die("F_GETFL: $!");
fcntl($handle, F_SETFL, $flags | O_NONBLOCK)
or die("F_SETFL: $!");
}
my ($read_handle, $write_handle) = (gensym, gensym);
pipe($read_handle, $write_handle)
or die("pipe: $!");
make_non_blocking($read_handle);
make_non_blocking($write_handle);
local $SIG{CHLD} = sub {
syswrite($write_handle, "\0", 1);
};
my $pid = fork();
die("fork: $!") if !defined($pid);
if ($pid == 0) {
sleep(10);
exit;
}
my $rin = '';
vec($rin, fileno($read_handle), 1) = 1;
while (1) {
my $nfound = select(my $rout = $rin, undef, undef, 2);
if ($nfound < 0) {
# Error. Must restart the select call on EINTR.
die("select: $!") if $! != EINTR;
}
elsif ($nfound == 0) {
# Timeout.
print("still running...\n");
}
else {
# Child exited and pipe was written to.
last;
}
}
waitpid($pid, 0);
close($read_handle);
close($write_handle);
I tried to run the code and noticed a few things that may be your issue, but without knowing what configure does, I can't be sure. Here's what I found:
The child process doesn't exit after calling configure
waitpid does not change the value of $pid, so $pid is always 0 in the child and always the pid of the child in the parent.
What this means is that the parent never writes out "Still waiting for the process to finish"; the child writes it out every 90 seconds after it completes its call to configure.
Additionally, the child should print that message every 90 seconds forever, because it's waiting for pid 0 to send it the CHLD signal, which won't happen because it doesn't have a child with pid 0.
I updated your code with a few stubs that do what I think you want (on a slightly tighter timeline, because I don't like to wait :) ). My code makes the following assumptions that you may wish to change:
Log the waiting message every second
The child always exits with a status value of 0
Here's my code:
#!/usr/bin/env perl
use strict;
use warnings;
use Log::Log4perl qw(:easy);
use POSIX qw(:sys_wait_h);
Log::Log4perl->easy_init();
my ($uid,$mode,$dst_path,$script_dir,$node,$gid) = (0..5);
my $log = get_logger();
$log->info("uid $uid: calling the configure script for operation $mode,on $dst_path");
my $pid = fork();
die "Could not fork\n" if not defined $pid;
if ( $pid == 0 ) {
configure( $script_dir, $mode, $node, $uid, $gid); # this also uses a parallel process in its execution, but we don't have a non blocking wait
exit(0);
}
my $zombie;
while ( ($zombie = waitpid( $pid, WNOHANG ) ) != $pid) {
$log->info("Still waiting for the process to finish"); # this should come up every 90 seconds of so
sleep(1);
}
$log->info("uid $uid: configure script executed"); # this should come up only once, now I get it every 90 seconds
# do other stuff here after the execution of the configure sub is done
sub configure {
sleep 10;
}

Why doesn't waitpid wait for the process to exit?

In the script below I am trying to figure out how waitpid works, but it doesn't wait for the ssh process to exit: done is printed right away, not after the ssh process exits.
Question
How do I make waitpid only continue once the pid I give it has exited?
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
use POSIX ":sys_wait_h";
my $pm = Parallel::ForkManager->new(5);
my $pid = $pm->start;
my $p = $pid;
if (!$pid) {
system("ssh 10.10.47.47 sleep 10");
$pm->finish;
}
$p = qx(/usr/bin/pgrep -P $p);
print "ssh pid is $p\n";
my $kid;
do {
$kid = waitpid($p, 0);
} while $kid > 0;
print "done\n";
I have also tried
while (1) {
$p = kill 0, $p;
print "x";
sleep 1 if $p;
print "*";
last unless $p;
}
but it doesn't even reach the first print for some reason and never exits.
The wait family of functions only works on child processes, even waitpid. The sleep process is not your child; it's your child's child. This is because system is essentially fork + exec. By using Parallel::ForkManager + system, you're forking, then forking again, then executing sleep.
Since you've already forked, you should use exec. This has the extra advantage of not needing the call to pgrep and its timing problem (i.e. it's possible the parent will call pgrep before the child has executed system).
my $pm = Parallel::ForkManager->new(5);
my $pid = $pm->start;
my $p = $pid;
if (!$pid) {
no warnings; # no warnings "exec" is not working
exec("sleep 10");
$pm->finish;
}
print "sleep pid is $p\n";
waitpid($p, 0);
For simplicity it's now using sleep. A warning from Perl that "Statement unlikely to be reached" must be suppressed because Perl doesn't realize $pm->start has forked. This should be no warnings "exec" but that's not working so I had to suppress them all.

Perl 5.8: possible to get any return code from backticks when SIGCHLD in use

When a CHLD signal handler is used in Perl, even uses of system and backticks will send the CHLD signal. But for the system and backticks sub-processes, neither wait nor waitpid seems to set $? within the signal handler on SuSE 11 Linux. Is there any way to determine the return code of a backtick command when a CHLD signal handler is active?
Why do I want this? Because I want to fork(?) and start a medium-length command, then call a Perl package that takes a long time to produce an answer (and which executes external commands with backticks and checks their return code in $?), and know when my command is finished so I can take action, such as starting a second command. (Suggestions for how to accomplish this without using SIGCHLD are also welcome.) But since the signal handler destroys the backtick $? value, that package fails.
Example:
use warnings;
use strict;
use POSIX ":sys_wait_h";
sub reaper {
my $signame = shift @_;
while (1) {
my $pid = waitpid(-1, WNOHANG);
last if $pid <= 0;
my $rc = $?;
print "wait()=$pid, rc=$rc\n";
}
}
$SIG{CHLD} = \&reaper;
# system can be made to work by not using $?, instead using system return value
my $rc = system("echo hello 1");
print "hello \$?=$?\n";
print "hello rc=$rc\n";
# But backticks, for when you need the output, cannot be made to work??
my @IO = `echo hello 2`;
print "hello \$?=$?\n";
exit 0;
Yields a -1 return code in all places I might try to access it:
hello 1
wait()=-1, rc=-1
hello $?=-1
hello rc=0
wait()=-1, rc=-1
hello $?=-1
So I cannot find anywhere to access the backticks return value.
This same issue has been bugging me for a few days now. I believe there are 2 solutions required depending on where you have your backticks.
If you have your backticks inside the child code:
The solution was to put the line below inside the child fork. I think your statement above ("if I completely turn off the CHLD handler around the backticks then I might not get the signal if the child ends") is incorrect. You will still get a callback in the parent when the child exits, because the signal is only disabled inside the child. So the parent still gets a signal when the child exits; it's just that the child doesn't get a signal when the child's child (the part in backticks) exits.
local $SIG{'CHLD'} = 'DEFAULT'
I'm no Perl expert; I have read that you should set the CHLD signal to the string 'IGNORE', but this did not work in my case. In fact, I believe it may have been causing the problem. Leaving it out completely appears to also solve the problem, which I guess is the same as setting it to DEFAULT.
If you have backticks inside the parent code:
Add this line to your reaper function:
local ($!, $?);
What is happening is the reaper is being called when your code inside the backticks completes and the reaper is setting $?. By making $? local it does not set the global $?.
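Applied to the reaper from the question, that change might look roughly like this (a sketch showing only where the local goes):
sub reaper {
    # Preserve the caller's $! and $?, so the exit status of a finished
    # backtick command is not clobbered by waitpid inside this handler.
    local ($!, $?);
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        print "wait()=$pid, rc=" . ($? >> 8) . "\n";
    }
}
$SIG{CHLD} = \&reaper;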
So, building on MikeKull's answer, here is a working example where the fork'd child uses backticks and still gets the proper return code. This example is a better representation of what I was doing, while the original example did not use forks and could not convey the entire issue.
use warnings;
use strict;
use POSIX ":sys_wait_h";
# simple child which returns code 5
open F, ">", "exit5.sh" or die "$!";
print F<<EOF;
#!/bin/bash
echo exit5 pid=\$\$
exit 5
EOF
close F;
sub reaper
{
my $signame = shift @_;
while (1)
{
my $pid = waitpid(-1, WNOHANG);
print "no child waiting\n" if $pid < 0;
last if $pid <= 0;
my $rc = $? >> 8;
print "wait()=$pid, rc=$rc\n";
}
}
$SIG{CHLD} = \&reaper;
if (!fork)
{
print "child pid=$$\n";
{ local $SIG{CHLD} = 'DEFAULT'; print `./exit5.sh`; }
print "\$?=" . ($? >> 8) . "\n";
exit 3;
}
# sig CHLD will interrupt sleep, so do multiple
sleep 2;sleep 2;sleep 2;
exit 0;
The output is:
child pid=32307
exit5 pid=32308
$?=5
wait()=32307, rc=3
no child waiting
So the expected return code 5 was received in the child when the parent's reaper was disabled before calling the child, but as indicated by ikegami the parent still gets the CHLD signal and a proper return code when the child exits.

How can I make Perl wait for child processes started in the background with system()?

I have some Perl code that executes a shell script for multiple parameters. To simplify, let's assume that I have code that looks like this:
for $p (@a){
system("/path/to/file.sh $p&");
}
I'd like to do some more things after that, but I can't find a way to wait for all the child processes to finish before continuing.
Converting the code to use fork() would be difficult. Isn't there an easier way?
Using fork/exec/wait isn't so bad:
my #a = (1, 2, 3);
for my $p (@a) {
my $pid = fork();
if ($pid == -1) {
die;
} elsif ($pid == 0) {
exec '/bin/sleep', $p or die;
}
}
while (wait() != -1) {}
print "Done\n";
You are going to have to change something; changing the code to use fork is probably simpler. But if you are dead set against using fork, you could use a wrapper shell script that touches a file when it is done and then have your Perl code check for the existence of those files.
Here is the wrapper:
#!/bin/bash
$*
touch /tmp/$2.$PPID
Your Perl code would look like:
for my $p (@a){
system("/path/to/wrapper.sh /path/to/file.sh $p &");
}
while (@a) {
delete $a[0] if -f "/tmp/$a[0].$$";
}
But I think the forking code is safer and clearer:
my #pids;
for my $p (#a) {
die "could not fork" unless defined(my $pid = fork);\
unless ($pid) { #child execs
exec "/path/to/file.sh", $p;
die "exec of file.sh failed";
}
push @pids, $pid; #parent stores children's pids
}
#wait for all children to finish
for my $pid (@pids) {
waitpid $pid, 0;
}
Converting to fork() might be difficult, but it is the correct tool. system() is a blocking call; you're getting the non-blocking behavior by executing a shell and telling it to run your scripts in the background. That means that Perl has no idea what the PIDs of the children might be, which means your script does not know what to wait for.
You could try to communicate the PIDs up to the Perl script, but that quickly gets out of hand. Use fork().
