perl fork() exec(), child process gone wild - linux

I am using Linux, and the .sh script is written in tcsh.
I have written a very basic fork and exec, but I need help adding some safeguards to it.
Basically my perl script calls a .sh script in a child process. But when I press Ctrl+C to kill the parent, the signal gets ignored by the child.
1) How do I capture the SIGINT for the child process as well?
2) The child process that runs the .sh script still writes its STDOUT to the screen of the xterm. How can I remove this? I was thinking of running the script in the background:
exec("shell.sh args &");
But I haven't tested it, as I need to figure out how to keep the child from going wild first.
3) The parent process (perl script) doesn't wait on the child (.sh script). So I've read a lot about the child becoming a zombie??? Will it happen after the script is done? And how would I stop it?
$pid = fork();
if ($pid < 0) {
    print "Failed to fork process... Exiting";
    exit(-1);
}
elsif ($pid == 0) {
    # child process
    exec("shell.sh args");
    exit(1);
}
else {
    # execute rest of parent
}

But when I press Ctrl+C to kill the parent, the signal gets ignored by the child.
The signal is sent to both the parent and the child.
$ perl -E'
    if (my $pid = fork()) {
        local $SIG{INT} = sub { say "Parent got SIGINT" };
        sleep;
        waitpid($pid, 0);
    } else {
        local $SIG{INT} = sub { say "Child got SIGINT" };
        sleep;
    }
'
^CParent got SIGINT
Child got SIGINT
If that child ignores it, it's because it started a new session or because it explicitly ignores it.
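For illustration, a minimal sketch of those two escape hatches (shell.sh stands in for your script):
use POSIX qw(setsid);

my $pid = fork();
die "fork: $!" unless defined $pid;
if (!$pid) {
    setsid() or die "setsid: $!";    # new session: Ctrl+C at the terminal no longer reaches the child
    $SIG{INT} = 'IGNORE';            # ignored signals stay ignored across exec()
    exec('shell.sh', 'args');
    die "exec: $!";
}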
The child process that runs the .sh script still writes its STDOUT to the screen of the xterm. How can I remove this?
Do the following in the child before calling exec:
open(STDOUT, '>', '/dev/null');
open(STDERR, '>', '/dev/null');
Actually, I would use open3 to get some error checking.
use IPC::Open3;

open(local *CHILD_STDIN,  '<', '/dev/null') or die $!;
open(local *CHILD_STDOUT, '>', '/dev/null') or die $!;
my $pid = open3(
    '<&CHILD_STDIN',
    '>&CHILD_STDOUT',
    '>&CHILD_STDOUT',
    'shell.sh', 'args',
);
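Note that open3 (from the core module IPC::Open3) doesn't reap the child for you, so call waitpid($pid, 0); once the command is done.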
The parent process (perl script) doesn't wait on the child (.sh script). So I've read a lot about the child becoming a zombie???
Children are automatically reaped when the parent exits, or if they exit after the parent exits.
$ perl -e'
    for (1..3) {
        exec(perl => (-e => 1)) if !fork;
    }
    sleep 1;
    system("ps");
' ; ps
  PID TTY          TIME CMD
26683 pts/13   00:00:00 bash
26775 pts/13   00:00:00 perl
26776 pts/13   00:00:00 perl <defunct>   <-- zombie
26777 pts/13   00:00:00 perl <defunct>   <-- zombie
26778 pts/13   00:00:00 perl <defunct>   <-- zombie
26779 pts/13   00:00:00 ps
  PID TTY          TIME CMD
26683 pts/13   00:00:00 bash
26780 pts/13   00:00:00 ps
                                         <-- all gone
If the parent exits before the children do, there's no problem.
If the parent exits shortly after the children do, there's no problem.
If the parent exits a long time after the children do, you'll want to reap them. You could do that using wait or waitpid (possibly from a SIGCHLD handler), or you could cause them to be automatically reaped using $SIG{CHLD} = 'IGNORE';. See perlipc.
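For that long-running-parent case, here is a minimal sketch of reaping from a SIGCHLD handler, per perlipc:
use POSIX ':sys_wait_h';

$SIG{CHLD} = sub {
    # Reap every child that has exited; several SIGCHLDs
    # can be coalesced into a single delivery.
    while ((my $kid = waitpid(-1, WNOHANG)) > 0) {
        # $kid has been reaped; its status is in $?
    }
};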

Use waitpid in the parent process: http://perldoc.perl.org/functions/waitpid.html
waitpid($pid, 0);
You can also redirect stdout of your exec to /dev/null:
exec("shell.sh args > /dev/null");

Related

Perl script to capture tcpdump traces on Linux

Hi, I have written a script which was previously working fine with 'snoop' commands. This script forks a child to start tcpdump. When I have to stop the dump, I kill the child, but when I look at the generated pcap in Wireshark, it shows the error "The capture file appears to have been cut short in the middle of a packet". My commands are
my $snoopAPP = &startService("tcpdump -w /tmp/app.pcap -i bond0 >/dev/null 2>&1", '');
kill 9, -$snoopAPP;
waitpid $snoopAPP, 0;
sub startService {
    # Runs a program in the background and returns the PID,
    # which can be used later to kill the process.
    # Arguments: 1) the command line, 2) the name of the log file
    my $processPath = $_[0];
    chomp($processPath);
    if ($_[1] ne '') {
        $processPath = $processPath . " >$path/$_[1].log";
    }
    print "\nStarting ... \n-- $processPath\n";
    my $pid = fork();
    die "unable to fork $processPath: $!" unless defined($pid);
    if (!$pid) {    # child
        setpgrp(0, 0);    # new process group, so the whole group can be signalled
        exec($processPath);
        die "\nunable to exec: $!\n";
    }
    print " ---- PID: $pid\n";
    return $pid;
}
Another post suggests to wait for tcpdump to exit, which I am doing already, but still it results in the same error message.
Try
kill 15, -$snoopAPP
Signal 9, SIGKILL, terminates the process immediately and doesn't give the application the opportunity to finish up, so, well, the capture file stands a good chance of being cut short in the middle of a packet.
Signal 15, SIGTERM, can be caught by an application, so it can clean up before terminating. Tcpdump catches it and finishes writing out buffered output.
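Combined with the setpgrp in startService above, a gentler shutdown would look like this (a sketch; $snoopAPP is the PID returned by startService):
kill 'TERM', -$snoopAPP;    # SIGTERM the whole process group instead of SIGKILL
waitpid $snoopAPP, 0;       # reap the child; tcpdump flushes and closes the pcap first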

Track and kill a process on timeout using Perl script

I want to write a Perl script which can monitor a running process. If the process executes for longer than the expected time, it should be killed.
I am trying to do this on a Linux machine (Linux_x8664).
I cannot achieve this with a cron job, because I want to embed it in another Perl script which I have been using for a long time.
If you have any suggestions, please share them.
I have code to do that, but the problem is that my perl script runs a process using the system command, and I want to track the PID of that invoked process and kill it on timeout.
#!/usr/pde/bin/perl
my $pid;
my $finish = 0;

# actions after timeout to keep SIGHANDLER short
sub timeout {
    print "Timed out pid $pid\n";
    # kill the process group, but not the parent process
    local $SIG{INT}  = 'IGNORE';
    local $SIG{TERM} = 'IGNORE';
    kill 'INT' = -$$;
    # eventually try also with TERM and KILL if necessary
    die 'alarm';
}

eval {
    local $SIG{ALRM} = sub { $finish = 1 };
    alarm 5;
    die "Can't fork!" unless defined ($pid = fork);    # check also this!
    if ($pid) {    # parent
        warn "child pid: $pid\n";
        # Here's the code that checks for the timeout and does the work:
        while (1) {
            $finish and timeout() and last;
            sleep 1;
        }
        waitpid ($pid, 0);
    }
    else {    # child
        exec (q[perl -e 'while (1) {print 1}' tee test.txt]);
        exit;    # the child shouldn't execute code hereafter
    }
    alarm 0;
};
warn "\$@=$@\n";
die "Timeout Exit\n" if $@ and $@ =~ /alarm/;
print "Exited normally.\n";
__END__
Based on your code - there is a reason why use strict and use warnings are strongly recommended.
Specifically:
Can't modify constant item in scalar assignment at line 17, near "$$;"
You aren't doing what you think you're doing there.
If you set it to
kill ( 'INT', -$$ );
Then you will send a SIGINT to the current process group - parent and child. I'm not sure why you're doing this when you don't want to kill the parent.
I'd suggest you can simplify this greatly by:
else {    # child
    alarm 5;
    exec (q[perl -e 'while (1) {print 1}' tee test.txt]);
    exit;    # the child shouldn't execute code hereafter
}
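Putting it together, a minimal sketch of the whole approach (same 5-second limit; the alarm set in the child survives the exec, and SIGALRM's default action terminates the command):
my $pid = fork();
die "Can't fork: $!" unless defined $pid;
if (!$pid) {    # child
    alarm 5;    # the pending alarm is preserved across exec()
    exec(q[perl -e 'while (1) {print 1}']);
    die "exec failed: $!";
}
waitpid($pid, 0);    # parent just blocks; no polling loop needed
print "child exited with status $?\n";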

Why doesn't waitpid wait for the process to exit?

In the below script I am trying to figure out how waitpid works, but it doesn't wait for the ssh process to exit: done is printed right away, not after the ssh process exits.
Question
How do I make waitpid continue only when the PID I give it has exited?
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
use POSIX ":sys_wait_h";

my $pm  = Parallel::ForkManager->new(5);
my $pid = $pm->start;
my $p   = $pid;
if (!$pid) {
    system("ssh 10.10.47.47 sleep 10");
    $pm->finish;
}
$p = qx(/usr/bin/pgrep -P $p);
print "ssh pid is $p\n";
my $kid;
do {
    $kid = waitpid($p, 0);
} while $kid > 0;
print "done\n";
I have also tried
while (1) {
    $p = kill 0, $p;
    print "x";
    sleep 1 if $p;
    print "*";
    last unless $p;
}
but it doesn't even reach the first print for some reason and never exits.
The wait family of functions only work on child processes, even waitpid. The sleep process is not your child, it's your child's child. This is because system is essentially fork + exec. By using Parallel::ForkManager + system you're forking, then forking again, then executing sleep.
Since you've already forked, you should use exec. This has the extra advantage of not needing the call to pgrep and its timing problem (i.e. it's possible the parent will call pgrep before the child has executed system).
my $pm  = Parallel::ForkManager->new(5);
my $pid = $pm->start;
my $p   = $pid;
if (!$pid) {
    no warnings;    # no warnings "exec" is not working
    exec("sleep 10");
    $pm->finish;
}
print "sleep pid is $p\n";
waitpid($p, 0);
For simplicity it's now using sleep. A warning from Perl that "Statement unlikely to be reached" must be suppressed because Perl doesn't realize $pm->start has forked. This should be no warnings "exec" but that's not working so I had to suppress them all.
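If you don't otherwise need Parallel::ForkManager here, the same idea works with a bare fork (a minimal sketch):
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if (!$pid) {    # child
    exec('sleep', '10') or die "exec: $!";
}
print "sleep pid is $pid\n";
waitpid($pid, 0);    # returns only once the child has exited
print "done\n";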

How to properly exit a child process within a thread?

I am trying to handle timeouts within threads. My script has 4 threads, each thread needs to execute commands, and kill the command process if it takes too long.
What I am doing is:
my $pid;
if (!($pid = fork))
{
    my $pid2;
    if (!($pid2 = fork))
    {
        exec_cmd $command;
    }
    local $SIG{ALRM} = sub { kill 9, $pid2; };
    alarm $timeout;
    waitpid $pid2, 0;
    exit(0);
}
waitpid $pid, 0;
$ret = $?;
This is executed inside a thread, so when the child exits, other threads are still unjoined.
I think you are asking, how can I enforce a time limit on the execution of a child process spawned from a perl thread, and capture that child's exit code?
The easiest thing you could do (on a UNIX system) is to set an alarm on the child process itself:
my $pid = fork();
if (defined($pid) and $pid == 0) {
    alarm($timeout);    # Preserved across exec()
    exec(...);
    die "exec(): $!";
}
Exit status will still be available in waitpid/$? as usual.
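To tell a timeout apart from a normal exit, the parent can decode $? after waitpid, e.g. (a sketch using the POSIX signal constant):
use POSIX qw(SIGALRM);

waitpid($pid, 0);
if (($? & 127) == SIGALRM) {    # child was killed by its own alarm()
    warn "command timed out\n";
}
else {
    my $exit_code = $? >> 8;    # normal exit status
}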
The safest thing you could do is not to fork while multithreading. It's dangerous both for the application and the implementation. The application, because the child will have running copies of the parent's threads. The implementation, because it's relatively easy to coerce "Unbalanced scopes/saves/tmps/context" errors from threads when doing so.

How can I make Perl wait for child processes started in the background with system()?

I have some Perl code that executes a shell script for multiple parameters. To simplify, I'll just assume that I have code that looks like this:
for $p (@a) {
    system("/path/to/file.sh $p &");
}
I'd like to do some more things after that, but I can't find a way to wait for all the child processes to finish before continuing.
Converting the code to use fork() would be difficult. Isn't there an easier way?
Using fork/exec/wait isn't so bad:
my @a = (1, 2, 3);
for my $p (@a) {
    my $pid = fork();
    if (!defined($pid)) {    # fork returns undef on failure
        die "could not fork: $!";
    } elsif ($pid == 0) {
        exec '/bin/sleep', $p or die;
    }
}
# wait() returns -1 once there are no children left to reap
while (wait() != -1) {}
print "Done\n";
You are going to have to change something; changing the code to use fork is probably simpler. But if you are dead set against using fork, you could use a wrapper shell script that touches a file when it is done, and then have your Perl code check for the existence of those files.
Here is the wrapper:
#!/bin/bash
$*
touch /tmp/$2.$PPID
Your Perl code would look like:
for my $p (@a) {
    system("/path/to/wrapper.sh /path/to/file.sh $p &");
}
while (@a) {
    # drop finished jobs from the front as their marker files appear
    shift @a if -f "/tmp/$a[0].$$";
}
But I think the forking code is safer and clearer:
my @pids;
for my $p (@a) {
    die "could not fork" unless defined(my $pid = fork);
    unless ($pid) {    # child execs
        exec "/path/to/file.sh", $p;
        die "exec of file.sh failed";
    }
    push @pids, $pid;    # parent stores children's pids
}

# wait for all children to finish
for my $pid (@pids) {
    waitpid $pid, 0;
}
Converting to fork() might be difficult, but it is the correct tool. system() is a blocking call; you're getting the non-blocking behavior by executing a shell and telling it to run your scripts in the background. That means that Perl has no idea what the PIDs of the children might be, which means your script does not know what to wait for.
You could try to communicate the PIDs up to the Perl script, but that quickly gets out of hand. Use fork().
