Standalone child in backtick command - linux

Here is a main script that executes the Perl script fork.pl via backticks:
#!/bin/bash
OUTPUT=`./fork.pl`
echo "$OUTPUT"
And the fork.pl:
#!/usr/bin/perl
use strict;
use warnings;
use POSIX;
my $pid = fork();
if ($pid == 0) {
    sleep(5);
    print("child: $pid\n");
}
else {
    print("parent: $pid\n");
}
The backticks imply a wait, but I would like not to wait for the forked child.
thanks

One way to avoid waiting for the child is to start the script in the background while redirecting its output to a file, then read lines back with the shell's read builtin.
For example, a hack to read the first line:
./fork.pl > temp.out &
sleep 1
read OUTPUT < temp.out
Alternatively, without the sleep, though $OUTPUT is only usable inside the do/done block (the while runs in a subshell):
./fork.pl | while read OUTPUT; do
    # use $OUTPUT here
    break # first line only, or loop conditionally
done

The child needs to detach from the parent with setsid and redirect its standard file descriptors:
if ($pid == 0) {
    my $mysid = setsid();            # new session: detach from the parent
    open (STDIN,  "</dev/null");
    open (STDOUT, ">/dev/null");     # the child's output now goes to /dev/null,
    open (STDERR, ">&STDOUT");       # so it no longer holds the backtick pipe open
    sleep(5);
    print("child: $pid\n");
}
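Putting it together, here is a sketch of the complete fork.pl. Only the parent's line reaches the backticks; the detached child releases the pipe when it reopens STDOUT, so the bash script returns immediately:
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                        # child
    setsid() or die "setsid failed: $!";
    open(STDIN,  '<',  '/dev/null');
    open(STDOUT, '>',  '/dev/null');    # releases the backtick pipe
    open(STDERR, '>&', \*STDOUT);
    sleep(5);
    print "child: $pid\n";              # now goes to /dev/null
    exit 0;
}

print "parent: $pid\n";                 # the only line the backticks capture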

Related

Track and kill a process on timeout using Perl script

I want to write a Perl script which can monitor a running process. If the process executes for more than the expected time, it should be killed.
I am trying to do this on a Linux machine (Linux_x8664).
I cannot use a cron job, because I want to embed this in another Perl script which I have been using for a long time.
Any suggestions are welcome.
I have code to do that, but the problem is that my Perl script runs a process using the system command, and I want to track the PID of that invoked process and kill it on timeout.
=========================
#!/usr/pde/bin/perl

my $pid;
my $finish = 0;

# actions after timeout to keep SIGHANDLER short
#
sub timeout {
    print "Timed out pid $pid\n";
    # kill the process group, but not the parent process
    local $SIG{INT}  = 'IGNORE';
    local $SIG{TERM} = 'IGNORE';
    kill 'INT' = -$$;
    # eventually try also with TERM and KILL if necessary
    die 'alarm';
}

eval {
    local $SIG{ALRM} = sub { $finish = 1 };
    alarm 5;
    die "Can't fork!" unless defined ($pid = fork); # check also this!
    if ($pid) { # parent
        warn "child pid: $pid\n";
        # Here's the code that checks for the timeout and does the work:
        while (1) {
            $finish and timeout() and last;
            sleep 1;
        }
        waitpid ($pid, 0);
    }
    else { # child
        exec (q[perl -e 'while (1) {print 1}' | tee test.txt]);
        exit; # the child shouldn't execute code hereafter
    }
    alarm 0;
};
warn "\$\@=$@\n";
die "Timeout Exit\n" if $@ and $@ =~ /alarm/;
print "Exited normally.\n";
__END__
Based on your code - there is a reason why use strict and use warnings are strongly recommended.
Specifically:
Can't modify constant item in scalar assignment at line 17, near "$$;"
You aren't doing what you think you're doing there.
If you set it to
kill ( 'INT', -$$ );
Then you will send a SIGINT to the current process group - parent and child. I'm not sure why you're doing this when you don't want to kill the parent.
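If the goal really is to signal the command (and anything it spawned) without touching the parent, one option is to put the child in its own process group with Perl's setpgrp builtin and signal that group. A sketch, with the command name as a placeholder:
my $pid = fork;
die "Can't fork: $!" unless defined $pid;

if ($pid == 0) {                  # child
    setpgrp(0, 0);                # become leader of a fresh process group
    exec 'some_long_command';     # placeholder for the real command
    die "exec failed: $!";
}

sleep 5;                          # stand-in for the timeout logic
kill 'INT', -$pid;                # negative pid = the child's whole group;
                                  # the parent is in a different group
waitpid($pid, 0);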
I'd suggest you can simplify this greatly by:
else { # child
    alarm 5;
    exec (q[perl -e 'while (1) {print 1}' | tee test.txt]);
    exit; # the child shouldn't execute code hereafter
}
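A sketch of the whole simplified program: the alarm timer survives exec(), so SIGALRM terminates the command on timeout and the parent only has to waitpid and inspect $?. A single command (no shell pipeline) is used here so the signal reaches the command itself:
#!/usr/bin/perl
use strict;
use warnings;

my $pid = fork;
die "Can't fork: $!" unless defined $pid;

if ($pid == 0) {                  # child
    alarm 5;                      # the timer survives exec()
    exec 'perl', '-e', 'while (1) { print 1 }';
    die "exec failed: $!";
}

waitpid($pid, 0);
if (($? & 127) == 14) {           # 14 = SIGALRM on Linux
    print "child $pid timed out\n";
}
else {
    print "child $pid exited with status ", $? >> 8, "\n";
}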

Perl: Run multiple system commands at once

In Perl, I have some code like:
my $enter = `curl -s -m 10 http://URL`;
How would I use threading to run this function 10 times at once?
I found this, but I am not sure how to use it to set a specific number of threads.
Edit: I guess I misunderstood what Thread::Queue was doing. My original question still stands for running multiple commands simultaneously.
You can use fork(). In this example, I use the Parallel::ForkManager module. $max_forks is the number of processes to run simultaneously (set to two for the example). Put your system/curl code after the ### add curl logic here comment, and remove the example print() and sleep() statements there.
#!/usr/bin/perl
use warnings;
use strict;
use Parallel::ForkManager;

my $max_forks = 2;
my $fork = Parallel::ForkManager->new($max_forks);

my @urls = (
    'http://perlmonks.org',
    'http://stackoverflow.com',
    'http://slashdot.org',
    'http://wired.com',
);

# on start callback
$fork->run_on_start(
    sub {
        my $pid = shift;
        print "Starting PID $pid\n";
    }
);

# on finish callback
$fork->run_on_finish(
    sub {
        my ($pid, $exit, $ident, $signal, $core) = @_;
        if ($core) {
            print "PID $pid core dumped.\n";
        }
        else {
            print "PID $pid exited with exit code $exit " .
                  "and signal $signal\n";
        }
    }
);

# forking code
for my $url (@urls) {
    $fork->start and next;
    ### add curl logic here
    print "$url\n";
    sleep(2);
    $fork->finish;
}

$fork->wait_all_children;
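Since the question asked about threads specifically: the same fan-out works with Perl's ithreads and Thread::Queue (presumably the module the asker found). A sketch with a fixed pool of three workers; the URLs and pool size are placeholders:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $workers = 3;                        # how many commands run at once
my $q       = Thread::Queue->new();

my @urls = (
    'http://perlmonks.org',
    'http://stackoverflow.com',
    'http://slashdot.org',
);
$q->enqueue(@urls);
$q->enqueue(undef) for 1 .. $workers;   # one "stop" marker per worker

my @pool = map {
    threads->create(sub {
        # each worker pulls URLs until it sees a stop marker
        while (defined(my $url = $q->dequeue())) {
            my $out = `curl -s -m 10 $url`;
            print length($out), " bytes from $url\n";
        }
    });
} 1 .. $workers;

$_->join() for @pool;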

Why doesn't waitpid wait for the process to exit?

In the script below I am trying to figure out how waitpid works, but it doesn't wait for the ssh process to exit: done is printed right away, not after the ssh process exits.
Question
How do I make waitpid continue only when the PID I give it has exited?
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
use POSIX ":sys_wait_h";
my $pm = Parallel::ForkManager->new(5);
my $pid = $pm->start;
my $p = $pid;
if (!$pid) {
    system("ssh 10.10.47.47 sleep 10");
    $pm->finish;
}
$p = qx(/usr/bin/pgrep -P $p);
print "ssh pid is $p\n";
my $kid;
do {
    $kid = waitpid($p, 0);
} while $kid > 0;
print "done\n";
I have also tried
while (1) {
    $p = kill 0, $p;
    print "x";
    sleep 1 if $p;
    print "*";
    last unless $p;
}
but it doesn't even reach the first print for some reason and never exits.
The wait family of functions only works on child processes, even waitpid. The sleep process is not your child, it's your child's child. This is because system is essentially fork + exec. By using Parallel::ForkManager + system you're forking, then forking again, then executing sleep.
Since you've already forked, you should use exec. This has the extra advantage of not needing the call to pgrep and its timing problem (i.e. it's possible the parent will call pgrep before the child has executed system).
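To make that concrete, system() behaves roughly like the sketch below, which is why the ssh process ends up two forks away from the original script:
# roughly what system("ssh ...") does internally (simplified sketch)
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    exec "ssh 10.10.47.47 sleep 10";    # replaces this new process
    exit 127;                           # reached only if exec fails
}
waitpid($pid, 0);                       # system() blocks here

With that in mind, the fixed version: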
my $pm = Parallel::ForkManager->new(5);
my $pid = $pm->start;
my $p = $pid;
if (!$pid) {
    no warnings; # no warnings "exec" is not working
    exec("sleep 10");
    $pm->finish;
}
print "sleep pid is $p\n";
waitpid($p, 0);
For simplicity it now uses sleep directly. A warning from Perl that "Statement unlikely to be reached" must be suppressed, because Perl doesn't realize $pm->start has forked. This should be no warnings "exec", but that isn't working, so I had to suppress them all.
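If the goal is simply to block until the ssh command finishes, Parallel::ForkManager's own wait_all_children does the reaping for you and avoids both pgrep and a manual waitpid. A sketch:
use strict;
use warnings;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(5);
my $pid = $pm->start;

if (!$pid) {                              # child
    system("ssh 10.10.47.47 sleep 10");   # blocks the child until ssh exits
    $pm->finish;
}

print "child pid is $pid\n";
$pm->wait_all_children;                   # returns once the child (and thus ssh) is done
print "done\n";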

Read unbuffered data from pipe in Perl

I am trying to read unbuffered data from a pipe in Perl. For example, in the program below:
open FILE,"-|","iostat -dx 10 5";
$old=select FILE;
$|=1;
select $old;
$|=1;
foreach $i (<FILE>) {
print "GOT: $i\n";
}
iostat spits out data every 10 seconds (five times). You would expect this program to do the same. However, instead it appears to hang for 50 seconds (i.e. 10x5), after which it spits out all the data.
How can I get it to return whatever data is available (in an unbuffered manner), without waiting all the way for EOF?
P.S. I have seen numerous references to this under Windows - I am doing this under Linux.
#!/usr/bin/env perl
use strict;
use warnings;
open(PIPE, "iostat -dx 10 1 |") || die "couldn't start pipe: $!";
while (my $line = <PIPE>) {
print "Got line number $. from pipe: $line";
}
close(PIPE) || die "couldn't close pipe: $! $?";
If it is fine to do the waiting in your Perl script instead of in the Linux command, this should work.
I don't think Linux will give control back to the Perl script before the command execution is completed.
#!/usr/bin/perl -w
my $j = 0;
while ($j != 5) {
    open FILE, "-|", "iostat -dx 10 1";
    $old = select FILE;
    $| = 1;
    select $old;
    $| = 1;
    foreach $i (<FILE>) {
        print "GOT: $i";
    }
    $j++;
    sleep(5);
}
The code below works for me. The key difference is the while loop: while (my $old = <FILE>) reads one line at a time, whereas foreach (<FILE>) pulls the whole file in list context and so blocks until EOF.
#!/usr/bin/perl
use strict;
use warnings;
open FILE,"-|","iostat -dx 10 5";
while (my $old=<FILE>)
{
print "GOT: $old\n";
}
The solutions so far did not work for me with regards to unbuffering (Windows ActiveState Perl 5.10).
According to http://perldoc.perl.org/PerlIO.html, "To get an unbuffered stream specify an unbuffered layer (e.g. :unix ) in the open call:".
So
open(PIPE, '-|:unix', 'iostat -dx 10 1') or die "couldn't start pipe: $!";
while (my $line = <PIPE>) {
    print "Got $line";
}
close(PIPE);
which worked in my case.
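On Linux, another option is to skip Perl's buffering layer entirely and sysread whatever bytes are currently available. A sketch:
#!/usr/bin/perl
use strict;
use warnings;

open(my $pipe, '-|', 'iostat -dx 10 5') or die "couldn't start pipe: $!";

# sysread returns as soon as the pipe has any data, without waiting
# for a full buffer or for EOF
my $buf;
while (sysread($pipe, $buf, 4096)) {
    print "GOT chunk:\n$buf";
}
close($pipe);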

How can I make Perl wait for child processes started in the background with system()?

I have some Perl code that executes a shell script for multiple parameters. To simplify, assume I have code that looks like this:
for $p (@a){
    system("/path/to/file.sh $p &");
}
I'd like to do some more things after that, but I can't find a way to wait for all the child processes to finish before continuing.
Converting the code to use fork() would be difficult. Isn't there an easier way?
Using fork/exec/wait isn't so bad:
my @a = (1, 2, 3);
for my $p (@a) {
    my $pid = fork();
    if (!defined $pid) {   # fork returns undef on failure
        die;
    } elsif ($pid == 0) {
        exec '/bin/sleep', $p or die;
    }
}
while (wait() != -1) {}
print "Done\n";
You are going to have to change something; changing the code to use fork is probably simplest. But if you are dead set against fork, you could use a wrapper shell script that touches a file when it is done, and then have your Perl code check for the existence of those files.
Here is the wrapper:
#!/bin/bash
"$@"
touch "/tmp/$2.$PPID"
Your Perl code would look like:
for my $p (@a){
    system("/path/to/wrapper.sh /path/to/file.sh $p &");
}
while (@a) {
    shift @a if -f "/tmp/$a[0].$$";
    sleep 1; # poll gently instead of spinning
}
But I think the forking code is safer and clearer:
my @pids;
for my $p (@a) {
    die "could not fork" unless defined(my $pid = fork);
    unless ($pid) { # child execs
        exec "/path/to/file.sh", $p;
        die "exec of file.sh failed";
    }
    push @pids, $pid; # parent stores children's pids
}
# wait for all children to finish
for my $pid (@pids) {
    waitpid $pid, 0;
}
Converting to fork() might be difficult, but it is the correct tool. system() is a blocking call; you only get the non-blocking behavior by executing a shell and telling it to run your scripts in the background. That means Perl has no idea what the PIDs of the children are, so your script does not know what to wait for.
You could try to communicate the PIDs up to the Perl script, but that quickly gets out of hand. Use fork().
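For completeness, the same pattern with Parallel::ForkManager (used in earlier answers), which hides the fork/exec/waitpid bookkeeping. A sketch, with /path/to/file.sh as a placeholder:
use strict;
use warnings;
use Parallel::ForkManager;

my @a  = (1, 2, 3);
my $pm = Parallel::ForkManager->new(scalar @a);   # run all jobs at once

for my $p (@a) {
    $pm->start and next;                # parent: queue the next job
    system("/path/to/file.sh", $p);     # child: no trailing &; the fork
    $pm->finish;                        #   already provides the parallelism
}

$pm->wait_all_children;                 # block until every child is done
print "Done\n";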
