Gracefully exit from a PowerShell background wait-event loop on Linux

I have a PowerShell script I am kicking off in the background on Linux.
> ./test_wait.ps1 &
I am trying to exit the background process so that the finally block is executed, but none of the signals (or Stop-Process) works: the finally block is always skipped. If I start the script in the foreground and use Ctrl-C, everything works. As a background process, though, it doesn't respond to SIGINT, and other signals just make it exit immediately. Am I missing something, or am I going to have to approach this another way?
#!/usr/bin/pwsh
$Start_Time = Get-Date
$n = 1
Try
{
    While ($true) {
        $n++
        Wait-Event -Timeout 1   # pause for up to one second per iteration
    }
}
Finally
{
    $End_Time = Get-Date
    $Time_Diff = ($End_Time - $Start_Time).TotalSeconds
    "Total time in seconds $Time_Diff" > out.log
}

Related

Is there a better way to detect that my Perl script was called from "firstboot"?

I have a Perl script that needs to act in a particular way if it was invoked by the firstboot script or by a process that firstboot spawned. I have this routine handleFirstBoot and it seems to work OK, but there is probably a better way to write it. So please take a look ...
sub handleFirstBoot {
    my $child_id = shift || $$;
    my $parent_id;

    # find the parent pid of $child_id in the ps output
    foreach (`ps -ef`) {
        my ($uid, $pid, $ppid) = split;
        next unless ($pid eq $child_id);
        $parent_id = $ppid;
        last;
    }
    if ( $parent_id == 0 ) {
        debug "firstboot is NOT an ancestor.\n";
        return;
    }
    # extract the command name of the parent process
    my $psout = `ps -p $parent_id | tail -1 | sed -e's/^ //g' | sed -e's/  */ /g' | cut -d' ' -f4`;
    if ( $psout =~ /firstboot/ ) {
        debug "firstboot IS an ancestor. Set post option.\n";
        $opt{'post'} = 1;
        return;
    } else {
        # recursive case: walk up to the next ancestor
        handleFirstBoot($parent_id);
    }
}
Can I offer an alternative approach? From the comments, the problem you are trying to solve is a startup script stalling because it's waiting for this one to return.
So can I suggest that fork() is probably your friend here?
my $pid = fork();
if ( $pid ) {
    exit;
}

sleep $delay_time;
do_stuff();
What will happen is: your script will be called and the caller will return immediately, while the forked child sleeps for the random delay interval and then does the work; for bonus points, this works the same way from cron too.
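For completeness, here is a minimal, self-contained sketch of that pattern, assuming a random $delay_time and a placeholder do_stuff(); if the caller also reads the script's output, you may additionally need to close or redirect STDOUT/STDERR in the child.

#!/usr/bin/perl
# Sketch of the fork-and-return pattern; $delay_time and do_stuff()
# are stand-ins for whatever the real script does.
use strict;
use warnings;

my $delay_time = int(rand(60));    # e.g. up to a minute of random delay

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid) {
    exit 0;                        # parent: return to the caller right away
}

# child: keeps running after the caller has moved on
sleep $delay_time;
do_stuff();

sub do_stuff {
    print "doing the delayed work in pid $$\n";   # placeholder for the real work
}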
But as you seem to note in the comments, the 'good' solution is not to do it that way at all; I would suggest looking at, say, anacron, which is available on most Linux systems and is exactly the tool for this particular job.

Track and kill a process on timeout using Perl script

I want to write a Perl script which can monitor a running process. If the process executes for longer than the expected time, it should be killed.
I am trying to do this on a Linux machine (Linux_x8664).
I cannot do this with a cron job because I want to embed it in another Perl script, which I have been using for a long time.
Any suggestions are welcome.
I have code to do that, but the problem is that my Perl script runs a process using the system command, and I want to track the PID of that invoked process and kill it on timeout.
#!/usr/pde/bin/perl

my $pid;
my $finish = 0;

# actions after timeout to keep SIGHANDLER short
sub timeout {
    print "Timed out pid $pid\n";
    # kill the process group, but not the parent process
    local $SIG{INT}  = 'IGNORE';
    local $SIG{TERM} = 'IGNORE';
    kill 'INT' = -$$;
    # eventually try also with TERM and KILL if necessary
    die 'alarm';
}

eval {
    local $SIG{ALRM} = sub { $finish = 1 };
    alarm 5;
    die "Can't fork!" unless defined ($pid = fork);   # check also this!
    if ($pid) {   # parent
        warn "child pid: $pid\n";
        # Here's the code that checks for the timeout and does the work:
        while (1) {
            $finish and timeout() and last;
            sleep 1;
        }
        waitpid ($pid, 0);
    }
    else {        # child
        exec (q[perl -e 'while (1) {print 1}' tee test.txt]);
        exit;     # the child shouldn't execute code hereafter
    }
    alarm 0;
};
warn "\$@=$@\n";
die "Timeout Exit\n" if $@ and $@ =~ /alarm/;
print "Exited normally.\n";
__END__
Based on your code - there is a reason why use strict and use warnings are strongly recommended.
Specifically:
Can't modify constant item in scalar assignment at line 17, near "$$;"
You aren't doing what you think you're doing there.
If you set it to
kill ( 'INT', -$$ );
Then you will send a SIGINT to the current process group - parent and child. I'm not sure why you're doing this when you don't want to kill the parent.
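If the goal really is to signal only the command you launched (and anything it spawned) while leaving the parent alone, one hedged variation is to give the child its own process group before the exec. The 5-second sleep below is just a stand-in for real timeout logic, and the one-liner is only a placeholder long-running command:

#!/usr/bin/perl
# Sketch: put the child in its own process group so kill(-$pid) hits
# the child and anything it spawned, but not the parent.
use strict;
use warnings;

my $pid = fork();
die "Can't fork: $!" unless defined $pid;

if ($pid == 0) {                  # child
    setpgrp(0, 0);                # become leader of a new process group
    exec q[perl -e 'while (1) {print 1}' > /dev/null];
    die "exec failed: $!";
}

sleep 5;                          # stand-in for "the timeout has elapsed"
kill INT => -$pid;                # signal the child's whole group, not us
waitpid($pid, 0);
print "child process group signalled and reaped\n";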
I'd suggest you can simplify this greatly by:
else {   # child
    alarm 5;   # the alarm set here survives exec(), so the exec'd command gets SIGALRM after 5 seconds
    exec (q[perl -e 'while (1) {print 1}' tee test.txt]);
    exit;      # the child shouldn't execute code hereafter
}
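Putting it together, a self-contained sketch of the simplified approach might look like the following. The 5-second limit and the quiet perl loop used as the long-running command are placeholders; the alarm armed in the child is preserved across exec, so the command itself receives SIGALRM when time runs out, and the parent just waits and inspects $?:

#!/usr/bin/perl
# Sketch: arm an alarm in the child, exec the command, and let the parent
# wait and check $? to see whether the command hit the deadline.
use strict;
use warnings;
use POSIX qw(SIGALRM);

my $timeout = 5;

my $pid = fork();
die "Can't fork: $!" unless defined $pid;

if ($pid == 0) {                       # child
    alarm $timeout;                    # timer survives the exec below
    exec 'perl', '-e', 'while (1) { sleep 1 }';   # placeholder long-running command
    die "exec failed: $!";             # only reached if exec fails
}

waitpid($pid, 0);                      # parent: wait for exit or timeout

if (($? & 127) == SIGALRM) {
    warn "child $pid was killed by SIGALRM after $timeout seconds\n";
}
else {
    printf "child exited with status %d\n", $? >> 8;
}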

Log message in perl every 90 seconds in the parent process as long as the child process still runs

I just moved over from PHP to Perl at my company's request, so even if this may be a silly question it is kind of nerve-wracking right now.
I have one little Perl script deployed on a server through a Debian package. I have this all figured out, so that's all cool.
Now this script is called from another server through an SSH connection, and the script logs all its actions back to that server. I use Log::Log4perl for that.
One of the tasks takes a very long time and also runs some other scripts in the process. The SSH connection has a set timeout of 5 minutes unless I log something back. So I figured I would create a child process to run the task and let the parent process log back every 90 (or whatever) seconds. My issue is that I don't want to use sleep, because if the task finishes sooner it will mess up the log.
I have also tried using Time, Time::HiRes and alarm, but they all mess up my log one way or another.
This is my code:
$log->info("uid $uid: calling the configure script for operation $mode,on $dst_path");
my $pid = fork();
die "Could not fork\n" if not defined $pid;
if ( $pid == 0 ) {
configure( $script_dir, $mode, $node, $uid, $gid); # this also uses a parallel process in its execution, but we don't have a non blocking wait
}
while ( !waitpid( $pid, WNOHANG ) ) {
sleep(90);
if ( !$pid ) {
$log->info("Still waiting for the process to finish"); # this should come up every 90 seconds of so
}
}
$log->info("uid $uid: configure script executed"); # this should come up only once, now I get it every 90 seconds
# do other stuff here after the execution of the configure sub is done
Unfortunately I inherited this architecture as it is and cannot change it because there are a lot of services based on it.
If you don't want to sleep, you can call select with a timeout. To implement this reliably, you can employ the self-pipe trick which involves creating a pipe, writing to the pipe in a SIGCHLD handler, and making the select call wait on the pipe's read handle.
Here's a simple example:
#!/usr/bin/perl

use strict;
use warnings;

use Errno qw(EINTR);
use Fcntl qw(F_GETFL F_SETFL O_NONBLOCK);
use Symbol qw(gensym);

sub make_non_blocking {
    my $handle = shift;

    my $flags = fcntl($handle, F_GETFL, 0)
        or die("F_GETFL: $!");
    fcntl($handle, F_SETFL, $flags | O_NONBLOCK)
        or die("F_SETFL: $!");
}

my ($read_handle, $write_handle) = (gensym, gensym);

pipe($read_handle, $write_handle)
    or die("pipe: $!");

make_non_blocking($read_handle);
make_non_blocking($write_handle);

local $SIG{CHLD} = sub {
    syswrite($write_handle, "\0", 1);
};

my $pid = fork();
die("fork: $!") if !defined($pid);

if ($pid == 0) {
    sleep(10);
    exit;
}

my $rin = '';
vec($rin, fileno($read_handle), 1) = 1;

while (1) {
    my $nfound = select(my $rout = $rin, undef, undef, 2);
    if ($nfound < 0) {
        # Error. Must restart the select call on EINTR.
        die("select: $!") if $! != EINTR;
    }
    elsif ($nfound == 0) {
        # Timeout.
        print("still running...\n");
    }
    else {
        # Child exited and pipe was written to.
        last;
    }
}

waitpid($pid, 0);
close($read_handle);
close($write_handle);
I tried to run the code and noticed a few things that may be your issue, but without knowing what configure does, I can't be sure. Here's what I found:
The child process doesn't exit after calling configure
waitpid does not change the value of $pid, so $pid is always 0 in the child and always the pid of the child in the parent.
What this means is that the parent never writes out "Still waiting for the process to finish"; the child writes it out every 90 seconds after it completes its call to configure.
Additionally, the child should print that message every 90 seconds forever, because it's waiting for pid 0 to send it the CHLD signal, which won't happen because it doesn't have a child with pid 0.
I updated your code with a few stubs that do what I think you want (on a slightly tighter timeline, because I don't like to wait :) ). My code makes the following assumptions that you may wish to change:
Log the waiting message every second
The child always exits with a status value of 0
Here's my code:
#!/usr/bin/env perl

use strict;
use warnings;

use Log::Log4perl qw(:easy);
use POSIX qw(:sys_wait_h);

Log::Log4perl->easy_init();

my ($uid, $mode, $dst_path, $script_dir, $node, $gid) = (0..5);
my $log = get_logger();

$log->info("uid $uid: calling the configure script for operation $mode,on $dst_path");

my $pid = fork();
die "Could not fork\n" if not defined $pid;

if ( $pid == 0 ) {
    configure( $script_dir, $mode, $node, $uid, $gid ); # this also uses a parallel process in its execution, but we don't have a non blocking wait
    exit(0);
}

my $zombie;
while ( ( $zombie = waitpid( $pid, WNOHANG ) ) != $pid ) {
    $log->info("Still waiting for the process to finish"); # this should come up every 90 seconds or so
    sleep(1);
}

$log->info("uid $uid: configure script executed"); # this should come up only once
# do other stuff here after the execution of the configure sub is done

sub configure {
    sleep 10;
}

Breaking out of a while loop with system commands in Perl using Ctrl-C (SIGINT)?

Consider the following example, test.pl:
#!/usr/bin/env perl
use 5.10.1;
use warnings;
use strict;

$SIG{'INT'} = sub { print "Caught Ctrl-C - Exit!\n"; exit 1; };
$| = 1; # turn off output line buffering

use Getopt::Long;
my $doSystemLoop = 0;
GetOptions( "dosysloop" => \$doSystemLoop );

print("$0: doSystemLoop is:$doSystemLoop (use " . (($doSystemLoop) ? "system" : "Perl") . " loop); starting...\n");

my $i = 0;
if (not($doSystemLoop)) { # do Perl loop
    while ($i < 1e6) {
        print("\tTest value is $i");
        $i++;
        sleep 1;
        print(" ... ");
        sleep 1;
        print(" ... \n");
    }
} else { # do system call loop
    while ($i < 1e6) {
        system("echo", "-ne", "\tTest value is $i");
        $i++;
        system("sleep 1");
        system("echo", "-ne", " ... ");
        system("sleep 1");
        system("echo", "-e", " ... ");
    }
}
So, if I call this program so that it uses the usual Perl loop, everything is as expected:
$ perl test.pl
test.pl: doSystemLoop is:0 (use Perl loop); starting...
Test value is 0 ... ...
Test value is 1 ... ...
Test value is 2 ... ^CCaught Ctrl-C - Exit!
$
... that is, I hit Ctrl-C and the program exits instantly.
However, if the while loop's commands consist mostly of system calls, then it becomes nearly impossible to exit with Ctrl-C:
$ perl test.pl --dosysloop
test.pl: doSystemLoop is:1 (use system loop); starting...
Test value is 0 ... ...
Test value is 1 ... ...
Test value is 2 ... ^C ...
Test value is 3 ... ^C ...
Test value is 4 ... ^C ...
Test value is 5^C ... ^C ...
Test value is 6^C ... ^C ...
Test value is 7^C ... ^C ...
Test value is 8^C ... ^C ...
Test value is 9^C ... ^C ...
Test value is 10 ... ^C ...
Test value is 11^C ... ^C ...
Test value is 12^C ... ...
Test value is 13^Z
[1]+ Stopped perl test.pl --dosysloop
$ killall perl
$ fg
perl test.pl --dosysloop
Terminated
$
So in the snippet above, I'm hitting Ctrl-C (the ^C) like mad, and the program ignores me completely :/ Then I cheat by hitting Ctrl-Z (the ^Z), which suspends the process; in the resulting shell I do killall perl, and after that I execute the fg command, which places the Perl job back in the foreground, where it finally terminates due to the killall.
What I would like is to run a system loop like this with the possibility of breaking out of it with the usual Ctrl-C. Is this possible, and how do I do it?
Perl's signal handling mechanism defers the handling of signals until a safe point. Deferred signals are checked between opcodes of the Perl VM. As system and friends count as a single opcode, signals are only checked once the exec'd command has terminated.
This can be circumvented by forking, and then waiting in a loop for the child process to terminate. The child can also be terminated early via a signal handler.
sub launch_and_wait {
    my $wait = 1;
    my $child;

    local $SIG{CHLD} = sub {
        $wait = 0;
    };
    local $SIG{INT} = sub {
        $wait = 0;
        kill KILL => $child if defined $child;
    };

    if ($child = fork()) {
        # parent
        while ($wait) {
            print "zzz\n";
            sleep 1;
        }
        wait; # try to join the child
    } else {
        # child
        exec {$_[0]} @_;
    }
}

launch_and_wait sleep => 60;
print "Done\n";
launch_and_wait sleep => 60;
print "Done\n";
There are probably lots of ways this can go wrong (getting a SIGINT before the child was spawned…). I also omitted any error handling.
Check the exit status of the system() command for any signals. An external command interrupted with SIGINT will get a "2" here:
while (1) {
    system("sleep", 1);
    if ($? & 127) {
        my $sig = $? & 127;
        die "Caught signal INT" if $sig == 2; # you may also abort on other signals if you like
    }
}
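More generally, the whole of $? can be decoded: the low seven bits are the terminating signal (if any), bit 7 is the core-dump flag, and the high byte is the exit status. A hedged helper using the POSIX wait macros might look like this (the sub name is just illustrative):

# Sketch of a helper for decoding $? after system().
use strict;
use warnings;
use POSIX qw(:sys_wait_h);

sub describe_child_status {
    my ($status) = @_;                      # pass in $? from system()
    if (WIFEXITED($status)) {
        return "exited with status " . WEXITSTATUS($status);
    }
    if (WIFSIGNALED($status)) {
        return "killed by signal " . WTERMSIG($status)
             . (($status & 128) ? " (core dumped)" : "");
    }
    return "stopped or unknown status $status";
}

system("sleep", 1);
print describe_child_status($?), "\n";      # e.g. "killed by signal 2" after Ctrl-C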

perl thread dies after detach

I need to create a thread and let it do some stuff. The main reason is to exit the main program as fast as possible.
But after I detach the thread and end the main program, the thread dies. In my example the file doesn't contain the expected data.
If I let the main program sleep for 5 seconds, everything works fine.
How can I keep the thread running after the main program quits?
use threads;

print "Starting main program\n";
my $t = threads->new(\&sub1)->detach();
print "End of main program\n";

sub sub1 {
    print "started thread\n";
    open (FILE, ">>./test.txt") or die($!);
    print FILE localtime(time())."\n";
    foreach (1..3) {
        print FILE "$_\n";
        sleep 1;
    }
    close(FILE);
    return 1;
}
Quite simply, you can't.
$thr->detach()
Makes the thread unjoinable, and causes any eventual return value to be discarded. When the program exits, any detached threads that are still running are silently terminated.
But you could do:
threads->new(\&sub1)->join();
But that hardly buys you much, because threads are mainly meant to let you do other things while you continue doing something yourself. There is little reason to stop the main line of execution and then run a thread, when you could just as well have called sub1() directly.
You can't do it with threads, but you could fork and let the parent process exit without waiting for the child process to finish:
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: keeps running after the parent has exited
    print "child proc\n";
    sleep 5;
    print "child, 5 seconds older\n";
    exit(0);
} else {
    # parent: exits immediately
    print "parent proc. goodbye\n";
}
I don't think this is a very good approach, though, since you've now lost control over the child process. Why not just run the work in the main script and put the whole process in the background?
Edit: This of course also has the potential to generate zombie processes, which will consume system resources (and brains).
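If the parent does hang around for a while instead of exiting straight away, one hedged way to avoid those zombies, assuming you never need the child's exit status, is to let the kernel reap children automatically:

# Sketch: have the kernel reap children automatically so a child that
# finishes while the parent is still busy never lingers as a zombie.
# Only appropriate when you don't need the child's exit status.
use strict;
use warnings;

$SIG{CHLD} = 'IGNORE';             # exited children are reaped automatically

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    sleep 2;                       # child: placeholder background work
    print "child done\n";
    exit(0);
}

sleep 5;                           # parent: stays busy past the child's exit
print "parent done, no zombie left behind\n";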
