Release of flock in case of errors? - linux

Imagine the following Perl code (here in pseudo code):
successfully acquired flock for FILEHANDLER # line 1
some error or maybe simply a call to exit() # line 2
close FILEHANDLER (which also releases the lock) # line 3
In this case I wouldn't release the lock, as the Perl script ends in line 2. In that case, is the lock ever released by the operating system? Does it see "hey, the script that acquired the lock crashed" and release the lock? Does it release the lock immediately? Also, is there one Perl instance running for each script, so that it's clear which script crashed/stopped without releasing the lock?

In that case, is the lock ever released by the operating system?
Does it see "hey, the script that acquired the lock crashed" and release the lock?
Does it release the lock immediately?
All of these questions are system dependent. Perl 5 does not implement file locking itself; it just provides a common interface to flock(2), fcntl(2) locking, or lockf(3), depending on what is available in the OS. There may also be a difference between what happens when a program exits, segfaults, or is killed with SIGKILL.
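As an aside, the magic number 6 passed to flock in the one-liners below is just LOCK_EX | LOCK_NB (2 | 4). A minimal sketch of the same call using the named constants from Fcntl, for readability:
use strict;
use warnings;
use Fcntl qw(:flock);   # imports LOCK_SH, LOCK_EX, LOCK_UN, LOCK_NB

open my $fh, ">", "f" or die $!;
# LOCK_EX | LOCK_NB == 2 | 4 == 6, the numeric flag used below
print flock($fh, LOCK_EX | LOCK_NB) ? "got lock" : "was already locked", "\n";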
A quick test under Linux shows that a lock is removed under normal exit conditions:
$ perl -le 'open my $fh, ">", "f" or die $!; print flock($fh, 6) ? "got lock" : "was already locked", "\n"'
got lock
$ perl -le 'open my $fh, ">", "f" or die $!; print flock($fh, 6) ? "got lock" : "was already locked", "\n"'
got lock
Let's see what happens when we die:
$ perl -le 'open my $fh, ">", "f" or die $!; print flock($fh, 6) ? "got lock" : "was already locked", "\n"; die "died"'
got lock
died at -e line 1.
$ perl -le 'open my $fh, ">", "f" or die $!; print flock($fh, 6) ? "got lock" : "was already locked", "\n"; die "died"'
got lock
died at -e line 1.
To get a segfault, we need access to C; I am using Inline to get it:
$ cat segfault.pl
#!/usr/bin/perl
use strict;
use warnings;
use Inline "C";
open my $fh, ">", "f" or die $!;
print flock($fh, 6) ? "got lock" : "was already locked", "\n";
crash();
__DATA__
__C__
void crash() {
    int* ptr = NULL;
    *ptr = 5;
}
$ perl segfault.pl
got lock
Segmentation fault
$ perl segfault.pl
got lock
Segmentation fault
And finally, here is what happens when a program is sent SIGKILL:
$ cat fork.pl
#!/usr/bin/perl
use strict;
use warnings;
$SIG{CHLD} = "IGNORE"; #auto-reap children
die "could not fork: $!" unless defined(my $pid = fork);
unless ($pid) {
    #child
    open my $fh, ">", "f" or die $!;
    print flock($fh, 6) ? "got lock" : "was already locked", "\n";
    sleep(100);
    exit;
}
kill 9, $pid;
die "could not fork: $!" unless defined($pid = fork);
unless ($pid) {
    #child
    open my $fh, ">", "f" or die $!;
    print flock($fh, 6) ? "got lock" : "was already locked", "\n";
    exit;
}
$ perl fork.pl
got lock
got lock
From these experiments, we can see that the lock is released in Linux for each of the cases you were concerned with.
Also, is there one perl instance running for each script, so that it's clear which script crashed/stopped without releasing the lock?
Yes, Perl 5 has one perl process per script. Even if you fork, the child gets its own perl process. Threading does not provide a separate perl process.
Note: if a parent process acquires a lock and does not give it up before forking, then the child will hold the same lock even after the parent exits.
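A minimal sketch of that inheritance, under the same Linux flock semantics as the experiments above: the forked child shares the parent's lock through the duplicated file descriptor, so the lock outlives the parent.
use strict;
use warnings;
use Fcntl qw(:flock);

open my $fh, ">", "f" or die $!;
flock($fh, LOCK_EX) or die $!;                  # parent takes the lock
die "could not fork: $!" unless defined(my $pid = fork);
unless ($pid) {
    sleep 100;   # child holds the same lock via the inherited descriptor
    exit;
}
exit;            # parent exits; the file stays locked until the child releases it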

When the program exits, the OS automatically releases all locks acquired by the program and closes all files opened by the program.

Related

Perl: How to pass IPC::Open3 redirected STDOUT/STDERR fhs

I'm trying to capture the output my Perl code generates, both from print and similar statements and from external commands.
Due to design constraints I can't use solutions like Capture::Tiny. I need to forward the output to the buffer variable as soon as it is generated and I need to be able to differentiate between STDOUT and STDERR. Ideally a solution for external commands would essentially work just like system apart from being able to capture STDOUT and STDERR instead of printing them.
My code is supposed to:
Save the old STDOUT/STDERR file handles.
Create new ones for both STDERR and STDOUT.
Redirect all the output to this place.
Print a couple of things.
Restore the old filehandles.
Do something with the captured output, e.g. print it.
However, I'm unable to capture the output generated by external commands; I can't do it with IPC::Run3 or with IPC::Open3.
#!/usr/bin/perl -CSDAL
use warnings;
use strict;
use IPC::Open3;
#use IPC::Run3;
# Save old filehandles
open(my $oldout, ">&STDOUT") or die "Can't dup STDOUT: $!";
open(my $olderr, ">&STDERR") or die "Can't dup STDERR: $!";
my $buffer = "";
close(STDOUT);
close(STDERR);
open(STDOUT, '>', \$buffer) or die "Can't redirect STDOUT: $!";
*STDERR = *STDOUT; # In this example STDOUT and STDERR are printed to the same buffer.
print "1: Test\n";
#run3 ["date"], undef, \*STDOUT, \*STDERR; # This doesn't work as expected
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", "date");
waitpid($pid,0); # Nor does this.
print STDERR "2: Test\n";
open(STDOUT, ">&", $oldout) or die "Can't dup \$oldout: $!";
open(STDERR, ">&", $olderr) or die "Can't dup \$olderr: $!";
print "Restored!\n";
print $buffer;
Expected result:
Restored!
1: Test
Mo 25. Mär 13:44:53 CET 2019
2: Test
Actual result:
Restored!
1: Test
2: Test
I don't have a solution to offer you, however I can provide some explanations as to the behavior you are seeing.
First, IPC::Open3 is not supposed to work when your filehandles are variables; see this question for more explanations.
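The short version: a handle opened onto a scalar is a Perl-level construct with no underlying OS file descriptor, and open3 redirects by duplicating descriptors. You can see this for yourself with fileno (a quick check, assuming a reasonably modern perl):
use strict;
use warnings;

open my $mem, ">", \my $buffer or die $!;
print fileno($mem), "\n";   # typically -1: there is nothing for open3 to dup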
Now, why isn't IPC::Run3 working? First, notice that if you don't redirect STDERR and run
run3 ["date"], undef, \$buffer, { append_stdout => 1 };
instead of
run3 ["date"], undef, \*STDOUT;
then it works as expected (you need to add { append_stdout => 1 }, or your previous output to $buffer will be overwritten).
To understand what's happening, in your program, after
open(STDOUT, '>', \$buffer) or die "Can't redirect STDOUT: $!";
Add
print STDERR ref(\$buffer), "\n";
print STDERR ref(\*STDOUT), "\n";
Which will print
SCALAR
GLOB
That's exactly what IPC::Run3::run3 will do to know what to do with the "stdout" you give it (see the source: _fh_for_child_output, which is called by run3):
if it's a scalar, then a temporary file is used (the corresponding line is $fh = $fh_cache{$what} ||= tempfile, where tempfile is a function from File::Temp).
On the other hand, when stdout is a GLOB (or tied to IO::Handle), that filehandle is used directly (that's this line of code).
Which explains why when you call run3 with \$buffer it works, but not with \*STDOUT.
When redirecting STDERR as well, and calling
run3 ["date"], undef, \$buffer, \$buffer, { append_stdout => 1, append_stderr => 1 };
things start to appear weird. I don't understand what's happening, but I'll share what I found here, and hopefully someone will make sense of it.
I modified the source of IPC::Run3 and added
open my $FP, '>', 'logs.txt' or die "Can't open: $!";
at the beginning of the sub run3. When running, I only see
Restored!
1: Test
on STDOUT (my terminal), but logs.txt contains the date (something along the lines of Mon Mar 25 17:49:44 CET 2019).
Investigating a bit reveals that fileno $FP returns 1 (which, unless I'm mistaken, is usually STDOUT; but you closed it, so I'm not so surprised that its descriptor got reused), and fileno STDOUT returns 2 (this might depend on your Perl version and other open filehandles, though). What seems to be happening is that system assumes STDOUT is file descriptor 1 and thus prints to $FP instead of STDOUT (I'm just guessing, though).
Please feel free to comment/edit if you understand what's happening.
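As a side note, the descriptor reuse itself is easy to reproduce: the kernel hands out the lowest free descriptor, so after closing STDOUT the next open gets fd 1 (a quick check, assuming Linux):
$ perl -we 'close STDOUT; open my $fh, ">", "/tmp/x" or die; print STDERR fileno($fh)'
1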
I ended up with the following code:
#!/usr/bin/perl -CSDAL
use warnings;
use strict;
use IPC::Run3;
use IO::Scalar;
use Encode;
use utf8;
# Save old filehandles
open(my $oldout, ">&STDOUT") or die "Can't dup STDOUT: $!";
open(my $olderr, ">&STDERR") or die "Can't dup STDERR: $!";
open(my $FH, "+>>:utf8", undef) or die $!;
$FH->autoflush;
close(STDOUT);
close(STDERR);
open(STDOUT, '>&', $FH) or die "Can't redirect STDOUT: $!";
open(STDERR, '>&', $FH) or die "Can't redirect STDERR: $!";
print "1: Test\n";
run3 ["/bin/date"], undef, $FH, $FH, { append_stdout => 1, append_stderr => 1 };
print STDERR "2: Test\n";
open(STDOUT, ">&", $oldout) or die "Can't dup \$oldout: $!";
open(STDERR, ">&", $olderr) or die "Can't dup \$olderr: $!";
print "Restored!\n";
seek($FH, 0, 0);
while (<$FH>) {
    # No idea why this is even required
    print Encode::decode_utf8($_);
}
close($FH);
This is far from what I originally wanted, but appears to be working at least.
The issues I have with this are:
I need an anonymous file handle, which creates clutter on the hard disk.
For some reason I need to fix the encoding manually.
Thank you very much to the people who dedicated their time helping me out here.
Is there a reason you need to use the parent's STDOUT and STDERR? IPC::Open3 is easily capable of redirecting the child's STDOUT and STDERR to unrelated handles in the parent which you can read from.
use strict;
use warnings;
use IPC::Open3;
my $pid = open3 undef, my $outerr, undef, 'date';
my $output = do { local $/; readline $outerr };
waitpid $pid, 0;
my $exit = $? >> 8;
This will read STDOUT and STDERR together, if you want to read them separately you need to pass my $stderr = Symbol::gensym as the third argument (as shown in the IPC::Open3 docs), and use a non-blocking loop to avoid deadlocking when reading both handles. IO::Async::Process or similar can fully automate this for you, but IPC::Run3 provides a much simpler solution if you only need to store the output in scalar variables. IPC::Run3 and Capture::Tiny can also both easily be fatpacked for deployment in scripts.
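For comparison, a minimal IPC::Run3 sketch that captures the two streams into separate scalars, with no handle juggling in the parent:
use strict;
use warnings;
use IPC::Run3;

# scalar targets are backed by temp files internally, so this works
# regardless of what STDOUT/STDERR currently point to
run3 ['date'], undef, \my $out, \my $err;
my $exit = $? >> 8;
print "stdout: $out";
print "stderr: $err" if length $err;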
This is not an answer yet, but it seems that open3 requires STDOUT to be a real file handle (not localized away) at the time you call open3, for example:
use feature qw(say);
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';
{
    local *STDOUT;  # <-- if you comment out this line, open3 works as expected
    my ($chld_in, $chld_out);
    my $chld_err = gensym;
    my $pid;
    eval {
        $pid = open3($chld_in, $chld_out, $chld_err, "date");
    };
    if ($@) {
        say "IPC::Open3::open3 failed: '$@'";
    }
    print "-> $_" for <$chld_out>;
    waitpid $pid, 0;
    # say "Cannot print to invalid handle..";
}
say "ok";
Output:
ma. 25. mars 16:00:01 +0100 2019
ok
Note that the arrow -> at the beginning of the date line is missing, so nothing could be read from $chld_out in this case. However, if I comment out the line:
local *STDOUT;
The output is:
-> ma. 25. mars 16:01:10 +0100 2019
ok

Modifying output of child process to add a timestamp

I have a perl script which forks child processes.
sub my_exec {
    my ($args, $stdout, $stderr) = @_;
    my $processes = fork();
    die("Can't fork") unless defined($processes);
    if ($processes == 0) {
        if (defined $stdout) {
            close(STDOUT);
            open STDOUT, $stdout;
        }
        if (defined $stderr) {
            close(STDERR);
            open STDERR, $stderr;
        }
        exec @$args;
    } else {
        ...
    }
}
My main issue is that I want to add a timestamp to every line of output to stderr. I was wondering if it could be done here. As you can see, stderr isn't always changed. I am assuming I could do it via some sort of pipe? I would also like the parent script (a daemon with both stdout and stderr redirected to files) to use timestamps as well.
Thanks
Say you write my_exec as below.
sub my_exec {
    my ($args, $stdout, $stderr) = @_;  # caller untaints
    open my $oldout, ">&STDOUT" or die "$0: save STDOUT: $!";
    my $pid = open(my $pipe, "-|") // die "$0: fork: $!";
    if ($pid) {
        if (defined $stderr) {
            open STDERR, ">", $stderr or die "$0: open: $!";
        }
        while (<$pipe>) {
            print STDERR scalar(localtime), ": ", $_;
        }
        close $pipe or die $! ? "$0: error closing $args->[0] pipe: $!"
                              : "$0: exit status " . ($? >> 8) . " from $args->[0]";
    }
    else {
        open STDERR, ">&STDOUT" or die "$0: pipe STDERR: $!";
        if (defined $stdout) {
            open STDOUT, ">", $stdout or die "$0: open: $!";
        }
        else {
            open STDOUT, ">&", $oldout or die "$0: restore STDOUT: $!";
        }
        exec @$args or die "$0: exec @$args: $!";
    }
}
The main point is described in the documentation on open:
If you open a pipe on the command "-" (that is, specify either |- or -| with the one- or two-argument forms of open), an implicit fork is done, so open returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) 0. Use defined($pid) or // to determine whether the open was successful.
The point of the implicit fork is setting up a pipe between the parent and child processes.
The filehandle behaves normally for the parent, but I/O to that filehandle is piped from the STDOUT of the child process. In the child process, the filehandle isn’t opened—I/O happens from the new STDOUT.
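For illustration, a minimal sketch of that implicit fork, stripped of any redirection:
use strict;
use warnings;

my $pid = open(my $pipe, "-|") // die "cannot fork: $!";
if ($pid) {                       # parent: reads whatever the child prints
    print "parent saw: $_" while <$pipe>;
    close $pipe;
}
else {                            # child: its STDOUT feeds the pipe
    print "hello from the child\n";
    exit;
}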
That is almost perfect, except you want to modify the standard error, not the standard output.
This means we need to save the parent’s STDOUT so the child can restore it. This is what is happening with $oldout.
Duping the child’s (redirected) STDOUT onto its STDERR arranges for the underlying daemon’s standard error to run through the pipe, which the parent reads, modifies, and outputs.
One slightly tricky point is where the redirections are processed. If the caller wants to redirect STDOUT, that needs to happen in the child. But to redirect STDERR, the parent needs to do so because this gives the parent the opportunity to modify the stream.
The code for a complete example is of the following form. You mentioned a daemon, so I enabled Perl’s dataflow analysis known as taint mode.
#! /usr/bin/perl -T
use strict;
use warnings;
use v5.10.0; # for defined-or //
$ENV{PATH} = "/bin:/usr/bin";
sub my_exec {
# paste code above
}
#my_exec ["./mydaemon"];
#my_exec ["./mydaemon"], "my-stdout";
my_exec ["./mydaemon"], "my-stdout", "my-stderr";
With a simple mydaemon of
#! /usr/bin/env perl
print "Hello world!\n";
warn "This is a warning.\n";
die "All done.\n";
the output goes to separate files.
1. my-stdout:
Hello world!
2. my-stderr:
Tue Nov 5 17:58:20 2013: This is a warning.
Tue Nov 5 17:58:20 2013: All done.
./wrapper: exit status 255 from ./mydaemon at ./wrapper line 23.
fork is so low level. IPC::Open3 is the minimum you should use.
use IPC::Open3 qw( open3 );

open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
open(local *CHILD_STDOUT, '>', $stdout) or die $!;
my $pid = open3(
    '<&CHILD_STDIN',
    '>&CHILD_STDOUT',
    \local *CHILD_STDERR,
    $prog, @$args,
);
open(my $stderr_fh, '>', $stderr) or die $!;
while (<CHILD_STDERR>) {
    my $ts = localtime() . ": ";  # timestamp prefix for each line
    print $stderr_fh $ts . $_;
}
waitpid($pid, 0);

how to use O_ASYNC and fcntl in perl?

I want to use the O_ASYNC option so that when the pipe becomes readable, the SIGIO handler will run. But the following code does not work. Can anyone help me?
#!/bin/env perl
use Fcntl;
$SIG{IO} = sub {
    print "catch SIGIO!\n";
};
my $flags=0;
open(FH,"-|","sleep 4 ;echo aaa") or die "$!";
fcntl(FH,F_GETFL,$flags) or die "$!";
fcntl(FH,F_SETFL,$flags | O_NONBLOCK | O_ASYNC) or die "$!";
sleep(5);
print "complete\n";
My Perl version is 5.16.1; the operating system is RedHat 5u4, kernel 2.6.18, x86_64.
Under Linux, you must both request asynchronous notification (O_ASYNC) and specify a recipient (F_SETOWN). So, you need only add one line to your example to make it work:
#!/bin/env perl
use Fcntl;
$SIG{IO} = sub {
    print "catch SIGIO!\n";
};
my $flags=0;
open(FH,"-|","sleep 4 ;echo aaa") or die "$!";
fcntl(FH,F_GETFL,$flags) or die "$!";
fcntl(FH,F_SETFL,$flags | O_NONBLOCK | O_ASYNC) or die "$!";
fcntl(FH,F_SETOWN,0 + $$) or die "$!"; # <-- Note that we force $$ to be numeric
sleep(5);
print "complete\n";
Running the above:
$ perl so-12640993.pl
catch SIGIO!
complete
SIGIO-based asynchronous I/O is edge-triggered, not level-triggered.
Before any filehandle will send you a SIGIO you must first "arm" it. To do that, you need to perform whatever the operation is (sysread() in this case) until you get an undef/EAGAIN. At that point the filehandle will be armed for SIGIO and will send a signal when it is next read-ready. You can then read it until it yields EAGAIN, which will arm it once more.
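A sketch of that arming step, using a hypothetical drain_and_arm helper and assuming the filehandle has already been set O_NONBLOCK | O_ASYNC and F_SETOWN as in the answer above: drain with sysread until it fails with EAGAIN, at which point the next read-ready transition will raise SIGIO.
use strict;
use warnings;
use Errno qw(EAGAIN);

sub drain_and_arm {
    my ($fh) = @_;
    while (1) {
        my $n = sysread($fh, my $buf, 4096);
        if (defined $n) {
            last if $n == 0;          # EOF
            print "read: $buf";
        }
        else {
            last if $! == EAGAIN;     # armed: the next readiness sends SIGIO
            die "sysread: $!";
        }
    }
}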

Perl: Signals and Threads. How to kill thread with qx() inside

I have a script that parses a log and finds errors and warnings.
I want a user-friendly way to view this log, so I use Notepad.
Here is the code:
use v5.16;
use strict;
use warnings;
use Win32::Clipboard;
use threads;
use utf8;
my $kp = Win32::Clipboard->new();
my $output = shift || "out_log.txt";
#my ($input, $output) = @ARGV;
#open my $ih, "<", $input or die "can't open input file with log\n";
open my $oh, ">", $output or die "can't open output file with log\n";
my @alls = split /\n/, $kp->Get();
for my $k (0 .. $#alls) {
    $_ = $alls[$k];
    if (/^ERR.+|^WARN.+/) {
        print {$oh} qq(at position $k --> ) . $_ . "\n";
    }
}
my $thread = threads->create(sub {
    $SIG{INT} = sub { die "All good\n"; };
    qx(notepad $output);
});
print qq(type 'y' to quit);
do {
    print "want to quit?\n>";
    chomp;
    do {
        say "I will kill this thread";
        $thread->kill('INT') if defined($thread);
        say "and delete output";
        unlink $output;
        exit(0);
    } if (m/y/);
} while (<>);
It falls down when I try to kill the thread that runs notepad.
How can I do this using signals and threads? Is it possible?
Please share your ideas for a solution.
Thanks!
This isn't working because your SIGINT never gets passed to notepad, so notepad never gets closed. (And that handler probably never gets processed.)
You need to approach this differently. Look at Win32::Process for some examples of how to spawn/kill a notepad process.
use Win32::Process;

my $ProcessObj;
Win32::Process::Create( $ProcessObj,
    "C:\\Windows\\system32\\notepad.exe",
    "notepad", 0, NORMAL_PRIORITY_CLASS, "." )
    or die $!;
And then you can use
$ProcessObj -> Kill(1);
I'd suggest using Thread::Semaphore or some sort of shared variable to decide if you want to kill your notepad.
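Putting those pieces together, a rough sketch (untested, assuming the stock notepad.exe path): spawn Notepad on the output file with Win32::Process and kill the process object directly, instead of signalling a thread.
use strict;
use warnings;
use Win32::Process;   # exports NORMAL_PRIORITY_CLASS by default

my $output = "out_log.txt";
Win32::Process::Create(my $proc,
    "C:\\Windows\\system32\\notepad.exe",
    "notepad $output", 0, NORMAL_PRIORITY_CLASS, ".")
    or die $!;

print "want to quit?\n> ";
while (<STDIN>) {
    if (/y/) {
        $proc->Kill(1);    # closes the notepad window; no signal handling needed
        unlink $output;
        last;
    }
    print "> ";
}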

Read unbuffered data from pipe in Perl

I am trying to read unbuffered data from a pipe in Perl. For example, in the program below:
open FILE,"-|","iostat -dx 10 5";
$old=select FILE;
$|=1;
select $old;
$|=1;
foreach $i (<FILE>) {
    print "GOT: $i\n";
}
iostat spits out data every 10 seconds (five times). You would expect this program to do the same. However, instead it appears to hang for 50 seconds (i.e. 10x5), after which it spits out all the data.
How can I get the filehandle to return whatever data is available (in an unbuffered manner), without waiting all the way for EOF?
P.S. I have seen numerous references to this under Windows - I am doing this under Linux.
#!/usr/bin/env perl
use strict;
use warnings;
open(PIPE, "iostat -dx 10 1 |") || die "couldn't start pipe: $!";
while (my $line = <PIPE>) {
    print "Got line number $. from pipe: $line";
}
close(PIPE) || die "couldn't close pipe: $! $?";
If it is fine to wait in your Perl script instead of in the Linux command, this should work.
I don't think Linux will give control back to the Perl script before the command execution has completed.
#!/usr/bin/perl -w
my $j = 0;
while ($j != 5) {
    open FILE, "-|", "iostat -dx 10 1";
    $old = select FILE;
    $| = 1;
    select $old;
    $| = 1;
    foreach $i (<FILE>) {
        print "GOT: $i";
    }
    $j++;
    sleep(5);
}
I have the code below working for me. The key difference is reading with while instead of foreach: foreach evaluates <FILE> in list context, which slurps everything up to EOF before the loop body runs, whereas while reads one line at a time.
#!/usr/bin/perl
use strict;
use warnings;
open FILE,"-|","iostat -dx 10 5";
while (my $old=<FILE>)
{
print "GOT: $old\n";
}
The solutions so far did not work for me with regard to unbuffering (Windows ActiveState Perl 5.10).
According to http://perldoc.perl.org/PerlIO.html, "To get an unbuffered stream specify an unbuffered layer (e.g. :unix ) in the open call:".
So
open(PIPE, '-|:unix', 'iostat -dx 10 1') or die "couldn't start pipe: $!";
while (my $line = <PIPE>) {
    print "Got $line";
}
close(PIPE);
which worked in my case.
