How to handle a hang scenario in Perl after accessing a file (Linux)?

I need to access a file that is on an NFS mount path.
After I access it, I expect to see a hang. If it hangs, my scenario passes.
If I see a "permission denied" error or the access succeeds, the scenario fails.
How do I handle the hang? After the hang, how do I exit/kill that operation and proceed with my program's next set of steps? Can I do something like if ($result = sleep(10)) { success }?
This is what I am currently doing. The subroutine takes the command to execute and the file path.
sub access_timeout($$) {
    my $cmd       = shift;
    my $file_path = shift;

    print(qq{Running the command "$cmd $file_path" on the client});

    # Here, I need to handle the hang. The hang is the expected case here,
    # something like: if ($result = sleep(10)) { success }
    my $output = $client->execute(command => qq{$cmd $file_path && echo _OK_});
    if ($output =~ /(.*)_OK_/s) {
        croak(qq{Expected a hang, access should not work});
    } elsif ($output =~ /permission denied/s || $output =~ /No such file or directory/s) {
        croak(qq{Expected a hang but there is a response});
    }
}

Try alarm. It delivers a SIGALRM signal after the given number of seconds, which you can catch to abort the hung operation; see perldoc -f alarm for the rest of the details.
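A minimal sketch of the pattern, assuming the $client, $cmd and $file_path variables from the subroutine above; the 10-second threshold is illustrative:

my $timed_out = 0;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };    # "\n" stops die appending " at line..."
    alarm(10);                                     # deliver SIGALRM after 10 seconds
    my $output = $client->execute(command => qq{$cmd $file_path && echo _OK_});
    alarm(0);                                      # got an answer, cancel the alarm
    croak(qq{Expected a hang, but got a response: $output});
};
if ($@) {
    die $@ unless $@ eq "timeout\n";               # rethrow anything unexpected
    $timed_out = 1;                                # the hang we wanted: scenario passes
}

Two caveats: with Perl's safe signals, a call stuck inside XS code may not be interruptible at all, and alarm only interrupts the local call, so if $client->execute drives a remote machine the remote process may still need cleaning up after the timeout.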

Related

Mojolicous: Limiting number of Promises / IOLoop->subprocess

I'm using Mojolicious non-blocking methods (Promises) to request data from external systems. 1) I'd like to notify the user immediately that the process has started; 2) I'd like to scale this program.
The code below works for a small set of numbers (a few hundred); with more numbers, I get the error [error] Can't create pipe: Too many open files at /path/lib/perl5/Mojo/IOLoop.pm line 156. Question 1) How can I limit the number of Promises I spawn (the map in my code below)?
#!/usr/bin/env perl
use Mojolicious::Lite;
use Mojolicious::Plugin::TtRenderer;

sub isPrime
{
    my ($n) = @_;
    my $e = sqrt($n);
    for (my $i = 2; $i <= $e; $i++) {    # <= so perfect squares like 9 are caught
        return 0 if $n % $i == 0;
    }
    return 1;
}

sub makeApromise
{
    my ($number) = @_;
    my $promise = Mojo::Promise->new;
    Mojo::IOLoop->subprocess(
        sub {    # first callback is executed in subprocess
            my %response;
            # Simulate a long computational process
            $response{'number'}  = $number;
            $response{'isPrime'} = isPrime($number);
            return \%response;
        },
        sub {    # second callback resolves promise with subprocess result
            my ($self, $err, @result) = @_;
            return $promise->reject($err) if $err;
            $promise->resolve(@result);
        },
    );
    return $promise;
}
plugin 'tt_renderer';    # automatically render *.html.tt templates

any '/' => sub {
    my ($self) = @_;
    my $lines = $self->param('textarea');
    if ($lines) {
        my @numbers;
        foreach my $number (split(/\r?\n/, $lines)) {
            push(@numbers, $number) if $number =~ /^\d+$/;
        }
        if (@numbers) {
            ####################################
            ### This is the problem below... ###
            my @promises = map { makeApromise($_) } @numbers;
            ####################################
            # MojoPromise Wait
            Mojo::Promise->all(@promises)
                ->then(sub {
                    my @values = map { $_->[0] } @_;
                    foreach my $response (@values) {
                        #print STDERR $response->{'number'}, " => ", $response->{'isPrime'}, "\n";
                        # Prepare email...
                    }
                    # Send an email...
                })
                #->wait # Don't wait? I want to tell the user to wait for an email as quickly as possible...
                if @promises;
        }
        $self->stash(done => "1",);
    }
    $self->render(template => 'index', format => 'html', handler => 'tt');
};
app->start;
__DATA__
__DATA__
@@ index.html.tt
<!DOCTYPE html>
<html lang="en">
<head>
<title>Make A Promise</title>
</head>
<body>
[% IF done %]
<h3>Thank you! You will receive an email shortly with the results.</h3>
[% ELSE %]
<h3>Enter numbers...</h3>
<form role="form" action="/" method="post">
<textarea name="textarea" rows="5" autofocus required></textarea>
<button type="submit">Submit</button>
</form>
[% END %]
</body>
</html>
I commented out the wait; however, it appears the code is still blocking. Question 2) How can I notify the user immediately that the process has already started? (i.e. when I stash the done variable)
The problem isn't the number of promises but the number of subprocesses. One way to limit this is to simply limit how many you create at a time in your program logic. Instead of spawning them all at once in a map, set a limit and retrieve that many from @numbers (perhaps using splice) and spawn those subprocesses; create an ->all promise that waits on those and attach a ->then to that promise to retrieve your next chunk of numbers, and so on.
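A rough sketch of that chunking approach, reusing the makeApromise function from the question; the process_chunk name and the limit of 20 are made up for illustration:

use Mojo::Promise;

# Process @numbers in chunks of at most $limit concurrent subprocesses.
# Each chunk is only started once the previous chunk's promises resolve.
my $limit = 20;

sub process_chunk {
    my @remaining = @_;
    return Mojo::Promise->resolve unless @remaining;
    my @chunk    = splice @remaining, 0, $limit;
    my @promises = map { makeApromise($_) } @chunk;
    return Mojo::Promise->all(@promises)->then(sub {
        my @values = map { $_->[0] } @_;
        # ... collect this chunk's results here ...
        return process_chunk(@remaining);    # chain the next chunk
    });
}

process_chunk(@numbers)->then(sub {
    # All chunks done: send the email...
});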
Another option is to use fmap_concat from Future::Utils, which can take care of the rate-limiting code by having you provide the maximum number of outstanding Futures. Your promise-returning function can apply Mojo::Promise::Role::Futurify to chain a following Future to use in this manner.
#!/usr/bin/env perl
use Mojolicious::Lite;
use Mojo::File 'path';
use Mojo::IOLoop;
use Mojo::Promise;
use Future::Utils 'fmap_concat';

get '/' => sub {
    my $c = shift;
    my $count = $c->param('count') // 0;
    my @numbers = 1..$count;
    if (@numbers) {
        my $result_f = fmap_concat {
            my $number = shift;
            my $p = Mojo::Promise->new;
            Mojo::IOLoop->subprocess(sub {
                sleep 2;
                return $number+1;
            }, sub {
                my ($subprocess, $err, @result) = @_;
                return $p->reject($err) if $err;
                $p->resolve(@result);
            });
            return $p->with_roles('Mojo::Promise::Role::Futurify')->futurify;
        } foreach => \@numbers, concurrent => 20;
        $result_f->on_done(sub {
            my @values = @_;
            foreach my $response (@values) {
                $c->app->log->info($response);
            }
        })->on_fail(sub {
            my $error = shift;
            $c->app->log->fatal($error);
        })->retain;
        $c->stash(done => 1);
    }
    $c->render(text => "Processing $count numbers\n");
};
app->start;
As for the wait method, this does nothing when the event loop is already running, which in a webapp response handler it will be, if you started the application in a Mojolicious daemon (as opposed to a PSGI or CGI server which don't support asynchronous responses). The ->stash and ->render calls outside of the callbacks will be run immediately after setting up the subprocesses. Then the response handler will complete, and the event loop will have control again, which will fire the appropriate ->then callbacks once the promises resolve. The render should not be waiting for anything beyond the setting up of subprocesses; since you said there may be hundreds, that could be the slowdown you're experiencing. Make sure you are using Mojolicious 7.86 or newer as Subprocess was changed so the fork will not happen until the next tick of the event loop (after your response handler has completed).
I'll also note that Subprocesses aren't really designed for this; they're designed for executing slow code that still returns an eventual result to the browser in a response (and Mojolicious::Plugin::Subprocess is nice for this use case). One problem I can see is that if you restart the application, any still pending subprocesses will just be ignored. For jobs that you want to set off and forget, you might consider a job queue like Minion which has great integration into Mojolicious apps, and runs via a separate worker process.
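For illustration, a hedged sketch of the Minion approach, assuming the Minion plugin with a backend such as Minion::Backend::SQLite is installed and reusing the isPrime function from the question; the check_primes task name is made up:

use Mojolicious::Lite;

# Register a background task with Minion and enqueue it from the route,
# so the response returns to the browser immediately.
plugin Minion => {SQLite => 'sqlite:minion.db'};

app->minion->add_task(check_primes => sub {
    my ($job, @numbers) = @_;
    my %results = map { $_ => isPrime($_) } @numbers;
    # ... prepare and send the email here ...
    $job->finish(\%results);
});

any '/' => sub {
    my $c = shift;
    my @numbers = grep { /^\d+$/ } split /\r?\n/, $c->param('textarea') // '';
    $c->minion->enqueue(check_primes => [@numbers]) if @numbers;
    $c->render(text => "Thank you! You will receive an email shortly.\n");
};

app->start;

The jobs are then executed by a separate worker process (started with: perl app.pl minion worker), so pending work survives a restart of the web frontend.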

expect error handling - spawn id not open

I'm writing an expect script which logs in to hundreds of routers and changes their config.
My problem is that there is a bug in the routers' firmware which causes them to close the connection after the password is sent.
If I log in again, it works perfectly (so only the first login after a reboot causes the exception).
When the connection is closed, the expect script is terminated.
I would like to gracefully catch the exception and try again.
The code which fails is this part:
# go through each IP
for {set i $start} {$i <= $end} {incr i} {
    set ip "10.$octet2.$i.x"
    puts "\n\n\n#### doing $ip...\n" ; flush stdout
    # log in to the IP
    spawn ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l $user $ip
    expect {
        "continue connecting (yes/no)?" { send "yes\r" ; exp_continue }
        "login as: "                    { send "$user\r" ; exp_continue }
        "Password: "                    { send "$pwd\r" }
        "No route to host"              { continue }
        timeout                         { continue }
    }
    # execute commands from file
    foreach c "$commands" { eval $c }
}
The error I get looks like this:
Password:
Connection to 10.x.x.x closed by remote host.
Connection to 10.x.x.x closed.
send: spawn id exp11 not open
while executing
"send "exit\r""
("eval" body line 1)
invoked from within
"eval $c "
("foreach" body line 1)
invoked from within
"foreach c "$commands" { eval $c }"
("for" body line 18)
invoked from within
"for {set i $start} {$i <= $end} {incr i} {
set ip "10.$octet2.$i.x"
puts "\n\n\n#### doing $ip...\n" ; flush stdout
# log in to the IP
spa..."
(file "./multido.exp" line 39)
Any help is really appreciated!
You can catch the exception using the Tcl command catch to surround the command that might error. You would extend your code's inner loop to resemble this:
set tryrun 1
while {$tryrun} {
    spawn ssh ...
    expect ...
    set tryrun 0
    foreach c "$commands" {
        if {[catch {eval $c} result]} {
            puts "failed: $result"
            set tryrun 1
        }
    }
}
Perhaps a simpler solution would be to look for the pattern "closed by remote host" in your expect, and use this to repeat a similar loop, as in the sketch below.
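A rough sketch of that idea, assuming a $prompt variable that matches the router's shell prompt (the variable is illustrative):

# Retry the login whenever the firmware bug closes the connection
# right after the password is sent.
set logged_in 0
while {!$logged_in} {
    spawn ssh -o StrictHostKeyChecking=no -l $user $ip
    expect {
        "Password: "            { send "$pwd\r" ; exp_continue }
        "closed by remote host" { continue }  ;# bug hit: spawn again
        -re $prompt             { set logged_in 1 }
        timeout                 { continue }
    }
}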

Thread-safe alternative to File::Tee?

I wanted to implement some logging for a threaded script I have, and I came across File::Tee. However, when attempting to install the module with ppm on a Windows box, it's not found (and according to ActiveState, it is not supported on Windows).
I really liked that you could lock file access though, by doing something like:
tee STDOUT, {mode => '>>', open => "$ENV{DOM}\\threaded_build.log", lock => 1};
tee STDERR, {mode => '>>', open => "$ENV{DOM}\\threaded_debug.log", lock => 1};
Is there a cross-platform, thread-safe alternative?
File::Tee takes extra care to handle output generated by external programs run through system or XS code that doesn't go through perlio. I think that's what makes it incompatible with Windows.
IO::Tee is more cross-platform and I don't think making it thread safe would be too hard to do. The sync code in File::Tee just looks like:
flock($teefh, LOCK_EX) if $target->{lock};
print $teefh $cp;
flock($teefh, LOCK_UN) if $target->{lock};
You could accomplish the same thing in IO::Tee by modifying a couple of methods:
use Fcntl ':flock';
no warnings 'redefine';

sub IO::Tee::PRINT
{
    my $self = shift;
    my $ret = 1;
    foreach my $fh (@$self) {
        flock($fh, LOCK_EX);
        undef $ret unless print $fh @_;
        flock($fh, LOCK_UN);
    }
    return $ret;
}

sub IO::Tee::PRINTF
{
    my $self = shift;
    my $fmt = shift;
    my $ret = 1;
    foreach my $fh (@$self) {
        flock($fh, LOCK_EX);
        undef $ret unless printf $fh $fmt, @_;
        flock($fh, LOCK_UN);
    }
    return $ret;
}
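For completeness, a small usage sketch assuming the redefined methods above have been loaded; the log file name is illustrative:

use IO::Tee;

# Duplicate output to the screen and an append-mode log file.
open my $log, '>>', 'threaded_build.log' or die "Cannot open log: $!";
my $tee = IO::Tee->new(\*STDOUT, $log);

# Each print now flocks every underlying handle around the write,
# so lines from different threads or processes don't interleave.
print $tee "build step finished\n";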

Threads getting stuck in ssh connection (perl)

I'm working on a script whose idea is to create threads that simultaneously go through a list of machines and check for things. It appears that when a thread opens its separate session using "ssh ......" it gets stuck and I can't kill it. The threads also have a timer which doesn't seem to be working.
Here is the code:
sub call_cmd {
    my $host = shift;
    my $cmd  = shift;
    my $command = $cmd;
    my $message;
    open( DIR, "$cmd|" ) || die "No cmd: $cmd $!";
    while (<DIR>) {
        $message .= $_;
        print "\n $host $message \n";
    }
    close DIR;
    print "I'm here";
}

sub ssh_con($$) {
    my $host = $_[0];
    my $cmd  = "ssh $host -l $_[1]";
    call_cmd($host, $cmd);
}
I get the output message which the ssh returns, but I never get to the next print.
This is the code for creating the threads.
foreach (@machines) {
    my $_ = threads->create(thread_creation, $_);
    $SIG{ALRM} = sub { $_->kill('ALRM') };
    push(@threads, $_);
}

sub thread_creation() {
    my $host = $_;
    eval {
        $SIG{ALRM} = sub { die; };
        alarm(5);
        ssh_con($host, "pblue");
        alarm(0);
    }
}
Output :
home/pblue> perl tsh.pl
ssh XXXXX -l pblue
ssh XXXXX -l pblue
XXXXX Last login: Mon Sep 30 10:39:01 2013 from ldm052.wdf.sap.corp
XXXXX Last login: Mon Sep 30 10:39:01 2013 from ldm052.wdf.sap.corp
Aside from your code being a little odd, I have encountered your issue - specifically in Perl 5.8.8 on RHEL 5.
It seems there's a race condition where, if you spawn two ssh processes from threads simultaneously, they deadlock. The only solution I have found is a workaround whereby you declare:
my $ssh_lock : shared;
And then 'open' your ssh as a filehandle:
my $ssh_data;
{
    lock($ssh_lock);
    open( $ssh_data, "-|", "ssh -n $hostname $command" );
}
# lock now out of scope, so released
while (<$ssh_data>) {
    # do something
}
However this may well be a moot point on newer versions of perl/newer operating systems. I certainly couldn't reproduce it particularly reliably though, and it went away entirely when I started using fork() instead.
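For illustration, a minimal sketch of the fork-based alternative using the CPAN module Parallel::ForkManager; the host list and the concurrency cap of 10 are made up:

use strict;
use warnings;
use Parallel::ForkManager;

my @machines = qw( host1 host2 host3 );
my $pm = Parallel::ForkManager->new(10);    # at most 10 children at once

foreach my $host (@machines) {
    $pm->start and next;                    # parent: move on to the next host
    my $output = qx{ssh -n -o BatchMode=yes $host uptime};
    print "$host: $output";
    $pm->finish;                            # child exits here
}
$pm->wait_all_children;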
That said, your code is doing some rather strange things. Not least that the command you are running is:
ssh $host -l pblue
Which is a valid command, but it will start ssh interactively; because you're multithreading, that will do very strange things with stdin and stdout.
You should also be very careful with signals when multithreading - it doesn't work too well, because of the nature of the inter-process communication. Setting an ALARM signal handler inside each thread, as your code does, won't reliably interrupt a thread that is blocked reading from ssh.
For a similar sort of thing - e.g. running commands via ssh - I've had a degree of success with an approach like this:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;

my @servers_to_check = qw ( hostname1 hostname2 hostname3 hostname4 );
my $num_threads = 10;
my $task_q      = Thread::Queue->new;
my $ssh_lock : shared;

sub worker_thread {
    my ($command_to_run) = @_;
    while ( my $server = $task_q->dequeue ) {
        my $ssh_results;
        {
            lock($ssh_lock);
            my $pid = open( $ssh_results, "-|",
                "ssh -n $server $command_to_run" );
        }
        while (<$ssh_results>) {
            print;
        }
        close($ssh_results);
    }
}

for ( 1 .. $num_threads ) {
    threads->create( \&worker_thread, "df -k" );
}

$task_q->enqueue(@servers_to_check);
$task_q->end;

foreach my $thr ( threads->list ) {
    $thr->join();
}

Perl POE::Wheel::FollowTail running in a thread not modifying global variables

In this program POE::Wheel::FollowTail works well for following the tail of a file; it is running in a separate thread to monitor the progress of a compile job.
Inside the InputEvent handler there's a crude regex to extract compile results, and there everything works fine, but I cannot get any result values to be accessible outside this sub. Even if I put the result variables in the global scope they are not modified.
The program consists of one process running the compile job, another watching the log, and the main loop waiting.
Global scope:
my $Pass = 0;
my $Done = 0;
Then to kick off the monitoring:
threads->create(\&StartWatcher);
Where the watch-log file sub looks like this:
sub StartWatcher
{
    my $logfile = "filename.log";

    # Create the logfile watcher
    POE::Session->create
    (
        inline_states =>
        {
            _start => sub
            {
                $_[HEAP]{tailor} = POE::Wheel::FollowTail->new(
                    Filename   => $logfile,
                    InputEvent => "got_log_line",
                );
            },
            got_log_line => sub
            {
                $Pass += () = $_[ARG0] =~ /^\d+.*vcproj \- 0 error\(s\), \d+ warning\(s\)/g;
                $Done += () = $_[ARG0] =~ /^\d+.*vcproj \- \d+ error\(s\), \d+ warning\(s\)/g;
                print "POE InputEvent Pass: $Pass, Done: $Done\n"; # Debug output
            },
        }
    );
    POE::Kernel->run();
}
The $logfile is being written by a Visual Studio compile job started using Win32::Process::Create and the main Perl execution is sitting in this loop waiting for the compiler to terminate, and producing a status output every second.
while ('true')
{
    $ProcessObj->Wait(100);    # milliseconds wait
    $ProcessObj->GetExitCode($exitcode);
    if ( $exitcode == STILL_ACTIVE )
    {
        print "Compiling... [$Done/$Count] Pass: $Pass Failed: $failed"
            if ($RunCounter++ % 10 == 0);
        next;
    }
    last;
}
The output produced is similar to this:
POE InputEvent Pass: 1, Done: 1
Compiling... [0/91] Pass: 0 Failed: 0
i.e. in the InputEvent handler got_log_line the two global variables have been incremented, yet in the main loop they are still at zero. I realise that I could print the output from the InputEvent handler, but why doesn't it modify the global variables?
What is going wrong?
Threading in perl doesn't work the same way as in other languages; the program space is not shared. On thread creation the current thread is copied into the new one, which is then separate from the parent (each thread has its own instance of the perl interpreter). If you want to communicate between threads, look at threads::shared, Thread::Queue and Thread::Semaphore.
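A minimal example of the difference (variable names are illustrative):

use strict;
use warnings;
use threads;
use threads::shared;

my $plain = 0;
my $shared : shared = 0;

# The thread gets its own copy of $plain, but shares $shared.
threads->create(sub { $plain++; $shared++; })->join;

print "plain = $plain\n";     # still 0: the thread modified its own copy
print "shared = $shared\n";   # 1: updates to shared variables are visible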
Following on from anydot's suggestion above, here's the answer:
Before starting the thread, create a shared queue
use threads;
use Thread::Queue;
use threads::shared;
my $queue:shared = Thread::Queue->new();
In the input event, create some shared data to enqueue
my %data:shared = ();
$data{PASS_VCPRJ_COUNT} = () = $_[ARG0] =~ /^\d+.*vcproj.*0 error.*\d+ warning/g;
$data{DONE_VCPRJ_COUNT} = () = $_[ARG0] =~ /^\d+.*vcproj.*\d+ error.*\d+ warning/g;
$queue->enqueue(\%data) if ($data{DONE_VCPRJ_COUNT} ne 0 || $data{PASS_VCPRJ_COUNT} ne 0);
Then in the screen update code, dequeue it, here non-blocking
if (defined(my $item = $queue->dequeue_nb()))
{
    foreach my $key (keys %$item)
    {
        print "$key $item->{$key}\n";
    }
}
There are other ways, I'm sure but this works for me.
Many thanks.
