Threads getting stuck in ssh connection (perl) - multithreading

I'm working on a script whose idea is to create threads and simultaneously go through a list of machines and check things. It appears that when a thread opens its separate session using "ssh ......" it gets stuck and I can't kill it. The threads also have a timer which doesn't seem to be working.
Here is the code:
sub call_cmd {
    my $host = shift;
    my $cmd  = shift;
    my $message;
    open( DIR, "$cmd|" ) || die "No cmd: $cmd $!";
    while (<DIR>) {
        $message .= $_;
        print "\n $host $message \n";
    }
    close DIR;
    print "I'm here";
}

sub ssh_con($$) {
    my $host = $_[0];
    my $cmd  = "ssh $host -l $_[1]";
    call_cmd( $host, $cmd );
}
I get the output message which the ssh returns, but I never get to the next print.
This is the code for creating the threads.
foreach (@machines) {
    my $thr = threads->create( \&thread_creation, $_ );
    $SIG{ALRM} = sub { $thr->kill('ALRM') };
    push( @threads, $thr );
}

sub thread_creation {
    my $host = shift;
    eval {
        $SIG{ALRM} = sub { die; };
        alarm(5);
        ssh_con( $host, "pblue" );
        alarm(0);
    };
}
Output :
home/pblue> perl tsh.pl
ssh XXXXX -l pblue
ssh XXXXX -l pblue
XXXXX Last login: Mon Sep 30 10:39:01 2013 from ldm052.wdf.sap.corp
XXXXX Last login: Mon Sep 30 10:39:01 2013 from ldm052.wdf.sap.corp

Aside from your code being a little odd, I have encountered your issue - specifically in Perl 5.8.8 on RHEL 5.
It seems there's a race condition, where if you spawn two ssh processes within a thread simultaneously, they deadlock. The only solution I have found is a workaround whereby you declare:
my $ssh_lock : shared;
And then 'open' your ssh as a filehandle:
my $ssh_data;
{
    lock($ssh_lock);
    open( $ssh_data, "-|", "ssh -n $hostname $command" );
}
# lock out of scope, so released
while (<$ssh_data>) {
    # do something
}
However this may well be a moot point on newer versions of perl/newer operating systems. I certainly couldn't reproduce it particularly reliably though, and it went away entirely when I started using fork() instead.
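For reference, the fork()-based workaround mentioned above can be sketched roughly as follows. This runs the command in a child process via a piped open and times out the read with alarm() in the parent; run_with_timeout() is an illustrative helper, not a standard function, and the timeout values are placeholders.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: run a command in a child process with a hard timeout, instead of
# threads + signals. The child is killed outright if it outlives the timer.
sub run_with_timeout {
    my ( $timeout, @cmd ) = @_;
    my $pid = open( my $fh, "-|", @cmd ) or die "Can't fork: $!";
    my $output = eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm($timeout);
        local $/;                 # slurp everything the child prints
        my $out = <$fh>;
        alarm(0);
        $out;
    };
    if ( !defined $output && $@ eq "timeout\n" ) {
        kill 'KILL', $pid;        # the child is stuck - kill and reap it
        waitpid( $pid, 0 );
        close($fh);
        return undef;
    }
    close($fh);
    return $output;
}

print run_with_timeout( 5, "echo", "hello" ) // "(timed out)\n";
```

Because the ssh runs in its own process rather than a thread, killing it on timeout is straightforward and doesn't disturb the parent's signal handling.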
That said - your code is doing some rather strange things. Not least that the command you are running is:
ssh $host -l pblue
Which is a valid command, but it will start ssh interactively - and because you're multithreading, that will do very strange things with stdin and stdout.
You should also be very careful with signals when multithreading - it doesn't work too well, because of the nature of the inter-thread communication. Setting an ALARM handler in the main thread will not reliably interrupt a blocking operation inside a worker thread.
For a similar sort of thing - e.g. running commands via ssh - I've had a degree of success with an approach like this:
#!/usr/bin/perl
use strict;
use warnings;

use threads;
use threads::shared;
use Thread::Queue;

my @servers_to_check = qw ( hostname1 hostname2 hostname3 hostname4 );
my $num_threads = 10;

my $task_q = Thread::Queue->new;
my $ssh_lock : shared;

sub worker_thread {
    my ($command_to_run) = @_;
    while ( my $server = $task_q->dequeue ) {
        my $ssh_results;
        {
            lock($ssh_lock);
            my $pid = open( $ssh_results, "-|",
                "ssh -n $server $command_to_run" );
        }
        while (<$ssh_results>) {
            print;
        }
        close($ssh_results);
    }
}

for ( 1 .. $num_threads ) {
    threads->create( \&worker_thread, "df -k" );
}

$task_q->enqueue(@servers_to_check);
$task_q->end;

foreach my $thr ( threads->list ) {
    $thr->join();
}

Related

Perl threads, sockets and STDIN interference

I am trying to use Perl threads and sockets to create a simple client/server application. The problem is on the client side, when trying to mix STDIN reading with socket reading in different threads. Note that I succeeded in using Tkx as a workaround for STDIN, but my intention is to build short samples for teaching purposes only, in several programming languages, so I want to keep the program minimal, with no UI, as simple as possible.
Here is the client code, which has the problem:
use strict;
use IO::Socket::INET;
use threads;

our $sock = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 1234,
    Proto    => 'tcp'
) or die "cannot connect to localhost";

my $thr = threads->create( \&msg_proc, $sock );
for ( my $msg; $msg ne "exit"; ) {
    $msg = <STDIN>;
    $msg =~ s/[\r\n\s\0]*\Z//g;
    $sock->send("$msg\n");    # or die "can't send";
}
$sock->close();
print "exit main thread\n";
$thr->join() if $thr->is_running();
print "exit all\n";

sub msg_proc {
    my $svr = shift;
    my $i   = 0;
    while ($svr) {
        sleep(1);
        $svr->send("{slept $i}\n") or die "server closed connection";
        my $svr_msg = <$svr> or die "server closed connection";
        $svr_msg =~ s/[\r\n\s\0]*\Z//g;
        print "From server: <<$i $svr_msg>>\n";
        $i++;
    }
    print "sock exit\n";
}
The problem starts when I remove the $svr->send line in the thread procedure msg_proc. In that case the client fails to read normally from STDIN; the socket read <$svr> appears to interfere with the <STDIN> operation. For some reason these two can't coexist in parallel. Note that the C++ and Java versions of these demos do not have this problem.
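One single-threaded alternative worth considering is multiplexing the handles with IO::Select instead of mixing <STDIN> and <$svr> across threads. A minimal sketch follows; a socketpair stands in for the real server connection here, and on a Unix client you could add \*STDIN to the same select set (note this does not work for console handles on Windows, where select() only supports sockets).

```perl
use strict;
use warnings;
use IO::Select;
use IO::Handle;
use Socket;

# Sketch: service several handles from one thread with IO::Select.
# socketpair() stands in for the server socket of the question.
socketpair( my $server_end, my $client_end, AF_UNIX, SOCK_STREAM, PF_UNSPEC )
    or die "socketpair: $!";
$server_end->autoflush(1);
print {$server_end} "welcome 0\n";    # pretend the server sent a greeting

my $sel = IO::Select->new($client_end);
foreach my $fh ( $sel->can_read(1) ) {    # wait up to 1s for readable handles
    my $line = <$fh>;
    print "From server: $line";
}
```

With this structure the main loop alternates between whichever handle is ready, so no thread ever blocks another.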
This is the code of the server:
use strict;
use IO::Socket::INET;
use threads;

my $sock = IO::Socket::INET->new(
    LocalPort => 1234,
    Proto     => 'tcp',
    Listen    => 1,
    Reuse     => 1
) or die "Could not create socket: $!\n";

print "server started\n";
my $i = 0;
while ( my $s = $sock->accept() ) {
    print "New connection\n";
    my $thr = threads->create( \&client_proc, $s, $i );
    $s->send("welcome $i\n");
    $i++;
}

sub client_proc {
    my $client    = shift;
    my $client_no = shift;
    print "## client $client_no started\n";
    $client->send("hello $client_no\n");
    for ( my $msg; $msg ne "exit"; ) {
        $msg = <$client> or die "client $client_no closed connection";
        $msg =~ s/[\r\n\s\0]*\Z//;
        print "From client $client_no: '$msg' len: ", length($msg), "\n";
        $client->send("client $client_no: $msg\n")
            or die "client $client_no closed connection";
    }
    $client->close();
    print "## client $client_no exit\n";
}
As I understand from here, the use of interpreter-based threads in Perl is officially discouraged. But I can't understand what "interpreter-based threads" actually means, and what exactly the alternative is. Does this mean using threads is discouraged in Perl at all?
Note: I am using Strawberry Perl 5.32.1 64 bit under Windows 11, zip package, no msi. But under ActiveState Perl the problem was identically the same.

In perl socket programming how to send a data from client and receive it from server and how to get number of clients processes and client ID?

How can I get the number of clients connected to the server? Say I have opened 4 terminals and run client.pl 4 times on localhost for testing purposes - how can I get the client ID and the client count in the server script? I am using Ubuntu on VirtualBox, in a multi-threaded environment.
#!/usr/bin/perl
# server
use warnings;
use strict;
use IO::Socket;
use threads;
use threads::shared;

$|++;
print "$$ Server started\n";    # do a "top -p -H $$" to monitor server threads

our @clients : shared;
@clients = ();

my $server = IO::Socket::INET->new(
    Timeout   => 7200,
    Proto     => "tcp",
    LocalPort => 9000,
    Reuse     => 1,
    Listen    => 3
);

my $num_of_client = -1;
while (1) {
    my $client;
    do {
        $client = $server->accept;
    } until ( defined($client) );
    my $peerhost = $client->peerhost();
    print "accepted a client $client, $peerhost, id = ", ++$num_of_client, "\n";
    my $fileno = fileno $client;
    push( @clients, $fileno );
    # spawn a thread here for each client
    my $thr = threads->new( \&processit, $client, $fileno, $peerhost )->detach();
}
# end of main thread

sub processit {
    my ( $lclient, $lfileno, $lpeer ) = @_;    # local client
    if ( $lclient->connected ) {
        # Here you can do your stuff
        # I have the server talk to the client
        # via print $lclient and while(<$lclient>)
        print $lclient "$lpeer->Welcome to server\n";
        while (<$lclient>) {
            # print $lclient "$lpeer->$_\n";
            print "clients-> @clients\n";
            foreach my $fn (@clients) {
                open my $fh, ">&=$fn" or warn $! and die;
                print $fh "$_";
            }
        }
    }
    # close filehandle before detached thread dies out
    close($lclient);
    # remove multi-echo-clients from echo list
    @clients = grep { $_ !~ $lfileno } @clients;
}
__END__
Pass it to processit() with the rest of the parameters.
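In other words, keep the count in shared state alongside @clients and hand each thread its own id. A minimal sketch of that idea (the names $client_count and processit mirror the question but are illustrative, and the accept loop is simulated):

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# Sketch: a shared client counter, incremented in the accept loop, with the
# per-client id passed into processit() along with the other parameters.
my $client_count : shared = 0;

sub processit {
    my ( $client, $client_no ) = @_;    # $client would be the socket
    my $total;
    {
        lock($client_count);            # shared data must be locked
        $total = $client_count;
    }
    return "client $client_no of $total";
}

# simulate accepting four clients
my @thr;
for my $id ( 0 .. 3 ) {
    {
        lock($client_count);
        $client_count++;
    }
    push @thr, threads->create( \&processit, undef, $id );
}
print map { "$_\n" } map { $_->join() } @thr;
```

Because @clients and the counter are `: shared`, every worker thread sees the same values, which is what makes reporting a live client count possible at all under ithreads.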

Multithreading Perl script and crontab/init script

I have an issue with a Perl script using thread.
It works fine when I launch it manually but when I launch it using crontab I have this feedback:
Perl exited with active threads:
0 running and unjoined
1 finished and unjoined
0 running and detached
The PATH and SHELL variables are correct in the crontab.
I also tried to make an init script (to launch it as a service), and got the same error:
Feb 24 08:04:48 SERVER kernel: perl[103293]: segfault at 4a8 ip 00007f6cfd075dd9 sp 00007fffb93437c0 error 4 in libperl.so[7f6cfcfdf000+183000]
Feb 24 08:04:49 SERVER test_ping[102238]: Perl exited with active threads:
Feb 24 08:04:49 SERVER test_ping[102238]: 0 running and unjoined
Feb 24 08:04:49 SERVER test_ping[102238]: 1 finished and unjoined
Feb 24 08:04:49 SERVER test_ping[102238]: 0 running and detached
So I have also tried modifying the Perl script, using:
for my $thread (threads->list) {
$thread->join();
}
Instead of
for my $thread (threads->list) {
$thread->detach();
}
And after this modification, when I launch the script manually, it seems to be stuck/freezing.
So to summarize my checks:
Executed manually, it works.
Via crontab it doesn't work; the PATH and SHELL variables are OK.
Via an init script it doesn't work.
Modifying the script to join all threads makes it freeze.
Does anyone have a suggestion? Something else to check or do?
Thanks
use lib '/usr/local/perf/lib';
use lib '/usr/share/perl5';
use threads;
use Thread::Queue;
use SNMP::Persist qw(&define_oid &start_persister &define_subtree);
use Schedule::ByClock;
use Time::HiRes qw( usleep ualarm gettimeofday tv_interval );
use strict;
#use warnings;

use constant DEBUG  => 0;
use constant DEBUG2 => 1;

if ($#ARGV + 1 != 2) {
    print "usage: test_ping.pl OUTPUTFILE INPUTFILE \n";
    exit;
}
my $output_file = $ARGV[0];
my $data_file   = $ARGV[1];
shift @ARGV;
shift @ARGV;

# start the thread serving answers
start_persister();

# create queues for processing commands
my $q_queue = new Thread::Queue;
my $r_queue = new Thread::Queue;

# create threads for processing queues
for (my $i = 0; $i < $thread_num; $i++) {
    threads->create(\&process)->detach();
}

my $datestring = localtime();
my %subtree;
my @raw_data;
my ($q_line, @q_split);
my ($r_line, @r_split);
my $index = 0;

# open file to get data
open(DAT, $data_file) || die("Could not open file!");
@raw_data = <DAT>;
close(DAT);

# enqueue all lines to be processed by threads
foreach $q_line (@raw_data) {
    chomp($q_line);
    $q_line =~ s/^\s+//;
    $q_line =~ s/\s+$//;
    next if ($q_line =~ /^#.*/);
    next if ($q_line eq "");
    next if ($q_line =~ /^\|/);
    @q_split = split(/\|/, $q_line);
    next if (!($q_split[0] eq "icmp" || $q_split[0] eq "tcp" || $q_split[0] eq "ldap" || $q_split[0] eq "dig"));
    $q_queue->enqueue(++$index . "|" . $q_line);
}

while ($index != 0 && ($r_line = $r_queue->dequeue)) {
    open(my $fh, '>>', $output_file) or die "Could not open file '$output_file' $!";
    print $fh $datestring . "|";
    print $fh $r_line . "\n";
    close $fh;
    @r_split = split(/\|/, $r_line);
    $index--;
}

for my $thread (threads->list) {
    $thread->detach();
}
Below is the process function:
sub process {
    # my @hotefqdn = split(/\./, `hostname`);
    # my $hote = $hotefqdn[0];
    my ($q_line, @q_split, $q_index, $q_query);
    my ($q_module, $q_type, $q_name, $q_host, $q_port, $q_ssl, $q_send, $q_expect, $q_quit);
    my ($q_lookup, $q_record);
    my ($q_base_dn, $q_attr, $q_binddn, $q_password, $q_warn_time, $q_crit_time, $q_timeout);
    my ($r_tab);

    while ($q_line = $q_queue->dequeue) {
        @q_split  = split(/\|/, $q_line);
        $q_index  = $q_split[0];
        $q_module = $q_split[1];
        if ($q_module eq "icmp") {
            $q_type  = $q_split[2];
            $q_name  = $q_split[3];
            $q_host  = $q_split[4];
            $q_query = "$q_host (ping)";
            print "query=$q_query\n" if (DEBUG);
            $r_tab = icmp_query($q_host);
        }
        elsif ($q_module eq "tcp") {
            $q_type  = $q_split[2];
            $q_name  = $q_split[3];
            $q_query = "$q_host ($q_type:$q_port)";
            print "query=$q_query\n" if (DEBUG);
            $r_tab = tcp_query($q_host, $q_port, $q_ssl, $q_send, $q_expect, $q_quit);
        }
        elsif ($q_module eq "ldap") {
            $q_type = $q_split[2];
            $q_name = $q_split[3];
            print "query=$q_query\n" if (DEBUG);
            $r_tab = ldap_query($q_host, $q_base_dn, $q_port, $q_attr, $q_binddn, $q_password, $q_warn_time, $q_crit_time, $q_timeout);
        }
        elsif ($q_module eq "dig") {
            $q_type  = $q_split[2];
            $q_name  = $q_split[3];
            $q_query = "$q_lookup($q_record) @" . $q_host;
            print "query=$q_query\n" if (DEBUG);
            $r_tab = dig_query($q_host, $q_port, $q_lookup, $q_record, $q_expect);
        }
        $r_queue->enqueue($q_index . "|" . $q_name . "|" . $q_type . "|" . $q_query . "|" . $r_tab->{'min'} . "|" . $r_tab->{'med'} . "|" . $r_tab->{'avg'} . "|" . $r_tab->{'max'} . "|" . $r_tab->{'dev'} . "|" . $r_tab->{'loss'} . "|" . $r_tab->{'err'});
    }
}
First of all, don't detach your threads. When you do, you can't wait for them to finish.
for (my $i = 0; $i < $thread_num; $i++) {
    threads->create(\&process)->detach();
}
...
for my $thread (threads->list) {
    $thread->detach();
}
should be
for (1..$thread_num) {
    threads->create(\&process);
}
...
... Tell the threads to finish up ...
for my $thread (threads->list) {
    $thread->join();
}
Now on to the question: why don't your threads finish? Well, you never tell them to exit, so they never do! You need to ask them to exit, and that can be achieved by adding the following:
$q_queue->end();
What follows is what you get after you apply the above fixes. I've also moved all thread-related code out of process, since it doesn't belong there. Finally, I removed the reliance on $index by moving the output code into its own thread.
sub process {
    my ($q_line) = @_;
    ...
    return join("|", $q_index, $q_name, $q_type, $q_query, @$r_tab{qw( min med avg max dev loss err )});
}

my $request_q  = Thread::Queue->new();
my $response_q = Thread::Queue->new();

my @worker_threads;
for (1..$thread_num) {
    push @worker_threads, async {
        while (defined( my $request = $request_q->dequeue() )) {
            $response_q->enqueue( process($request) );
        }
    };
}

my $output_thread = do {
    my $datestring = localtime();
    open(my $fh, '>', $output_file)
        or die("Can't create file \"$output_file\": $!\n");

    async {
        while (defined( my $response = $response_q->dequeue() )) {
            print($fh "$datestring|$response\n");
        }
    }
};

{
    my %protos = map { $_ => 1 } qw( icmp tcp ldap dig );

    open(my $fh, '<', $data_file)
        or die("Can't open file \"$data_file\": $!\n");

    my $index = 0;
    while (<$fh>) {
        s/^\s+//;
        s/\s+\z//;
        next if $_ eq "" || /^#/;
        my ($proto) = split /\|/;
        next if !$protos{$proto};
        $request_q->enqueue(++$index . "|" . $_);
    }
}

$request_q->end();
$_->join() for @worker_threads;

$response_q->end();
$output_thread->join();

How to handle a hang scenario in perl? After accessing a file

I need to access a file that is on an NFS mount path.
After I access it, I expect to see a hang. If it hangs, my scenario passes.
If I see "permission denied", or if the access succeeds, the scenario fails.
How do I handle the hang? After the hang, how do I exit/kill that operation and proceed with my program's next set of steps? Can I do something like if (sleep == 10 seconds) { ... }?
This is what I'm currently doing. The subroutine takes the command to execute and the file path.
sub access_timeout($$) {
    my $cmd       = shift;
    my $file_path = shift;
    print(qq{Running the command "$cmd $file_path" on the client});
    # Here, I need to handle the hang. The hang is the expected case here,
    # something like: if ($result = sleep(10)) { success }
    my $output = $client->execute(command => qq{$cmd $file_path && echo _OK_});
    if ($output =~ /(.*)_OK_/s) {
        croak(qq{Expected a hang, access should not work});
    }
    elsif ($output =~ /permission denied/s || $output =~ /No such file or directory/s) {
        croak(qq{Expected a hang, but there is a response});
    }
}
Try alarm. It delivers a SIGALRM signal after the given number of seconds; the rest can be gotten from the linked documentation.
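A minimal sketch of that alarm() pattern: wrap the possibly-hanging operation in an eval, die from the SIGALRM handler, and treat the timeout as the expected (passing) outcome. expect_hang() is an illustrative helper, and a long select() stands in for the hanging NFS access.

```perl
use strict;
use warnings;

# Sketch: detect an expected hang with alarm(). If the operation is still
# blocked when the timer fires, the handler dies out of the eval and we
# report the hang; if the operation returns first, there was no hang.
sub expect_hang {
    my ( $timeout, $operation ) = @_;
    my $hung = 0;
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };
        alarm($timeout);
        $operation->();
        alarm(0);
    };
    alarm(0);    # never leave a timer armed on the way out
    $hung = 1 if $@ eq "alarm\n";
    return $hung;
}

# a stand-in "hang": block for 10s via select(), which unlike sleep()
# is documented as safe to mix with alarm()
my $blocked = sub { select( undef, undef, undef, 10 ) };
print expect_hang( 2, $blocked ) ? "hang detected: pass\n" : "no hang: fail\n";
```

In the question's scenario, the $operation closure would wrap the `$cmd $file_path` execution; on timeout the remote operation can then be killed and the next test steps can proceed.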

Implementation of a watchdog in perl

I need to contain execution of an external process (a command line call) into a fixed time window.
After few readings I coded up this implementation:
#!/usr/bin/perl -w
use IPC::System::Simple qw( capture );
use strict;
use threads;
use threads::shared;
use warnings;

my $timeout = 4;
share($timeout);
my $stdout;
share($stdout);
my $can_proceed = 1;
share($can_proceed);

sub watchdogFork {
    my $time1 = time;
    my $ml    = async {
        my $sleepTime = 2;
        my $thr       = threads->self();
        $stdout = capture("sleep $sleepTime; echo \"Good morning\n\";");
        print "From ml: " . $stdout;
        $thr->detach();
    };
    my $time2;
    do {
        $time2 = time - $time1;
    } while ( $time2 < $timeout );
    print "\n";
    if ( $ml->is_running() ) {
        print "From watchdog: timeout!\n";
        $can_proceed = 0;
        $ml->detach();
    }
}

my $wd = threads->create('watchdogFork');
$wd->join();
print "From main: " . $stdout if ($can_proceed);
When $timeout > $sleepTime it returns:
From ml: Good morning
From main: Good morning
On the other hand, when $timeout < $sleepTime:
From watchdog: timeout!
The behaviour obtained is correct, but I think this approach is slightly raw. I was wondering if there are libraries that could help refine the source code, improving readability and performance. Any suggestions?
IPC::Run allows you to run child processes and interact with their stdin, stdout, and stderr. You can also set timeouts, which throw an exception when exceeded:
use IPC::Run qw(harness run timeout);

my @cmd = qw(sleep 10);
my $harness = harness \@cmd, \undef, \my $out, \my $err, timeout(3);

eval {
    run $harness or die "sleep: $?";
};
if ($@) {
    my $exception = $@;    # Preserve $@ in case another exception occurs
    $harness->kill_kill;
    print $exception;      # and continue with the rest of the program
}
Note that there are some limitations when running on Windows.
You can use timeout_system from Proc::Background:
use Proc::Background qw(timeout_system);
my $wait_status = timeout_system($seconds, $command, $arg1, $arg2);
my $exit_code = $wait_status >> 8;
The process will be killed after $seconds seconds.
