Perl threads, sockets and STDIN interference - multithreading

I am trying to use Perl threads and sockets to create a simple client/server application. The problem arises on the client side, when mixing STDIN reads with socket reads in different threads. Note that I succeeded in using Tkx as a workaround for STDIN, but my intention is to build short samples, for teaching purposes only, in several programming languages. I want to keep the program minimal, with no UI, as simple as possible.
Here is the client code, which has the problem:
use strict;
use IO::Socket::INET;
use threads;

our $sock = IO::Socket::INET->new( PeerAddr => 'localhost', PeerPort => 1234, Proto => 'tcp' )
    or die "cannot connect to localhost";
my $thr = threads->create( \&msg_proc, $sock );

for (my $msg; $msg ne "exit";)
{
    $msg = <STDIN>;
    $msg =~ s/[\r\n\s\0]*\Z//g;
    $sock->send("$msg\n"); # or die "can't send";
}
$sock->close();
print "exit main thread\n";
$thr->join() if $thr->is_running();
print "exit all\n";

sub msg_proc
{
    my $svr = shift;
    my $i = 0;
    while ($svr)
    {
        sleep(1);
        $svr->send("{slept $i}\n") or die "server closed connection";
        my $svr_msg = <$svr> or die "server closed connection";
        $svr_msg =~ s/[\r\n\s\0]*\Z//g;
        print "From server: <<$i $svr_msg>>\n";
        $i++;
    }
    print "sock exit\n";
}
The problem starts when I remove the $svr->send line in the thread procedure msg_proc. In that case the client can no longer read normally from STDIN; the blocking socket read <$svr> appears to interfere with the <STDIN> operation. For some reason these two can't coexist in parallel. Note that the C++ and Java versions of these demos do not have this problem.
This is the code of the server:
use strict;
use IO::Socket::INET;
use threads;

my $sock = IO::Socket::INET->new( LocalPort => 1234, Proto => 'tcp', Listen => 1, Reuse => 1 )
    or die "Could not create socket: $!\n";
print "server started\n";

my $i = 0;
while (my $s = $sock->accept())
{
    print "New connection\n";
    my $thr = threads->create( \&client_proc, $s, $i );
    $s->send("welcome $i\n");
    $i++;
}

sub client_proc
{
    my $client = shift;
    my $client_no = shift;
    print "## client $client_no started\n";
    $client->send("hello $client_no\n");
    for (my $msg; $msg ne "exit";)
    {
        $msg = <$client> or die "client $client_no closed connection";
        $msg =~ s/[\r\n\s\0]*\Z//;
        print "From client $client_no: '$msg' len: ", length($msg), "\n";
        $client->send("client $client_no: $msg\n") or die "client $client_no closed connection";
    }
    $client->close();
    print "## client $client_no exit\n";
}
As I understand from here, the use of interpreter-based threads in Perl is officially discouraged. But I can't understand what "interpreter-based threads" actually means, and what exactly the alternative is. Does this mean that using threads in Perl is discouraged altogether?
Note: I am using Strawberry Perl 5.32.1 64-bit under Windows 11 (zip package, no MSI). Under ActiveState Perl the problem was identical.

Related

In Perl socket programming, how do I send data from a client and receive it at the server, and how do I get the number of client processes and the client ID?

How can I get the number of clients connected to the server? Say I have opened 4 terminals and run client.pl 4 times on localhost for testing purposes; how can I get the client ID and the client count in the server script? I am using Ubuntu on VirtualBox, in a multi-threaded environment.
#!/usr/bin/perl
# server
use warnings;
use strict;
use IO::Socket;
use threads;
use threads::shared;

$|++;
print "$$ Server started\n"; # do a "top -p -H $$" to monitor server threads

our @clients : shared;
@clients = ();

my $server = IO::Socket::INET->new(
    Timeout   => 7200,
    Proto     => "tcp",
    LocalPort => 9000,
    Reuse     => 1,
    Listen    => 3
);
my $num_of_client = -1;

while (1) {
    my $client;
    do {
        $client = $server->accept;
    } until ( defined($client) );
    my $peerhost = $client->peerhost();
    print "accepted a client $client, $peerhost, id = ", ++$num_of_client, "\n";
    my $fileno = fileno $client;
    push( @clients, $fileno );
    # spawn a thread here for each client
    my $thr = threads->new( \&processit, $client, $fileno, $peerhost )->detach();
}
# end of main thread

sub processit {
    my ( $lclient, $lfileno, $lpeer ) = @_;    # local client
    if ( $lclient->connected ) {
        # Here you can do your stuff
        # I have the server talk to the client
        # via print $client and while(<$lclient>)
        print $lclient "$lpeer->Welcome to server\n";
        while (<$lclient>) {
            # print $lclient "$lpeer->$_\n";
            print "clients-> @clients\n";
            foreach my $fn (@clients) {
                open my $fh, ">&=$fn" or warn $! and die;
                print $fh "$_";
            }
        }
    }
    # close filehandle before detached thread dies out
    close($lclient);
    # remove multi-echo-clients from echo list
    @clients = grep { $_ !~ $lfileno } @clients;
}
__END__
__END__
Pass the client counter ($num_of_client) to processit() along with the rest of the parameters, as sketched below:
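For example (a sketch against the server above; the extra $num_of_client argument and the lock are my additions, not part of the original code):

# in the accept loop: hand the counter to the thread as an extra argument
my $thr = threads->new( \&processit, $client, $fileno, $peerhost, $num_of_client )->detach();

# in processit: pick it up, and read the client count from the shared @clients
sub processit {
    my ( $lclient, $lfileno, $lpeer, $client_id ) = @_;
    my $count;
    {
        lock(@clients);              # @clients is shared, so lock while reading
        $count = scalar @clients;
    }
    print $lclient "you are client $client_id of $count\n";
    # ... rest as before
}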

Thread-safe alternative to File::Tee?

I wanted to implement some logging for a threaded script I have, and I came across File::Tee. However, when attempting to install the module via ppm on a Windows box, it's not found (and, according to ActiveState, it is not supported on Windows).
I really liked that you could lock file access, though, by doing something like:
tee STDOUT, {mode => '>>', open => '$ENV{DOM}\\threaded_build.log', lock => 1};
tee STDERR, {mode => '>>', open => '$ENV{DOM}\\threaded_debug.log', lock => 1};
Is there a cross-platform, thread-safe alternative?
File::Tee takes extra care to handle output generated by external programs run through system, or by XS code, that doesn't go through perlio. I think that's what makes it incompatible with Windows.
IO::Tee is more cross-platform, and I don't think making it thread-safe would be too hard. The sync code in File::Tee just looks like:
flock($teefh, LOCK_EX) if $target->{lock};
print $teefh $cp;
flock($teefh, LOCK_UN) if $target->{lock};
You could accomplish the same thing in IO::Tee by modifying a couple of methods:
use Fcntl ':flock';

no warnings 'redefine';

sub IO::Tee::PRINT
{
    my $self = shift;
    my $ret = 1;
    foreach my $fh (@$self) {
        flock($fh, LOCK_EX);
        undef $ret unless print $fh @_;
        flock($fh, LOCK_UN);
    }
    return $ret;
}

sub IO::Tee::PRINTF
{
    my $self = shift;
    my $fmt = shift;
    my $ret = 1;
    foreach my $fh (@$self) {
        flock($fh, LOCK_EX);
        undef $ret unless printf $fh $fmt, @_;
        flock($fh, LOCK_UN);
    }
    return $ret;
}

Implementation of a watchdog in perl

I need to contain execution of an external process (a command-line call) within a fixed time window.
After a few readings I coded up this implementation:
#!/usr/bin/perl -w
use IPC::System::Simple qw( capture );
use strict;
use threads;
use threads::shared;
use warnings;

my $timeout     :shared = 4;
my $stdout      :shared;
my $can_proceed :shared = 1;

sub watchdogFork {
    my $time1 = time;
    my $ml = async {
        my $sleepTime = 2;
        my $thr = threads->self();
        $stdout = capture("sleep $sleepTime; echo \"Good morning\n\";");
        print "From ml: " . $stdout;
        $thr->detach();
    };
    my $time2;
    do {
        $time2 = time - $time1;
    } while ( $time2 < $timeout );
    print "\n";
    if ( $ml->is_running() ) {
        print "From watchdog: timeout!\n";
        $can_proceed = 0;
        $ml->detach();
    }
}

my $wd = threads->create('watchdogFork');
$wd->join();
print "From main: " . $stdout if ($can_proceed);
When $timeout > $sleepTime it returns:
From ml: Good morning
From main: Good morning
On the other hand, when $timeout < $sleepTime:
From watchdog: timeout!
The behaviour obtained is correct, but I think this approach is slightly raw.
I was wondering if there are libraries that could help refine the source code, improving readability and performance. Any suggestions?
IPC::Run allows you to run child processes and interact with their stdin, stdout, and stderr. You can also set timeouts, which throw an exception when exceeded:
use IPC::Run qw(harness run timeout);

my @cmd = qw(sleep 10);
my $harness = harness \@cmd, \undef, \my $out, \my $err, timeout(3);

eval {
    run $harness or die "sleep: $?";
};
if ($@) {
    my $exception = $@;   # Preserve $@ in case another exception occurs
    $harness->kill_kill;
    print $exception;     # and continue with the rest of the program
}
Note that there are some limitations when running on Windows.
You can use timeout_system from Proc::Background:
use Proc::Background qw(timeout_system);
my $wait_status = timeout_system($seconds, $command, $arg1, $arg2);
my $exit_code = $wait_status >> 8;
The process will be killed after $seconds seconds.
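Applied to the scenario in the question, that might look like this (a sketch; the 4-second window and the 2-second sleep mirror $timeout and $sleepTime above):

use Proc::Background qw(timeout_system);

# give a 2-second job a 4-second window
my $wait_status = timeout_system( 4, 'sleep', '2' );
my $exit_code   = $wait_status >> 8;
print $exit_code == 0 ? "From main: finished in time\n"
                      : "From watchdog: timeout or failure\n";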

Threads getting stuck in ssh connection (perl)

I'm working on a script whose idea is to create threads that simultaneously go through a list of machines and check for things. It appears that when a thread opens its own ssh session it gets stuck, and I can't kill it. The threads also have a timer, which doesn't seem to be working.
Here is the code:
sub call_cmd {
    my $host = shift;
    my $cmd = shift;
    my $command = $cmd;
    my $message;
    open( DIR, "$cmd|" ) || die "No cmd: $cmd $!";
    while (<DIR>) {
        $message .= $_;
        print "\n $host $message \n";
    }
    close DIR;
    print "I'm here";
}

sub ssh_con($$) {
    my $host = $_[0];
    my $cmd = "ssh $host -l $_[1]";
    call_cmd( $host, $cmd );
}
I get the output that ssh returns, but I never reach the next print.
This is the code for creating the threads.
foreach my $machine (@machines) {
    my $thr = threads->create( \&thread_creation, $machine );
    $SIG{ALRM} = sub { $thr->kill('ALRM') };
    push( @threads, $thr );
}

sub thread_creation {
    my $host = shift;
    eval {
        $SIG{ALRM} = sub { die; };
        alarm(5);
        ssh_con( $host, "pblue" );
        alarm(0);
    };
}
Output:
home/pblue> perl tsh.pl
ssh XXXXX -l pblue
ssh XXXXX -l pblue
XXXXX Last login: Mon Sep 30 10:39:01 2013 from ldm052.wdf.sap.corp
XXXXX Last login: Mon Sep 30 10:39:01 2013 from ldm052.wdf.sap.corp
Aside from your code being a little odd, I have encountered your issue - specifically in Perl 5.8.8 on RHEL 5.
It seems there's a race condition: if you spawn two ssh processes from threads simultaneously, they deadlock. The only solution I have found is a workaround whereby you declare:
my $ssh_lock : shared;
And then 'open' your ssh as a filehandle:
my $ssh_data;
{
    lock($ssh_lock);
    open( $ssh_data, "-|", "ssh -n $hostname $command" );
}
# lock out of scope, so released
while (<$ssh_data>) {
    # do something
}
However, this may well be a moot point on newer versions of perl and newer operating systems. I certainly couldn't reproduce it particularly reliably, and it went away entirely when I started using fork() instead.
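A rough sketch of that fork() approach (the host list and the uptime command are placeholders, not from the original post):

use strict;
use warnings;

my @machines = qw( hostname1 hostname2 );    # placeholder host list
my %pid_to_host;

foreach my $host (@machines) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        # child: -n keeps ssh away from our STDIN; exec replaces the child process
        exec 'ssh', '-n', $host, 'uptime';
        die "exec failed: $!";
    }
    $pid_to_host{$pid} = $host;
}

# reap the children; $? carries each child's wait status
while ( ( my $pid = wait ) != -1 ) {
    printf "%s finished with status %d\n", $pid_to_host{$pid}, $? >> 8;
}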
That said - your code is doing some rather strange things. Not least that the command you are running is:
ssh $host -l pblue
Which is a valid command, but it'll start ssh interactively, and because you're multithreading that'll do very strange things with stdin and stdout.
You should also be very careful with signals when multithreading - it doesn't work too well, because of the nature of the inter-process communication. Setting an ALARM handler in the main thread, as your code does, won't reliably interrupt a thread that is blocked inside a read.
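For completeness, the signalling mechanism the threads documentation does provide looks roughly like this (adapted from perldoc threads; the signal is only acted on at safe points, so it may not break a blocking read):

use threads;

my $thr = threads->create( sub {
    # the handler must be installed inside the thread itself
    $SIG{'KILL'} = sub { threads->exit(); };
    sleep 60;    # stand-in for the thread's real work
} );

# later, from the main thread:
$thr->kill('KILL')->detach();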
For a similar sort of thing - e.g. running commands via ssh - I've had a degree of success with an approach like this:
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;

my @servers_to_check = qw ( hostname1 hostname2 hostname3 hostname4 );
my $num_threads = 10;
my $task_q = Thread::Queue->new;
my $ssh_lock : shared;

sub worker_thread {
    my ($command_to_run) = @_;
    while ( my $server = $task_q->dequeue ) {
        my $ssh_results;
        {
            lock($ssh_lock);
            my $pid = open( $ssh_results, "-|",
                "ssh -n $server $command_to_run" );
        }
        while (<$ssh_results>) {
            print;
        }
        close($ssh_results);
    }
}

for ( 1 .. $num_threads ) {
    threads->create( \&worker_thread, "df -k" );
}
$task_q->enqueue(@servers_to_check);
$task_q->end;

foreach my $thr ( threads->list ) {
    $thr->join();
}

How to use threads in Perl?

I want to use threads in Perl to increase the speed of my program. For example, I want to use 20 threads in this code:
use IO::Socket;

my $in_file2 = 'rang.txt';
open DAT, $in_file2;
my @ip = <DAT>;
close DAT;
chomp(@ip);

foreach my $ip (@ip)
{
    $host = IO::Socket::INET->new(
        PeerAddr => $ip,
        PeerPort => 80,
        Proto    => 'tcp',
        Timeout  => 1
    )
    and open( OUT, ">>port.txt" );
    print OUT $ip . "\n";
    close(OUT);
}
In the above code we take a list of IPs and scan a given port. I want to use threads in this code. Is there any other way to increase the speed of my code?
Thanks.
Instead of using threads, you might want to look into AnyEvent::Socket, Coro::Socket, POE, or Parallel::ForkManager.
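For example, a non-blocking version of your scan using AnyEvent::Socket might look roughly like this (an untested sketch; the 1-second connect timeout is the return value of the prepare callback):

use strict;
use warnings;
use AnyEvent;
use AnyEvent::Socket;

open my $dat, '<', 'rang.txt' or die $!;
chomp( my @ip = <$dat> );
close $dat;

open my $out, '>>', 'port.txt' or die $!;
my $cv = AE::cv;

for my $ip (@ip) {
    $cv->begin;
    tcp_connect $ip, 80, sub {
        my ($fh) = @_;               # undef if the connect failed
        print {$out} "$ip\n" if $fh;
        $cv->end;
    }, sub { 1 };                    # prepare callback: its return value is the timeout
}
$cv->recv;                           # wait until every connect has resolved
close $out;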
Read the Perl threading tutorial.
Perl can do both threading and forking. "threads" is officially not recommended - in no small part because it's not well understood and, perhaps slightly counterintuitively, isn't lightweight like threads are in some other programming languages.
If you are particularly keen to thread, the 'worker' model of threading works much better than spawning a thread per task. You might do the latter in some languages - in Perl it's very inefficient.
As such you might do something like this:
#!/usr/bin/env perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use IO::Socket;

my $nthreads = 20;
my $in_file2 = 'rang.txt';

my $work_q   = Thread::Queue->new;
my $result_q = Thread::Queue->new;
my @workers;

sub ip_checker {
    while ( my $ip = $work_q->dequeue ) {
        chomp($ip);
        my $host = IO::Socket::INET->new(
            PeerAddr => $ip,
            PeerPort => 80,
            Proto    => 'tcp',
            Timeout  => 1
        );
        if ( defined $host ) {
            $result_q->enqueue($ip);
        }
    }
}

sub file_writer {
    open( my $output_fh, ">>", "port.txt" ) or die $!;
    while ( my $ip = $result_q->dequeue ) {
        print {$output_fh} "$ip\n";
    }
    close($output_fh);
}

for ( 1 .. $nthreads ) {
    push( @workers, threads->create( \&ip_checker ) );
}
my $writer = threads->create( \&file_writer );

open( my $dat, "<", $in_file2 ) or die $!;
$work_q->enqueue(<$dat>);
close($dat);
$work_q->end;

foreach my $thr (@workers) {
    $thr->join();
}
$result_q->end;
$writer->join();
This uses a queue to feed a set of 20 worker threads with the IP list; they work their way through it, collating and printing results through the writer thread.
But as threads aren't really recommended any more, a better way might be to use Parallel::ForkManager, which with your code might go a bit like this:
#!/usr/bin/env perl
use strict;
use warnings;
use Fcntl qw ( :flock );
use IO::Socket;
use Parallel::ForkManager;

my $in_file2 = 'rang.txt';

open( my $input,  "<", $in_file2 )  or die $!;
open( my $output, ">", "port.txt" ) or die $!;

my $manager = Parallel::ForkManager->new(20);

foreach my $ip (<$input>) {
    $manager->start and next;
    chomp($ip);
    my $host = IO::Socket::INET->new(
        PeerAddr => $ip,
        PeerPort => 80,
        Proto    => 'tcp',
        Timeout  => 1
    );
    if ( defined $host ) {
        flock( $output, LOCK_EX );    # exclusive or write lock
        print {$output} $ip, "\n";
        flock( $output, LOCK_UN );    # unlock
    }
    $manager->finish;
}
$manager->wait_all_children;

close($output);
close($input);
You need to be particularly careful of file IO when multiprocessing, because the whole point is your execution sequence is no longer well defined. So it's insanely easy to end up with different threads clobbering files that another thread has open, but hasn't flushed to disk.
I note your code - you seem to rely on a failing file open in order to not print to it. That's not a nice thing to do, especially when your file handle is not lexically scoped.
But in both multiprocessing paradigms outlined above (there are others; these are the most common) you still have to deal with serialising the file IO. Note that your 'results' will come out in a random order in both, because it very much depends on when each task completes. If that matters to you, you'll need to collate and sort after your threads or forks complete.
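For instance, with the threaded example above you could drop the file_writer thread and drain the queue in the main thread once the workers have joined (a sketch; dequeue_nb is Thread::Queue's non-blocking dequeue):

# after the workers have been joined:
$result_q->end;
my @found;
while ( defined( my $ip = $result_q->dequeue_nb ) ) {
    push @found, $ip;
}
open( my $output_fh, ">", "port.txt" ) or die $!;
print {$output_fh} "$_\n" for sort @found;
close($output_fh);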
It's probably generally better to look towards forking - as the threads docs say:
The "interpreter-based threads" provided by Perl are not the fast, lightweight system for multitasking that one might expect or hope for. Threads are implemented in a way that make them easy to misuse. Few people know how to use them correctly or will be able to provide help.
The use of interpreter-based threads in perl is officially discouraged.
