my($stdout, $stderr, $exit)
So I am trying to send command lines to a remote server from a script, but I don't know if I am doing it right. Here is the code:
#!/usr/bin/perl
use warnings;
use Net::SSH::Perl;
use strict;
#my($stdout, $stderr, $exit) = $ssh->cmd("ls -l /home/$usr")
my $host = "host";
my $usr = "user";
my $pwd = "password";
my $ssh = Net::SSH::Perl->new($host);
$ssh->login($usr,$pwd);
$ssh->cmd("script /home/$usr/textput.txt/");
$ssh->cmd("ls -l /$usr/jrivas/");
$ssh->cmd("exit");
When I run the code it seems to work fine, but when I go to check whether it created the script file on the server, it isn't there, so I imagine the problem is the way I am sending the commands.
UPDATE
I've managed to connect to my server and create the .txt file, but I haven't been able to save it automatically to my own PC instead of the server.
Here is my code so far:
#!/usr/bin/perl
use Net::SSH2;
use Net::SSH::Expect;
use Net::SSH::Perl;
use Net::SSH;
#my($stdout, $stderr, $exit) = $ssh->cmd("ls -l /home/$usr")
my $host = "host";
my $usr = "root";
my $pwd = "pwd";
#my $ssh = Net::SSH::Perl->new($host);
#$ssh->login($usr,$pwd);
my $ssh = Net::SSH::Expect->new (
host => "$host",
password=> "$pwd",
user => "$usr",
raw_pty => 1
);
my $login_output = $ssh->login();
if ($login_output !~ /Welcome/) {
die "Login has failed. Login output was $login_output";
}
my $script = $ssh->exec("script /home/jrivas/script_output.txt");
my $script = $ssh->exec("ls -l /home/jrivas");
my $script = $ssh->exec("ls -a /home/jrivas");
my($out, $err, $exit) = $ssh->cmd("cat script_output.txt");
my $script = $ssh->exec("exit");
#$ssh->cmd("script /home/jrivas/textputq.txt/");
#$ssh->cmd("ls -l /home/jrivas");
#$ssh->cmd("exit");
So the next thing is to get script_output.txt to save to my computer automatically.
Thanks to @zdim for helping me :)
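Getting the file back to the local machine can be done over the same SSH connection. A minimal sketch, assuming the remote path is /home/jrivas/script_output.txt (taken from the script command above); the Net::SSH2 calls are commented out because they need a live server and real credentials:

```perl
use strict;
use warnings;
use File::Basename;

# Hypothetical remote path, based on the script command used above.
my $remote = '/home/jrivas/script_output.txt';
my $local  = basename($remote);    # save as 'script_output.txt' locally
print "would save $remote as $local\n";

# With Net::SSH2 (one of the modules already loaded), the copy would be:
#   my $ssh2 = Net::SSH2->new;
#   $ssh2->connect('host')                or die $ssh2->error;
#   $ssh2->auth_password('user', 'pw')    or die 'auth failed';
#   $ssh2->scp_get($remote, $local)       or die $ssh2->error;
```

Plain scp from the local shell would do the same job: `scp user@host:/home/jrivas/script_output.txt .`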
Related
I'm trying to develop a perl script that looks through all of the user's directories for a particular file name without the user having to specify the entire pathname to the file.
For example, let's say the file of interest is data.list, located at /home/path/directory/project/userabc/data.list. At the command line, the user would normally have to specify the full pathname in order to access it, like so:
cd /home/path/directory/project/userabc/data.list
Instead, I want the user to just enter script.pl ABC at the command line; the Perl script will then automatically locate data.list and retrieve its information, which in my case means counting the number of lines and uploading the result using curl. The rest is done, just not the part where it can automatically locate the file.
Even though this is very feasible in Perl, it looks more appropriate in Bash:
#!/bin/bash
filename=$(find ~ -name "$1" )
wc -l "$filename"
curl .......
The main issue would of course be if you have multiple files with the same name, for example /home/user/dir1/data1 and /home/user/dir2/data1. You will need a way to handle that, and how you handle it depends on your specific situation.
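One simple policy for the multiple-match case, sketched here with a hard-coded stand-in for the find output (the paths are made up): take the first match and warn about the rest.

```perl
use strict;
use warnings;

# Stand-in for the output of find: two files with the same name.
my $find_output = "/home/user/dir1/data1\n/home/user/dir2/data1\n";
my @matches = split /\n/, $find_output;

warn scalar(@matches), " candidates found; using the first\n" if @matches > 1;
my $filename = $matches[0];
print "$filename\n";    # /home/user/dir1/data1
```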
In Perl that would be much more complicated:
#! /usr/bin/perl -w
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if 0; #$running_under_some_shell
use strict;
# Import the module File::Find, which will do all the real work
use File::Find ();
# Set the variable $File::Find::dont_use_nlink if you're using AFS,
# since AFS cheats.
# for the convenience of &wanted calls, including -eval statements:
# Here, we "import" specific variables from the File::Find module
# The purpose is to be able to just type '$name' instead of the
# complete '$File::Find::name'.
use vars qw/*name *dir *prune/;
*name = *File::Find::name;
*dir = *File::Find::dir;
*prune = *File::Find::prune;
# We declare the sub here; the content of the sub will be created later.
sub wanted;
# This is a simple way to get the first argument. There is no
# checking on validity.
our $filename=$ARGV[0];
# Traverse desired filesystem. /home is the top-directory where we
# start our seach. The sub wanted will be executed for every file
# we find
File::Find::find({wanted => \&wanted}, '/home');
exit;
sub wanted {
# Check if the file is our desired filename
if ( /^$filename\z/) {
# Open the file, read it and count its lines
my $lines=0;
open(my $F,'<',$name) or die "Cannot open $name";
while (<$F>){ $lines++; }
print("$name: $lines\n");
# Your curl command here
}
}
You will need to look at the argument parsing, for which I simply used $ARGV[0], and I don't know what your curl command looks like.
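One argument-parsing detail worth flagging: $ARGV[0] is interpolated straight into the regex /^$filename\z/, so a name such as data.list matches more than intended, because the dot is a metacharacter. quotemeta escapes it; a minimal sketch:

```perl
use strict;
use warnings;

my $filename = 'data.list';
my $pattern  = quotemeta $filename;    # 'data\.list'

# 'dataXlist' should NOT count as a match for 'data.list':
print "dataXlist" =~ /^$filename\z/ ? "raw: matches\n"    : "raw: no match\n";
print "dataXlist" =~ /^$pattern\z/  ? "quoted: matches\n" : "quoted: no match\n";
# raw: matches
# quoted: no match
```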
A simpler (though not recommended) way would be to use Perl as a sort of shell:
#!/usr/bin/perl
#
my $fn=`find /home -name '$ARGV[0]'`;
chomp $fn;
my $wc=`wc -l '$fn'`;
print "$wc\n";
system ("your curl command");
The following code snippet demonstrates one of many ways to achieve the desired result.
The code takes one parameter, a word to look for in files named data.list in all subdirectories, and prints a list of the files found to the terminal.
The code uses the subroutine lookup($dir,$filename,$search), which calls itself recursively whenever it comes across a subdirectory.
The search starts from the current working directory (the question did not specify a starting directory).
use strict;
use warnings;
use feature 'say';
my $search = shift || die "Specify what to look for";
my $fname = 'data.list';
my $found = lookup('.',$fname,$search);
if( @$found ) {
say for @$found;
} else {
say 'Not found';
}
exit 0;
sub lookup {
my $dir = shift;
my $fname = shift;
my $search = shift;
my $files;
my @items = glob("$dir/*");
for my $item (@items) {
if( -f $item && $item =~ /\b$fname\b/ ) {
my $found;
open my $fh, '<', $item or die $!;
while( my $line = <$fh> ) {
$found = 1 if $line =~ /\b$search\b/;
if( $found ) {
push @{$files}, $item;
last;
}
}
close $fh;
}
if( -d $item ) {
my $ret = lookup($item,$fname,$search);
push @{$files}, $_ for @$ret;
}
}
return $files;
}
Run as script.pl search_word
Output sample
./capacitor/data.list
./examples/data.list
./examples/test/data.list
Reference:
glob,
Perl file test operators
I am using Perl Expect to enter a password in an interactive program. Perl prints the correct password but sends something else.
#!/bin/perl
use Expect;
my $hostname = qx! /usr/bin/hostname !;
my $Passwd = `./ensite_passwd solidcore $hostname`;
my $sadminPasswd = quotemeta $Passwd;
chomp $sadminPasswd;
chop $sadminPasswd;
print "password is $sadminPasswd \n\n";
my $sadminPasswd = Expect->spawn("/sbin/sadmin", "passwd")
or die "Cannot spawn /sbin/sadmin $!\n";
$sadminPasswd->expect(300,
[qr/New Password:/ => sub {
my $fh = shift;
$fh->send("${sadminPasswd}\n");
print "sent '${sadminPasswd}'\n";
exp_continue;
}
],
[qr/Retype Password:/ => sub {
my $fh = shift;
$fh->send("p2c4f8j5\n");
#$fh->send("${sadminPasswd}\n");
#print "sent '${sadminPasswd}'\n";
print "sent 'p2c4f8j5'\n";
}
]);
$sadminPasswd->soft_close();
I am getting the output below:
swdvssd0046$ sudo perl test.pl
password is p2c4f8j5
New Password:sent 'Expect=GLOB(0x23ae188)'
Retype Password:sent 'p2c4f8j5'
Passwords do not match.
swdvssd0046$
I don't understand 'Expect=GLOB(0x23ae188)' at all.
I know the password for this host would be "p2c4f8j5"; that's why I have manually entered it in the confirm-password Expect code.
Any idea what am I missing?
Turn on use warnings; and Perl tells you exactly what is wrong:
"my" variable $sadminPasswd masks earlier declaration in same scope
The second my $sadminPasswd (the Expect object returned by Expect->spawn) masks the password string, so "${sadminPasswd}\n" interpolates the stringified object, which is the Expect=GLOB(0x23ae188) you are seeing. Rename one of the two variables.
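The masking is easy to reproduce without Expect at all. In this sketch (using a plain blessed hash as a stand-in for the Expect object), the second my creates a fresh $password, and interpolating it yields the stringified reference, just like the Expect=GLOB(0x23ae188) in the output:

```perl
use strict;
use warnings;

my $password = "p2c4f8j5";
my $password = bless {}, 'Some::Handle';   # masks the string above (and warns)

print "sent '$password'\n";   # sent 'Some::Handle=HASH(0x...)', not the password
```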
Thanks toolic for the answer!
The basic code:
my $ContentDate = `date -d '1 hour ago' '+%Y-%m-%e %H:'`;
my $Fail2banNo = `grep Ban /var/log/fail2ban.log | grep $ContentDate | wc -l`;
if (($Fail2banNo > $fail2ban)) {
} else {
}
Why won't Perl run these commands correctly? $fail2ban is already defined as 0, so that's not the issue.
The fail2ban.log does contain a line that should match (when running the command from the shell, it matches):
2018-07-19 xx:11:50,200 fail2ban.actions[3725]: WARNING [iptables] Ban x.x.x.x
The error I keep getting is:
grep: 10:: No such file or directory
sh: -c: line 1: syntax error near unexpected token `|'
sh: -c: line 1: ` | wc -l'
Argument "" isn't numeric in numeric gt (>) at /usr/local/bin/tmgSupervision.pl line 3431.
All the commands run fine from bash/shell; it seems as if Perl is not happy with grep being piped to another grep. I've tried many different ways of getting the variable ($ContentDate) into the grep, without success.
I notice that the answer you've accepted has a rather over-complicated way to calculate the timestamp from an hour ago, so I present this alternative, which uses Perl's built-in date and time handling more efficiently.
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use Time::Piece;
my $log = '/var/log/fail2ban.log';
my $fail2ban = 0;
my $ContentDate = localtime(time - 3600)->strftime('%Y-%m-%e %H:');
my $Fail2banNo = qx{grep Ban $log | grep "$ContentDate" | wc -l};
if ($Fail2banNo > $fail2ban) {
say 'Yes';
}
else {
say 'No';
}
But the only change you actually needed was to change:
my $Fail2banNo = `grep Ban /var/log/fail2ban.log | grep $ContentDate | wc -l`;
to:
my $Fail2banNo = `grep Ban /var/log/fail2ban.log | grep "$ContentDate" | wc -l`;
Quoting $ContentDate because it contains a space.
The first problem is that $ContentDate ends with a line feed.
The second problem is that you improperly create the shell literal from $ContentDate (using grep 2018-07-19 08: instead of something like grep '2018-07-19 08:').
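Both problems are easy to see by printing the command string that actually reaches the shell. With the trailing newline from the backticks still in place, the command is split in two, which is exactly where the grep: 10:: No such file or directory and syntax error near `|' messages come from. A sketch with a hard-coded date:

```perl
use strict;
use warnings;

my $content_date = "2018-07-19 10:\n";   # what the backticks really return
my $cmd = "grep Ban fail2ban.log | grep $content_date | wc -l";
print $cmd;
# The shell sees two lines:
#   grep Ban fail2ban.log | grep 2018-07-19 10:
#    | wc -l
# grep treats '10:' as a filename, and ' | wc -l' alone is a syntax error.
```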
Let's first fix your answer.
use String::ShellQuote qw( shell_quote );
my $fail2ban_log_qfn = '/var/log/fail2ban.log';
my $content_date = `date -d '1 hour ago' '+%Y-%m-%e %H:'`;
chomp($content_date);
my $grep_cmd1 = shell_quote('grep', 'Ban', $fail2ban_log_qfn);
my $grep_cmd2 = shell_quote('grep', '--', $content_date);
my $fail2ban_count = `$grep_cmd1 | $grep_cmd2 | wc -l`;
chomp($fail2ban_count);
No need to shell out to get the date, though.
use POSIX qw( strftime );
use String::ShellQuote qw( shell_quote );
my $fail2ban_log_qfn = '/var/log/fail2ban.log';
my $content_date = strftime('%Y-%m-%e %H:', localtime(time - 3600));
my $grep_cmd1 = shell_quote('grep', 'Ban', $fail2ban_log_qfn);
my $grep_cmd2 = shell_quote('grep', '--', $content_date);
my $fail2ban_count = `$grep_cmd1 | $grep_cmd2 | wc -l`;
chomp($fail2ban_count);
No need to shell out to count the matching lines either.
use POSIX qw( strftime );
my $fail2ban_log_qfn = '/var/log/fail2ban.log';
my $fail2ban_count = 0;
{
my $content_date = strftime('%Y-%m-%e %H:', localtime(time - 3600));
open(my $fh, '<', $fail2ban_log_qfn)
or die("Can't open \"$fail2ban_log_qfn\": $!\n");
while (<$fh>) {
++$fail2ban_count if /Ban/ && /\Q$content_date/;
}
}
Try this
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
my $log = "/var/log/fail2ban.log";
my $fail2ban = 0;
my $ContentDate = backintime(3600); #backintime in seconds from now
my $Fail2banNo = qx{grep Ban $log | grep \"$ContentDate\"| wc -l};
if ($Fail2banNo > $fail2ban) {
say "Yes";
}
else {
say "No";
}
sub backintime {
my ($to_today) = @_;
my $temps = time - 60*60*24*0;
$temps = $temps - $to_today;
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime($temps);
if($sec < 10) {$sec = "0" . $sec;}
if($min < 10) {$min = "0" . $min;}
if($hour < 10) {$hour = "0" . $hour;}
if($mday < 10) {$mday = "0" . $mday;}
$year+=1900; $mon++; if($mon < 10) {$mon = "0" . $mon;}
my $sectime = $year."-".$mon."-".$mday." ".$hour.":";
return($sectime);
}
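As a side note, the four zero-padding if statements in backintime can be collapsed into a single sprintf. A sketch of an equivalent subroutine (it zero-pads the day, matching the if chain above rather than date's space-padded %e):

```perl
use strict;
use warnings;

sub backintime {
    my ($seconds_ago) = @_;
    my ($sec, $min, $hour, $mday, $mon, $year) = localtime(time - $seconds_ago);
    # sprintf zero-pads every field in one step
    return sprintf '%04d-%02d-%02d %02d:', $year + 1900, $mon + 1, $mday, $hour;
}

print backintime(3600), "\n";    # e.g. "2018-07-19 09:"
```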
I've written a script which is designed to find all files not owned by either an existing user or group. However, despite having created a test user and then removed it, leaving behind its /home directory, the script is not finding that directory. Clearly I have an error in the script's logic; I just can't find it.
#!/usr/bin/perl
# Directives which establish our execution environment
use warnings;
use strict;
use File::Find;
no warnings 'File::Find';
no warnings 'uninitialized';
# Variables used throughout the script
my $OUTDIR = "/var/log/tivoli/";
my $MTAB = "/etc/mtab";
my $PERMFILE = "orphan_files.txt";
my $TMPFILE = "orphan_files.tmp";
my $ROOT = "/";
my(@devNum, @uidNums, @gidNums);
# Create an array of the file stats for "/"
my @rootStats = stat("${ROOT}");
# Compile a list of mountpoints that need to be scanned
my @mounts;
open MT, "<${MTAB}" or die "Cannot open ${MTAB}, $!";
# We only want the local HDD mountpoints
while (<MT>) {
if ($_ =~ /ext[34]/) {
my @line = split;
push(@mounts, $line[1]);
}
}
close MT;
# Build an array of each mountpoint's device number for future comparison
foreach (@mounts) {
my @stats = stat($_);
push(@devNum, $stats[0]);
print $_ . ": " . $stats[0] . "\n";
}
# Build an array of the existing UIDs on the system
while((my($name, $passwd, $uid, $gid, $quota, $comment, $gcos, $dir, $shell)) = getpwent()) {
push(@uidNums, $uid);
}
# Build an array of existing GIDs on the system
while((my($name, $passwd, $gid, $members)) = getgrent()){
push(@gidNums, $gid);
}
# Create a regex to compare file device numbers to.
my $devRegex = do {
chomp @devNum;
local $" = '|';
qr/@devNum/;
};
# Create a regex to compare file UIDs to.
my $uidRegex = do {
chomp @uidNums;
local $" = '|';
qr/@uidNums/;
};
# Create a regex to compare file GIDs to.
my $gidRegex = do {
chomp @gidNums;
local $" = '|';
qr/@gidNums/;
};
print $gidRegex . "\n";
# Create the output file path if it doesn't already exist.
mkdir "${OUTDIR}" or die "Cannot execute mkdir on ${OUTDIR}, $!" unless (-d "${OUTDIR}");
# Create our filehandle for writing our findings
open ORPHFILE, ">${OUTDIR}${TMPFILE}" or die "Cannot open ${OUTDIR}${TMPFILE}, $!";
foreach (@mounts) {
# The anonymous subroutine which is executed by File::Find
find sub {
my #fileStats = stat($File::Find::name);
# Is it in a basic directory, ...
return if $File::Find::dir =~ /sys|proc|dev/;
# ...an actual file vs. a link, directory, pipe, etc, ...
return unless -f;
# ...local, ...
return unless $fileStats[0] =~ $devRegex;
# ...and unowned? If so write it to the output file
if (($fileStats[4] !~ $uidRegex) || ($fileStats[5] !~ $gidRegex)) {
print $File::Find::name . " UID: " . $fileStats[4] . "\n";
print $File::Find::name . " GID: " . $fileStats[5] . "\n";
print ORPHFILE "$File::Find::name\n";
}
}, $_;
}
close ORPHFILE;
# If no world-writable files have been found ${TMPFILE} should be zero-size;
# Delete it so Tivoli won't alert
if (-z "${OUTDIR}${TMPFILE}") {
unlink "${OUTDIR}${TMPFILE}";
} else {
rename("${OUTDIR}${TMPFILE}","${OUTDIR}${PERMFILE}") or die "Cannot rename file ${OUTDIR}${TMPFILE}, $!";
}
The test user's home directory showing ownership (or lack thereof):
drwx------ 2 20000 20000 4096 Apr 9 19:59 test
The regex for comparing a file's GID to those existing on the system:
(?-xism:0|1|2|3|4|5|6|7|8|9|10|12|14|15|20|30|39|40|50|54|63|99|100|81|22|35|19|69|32|173|11|33|18|48|68|38|499|76|90|89|156|157|158|67|77|74|177|72|21|501|502|10000|10001|10002|10004|10005|10006|5001|5002|5005|5003|10007|10008|10009|10012|10514|47|51|6000|88|5998)
What am I missing with my logic?
I really recommend using find2perl for anything that locates files by attributes. Although not as pretty as File::Find or File::Find::Rule, it does the work for you.
mori#liberty ~ $ find2perl -nouser
#! /usr/bin/perl -w
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if 0; #$running_under_some_shell
use strict;
use File::Find ();
# Set the variable $File::Find::dont_use_nlink if you're using AFS,
# since AFS cheats.
# for the convenience of &wanted calls, including -eval statements:
use vars qw/*name *dir *prune/;
*name = *File::Find::name;
*dir = *File::Find::dir;
*prune = *File::Find::prune;
sub wanted;
my (%uid, %user);
while (my ($name, $pw, $uid) = getpwent) {
$uid{$name} = $uid{$uid} = $uid;
}
# Traverse desired filesystems
File::Find::find({wanted => \&wanted}, '.');
exit;
sub wanted {
my ($dev,$ino,$mode,$nlink,$uid,$gid);
(($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&
!exists $uid{$uid}
&& print("$name\n");
}
'20000' =~ /(0|1|2|...)/
matches. You probably want to anchor the expression:
/^(0|1|2|...)$/
(The other answer is better, just adding this for completeness.)
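A minimal demonstration of the difference, using the GID from the question and an abbreviated stand-in for the generated alternation:

```perl
use strict;
use warnings;

my $gid   = '20000';
my $known = qr/0|1|2|3|4|5/;    # abbreviated stand-in for the generated regex

print 'unanchored: ', ($gid =~ /$known/       ? "match\n" : "no match\n");
print 'anchored:   ', ($gid =~ /^(?:$known)$/ ? "match\n" : "no match\n");
# unanchored: match      (the '2' inside '20000' is enough)
# anchored:   no match
```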
I am using Strawberry Perl on Windows XP to download multiple html pages, I want each in a variable.
Right now I am doing this, but as I see it, it gets one page at a time:
my $page = `curl -s http://mysite.com/page -m 2`;
my $page2 = `curl -s http://myothersite.com/page -m 2`;
I looked into Parallel::ForkManager, but couldn't get it to work.
I also tried using the Windows command start before curl, but that doesn't fetch the page.
Is there a more simple way to do this?
The Parallel::ForkManager module should work for you, but because it uses fork instead of threads, the variables in the parent and each of the child processes are separate, so they must communicate a different way.
This program uses the -o option of curl to save the pages in files. The file for, say, http://mysite.com/page is saved in file http\mysite.com\page and can be retrieved from there by the parent process.
use strict;
use warnings;
use Parallel::ForkManager;
use URI;
use File::Spec;
use File::Path 'make_path';
my $pm = Parallel::ForkManager->new(10);
foreach my $site (qw( http://mysite.com/page http://myothersite.com/page )) {
my $pid = $pm->start;
next if $pid;
fetch($site);
$pm->finish;
}
$pm->wait_all_children;
sub fetch {
my ($url) = @_;
my $uri = URI->new($url);
my $filename = File::Spec->catfile($uri->scheme, $uri->host, $uri->path);
my ($vol, $dir, $file) = File::Spec->splitpath($filename);
make_path $dir;
print `curl $url -m 2 -o $filename`;
}
Update
Here is a version that uses threads with threads::shared to return each page into a hash shared between all the threads. The hash must be marked as shared, and locked before it is modified to prevent concurrent access.
use strict;
use warnings;
use threads;
use threads::shared;
my %pages;
my @threads;
share %pages;
foreach my $site (qw( http://mysite.com/page http://myothersite.com/page )) {
my $thread = threads->new('fetch', $site);
push @threads, $thread;
}
$_->join for @threads;
for (scalar keys %pages) {
printf "%d %s fetched\n", $_, $_ == 1 ? 'page' : 'pages';
}
sub fetch {
my ($url) = @_;
my $page = `curl -s $url -m 2`;
lock %pages;
$pages{$url} = $page;
}