A way to parse terminal output/input? (.bashrc?) - linux

Is there a way to parse input and output from bash commands in an interactive terminal before they reach the screen? I was thinking maybe something in .bashrc, but I'm new to using bash.
For example:
I type "ls /home/foo/bar/"
That gets passed through a script that replaces all instances of 'bar' with 'eggs'
"ls /home/foo/eggs/" gets executed
The output gets sent back to the replace script
The output of the script is sent to the screen

Yes. Here's something I wrote for my own use, to wrap old command-line Fortran programs that ask for file paths. It allows escaping back to the shell for e.g. running ls. This only works one way, i.e. it intercepts user input and then passes it on to a program, but it gets you most of what you want. You can adapt it to your needs.
#!/usr/bin/perl
# shwrap.pl - Wrap any process for convenient escape to the shell.
# ire_and_curses, September 2006

use strict;
use warnings;

# Check args
my $executable = shift || die "Usage: shwrap.pl executable";
my @escape_chars = ('#');    # Escape to shell with these chars
my $exit = 'exit';           # Exit string for quick termination

open my $exe_fh, "|$executable @ARGV"
    or die "Cannot pipe to program $executable: $!";

# Set magic buffer autoflush on...
select((select($exe_fh), $| = 1)[0]);

# Accept input until the child process terminates or is terminated...
while ( 1 ) {
    chomp(my $input = <STDIN>);

    # End if we receive the special exit string...
    if ( $input =~ m/$exit/ ) {
        close $exe_fh;
        print "$0: Terminated child process...\n";
        exit;
    }

    foreach my $char ( @escape_chars ) {
        # Escape to the shell if the input starts with an escape character...
        if ( my ($command) = $input =~ m/^$char(.*)/ ) {
            system $command;
        }
        # Otherwise pass the input on to the executable...
        else {
            print $exe_fh "$input\n";
        }
    }
}
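For example, assuming a hypothetical Fortran executable called myprog, you would start it as ./shwrap.pl myprog; typing a line starting with # (such as #ls) then runs the rest of the line in a shell, and typing exit terminates the wrapped child.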

Related

Finding a file in a directory using a Perl script

I'm trying to develop a perl script that looks through all of the user's directories for a particular file name without the user having to specify the entire pathname to the file.
For example, let's say the file of interest is data.list. It's located in /home/path/directory/project/userabc/data.list. At the command line, the user would normally have to specify the entire pathname to access it, like so:
cd /home/path/directory/project/userabc/data.list
Instead, I want the user to just enter script.pl ABC at the command line; the Perl script should then automatically run and retrieve the information in data.list, which in my case means counting the number of lines and uploading the result using curl. The rest is done; I just need the part where it can automatically locate the file.
Even though this is very feasible in Perl, it looks more appropriate for Bash:
#!/bin/bash
filename=$(find ~ -name "$1" )
wc -l "$filename"
curl .......
The main issue would of course be if you have multiple files with the same name, say /home/user/dir1/data1 and /home/user/dir2/data1. You will need a way to handle that, and how you handle it would depend on your specific situation.
In Perl that would be much more complicated:
#! /usr/bin/perl -w
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
    if 0; #$running_under_some_shell

use strict;

# Import the module File::Find, which will do all the real work
use File::Find ();

# Set the variable $File::Find::dont_use_nlink if you're using AFS,
# since AFS cheats.

# For the convenience of &wanted calls, including -eval statements:
# Here, we "import" specific variables from the File::Find module.
# The purpose is to be able to just type '$name' instead of the
# complete '$File::Find::name'.
use vars qw/*name *dir *prune/;
*name  = *File::Find::name;
*dir   = *File::Find::dir;
*prune = *File::Find::prune;

# We declare the sub here; the content of the sub will be created later.
sub wanted;

# This is a simple way to get the first argument. There is no
# checking on validity.
our $filename = $ARGV[0];

# Traverse the desired filesystem. /home is the top directory where we
# start our search. The sub wanted will be executed for every file
# we find.
File::Find::find({wanted => \&wanted}, '/home');
exit;

sub wanted {
    # Check if the file is our desired filename
    if ( /^$filename\z/ ) {
        # Open the file, read it and count its lines
        my $lines = 0;
        open(my $F, '<', $name) or die "Cannot open $name";
        while (<$F>) { $lines++; }
        print("$name: $lines\n");
        # Your curl command here
    }
}
You will need to look at the argument parsing, for which I simply used $ARGV[0], and I don't know what your curl command looks like.
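If you need something more robust than $ARGV[0], here is a minimal sketch using the core Getopt::Long module (the --name option is an assumption for illustration, not part of the original script):

use strict;
use warnings;
use Getopt::Long;

# Hypothetical option parsing; adapt the option name to your needs
GetOptions( 'name=s' => \my $filename )
    or die "Usage: $0 --name <filename>\n";
die "Usage: $0 --name <filename>\n" unless defined $filename;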
A simpler (though not recommended) way would be to abuse Perl as a sort of shell:
#!/usr/bin/perl
#
my $fn = `find /home -name '$ARGV[0]'`;
chomp $fn;
my $wc = `wc -l '$fn'`;
print "$wc\n";
system("your curl command");
The following code snippet demonstrates one of many ways to achieve the desired result.
The code takes one parameter, a word to look for inside file(s) named data.list in all subdirectories, and prints a list of the found files to the terminal.
The code uses the subroutine lookup($dir,$filename,$search), which calls itself recursively whenever it comes across a subdirectory.
The search starts from the current working directory (the question did not specify a starting directory).
use strict;
use warnings;
use feature 'say';

my $search = shift || die "Specify what to look for";
my $fname  = 'data.list';

my $found = lookup('.', $fname, $search);

if( $found && @$found ) {
    say for @$found;
} else {
    say 'Not found';
}

exit 0;

sub lookup {
    my $dir    = shift;
    my $fname  = shift;
    my $search = shift;

    my $files;

    my @items = glob("$dir/*");

    for my $item (@items) {
        if( -f $item && $item =~ /\b$fname\b/ ) {
            my $found;
            open my $fh, '<', $item or die $!;
            while( my $line = <$fh> ) {
                $found = 1 if $line =~ /\b$search\b/;
                if( $found ) {
                    push @{$files}, $item;
                    last;
                }
            }
            close $fh;
        }
        if( -d $item ) {
            # Recurse into the subdirectory; it returns undef if
            # nothing was found there
            my $ret = lookup($item, $fname, $search);
            push @{$files}, @$ret if $ret;
        }
    }

    return $files;
}
Run it as script.pl search_word.
Sample output:
./capacitor/data.list
./examples/data.list
./examples/test/data.list
Reference:
glob,
Perl file test operators

Redirect STDERR in OPEN pipe command. Perl Linux

The version of perl I'm restricted to is v5.10.0.
I have a script, script1, that needs to call another script, script2.
In order to bypass the shell and avoid command-line injection, I am using open to do this in Perl:
open(my $fh, "-|", "/path/to/script2", "-n", "$param") or say "failure $!";
I cannot modify script2.
I need to get the output of STDERR from script2 into a variable in script1.
I got the "-|" syntax from https://perldoc.perl.org/functions/open.html, but the description there is mostly an example of how to use it rather than an explanation of what it is doing, so I can't figure out how to redirect STDERR to STDOUT, or at least redirect STDERR to some other variable.
I'm hoping for something that will look similar to one of these two options:
# option 1)
open(my $fh, "-|", "/path/to/script2", "-n", "$param") or say "failure $!";
while($line = <$fh>) {    # stderr and stdout are both fed to $line
    print $line;
}

# option 2)
open(STDERR, "+<", $stderr);
open(my $fh, "-|", "/path/to/script2", "-n", "$param") or say "failure $!";
while($line = <$fh>) {    # stdout is fed to $line
    print $line;
}
while($line = <$stderr>) {    # stderr from script2 is fed to $line
    print $line;
}
I would do this using IPC::Run3:
use strict;
use warnings;
use IPC::Run3;
run3 ['script2.pl', '-n', $param], undef, \my $out, \my $err;
This will give you the STDERR of script2.pl in the variable $err.
From the documentation:
run3($cmd, $stdin, $stdout, $stderr, \%options)
All parameters after $cmd are optional.
The parameters $stdin, $stdout and $stderr indicate how the child's
corresponding filehandle (STDIN, STDOUT and STDERR, resp.) will be
redirected. Because the redirects come last, this allows STDOUT and
STDERR to default to the parent's by just not specifying them -- a
common use case.
$cmd
Usually $cmd will be an ARRAY reference and the child is invoked via
system @$cmd;
Note that passing $cmd as an array reference will avoid running the command through the shell, under the same conditions as for the system() command.
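For completeness, a minimal sketch with basic error checking added (IPC::Run3 sets $? just as system does; what to do on failure is an assumption here):

use strict;
use warnings;
use IPC::Run3;

my $param = 'some_value';    # hypothetical parameter
run3 ['/path/to/script2', '-n', $param], undef, \my $out, \my $err;

# $? holds the wait status, as with system(); shift to get the exit code
if ( my $code = $? >> 8 ) {
    warn "script2 exited with status $code; its STDERR was:\n$err";
}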
If the objective is to capture streams, one nice module is Capture::Tiny:
use warnings;
use strict;

use Capture::Tiny qw(capture);

my $cmd  = '/path/to/script2';
my @args = ('-n', $param);

my ($stdout, $stderr, $exit) = capture {
    system($cmd, @args);
};
You can put the command and arguments in a list, @cmd = ('ls', '-l', './'). If the command and arguments are lumped into a scalar, the shell may get used (if there are shell metacharacters).
This portable module allows you to
capture almost anything sent to STDOUT or STDERR, regardless of whether it comes from Perl, from XS code or from an external program.
There is also a capture_stderr function if you only want that stream.
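For example, a minimal sketch using only that function (the command and parameter are assumed for illustration):

use strict;
use warnings;
use Capture::Tiny qw(capture_stderr);

my $param = 'some_value';    # hypothetical parameter

# Capture only STDERR; STDOUT passes through untouched
my $stderr = capture_stderr {
    system('/path/to/script2', '-n', $param);
};
print "child STDERR was: $stderr";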
Note that many tools, including system, have a provision for bypassing the shell: pass command and/or arguments as a list. (However, if you need STDERR you cannot get it directly with system alone, without using shell redirection.)
If there is a possibility that @args would be empty, this should be invoked as
system ( {$cmd} $cmd, @args );
and the shell is still avoided. A nice way to try it out is with $cmd = 'echo "From shell"'. Thanks to ikegami for noting all this in a comment.
See exec for this use of the indirect object notation, which
forces interpretation of the LIST as a multivalued list, even if there is only a single scalar in the list
and this ensures that the shell is never called.

How to use `diff` on files whose paths contain whitespace

I am trying to find the differences between files, but the filename and directory name contain white space. I am trying to execute the command in a Perl script.
diff /home/users/feroz/logs/back_up20161112/Security File/General Security.csv /home/users/feroz/logs/back_up20161113/Security File/General Security.csv
Perl
open( my $FH, '>', $logfile ) or die "Cannot open the file '$logfile' $!";

foreach $filename ( keys %filenames ) {
    $old_file = $parent_directory . $previous_date . $search_directory . "$filenames{$filename}";
    $new_file = $parent_directory . $current_date . $search_directory . "$filenames{$filename}";

    if ( !-e $old_file ) {
        #print ("\nFile does not exist in previous date backup");
        print $FH "\nERROR:'$old_file' ---- does not exist in the backup directory ";
    }
    elsif ( !-e $new_file ) {
        #print ("\n The file does not exist in current directory");
        print $FH "\nERROR:'$new_file' --- does not exist in the present directory ";
    }
    else {
        # print $FH "\nDifference between the files $filenames{$filename} of $previous_date and $current_date ";
        my $cmd = 'diff $old_file $new_file| xargs -0';
        open( my $OH, '|-', $cmd ) or die "Failed to read the output";
        while ( <$OH> ) {
            print $FH "$_";
        }
        close $OH;
    }
}
To be absolutely safe, use String::ShellQuote:
use String::ShellQuote;
my $old_file2 = shell_quote($old_file);
my $new_file2 = shell_quote($new_file);
`diff $old_file2 $new_file2`;
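A usage sketch combining this with the question's loop (reading through a pipe instead of backticks is an assumption, so the output can go to the log):

use String::ShellQuote;

my $cmd = 'diff ' . shell_quote($old_file) . ' ' . shell_quote($new_file);
open( my $OH, '-|', $cmd ) or die "Failed to run diff: $!";
while ( my $line = <$OH> ) {
    print $FH $line;    # $FH is the log filehandle from the question
}
close $OH;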
Thank you for showing your Perl code.
Single quotes don't interpolate, so that will pass the literal strings $old_file and $new_file to the command instead of those variables' contents. The shell will then try to interpret them as shell variables.
I suggest that you write this instead
my $cmd = qq{diff '$old_file' '$new_file' | xargs -0};
open( my $OH, '-|', $cmd ) or die "Failed to read the output";
That uses double quotes (qq{...}) around the command string so that the variables are interpolated, while the file paths have single quotes around them to indicate that the shell should treat them as individual strings.
This won't work if there's a chance that your file paths could contain a single quote, but that's highly unusual.
Pass arguments out-of-band to avoid the need to shell-quote them, rather than interpolating them into a string which is parsed by a shell as a script. Substituting filenames as literal text into a script generates exposure to shell injection attacks -- the shell-scripting equivalent to the family of database security bugs known as SQL injection.
Without Any Shell At All
The pipe to xargs -0 appears to be serving no purpose here. Eliminating it allows this to be run without any shell involved at all:
open( my $fh, '-|', 'diff', $old_file, $new_file )
    or die "Cannot run diff: $!";
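A minimal sketch of consuming the result (writing to the question's log filehandle $FH is an assumption):

while ( my $line = <$fh> ) {
    print $FH $line;
}
close $fh;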
With Shell Arguments Passed Out-Of-Band From Script Text
If you really do want the shell to be invoked, the safe thing to do is to keep the script text an audited constant, and have it retrieve arguments from either the argv list passed to the shell or the environment.
# Putting $1 and $2 in double quotes ensures that the shell treats the
# contents as literal; the "_" is used for $0 in the shell.
my $shell_script = 'diff "$1" "$2" | xargs -0';
open(my $fh, "-|",
     "sh", "-c", $shell_script,
     "_", $old_file, $new_file);
You can either
Put the whitespace path segments inside quotes
diff /home/users/feroz/logs/back_up20161112/"Security File"/"General Security.csv" /home/users/feroz/logs/back_up20161113/"Security File"/"General Security.csv"
or escape the whitespace
diff /home/users/feroz/logs/back_up20161112/Security\ File/General\ Security.csv /home/users/feroz/logs/back_up20161113/Security\ File/General\ Security.csv

Standalone child in backtick command

Here is a main script that executes the Perl script fork.pl:
#!/bin/bash
OUTPUT=`./fork.pl`
echo "$OUTPUT"
And the fork.pl:
#!/usr/bin/perl
use strict;
use warnings;
use POSIX;
my $pid = fork();
if ($pid == 0) {
    sleep(5);
    print("child: $pid\n");
}
else {
    print("parent: $pid\n");
}
The backtick implies a wait, but I would like not to wait for the last child.
One way not to wait for termination is to start the script in the background while redirecting the output to a file, then read the lines with the shell's read.
For example, a hack to read the first line:
./fork.pl > temp.out &
sleep 1
read OUTPUT < temp.out
Alternatively, without sleep, but limited to a do/done block:
./fork.pl | while read OUTPUT; do
    # use $OUTPUT here
    break # first line only, or loop conditionally
done
The child needs to detach from the parent and redirect its input/output: the backticks wait not just for fork.pl to exit but for end-of-file on the pipe, so the child must give up the STDOUT it inherited:
if ($pid == 0) {
    # setsid() (from the POSIX module already loaded in fork.pl)
    # detaches the child from the parent's session
    my $mysid = setsid();
    open (STDIN,  "</dev/null");
    open (STDOUT, ">/dev/null");
    open (STDERR, ">&STDOUT");
    sleep(5);
    print("child: $pid\n");    # now goes to /dev/null
}
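Putting it together, a minimal sketch of the modified fork.pl (the error handling is an addition; the rest follows the question's script):

#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Detach from the parent's session and give up the inherited
    # filehandles, so the parent's backticks stop waiting for EOF
    my $sid = setsid();
    die "setsid failed: $!" if !defined($sid) || $sid < 0;
    open STDIN,  '<',  '/dev/null' or die $!;
    open STDOUT, '>',  '/dev/null' or die $!;
    open STDERR, '>&', \*STDOUT    or die $!;
    sleep(5);
    print "child: $pid\n";    # discarded: goes to /dev/null
}
else {
    print "parent: $pid\n";
}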

Providing a status update when "ENTER" is pressed, while program is working

I have a Perl script that loops through files and calls a binary with a different argument each time. I am using IPC::Run. I would like a status message to be displayed when the user presses a key such as ENTER, for example:
"Currently working on 14 of 28 total scripts (50% complete)"
My script is as follows:
foreach my $file (@files) {
    $file =~ s/$file_dir//;

    # Run the test case, store the output in $stdout
    run [ "php", "PROGRAM.phar", "$file" ], ">", \my $stdout;

    print LOG_FILE "Return code $?\n";
    print LOG_FILE "Output: $stdout";
}
Basically how would I interrupt the binary in order to display my status message?
If I am correct, this usage of IPC::Run is not multithreaded. It will execute the commands one by one, and it is not possible to print messages in between because there is only one process. You could fork the work into child processes with Parallel::ForkManager, like this:
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new($MAX_PROCESSES);
my $input;

foreach my $data (@all_data) {
    # Read user input in the parent before forking; note that a plain
    # <STDIN> read blocks (see the non-blocking sketch below)
    chomp( $input = <STDIN> );
    print "Some statistics\n" if $input eq '';    # bare ENTER gives an empty line

    # Forks and returns the pid for the child:
    my $pid = $pm->start and next;
    # ... do some work with $data in the child process ...
    $pm->finish;    # Terminates the child process
}
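A non-blocking alternative closer to what the question asks for: poll STDIN with the core IO::Select module between jobs, so the loop never stalls waiting for input. This is a minimal sketch; the work list and the wording of the progress line are assumptions based on the question:

use strict;
use warnings;
use IO::Select;

my @files = glob('*.php');    # hypothetical work list
my $sel   = IO::Select->new( \*STDIN );
my $done  = 0;

foreach my $file (@files) {
    # ... run the test case for $file here ...
    $done++;

    # can_read(0) returns immediately, so the loop never blocks
    if ( $sel->can_read(0) ) {
        my $discard = <STDIN>;    # consume the pending ENTER press
        printf "Currently working on %d of %d total scripts (%d%% complete)\n",
            $done, scalar @files, 100 * $done / @files;
    }
}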
