Executing a bash script from a Perl program - linux

I'm trying to write a Perl program which will execute a bash script. The Perl script looks like this:
#!/usr/bin/perl
use diagnostics;
use warnings;
require 'userlib.pl';
use CGI qw(:standard);
ReadParse();
my $q = new CGI;
my $dir = $q->param('X');
my $s = $q->param('Y');
ui_print_header(undef, $text{'edit_title'}.$dir, "");
print $dir."<br>";
print $s."<br>";
print "Under Construction <br>";
use Cwd;
my $pwd = cwd();
my $directory = "/Logs/".$dir."/logmanager/".$s;
my $command = $pwd."/script ".$directory."/".$s.".tar";
print $command."<br>";
print $pwd."<br>";
chdir($directory);
my $pwd1 = cwd();
print $pwd1."<br>";
system($command, $directory) or die "Cannot open Dir: $!";
The script fails with the following error:
Can't exec "/usr/libexec/webmin/foobar/script
/path/filename.tar": No such file or directory at /usr/libexec/webmin/foobar/program.cgi line 23 (#3)
(W exec) A system(), exec(), or piped open call could not execute the
named program for the indicated reason. Typical reasons include: the
permissions were wrong on the file, the file wasn't found in
$ENV{PATH}, the executable in question was compiled for another
architecture, or the #! line in a script points to an interpreter that
can't be run for similar reasons. (Or maybe your system doesn't support #! at all.)
I've checked that the permissions are correct and that the tar file I'm passing to my bash script exists. I've also tried running the same command from the command line ( /usr/libexec/webmin/foobar/script /path/filename.tar ), and it works properly.

In Perl, calling system with a single string argument and calling it with a list of several arguments does two different things.
With a single string, calling
system($command)
will start an external shell and execute $command in it. If the string in $command has arguments, they will be passed to the call, too. So for example
$command="ls /";
system($command);
will evaluate to
sh -c "ls /"
where the shell is given the entire string, i.e. the command with all arguments. (Strictly, Perl only involves the shell when the string contains shell metacharacters; otherwise it splits the string on whitespace and runs the command directly, which amounts to the same thing here.) Also, $command will run with all the normal environment variables set. This can be a security issue (shell injection); the sketch below shows why.
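As a made-up sketch of why that matters here (the $dir value stands in for a CGI parameter like the one in the question):
my $dir = 'foo; rm -rf ~';    # hypothetical attacker-controlled value
system("ls $dir");            # shell parses it: runs "ls foo", THEN "rm -rf ~"
system('ls', $dir);           # list form: the whole string is one (odd) filename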
On the other hand, if you call system with a list of several arguments, Perl will not call a shell and hand it $command; instead it will try to execute the first element of the list directly, passing it the remaining elements as arguments. So
$command = "ls";
$directory = "/";
system($command, $directory);
will call ls directly, without spawning a shell in between.
Back to your question: your code says
my $command = $pwd."/script ".$directory."/".$s.".tar";
system($command, $directory) or die "Cannot open Dir: $!";
Note that $command here is something like /path/to/script /path/to/foo.tar, with the argument already baked into the string. If you call this as a single string
system($command)
all will work fine, because
sh -c "/path/to/script /path/to/foo.tar"
will execute script with foo.tar as its argument. But if you call it with the extra list argument, Perl will try to locate an executable literally named /path/to/script /path/to/foo.tar (spaces included), and that fails with exactly the "No such file or directory" error you saw.

I found the problem.
I changed the system call, removing the second parameter, and now it works:
system($command) or die "Cannot open Dir: $!";
In fairness, I did not understand what was wrong with the first version, but it works fine now; if anyone can explain, it would be interesting to understand.
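For completeness, a sketch (not from the original thread) of the two working variants, which also fixes a second, unrelated bug: system returns 0 on success, so "system(...) or die" dies exactly when the command succeeds. Compare against 0 instead (see perldoc -f system):
my $script  = "$pwd/script";          # the executable itself
my $tarball = "$directory/$s.tar";    # its argument

# Single string: a shell (or Perl's own word-splitting) parses the line
system("$script $tarball") == 0
    or die "script failed: $?";

# List form: no shell; first element is the program, the rest are arguments
system($script, $tarball) == 0
    or die "script failed: $?";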

There are multiple ways to execute bash commands/scripts from Perl:
system
backticks (qx)
exec
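Minimal sketches of each (using a harmless ls just for illustration):
# system: run a command, wait for it, get its exit status back
my $status = system('ls', '-l', '/tmp');

# backticks / qx: run a command and capture its standard output
my $listing = `ls -l /tmp`;           # same as qx(ls -l /tmp)

# exec: replace the current Perl process with the command; it only
# returns if the command could not be started at all
exec('ls', '-l', '/tmp') or die "exec failed: $!";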

Related

Pass an indicator from Bash back to Perl over SSH via STDIN

We have a Linux server which can run a diagnostic script, diag.pl, which coordinates reporting over other servers.
diag.pl iterates over the child servers, and for each of them, SSHs in and runs a bash script, which passes information back:
my $cmd = sprintf("(ssh %s sudo /usr/lib/support/report.sh -e %s | uudecode -o \"%s-outfile.tgz\") 2>&1 |", $server, $specialparam, $servername);
The line of code in report.sh that sends the data back is:
uuencode --base64 ${REPORT}.tar.gz /dev/stdout
I would like to update report.sh to send back an additional line of information, something like:
echo "special-file-found=${SFF}" > /tmp/sff.cfg
uuencode --base64 /tmp/sff.cfg > /dev/stdout
Once the special file has been found, the Perl script will update so that it no longer sends the specialparam back to subsequent report.sh calls.
Is there a good way to send that input so that it will be easy for Perl to catch it?
What have I tried
Setting a user.comment attribute on the tar.gz using setfattr, but the comment does not survive the uuencoding.
Currently thinking that my best bet is to use the pseudocode above: create a new file to encode and send along, and update the Perl script to check it with each new transmission until it finds the special file.
I take it that the objective is to modify a shell script which returns to the caller an encoded file, so that it sends yet more information, specifically a string to be used as a flag in the caller.
It is not clear how the shell script is run from the Perl script, but there are ways to do this so that the caller gets back separate "lines" that are printed, either as they are emitted or altogether after the run completes.
Then you can just add to the shell script the extra print to STDOUT that you need, and in the caller check each line of shell output to see whether it conforms to some "protocol"; for example, whether it is, or starts with, the special-file-found string. Then you can set flags for further calls, write a control file for following runs, etc. Otherwise, the line is the encoded file.
A made-up basic example using a pipe-open (see the documentation for open):
use warnings;
use strict;
use feature 'say';
my @cmd = qw(ls -l ./);
my $file_found = quotemeta 'special-file-found';
my ($flag, $binfile);
my $pid = open(my $out, '-|', @cmd) // die "Can't open @cmd: $!";
while (<$out>) {
    chomp;
    if (/^$file_found/) {
        $flag = 1;
    }
    else {
        $binfile = $_;
        # whatever else need be done, or perhaps last;
    }
}
close $out;
This example runs the command ls -l ./, but instead of it you can run any executable, like @cmd = ('report.sh', 'arg1', 'arg2', ...).
Another way is to use backticks (qx) and assign its return to an array, in which case each element receives a line of output.
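For example (a minimal sketch):
my @lines = qx(ls -l ./);    # list context: one element per line of output
chomp @lines;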
Yet another, better, way is to use a module which manages external commands. For example, from simple to more capable: IPC::System::Simple, Capture::Tiny, IPC::Run3, IPC::Run.
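For instance, a hypothetical sketch with IPC::System::Simple, whose capturex() bypasses the shell (like system's list form) and throws a descriptive exception on failure:
use IPC::System::Simple qw(capturex);

# capturex(program, args...): no shell involved; dies with a useful
# message if the program can't be run or exits non-zero
my @output = capturex('./report.sh', '-e', 'some-arg');
for my $line (@output) {
    chomp $line;
    # check $line against the special-file-found "protocol" here
}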

'less' the file specified by the output of 'which'

The 'which' command shows the path to a command.
The 'less' command opens a file.
How can I 'less' the file given by the output of 'which'?
I don't want to use two commands, like below, to do it:
=>which script
/file/to/script/file
=>less /file/to/script/file
This is a use case for command substitution:
less -- "$(which commandname)"
That said, if your shell is bash, consider using type -P instead, which (unlike the external command which) is built into the shell:
less -- "$(type -P commandname)"
Note the quotes: these are important for reliable operation. Without them, the command may not work correctly if the filename contains characters in IFS (by default, whitespace) or characters that can be evaluated as a glob expression.
The double dashes are likewise there for correctness: Any argument after them is treated as positional (as per POSIX Utility Syntax Guidelines), so even if a filename starting with a dash were to be returned (however unlikely this may be), it ensures that less treats that as a filename rather than as the beginning of a sequence of options or flags.
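A quick made-up demonstration of what the quotes buy you:
file='/tmp/my dir/pager-test'    # hypothetical path containing a space
less -- $file                    # word-splits: less gets two bogus arguments
less -- "$file"                  # one argument, as intended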
You may also wish to consider honoring the user's pager selection via the environment variable $PAGER, and using type without -P to look for aliases, shell functions and builtins:
cmdsource() {
    local sourcefile
    if sourcefile="$(type -P -- "$1")"; then
        "${PAGER:-less}" -- "$sourcefile"
    else
        echo "Unable to find source for $1" >&2
        echo "...checking for a shell builtin:" >&2
        type -- "$1"
    fi
}
This defines a function you can run:
cmdsource commandname
You should be able to just pipe it over; try this:
which script | less
(Note, though, that this pages the output of which itself, i.e. the path, not the contents of the file it points to.)

Installation script in Perl not functioning correctly

I have a program that gets installed using the following Perl script. The installation does not work and I get the message "No installers found." Obviously, nothing was done, as the script simply dies.
Here is the Perl install script (it is for installing a program called Simics):
#!/usr/bin/perl
use strict;
use warnings;
# Find the most recent installer in the current working directory.
my $installer;
my $highest_build = 0;
opendir my $d, "." or die $!;
foreach (readdir $d) {
    if (-f && -x && /^build-(\d+)-installer/) {
        if ($1 > $highest_build) {
            $highest_build = $1;
            $installer = $_;
        }
    }
}
closedir $d;
die "No installers found.\n" unless defined $installer;
exec "./$installer", @ARGV;
Stepping through your code above, this line:
foreach (readdir $d) {
reads the name of each of the files in the directory you opened as the handle $d and assigns each of those files in turn to the "thing" variable, $_. (This variable is one of those weird but brilliant Perl idiosyncrasies: you don't have to mention $_ in most cases; it's just there.)
Then in the next line:
if (-f && -x && /^build-(\d+)-installer/) {
The "-f" and the "-x" are file test operators. Since neither one has an explicit argument (e.g., -f "myfile.txt") they will use the implied thing variable, $_. The -f operator just checks to see if something is a file and the -x checks to see if the file is executable, (as indicated by the executable bit being set.) The third part, /^build-(\d+)-installer/, checks to see if it matches that pattern.
As you mentioned in your comment above, the directory listing shows
-rw------- 1 nikk nikk 52238 Feb 27 20:50 build-4607-installer.pl
The rw------- shows the file permissions for three classes of user: the owner, the group that owns the file, and everyone else; the two "nikk" columns in the listing are the owner and the owning group. The first three characters, rw-, show that nikk can read and write the file but not execute it. The listing would show rwx if nikk could execute it. The next two groups of three characters, --- and ---, show that neither the group nikk nor anyone else on the machine can read, write, or execute it.
More information on Unix file system permissions
The lack of execute permission is causing the "-x" test to fail. There are two ways of fixing this. Either remove the -x from the if test so that it looks like this:
if (-f && /^build-(\d+)-installer/) {
Or add execute permission to the file. To do that just for the owner (assuming your program is running as user nikk or as root), do this:
chmod u+x build-4607-installer.pl
More information on chmod.
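If you'd rather do it from Perl, the built-in chmod takes an octal mode and a list of files and returns the number of files successfully changed (a sketch; the filename is the one from the listing above):
# give the owner read/write/execute on the installer
chmod 0700, 'build-4607-installer.pl'
    or warn "chmod failed: $!";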
I hope that's helpful!

What effect does this line have in a shell script?

I've seen this line in many shell scripts but I don't understand the effect it has. Could someone explain please?
tempfile=`tempfile 2>/dev/null` || tempfile=/tmp/test$$
It creates a temporary file and puts the path to it in the $tempfile variable.
`tempfile 2>/dev/null`
runs the tempfile command (man tempfile) and discards any error messages. If it succeeds, it returns the name of the newly created temporary file. If it fails, it returns non-zero, in which case the next part of the command runs.
For a command this || that, that only runs if this fails, i.e. returns non-zero.
$$ is a variable in bash that expands to the process ID of the shell. (Compare the results of ps and echo $$.) So tempfile=/tmp/test$$ will expand to something like tempfile=/tmp/test2278.
Presumably, later in the script, something writes to $tempfile.
The shell has separate namespaces for commands and variables (making it a Lisp-2, LOL), which is exploited in your script line. tempfile is a command which is run to compute the value of the tempfile variable, which is unrelated to it in any way. tempfile produces a pathname suitable for use as the name of a temporary file. 2>/dev/null redirects any error message from tempfile into /dev/null (2 is the standard error file descriptor). The command1 || command2 logic means "execute command2 if command1 fails". If we can't get a temporary name from tempfile, then we use /tmp/test$$, where $$ is a special built-in shell parameter which expands to the shell's own process ID.
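A tiny demonstration of that separation (all names made up):
greet() { echo "I am the greet function"; }
greet="just a string"      # the variable: a completely separate namespace
greet                      # runs the function: I am the greet function
echo "$greet"              # expands the variable: just a string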
tempfile creates a temporary file with a file name similar to /tmp/tmp.XXXXXX
2>/dev/null redirects the command's standard error to the /dev/null device, which just throws it away. This redirection simply ignores any errors from creating a temporary file.
|| chains two commands together. If the first fails, the second is executed. If the first succeeds, the second is skipped.
$$ is the pid of the current shell, which means that if the tempfile command fails the tempfile variable will still contain a string in the form /tmp/test6052 if the process' pid is 6052.
The first part of the line, up to the ||, runs the program tempfile and captures standard output in the variable tempfile, throwing errors away. There's an exit status, too: either zero for success or non-zero for failure (either failure to execute the tempfile command or failure reported by the tempfile command when it is run).
The || means "if the LHS (left-hand side) failed then do the RHS (right-hand side)".
So, if the tempfile command had a problem, the RHS will be used, assigning a simpler temporary file name to tempfile (the variable).
Overall, it is equivalent to:
if tempfile=`tempfile 2>/dev/null`
then : OK
else tempfile=/tmp/test$$
fi
Only it is on one line, not four.
The idea is, I'm sure, to get something in $tempfile whether or not the tempfile command exists on the machine.
Did you look at man tempfile?
That line is trying to use tempfile(1) to generate a temporary filename, storing it in $tempfile. If that fails (the "||", "or" part), it falls back to an explicit filename of /tmp/test$$, where $$ is the PID of the executing script.
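As an aside: tempfile comes from Debian's debianutils and is deprecated there; the same fallback idiom is nowadays usually written with mktemp (a sketch, same structure as the line in the question):
tempfile=$(mktemp 2>/dev/null) || tempfile=/tmp/test$$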

How can I run a function from a script in command line?

I have a script that has some functions.
Can I run one of the functions directly from the command line?
Something like this?
myScript.sh func()
Well, while the other answers are right, you can certainly do something else: if you have access to the bash script, you can modify it and simply place at the end the special parameter "$@", which expands to the arguments of the command line you specify. Since it stands alone, the shell will try to call them verbatim, and here you can give the function name as the first argument. Example:
$ cat test.sh
testA() {
    echo "TEST A $1";
}
testB() {
    echo "TEST B $2";
}
"$@"
$ bash test.sh
$ bash test.sh testA
TEST A
$ bash test.sh testA arg1 arg2
TEST A arg1
$ bash test.sh testB arg1 arg2
TEST B arg2
For polish, you can first verify that the command exists and is a function:
# Check if the function exists (bash specific)
if declare -f "$1" > /dev/null
then
    # call arguments verbatim
    "$@"
else
    # Show a helpful error
    echo "'$1' is not a known function name" >&2
    exit 1
fi
If the script only defines the functions and does nothing else, you can first execute the script within the context of the current shell using the source or . command and then simply call the function. See help source for more information.
The following command first registers the function in the context, then calls it:
. ./myScript.sh && function_name
Briefly, no.
You can import all of the functions in the script into your environment with source (help source for details), which will then allow you to call them. This also has the effect of executing the script, so take care.
There is no way to call a function from a shell script as if it were a shared library.
Using case
#!/bin/bash
fun1 () {
    echo "run function1"
    [[ "$@" ]] && echo "options: $@"
}
fun2 () {
    echo "run function2"
    [[ "$@" ]] && echo "options: $@"
}
case $1 in
    fun1) "$@"; exit;;
    fun2) "$@"; exit;;
esac
fun1
fun2
This script will run the functions fun1 and fun2, but if you start it with the argument fun1 or fun2, it will run only the given function, with its args (if provided), and then exit.
Usage
$ ./test
run function1
run function2
$ ./test fun2 a b c
run function2
options: a b c
I have a situation where I need a function from a bash script which must not be executed beforehand (e.g. by source), and the problem with "$@" is that myScript.sh is then run twice, it seems... So I came up with the idea of extracting the function with sed:
sed -n "/^func ()/,/^}/p" myScript.sh
And to execute it at the time I need it, I put it in a file and use source:
sed -n "/^func ()/,/^}/p" myScript.sh > func.sh; source func.sh; rm func.sh
Edit: WARNING - seems this doesn't work in all cases, but works well on many public scripts.
If you have a bash script called "control" and inside it you have a function called "build":
function build() {
...
}
Then you can call it like this (from the directory where it is):
./control build
If it's inside another folder, that would make it:
another_folder/control build
If your file is called "control.sh", that would accordingly make the function callable like this:
./control.sh build
Solved post but I'd like to mention my preferred solution. Namely, define a generic one-liner script eval_func.sh:
#!/bin/bash
source $1 && shift && "$@"
Then call any function within any script via:
./eval_func.sh <any script> <any function> <any args>...
An issue I ran into with the accepted solution is that when sourcing my function-containing script within another script, the arguments of the latter would be evaluated by the former, causing an error.
The other answers here are nice, and much appreciated, but often I don't want to source the script in the session (which reads and executes the file in your current shell) or modify it directly.
I find it more convenient to write a one or two line 'bootstrap' file and run that. Makes testing the main script easier, doesn't have side effects on your shell session, and as a bonus you can load things that simulate other environments for testing. Example...
# breakfast.sh
make_donuts() {
    echo 'donuts!'
}
make_bagels() {
    echo 'bagels!'
}
# bootstrap.sh
source 'breakfast.sh'
make_donuts
Now just run ./bootstrap.sh. The same idea works with your Python, Ruby, or whatever scripts.
Why useful? Let's say you complicated your life for some reason, and your script may find itself in different environments with different states present. For example, either your terminal session, or a cloud provider's cool new thing. You also want to test cloud things in terminal, using simple methods. No worries, your bootstrap can load elementary state for you.
# breakfast.sh
# Now it has to do slightly different things
# depending on where the script lives!
make_donuts() {
    if [[ $AWS_ENV_VAR ]]
    then
        echo '/donuts'
    elif [[ $AZURE_ENV_VAR ]]
    then
        echo '\donuts'
    else
        echo '/keto_diet'
    fi
}
If you let your bootstrap thing take an argument, you can load different state for your function to chew, still with one line in the shell session:
# bootstrap.sh
source 'breakfast.sh'
case $1 in
    AWS)
        AWS_ENV_VAR="arn::mumbo:jumbo:12345"
        ;;
    AZURE)
        AZURE_ENV_VAR="cloud::woo:_impress"
        ;;
esac
make_donuts # You could use $2 here to name the function you wanna, but careful if evaluating directly.
In terminal session you're just entering:
./bootstrap.sh AWS
Result:
# /donuts
You can call a function from a command-line argument like below:
function irfan() {
    echo "Irfan khan"
    date
    hostname
}
function config() {
    ifconfig
    echo "hey"
}
$1
Once you have defined the functions, put $1 at the end to accept as an argument the name of the function you want to call.
Let's say the above code is saved in fun.sh. Now you can call the functions as ./fun.sh irfan and ./fun.sh config from the command line.
