Installation script in Perl not functioning correctly - linux

I have a program that gets installed using the following Perl script. The installation does not work; I get the message "No installers found." and obviously nothing is done, as the script simply dies.
Here is the Perl install script (it is for installing a program called Simics):
#!/usr/bin/perl
use strict;
use warnings;

# Find the most recent installer in the current working directory.
my $installer;
my $highest_build = 0;
opendir my $d, "." or die $!;
foreach (readdir $d) {
    if (-f && -x && /^build-(\d+)-installer/) {
        if ($1 > $highest_build) {
            $highest_build = $1;
            $installer = $_;
        }
    }
}
closedir $d;

die "No installers found.\n" unless defined $installer;
exec "./$installer", @ARGV;

Stepping through your code above, this line:
foreach (readdir $d) {
reads the name of each of the files in the directory you opened on the handle "$d" and assigns each of those names in turn to the "thing" variable ($_). (This variable is one of those weird but brilliant Perl idiosyncrasies: you don't have to mention $_ in most cases; it's just there.)
Then in the next line:
if (-f && -x && /^build-(\d+)-installer/) {
The "-f" and the "-x" are file test operators. Since neither one has an explicit argument (e.g., -f "myfile.txt") they will use the implied thing variable, $_. The -f operator just checks to see if something is a file and the -x checks to see if the file is executable, (as indicated by the executable bit being set.) The third part, /^build-(\d+)-installer/, checks to see if it matches that pattern.
As you mentioned in your comment above, the directory listing shows
-rw------- 1 nikk nikk 52238 Feb 27 20:50 build-4607-installer.pl
The rw------- shows the file permissions for three classes of users: the owner ("nikk"), the group that owns the file (the second "nikk"), and everyone else. The first three characters, rw-, show that nikk can read and write the file - but not execute it. The listing would show rwx if nikk could also execute it. The next two groups of three characters, --- and ---, show that neither members of group nikk nor anyone else on the machine can read, write, or execute the file.
More information on Unix file system permissions
The lack of execute permission is causing the "-x" test to fail. There are two ways of fixing this. Either remove the -x from the if test so that it looks like this:
if (-f && /^build-(\d+)-installer/) {
Or add execute permission to the file. To do that just for the owner (assuming your program is running as user nikk or as root), do this:
chmod u+x build-4607-installer.pl
More information on chmod.
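Alternatively, the script itself could repair the permission before the -x test with Perl's built-in chmod. A minimal sketch (using the filename from your listing; the 0700 mode is an assumption):

# rwx for the owner only; $! holds the error message if it fails
chmod 0700, "build-4607-installer.pl"
    or die "Can't chmod installer: $!";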
I hope that's helpful!

Related

SSH Create Directory In Remote Site Using Perl Script

Previously I asked a question here on how to determine whether a path on a remote site is a directory or not using SSH. I wish to create the directory if the path is not a directory. I have tried the following code in two ways, but neither seems to work. Thanks to everyone who helps here.
use File::Path;

my $destination_path = "<path>";
my $ssh = "/usr/bin/ssh";
my $user_id = getpwuid( $< );
my $site = "<site_name>";
my $host = "rsync.$site.com";

if (system("$ssh $user_id\@$host [ -d $destination_path ]") == 0) {
    print "It is a directory.\n";
} else {
    print "It is not a directory.\n";
    # First way
    if (system("$ssh $user_id\@$host [ make_path ($d_path_full) ]") == 0) {
    # Second way
    if (system("$ssh $user_id\@$host [ mkdir -p $d_path_full ]") == 0) {
        print "Create directory successfully.\n";
    } else {
        print "Create directory fail.\n";
    }
}
The brackets, a single [ or the pair [ ], are a builtin in bash, the test operator (see man test), and the last use of it is incorrect. But you don't need it to make a directory:
use warnings;
use strict;
use feature 'say';

my $ssh = '/usr/bin/ssh';
my $user_id = ...
my $host = ...
my $to = quotemeta $user_id.'@'.$host;
my $cmd = 'mkdir -p TEST_MKDIR_OVER_SSH';

system("$ssh $to $cmd") == 0 or die "Can't mkdir: $!";
With -p, mkdir is quiet if the directory already exists, and it returns success, which also defeats the purpose of the [ ] test (if that was the intent). But an actual error -- a file with that name exists, no permissions on the path, etc. -- does make its way back to the script, as you'd want, and a string with the error message is in $!, so please test for this.
If you simply wish to know whether the directory already existed, put back your test branch, or just omit -p and analyze $! for whatever that message is on your system.
As for the second attempt: the command to be executed runs on the remote system and has nothing to do with this script anymore (apart from interpolated variables). So Perl functions or libraries from this script make no sense in that command.
For the next step, I suggest looking into modules for (preparing and) running external commands; they are much more helpful than the bare system.
Some, from simple to more capable: IPC::System::Simple, Capture::Tiny, IPC::Run3, IPC::Run. Also see String::ShellQuote, to prepare commands and avoid quoting issues, shell injection bugs, and other problems. This recent post is a good example, and there's a lot more out there.
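For instance, a minimal sketch using IPC::System::Simple (assuming the module is installed from CPAN); its systemx never invokes a shell and dies with a descriptive message on failure:

use IPC::System::Simple qw(systemx);
# Arguments are passed as a list, so no shell quoting issues arise
systemx('/usr/bin/ssh', "$user_id\@$host", 'mkdir', '-p', $destination_path);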
I would recommend using a proper module to do SSH, namely Net::OpenSSH, a SSH client built upon OpenSSH.
While being implemented in pure Perl, it is fast and stable, and has no mandatory dependencies (apart, of course, from the OpenSSH binaries).
As explained in the docs, it will, under certain conditions, automatically quote any shell metacharacters in the command lists.
The following code demonstrates how it can address your use case. It relies on the same shortcut explained by @zdim, using mkdir -p:
if the directory does not exist, it gets created (if that fails, an error happens)
if it already exists, nothing happens
if a file exists with the target name, an error happens
Code:
use warnings;
use strict;
use Net::OpenSSH;

my $host = ...;
my $user_id = ...;
my $destination_path = ...;

# connect
my $ssh = Net::OpenSSH->new($host, user => $user_id);
$ssh->error and die "Can't ssh to $host: " . $ssh->error;

# try to create the directory
if ( $ssh->system('mkdir', '-p', $destination_path) ) {
    print "dir created!\n";
} else {
    print "can't mkdir $destination_path on $host: " . $ssh->error . "\n";
}

# disconnect
undef $ssh;

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one; I looked the issue up on Google, but couldn't find an answer that was useful to me. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, which kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines whether it's required or not, and the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line and ask the user for a value for each argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user a value for every argument... But:
1) The read command seems not to be executing. It just gets skipped, and the argument ends up with no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" once, and then the module just launches (and crashes, because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for eventual mistakes, I'm french)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
    ...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your module system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
    echo 'command1 failed!' >&2
    exit 1
}

if command2; then
    echo 'command2 succeeded!' >&2
else
    echo 'command2 failed!' >&2
    exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
    echo "Error creating temp directory" >&2
    exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
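For example, the question's backtick line could be written as:

argline=$(cat argline.tmp)    # same result, easier to read and to nest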
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
    arglist+=("$answer")    # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
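Putting the fd-3 redirection, the IFS-based field splitting, and the array together, here's a minimal sketch of the rewritten loop (keeping the question's info helper and args.conf layout as assumptions):

info "Reading module $name argument list..."
arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer    # fd 0 is still the terminal, so this prompts properly
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"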
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.

Executing a bash script from a Perl program

I'm trying to write a Perl program which will execute a bash script. The Perl script looks like this
#!/usr/bin/perl
use diagnostics;
use warnings;
require 'userlib.pl';
use CGI qw(:standard);
ReadParse();
my $q = new CGI;
my $dir = $q->param('X');
my $s = $q->param('Y');
ui_print_header(undef, $text{'edit_title'}.$dir, "");
print $dir."<br>";
print $s."<br>";
print "Under Construction <br>";
use Cwd;
my $pwd = cwd();
my $directory = "/Logs/".$dir."/logmanager/".$s;
my $command = $pwd."/script ".$directory."/".$s.".tar";
print $command."<br>";
print $pwd."<br>";
chdir($directory);
my $pwd1 = cwd();
print $pwd1."<br>";
system($command, $directory) or die "Cannot open Dir: $!";
The script fails with the following error:
Can't exec "/usr/libexec/webmin/foobar/script
/path/filename.tar": No such file or directory at /usr/libexec/webmin/foobar/program.cgi line 23 (#3)
(W exec) A system(), exec(), or piped open call could not execute the
named program for the indicated reason. Typical reasons include: the
permissions were wrong on the file, the file wasn't found in
$ENV{PATH}, the executable in question was compiled for another
architecture, or the #! line in a script points to an interpreter that
can't be run for similar reasons. (Or maybe your system doesn't support #! at all.)
I've checked that the permissions are correct, the tar file I'm passing to my bash script exists, and also tried from the command line to run the same command I'm trying to run from the Perl script ( /usr/libexec/webmin/foobar/script /path/filename.tar ) and it works properly.
In Perl, calling system with a single argument and calling it with a list of several arguments does different things.
When called with a single argument,
system($command)
will start an external shell and execute $command in it. If the string in $command has arguments, they will be passed to the call, too. So for example
$command="ls /";
system($command);
will evaluate to
sh -c "ls /"
where the shell is given the entire string, i.e. the command with all arguments. Also, the $command will run with all the normal environment variables set. This can be a security issue, see here and here for a few examples why.
On the other hand, if you call system with a list of several arguments, Perl will not start a shell and hand it $command; instead it tries to execute the first element of the list directly, giving it the remaining elements as its arguments. So
$command = "ls";
$directory = "/";
system($command, $directory);
will call ls directly, without spawning a shell in between.
Back to your question: your code says
my $command = $pwd."/script ".$directory."/".$s.".tar";
system($command, $directory) or die "Cannot open Dir: $!";
Note that $command here is something like /path/to/script /path/to/foo.tar, with the argument already being part of the string. If you pass this as the single argument
system($command)
all will work fine, because
sh -c "/path/to/script /path/to/foo.tar"
will execute script with foo.tar as its argument. But if you call it with two arguments, as your code does, Perl will try to locate an executable named /path/to/script /path/to/foo.tar (the whole string, space included), and this will fail.
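In other words, either keep the single-string form, or split the program and its argument into separate list elements. A sketch of the list-form fix, reusing the question's variables:

my $command = $pwd."/script";
my $tarfile = $directory."/".$s.".tar";
system($command, $tarfile) == 0
    or die "Can't run $command: $!";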
I found the problem.
I changed the system command, removing the second parameter, and now it's working:
system($command) or die "Cannot open Dir: $!";
To be fair, I did not understand what was wrong with the first example, but now it works fine. If anyone can explain, it would be interesting to understand.
There are multiple ways to execute bash commands/scripts from Perl:
system
backticks (``)
exec
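A quick sketch of all three (the commands are placeholders):

system("/bin/echo", "hello");    # runs the command, waits, returns the exit status
my $out = `/bin/date`;           # backticks run a shell and capture STDOUT
exec "/bin/echo", "goodbye";     # replaces the current process; never returns on success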

One liner to append a file into another file but only if it hasn't already been added

I have an automated process that has a number of lines like the following pattern:
sudo cat /some/path/to/a/file >> /some/other/file
I'd like to transform that into a one liner that will only append to /some/other/file if /some/path/to/a/file has not already been added.
Edit
It's clear I need some examples here.
example 1: Updating a .bashrc script for a specific login
example 2: Creating a .screenrc for different logins
example 3: Appending to the end of a /etc/ config file
Some other caveats: the text is going to be added in a block (>>). Consequently, it should be relatively straightforward to see whether the entire block has been added near the end of a file. I am trying to come up with a simple method for determining whether or not the file has already been appended to the original.
Thanks!
Example python script...
def check_for_appended(new_file, original_file):
    """ Checks original_file to see if it has the contents of new_file """
    new_lines = reversed(new_file.split("\n"))
    original_lines = reversed(original_file.split("\n"))
    appended = None
    for new_line, orig_line in zip(new_lines, original_lines):
        if new_line != orig_line:
            appended = False
            break
    else:
        appended = True
    return appended
Maybe this will get you started - this GNU awk script:
gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
will tell you if the contents of "file1" are present in "file2". It cannot tell you how they got there, e.g. whether it's because you previously concatenated file1 onto the end of file2.
Is that all you need? If not update your question to clarify/explain.
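A quick hypothetical demo with throwaway files:

$ printf 'a\nb\n' > file1
$ printf 'x\ny\na\nb\n' > file2
$ gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
present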
Here's a technique to see if a file contains another file
contains_file_in_file() {
    local small=$1
    local big=$2
    awk -v RS="" '{small=$0; getline; exit !index($0, small)}' "$small" "$big"
}

if ! contains_file_in_file /some/path/to/a/file /some/other/file; then
    sudo cat /some/path/to/a/file >> /some/other/file
fi
EDIT: the OP just told me in the comments that the files he wants to concatenate are bash scripts -- this brings us back to the good ole C preprocessor include guard tactics:
prepend every file with
if [ -z "$__<filename>__" ]; then __<filename>__=1; else
(of course replacing <filename> with the name of the file) and at the end
fi
this way, you surround the script in each file with a test for something that's only true once.
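For example, a fragment hypothetically named myfuncs.sh would be wrapped like this:

if [ -z "$__myfuncs_sh__" ]; then __myfuncs_sh__=1;
    # ... the fragment's original content goes here ...
    echo "myfuncs.sh loaded"
fi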
Does this work for you?
sudo sh -c 'set -o noclobber; date > /tmp/testfile'
noclobber prevents overwriting an existing file.
I think it doesn't, since you wrote you want to append something but this technique might help.
When the appending all occurs in one script, then use a flag:
if [ -z "${appended_the_file}" ]; then
cat /some/path/to/a/file >> /some/other/file
appended_the_file="Yes I have done it except for permission/right issues"
fi
I would continue by writing a function appendOnce() { ... } with the content above. If you really want an ugly one-liner (ugly: painful for your eyes and your colleagues):
test -z "${ugly}" && cat /some/path/to/a/file >> /some/other/file && ugly="dirt"
Combining this with sudo:
test -z "${ugly}" && sudo "cat /some/path/to/a/file >> /some/other/file" && ugly="dirt"
It appears that what you want is a collection of script segments which can be run as a unit. Your approach -- making them into a single file -- is hard to maintain and subject to a variety of race conditions, making its implementation tricky.
A far simpler approach, similar to that used by most modern Linux distributions, is to create a directory of scripts, say ~/.bashrc.d and keep each chunk as an individual file in that directory.
The driver (which replaces the concatenation of all those files) just runs the scripts in the directory one at a time:
if [[ -d ~/.bashrc.d ]]; then
    for f in ~/.bashrc.d/*; do
        if [[ -f "$f" ]]; then
            source "$f"
        fi
    done
fi
To add a file from a skeleton directory, just make a new symlink.
add_fragment() {
    if [[ -f "$FRAGMENT_SKELETON/$1" ]]; then
        # The following will silently fail if the symlink already
        # exists. If you wanted to report that, you could add || echo...
        ln -s "$FRAGMENT_SKELETON/$1" "$HOME/.bashrc.d/$1" 2>/dev/null
    else
        echo "Not a valid fragment name: '$1'"
        exit 1
    fi
}
Of course, it is possible to effectively index the files by contents rather than by name. But in most cases, indexing by name will work better, because it is robust against editing the script fragment. If you used content checks (md5sum, for example), you would run the risk of having an old and a new version of the same fragment, both active, and without an obvious way to remove the old one.
But it should be straightforward to adapt the above structure to whatever requirements and constraints you might have.
For example, if symlinks are not possible (because the skeleton and the instance do not share a filesystem, for example), then you can copy the files instead. You might want to avoid the copy if the file is already present and has the same content, but that's just for efficiency and it might not be very important if the script fragments are small. Alternatively, you could use rsync to keep the skeleton and the instance(s) in sync with each other; that would be a very reliable and low-maintenance solution.
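For instance, a single rsync call (the skeleton path here is a hypothetical placeholder) keeps an instance's fragment directory identical to the skeleton:

# --delete also removes fragments that were dropped from the skeleton;
# omit it if instances are allowed to keep local additions
rsync -a --delete /path/to/skeleton/bashrc.d/ ~/.bashrc.d/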

find based filename autocomplete in Bash script

There is a command line feature I've been wanting for a long time, and I've thought about how to best realize it, but I got nothing...
So what I'd like to have: when I start typing a filename and hit tab, for example:
# git add Foo<tab>
I'd like it to run find . -name "*$1*" and basically autocomplete the complete path to the matched file onto my command line.
What I have so far:
I know I'll have to write a function that will call the app with the parameters I want, for example git add. After that it needs to catch the tab-keystroke event, do the find mentioned above, and display the results if there are many, or fill in the result if there is exactly one.
What I haven't been able to figure out:
How to catch the tab key event from within a function within a function.
So basically in pseudocode:
gadd() { git add autocomplete_file_search($1) }

autocomplete_file_search(keyword) {
    if( tab-key-pressed ) {
        files = find . -name "*$1*";
        if( filecount > 1 ) {
            show list;
        }
        if( files == 1 ) {
            return files;
        }
    }
}
Any ideas?
thanks.
Matching anywhere in the filename is rather complicated, and I'm not sure it's really all that useful. Matching at the start of filenames makes more sense and is much easier to implement, even recursively.
Now, you mentioned find as a requirement, but bash (since version 4.0) can also find files recursively, and it should be more efficient to let bash do that part. To match recursively in bash, you enable the globstar shell option by running shopt -s globstar, then two consecutive asterisks, **, will match recursively.
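A quick illustration of the difference (the tree here is hypothetical):

shopt -s globstar
echo */foo*     # matches only one level deep, e.g. src/foo.c
echo **/foo*    # matches at any depth, e.g. src/lib/deep/foo.c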
Next up, given that you want to match files recursively inside a git repository, we best have a way to detect that we're actually in a git repository, otherwise, if you accidentally trigger it in / for instance, your prompt will hang while waiting for bash to search through your entire filesystem. The following function should be fairly efficient at determining if we're inside a git repository. Given the current working directory, e.g. /foo/bar/baz, it'll look for /foo/bar/baz/.git, /foo/bar/.git, /foo/.git, /.git and return true if it finds one, false otherwise.
isgit() {
    local p=$PWD
    while [[ $p ]]; do
        [[ -d $p/.git ]] && return
        p=${p%/*}
    done
    return 1
}
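A quick usage check, assuming the function above has been sourced into the current shell (the repository path is hypothetical):

$ cd /tmp && isgit; echo $?          # 1: not inside a git repository
$ cd ~/src/myrepo && isgit; echo $?  # 0: inside a git repository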
For simplicity, we'll create a gadd command to add the completions for. A completion function can only be applied to the first word of the command. E.g. we can add completion for git, but not git add, thus we'll make a new command that turns git add into one word.
gadd() {
    git add "$@"
}
Now for the actual completion function. When triggered by hitting TAB, the function will be invoked with three arguments. $1 is the command being completed, $2 is the current word of the command line being completed, and $3 is the previous word on the line. So the files we want to search will be matched by the glob **/"$2"*; all files starting with "$2". We iterate these filenames, and append them to the COMPREPLY array. If the COMPREPLY array only contains one value when the function is done, the word will be replaced by that value. If it contains more than one value, hit tab another time to get a list of all the matches.
shopt -s globstar

_git_add_complete() {
    local file
    isgit || return
    for file in **/"$2"*; do
        # If the glob doesn't match, we'll get the glob itself, so make sure
        # we have an existing file
        [[ -e $file ]] || continue
        # If it's a directory, add a trailing /
        [[ -d $file ]] && file+=/
        COMPREPLY+=( "$file" )
    done
}
complete -F _git_add_complete gadd
Add the above three code blocks to your ~/.bashrc, then open a new terminal, enter a git repository and try gadd something<tab>.
You should take a look at this introduction to bash completion. Briefly, bash has a system for configuring and extending tab completion. Other shells do this, too, and each one has a different way to set it up. Using this system it is not necessary to do everything yourself and adding custom argument completion to a command is relatively easy.
Does this work?
$ cat .bash_completion
_foo()
{
    local cur files
    cur=${COMP_WORDS[COMP_CWORD]}
    files=$(for x in `find -type f`; do echo ${x}; done)
    COMPREPLY=( $( compgen -W "${files}" -- ${cur} ) )
    return 0
}
complete -F _foo foo
$ . /etc/bash_completion
$ foo ./[tab]
I wrote git-number so that I never have to hit tab when specifying files to git.
With git-number I can use numbers to represent the filenames that I want git to handle.
