Checking whether a program exists - linux

In the middle of my Perl script I want to execute a bash command. The script takes a long time, so at the beginning of the script I want to check whether the command exists. This answer says to just try to run it, and this other answer suggests some bash commands to test whether the program exists.
Is the latter option the best solution? Are there any better ways to do this check in perl?

My best guess is that you want to check for the existence of an executable file that you want to run using system or qx//.
But if you want the command to be located the same way the shell would locate it, then you can probably use File::Which.
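For example, a minimal sketch (File::Which is a CPAN module, not a core one, and pgrep here is just a stand-in for whatever command you need):

use File::Which qw(which);

# which() returns the full path of the first match found in PATH, or undef.
my $path = which('pgrep');
die "pgrep not found in PATH\n" unless defined $path;
print "found at $path\n";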

What if we assume that we don't know the command's location?
This means that syck's answer won't work, and zdim's answer is incomplete.
Try this function in Perl:
sub check_exists_command {
    my ($cmd) = @_;
    # Ask the shell; prints the resolved name/path, or nothing if not found.
    my $check = `sh -c 'command -v $cmd'`;
    chomp $check;
    return $check;    # empty string (false) when not found
}
# two examples
check_exists_command 'pgrep' or die "$0 requires pgrep";
check_exists_command 'readlink' or die "$0 requires readlink";
I just tested it, because I just wrote it.

With Perl, you can test files for existence, readability, executability, etc.; take a look here.
Therefore just use
executeBashStuff() if -x $filename;
or stat it:
stat($filename);
executeBashStuff() if -x _;
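If the location isn't known in advance, you can apply the same file tests to each directory in PATH yourself. A sketch (find_in_path is a hypothetical helper; note that, unlike command -v, this will not detect shell built-ins):

use File::Spec;

sub find_in_path {
    my ($name) = @_;
    # File::Spec->path() returns the directories listed in $ENV{PATH}.
    for my $dir (File::Spec->path()) {
        my $candidate = File::Spec->catfile($dir, $name);
        return $candidate if -x $candidate && !-d $candidate;
    }
    return;   # not found
}

find_in_path('pgrep') or die "$0 requires pgrep";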

To me a better check is to run the program at the beginning of the script (with -V say).
I'd use the same invocation as you use to run the job later (via shell or not, via execvp). While at it, make sure to check whether it threw errors. This is also discussed in your link, but I would in fact capture the output (not send it away) and check that. This is the surest way to see whether the thing actually runs out of your program and whether it is what you expect it to be.
Checking for the executable with -x (if you know the path) is useful, too, but it only tells you that a file with a given name is there and that it is executable.
The system's which seems to be beset with criticism for its possible (mis)behavior: it may be a shell built-in or an external utility (which complicates how exactly to use it), and its exact behavior is system-dependent. The module File::Which pointed out in Borodin's answer would be better -- if it is indeed better than which. (Which it may well be; I just don't know.)
Note: I am not sure what "bash command" means here: a bash shell built-in, or just the fact that you use bash at the terminal? Perl's qx and system use the sh shell, not bash (when they invoke a shell at all, which depends on how you use them). While sh is usually a link, often to bash, it may not be; there are differences, and you cannot rely on your shell configuration.
You can also actually run a specific shell, qx(/path/to/bash -c 'cmd args'), if you must. Mind the quotes. You may need to play with it to find the exact syntax on your system. See this page and links.
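A minimal sketch of that "run it once up front" check (the program name and its -V flag are placeholders; use the same invocation you will use for the real job):

# Capture both stdout and stderr; qx hands this to sh because of the redirection.
my $out = qx(some_program -V 2>&1);
if ($? != 0) {
    die "some_program doesn't appear to run (exit ", $? >> 8, "): $out";
}
print "some_program reports: $out";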

Related

Dry-run a potentially dangerous script?

A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked because I had to interpolate all the commands manually. In general, is there some sort of way I can "dry run" a script to have it dump out all the commands it would execute, without actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated and any command invoked - whether built in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis will stop at the first error, an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
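A quick way to see those codes for yourself (uninstall.sh is a placeholder name):

bash -n uninstall.sh; echo "bash: $?"   # 2 on a syntax error
ksh -n uninstall.sh; echo "ksh: $?"     # 3 on a syntax error
zsh -n uninstall.sh; echo "zsh: $?"     # 1 on a syntax error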
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source-code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (env. var. PS4 allows tweaking the output format).
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v will effectively simply print the source code.
If there is a syntax error, there may be some benefit in seeing the source code printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
You could try running the script under Kornshell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that will be executed.
You can also use set -n for the same effect. Kornshell and BASH are fairly compatible with each other. If it's a pure Bourne shell script, both Kornshell and BASH will execute it pretty much the same.
You can also run ksh -u, which will cause unset shell variables to make the script fail. However, that wouldn't have caught the cat of a nonexistent file here: the shell variable was still set. It was just set to null.
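To illustrate why -u is no help for this particular failure mode (a harmless sketch that only echoes the command it would have built):

set -u
SWROOT=$(cat /etc/vendor/path.conf 2>/dev/null)   # file missing -> SWROOT is set to ""
echo rm -rf "$SWROOT/bin"                         # prints: rm -rf /bin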
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's the best you can probably do.

Alter environment after a command

Altering the environment just for a single command is very simple:
DB=postgresql some_command --with --arguments
Unfortunately, I have to do this on a remote server, and due to limitations of the deployment, I can only edit what comes after some_command. The following would be nice, but doesn't do the trick (in Bash):
some_command --with --arguments DB=postgresql
Is there some other Bash hack to get there?
Here's another idea, a bit wild I'm afraid:
some_command --do-nothing `DB=postgresql some_command --now-really`
The idea is that the backquoted command will actually do what you want. The first some_command is only there so the command will start as you want it. You should find parameters that would make it do something harmless.
If you have nothing equivalent for the --do-nothing parameter, you can do this:
some_command `DB=postgresql some_command --now-really; ps-grep-kill`
Where ps-grep-kill stands for a combination of those commands (I leave the details as an exercise) that finds the parent process, which is just about to run some_command, and kills it before it gets a chance to (but after the backquoted some_command has already run).
Can you execute export DB=postgresql to modify the environment variables globally? Then you can run subsequent commands that pick up the new environment variable.
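A minimal sketch of that approach, assuming you are able to run a command before some_command at all:

export DB=postgresql              # now part of this shell's environment
some_command --with --arguments   # ...and inherited by everything it starts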
If you just run the command, it can't change the environment.
But if you source it, it can:
source some_command --with --arguments DB=postgresql
Shorthand:
. some_command --with --arguments DB=postgresql
I prefer not to source large scripts this way, because then they may end up changing more than you intended. So I'd write two scripts - one finds what you want to change, and outputs it. The other is a very small one, that runs the first one (normally, without source) and does the changes.
But re-reading your question, it seems like you can't use source. So I see no way to do what you want.

In my command-line, why does echo $0 return "-"?

When I type echo $0 I see -
I expect to see bash or some filename. What does it mean if I just get a "-"?
A hyphen in front of $0 means that this program is a login shell.
Note: $0 does not always contain the accurate path to the running executable, as there is a way to override it when calling execve(2).
I get '-bash'. A few weeks ago I played with modifying the process name visible when you run ps, top/htop, or echo $0. To answer your question directly: I don't think it means anything. echo is a built-in function of bash, so when it checks the argument list, bash is actually doing the checking and seeing itself there.
Your intuition is correct, if you wrote echo $0 in a script file, and ran that, you would see the script's filename.
So, based on one of your comments, you really want to know how to determine what shell you're running; you assumed $0 was the solution and asked about that, but as you've seen, $0 won't reliably tell you what you need to know.
If you're running bash, then several unexported variables will be set, including $BASH_VERSION. If you're running tcsh, then the shell variables $tcsh and $version will be set. (Note that $version is an excessively generic name; I've run into problems where some system-wide startup script sets it and clobbers the tcsh-specific variable. But $tcsh should be reliable.)
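For example, at an interactive prompt (a minimal sketch; the version string shown is only illustrative):

# In bash this prints something like 5.1.16(1)-release.
echo "$BASH_VERSION"
# In tcsh the equivalent check would be:
#   echo $tcsh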
The real problem, though, is that bash and tcsh syntax are mostly incompatible. It might be possible to write a script that can execute when invoked (via . or source) from either tcsh or bash, but it would be difficult and ugly.
The usual approach is to have separate setup files, one for each shell you use. For example, if you're running bash you might run
. ~/setup.bash
or
. ~/setup.sh
and if you're running tcsh you might run
source ~/setup.tcsh
or
source ~/setup.csh
The .sh or .csh versions refer to the ancestors of both shells; it makes sense to use those suffixes if you're not using any bash-specific or tcsh-specific features.
But that requires knowing which shell you're running.
You could probably set up an alias in your .cshrc, .tcshrc, or .login, and an alias or function in your .profile, .bash_profile, or .bashrc that will invoke whichever script you need.
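For instance (a sketch; "mysetup" is a hypothetical alias name):

# In ~/.bashrc (bash syntax):
alias mysetup='. ~/setup.bash'
# In ~/.cshrc or ~/.tcshrc (csh syntax):
alias mysetup 'source ~/setup.tcsh'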
Or if you want to do the setup every time you login, or every time you start a new interactive shell, you can put the commands directly in the appropriate shell startup file(s). Of course the commands will be different for tcsh vs. bash.

Scripting on Linux

I am trying to create a script that will run a program on each file in a list. I have been trying to do this using a .csh file (I have no clue if this is the best way), and I started with something as simple as hello world
echo "hello world"
The problem is that I cannot execute this script, or verify that it works correctly. (I was trying to do ./testscript.csh which is obviously wrong). I haven't been able to find anything that really explains how to run C Scripts, and I'm guessing there's a better way to do this too. What do I need to change to get this to work?
You need to mark it as executable; Unix doesn't execute things arbitrarily based on extension.
chmod +x testscript.csh
Also, I strongly recommend using sh or bash instead of csh, or you will soon learn about the idiosyncrasies of csh's looping and control-flow constructs (some things only work inside them if done a particular way; the single-line versions in particular are very limited).
You can use ./testscript.csh. You will however need to make it executable first:
chmod u+x testscript.csh
Which means: set testscript to have execute permission for the user (whoever owns the file - which in this case should be yourself!)
Also to tell the OS that this is a csh script you will need put
#! /path/to/csh
on the first line (where /path/to/csh is the full path to csh on your system. You can find that out by issuing the command which csh).
That should give you the behaviour you want.
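Putting the two steps together, a sample session might look like this (the csh path is just an example; use whatever which csh prints on your system):

$ which csh
/bin/csh
$ cat testscript.csh
#!/bin/csh
echo "hello world"
$ chmod u+x testscript.csh
$ ./testscript.csh
hello world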
EDIT As discussed in some of the comments, you may want to choose an alternative shell to C Shell (csh). It is not the friendliest one for scripting.
You have several options.
You can run the script from within your current shell. If you're running csh or tcsh, the syntax is source testscript.csh. If you're running sh, bash, ksh, etc., the syntax is . ./testscript.sh. Note that I've changed the file name suffix; source or . runs the commands in the named file in your current shell. If you have any shell-specific syntax, this won't work unless your interactive shell matches the one used by the script. If the script is very simple (just a sequence of simple commands), that might not matter.
You can make the script an executable program. (I'm going to repeat some of what others have already written.) Add a "shebang" as the first line. For a csh script, use #!/bin/csh -f. The -f avoids running commands in your own personal startup scripts (.cshrc et al), which saves time and makes it more likely that others will be able to use it. Or, for a sh script (recommended), use #!/bin/sh (no -f; it has a completely different meaning there). In either case, run chmod +x the_script, then ./the_script.
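Circling back to the original goal (running a program on each file in a list), such an executable script might look like this sketch, where some_program and filelist.txt are placeholder names:

#!/bin/sh
# Run a program on every file named in a list, one per line.
while IFS= read -r file; do
    some_program "$file"
done < filelist.txt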
There's a trick I often use when I want to perform some moderately complex action. Say I want to delete some, but not all, files in the current directory, but the criterion can't be expressed conveniently in a single command. I might run ls > tmp.sh, then edit tmp.sh with my favorite editor (mine happens to be vim). Then I go through the list of files and delete all the ones that I want to leave alone. Once I've done that, I can replace each file name with a command to remove it; in vim, :%s/.*/rm -f &/. I add a #!/bin/sh at the top, save it, chmod +x tmp.sh, then ./tmp.sh. (If some of the file names might have special characters, I can use :%s/.*/rm -f '&'/.)

How to get bash built in commands using Perl

I was wondering if there is a way to run Linux commands from a Perl script. I am talking about commands such as cd, ls, ll, clear, cp.
You can execute system commands in a variety of ways, some better than others.
Using system();, which prints the output of the command, but does not return the output to the Perl script.
Using backticks (``), which don't print anything, but return the output to the Perl script. An alternative to using actual backticks is to use the qx(); function, which is easier to read and accomplishes the same thing.
Using exec();, which does the same thing as system();, but does not return to the Perl script at all, unless the command doesn't exist or fails.
Using open();, which allows you to either pipe input from your script to the command, or read the output of the command into your script.
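For example, a minimal sketch of the open() form, reading a command's output line by line:

# The list form of open with '-|' runs the command without a shell.
open(my $fh, '-|', 'ls', '-l') or die "Can't run ls: $!";
while (my $line = <$fh>) {
    print "got: $line";
}
close($fh);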
It's important to mention that the system commands that you listed, like cp and ls are much better done using built-in functions in Perl itself. Any system call is a slow process, so use native functions when the desired result is something simple, like copying a file.
Some examples:
# Prints the output. Don't do this.
system("ls");
# Saves the output to a variable. Don't do this.
my $lsResults = `ls`;
# Something like this is more useful.
system("imgcvt", "-f", "sgi", "-t", "tiff", "Image.sgi", "NewImage.tiff");
This page explains in a bit more detail the different ways that you can make system calls.
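As noted above, tasks like copying files or listing a directory are better done with Perl's own facilities. A minimal sketch (the file names are placeholders echoing the example above; File::Copy is a core module):

use File::Copy qw(copy);

# Copy a file without spawning a shell.
copy('Image.sgi', 'NewImage.sgi') or die "Copy failed: $!";

# List the current directory instead of shelling out to ls.
opendir(my $dh, '.') or die "Cannot open directory: $!";
my @entries = grep { $_ ne '.' && $_ ne '..' } readdir($dh);
closedir($dh);
print "$_\n" for @entries;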
You can, as voithos says, use either system() or backticks. However, take into account that this is not recommended and that, for instance, cd won't work (it won't actually change the directory). Note that those commands are executed in a new shell, and won't affect the running Perl script.
I would not rely on those commands; instead, try to implement your script in Perl (if you've decided to use Perl, anyway). In fact, Perl was originally designed to be a powerful substitute for sh and other UNIX shells for sysadmins.
You can surround the command in backticks:
`command`
The problem is that Perl is trying to execute the bash built-ins (e.g. source) as if they were real files, and it can't find them because they don't exist as executables. The answer is to tell Perl what to execute explicitly. In the case of bash built-ins like source, do the following and it works just fine.
my $XYZZY=`bash -c "source SOME-FILE; DO_SOMETHING_ELSE; ..."`;
or, for the case of cd, do something like the following.
my $LOCATION=`bash -c "cd /etc/init.d; pwd"`;

Resources