A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked because I had to interpolate all the variables manually. In general, is there some way I can "dry run" a script to have it dump out all the commands it would execute, without actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated, and any command invoked - whether built-in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis will stop at the first error, an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
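For example (a sketch using a throwaway script with a missing fi; the exact error text and line number vary by shell and version):
$ printf '#!/bin/bash\nif true; then\n  echo hi\n' > bad.sh
$ bash -n bad.sh
bad.sh: line 4: syntax error: unexpected end of file
$ echo $?
2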
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (env. var. PS4 allows tweaking the output format).
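For instance, a small sketch of tweaking PS4 so each traced command is prefixed with its line number (someScript as above; the single quotes keep $LINENO from expanding too early):
PS4='+ line $LINENO: ' bash -x someScript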
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v effectively just prints the source code.
If there is a syntax error, there may be some benefit in having the source code printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
You could try running the script under Kornshell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that will be executed.
You can also use set -n for the same effect. Kornshell and BASH are fairly compatible with each other. If it's a pure Bourne shell script, both Kornshell and BASH will execute it pretty much the same.
You can also run ksh -u, which will cause unset shell variables to make the script fail. However, that wouldn't have caught the cat of a nonexistent file here. In that case, the shell variable was set; it was just set to the empty string.
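As an aside, a couple of defensive lines would have prevented this failure mode entirely; a minimal sketch (an illustration, not the vendor's code):
#!/bin/bash
set -eu                                  # abort on errors and on unset variables
SWROOT=$(cat /etc/vendor/path.conf)      # with set -e, a missing file aborts here
rm -rf "${SWROOT:?SWROOT is empty}/bin"  # :? aborts if SWROOT is unset or empty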
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's the best you can probably do.
Related
In the middle of my perl script I want to execute a bash command. The script takes a long time, so at the beginning of the script I want to see if the command exists. This answer says to just try and run it and this other answer suggests some bash commands to test if the program exists.
Is the latter option the best solution? Are there any better ways to do this check in perl?
My best guess is that you want to check for the existence of an executable file that you want to run using system or qx//. But if you want your command line to behave the same way as the shell, then you can probably use File::Which.
What if we assume that we don't know the command's location?
This means that syck's answer won't work, and zdim's answer is incomplete.
Try this function in perl:
sub check_exists_command {
    # `command -v` prints the resolved name and exits non-zero when the
    # command is unknown, so $check is a non-empty (true) string on success
    # and an empty (false) string otherwise.
    my $check = `sh -c 'command -v $_[0]'`;
    return $check;
}
# two examples
check_exists_command 'pgrep' or die "$0 requires pgrep";
check_exists_command 'readlink' or die "$0 requires readlink";
I just tested it, because I just wrote it.
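To see what those backticks capture, you can run the same check directly in a shell (the printed path varies by system, and the nonzero exit code varies by shell):
$ sh -c 'command -v pgrep'
/usr/bin/pgrep
$ sh -c 'command -v no-such-cmd'; echo "exit status: $?"
exit status: 1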
With Perl, you can test files for existence, readability, executability, etc.; take a look here.
Therefore just use
executeBashStuff() if -x $filename;
or stat it:
stat($filename);
executeBashStuff() if -x _;
To me, a better check is to run the program at the beginning of the script (with -V, say).
I'd use the same invocation as you use to run the job later (via shell or not, via execvp). While at it, make sure to check whether it threw errors. This is also discussed in your link, but I would in fact get the output back (not send it away) and check that. This is the surest way to see whether the thing actually runs from your program and whether it is what you expect it to be.
Checking for the executable with -x (if you know the path) is useful, too, but it only tells you that a file with a given name is there and that it is executable.
The system's which seems to be beset with criticism for its possible (mis)behavior: it may or may not be a shell builtin (which complicates how exactly to use it), it is an external utility, and its exact behavior is system dependent. The module File::Which pointed out in Borodin's answer would be better, if it is indeed better than which (which it may well be; I just don't know).
Note: I am not sure what "bash command" means here: a bash shell built-in, or simply the fact that you use bash at the terminal? Perl's qx and system use the sh shell, not bash (when they invoke a shell at all, which depends on how you use them). While sh is mostly a link, and often to bash, it may not be, there are differences, and you cannot rely on your shell configuration.
You can also run a shell explicitly, qx(/path/bash -c 'cmd args'), if you must. Mind the quotes. You may need to play with it to find the exact syntax on your system. See this page and links.
I infrequently have to write bash scripts for various unrelated purposes and while I usually have a good idea what commands I want in the script, I often have no idea what header to use or why I'm using one when I do find it. For example(s):
Standard shell script:
#!/bin/bash
Python:
#!/usr/bin/env python
Scripts seem to work fine without headers, but if headers are the standard, there's a reason for them and they shouldn't be ignored. If they have an effect, then they're a valuable tool that could be used to accomplish more.
Minimally, I'd like to know what headers to use with MySQL scripts and what the headers do on Standard, Python, and MySQL scripts. Ideally, I'd like a generic list of headers or an understanding of how to create a header based on what program is being used.
How the Kernel Executes Things
Simplified (a bit), there are two ways the kernel in a POSIX system knows how to execute a program. One, if the program is in a binary format the kernel understands (such as ELF), the kernel can execute it "directly" (more detail out of scope). If the program is a text file starting with a shebang, such as
#!/usr/bin/somebinary -arg
or what-have-you, the kernel actually executes the command as if it had been directed to execute:
/usr/bin/somebinary -arg "$0"
where $0 here is the name of the script file you just tried to execute. (So you can immediately tell why so many scripting languages use # as a comment-starter – it means they don't have to treat the shebang as special.)
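You can watch this mechanism work with a throwaway file that names cat as its "interpreter" (demo is just a scratch file name for this sketch):
$ printf '#!/bin/cat\nhello from the script body\n' > demo
$ chmod +x demo
$ ./demo
#!/bin/cat
hello from the script body
The kernel ran /bin/cat demo, and cat simply printed the file back, shebang line included.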
PATH and the env command
The kernel does not look at the PATH environment variable to determine which executable you're talking about, so if you are distributing a python script to systems that may have multiple versions of python installed, you can't guarantee that there will be a
#!/usr/bin/python
env, however, is POSIX, so you can count on it existing, and it will look up python in PATH. Thus,
#!/usr/bin/env python
will execute the script with the first python found in your PATH.
BASH, SH and Special Meanings for Invocation
Some programs have special semantics for how they're invoked. In particular, on many systems /bin/sh is a symlink to another shell, such as /bin/bash. While bash does not contain a perfectly POSIXLY_STRICT implementation of sh, when it is invoked as /bin/sh it is stricter than it would be if invoked as plain-old-bash.
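You can check what /bin/sh is on your own machine (the target varies: dash is common on Debian-family systems, bash elsewhere):
$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 ... /bin/sh -> dash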
MySQL and arg limitations
The shebang line can be length-limited, and technically it can only support one argument, so mysql is a bit tricky – you can't expect to pass a username and database name to a mysql script.
#!/usr/bin/env mysql
use mydb;
select * from mytbl;
will fail because the kernel will try mysql "$0". Even if you have your credentials in a .my.cnf file, mysql itself will try to treat "$0" as a database name. Likewise:
#!/usr/bin/mysql -e
use mydb;
select * from mytbl;
will fail because again, "$0" is not a table name (you hope).
There does not seem to be an appropriate syntax for directly executing a mysql script this way. Your best bet is to pipe the SQL commands to mysql directly:
mysql < my_sql_commands
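If you want something that still feels like a runnable script, a small shell wrapper does the job; a sketch (mydb and mytbl as above, with credentials assumed to live in ~/.my.cnf):
#!/bin/sh
# Feed the SQL to mysql on stdin instead of relying on the shebang.
mysql <<'EOF'
use mydb;
select * from mytbl;
EOF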
http://mywiki.wooledge.org/BashGuide/Practices#Choose_Your_Shell
When the first line of a script starts with #!, that's what's called a "shebang". When that script is run as an executable, the operating system uses that line to determine how to run the script -- that is to say, to find the program with which the script should be executed.
It's incorrect that "scripts work fine without headers": if you don't have a shebang line, your script can't be invoked using the execve() call, which means that many (most?) programs won't be able to execute it. Sometimes invocation from a shell will try to use that shell itself in the absence of a shebang, but you can't trust that to be the case.
(There's an exception to that -- if someone starts your script by running sh yourscript or bash yourscript, the shebang line isn't read at all, and the script they chose is used; however, running scripts this way is a bad practice, as the author typically knows better than the user what the correct interpreter is).
In short:
If you want to use modern features, and you want the user to be able to override the shell version in use by putting a different release of bash earlier in their path, use #!/usr/bin/env bash
If you want to use modern features and ensure that you always run with the system shell, use #!/bin/bash
If you're going to write your script to strictly conform with POSIX sh, use #!/bin/sh
There's not a limited list of shebang lines we can give you, since any native executable (non-script program) can be used as a script interpreter, and thus be placed in a shebang. If you created a file called myscript with #!/usr/bin/env yourprogram, gave it executable permissions, and ran ./myscript foo bar, this would result in /usr/bin/env yourprogram myscript foo bar being invoked; yourprogram would be run by /usr/bin/env (after a PATH lookup), and would be responsible for knowing what to do with myscript and its arguments.
For an extremely detailed history of shebang lines and how they work across systems both modern and ancient, see http://www.in-ulm.de/~mascheck/various/shebang/
I have a simple script cmakeclean to clean cmake temp files:
#!/bin/bash -f
rm CMakeCache.txt
rm *.cmake
which I call like
$ cmakeclean
And it does remove CMakeCache.txt, but it doesn't remove cmake_install.cmake:
rm: *.cmake: No such file or directory
When I run it like:
$ . cmakeclean
it does remove both.
What is the difference, and can I make this script work like a usual Linux command (without the . in front)?
P.S.
I am sure the same script is executed both times. To check this, I added an echo meme line to the script and reran it both ways.
Remove the -f from your #!/bin/bash -f line.
-f prevents pathname expansion, which means that *.cmake will not match anything. When you run your script as a script, it interprets the shebang line, and in effect runs /bin/bash -f scriptname. When you run it as . scriptname, the shebang is just seen as a comment line and ignored, so the fact that you do not have -f set in your current environment allows it to work as expected.
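You can see the effect of -f in isolation (run in a directory that contains cmake_install.cmake):
$ bash -c 'echo *.cmake'
cmake_install.cmake
$ bash -f -c 'echo *.cmake'
*.cmake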
. script is short for source script, which means the current shell executes the commands in the script. If there's an exit in there, the current shell will exit (and e.g. the terminal window will close).
This is typically used to modify the environment of the current shell (set variables etc.).
script asks the shell to fork itself, exec the given script in the child process, and then wait in the parent for the child to terminate. If there's an exit in the script, it will be executed by the child shell and thus only terminate that. The parent shell stays intact and unaltered by this call.
This is typically used to start other programs from the current shell.
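A quick way to see the difference (setvar.sh is a scratch file for this demo, run in a fresh shell):
$ echo 'MYVAR=hello' > setvar.sh
$ bash setvar.sh; echo "after executing: '$MYVAR'"
after executing: ''
$ . setvar.sh; echo "after sourcing: '$MYVAR'"
after sourcing: 'hello'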
Is this about ClearCase? What did you do in your poor life to end up assigned to work in the deepest bowels of hell?
For years, I was a senior ClearCase Administrator. I haven't touched it in over a decade. My life is way better now. The sky is bluer, bird songs are more melodious, and my dread of coming to work every day is now a bit less.
Getting back to your issue: It's hard to say exactly what's going on. ClearCase does some wacky things. In a dynamic view, the ClearCase repository on Unix systems is hidden in the shell's environment. Now you see it, now you don't.
When you run a shell script, it starts up a new environment. If a particular shell variable is not exported, it is invisible to that shell script. When you merely run cmakeclean from the command line, you are spawning a new shell, one that does not contain your ClearCase environment.
When you run a shell script with a dot prefix like . cmakeclean, you are running that shell script in the current shell which contains your ClearCase environment. Thus, it can see your ClearCase view.
If you're using a snapshot view, it is possible that you have a $HOME/.bashrc that's changing directories on you. When a new shell environment runs in BASH (the default shell in MacOS X and Linux), it first runs $HOME/.bashrc. If this sets a particular directory, then you end up in that directory and not in the directory where you ran your shell script. I used to see this when I too was involved in ClearCase hell. People set up their .kshrc scripts (it was the days before BASH, and most people used Kornshell) to set up their views. Unfortunately, this made running any other shell script almost impossible to do.
I am trying to create a script that will run a program on each file in a list. I have been trying to do this using a .csh file (I have no clue if this is the best way), and I started with something as simple as hello world
echo "hello world"
The problem is that I cannot execute this script, or verify that it works correctly. (I was trying to do ./testscript.csh, which is obviously wrong.) I haven't been able to find anything that really explains how to run csh scripts, and I'm guessing there's a better way to do this too. What do I need to change to get this to work?
You need to mark it as executable; Unix doesn't execute things arbitrarily based on extension.
chmod +x testscript.csh
Also, I strongly recommend using sh or bash instead of csh, or you will soon learn about the idiosyncrasies of csh's looping and control flow constructs (some things only work inside them if done a particular way; the single-line versions in particular are very limited).
You can use ./testscript.csh. You will however need to make it executable first:
chmod u+x testscript.csh
which means: set testscript to have execute permission for the user (whoever the file is owned by, which in this case should be yourself!).
Also, to tell the OS that this is a csh script, you will need to put
#! /path/to/csh
on the first line (where /path/to/csh is the full path to csh on your system. You can find that out by issuing the command which csh).
That should give you the behaviour you want.
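Putting it all together, a session might look like this (assuming which csh reported /bin/csh; your path may differ):
$ which csh
/bin/csh
$ cat testscript.csh
#!/bin/csh
echo "hello world"
$ chmod u+x testscript.csh
$ ./testscript.csh
hello world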
EDIT As discussed in some of the comments, you may want to choose an alternative shell to C Shell (csh). It is not the friendliest one for scripting.
You have several options.
You can run the script from within your current shell. If you're running csh or tcsh, the syntax is source testscript.csh. If you're running sh, bash, ksh, etc., the syntax is . ./testscript.sh. Note that I've changed the file name suffix; source or . runs the commands in the named file in your current shell. If you have any shell-specific syntax, this won't work unless your interactive shell matches the one used by the script. If the script is very simple (just a sequence of simple commands), that might not matter.
You can make the script an executable program. (I'm going to repeat some of what others have already written.) Add a "shebang" as the first line. For a csh script, use #!/bin/csh -f. The -f avoids running commands in your own personal startup scripts (.cshrc et al.), which saves time and makes it more likely that others will be able to use it. Or, for a sh script (recommended), use #!/bin/sh (no -f; it has a completely different meaning there). In either case, run chmod +x the_script, then ./the_script.
There's a trick I often use when I want to perform some moderately complex action. Say I want to delete some, but not all, files in the current directory, but the criterion can't be expressed conveniently in a single command. I might run ls > tmp.sh, then edit tmp.sh with my favorite editor (mine happens to be vim). Then I go through the list of files and delete all the ones that I want to leave alone. Once I've done that, I can replace each file name with a command to remove it; in vim, :%s/.*/rm -f &/. I add a #!/bin/sh at the top, save it, chmod +x tmp.sh, then ./tmp.sh. (If some of the file names might have special characters, I can use :%s/.*/rm -f '&'/.)
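Spelled out, the whole trick is just a few steps (file names are illustrative):
$ ls > tmp.sh
$ vim tmp.sh    # delete the lines naming files to KEEP, run :%s/.*/rm -f '&'/,
                # and add #!/bin/sh as the first line
$ chmod +x tmp.sh
$ ./tmp.sh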
I was wondering if there is a way to run Linux commands from a Perl script. I am talking about commands such as cd, ls, ll, clear, cp.
You can execute system commands in a variety of ways, some better than others.
Using system();, which lets the command's output go to your script's standard output, but does not return the output to the Perl script.
Using backticks (``), which don't print anything, but return the output to the Perl script. An alternative to using actual backticks is to use the qx(); function, which is easier to read and accomplishes the same thing.
Using exec();, which does the same thing as system();, but does not return to the Perl script at all, unless the command doesn't exist or fails.
Using open();, which allows you to either pipe input from your script to the command, or read the output of the command into your script.
It's important to mention that the system commands that you listed, like cp and ls are much better done using built-in functions in Perl itself. Any system call is a slow process, so use native functions when the desired result is something simple, like copying a file.
Some examples:
# Prints the output. Don't do this.
system("ls");
# Saves the output to a variable. Don't do this.
$lsResults = `ls`;
# Something like this is more useful.
system("imgcvt", "-f", "sgi", "-t", "tiff", "Image.sgi", "NewImage.tiff");
This page explains in a bit more detail the different ways that you can make system calls.
You can, as voithos says, use either system() or backticks. However, take into account that this is not recommended, and that, for instance, cd won't work (it won't actually change the directory). Note that those commands are executed in a new shell and won't affect the running Perl script.
I would not rely on those commands; try to implement your script in Perl instead (if you've decided to use Perl, anyway). In fact, Perl was designed from the start to be a powerful substitute for sh and other UNIX shells for sysadmins.
You can surround the command in backticks:
`command`
The problem is that perl is trying to execute the bash builtins (i.e. source, ...) as if they were real files, and it can't find them because they don't exist. The answer is to tell perl what to execute explicitly. In the case of bash builtins like source, do the following and it works just fine.
my $XYZZY=`bash -c "source SOME-FILE; DO_SOMETHING_ELSE; ..."`;
or, for the case of cd, do something like the following.
my $LOCATION=`bash -c "cd /etc/init.d; pwd"`;