why do some "if" tests not work in my cron scripts - linux

I have been "toying" around with this for some time now; most issues I just worked around, but now I need this solved.
Why do basically all of my cron jobs not work with "if" tests?
Let's take this one:
if [ "$line" == "downloads.php" ]
works absolutely fine when I run it in the shell, but when I start it as a cron job it just never works. The workaround
if echo "$line" | grep -q "downloads.php"
works both ways. Why is that? For the first one, the [ ] basically stands for "test", and the second one is just a grep. But why are the "tests" not working in my cron jobs (with or without redirecting output to /dev/null)?
I currently need this one in a cron job, and now I don't know how to work around it; basically, I just finally want to understand what I am doing wrong and how to solve it.
while [[ "${ofile: -1}" != "_" ]]
this one just produces the error "87: Bad substitution"; the intent is to loop until the first character is "_".
I have managed to overcome all the usual issues with cron, from full paths to environment, but this one is still a puzzle for me. Any help is appreciated.

It sounds like you are running the script with bash, while cron is running it under some other shell; thus, all of the bash extensions you're using are failing. Make sure the shebang on your script (i.e. the first line) requests bash (#!/bin/bash), and that the cron entry runs it directly rather than specifying a shell (e.g. 0 0 * * * /path/to/script NOT 0 0 * * * /bin/sh /path/to/script).
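For example, a minimal sketch (the path is hypothetical):
$ head -1 /path/to/script
#!/bin/bash
$ chmod +x /path/to/script
$ crontab -l
0 0 * * * /path/to/script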
EDIT: there are several different ways of controlling which shell will be used to interpret a script, with a definite precedence order:
If you run the script with an explicit shell (e.g. /bin/sh /path/to/script), the shell you told to run it will be used, and any shebang will be ignored.
If you run the script directly (e.g. /path/to/script or ./script, or place it in your PATH and run it as script), the system will use the shebang to determine which shell (or other interpreter) to run it with.
If you run it directly and there's no shebang, the program you're running it from (e.g. bash, sh, or crond) might choose to do something else. In this situation, bash will run the script with bash. I'm not sure what crond will do, and it might even depend on which version it is.
In general, using a proper shebang and running the script directly is the best way to go; the script should "know" what the proper interpreter is, and that should be respected. For example, if you write a script in portable shell code (with a #!/bin/sh shebang), and then later need to use some bash-only features, you can simply change the shebang and not have to track down all the places it's run from.
Specifying the shell explicitly should be reserved for cases where the shebang is wrong or missing (in which case the better solution is to fix the script), or you don't have execute permission (again, fix the script). The third option is an unreliable fallback, and should be avoided whenever possible.
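To see the precedence in action, here is a small demo (the script name is made up); under plain sh, $BASH_VERSION is empty unless sh is actually bash:
$ printf '#!/bin/bash\necho "BASH_VERSION=$BASH_VERSION"\n' > demo.sh
$ chmod +x demo.sh
$ ./demo.sh     # shebang respected: runs under bash, prints a version number
$ sh demo.sh    # explicit shell wins: the shebang is just a comment, nothing is printed after the =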
P.S. If I'm reading your last comment (above) correctly, you want to test whether $line contains "downloads.php", not whether it equals it; but the [ x == y ] comparison tests for equality, not containment. To test for containment, use the bash-only [[ string == pattern ]] form:
if [[ "$line" == *"downloads.php"* ]]
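A quick illustration (the sample value is made up):
line="GET /downloads.php?id=42"
[ "$line" == "downloads.php" ] && echo equal          # prints nothing: not equal
[[ "$line" == *"downloads.php"* ]] && echo contains   # prints "contains"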

Bash: Only allow script to run by being called from another script

We have two bash scripts to start up an application. The first (Start-App.sh) one sets up the environment and the second (startup.sh) is from a 3rd party that we are trying not to heavily edit. If someone runs the second script before the first the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions; you can use any one of them, or any combination of them.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z "$CALLED_FROM_START_APP" ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything, but I hope your engineering staff is mature enough not to do that.
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so the guarantee here is very small; but it might prevent auto-completion oopsies, and it marks the file as "not intended to be executed". If I see a directory with only one executable .sh file, I'll try to run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ "$(basename "$0")" = "Start-App.sh" ] || exit
Explanation
As with all the other solutions presented, it's not 100% bulletproof, but it covers the most common cases I've come across of a script accidentally being run directly rather than called from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ "$(basename "$0")" = "$(basename "$BASH_SOURCE")" ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or empty if no file e.g. piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
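To see the difference, consider a probe script (the file name is hypothetical) containing just:
echo "\$0=$0 BASH_SOURCE=$BASH_SOURCE"
Running it both ways from an interactive bash gives:
$ bash probe.sh      # executed: both point at the file
$0=probe.sh BASH_SOURCE=probe.sh
$ source probe.sh    # sourced: $0 is the shell itself (or -bash in a login shell)
$0=bash BASH_SOURCE=probe.sh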
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ "$(basename "$0")" = "Start-App.sh" ] || { echo "[ERROR] To start MyApplication please run ./Start-App.sh"; exit 1; }
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh, it should look like:
#!/bin/bash
You can set an environment variable in your first script and, before running the second script, check that the environment variable is set properly.
Another alternative is checking the parent process to find the calling script. This also requires adding some code to the second script.
For example, in the called script you can run the following check and terminate if it fails:
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
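A slightly fuller sketch of that idea (the Start-App.sh name comes from the question; adjust the pattern to your parent script):
# In startup.sh: abort unless the parent process's command line mentions Start-App.sh.
if ! ps -o args= -p "$PPID" | grep -q "Start-App.sh"; then
    echo "startup.sh must be launched from Start-App.sh" >&2
    exit 1
fi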
As others have pointed out, the short answer is "no"; you can play with permissions all day, but it still won't be bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) In the parent/first script, export an environment variable holding its PID. This becomes the known parent PID. For example:
# bash store parent pid
export FIRST_SCRIPT_PID=$$
2) Then, very briefly, in the second script, check whether the calling PID matches the known acceptable parent PID. For example:
# confirm calling pid
if [ "$PPID" != "$FIRST_SCRIPT_PID" ]; then
    exit 0
fi
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set, containing:
#! /bin/bash
source Start-App.sh
exec /bin/bash "$@"
and replace the shebang on startup.sh (originally #! /bin/bash) with that script:
#! /abs/path/to/check-if-my-env-set
...
Then, every time you run startup.sh, it will ensure the environment is set correctly.
To the best of my knowledge, there is no way to do this in a way that it would be impossible to get around it.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh executable by the owner only:
chmod u+x,go-x startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh

Dry-run a potentially dangerous script?

A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked, because I had to interpolate all the commands manually. In general, is there some sort of way I can "dry run" a script to have it dump out all the commands it would execute, without actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated and any command invoked - whether built in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis will stop at the first error, an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (env. var. PS4 allows tweaking the output format).
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v will effectively simply print the source code.
If there is a syntax error, there may be benefit in seeing the source code printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
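For illustration, assuming the uninstall script is saved as uninstall.sh:
bash -n uninstall.sh                          # parse only; silent if the syntax is fine
bash -x uninstall.sh                          # trace each expanded command (note: this still executes!)
PS4='+ line $LINENO: ' bash -x uninstall.sh   # same trace, prefixed with line numbers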
You could try running the script under Kornshell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that will be executed.
You can also use set -n for the same effect. Kornshell and BASH are fairly compatible with each other. If it's a pure Bourne shell script, both Kornshell and BASH will execute it pretty much the same.
You can also run ksh -u, which will cause unset shell variables to make the script fail. However, that wouldn't have caught this bug: the cat of a nonexistent file still succeeds in setting the shell variable; it just sets it to null.
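A guard based on parameter expansion does catch the set-but-empty case; a sketch reusing the question's variable:
SWROOT=$(cat /etc/vendor/path.conf)       # file missing: SWROOT is set, but empty
rm -rf "${SWROOT:?SWROOT is empty}/bin"   # :? aborts with an error instead of expanding to /bin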
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's the best you can probably do.

Determine interpreter from inside script

I have a script; it needs to use bash's associative arrays (trust me on that one).
It needs to run on normal machines, as well as a certain additional machine that has /bin/bash 3.2.
It works fine if I declare the interpreter to be /opt/userwriteablefolder/bin/bash4, the location of the bash 4.2 that I put there, but then it only works on that machine.
I would like to have a test at the beginning of my script that checks what the interpreting shell is and, if it's bash 3.2, calls bash4 "$0" "$@". The problem is that I can't figure out any way to determine what the interpreting shell is. I would really rather not make a $HOSTNAME-based decision, but that will work if necessary (it's also awkward, because it needs to pass a "we've done this already" flag).
For a couple reasons, "Just have two scripts" is not a good solution.
Note that $SHELL will not tell you which interpreter is running your script: it holds your preferred login shell, not the interpreter currently executing the code. Instead, test the variables that only Bash sets.
Then, if it is Bash, you can check the Bash version in various ways:
${BASH_VERSINFO[*]} -- an array of version components, e.g. (4 1 5 1 release x86_64-pc-linux-gnu)
${BASH_VERSION} -- a string version, e.g. 4.1.5(1)-release
And of course, "$0" --version
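Putting that together, a sketch of the re-exec the asker described (the bash4 path comes from the question; using exec avoids the need for a "we've done this already" flag):
#!/bin/bash
# If this is a bash older than 4, replace the current process with the newer bash.
if [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
    exec /opt/userwriteablefolder/bin/bash4 "$0" "$@"
fi
declare -A my_array   # associative arrays (bash 4+) are now safe to use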
This could be an option, depending on how you launch the script:
1) Install bash 4.2 as /opt/userwriteablefolder/bin/bash.
2) Use #!/usr/bin/env bash as the shebang in your script.
3) Add /opt/userwriteablefolder/bin to the front of PATH in the environment from which your script is called, so that the bash there will be used if present, and the regular bash otherwise.
The benefit would be to avoid having to detect the version of bash at runtime, but I realize your setup may not make step 3 desirable.
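A usage sketch of step 3 (the script name is made up):
export PATH=/opt/userwriteablefolder/bin:$PATH
./myscript.sh    # env now resolves "bash" to the 4.2 build first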

In my command-line, why does echo $0 return "-"?

When I type echo $0 I see -
I expect to see bash or some filename; what does it mean if I just get a "-"?
A hyphen in front of $0 means that this program is a login shell.
Note: $0 does not always contain the accurate path to the running executable, as there is a way to override it when calling execve(2).
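For example, from an assumed bash login session:
$ echo $0
-bash
$ bash      # start a nested, non-login shell
$ echo $0
bash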
I get '-bash'. A few weeks ago I played with modifying the process name visible when you run ps or top/htop or echo $0. To answer your question directly: I don't think it means anything. echo is a built-in function of bash, so when it checks the argument list, bash is actually doing the checking, and sees itself there.
Your intuition is correct, if you wrote echo $0 in a script file, and ran that, you would see the script's filename.
So, based on one of your comments, you really want to know how to determine what shell you're running; you assumed $0 was the solution and asked about that, but as you've seen, $0 won't reliably tell you what you need to know.
If you're running bash, then several unexported variables will be set, including $BASH_VERSION. If you're running tcsh, then the shell variables $tcsh and $version will be set. (Note that $version is an excessively generic name; I've run into problems where some system-wide startup script sets it and clobbers the tcsh-specific variable. But $tcsh should be reliable.)
The real problem, though, is that bash and tcsh syntax are mostly incompatible. It might be possible to write a script that can execute when invoked (via . or source) from either tcsh or bash, but it would be difficult and ugly.
The usual approach is to have separate setup files, one for each shell you use. For example, if you're running bash you might run
. ~/setup.bash
or
. ~/setup.sh
and if you're running tcsh you might run
source ~/setup.tcsh
or
source ~/setup.csh
The .sh or .csh versions refer to the ancestors of both shells; it makes sense to use those suffixes if you're not using any bash-specific or tcsh-specific features.
But that requires knowing which shell you're running.
You could probably set up an alias in your .cshrc, .tcshrc, or .login, and an alias or function in your .profile, .bash_profile, or .bashrc, that will invoke whichever script you need.
Or if you want to do the setup every time you login, or every time you start a new interactive shell, you can put the commands directly in the appropriate shell startup file(s). Of course the commands will be different for tcsh vs. bash.
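A concrete sketch of that last suggestion (the setup file names come from the examples above):
# In ~/.bashrc - read only by bash:
. ~/setup.bash

# In ~/.tcshrc - read only by tcsh:
source ~/setup.tcsh
Since each startup file is read only by its own shell, no runtime detection is needed.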

Edit shell script while it's running

Can you edit a shell script while it's running and have the changes affect the running script?
I'm curious about the specific case of a csh script I have that batch runs a bunch of different build flavors and runs all night. If something occurs to me mid operation, I'd like to go in and add additional commands, or comment out un-executed ones.
If not possible, is there any shell or batch-mechanism that would allow me to do this?
Of course I've tried it, but it will be hours before I see if it worked or not, and I'm curious about what's happening or not happening behind the scenes.
It does have an effect, at least for bash in my environment, but in a very unpleasant way. Consider these two scripts. First, a.sh:
#!/bin/sh
echo "First echo"
read y
echo "$y"
echo "That's all."
b.sh:
#!/bin/sh
echo "First echo"
read y
echo "Inserted"
echo "$y"
# echo "That's all."
Do
$ cp a.sh run.sh
$ ./run.sh
$ # open another terminal
$ cp b.sh run.sh # while 'read' is in effect
$ # Then type "hello."
In my case, the output is always:
hello
hello
That's all.
That's all.
(Of course it's far better to automate it, but the above example is readable.)
[edit] This is unpredictable, and thus dangerous. The best workaround is, as described here, to put everything inside braces, and to put "exit" just before the closing brace. Read the linked answer well to avoid pitfalls.
[added] The exact behavior depends on one extra newline, and perhaps also on your Unix flavor, filesystem, etc. If you want to see some of the influences, just add "echo foo/bar" to b.sh before and/or after the "read" line.
Try this... create a file called bash-is-odd.sh:
#!/bin/bash
echo "echo yes i do odd things" >> bash-is-odd.sh
That demonstrates that bash is, indeed, interpreting the script "as you go". Editing a long-running script has unpredictable results: random characters get inserted, and so on. Why? Because bash tracks its byte position in the file, and editing shifts the location of the current character being read.
Bash is, in a word, very, very unsafe because of this "feature". svn and rsync when used with bash scripts are particularly troubling, because by default they "merge" the results... editing in place. rsync has a mode that fixes this. svn and git do not.
I present a solution. Create a file called /bin/bashx:
#!/bin/bash
source "$1"
Now use #!/bin/bashx on your scripts and always run them with bashx instead of bash. This fixes the issue - you can safely rsync your scripts.
An alternative (in-line) solution, proposed and tested by @AF7:
{
    # your script
    exit $?
}
Curly braces protect against edits, and exit protects against appends. Of course, we'd all be much better off if bash came with an option, like -w (whole file), or something that did this.
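A whole-file sketch of that in-line pattern:
#!/bin/bash
{
    echo "long-running work..."
    sleep 3600
    echo "done"
    exit $?
}   # the block is parsed as a unit, and the exit keeps text appended later from ever running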
Break your script into functions, and each time a function is called you source it from a separate file. Then you could edit the files at any time and your running script will pick up the changes next time it gets sourced.
foo() {
    source foo.sh
}
foo
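A slightly fuller sketch of that pattern (the file names are hypothetical):
#!/bin/bash
# main.sh: each step lives in its own file and is re-read on every call.
step() { source ./step.sh; }

for i in 1 2 3; do
    step       # picks up any edits made to step.sh between iterations
    sleep 60
done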
Good question!
Hope this simple script helps
#!/bin/sh
echo "Waiting..."
echo "echo \"Success! Edits to a .sh while it executes do affect the executing script! I added this line to myself during execution\" " >> ${0}
sleep 5
echo "When I was run, this was the last line"
It does seem, under Linux, that changes made to an executing .sh are enacted by the executing script, if you can type fast enough!
An interesting side note - if you are running a Python script it does not change. (This is probably blatantly obvious to anyone who understands how shell runs Python scripts, but thought it might be a useful reminder for someone looking for this functionality.)
I created:
#!/usr/bin/env python3
import time
print('Starts')
time.sleep(10)
print('Finishes unchanged')
Then, in another shell, while this is sleeping, edit the last line. When the script completes it displays the unaltered line, presumably because it is running from a .pyc? The same happens on Ubuntu and macOS.
I don't have csh installed, but
#!/bin/sh
echo Waiting...
sleep 60
echo Change didn't happen
Run that, quickly edit the last line to read
echo Change happened
Output is
Waiting...
/home/dave/tmp/change.sh: 4: Syntax error: Unterminated quoted string
Hrmph.
I guess edits to the shell scripts don't take effect until they're rerun.
If this is all in a single script, then no it will not work. However, if you set it up as a driver script calling sub-scripts, then you might be able to change a sub-script before it's called, or before it's called again if you're looping, and in that case I believe those changes would be reflected in the execution.
I'm hearing no... but what about with some indirection:
BatchRunner.sh:
    Command1.sh
    Command2.sh

Command1.sh:
    runSomething

Command2.sh:
    runSomethingElse
Then you should be able to edit the contents of each command file before BatchRunner gets to it, right?
OR
A cleaner version would have BatchRunner read from a single file, consecutively running it one line at a time. Then you should be able to edit this second file while the first is running, right?
Use Zsh instead for your scripting.
AFAICT, Zsh does not exhibit this frustrating behavior.
Usually, it's uncommon to edit your script while it's running. All you have to do is put in control checks for your operations: use if/else statements to check for conditions. If something fails, then do this; else, do that. That's the way to go.
Scripts don't work that way; the executing copy is independent from the source file that you are editing. Next time the script is run, it will be based on the most recently saved version of the source file.
It might be wise to break this script out into multiple files and run them individually. This will reduce the time it takes to reach the failure (i.e., split the batch into one script per build flavor, and run each one individually to see which is causing the trouble).
