This is a reduced example of a makefile which illustrates my problem:
exec:
	time (ls > ls.txt; echo $$? > code) 2> time.txt
make exec runs fine under one Linux installation:
Linux-2.6.32-642.4.2.el6.x86_64-x86_64-with-centos-6.8-Final
but it fails under my Ubuntu installation:
Linux-4.4.0-64-generic-x86_64-with-Ubuntu-16.04-xenial
and produces the message:
/bin/sh: 1: Syntax error: word unexpected (expecting ")")
There are no problems if I run the time command directly from the terminal.
Are there different versions of the command in different Linux installations? I need the version which allows a sequence of commands.
Unless told otherwise via the SHELL variable, make invokes /bin/sh to run each recipe line. On some systems, /bin/sh points to bash, which has a lot of extra extensions beyond the standard POSIX shell (sh). On other systems (like Ubuntu), /bin/sh points to dash, a smaller, simpler shell that stays much closer to plain POSIX.
Bash has a built-in time operation which accepts an entire pipeline and shows the time taken for it (run help time at a bash shell command prompt to see documentation). Other shells like dash don't have a built-in time, so when you run it you get the program /usr/bin/time; run man time to see documentation. As a separate program it of course cannot time an entire pipeline (because a pipeline is a feature of the shell); it can only time one individual command.
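A quick way to see which time you are getting in each shell (a sketch; the exact output wording can vary between versions, and it assumes dash and /usr/bin/time are installed):

$ bash -c 'type time'
time is a shell keyword
$ dash -c 'type time'
time is /usr/bin/time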
You have various options:
You can force your makefile to always use bash as its shell by adding:
SHELL := /bin/bash
to it. I recommend adding a comment there as well describing why bash specifically is needed.
Or you can modify your rule to work in a portable way by making the shell invocation explicit so that time only has one command to invoke:
exec:
	time /bin/sh -c 'ls > ls.txt; echo $$? > code' 2> time.txt
Put a semicolon in front of "time". As is, make is trying to parse your command as a list of dependencies.
The only suggestion that worked is to force bash in my makefile:
SHELL := /bin/bash
I checked: on my Ubuntu machine, /bin/sh is really /bin/dash whereas on the CentOS machine it is /bin/bash!
Thanks!
Related
I'm converting an app to a new image, and the existing commands use substring expansion to set the artifact version like so: mvn clean versions:set -DnewVersion="0.1.$VCSINFO.I${INFO:0:6}.M$OTHER_INFO". I'm using an Ubuntu image that defaults to /bin/sh, and I am unable to figure out how to either do something equivalent in Bourne shell, or switch shells to run the command. I know bash is installed because I can see it in /etc/shells.
I tried using RUN ['/bin/bash', '-c', '...'], but I can see it is just running that text through the default shell, like so: The command '/bin/sh -c ['/bin/bash', '-c',.... What is the best way to convert this functionality over to the new image?
You can run a bash command in two ways, even from sh: either by passing the string '/bin/bash /path/to/your/cmd' to the -c option of sh, or by setting the execute bit on cmd and making the first line of cmd a #!/bin/bash.
Hence, in your setting, I would try either RUN ['/bin/bash /path/to/your/cmd'], or just RUN ['/path/to/your/cmd'] while ensuring that cmd has the #! line mentioned above, or - complicated but fail-safe - write an sh wrapper script which then invokes the bash script in turn. If this wrapper script is called /path/to/your/cmdwrapper.sh, its content would be
:
/bin/bash /path/to/your/cmd
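For reference, here is a minimal Dockerfile sketch of both routes (the paths are carried over from the answer above and the mvn command from the question; it assumes VCSINFO, INFO, and OTHER_INFO are available as build-time environment variables). Note that Docker's exec form is JSON and therefore requires double quotes; the single-quoted RUN ['/bin/bash', ...] is not valid JSON, which is likely why Docker fell back to the /bin/sh -c shell form:

# Shell form: Docker wraps the line in /bin/sh -c, so invoke bash explicitly.
RUN /bin/bash /path/to/your/cmd

# Exec form: must be valid JSON (double quotes); bash performs the ${INFO:0:6} expansion.
RUN ["/bin/bash", "-c", "mvn clean versions:set -DnewVersion=0.1.$VCSINFO.I${INFO:0:6}.M$OTHER_INFO"]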
I have been working on a shell script to automate some tasks. What is the best way to make sure the shell script runs without issues on most platforms? For example, I have been using the echo -n command to print messages to the screen without a trailing newline, and the -n switch doesn't work in some ksh shells. I was told the script must be POSIX-compliant. How do I make sure that a script is POSIX-compliant? Is there a tool? Or is there a shell that supports only the bare minimum POSIX requirements?
POSIX
A first step, which gives you indications of what works or not and why, is to set the shebang to /bin/sh and use the ShellCheck site to analyze your script.
For example, paste this script in the shellcheck editor window:
#!/bin/sh
read -r a b <<<"$1"
echo $((a+b))
to get an indication that: "In POSIX sh, here-strings are undefined".
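For comparison, a POSIX-compliant rewrite of the same snippet avoids that warning by replacing the here-string with a here-document:

#!/bin/sh
# Read the two words of $1 via a here-document (POSIX) instead of a here-string (bash/zsh).
read -r a b <<EOF
$1
EOF
echo $((a+b))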
As a second step, you can use a shell that is as compatible with POSIX as possible.
One shell that is compatible with most other simple shells is dash, Debian's default system shell, which is a derivative of the older BSD ash.
Another POSIX-compatible shell is posh.
However, dash and/or posh may not be available on some systems.
There is also lksh (a ksh flavor), whose goal is to be compatible with legacy (old) shell scripts. From its manual:
lksh is a command interpreter intended exclusively for running legacy shell scripts.
But lksh needs to be invoked with options, like -o posix and -o sh:
Note that it's strongly recommended to invoke lksh with at least the -o posix option, if not both that and -o sh, to fully enjoy better compatibility to the POSIX standard (which is probably why you use lksh over mksh in the first place) or legacy scripts, respectively.
You would call lksh -o posix -o sh instead of the simple lksh.
Using options like this is a way to make other shells more POSIX-compatible as well: like lksh, bash accepts the option -o posix (bash -o posix).
In bash, it is even possible to turn on the POSIX option inside a script, with:
shopt -o posix # also with: set -o posix
It is also possible to make a local link to bash or zsh that makes either act like an old sh shell, like this:
$ ln -s /bin/bash ./sh
$ ./sh
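You can check that the link changes behavior: when invoked under the name sh, bash enters POSIX mode after reading its startup files (a quick sanity check; the output format may differ slightly between bash versions):

$ ./sh -c 'shopt -o posix'
posix          on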
There are plenty of alternatives (dash, posh, lksh, bash, zsh, etc.) to get a shell that will work as a POSIX shell.
Portable
However, even so, all of the above still does not ensure "portability".
Unfortunately, making a shell script 'POSIX-compliant' is usually easier than making it run on any real-world shell.
The only sensible real-world recommendation is to test your script in several shells, such as those listed above: dash, posh, lksh, and bash --posix.
Solaris is a world of its own: you will probably need to test against both /bin/sh and /usr/xpg4/bin/sh there.
Followup:
How can I test for POSIX compliance for shell scripts?
Starting Bash with the --posix command-line option or executing ‘set -o posix’ while Bash is running will cause Bash to conform more closely to the POSIX standard by changing the behavior to match that specified by POSIX in areas where the Bash default differs.
Reference
Note:
This answer complements user8017719's great answer.
As requested in the question, a tool is discussed below: while it does not directly check for POSIX compliance, it runs a given script in multiple shells, notably including /bin/sh.
/bin/sh, the system default shell, should not be assumed to support any features other than POSIX-prescribed ones, though in practice it does, to varying degrees, depending on the specific implementation. Therefore, successfully running via /bin/sh on one platform does not guarantee that the script will work on another. Among widely used shells, dash comes closest to being a POSIX-features-only shell.
Running successfully in multiple shells is important:
if you're authoring a script that needs to be sourced in various shells.
if you know that your script will encounter only a limited set of known-in-advance shells.
For a proof-of-the-pudding-is-in-the-eating approach, consider using shall (a utility I wrote), which allows you to invoke a given script or command with multiple shells at once, with feedback about which of the targeted shells the script/command executed successfully with.
If you have Node.js installed, you can easily install it with npm install -g shall (if not, follow the above link to the GitHub repo for manual installation instructions) and then use it as follows:
shall scriptFile
or, with an ad-hoc command:
shall -c '<shell-commands>'
By default, it invokes sh, and, if installed, dash, bash, zsh, and ksh, but you can target any set of shells that you have installed by using the SHELLS environment variable.
Using the example of the echo -n command on macOS to only target shells sh and bash:
$ SHELLS=sh,bash shall -c 'echo -n hi'
✓ sh (bash variant) [0.00s]
-n hi
✓ bash [0.00s]
hi
OK - All 2 shells (sh, bash) report success.
On macOS, bash (effectively) acts as sh, and while echo -n didn't fail when used with sh, you can also see that -n wasn't recognized as an option when bash ran as sh.
Another macOS example that shows that bash permits certain Bash-specific extensions even when running as sh, such as using nonstandard [[ ... ]] conditionals (assumes that dash - which acts as sh on Ubuntu systems - was installed via Homebrew):
$ SHELLS=sh,bash,dash shall -c '[[ -n nonempty ]] && echo nonempty'
✓ sh (bash variant) [0.00s]
nonempty
✓ bash [0.00s]
nonempty
✗ dash [0.01s]
dash: 1: [[: not found
FAILED - 1 shell (dash) reports failure, 2 (sh, bash) report success.
As you can see, Bash running as sh still accepted [[ ... ]], whereas dash, which is a (mostly) POSIX-features-only shell, failed, because POSIX only mandates [ ... ] conditionals (as an alias of test ... commands).
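The portable fix in this particular case is simple, since POSIX does mandate the single-bracket form; the equivalent command succeeds in all three shells from the example above:

$ SHELLS=sh,bash,dash shall -c '[ -n nonempty ] && echo nonempty'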
I have the following script created by some self-claimed bash expert:
SCRIPT_LOCATION="$(readlink -f $0)"
SCRIPT_DIRECTORY="$(dirname ${SCRIPT_LOCATION})"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"
That runs nicely on my local Ubuntu 16.04. Now I wanted to use it on our RH 7.2 servers, where I got an error message from readlink about being called with bad parameters.
Then I figured: on Ubuntu, $0 gives "bash"; whereas on RH, it gives "-bash".
EDIT: script is invoked as . ourscript.sh
Questions:
Any idea why that is?
When I change my script to use a hardcoded readlink -f bash, the whole thing works. Are there "better" ways of fixing this?
Feel free to also explain what readlink -f bash is actually doing ;-)
As the script is sourced, the readlink -f $0 is pointless: it will just show you the command used to start the shell you are currently using.
To explain the difference in $0, let's look at the bash man page:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
So presumably Ubuntu starts bash as a non-login shell (plain bash), whereas RH starts it as a login shell (hence the leading -).
As for readlink, we can again look at the man page:
-f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist
Therefore it follows symlinks to the base.
Using readlink -f with any non-qualified path just appends that last argument to your current working directory, which does not actually show where the script is run from.
Try putting any random string in place of bash after it, and you will see the script is unaffected, e.g.
readlink -f dafsfdsf
Returns
/home/me/testscript/dafsfdsf
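If the script only ever needs to run under bash, one fix is to use BASH_SOURCE instead of $0, since BASH_SOURCE holds the path of the file being sourced even when $0 does not. A sketch of the original three lines rewritten this way (bash-specific):

# BASH_SOURCE[0] is the path of this file even when the script is sourced.
SCRIPT_LOCATION="$(readlink -f "${BASH_SOURCE[0]}")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"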
A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked because I had to interpolate all the commands manually. In general, is there some sort of way I can "dry run" a script to have it dump out all the commands it would execute, without it actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated and any command invoked - whether built in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis will stop at the first error, an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (env. var. PS4 allows tweaking the output format).
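For instance, to trace a script with line numbers in the trace prefix (PS4 and LINENO are standard in bash; the format string itself is just an example):

$ PS4='+ line $LINENO: ' bash -x someScript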
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v will effectively simply print the source code.
If there is a syntax error, there may be some benefit in seeing the source code printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
You could try running the script under Kornshell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that will be executed.
You can also use set -n for the same effect. Kornshell and BASH are fairly compatible with each other. If it's a pure Bourne shell script, both Kornshell and BASH will execute it pretty much the same.
You can also run ksh -u, which will cause unset shell variables to make the script fail. However, that wouldn't have caught the cat of a nonexistent file here: in that case, the shell variable was set. It was set to null.
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's the best you can probably do.
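As an aside, the set-but-null case that -u misses can be caught with the POSIX ${var:?} expansion, which aborts a non-interactive script with an error if the variable is unset or empty. A defensive sketch of the dangerous pattern from the question:

# Abort if the config file cannot be read, or if SWROOT expands to nothing.
SWROOT=$(cat /etc/vendor/path.conf) || exit 1
rm -rf "${SWROOT:?SWROOT is empty, refusing to rm}/bin"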
While writing Bash scripts, I generally use the which command of a Linux machine (where "Linux machine" refers to a desktop Linux OS like Ubuntu, Fedora, or openSUSE) to find the path or availability of other binaries. I understand that which searches for binaries (commands) present in the directories of the PATH variable.
Now, I am unsure how to proceed if the which command itself is not present on the machine.
My intention is to create a shell script (Bash) that can be run on a machine and, in case the environment is not adequate (e.g., some command used in the script is missing), exit gracefully.
Does anyone have any suggestions in this regard? I understand there are ways like using locate or find, etc., but again, what if even those are not available? Another option I already know of is to look for the existence of a which binary at standard paths like /usr/bin/, /bin/, or /usr/local/bin/. Is there any other possibility?
Thanks in advance.
type which
type is a bash built-in command, so it's always available in bash. See man bash for details on it.
Note, that this will also recognize aliases:
$ alias la='ls -l -a'
$ type la
la is aliased to 'ls -l -a'
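Building on that, here is a minimal sketch of the graceful-exit check the question asks for (the list of required commands is just an example):

#!/bin/bash
# Verify that every required command is available before doing any real work.
for cmd in tar gzip rsync; do
    if ! type "$cmd" >/dev/null 2>&1; then
        echo "Error: required command '$cmd' not found; exiting." >&2
        exit 1
    fi
done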
(More of a comment because Boldewyn answered perfectly, but it is another take on the question that may be of interest to some.)
If you are worried that someone may have messed with your bash installation and somehow removed which, then I suppose in theory, when you actually invoked the command you would get an exit code of 127.
Consider
$ sdgsdg
-bash: sdgsdg: command not found
$ echo $?
127
Exit codes in bash: http://tldp.org/LDP/abs/html/exitcodes.html
Of course, if someone removed which, then I wouldn't trust the exit codes, either.
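Still, purely to illustrate the mechanism, a sketch that uses the 127 convention to detect a missing which without relying on any external tool:

# The shell itself returns 127 when a command cannot be found at all.
which which >/dev/null 2>&1
if [ $? -eq 127 ]; then
    echo "which is not installed" >&2
fi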