Linux script - substring and append compatibility with Unix

I am trying to make the following script compatible with other platforms (Unix) and I am not sure whether it will be; in particular, whether the [[ test ]] construct and the %% / # expansions are compatible. At least this script works fine on Linux.
It would be great if someone who is familiar with Unix could make some suggestions or fixes to make the following script portable across platforms (except Windows).
#!/bin/sh
INSTALL_HOME=/opt/prod/install0308
export INSTALL_HOME
export CONF_INSTALL_ARGS="-Dinstall.ext.dir=/opt/prod/installExt -Dinstall.alternateExtDir=/opt/dev/installExt/lib -Dinstall.type=OSD"
INSTALL_ALTERNATIVE_TYPES_DIR=''
if [[ ${CONF_INSTALL_ARGS} == *'-Dinstall.alternateExtDir'* ]]; then
INSTALL_ALT_TYPE_DIR_TEMP=${CONF_INSTALL_ARGS#*-Dinstall.alternateExtDir=}
INSTALL_TYPE_DIR=${INSTALL_ALT_TYPE_DIR_TEMP%%-D*}
FINAL_INST_TYPE_DIR="$(echo -e "${INSTALL_TYPE_DIR}" | sed 's/ *$//g')"
INSTALL_ALTERNATIVE_TYPES_DIR=','$FINAL_INST_TYPE_DIR
fi
TOTAL_CONF_ARGS="-Dinstall.ext.dir=${INSTALL_HOME}/lib/provider,${INSTALL_HOME}/lib/security${INSTALL_ALTERNATIVE_TYPES_DIR}"
echo $TOTAL_CONF_ARGS

This is not a compatibility problem between Operating Systems, this is a compatibility problem between Shells.
Your script has been written for bash-like shells, so you just need to replace the first line #!/bin/sh by #!/bin/bash (or any path where bash is located) for it to work on other systems (do not forget to install bash on them).
NB: This script works on your Linux with the shebang #!/bin/sh probably because your Linux distribution has chosen to replace the legacy sh with a link to bash, or because you are explicitly running the script with bash, like this: bash ./script.sh.
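Alternatively, if the script must keep running under a plain /bin/sh, note that the only non-portable pieces are the [[ ... ]] pattern test and echo -e (the ${var#...} and ${var%%...} expansions are POSIX). A sketch of a case/printf rewrite of the affected part, under that assumption:
INSTALL_ALTERNATIVE_TYPES_DIR=''
case ${CONF_INSTALL_ARGS} in
  *-Dinstall.alternateExtDir*)
    INSTALL_ALT_TYPE_DIR_TEMP=${CONF_INSTALL_ARGS#*-Dinstall.alternateExtDir=}
    INSTALL_TYPE_DIR=${INSTALL_ALT_TYPE_DIR_TEMP%%-D*}
    # printf avoids the shell-specific behavior of echo -e
    FINAL_INST_TYPE_DIR=$(printf '%s' "${INSTALL_TYPE_DIR}" | sed 's/ *$//')
    INSTALL_ALTERNATIVE_TYPES_DIR=,$FINAL_INST_TYPE_DIR
    ;;
esac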

Related

How to get the Unix shell executable name for a script marked as executable with a #!/bin/bash shebang

I'm writing a bash script, and it throws an error when run with the "sh" command in Ubuntu (it seems my script is not compatible with dash; I'm still learning about this subject). So I would like to detect whether dash is being used instead of bash and throw an error if it is.
How can I detect this from within the script? Is it even possible?
You can check for the presence of shell-specific variables:
For instance, bash defines $BASH_VERSION.
Since that variable won't be defined while running in dash, you can use it to make the distinction:
[ -n "$BASH_VERSION" ] && isBash=1
Afterthought: If you wanted to avoid relying on variables (which, conceivably, could be set incorrectly), you could try to obtain the ultimate name of the shell executable running your script, by determining the invoking executable and, if it is a symlink, following it to its (ultimate) target.
The shell function getTrueShellExeName() below does that; for instance, it would return 'dash' on Ubuntu for a script run with sh (whether explicitly or via shebang #!/bin/sh), because sh is symlinked to dash there.
Note that the function's goal is twofold:
Be portable:
Work with all POSIX-compatible (Bourne-like) shells,
across at least most platforms, with respect to what utilities and options are used - see caveats below.
Work in all invocation scenarios:
sourced (whether from a login shell or not)
executed stand-alone, via the shebang line
executed by being passed as a filename argument to a shell executable
executed by having its contents piped via stdin to a shell executable
Caveats:
On at least one platform - macOS - sh is NOT a symlink, even though it is effectively bash. There, the function would return 'sh' in a script run with sh.
The function uses readlink, which, while not mandated by POSIX, is present on most modern platforms - though with differing syntax and features. Therefore, using GNU readlink's -f option to find a symlink's ultimate target is not an option.
(The only modern platform I'm personally aware of that does not have a readlink utility is HP-UX - see https://stackoverflow.com/a/24114056/45375 for a recursive-readlink implementation that should work on all POSIX platforms.)
The function uses the which utility (except in zsh, where it's a builtin), which, while not mandated by POSIX, is present on most modern platforms.
Ideally, ps -p $$ -o comm= would be sufficient to determine the path of the executable underlying the process, but that doesn't work as intended when directly executing shell scripts with shebang lines on Linux, at least when using the ps implementation from the procps-ng package, as found on Ubuntu, for instance: there, such scripts report the script's file name rather than the underlying script engine's. Tip of the hat to ferdymercury for his help.
Therefore, the content of special file /proc/$$/cmdline is parsed on Linux, whose first NUL-separated field contains the true executable path.
Example use of the function:
[ "$(getTrueShellExeName)" = 'bash' ] && isBash=1
Shell function getTrueShellExeName():
getTrueShellExeName() {
local trueExe nextTarget 2>/dev/null # ignore error in shells without `local`
# Determine the shell executable filename.
if [ -r /proc/$$/cmdline ]; then
trueExe=$(cut -d '' -f1 /proc/$$/cmdline) || return 1
else
trueExe=$(ps -p $$ -o comm=) || return 1
fi
# Strip a leading "-", as added e.g. by macOS for login shells.
[ "${trueExe#-}" = "$trueExe" ] || trueExe=${trueExe#-}
# Determine full executable path.
[ "${trueExe#/}" != "$trueExe" ] || trueExe=$([ -n "$ZSH_VERSION" ] && which -p "$trueExe" || which "$trueExe")
# If the executable is a symlink, resolve it to its *ultimate*
# target.
while nextTarget=$(readlink "$trueExe"); do trueExe=$nextTarget; done
# Output the executable name only.
printf '%s\n' "$(basename "$trueExe")"
}
Use $0 (that is, the name of the executable of the shell being called). For example, the command
echo $0
gives
/usr/bin/dash
for the dash and
/bin/bash
for bash. The parameter substitution
${0##*/}
gives just 'dash' or 'bash'. This can be used in a test.
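For instance, a minimal sketch of such a test:
case "${0##*/}" in
  bash) echo "looks like bash" ;;
  dash|sh) echo "looks like dash or plain sh" ;;
  *) echo "some other shell: ${0##*/}" ;;
esac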
An alternative approach might be to test if a shell feature is available, for example to give an idea...
[[ 1 ]] 2>/dev/null && echo could be bash || echo not bash, maybe dash
Both echo $0 and [[ 1 ]] 2>/dev/null && echo could be bash || echo not bash, maybe dash worked for me running Ubuntu 19. (I have done a little Pascal, Fortran and C in school, but need to become fluent in shell script.)

Running bash script on multiple shells

I was trying to create a script in the bash shell, and I came to know that the script doesn't run on the ksh or dash shells. So my question is: how do you make a script run on all 3 shells (bash, dash & ksh)?
In order to write a script that is guaranteed to be portable between the various shells, the script must be POSIX Shell compliant. POSIX defines the minimum set of builtins and commands that all conforming shells must support. Ash, Dash, Zsh, Bash, Ksh, etc. are all shells capable of running scripts that are POSIX compliant.
What shells like Bash do is add nice features which make the shell more capable, like additional parameter expansions for conversion to upper/lower case, substring replacement, etc., and new constructs like [[ ... ]] that provide pattern and regex matching capabilities. While this makes Bash more capable, it also means scripts written using "Bashisms" are no longer able to run under all other shells. Ash, Dash and other minimal shells have no idea how to handle the features added by Bash, Ksh or Zsh and therefore fail.
To write truly portable scripts, you must limit the content to that provided by the POSIX command language.
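As a small illustration (the variable and its value are made up for the example), here are a couple of the Bash-only expansions mentioned above and one possible POSIX-portable replacement for each:
#!/bin/sh
name="Hello World"
# Bash-only (fails in dash/posh):
#   upper=${name^^}        # upper-casing expansion
#   swapped=${name//o/0}   # substring replacement
# POSIX-portable equivalents using tr and sed:
upper=$(printf '%s' "$name" | tr '[:lower:]' '[:upper:]')
swapped=$(printf '%s' "$name" | sed 's/o/0/g')
echo "$upper $swapped"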
You need a file like this:
#!/bin/bash    # not just a simple comment
echo "hello bash"
#!/bin/sh    # not just a simple comment
echo "hello sh"
#!/bin/ksh    # not just a simple comment
echo "hello ksh"
The #! is called a shebang; it tells the system which program will interpret the script.
Name the file whatever you prefer (e.g. file.bsk), but don't forget to give it execute permission with:
chmod +x file.bsk
then run ./file.bsk
Some commands or utilities are not available in all shells, or they may behave differently in different shells. If you know which command runs on which shell, or which one gives you the desired output, you can write shell-specific commands as below:
bash -c 'echo bash'
ksh -c 'echo ksh'
All other commands that are common to all shells can be written in the normal way.

Making Unix shell scripts POSIX compliant

I have been working on a shell script to automate some tasks. What is the best way to make sure the shell script will run without any issues on most platforms? For example, I have been using the echo -n command to print messages to the screen without a trailing newline, and the -n switch doesn't work in some ksh shells. I was told the script must be POSIX compliant. How do I make sure that the script is POSIX compliant? Is there a tool? Or is there a shell that supports only the bare minimum POSIX requirements?
POSIX
A first step, which gives you indications of what works or not and why, is to set the shebang to /bin/sh and use the shellcheck site to analyze your script.
For example, paste this script in the shellcheck editor window:
#!/bin/sh
read -r a b <<<"$1"
echo $((a+b))
to get an indication that: "In POSIX sh, here-strings are undefined".
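A sketch of how the same snippet could be expressed in plain POSIX sh, replacing the undefined here-string with a here-document:
#!/bin/sh
# same idea, but a POSIX here-document instead of <<<
read -r a b <<EOF
$1
EOF
echo $((a+b))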
As a second step, you can use a shell that is as compatible with POSIX as possible.
One shell that is compatible with most other simple shells is dash, Debian's default system shell, which is a derivative of the older BSD ash.
Another shell compatible with POSIX is posh.
However, dash and/or posh may not be available for some systems.
There is lksh (with a ksh flavor), with the goal to be compatible with legacy (old) shell scripts. From its manual:
lksh is a command interpreter intended exclusively for running legacy shell scripts.
But you need to use options when calling lksh, such as -o posix and -o sh:
Note that it's strongly recommended to invoke lksh with at least the -o posix option, if not both that and -o sh, to fully enjoy better compatibility to the POSIX standard (which is probably why you use lksh over mksh in the first place) or legacy scripts, respectively.
You would call lksh -o posix -o sh instead of the simple lksh.
Using options is also a way to make other shells more POSIX compatible. As with lksh, the option is -o posix, e.g. bash -o posix.
In bash, it is even possible to turn on the POSIX option inside a script, with:
shopt -o posix # also with: set -o posix
It is also possible to make a local link to bash or zsh that makes it act like an old sh shell, like this:
$ ln -s /bin/bash ./sh
$ ./sh
There are plenty of alternatives (dash, posh, lksh, bash, zsh, etc.) to get a shell that will work as a POSIX shell.
Portable
However, even so, all the above does not ensure "portability".
Unfortunately, making a shell script 'POSIX-compliant' is usually easier than making it run on any real-world shell.
The only real-world sensible recommendation is to test your script in several shells.
Like the list above: dash, posh, lksh, and bash --posix.
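A minimal sketch of doing just that, assuming those shells are installed and myscript.sh is the (hypothetical) script under test:
#!/bin/sh
for shell in dash posh "lksh -o posix -o sh" "bash --posix"; do
  printf '== %s ==\n' "$shell"
  $shell ./myscript.sh    # left unquoted on purpose so the options are split off
done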
Solaris is a world of its own; you will probably need to test against /bin/sh and xpg4/sh there as well.
Followup:
How can I test for POSIX compliance for shell scripts?
Starting Bash with the --posix command-line option or executing ‘set -o posix’ while Bash is running will cause Bash to conform more closely to the POSIX standard by changing the behavior to match that specified by POSIX in areas where the Bash default differs.
(Reference: the Bash Reference Manual's description of POSIX mode.)
Note:
This answer complements user8017719's great answer.
As requested in the question, a tool is discussed below: while it does not directly check for POSIX compliance, it runs a given script in multiple shells, notably including /bin/sh.
/bin/sh, the system default shell, should not be assumed to support any features other than POSIX-prescribed ones, though in practice it does, to varying degrees, depending on the specific implementation. Therefore, successfully running via /bin/sh on one platform does not guarantee that the script will work on another. Among widely used shells, dash comes closest to being a POSIX-features-only shell.
Running successfully in multiple shells is important:
if you're authoring a script that needs to be sourced in various shells.
if you know that your script will encounter only a limited set of known-in-advance shells.
For a proof-of-the-pudding-is-in-the-eating approach, consider using shall (a utility I wrote), which allows you to invoke a given script or command with multiple shells at once, with feedback about which of the targeted shells the script/command executed successfully with.
If you have Node.js installed, you can easily install it with npm install -g shall (if not, follow the above link to the GitHub repo for manual installation instructions) and then use it as follows:
shall scriptFile
or, with an ad-hoc command:
shall -c '<shell-commands>'
By default, it invokes sh, and, if installed, dash, bash, zsh, and ksh, but you can target any set of shells that you have installed by using the SHELLS environment variable.
Using the example of the echo -n command on macOS to only target shells sh and bash:
$ SHELLS=sh,bash shall -c 'echo -n hi'
✓ sh (bash variant) [0.00s]
-n hi
✓ bash [0.00s]
hi
OK - All 2 shells (sh, bash) report success.
On macOS, bash (effectively) acts as sh, and while echo -n didn't fail when used with sh, you can also see that -n wasn't recognized as an option when bash ran as sh.
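If the goal is simply a portable way to print without a trailing newline, printf is the usual POSIX-portable replacement for echo -n; a minimal sketch:
# printf does not append a newline unless you ask for one
printf '%s' "hi"
printf '%s\n' "done"    # add the newline explicitly when wanted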
Another macOS example that shows that bash permits certain Bash-specific extensions even when running as sh, such as using nonstandard [[ ... ]] conditionals (assumes that dash - which acts as sh on Ubuntu systems - was installed via Homebrew):
$ SHELLS=sh,bash,dash shall -c '[[ -n nonempty ]] && echo nonempty'
✓ sh (bash variant) [0.00s]
nonempty
✓ bash [0.00s]
nonempty
✗ dash [0.01s]
dash: 1: [[: not found
FAILED - 1 shell (dash) reports failure, 2 (sh, bash) report success.
As you can see, Bash running as sh still accepted [[ ... ]], whereas dash, which is a (mostly) POSIX-features-only shell, failed, because POSIX only mandates [ ... ] conditionals (as an alias of test ... commands).
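For comparison, the POSIX-portable form of the same conditional uses single brackets and should succeed in all three shells:
SHELLS=sh,bash,dash shall -c '[ -n nonempty ] && echo nonempty'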

Setting environment variable in /usr/bin/env hangs process on Linux

The man page for env on Linux seems to indicate that you can set new environment variables before executing a command. Unfortunately, when I set new variables in a file's shebang on Linux systems, the file never executes.
#!/usr/bin/env VAR1=foo bash
echo $VAR1
When I execute this file on a CentOS or Ubuntu machine, it just sits there.
$ ./shell-env.sh
<nothing happens>
What I find particularly bizarre is that this works perfectly fine on OS X with BSD env.
$ ./shell-env.sh
foo
$
Is this just a difference between BSD env and Linux env? Why do the man pages for Linux seem to say it should work the same way as on BSD?
P.S. My use case here is to override the PATH variable, so I can try to find a ruby that is on the system but not on the PATH.
Thank you in advance!
To answer the "why" first: on Linux, everything after the interpreter path in a shebang line is passed to it as a single argument, so env receives "VAR1=foo bash" as one word, treats it as a variable assignment, and then re-executes the script itself, which invokes env again, and so on; that loop is why the process appears to hang (macOS splits the words into separate arguments, which is why the same file works there). As for the use case: there's a way to manipulate the environment before executing a Ruby script, without using a wrapper script of some kind, but it's not pretty:
#!/bin/bash
export FOO=bar
exec ruby -x "$0" "$@"
#!ruby
puts ENV['FOO']
This is usually reserved for esoteric situations where you need to manipulate e.g. PATH or LD_LIBRARY_PATH before executing the program, and it needs to be self-contained for some reason. It works for Perl and possibly others too!
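Applied to the PATH use case from the question, a sketch of the same trick (the Ruby location /opt/ruby/bin is only a placeholder):
#!/bin/bash
# prepend the directory containing the desired ruby to PATH, then re-exec via ruby -x
export PATH="/opt/ruby/bin:$PATH"
exec ruby -x "$0" "$@"
#!ruby
puts RUBY_VERSION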

Run a Teradata query on Linux

I am badly in need of some direction here :) There is a batch file (.bat) which runs a Teradata query on Windows, but for some reasons I will have to use a Linux server from now on.
test.bat
echo off
bteq < D:\commands.txt > D:\output.txt 2>&1
#echo off goto end
:end #echo exit
commands.txt
.LOGON ------
select (date);
.LOGOFF
How can I do this on Red Hat Linux? And is it necessary to have the BTEQ utilities or any other Teradata utilities? I do have the Teradata ODBC drivers on Linux, though.
It would be great if anyone could give some insight into this.
Thank you
BTEQ is available on multiple flavours of Windows/Unix/Linux, including Red Hat.
BTEQ can't use ODBC; you need to install BTEQ itself plus some other packages such as CLI.
You might just have to make some minor modifications to your BTEQ script, e.g. backslash to slash in path names, rm instead of del in .OS commands.
Otherwise you can run this as a shell script (you just have to decide which Unix shell to use: sh, ksh, bash, etc.); everything you can do in a Windows bat file can be done in a Unix shell, too.
Make the script executable using chmod u+x test.sh
#!/bin/sh
bteq < /...../commands.txt > /...../output.txt 2>&1
and then simply run it from the command prompt.
