Bash return code error handling when using heredoc input - linux

Motivation
I'm in a situation where I have to run multiple bash commands with a single bash invocation, without the possibility of writing a full script file (use case: passing multiple commands to a container in Kubernetes). A common solution is to combine commands with ; or &&, for instance:
bash -c " \
echo \"Hello World\" ; \
ls -la ; \
run_some_command "
In practice, writing bash scripts like that turns out to be error-prone, because I often forget a semicolon, leading to subtle bugs.
Inspired by this question, I was experimenting with writing scripts in a more standard style by using a heredoc:
bash <<EOF
echo "Hello World"
ls -la
run_some_command
EOF
Unfortunately, I noticed that there is a difference in exit code error handling when using a heredoc. For instance:
bash -c " \
run_non_existing_command ; \
echo $? "
outputs (note that $? properly captures the exit code):
bash: run_non_existing_command: command not found
127
whereas
bash <<EOF
run_non_existing_command
echo $?
EOF
outputs (note that $? fails to capture the exit code compared to standard script execution):
bash: line 1: run_non_existing_command: command not found
0
Why is the heredoc version behaving differently? Is it possible to write the script in the heredoc style and maintain normal exit code handling?

Why is the heredoc version behaving differently?
Because $? is expanded by the outer shell before the heredoc text is ever handed to the inner bash: it reports the status of whatever the outer shell ran last, not of run_non_existing_command.
The following will output 1, which is the exit status of the false command:
false
bash <<EOF
run_non_existing_command
echo $?
EOF
It's the same in principle as the following, which will print 5:
variable=5
bash <<EOF
variable="This is ignored"
echo $variable
EOF
Is it possible to write the script in the heredoc style and maintain normal exit code handling?
If you want $? to be expanded inside the inner shell, escape the dollar sign:
bash <<EOF
run_non_existing_command
echo \$?
EOF
or quote the heredoc delimiter, which disables all expansion inside the heredoc:
bash <<'EOF'
run_non_existing_command
echo $?
EOF
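Either way, expansion of $? is deferred to the inner shell, so both variants should now print (assuming run_non_existing_command does not exist on your system):
bash: line 1: run_non_existing_command: command not found
127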
Also note that:
bash -c \
run_non_existing_command ;
echo $? ;
is equivalent to:
bash -c run_non_existing_command
echo $?
The echo $? is not executed inside bash -c.

Related

Bash command with pipe('|') alway return exit code of 0, even in error case [duplicate]

I want to execute a long running command in Bash, and both capture its exit status, and tee its output.
So I do this:
command | tee out.txt
ST=$?
The problem is that the variable ST captures the exit status of tee and not of command. How can I solve this?
Note that command is long running and redirecting the output to a file to view it later is not a good solution for me.
There is an internal Bash variable called $PIPESTATUS; it’s an array that holds the exit status of each command in your last foreground pipeline of commands.
<command> | tee out.txt ; test ${PIPESTATUS[0]} -eq 0
Or another alternative which also works with other shells (like zsh) would be to enable pipefail:
set -o pipefail
...
The first option does not work with zsh due to a slightly different syntax.
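For completeness, the zsh analogue of PIPESTATUS is the lowercase pipestatus array, which is indexed from 1 rather than 0 (a hedged sketch; it behaves like the bash line above):
command | tee out.txt ; test $pipestatus[1] -eq 0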
Dumb solution: Connecting them through a named pipe (mkfifo). Then the command can be run second.
mkfifo pipe
tee out.txt < pipe &
command > pipe
echo $?
using bash's set -o pipefail is helpful
pipefail: the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
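A minimal demonstration, using false as the failing stage and tee as the succeeding one:
set -o pipefail
false | tee out.txt
echo $?   # prints 1: the pipeline now reports the failing command's status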
There's an array that gives you the exit status of each command in a pipe.
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo $?
0
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo ${PIPESTATUS[*]}
1 0
$ touch x
$ cat x| sed 's'
sed: 1: "s": substitute pattern can not be delimited by newline or backslash
$ echo ${PIPESTATUS[*]}
0 1
This solution works without using bash specific features or temporary files. Bonus: in the end the exit status is actually an exit status and not some string in a file.
Situation:
someprog | filter
you want the exit status from someprog and the output from filter.
Here is my solution:
((((someprog; echo $? >&3) | filter >&4) 3>&1) | (read xs; exit $xs)) 4>&1
echo $?
See my answer for the same question on unix.stackexchange.com for a detailed explanation and an alternative without subshells and some caveats.
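To make the plumbing concrete, here is the same construct with false as someprog and tee as filter (a sketch; only the command names are substituted):
((((false; echo $? >&3) | tee out.txt >&4) 3>&1) | (read xs; exit $xs)) 4>&1
echo $?   # prints 1, the status of false, not of tee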
By combining PIPESTATUS[0] and the result of executing the exit command in a subshell, you can directly access the return value of your initial command:
command | tee ; ( exit ${PIPESTATUS[0]} )
Here's an example:
# the "false" shell built-in command returns 1
false | tee ; ( exit ${PIPESTATUS[0]} )
echo "return value: $?"
will give you:
return value: 1
So I wanted to contribute an answer like lesmana's, but I think mine is perhaps a little simpler and a slightly more advantageous pure-Bourne-shell solution:
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; printf $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out - command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, printf will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor 3.
While command1 is running, its stdout is being piped to command2 (printf's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor 1 - because we want file descriptor 1 free for a little bit later, when we will bring the printf output on file descriptor 3 back down into file descriptor 1 - because that's what the command substitution (the backticks) will capture, and that's what will get placed into the variable.
The final bit of magic is that first exec 4>&1 we did as a separate command - it opens file descriptor 4 as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it - but since command2's output is going to file descriptor 4 as far as the command substitution is concerned, the command substitution doesn't capture it - however once it gets "out" of the command substitution it is effectively still going to the script's overall file descriptor 1.
(The exec 4>&1 has to be a separate command because many common shells don't like it when you try to write to a file descriptor inside a command substitution, that is opened in the "external" command that is using the substitution. So this is the simplest portable way to do it.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the printf's output jumps over command 2 so that command2 doesn't catch it, and then command 2's output jumps over and out of the command substitution just as printf lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its merry way being written to the standard output, just as in a normal pipe.
Also, as I understand it, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out - this, and not having to define an additional function, is why I think this might be a somewhat better solution than the one proposed by lesmana.
Per the caveats lesmana mentions, it's possible that command1 will at some point end up using file descriptors 3 or 4, so to be more robust, you would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; printf $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Note that I use compound commands in my example, but subshells (using ( ) instead of { }) will also work, though they may be less efficient.
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit the file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
I'm not sure how often things use file descriptor three and four directly - I think most of the time programs use syscalls that return not-used-at-the-moment file descriptors, but sometimes code writes to file descriptor 3 directly, I guess (I could imagine a program checking a file descriptor to see if it's open, and using it if it is, or behaving differently accordingly if it's not). So the latter is probably best to keep in mind and use for general-purpose cases.
(command | tee out.txt; exit ${PIPESTATUS[0]})
Unlike @cODAR's answer this returns the original exit code of the first command and not only 0 for success and 127 for failure. But as @Chaoran pointed out you can just call ${PIPESTATUS[0]}. It is important however that everything is enclosed in parentheses.
In Ubuntu and Debian, you can apt-get install moreutils. This contains a utility called mispipe that returns the exit status of the first command in the pipe.
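Usage takes the two halves of the pipeline as separate string arguments, for example (assuming moreutils is installed; out.txt is an arbitrary name):
mispipe "false" "tee out.txt"
echo $?   # prints 1, the exit status of the first command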
Outside of bash, you can do:
bash -o pipefail -c "command1 | tee output"
This is useful for example in ninja scripts where the shell is expected to be /bin/sh.
The simplest way to do this in plain bash is to use process substitution instead of a pipeline. There are several differences, but they probably don't matter very much for your use case:
When running a pipeline, bash waits until all processes complete.
Sending Ctrl-C to bash makes it kill all the processes of a pipeline, not just the main one.
The pipefail option and the PIPESTATUS variable are irrelevant to process substitution.
Possibly more
With process substitution, bash just starts the process and forgets about it, it's not even visible in jobs.
Mentioned differences aside, consumer < <(producer) and producer | consumer are essentially equivalent.
If you want to flip which one is the "main" process, you just flip the commands and the direction of the substitution to producer > >(consumer). In your case:
command > >(tee out.txt)
Example:
$ { echo "hello world"; false; } > >(tee out.txt)
hello world
$ echo $?
1
$ cat out.txt
hello world
$ echo "hello world" > >(tee out.txt)
hello world
$ echo $?
0
$ cat out.txt
hello world
As I said, there are differences from the pipe expression. The process may never stop running, unless it is sensitive to the pipe closing. In particular, it may keep writing things to your stdout, which may be confusing.
PIPESTATUS[@] must be copied to an array immediately after the pipe command returns.
Any reads of PIPESTATUS[@] will erase the contents.
Copy it to another array if you plan on checking the status of all pipe commands.
"$?" is the same value as the last element of "${PIPESTATUS[@]}",
and reading it seems to destroy "${PIPESTATUS[@]}", but I haven't absolutely verified this.
declare -a PSA
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[@]}" )
This will not work if the pipe is in a sub-shell. For a solution to that problem,
see bash pipestatus in backticked command?
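With the saved copy you can inspect every stage after the fact; a small sketch building on the snippet above (cmd1..cmd3 are placeholders):
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[@]}" )
for i in "${!PSA[@]}"; do
echo "stage $i exited with status ${PSA[$i]}"
done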
Based on @brian-s-wilson's answer, this bash helper function:
pipestatus() {
local S=("${PIPESTATUS[@]}")
if test -n "$*"
then test "$*" = "${S[*]}"
else ! [[ "${S[*]}" =~ [^0\ ] ]]
fi
}
used thus:
1: get_bad_things must succeed, but it should produce no output; we want to see any output that it does produce
get_bad_things | grep '^'
pipestatus 0 1 || return
2: the whole pipeline must succeed
thing | something -q | thingy
pipestatus || return
Pure shell solution:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (cat || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
hello world
And now with the second cat replaced by false:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (false || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
Some command failed:
Second command failed: 1
First command failed: 141
Please note that the first cat fails as well, because its stdout gets closed on it. The order of the failed commands in the log is correct in this example, but don't rely on it.
This method allows for capturing stdout and stderr for the individual commands so you can then dump that as well into a log file if an error occurs, or just delete it if no error (like the output of dd).
It may sometimes be simpler and clearer to use an external command, rather than digging into the details of bash. pipeline, from the minimal process scripting language execline, exits with the return code of the second command*, just like a sh pipeline does, but unlike sh, it allows reversing the direction of the pipe, so that we can capture the return code of the producer process (the below is all on the sh command line, but with execline installed):
$ # using the full execline grammar with the execlineb parser:
$ execlineb -c 'pipeline { echo "hello world" } tee out.txt'
hello world
$ cat out.txt
hello world
$ # for these simple examples, one can forego the parser and just use "" as a separator
$ # traditional order
$ pipeline echo "hello world" "" tee out.txt
hello world
$ # "write" order (second command writes rather than reads)
$ pipeline -w tee out.txt "" echo "hello world"
hello world
$ # pipeline execs into the second command, so that's the RC we get
$ pipeline -w tee out.txt "" false; echo $?
1
$ pipeline -w tee out.txt "" true; echo $?
0
$ # output and exit status
$ pipeline -w tee out.txt "" sh -c "echo 'hello world'; exit 42"; echo "RC: $?"
hello world
RC: 42
$ cat out.txt
hello world
Using pipeline entails the same differences from native bash pipelines as the bash process substitution used in answer #43972501.
* Actually pipeline doesn't exit at all unless there is an error. It executes into the second command, so it's the second command that does the returning.
Why not use stderr? Like so:
(
# Our long-running process that exits abnormally
( for i in {1..100} ; do echo ploop ; sleep 0.5 ; done ; exit 5 )
echo $? 1>&2 # We pass the exit status of our long-running process to stderr (fd 2).
) | tee ploop.out
So ploop.out receives the stdout. stderr receives the exit status of the long running process. This has the benefit of being completely POSIX-compatible.
(Well, with the exception of the range expression in the example long-running process, but that's not really relevant.)
Here's what this looks like:
...
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
5
Note that the return code 5 does not get output to the file ploop.out.

User defined output redirection not working as expected

I am using a KSH script to execute a binary (program) that has the following syntax to execute correctly:
myprog [-v | --verbose (optional)] [input1] [input2]
The program prints nothing & returns exit code 0 (zero) on success. On failure it prints ERROR messages to STDERR & returns exit status > 0. If -v option is specified it prints verbose details to STDOUT both in case of success and failure.
To make this usable, reduce the chance of argument swapping, and give the user control over logging, I used a ksh shell script to invoke this binary. The syntax to run the ksh shell script is:
myshell.sh [-v (optional)] [-a input1] [-b input2]
If -v option is specified, ksh redirects STDOUT to <execution_date_time>_out.log and STDERR to <execution_date_time>_err.log. My ksh script is as follows:
myshell.sh :
#!/bin/ksh
verbopt=""
log=""
arg1=""
arg2=""
dateTime=`date +%y-%m-%d_%H:%M:%S`
while getopts "va:b:" arg
do
case $arg in
v) # verbose output
verbopt="-v"
log="1>${dateTime}_out.log 2>${dateTime}_err.log"
;;
a) # Input 1
arg1=$OPTARG
;;
b) # Input 2
arg2=$OPTARG
;;
*) # usage
echo "USAGE: myshell.sh [-v] [-a input1] [-b input2]"
exit 2
;;
esac
done
if [[ -z $arg1 || -z $arg2 ]]
then
echo "Missing arguments"
exit 2
fi
myprog $verbopt $arg1 $arg2 $log
exit $?
The problem here is that all the STDERR & STDOUT output is printed on the screen (i.e., no redirection takes place), and no *.log files are created after successful or unsuccessful execution (i.e., exit status 0 or >0, respectively).
Can anyone help me out on this?
Thanks.
Rather than trying to monkey patch redirections into the command line, just redirect the streams when you parse the flags. That is:
while getopts "va:b:" arg
do
case $arg in
v) # verbose output
verbopt="-v"
exec 1>${dateTime}_out.log 2>${dateTime}_err.log
;;
...
You need to be a little careful, since you do some error checking after this and you probably don't want your later error messages going to the *_err.log, but that's fairly trivial to fix. (eg, error check sooner, or do a test -n "$verbopt" && exec > ... after the error check, or similar)
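A hedged sketch of that ordering, validating arguments first and only then redirecting:
# ... getopts loop and missing-argument check as above ...
if [ -n "$verbopt" ]; then
exec 1>"${dateTime}_out.log" 2>"${dateTime}_err.log"
fi
myprog $verbopt "$arg1" "$arg2"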
The problem is that the > in the value of $log is never treated as a redirection: redirections are parsed before variable expansion, so the expanded text is passed to myprog as ordinary arguments.
I'm afraid you will need to use a conditional for this, for example:
cmd="myprog $verbopt $arg1 $arg2"
if [ "$log" ]; then
$cmd 1>${dateTime}_out.log 2>${dateTime}_err.log
else
$cmd
fi
I would use the exec redirection idiom, which runs the rest of the script as if the given redirection had been supplied when it was run:
if need_to_log; then
exec >stdout_file 2>stderr_file
fi
this command will be logged if the above if statement was true
If you need to restore stdout and stderr afterward for the script to do more unlogged things, you can just run the logging part in a subshell:
(
if need_to_log; then
exec >stdout_file 2>stderr_file
fi
this command will be logged if the above if statement was true
)
this command will not be logged regardless
I would also build the command in an array, so you can add things like -v to it without having to have a separate variable for each possible parameter. If the order in which the -a and -b arguments are supplied to myprog doesn't matter, you can just add those to the array instead of having separate variables as well.
You can see my version below. Besides the above changes, I also don't bother getting the timestamp if not logging, since it's unneeded, and send error messages to standard error instead of standard out using the ksh builtin print.
Here's what I put together:
#!/usr/bin/env ksh
# new array syntax requires ksh93+; for older ksh, use this:
# set -A cmd myprog
cmd=(myprog) # build up the command to run in an array
log_flag=0 # nonzero if the command should be logged
input_a= # the two input filenames
input_b=
while getopts 'va:b:' arg; do
case $arg in
v) # verbose output
# older ksh: set -A cmd "${cmd[@]}" -v
cmd+=(-v)
log_flag=1
;;
a) # Input 1
input_a=$OPTARG
;;
b) # Input 2
input_b=$OPTARG
;;
*) # usage
print -u2 "USAGE: $0 [-v] [-a input1] [-b input2]"
exit 2
;;
esac
done
if [[ -z $input_a || -z $input_b ]]; then
print -u2 "$0: Missing arguments"
exit 2
fi
if (( log_flag )); then
timestamp=$(date +%y-%m-%d_%H:%M:%S)
exec >"${timestamp}_out.log" 2>"${timestamp}_err.log"
fi
"${cmd[@]}" "$input_a" "$input_b"
Your timestamp uses the two-digit year (%y); that and the underscore between the components are the only deviations from the ISO 8601 standard, so I would recommend you go ahead and adopt the standard format. That'd be %Y-%m-%dT%H:%M:%S, or, in C libraries with newer versions of strftime, %FT%T.
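For example (illustrative output; %F and %T are supported by GNU date and most modern strftime implementations):
date +%FT%T
# 2017-07-23T15:56:01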
You could also be a little more clever and make log_flag a string that is either empty or -q, pass that to the command, and test it against the empty string to determine whether or not to open the log files, but I find the logic easier to follow with the simple 0/1 value treated as a Boolean.
Take a look at the eval command.
Replace ...
myprog $verbopt $arg1 $arg2 $log
with:
eval myprog $verbopt $arg1 $arg2 $log
I don't know what your myprog does but here's a simple example using eval to run date (valid command) and date xyz (invalid command), redirecting output to log.stdout/log.stderr accordingly:
$ cat logout
log='1>log.stdout 2>log.stderr'
'rm' -rf log.std* > /dev/null 2>&1
echo ""
echo 'eval date ${log}'
eval date ${log}
echo ""
echo "++++++++++++ log.stdout"
cat log.stdout
echo "++++++++++++ log.stderr"
cat log.stderr
echo "++++++++++++"
'rm' -rf log.std* > /dev/null 2>&1
echo ""
echo 'eval date xyz ${log}'
eval date xyz ${log}
echo ""
echo "++++++++++++ log.stdout"
cat log.stdout
echo "++++++++++++ log.stderr"
cat log.stderr
echo "++++++++++++"
Now run the script:
$ logout
eval date ${log}
++++++++++++ log.stdout
Sun Jul 23 15:56:01 CDT 2017
++++++++++++ log.stderr
++++++++++++
eval date xyz ${log}
++++++++++++ log.stdout
++++++++++++ log.stderr
date: invalid date `xyz'
++++++++++++

Bash discards command line arguments when passing to another bash shell

I have a big script (call it test) that, after stripping out the unrelated parts, comes down to just this using which I can explain my question:
#!/bin/bash
bash -c "$@"
This doesn't work as expected. E.g. ./test echo hi executes only the echo and the argument disappears!
Testing with various inputs I can see that only $1 is passed to bash -c ... and the rest are discarded.
But if I use a variable like:
#!/bin/bash
cmd="$@"
bash -c "$cmd"
it works as expected for all inputs.
Questions:
1) I would like to understand why the double quotes don't "pass" the entire command line arguments to bash -c .... What am I missing here (that it works perfectly fine when using an intermediate variable)?
2) Why does bash discard the rest of the arguments (except $1) without any error messages?
For example:
bash -c "ls" -l -a hi hello blah
simply runs ls and the extra arguments -l -a hi hello blah don't result in any errors at all?
(If possible, please refer to the bash grammar where this behaviour is documented).
1) I would like to understand why the double quotes don't "pass" the entire command line arguments to bash -c .... What am I missing here (that it works perfectly fine when using an intermediate variable)?
From info bash:
@
($@) Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands
to a separate word. That is, "$@" is equivalent to "$1" "$2" ....
Thus, bash -c "$@" is equivalent to bash -c "$1" "$2" .... In the case of the ./test echo hi invocation, the expression is expanded to
bash -c "echo" "hi"
2) Why does bash discard the rest of the arguments (except $1) without any error messages?
Bash actually doesn't discard anything. From man bash:
If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, they are assigned to the positional parameters, starting with $0.
Thus, for the command bash -c "echo" "hi", Bash passes "hi" as $0 for the "echo" script.
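You can see this directly; a minimal demo (the backslash keeps the outer shell from expanding $0 prematurely):
bash -c "echo \$0" hi
# hi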
bash -c "ls" -l -a hi hello blah
simply runs ls and the extra arguments -l -a hi hello blah don't result in any errors at all?
According to the rules mentioned above, Bash executes the "ls" script and passes the following positional parameters to it:
$0: "-l"
$1: "-a"
$2: "hi"
$3: "hello"
$4: "blah"
Thus, the command actually executes ls, and the positional parameters are unused in the script. You can use them by referencing the positional parameters, e.g.:
$ set -x
$ bash -c "ls \$0 \$1 \$3" -l -a hi hello blah
+ bash -c 'ls $0 $1 $3' -l -a hi hello blah
ls: cannot access hello: No such file or directory
You should be using $* instead of $@ to pass the command line as a string. "$@" expands to multiple quoted arguments and "$*" combines multiple arguments into a single argument.
#!/bin/bash
bash -c "$*"
The problem is that with your $@ it executes:
bash -c echo hi
But with $* it executes:
bash -c 'echo hi'
When you use:
cmd="$@"
and then run bash -c "$cmd", it does the same thing for you.
Read: What is the difference between “$@” and “$*” in Bash?
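A quick way to visualize the difference between the two expansions (printf repeats its format for each argument it receives):
set -- echo hi
printf '<%s>' "$@"; echo    # <echo><hi>  -> two separate words
printf '<%s>' "$*"; echo    # <echo hi>   -> one combined word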

linux terminal execute echo function

When I read the book Linux Shell Scripting Cookbook, it said that when you want to print !, you shouldn't put it in double quotes, or else you should add \ before ! to escape it.
e.g.
$ echo "Hello,world!"
bash: !: event not found
$ echo "Hello,world\!"
Hello,world!
but in my situation (Ubuntu 14.04), I get this instead:
$ echo "Hello,world!"
Hello,world!
$ echo "Hello,world\!"
Hello,world\!
So why can't I get the same answer on my machine?
Why was the escape symbol \ printed as a normal character?
When you're typing interactively to the shell, ! has special meaning, it's the history expansion character. To prevent this special meaning, you need to put it in single quotes or escape it.
echo 'Hello, world!'
echo "Hello, world\!"
The reason it's not happening on Ubuntu may be because it's running a newer version of bash, which is apparently more selective about when history expansion occurs. It seems to require ! to be followed by alphanumerics, not punctuation.
You don't need to do this in scripts, because history is not normally enabled there. It's just for interactive shells.
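If the history expansion itself is what bothers you, you can also turn it off for the interactive session:
set +H    # disable bash history expansion; ! becomes an ordinary character
echo "Hello, world!"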
Create a shell script called file.sh:
#!/bin/bash
# file.sh: a sample shell script to demonstrate the concept of Bash shell functions
# define usage function
usage(){
echo "Usage: $0 filename"
exit 1
}
# define is_file_exits function
# $f -> store argument passed to the script
is_file_exits(){
local f="$1"
[[ -f "$f" ]] && return 0 || return 1
}
# invoke usage
# call usage() function if filename not supplied
[[ $# -eq 0 ]] && usage
# Invoke is_file_exits
if ( is_file_exits "$1" )
then
echo "File found"
else
echo "File not found"
fi
Run it as follows:
chmod +x file.sh
./file.sh
./file.sh /etc/resolv.conf

Shell scripting shell inside shell

I would like to connect to different shells (csh, ksh etc.,) and execute command inside each switched shell.
Following is the sample program which reflects my intention:
#!/bin/bash
echo $SHELL
csh
echo $SHELL
exit
ksh
echo $SHELL
exit
Since I am not well versed with shell scripting, I need a pointer on how to achieve this. Any help would be much appreciated.
If you want to execute only a single command, you can use the -c option:
csh -c 'echo $SHELL'
ksh -c 'echo $SHELL'
If you want to execute several commands, or even a whole script, in a child shell, you can use the here-document feature of bash and use the -s option (read commands from stdin) on the child shells:
#!/bin/bash
echo "this is bash"
csh -s <<- EOF
echo "here go the commands for csh"
echo "and another one..."
EOF
echo "this is bash again"
ksh -s <<- EOF
echo "and now, we're in ksh"
EOF
Note that you can't easily check which shell you are in by running echo $SHELL, because the parent shell expands this variable to the text /bin/bash before the child shell ever sees it. If you want to be sure that the child shell works, you should check whether shell-specific syntax works or not.
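One hedged way to do such a check is to print a version variable that only the respective shell defines (BASH_VERSION is standard in bash; KSH_VERSION exists in some ksh variants such as mksh, so treat its availability as an assumption):
bash -c 'echo "bash version: ${BASH_VERSION:-not set}"'
ksh -c 'echo "ksh version: ${KSH_VERSION:-not set}"'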
It is possible to use the command line options provided by each shell to run a snippet of code.
For example, for bash use the -c option:
bash -c "$code"
bash -c 'echo hello'
zsh and fish also use the -c option.
Other shells will state the options they use in their man pages.
You need to use the -c command line option if you want to pass commands on bash startup:
#!/bin/bash
# We are in bash already ...
echo $SHELL
csh -c 'echo $SHELL'
ksh -c 'echo $SHELL'
You can pass arbitrary complex scripts to a shell, using the -c option, as in
sh -c 'echo This is the Bourne shell.'
You will save yourself a lot of headaches related to quotes and variable expansion if you wrap the call in a function reading the script on stdin, as in:
execute_with_ksh()
{
local script
script=$(cat)
ksh -c "${script}"
}
prepare_complicated_script()
{
# Write shell script on stdout,
# for instance by cat-ting a here-document.
cat <<'EOF'
echo ${SHELL}
EOF
}
prepare_complicated_script | execute_with_ksh
The advantage of this method is that it is easy to insert a tee in the pipe, or to break the pipe, in order to control the script being passed to the shell.
If you want to execute the script on a remote host through ssh, you should consider encoding your script in base64 to transmit it safely to the remote shell.
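For instance, a minimal sketch of that base64 transport (host is a hypothetical remote machine; assumes base64 is available on both ends):
prepare_complicated_script | base64 | ssh host 'base64 -d | ksh'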
