Accessing the value returned by a shell script in the parent script - linux

I am trying to access a string returned by a shell script which was called from a parent shell script. Something like this:
ex.sh:
echo "Hemanth"
ex2.sh:
sh ex.sh
if [ $? == "Hemanth" ]; then
echo "Hurray!!"
else
echo "Sorry Bro!"
fi
Is there a way to do this? Any help would be appreciated.
Thank you.

Use command substitution syntax in ex2.sh:
valueFromOtherScript="$(sh ex.sh)"
printf "%s\n" "$valueFromOtherScript"
echo outputs a newline character after the string by default; if you don't want it in the above variable, use printf instead:
printf "Hemanth"
in the first script. Also worth adding: $? holds only the exit code of the last executed command, where 0 means a successful run and any non-zero value means failure. It will NEVER hold a string value the way you tried to use it.
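For completeness, here is a minimal sketch of ex2.sh rewritten with command substitution and a string comparison (assuming ex.sh simply prints the name):
#!/bin/sh
# Capture the output of ex.sh rather than testing its exit status.
name="$(sh ex.sh)"
if [ "$name" = "Hemanth" ]; then
    echo "Hurray!!"
else
    echo "Sorry Bro!"
fi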

A Bash script does not really "return" a string. What you want to do is capture the output of a script (or external program, or function, they all act the same in this respect).
Command substitution is a common way to capture output.
captured_output="$(sh ex.sh)"
This initializes variable captured_output with the string containing all that is output by ex.sh. Well, not exactly all. Any script (or command, or function) actually has two output channels, usually called "standard out" (file descriptor number 1) and "standard error" (file descriptor number 2). When executing from a terminal, both typically end up on the screen. But they can be handled separately if needed.
For instance, if you want to capture really all output (including error messages), you would add a "redirection" after your command that tells the shell you want standard error to go to the same place as standard out.
captured_output="$(sh ex.sh 2>&1)"
If you omit that redirection, and the script outputs something on standard error, then this will still show on screen, and will not be captured.
Another way to capture output is to send it to a file and then read that file back into a variable, like this:
sh ex.sh > output_file.log
captured_output="$(<output_file.log)"
A script (or external program, or function) does have something called a return code, which is an integer. By convention, a value of 0 means "success", and any other value indicates abnormal execution (but not necessarily failure): the meaning of that return code is not standardized; it is ultimately specific to each script, program or function.
This return code is available in the $? special shell variable immediately after the execution terminates.
sh ex.sh
return_code=$?
echo "Return code is $return_code"

Related

How to get the complete calling command of a BASH script from inside the script (not just the arguments)

I have a BASH script that has a long set of arguments and two ways of calling it:
my_script --option1 value --option2 value ... etc
or
my_script val1 val2 val3 ..... valn
This script in turn compiles and runs a large FORTRAN code suite that eventually produces a netcdf file as output. I already have all the metadata in the netcdf output global attributes, but it would be really nice to also include the full run command one used to create that experiment. Thus another user who receives the netcdf file could simply reenter the run command to rerun the experiment, without having to piece together all the options.
So that is a long way of saying, in my BASH script, how do I get the last command entered from the parent shell and put it in a variable? i.e. the script is asking "how was I called?"
I could try to piece it together from the option list, but the very long option list and two interface methods would make this long and arduous, and I am sure there is a simple way.
I found this helpful page:
BASH: echoing the last command run
but this only seems to work to get the last command executed within the script itself. The asker also refers to use of history, but the answers seem to imply that the history will only contain the command after the programme has completed.
Many thanks if any of you have any idea.
You can try the following:
myInvocation="$(printf %q "$BASH_SOURCE")$((($#)) && printf ' %q' "$#")"
$BASH_SOURCE refers to the running script (as invoked), and $# is the array of arguments; (($#)) && ensures that the following printf command is only executed if at least 1 argument was passed; printf %q is explained below.
While this won't always be a verbatim copy of your command line, it'll be equivalent - the string you get is reusable as a shell command.
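As a quick self-contained illustration (a sketch; the file name show_invocation.sh is hypothetical):
#!/bin/bash
# show_invocation.sh - rebuild an equivalent, reusable command line from
# $BASH_SOURCE and the (already expanded) arguments. The space after "$("
# avoids any ambiguity with arithmetic expansion $(( )).
myInvocation="$(printf %q "$BASH_SOURCE")$( (($#)) && printf ' %q' "$@" )"
echo "equivalent invocation: $myInvocation"
Running bash show_invocation.sh "a b" c prints: equivalent invocation: show_invocation.sh a\ b c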
chepner points out in a comment that this approach will only capture what the original arguments were ultimately expanded to:
For instance, if the original command was my_script $USER "$(date +%s)", $myInvocation will not reflect these arguments as-is, but will rather contain what the shell expanded them to; e.g., my_script jdoe 1460644812
chepner also points out that getting the actual raw command line as received by the parent process will be (next to) impossible. Do tell me if you know of a way.
However, if you're prepared to ask users to do extra work when invoking your script or you can get them to invoke your script through an alias you define - which is obviously tricky - there is a solution; see bottom.
Note that use of printf %q is crucial to preserving the boundaries between arguments - if your original arguments had embedded spaces, something like $0 $* would result in a different command.
printf %q also protects against other shell metacharacters (e.g., |) embedded in arguments.
printf %q quotes the given argument for reuse as a single argument in a shell command, applying the necessary quoting; e.g.:
$ printf %q 'a |b'
a\ \|b
a\ \|b is equivalent to single-quoted string 'a |b' from the shell's perspective, but this example shows how the resulting representation is not necessarily the same as the input representation.
Incidentally, ksh and zsh also support printf %q, and ksh actually outputs 'a |b' in this case.
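For contrast, here is a small sketch (with hypothetical arguments) showing how the naive $0 $* loses argument boundaries while the %q-quoted form keeps them:
#!/bin/bash
# Call as: ./demo.sh "a b" c
naive="$0 $*"                                              # boundaries lost
quoted="$(printf %q "$0")$( (($#)) && printf ' %q' "$@" )" # boundaries kept
echo "naive : $naive"      # ./demo.sh a b c
echo "quoted: $quoted"     # ./demo.sh a\ b c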
If you're prepared to modify how your script is invoked, you can pass $BASH_COMMAND as an extra argument: $BASH_COMMAND contains the raw[1]
command line of the currently executing command.
For simplicity of processing inside the script, pass it as the first argument (note that the double quotes are required to preserve the value as a single argument):
my_script "$BASH_COMMAND" --option1 value --option2
Inside your script:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
myInvocation=$1 # Save the command line in a variable...
shift # ... and remove it from "$@".
# Now process "$@", as you normally would.
Unfortunately, there are only two options when it comes to ensuring that your script is invoked this way, and they're both suboptimal:
The end user has to invoke the script this way - which is obviously tricky and fragile (you could, however, check in your script whether the first argument contains the script name and error out if not).
Alternatively, provide an alias that wraps the passing of $BASH_COMMAND as follows:
alias my_script='/path/to/my_script "$BASH_COMMAND"'
The tricky part is that this alias must be defined in all end users' shell initialization files to ensure that it's available.
Also, inside your script, you'd have to do extra work to re-transform the alias-expanded version of the command line into its aliased form:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
# Here we also re-transform the alias-expanded command line to
# its original aliased form, by replacing everything up to and including
# "$BASH_COMMMAND" with the alias name.
myInvocation=$(sed 's/^.* "\$BASH_COMMAND"/my_script/' <<<"$1")
shift # Remove the first argument from "$@".
# Now process "$@", as you normally would.
Sadly, wrapping the invocation via a script or function is not an option, because $BASH_COMMAND truly only ever reports the current command's command line, which in the case of a script or function wrapper would be the line inside that wrapper.
[1] The only thing that gets expanded are aliases, so if you invoked your script via an alias, you'll still see the underlying script in $BASH_COMMAND, but that's generally desirable, given that aliases are user-specific.
All other arguments and even input/output redirections, including process substitutions <(...), are reflected as-is.
"$0" contains the script's name, "$#" contains the parameters.
Do you mean something like echo $0 $*?

Bash and Dash inconsistently check command substitution error codes with `errexit`

I seem to have encountered a very, very strange inconsistency in the way both dash and bash check for error conditions with the errexit option.
Using both dash and bash without the set -e/set -o errexit option, the following program:
foo()
{
echo pre
bar=$(fail)
echo post
}
foo
will print the following (with slightly different error strings for dash):
pre
./foo.sh: line 4: fail: command not found
post
With the errexit option, it will print the following:
pre
./foo.sh: line 4: fail: command not found
Surprisingly, however, if bar is local, the program will always echo both pre and post. More specifically, using both dash and bash with or without the errexit option, the following program:
foo()
{
echo pre
local bar=$(fail)
echo post
}
foo
will print the following:
pre
./foo.sh: line 4: fail: command not found
post
In other words, it seems that the return value of a command substitution that is assigned to a local variable is not checked by errexit, but it is if the variable is global.
I would have been inclined to think this was simply a corner case bug, if it didn't happen with both shells. Since dash is specifically designed to be POSIX conformant I wonder if this behavior is actually specified by the POSIX standard, though I have a hard time imagining how that would make sense.
dash(1) has this to say about errexit:
If not interactive, exit immediately if any untested command fails. The exit status of a command is considered to be explicitly tested if the command is used to control an if, elif, while, or until; or if the command is the left hand operand of an “&&” or “||” operator.
bash(1) is somewhat more verbose, but I have a hard time making sense of it:
Exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command (see SHELL GRAMMAR above), exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !. If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR, if set, is executed before the shell exits. This option applies to the shell environment and each subshell environment separately (see COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before executing all the commands in the subshell.
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
TL;DR The exit status of local "hides" the exit status of any command substitutions appearing in one of its arguments.
The exit status of a variable assignment is poorly documented (or at least, I couldn't find any specifics in a quick skim of the various man pages and the POSIX spec). As far as I can tell, the exit status is taken as the exit status of the last command substitution that occurs in the value of the assignment, or 0 if there are no command substitutions. Non-final command substitutions appear to be included in the list of "tested" situations, as an assignment like
foo=$(false)$(true)
does not exit with errexit set.
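Without errexit, the assignment's exit status can be inspected directly (a small sketch):
foo=$(false)$(true)
echo $?    # 0 - the last substitution, $(true), succeeded
foo=$(true)$(false)
echo $?    # 1 - the last substitution, $(false), failed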
local, however, is a command itself whose exit status is normally 0, independent of any command substitutions that occur in its arguments. That is, while
foo=$(false)
has an exit status of 1,
local foo=$(false)
will have an exit status of 0, with any command substitutions in an argument seemingly considered "tested" for the purposes of errexit.
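Again without errexit, the same difference shows up in $? (a small sketch; local is only valid inside a function):
#!/bin/bash
check() {
    foo=$(false)
    echo "plain assignment: $?"    # 1 - status of the command substitution
    local bar=$(false)
    echo "local assignment: $?"    # 0 - status of the local builtin itself
}
check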
Try this:
#!/bin/bash
set -e
foo()
{
echo pre
local bar
bar=$(fail)
echo post
}
foo
exit
!! OR !!
#!/bin/bash
foo()
{
set -e
echo pre
local bar
bar=$(fail)
echo post
}
foo
exit
OUTPUT:
$ ./errexit_function
pre
./errexit_function: line 8: fail: command not found
$ echo $?
127
As far as I can tell this is a workaround for a bug in bash, but try this:
#!/bin/bash
set -e
foo()
{
echo true || return_value=$?
echo the command returned a value of ${return_value:-0}
$(fail) || return_value=$?
echo the command returned a value of ${return_value:-0}
echo post
}
foo
exit

Unix: What does cat by itself do?

I saw the line data=$(cat) in a bash script (just declaring an empty variable) and am mystified as to what that could possibly do.
I read the man pages, but it doesn't have an example or explanation of this. Does this capture stdin or something? Any documentation on this?
EDIT: Specifically how the heck does doing data=$(cat) allow for it to run this hook script?
#!/bin/bash
# Runs all executable pre-commit-* hooks and exits after,
# if any of them was not successful.
#
# Based on
# http://osdir.com/ml/git/2009-01/msg00308.html
data=$(cat)
exitcodes=()
hookname=`basename $0`
# Run each hook, passing through STDIN and storing the exit code.
# We don't want to bail at the first failure, as the user might
# then bypass the hooks without knowing about additional issues.
for hook in $GIT_DIR/hooks/$hookname-*; do
test -x "$hook" || continue
echo "$data" | "$hook"
exitcodes+=($?)
done
https://github.com/henrik/dotfiles/blob/master/git_template/hooks/pre-commit
cat will catenate its input to its output.
In the context of the variable capture you posted, the effect is to assign the statement's (or containing script's) standard input to the variable.
The command substitution $(command) will return the command's output; the assignment will assign the substituted string to the variable; and in the absence of a file name argument, cat will read and print standard input.
The Git hook script you found this in captures the commit data from standard input so that it can be repeatedly piped to each hook script separately. You only get one copy of standard input, so if you need it multiple times, you need to capture it somehow. (I would use a temporary file, and quote all file name variables properly; but keeping the data in a variable is certainly okay, especially if you only expect fairly small amounts of input.)
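Since a shell variable cannot hold NUL bytes, the temporary-file approach mentioned above could look roughly like this for binary or very large input (a sketch, not the original hook):
#!/bin/bash
# Save standard input to a temporary file once, then feed it to each hook.
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
cat > "$tmpfile"
hookname=$(basename "$0")
exitcodes=()
for hook in "$GIT_DIR/hooks/$hookname"-*; do
    test -x "$hook" || continue
    "$hook" < "$tmpfile"
    exitcodes+=($?)
done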
Doing:
t#t:~# temp=$(cat)
hello how
are you?
t#t:~# echo $temp
hello how are you?
(A single Control-D on a line by itself following "are you?" terminates the input.)
As manual says
cat - concatenate files and print on the standard output
Also
cat - Copy standard input to standard output.
here, cat will concatenate your STDIN into a single string and assign it to variable temp.
Say your bash script script.sh is:
#!/bin/bash
data=$(cat)
Then, the following commands will store the string STR in the variable data:
echo STR | bash script.sh
bash script.sh < <(echo STR)
bash script.sh <<< STR

What effect does this line have in a shell script?

I've seen this line in many shell scripts but I don't understand the effect it has. Could someone explain please?
tempfile=`tempfile 2>/dev/null` || tempfile=/tmp/test$$
It creates a temporary file and puts the path to it in the $tempfile variable.
`tempfile 2>/dev/null`
runs the tempfile command (man tempfile) and discards any error messages. If it succeeds, it returns the name of the newly created temporary file. If it fails, it returns non-zero, in which case the next part of the command runs.
For a command this || that, that only runs if this fails, i.e. returns non-zero.
$$ is a variable in bash that expands to the process ID of the shell. (Compare the results of ps and echo $$.) So tempfile=/tmp/test$$ will expand to something like tempfile=/tmp/test2278.
Presumably, later in the script, something writes to $tempfile.
The shell has separate namespaces for commands and variables (making it a Lisp-2, LOL), which is exploited in your script line. tempfile is a command that is run to compute the value of the tempfile variable, which is otherwise unrelated to it. tempfile produces a pathname suitable for use as the name of a temporary file. 2> /dev/null redirects any error message from tempfile into /dev/null (2 is the standard error file descriptor). The command1 || command2 logic means "execute command2 if command1 fails". If we can't get a temporary name from tempfile, then we use /tmp/test$$, where $$ is a special built-in shell parameter which expands to the shell's own process ID.
tempfile creates a temporary file with a file name similar to /tmp/tmp.XXXXXX
2>/dev/null redirects the command's error output (file descriptor 2) to the /dev/null device, which just throws it away. This redirection simply ignores any errors from creating the temporary file.
|| chains two commands together. If the first fails, the second is executed. If the first succeeds nothing else happens.
$$ is the pid of the current shell, which means that if the tempfile command fails the tempfile variable will still contain a string in the form /tmp/test6052 if the process' pid is 6052.
The first part of the line, up to the ||, runs the program tempfile and captures standard output in the variable tempfile, throwing errors away. There's an exit status, too: either zero for success or non-zero for failure (either failure to execute the tempfile command or failure reported by the tempfile command when it is run).
The || means "if the LHS (left-hand side) failed then do the RHS (right-hand side)".
So, if the tempfile command had a problem, the RHS will be used, assigning a simpler temporary file name to tempfile (the variable).
Overall, it is equivalent to:
if tempfile=`tempfile 2>/dev/null`
then : OK
else tempfile=/tmp/test$$
fi
Only it is on one line, not four.
The idea is, I'm sure, to get something in $tempfile whether or not the tempfile command exists on the machine.
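Since the tempfile command is not available on every system, a similar fallback can be written with the more widely available mktemp (a sketch, not from the original answers):
tempfile=$(mktemp 2>/dev/null) || tempfile=/tmp/test$$
echo "Using temporary file: $tempfile"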
Did you look at man tempfile?
That line is trying to use tempfile(1) to generate a temporary filename, storing it in $tempfile. If that fails (the "||", "or" part), it falls back to an explicit filename of /tmp/test$$, where $$ is the PID of the executing script.

How to get return (status) value of an external command in Vim

I want to get the exit value (returned by $? in a shell; usually 0 or 1 for success or failure) of an external shell command in Vim. Note that I want to get its standard output too, so that I can use both the output and the exit value in a Vim conditional expression. Is this possible?
There is the v:shell_error variable, which has exactly the same value as $? in shell scripts. It works at least after :!, :read !, and after calling system().
It's like this:
var=$(echo $?)
will give you the value of $? in the variable var.
The standard output is of course obtained as well, because the return value only becomes available after the command has produced its output ($? is set, to 0 or non-zero, only after the command has executed).
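On the shell side, the command substitution is not actually needed; a plain assignment captures the status just as well (a small sketch):
ls /nonexistent 2>/dev/null
var=$?          # same value as var=$(echo $?), without the extra subshell
echo "exit status was $var"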
