I am fairly new to bash scripting, working mostly in python till now.
What exactly does if ! "$a" function "$b" $$ ; then mean in bash?
Where,
"a" is a variable,
"b" is a variable,
"function' is a custom function
Any help would be appreciated.
Thanks!
The content of variable a is taken to be a command (either an executable file or a bash function). This command is invoked and gets three parameters: the word function, the content of the variable b, and the PID of the process executing this if-statement.
After the command has terminated, its exit code is checked: If it is not zero, the then part of the compound is executed. This interpretation of the exit code is by virtue of the exclamation mark (!) in front. In general:
If you write a command as
! cmd
and the cmd itself would yield a non-zero exit code, the overall exit code of this statement (i.e. what goes into $?) is 0. If the cmd itself would yield the exit code zero, the overall exit code is 1.
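Here is a minimal illustration of that rule, using only the true and false builtins:
! false          # false exits with 1; the ! flips it, so $? becomes 0
echo $?          # prints 0
! true           # true exits with 0; the ! flips it, so $? becomes 1
echo $?          # prints 1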
I am confused about what exit code is returned when executing a variable assignment, plainly and with command substitution:
a=$(false); echo $?
It outputs 1, which made me think that a variable assignment doesn't sweep away or produce a new exit code on top of the last one. But when I tried this:
false; a=""; echo $?
It outputs 0; obviously this is what a="" returns, and it overrides the 1 returned by false.
I want to know why this happens. Is there any particularity in variable assignment that differs from other normal commands? Or is it just because a=$(false) is considered to be a single command and only the command substitution part matters?
-- UPDATE --
Thanks everyone. From the answers and comments I got the point: "When you assign a variable using command substitution, the exit status is the status of the command." (by #Barmar). This explanation is excellently clear and easy to understand, but plain wording isn't precise enough for programmers; I want to see this point stated in an authoritative reference such as TLDP or the GNU man page. Please help me find it, thanks again!
Executing a command as $(command) allows the output of the command to replace the command itself.
When you say:
a=$(false) # false fails; the output of false is stored in the variable a
the output produced by the command false is stored in the variable a. Moreover, the exit code is the same as the one produced by the command. help false says:
false: false
Return an unsuccessful result.
Exit Status:
Always fails.
On the other hand, saying:
$ false # Exit code: 1
$ a="" # Exit code: 0
$ echo $? # Prints 0
causes the exit code for the assignment to a to be returned which is 0.
EDIT:
Quoting from the manual:
If one of the expansions contained a command substitution, the exit
status of the command is the exit status of the last command
substitution performed.
Quoting from BASHFAQ/002:
How can I store the return value and/or output of a command in a
variable?
...
output=$(command)
status=$?
The assignment to output has no effect on command's exit status, which
is still in $?.
Note that this isn't the case when combined with local, as in local variable="$(command)". That form will exit successfully even if command failed.
Take this Bash script for example:
#!/bin/bash
function funWithLocalAndAssignmentTogether() {
local output="$(echo "Doing some stuff.";exit 1)"
local exitCode=$?
echo "output: $output"
echo "exitCode: $exitCode"
}
function funWithLocalAndAssignmentSeparate() {
    local output
    output="$(echo "Doing some stuff.";exit 1)"
    local exitCode=$?
    echo "output: $output"
    echo "exitCode: $exitCode"
}
funWithLocalAndAssignmentTogether
funWithLocalAndAssignmentSeparate
Here is the output of this:
nick.parry@nparry-laptop1:~$ ./tmp.sh
output: Doing some stuff.
exitCode: 0
output: Doing some stuff.
exitCode: 1
This is because local is actually a builtin command, and a command like local variable="$(command)" calls local after substituting the output of command. So you get the exit status from local.
I came across the same problem yesterday (Aug 29 2018).
In addition to local mentioned in Nick P.'s answer and #sevko's comment in the accepted answer, declare in global scope also has the same behavior.
Here's my Bash code:
#!/bin/bash
func1()
{
    ls file_not_existed
    local local_ret1=$?
    echo "local_ret1=$local_ret1"
    local local_var2=$(ls file_not_existed)
    local local_ret2=$?
    echo "local_ret2=$local_ret2"
    local local_var3
    local_var3=$(ls file_not_existed)
    local local_ret3=$?
    echo "local_ret3=$local_ret3"
}
func1
ls file_not_existed
global_ret1=$?
echo "global_ret1=$global_ret1"
declare global_var2=$(ls file_not_existed)
global_ret2=$?
echo "global_ret2=$global_ret2"
declare global_var3
global_var3=$(ls file_not_existed)
global_ret3=$?
echo "global_ret3=$global_ret3"
The output:
$ ./declare_local_command_substitution.sh 2>/dev/null
local_ret1=2
local_ret2=0
local_ret3=2
global_ret1=2
global_ret2=0
global_ret3=2
Note the values of local_ret2 and global_ret2 in the output above. The exit codes are overwritten by local and declare.
My Bash version:
$ echo $BASH_VERSION
4.4.19(1)-release
(not an answer to original question but too long for comment)
Note that export A=$(false); echo $? outputs 0! Apparently the rules quoted in devnull's answer no longer apply. To add a bit of context to that quote (emphasis mine):
3.7.1 Simple Command Expansion
...
If there is a command name left after expansion, execution proceeds as described below. Otherwise, the command exits. If one of the expansions contained a command substitution, the exit status of the command is the exit status of the last command substitution performed. If there were no command substitutions, the command exits with a status of zero.
3.7.2 Command Search and Execution [ — this is the "below" case]
IIUC the manual describes var=foo as a special case of the var=foo command... syntax (pretty confusing!). The "exit status of the last command substitution" rule only applies to the no-command case.
While it's tempting to think of export var=foo as a "modified assignment syntax", it isn't — export is a builtin command (that just happens to take assignment-like args).
=> If you want to export a var AND capture command substitution status, do it in 2 stages:
A=$(false)
# ... check $?
export A
This way also works in set -e mode: the script exits immediately if the command substitution returns non-zero.
As others have said, the exit code of the command substitution is the exit code of the substituted command, so
FOO=$(false)
echo $?
---
1
However, unexpectedly, adding export to the beginning of that produces a different result:
export FOO=$(false)
echo $?
---
0
This is because, while the substituted command false fails, the export command succeeds, and that is the exit code returned by the statement.
When I use the exit command in a shell script, the script terminates the terminal (the prompt). Is there any way to terminate the script and still stay in the terminal?
My script run.sh is expected to be executed by sourcing it directly, or by sourcing it from another script.
EDIT:
To be more specific, there are two scripts run2.sh as
...
. run.sh
echo "place A"
...
and run.sh as
...
exit
...
when I run it by . run2.sh and it hits the exit line in run.sh, I want it to stop and return to the terminal and stay there. But using exit, the whole terminal gets closed.
PS: I have tried using return, but the echo line still gets executed....
The "problem" really is that you're sourcing and not executing the script. When you source a file, its contents will be executed in the current shell, instead of spawning a subshell. So everything, including exit, will affect the current shell.
Instead of using exit, you will want to use return.
Yes; you can use return instead of exit. Its main purpose is to return from a shell function, but if you use it within a source-d script, it returns from that script.
As §4.1 "Bourne Shell Builtins" of the Bash Reference Manual puts it:
return [n]
Cause a shell function to exit with the return value n.
If n is not supplied, the return value is the exit status of the
last command executed in the function.
This may also be used to terminate execution of a script being executed
with the . (or source) builtin, returning either n or
the exit status of the last command executed within the script as the exit
status of the script.
Any command associated with the RETURN trap is executed
before execution resumes after the function or script.
The return status is non-zero if return is used outside a function
and not during the execution of a script by . or source.
You can add an extra exit command after the return statement/command so that it works both when executing the script from the command line and when sourcing it from the terminal.
Example exit code in the script:
if [ $# -lt 2 ]; then
    echo "Needs at least two arguments"
    return 1 2>/dev/null
    exit 1
fi
The line with the exit command will not be reached when you source the script, because the return command runs first.
When you execute the script, the return command gives an error. So we suppress the error message by redirecting it to /dev/null.
Instead of running the script using . run2.sh, you can run it using sh run2.sh or bash run2.sh
A new sub-shell will be started to run the script; it will be closed at the end of the script, leaving the other shell open.
Actually, I think you might be confused by how you should run a script.
If you use sh to run a script, say, sh ./run2.sh, even if the embedded script ends with exit, your terminal window will still remain.
However, if you use . or source, your terminal window will exit/close as well when the sourced script ends.
for more detail, please refer to What is the difference between using sh and source?
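As a small sketch of that difference, assume a hypothetical quit.sh that contains nothing but exit 7:
bash quit.sh
echo "back in the calling shell, quit.sh exited with $?"   # prints 7; the prompt survives
# . quit.sh    # by contrast, sourcing it would terminate the current shell itself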
This is just like putting a run function inside your script run2.sh.
You use exit inside run while sourcing your run2.sh file in the bash tty.
If you give the run function the power to exit your script, and give run2.sh
the power to exit the terminal,
then of course the run function has the power to exit your terminal.
#! /bin/sh
# use . run2.sh
run()
{
echo "this is run"
#return 0
exit 0
}
echo "this is begin"
run
echo "this is end"
Anyway, I agree with Kaz that it's a design problem.
I had the same problem, and from the answers above and from what I understood, what ultimately worked for me was:
Have a shebang line that invokes the intended script, for example,
#!/bin/bash uses bash to execute the script
I have scripts with both kinds of shebangs. Because of this, using sh or . was not reliable, as it led to mis-execution (like when the script bails out having run incompletely).
The answer, therefore, was:
Make sure the script has a shebang, so that there is no doubt about its intended handler.
chmod the .sh file so that it can be executed. (chmod +x file.sh)
Invoke it directly without any sh or .
(./myscript.sh)
Hope this helps someone with a similar question or problem.
To write a script that is safe to be run either as a shell script or sourced as an rc file, the script can check and compare $0 and $BASH_SOURCE to determine whether exit can be safely used.
Here is a short code snippet for that
[ "X$(basename $0)" = "X$(basename $BASH_SOURCE)" ] && \
echo "***** executing $name_src as a shell script *****" || \
echo "..... sourcing $name_src ....."
I think that this happens because you are running it in source mode,
with the dot:
. myscript.sh
You should run that in a subshell:
/full/path/to/script/myscript.sh
'source' http://ss64.com/bash/source.html
It's correct that sourced vs. executed scripts use return vs. exit to keep the same session open, as others have noted.
Here's a related tip, if you ever want a script that should keep the session open, regardless of whether or not it's sourced.
The following example can be run directly like foo.sh or sourced like . foo.sh/source foo.sh. Either way it will keep the session open after "exiting". The "$@" arguments are passed so that the function has access to the outer script's arguments.
#!/bin/sh
foo(){
read -p "Would you like to XYZ? (Y/N): " response;
[ $response != 'y' ] && return 1;
echo "XYZ complete (args $#).";
return 0;
echo "This line will never execute.";
}
foo "$#";
Terminal result:
$ foo.sh
$ Would you like to XYZ? (Y/N): n
$ . foo.sh
$ Would you like to XYZ? (Y/N): n
$ |
(terminal window stays open and accepts additional input)
This can be useful for quickly testing script changes in a single terminal while keeping a bunch of scrap code underneath the main exit/return while you work. It could also make code more portable in a sense (if you have tons of scripts that may or may not be called in different ways), though it's much less clunky to just use return and exit where appropriate.
Also make sure to return with the expected return value. Otherwise, if you use exit, when the exit is encountered it will exit your base shell, since source does not create another process (instance).
An improved version of Tzunghsing's answer, with clearer results and error redirection for silent usage:
#!/usr/bin/env bash
echo -e "Testing..."
if [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ]; then
echo "***** You are Executing $0 in a sub-shell."
exit 0
else
echo "..... You are Sourcing $BASH_SOURCE in this terminal shell."
return 0
fi
echo "This should never be seen!"
Or if you want to put this into a silent function:
function sExit() {
    # Safe Exit from script, not closing shell.
    [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ] && exit 0 || return 0
}
...
# ..it has to be called with an error check, like this:
sExit && return 0
echo "This should never be seen!"
Please note that:
if you have enabled errexit in your script (set -e) and you return N with N != 0, your entire script will exit instantly. To see all your shell settings, use set -o.
when used in a function, the first return 0 exits the function, and the second return 0 exits the script.
if your terminal emulator doesn't have -hold you can sanitize a sourced script and hold the terminal with:
#!/bin/sh
sed "s/exit/return/g" script >/tmp/script
. /tmp/script
read
otherwise you can use $TERM -hold -e script
If a command succeeds, its return value will be 0. We can check its return value afterwards.
Is there a “goto” statement in bash?
Here is some dirty workaround using trap which jumps only backwards.
#!/bin/bash
set -eu
trap 'echo "E: failed with exitcode $?" 1>&2' ERR
my_function () {
    if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
        echo "this is run"
        return 0
    else
        echo "fatal: not a git repository (or any of the parent directories): .git"
        # "goto" is not a real command; invoking it fails, which fires the ERR trap
        # and, because of set -eu, aborts the script (the shell's error message is discarded)
        goto trap 2> /dev/null
    fi
}
my_function
echo "Command succeeded" # If my_function failed this line is not printed
Related:
https://stackoverflow.com/a/19091823/2402577
How to use $? and test to check function?
I couldn't find a solution, so for those who want to leave a nested script without leaving the terminal window:
# this is just a script which changes to a directory if the path matches a regex
wpr(){
    leave=false
    pwd=$(pwd)
    if [[ "$pwd" =~ ddev.*web ]]; then
        # echo "you're in a WordPress installation"
        wpDir=$(echo "$pwd" | grep -o '.*\/web')
        cd "$wpDir"
        return
    fi
    echo 'please be in wordpress directory'
    # to leave from outside the scope
    leave=true
    return
}
wpt(){
    # nested function which sets the $leave variable
    wpr
    # interrupts the script if $leave is true
    if $leave; then
        return;
    fi
    echo 'here is the rest of the script, executes if leave is not true'
}
I have no idea whether this is useful for you or not, but in zsh, you can exit a script, but only to the prompt if there is one, by using parameter expansion on a variable that does not exist, as follows.
${missing_variable_ejector:?}
Though this does create an error message in your script, you can prevent it with something like the following.
{ ${missing_variable_ejector:?} } 2>/dev/null
1) exit 0 will come out of the script if it is successful.
2) exit 1 will come out of the script if it is a failure.
You can try either of the above two based on your requirement.
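For example, a small sketch of the convention (file names are hypothetical):
if grep -q "needle" haystack.txt; then
    exit 0    # found it: report success to the caller
else
    exit 1    # not found (or grep failed): report failure
fi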
I know the return code will be contained in $? after a command was executed, but what does $? mean after a script was executed? The return code of the last command in that script?
Can I tell if a script has been executed from head to tail and not interrupted by some unexpected system halt or something?
If I have a script like the one below executed,
Command A;
if [ $? -eq 0 ]
then
    echo "OK" >> log
else
    echo "failed" >> log
fi
and the system halted while A was running, what will I find in that log file? "OK", "failed" or nothing?
Yes, or the value passed after exit, e.g. exit 31.
Not without taking measures within the other script to make it explicit.
$? reads the exit status of the last command executed. After a function returns, $? gives the exit status of the last command executed in the function. This is Bash's way of giving functions a "return value".
Example
#!/bin/bash
echo hello
echo $? # Exit status 0 returned because command executed successfully.
lskdf # Unrecognized command.
echo $? # Non-zero exit status returned because command failed to execute.
echo
exit 113 # Will return 113 to shell.
# To verify this, type "echo $?" after script terminates.
# By convention, an 'exit 0' indicates success,
#+ while a non-zero exit value means an error or anomalous condition
The return code of the script is indeed the return code of the last command executed. Some commands allow you to finish execution at any point and arbitrarily set the return code: exit for scripts and return for functions. In both cases, if you omit the argument, they use the return code of the previous command.
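A minimal sketch of the bare exit case (the script name is hypothetical):
#!/bin/bash
# laststatus.sh (hypothetical)
false      # last command's exit status is 1
exit       # no argument, so the script exits with that same 1

# $ bash laststatus.sh; echo $?
# 1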
This question already has answers here:
Aborting a shell script if any command returns a non-zero value
I am a noob in shell-scripting. I want to print a message and exit my script if a command fails. I've tried:
my_command && (echo 'my_command failed'; exit)
but it does not work. It keeps executing the instructions following this line in the script. I'm using Ubuntu and bash.
Try:
my_command || { echo 'my_command failed' ; exit 1; }
Four changes:
Change && to ||
Use { } in place of ( )
Introduce ; after exit and
spaces after { and before }
Since you want to print the message and exit only when the command fails (exits with a non-zero value), you need ||, not &&.
cmd1 && cmd2
will run cmd2 when cmd1 succeeds (exit value 0). Whereas
cmd1 || cmd2
will run cmd2 when cmd1 fails (exit value non-zero).
Using ( ) makes the commands inside them run in a sub-shell, and calling exit from there causes you to exit the sub-shell and not your original shell; hence execution continues in your original shell.
To overcome this use { }
The last two changes are required by bash.
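A safe way to see the sub-shell behaviour interactively (don't try the { } variant this way, since that exit really would close your shell):
( echo 'inside the sub-shell'; exit 1 )        # exit only leaves the ( ) sub-shell
echo "still here, the sub-shell returned $?"   # prints: still here, the sub-shell returned 1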
The other answers have covered the direct question well, but you may also be interested in using set -e. With that, any command that fails (outside of specific contexts like if tests) will cause the script to abort. For certain scripts, it's very useful.
If you want that behavior for all commands in your script, just add
set -e
set -o pipefail
at the beginning of the script. This pair of options tell the bash interpreter to exit whenever a command returns with a non-zero exit code. (For more details about why pipefail is needed, see http://petereisentraut.blogspot.com/2010/11/pipefail.html)
This does not allow you to print an exit message, though.
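A minimal sketch of those two options in action (hypothetical script):
#!/bin/bash
set -e
set -o pipefail
false | true            # with pipefail this pipeline fails, so set -e aborts the script here
echo "never reached"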
Note also, each command's exit status is stored in the shell variable $?, which you can check immediately after running the command. A non-zero status indicates failure:
my_command
if [ $? -eq 0 ]
then
echo "it worked"
else
echo "it failed"
fi
I've hacked up the following idiom:
echo "Generating from IDL..."
idlj -fclient -td java/src echo.idl
if [ $? -ne 0 ]; then { echo "Failed, aborting." ; exit 1; } fi
echo "Compiling classes..."
javac *java
if [ $? -ne 0 ]; then { echo "Failed, aborting." ; exit 1; } fi
echo "Done."
Precede each command with an informative echo, and follow each command with that same
if [ $? -ne 0 ];... line. (Of course, you can edit that error message if you want to.)
Provided my_command is canonically designed, i.e. returns 0 when it succeeds, then && is exactly the opposite of what you want. You want ||.
Also note that ( does not seem right to me in bash, but I cannot try it from where I am. Tell me.
my_command || {
    echo 'my_command failed' ;
    exit 1;
}
If you want to preserve the exit error status and have a readable file with one command per line, you can also use:
my_command1 || exit
my_command2 || exit
This, however, will not print any additional error message. But in some cases, the error will be printed by the failed command anyway.
The trap shell builtin allows catching signals, and other useful conditions, including failed command execution (i.e., a non-zero return status). So if you don't want to explicitly test the return status of every single command, you can say trap "your shell code" ERR and the shell code will be executed any time a command returns a non-zero status. For example:
trap "echo script failed; exit 1" ERR
Note that as with other cases of catching failed commands, pipelines need special treatment; the above won't catch false | true.
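One way to handle that, assuming set -o pipefail is acceptable in your script, is to combine the trap with pipefail so that a failing pipeline also counts as a failed command:
set -o pipefail
trap "echo script failed; exit 1" ERR
false | true            # now the pipeline's status is non-zero and the trap fires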
Using exit directly may be tricky as the script may be sourced from other places (e.g. from the terminal). I prefer instead using a subshell with set -e (plus errors should go to stderr, not stdout):
set -e
ERRCODE=0
my_command || ERRCODE=$?
test $ERRCODE == 0 ||
(>&2 echo "My command failed ($ERRCODE)"; exit $ERRCODE)