Any way to capture Windows-specific exit codes in git-bash? - cygwin

I have a scenario where a Windows application that I execute in CI exits with -1073740791 (e.g. on a stack overflow). On cmd this can obviously be caught via %errorlevel%, but in bash this exit code maps to 127 in $?.
Understandably, bash on Windows should not break scripting, so anything outside the 0-255 range cannot be reported as-is.
The question is: is there any special variable or mechanism directly in git-bash itself to catch the actual value? In this case the executable is a test suite (think of Google Benchmark or Google Test), and exit code 127 ("command not found") is not helpful at all.

I had the same issue, and I do not think there is any way to do that within bash itself.
I decided to wrap my executable in a PowerShell call, append the exit code to stdout, and extract it afterwards like this:
OUTPUT=$(powershell ".\"$EXECUTABLE\" $PARAMETERS 2>&1 | % ToString; echo \$LASTEXITCODE")
# Save exit code to separate variable and remove it from $OUTPUT.
EXITCODE=$(echo "$OUTPUT" | tail -n1 | tr -d '\r\n')
OUTPUT=$(sed '$d' <<< "$OUTPUT")
Some notes:
This solution does combine all stdout and stderr output into the variable $OUTPUT.
Redirecting stderr in PowerShell wraps the output in an error class. Calling ToString() on these returns them as normal text, which is what the | % ToString is for. Compare this SO answer.
Invoking Powershell can be surprisingly slow due to Windows Defender. This can possibly be fixed by adding powershell to the list of Windows Defender exclusions.
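For reuse, the same approach can be wrapped in a small helper. This is only a sketch based on the snippet above; run_win, WIN_EXITCODE, and the argument handling are illustrative names, not anything standard:
run_win() {
    # Run a Windows executable via PowerShell and recover the full 32-bit
    # exit code in WIN_EXITCODE; quoting may need adjusting for your paths.
    local exe="$1"; shift
    local output
    output=$(powershell ".\"$exe\" $* 2>&1 | % ToString; echo \$LASTEXITCODE")
    WIN_EXITCODE=$(echo "$output" | tail -n1 | tr -d '\r\n')
    sed '$d' <<< "$output"   # print everything except the trailing exit-code line
}
run_win tests.exe --run-all
echo "Windows exit code: $WIN_EXITCODE"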

Related

How to capture a Linux command's log into a file?

Let's say I have the below command.
STATE_NOT_C_COUNT=`mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" --eval "db.$MONGO_DATABASE.count({\"state\" : {"'"$ne"'":\"C\"},\"physicalTableName\":\"table_name\"},{nolock:true})" | tail -1`
When I run the above command, I get an exception like:
exception: connect failed
I want to capture this exception into a file via the error function below.
error(){
    if [ "$?" -ne "0" ]; then
        echo "$1" 2>&1 error_log
        exit 1
    fi
}
I'm using the above function like this:
error $STATE_NOT_C_COUNT
But I'm not able to capture the exception into the file through the function.
What you are doing is terrible. Let the program that fails print its error messages to stderr, and ensure that stderr is pointed to the right thing. However, the major issue you are having is just lack of quotes. Try:
error "$STATE_NOT_C_COUNT"
The issue is that the command error $STATE_NOT_C_COUNT is subject to field splitting, so if $STATE_NOT_C_COUNT contains any whitespace it is split into several arguments, and you are only writing the first one. Another alternative is to write echo "$*" in the function, but this will squash whitespace.
However, it cannot be stressed enough that this is a terrible approach, completely against the Unix philosophy. The program should write its errors to stderr, and you should let them go there; just make sure stderr is pointed where you want it. The only plausible reason to capture stderr is if you want to write it to multiple locations, so you might pipe it to tee, a syslogger, or some other message bus, but even that is questionable.
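For instance, to point stderr at a log file (some_command here is a placeholder for the mongo invocation above):
# Append the command's own error messages to a log file:
some_command 2>> error_log
# Or send stderr both to the screen and to the log:
some_command 2> >(tee -a error_log >&2)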

Accessing the value returned by a shell script in the parent script

I am trying to access a string returned by a shell script which was called from a parent shell script. Something like this:
ex.sh:
echo "Hemanth"
ex2.sh:
sh ex.sh
if [ $? == "Hemanth" ]; then
    echo "Hurray!!"
else
    echo "Sorry Bro!"
fi
Is there a way to do this? Any help would be appreciated.
Thank you.
Use command substitution syntax in ex2.sh:
valueFromOtherScript="$(sh ex.sh)"
printf "%s\n" "$valueFromOtherScript"
echo by default outputs a newline character after the string passed; if you don't want it in the above variable, use printf instead:
printf "Hemanth"
in the first script. It is also worth adding that $? holds only the exit code of the last executed command; its value is interpreted as 0 for a successful run and non-zero for a failure. It will NEVER hold a string value as you tried to use.
A Bash script does not really "return" a string. What you want to do is capture the output of a script (or external program, or function, they all act the same in this respect).
Command substitution is a common way to capture output.
captured_output="$(sh ex.sh)"
This initializes variable captured_output with the string containing all that is output by ex.sh. Well, not exactly all. Any script (or command, or function) actually has two output channels, usually called "standard out" (file descriptor number 1) and "standard error" (file descriptor number 2). When executing from a terminal, both typically end up on the screen. But they can be handled separately if needed.
For instance, if you want to capture really all output (including error messages), you would add a "redirection" after your command that tells the shell you want standard error to go to the same place as standard out.
captured_output="$(sh ex.sh 2>&1)"
If you omit that redirection, and the script outputs something on standard error, then this will still show on screen, and will not be captured.
Another way to capture output is sending it to a file, and then read back that file to a variable, like this :
sh ex.sh > output_file.log
captured_output="$(<output_file.log)"
A script (or external program, or function) does have something called a return code, which is an integer. By convention, a value of 0 means "success", and any other value indicates abnormal execution (but not necessarily failure) : the meaning of that return code is not standardized, it is ultimately specific to each script, program or function.
This return code is available in the $? special shell variable immediately after the execution terminates.
sh ex.sh
return_code=$?
echo "Return code is $return_code"

Generate specific, non-zero return code?

I am working on a piece of Python code that calls various Linux tools (like ssh) for automation purposes. Right now I am looking into return-code handling.
Thus: I am looking for a simple way to run some command that gives me a specific non-zero return code; something like
echo "this is a testcommand, that should return with rc=5"
for example. But of course, the above comes back with rc=0.
I know that I can call false, but this will always return with rc=1. I am looking for something that gives me an rc that I can control.
Edit: the first answers suggest exit; the problem with that is that exit is a shell builtin, not a program, so when I try to run it from within a Python script I get "No such file or directory: exit".
So I am actually looking for some binary tool that gives me that. Obviously one can write a simple script to get it; I am just asking whether there is something similar to false that already ships with any Linux/Unix.
Run exit in a subshell.
$ (exit 5) ; echo $?
5
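If the command must be an external process rather than a shell construct (for instance, when launched from Python as in the question), the shell binary itself can carry the exit code:
$ bash -c 'exit 5'; echo $?
5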
I have this function defined in .bashrc:
return_errorcode ()
{
    return $1
}
So, I can directly use something like
$ return_errorcode 5
$ echo $?
5
Compared to (exit 5); echo $? option, this mechanism saves you a subshell.
This is not exactly what you are asking for, but a custom rc can be achieved through the exit command:
echo "this is a test command, that should return with " ;exit 5
echo $?
5

Internal Variable PIPESTATUS

I am new to Linux and bash scripting, and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit statuses of the individual commands in a pipe.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put the code in a bash script it shows only one exit status. My default shell on the command line is bash.
Can somebody please help me understand why this behaviour changes, and what I should do to make it work in the script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz" 2>/dev/null
echo -e "find ${PIPESTATUS[0]}\npax ${PIPESTATUS[1]}\ncompress ${PIPESTATUS[2]}" > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code. Namely the script has the addition of eval. If you were to wrap your command line test in eval you would see that it fails in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop using a string for a command (see Bash FAQ 050 for more on why doing this is a bad idea); a sketch of this follows below.
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
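As a sketch of the first choice, here is the script rewritten without eval; run as a plain pipeline, PIPESTATUS holds all three statuses:
#!/bin/bash
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
find_fun | /bin/pax -dwx ustar | /bin/gzip -c > "$backfile.tar.gz" 2>/dev/null
status=("${PIPESTATUS[@]}") # copy right away; any later command clobbers PIPESTATUS
echo -e "find ${status[0]}\npax ${status[1]}\ncompress ${status[2]}"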
Instead of ${PIPESTATUS[0]}, use ${PIPESTATUS[@]}.
As with any array in bash, PIPESTATUS[0] contains only the first command's exit status. If you want all of them, you have to use ${PIPESTATUS[@]}, which expands to the entire contents of the array.
I'm not sure why it worked for you when you tried it on the command line; I tested it and did not get the same result as you.

Trace of executed programs called by a Bash script

A script is misbehaving. I need to know who calls that script, and who calls the calling script, and so on, only by modifying the misbehaving script.
This is similar to a stack-trace, but I am not interested in a call stack of function calls within a single bash script.
Instead, I need the chain of executed programs/scripts that is initiated by my script.
A simple script I wrote some days ago...
#!/bin/bash
# FILE      : sctrace.sh
# LICENSE   : GPL v2.0 (only)
# PURPOSE   : print the recursive callers' list for a script
#             (sort of a process backtrace)
# USAGE     : [in a script] source sctrace.sh
#
# TESTED ON :
# - Linux, x86 32-bit, Bash 3.2.39(1)-release
# REFERENCES:
# [1]: http://tldp.org/LDP/abs/html/internalvariables.html#PROCCID
# [2]: http://linux.die.net/man/5/proc
# [3]: http://linux.about.com/library/cmd/blcmdl1_tac.htm
TRACE=""
CP=$$ # PID of the script itself [1]
while true # safe because "all starts with init..."
do
    CMDLINE=$(tr '\0' ' ' < /proc/$CP/cmdline) # cmdline args are NUL-separated [2]
    PP=$(grep PPid /proc/$CP/status | awk '{ print $2; }') # [2]
    TRACE="$TRACE [$CP]:$CMDLINE\n"
    if [ "$CP" == "1" ]; then # we reached 'init' [PID 1] => backtrace ends
        break
    fi
    CP=$PP
done
echo "Backtrace of '$0'"
echo -en "$TRACE" | tac | grep -n ":" # using tac to "print in reverse" [3]
... and a simple test.
I hope you like it.
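A minimal usage sketch (the file names are illustrative; the "simple test" mentioned above is not reproduced here):
# b.sh -- the script whose callers we want to see
source sctrace.sh

# a.sh -- some caller
bash b.sh

# Running a.sh prints the backtrace of b.sh: b.sh itself, a.sh, the invoking
# shell, and so on up to init [PID 1].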
You can use Bash Debugger http://bashdb.sourceforge.net/
Or, as mentioned in the previous comments, the caller bash built-in. See: http://wiki.bash-hackers.org/commands/builtin/caller
i=0; while caller $i ;do ((i++)) ;done
Or as a bash function:
dump_stack(){
    local i=0
    local line_no
    local function_name
    local file_name
    while caller $i; do ((i++)); done | while read line_no function_name file_name; do
        echo -e "\t$file_name:$line_no\t$function_name"
    done >&2
}
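A quick usage sketch; the file name and line numbers in the sample output are illustrative:
f() { dump_stack; }
g() { f; }
g
# Prints something like:
#   ./demo.sh:1    f
#   ./demo.sh:2    g
#   ./demo.sh:3    main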
Another way to do it is to change PS4 and enable xtrace:
PS4='+$(date "+%F %T") ${FUNCNAME[0]}() $BASH_SOURCE:${BASH_LINENO[0]}+ '
set -o xtrace # Comment this line to disable tracing.
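With this in place, tracing can also be confined to a region of the script by switching xtrace back off (suspicious_function is a placeholder):
set -o xtrace       # start tracing; each line is prefixed per PS4 above
suspicious_function
set +o xtrace       # stop tracing again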
~$ help caller
caller: caller [EXPR]
Returns the context of the current subroutine call.
Without EXPR, returns "$line $filename". With EXPR,
returns "$line $subroutine $filename"; this extra information
can be used to provide a stack trace.
The value of EXPR indicates how many call frames to go back before the
current one; the top frame is frame 0.
Since you say you can edit the script itself, simply put a:
ps -ef >/tmp/bash_stack_trace.$$
in it, where the problem is occurring.
This will create a number of files in your tmp directory that show the entire process list at the time it happened.
You can then work out which process called which other process by examining this output. This can either be done manually, or automated with something like awk, since the output is regular - you just use those PID and PPID columns to work out the relationships between all the processes you're interested in.
You'll need to keep an eye on the files, since you'll get one per process so they may have to be managed. Since this is something that should only be done during debugging, most of the time that line will be commented out (preceded by #), so the files won't be created.
To clean them up, you can simply do:
rm /tmp/bash_stack_trace.*
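As a rough sketch of the automated approach, an awk pass that walks the PPID chain upward from the script's PID (12345 stands in for the actual PID; only the first word of each CMD column is kept):
awk -v pid=12345 '
    { ppid[$2] = $3; cmd[$2] = $8 }   # index PPID and command by PID
    END {
        while (pid != "") {
            print pid, cmd[pid]
            if (pid == 1) break       # stop once we reach init
            pid = ppid[pid]
        }
    }
' /tmp/bash_stack_trace.12345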
UPDATE: The code below should work, but I now have a newer answer with a newer code version that allows a message to be inserted in the stack trace. IIRC I just couldn't find this answer to update it as well at the time. I have since decided that code is better kept in git, so the latest version of the above should be in this gist.
Original, code-corrected answer below:
There was another answer about this somewhere, but here is a function for getting a stack trace in the sense used, for example, in the Java programming language. You call the function and it puts the stack trace into the variable $STACK. It shows the code points that led to get_stack being called. This is mostly useful for complicated executions where a single shell sources multiple script snippets with nesting.
function get_stack () {
    STACK=""
    local i
    local stack_size=${#FUNCNAME[@]}
    # to avoid noise we start with 1 to skip the get_stack caller
    for (( i=1; i<$stack_size; i++ )); do
        local func="${FUNCNAME[$i]}"
        [ -z "$func" ] && func=MAIN
        local linen="${BASH_LINENO[$(( i - 1 ))]}"
        local src="${BASH_SOURCE[$i]}"
        [ -z "$src" ] && src=non_file_source
        STACK+=$'\n'"   $func $src $linen"
    done
}
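Typical use right after the point of interest (a sketch):
get_stack
echo "Stack trace:$STACK" >&2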
Adding pstree -p -u `whoami` >> output in your script will probably get you the information you need.
The simplest script which returns a stack trace with all callers:
i=0; while caller $i ;do ((i++)) ;done
You could try something like
strace -f -e execve script.sh
Here -f follows child processes, and -e execve restricts the trace to execve calls, i.e. to the programs being executed.
