Catching a gcc/glibc fatal error - io

I have a binary that occasionally throws a fatal error. I have no access to the source code, but it could be something like the following C++ code, compiled as test.a:
#include <ctime>

int main() {
    int *a = new int;
    delete a;
    time_t t = time(0);
    if (t % 2 == 0)
        delete a;
    return 0;
}
In approximately 50% of runs it leads to the error. Importantly, this type of error cannot be caught by redirecting stderr:
$ ./test.a 2>/tmp/Error
*** Error in `./test.a': double free or corruption (fasttop): 0x00000000007de010 ***
Aborted (core dumped)
When I asked why this happens, I received the answer that "gcc (and probably glibc) open /dev/tty directly to output such fatal errors bypassing IO redirection".
However, it would be very nice to have a way to catch this error from a script. As it occurs randomly and not all the time, it would be useful to retry, e.g.:
while :; do
    ./test.a 2>/tmp/Error
    ERROR="$(</tmp/Error)"
    if [ -z "$ERROR" ]; then
        echo "Smooth run!"
        break
    else
        echo "Error occurred. Retry!"
    fi
done
This, of course, does not work, and in 50% of cases outputs this:
*** Error in `./test.a': double free or corruption (fasttop): 0x0000000000df3010 ***
./test.sh: line 10: 16788 Aborted (core dumped) ./test.a 2> /tmp/Error
Smooth run!
Therefore, I wonder if there is a way to catch a fatal error and then retry running the buggy binary (I am primarily looking for a Linux solution).
EDIT:
If a binary is called directly from the script, the desired behaviour can be achieved by checking the exit status:
while :; do
    ./test.a
    if (( $? == 0 )); then
        echo "Smooth run!"
        break
    else
        echo "Error occurred. Retry!"
    fi
done
Of course, for this to work, the executable being called must itself report its status properly. In my real case, for example, the error-causing binary is called from another binary. Let us consider test2.a as the result of compiling the following code:
#include <cstdlib>
#include <iostream>

using namespace std;

int main() {
    std::system("./test.a");
    cout << "Execution of test.a finished" << endl;
    return 0;
}
In this case, ./test2.a always exits with status 0, as a crash during the system() call does not interrupt the execution of the calling binary:
*** Error in `./test.a': double free or corruption (fasttop): 0x0000000002431010 ***
Aborted (core dumped)
Execution of test.a finished
Smooth run!
I still do not know how to circumvent this, given that I cannot modify the binaries.
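One glibc-specific workaround may be worth trying (hedged: it relies on the LIBC_FATAL_STDERR_ environment variable, a long-standing but informally documented glibc feature). Setting it makes glibc print these fatal messages to stderr instead of /dev/tty, and since child processes inherit the environment, it reaches test.a even when test.a is started from inside test2.a:

while :; do
    # Ask glibc to send "double free"-style fatal errors to stderr, which
    # is redirected to /tmp/Error for this process and all its children.
    LIBC_FATAL_STDERR_=1 ./test2.a 2>/tmp/Error
    if [ ! -s /tmp/Error ]; then    # -s: file exists and is non-empty
        echo "Smooth run!"
        break
    else
        echo "Error occurred. Retry!"
    fi
done

The "Aborted (core dumped)" notice for the inner crash is printed by the shell that system() spawns, so it should land in /tmp/Error as well.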

Related

Need some advanced explanations about BOF

So I was following a tutorial about buffer overflow with the following code:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    volatile int modified;
    char buffer[64];

    modified = 0;
    gets(buffer);

    if (modified != 0) {
        printf("you have changed the 'modified' variable\n");
    } else {
        printf("Try again?\n");
    }
}
I then compile it with gcc, and beforehand I run sudo sysctl -w kernel.randomize_va_space=0 to disable address space randomization and allow the stack smashing (buffer overflow) exploit:
gcc protostar.c -g -z execstack -fno-stack-protector -o protostar
-g is to allow debugging in gdb ('list main')
-z execstack makes the stack executable, and -fno-stack-protector disables the stack-smashing protection (canary)
and then execute it:
python -c 'print "A"*76' | ./protostar
Try again?
python -c 'print "A"*77' | ./protostar
you have changed the 'modified' variable
So I do not understand why the overflow into modified occurs at 77 characters when I expected it at 65, a difference of 12 bytes. Does anyone have a clear explanation?
Also it remains this way from 77 to 87:
python -c 'print "A"*87' | ./protostar
you have changed the 'modified' variable
And from 88 it adds a segfault:
python -c 'print "A"*88' | ./protostar
you have changed the 'modified' variable
Segmentation fault (core dumped)
To fully understand what's happening, it's first important to make note of how your program is laying out memory.
From your comment, for this particular run, memory for buffer starts at 0x7fffffffdf10 and modified starts at 0x7fffffffdf5c (disabling randomize_va_space may keep this consistent across runs, though I'm not quite sure).
So you have something like this:
0x7fffffffdf10 0x7fffffffdf50 0x7fffffffdf5c
↓ ↓ ↓
(64 byte buffer)..........(some 12 bytes).....(modified)....
Essentially, you have the 64 character buffer, then when that ends, there's 12 bytes that are used for some other stack variable (likely 4 bytes argc and 8 bytes for argv), and then modified comes after, precisely starting 64+12 = 76 bytes after the buffer starts.
Therefore, when you write between 65 and 76 characters into the 64 byte buffer, it goes past and starts writing into those 12 bytes that are in-between the buffer and modified. When you start writing the 77th character, it starts overwriting what's in modified which causes you to see the "you have changed the 'modified' variable" message.
You also asked "why does it work if I go up to 87 and then at 88 there's a segfault?" The answer is that because this is undefined behavior, as soon as you start writing into memory the kernel recognizes as invalid, it will immediately kill your process, because you are trying to read/write memory you don't have access to.
Note that you should almost never use gets in practice, and this is a big reason why: you don't know how many bytes it will read, so there's always a chance of overflowing the buffer. Also note that the behavior you're seeing is not the same behavior I see on my machine when I run it. This is normal, and it's because the behavior is undefined: there are no guarantees about what will happen. On my machine, modified actually comes before buffer in memory, so I never see the modified variable get overwritten. I think this is a good learning example of why undefined behavior like this is so unpredictable.
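If you'd like to find the exact offset on your own build empirically rather than reading addresses out of gdb, a small shell probe works (a sketch; it keeps the Python 2 print syntax used in the question and assumes the ./protostar binary built above):

# Try growing input lengths; report the first one that flips `modified`.
for n in $(seq 60 90); do
    out=$(python -c "print 'A'*$n" | ./protostar 2>/dev/null)
    if [ "$out" != "Try again?" ]; then
        echo "modified is first overwritten with $n input bytes"
        break
    fi
done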

How can I detect if any error was given by Helgrind (Valgrind)

I have a bash script that needs to detect whether any error was given by Helgrind. I have tried using the following:
thread_race=$(valgrind --error-exitcode=10 --tool=helgrind ./$2.out $3; echo $?)
$2 is a .cpp file and $3 is an argument. However, this doesn't work because it doesn't actually change the exit code.
I would expect thread_race to be 10 if any errors are detected, and something else otherwise.
Is there any way to know if Helgrind has detected any errors at all and then save the answer? I'm using Ubuntu Linux with clang++.
EDIT:
In both cases, whether Helgrind detects an error or not, thread_race is 134.
I have a similar test without Helgrind that works perfectly fine:
leak=$(valgrind --leak-check=full --error-exitcode=55 ./$2.out $3 >/dev/null; echo $?)
This changes the exit code to 55 when an error is found. I have been told that --leak-check=full is needed for proper exit codes, but using it with Helgrind produces another error:
thread_race=$(valgrind --leak-check=full --error-exitcode=10 --tool=helgrind ./$2.out $3; echo $?)
outputs
valgrind: Unknown option: --leak-check=full
and thread_race = 1
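One detail worth checking first (an inference from Valgrind's documented behavior, not something your output proves): 134 is 128 + 6, i.e. SIGABRT, which suggests the program itself is dying from a signal. When the client program dies from a signal, Valgrind re-raises that signal, so --error-exitcode never gets a chance to apply. Reading the status directly, instead of folding it into a command substitution together with the program's stdout, also makes this easier to see:

# Run Helgrind, discard the program's stdout, keep Valgrind's report in a log.
valgrind --tool=helgrind --error-exitcode=10 ./"$2".out "$3" \
    >/dev/null 2>helgrind.log
thread_race=$?
if [ "$thread_race" -eq 10 ]; then
    echo "Helgrind reported errors; see helgrind.log" >&2
elif [ "$thread_race" -ge 128 ]; then
    echo "program died from signal $((thread_race - 128))" >&2
fi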

Cannot locate core file with abrt-hook-ccpp installed

I've been led to understand that if abrt-ccpp.service is installed on a Linux PC, it supersedes/overwrites (I've read both, not sure which is true) the file /proc/sys/kernel/core_pattern, which otherwise specifies the location and filename pattern of core files.
Question:
When I execute systemctl, why does abrt-ccpp.service report exited under the SUB column? I don't understand the combination of active and exited: is the service "alive"/active/running or not?
> systemctl
UNIT LOAD ACTIVE SUB
abrt-ccpp.service loaded active exited ...
Question:
Where are core files generated? I wrote this program to generate a SIGSEGV:
#include <iostream>

int main(int argc, char* argv[], char* envz[])
{
    int* pInt = NULL;
    std::cout << *pInt << std::endl;
    return 0;
}
Compilation and execution as follows:
> g++ main.cpp
> ./a.out
Segmentation fault (core dumped)
But I cannot locate where the core file is generated.
What I have tried:
Looked in the same directory as my main.cpp. Core file is not there.
Looked in /var/tmp/abrt/ because of the following comment in /etc/abrt/abrt.conf. Core file is not there.
...
# Specify where you want to store coredumps and all files which are needed for
# reporting. (default:/var/tmp/abrt)
#
# Changing dump location could cause problems with SELinux. See man_abrt_selinux(8).
#
#DumpLocation = /var/tmp/abrt
...
Looked in /var/spool/abrt/ because of a comment at this link. Core file is not there.
Edited /etc/abrt/abrt.conf and uncommented and set DumpLocation = ~/foo which is an existing directory. Followed this by restarting abrt-hook-ccpp (sudo service abrt-ccpp restart) and rerunning a.out. Core file was not generated in ~/foo/
Verified that ulimit -c reports unlimited.
I am out of ideas of what else to try and where else to look.
In case helpful, this is the content of my /proc/sys/kernel/core_pattern:
> cat /proc/sys/kernel/core_pattern
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
Can someone help explain how the abrt-hook-ccpp service works and where it generates core files? Thank you.
I'd like to credit https://unix.stackexchange.com/users/119298/meuh who answered this at https://unix.stackexchange.com/questions/343240/cannot-locate-core-file-with-abrt-hook-cpp-installed.
The answer was to add this line to the file /etc/abrt/abrt-action-save-package-data.conf:
ProcessUnpackaged = yes
The comment from @daniel-kamil-kozar was also a viable workaround.
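For completeness, a sketch of applying that fix and re-testing (the sed expression is my assumption about the shape of the default line; adjust it to your config file):

# Tell abrt to also process crashes of binaries not installed from a package.
sudo sed -i 's/^#\?ProcessUnpackaged *=.*/ProcessUnpackaged = yes/' \
    /etc/abrt/abrt-action-save-package-data.conf
sudo service abrt-ccpp restart
./a.out                  # trigger the SIGSEGV again
sudo ls /var/spool/abrt  # or the DumpLocation configured in abrt.conf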

Raise error in a Bash script

I want to raise an error in a Bash script with message "Test cases Failed !!!". How to do this in Bash?
For example:
if [ condition ]; then
    raise error "Test cases failed !!!"
fi
This depends on where you want the error message to be stored.
You can do the following:
echo "Error!" > logfile.log
exit 125
Or the following:
echo "Error!" 1>&2
exit 64
When you raise an exception you stop the program's execution.
You can also use something like exit xxx where xxx is the error code you may want to return to the operating system (from 0 to 255). Here 125 and 64 are just random codes you can exit with. When you need to indicate to the OS that the program stopped abnormally (eg. an error occurred), you need to pass a non-zero exit code to exit.
As @chepner pointed out, you can do exit 1, which will mean an unspecified error.
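As a side note that is easy to verify in a shell, the exit code is a single byte, so values outside 0-255 wrap around modulo 256:

( exit 300 ); echo $?    # prints 44, i.e. 300 % 256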
Basic error handling
If your test case runner returns a non-zero code for failed tests, you can simply write:
test_handler test_case_x; test_result=$?
if ((test_result != 0)); then
    printf '%s\n' "Test case x failed" >&2   # write error message to stderr
    exit 1                                   # or exit $test_result
fi
Or even shorter:
if ! test_handler test_case_x; then
    printf '%s\n' "Test case x failed" >&2
    exit 1
fi
Or the shortest:
test_handler test_case_x || { printf '%s\n' "Test case x failed" >&2; exit 1; }
To exit with test_handler's exit code:
test_handler test_case_x || { ec=$?; printf '%s\n' "Test case x failed" >&2; exit $ec; }
Advanced error handling
If you want to take a more comprehensive approach, you can have an error handler:
exit_if_error() {
    local exit_code=$1
    shift
    [[ $exit_code ]] &&            # do nothing if no error code passed
        ((exit_code != 0)) && {    # do nothing if error code is 0
            printf 'ERROR: %s\n' "$@" >&2   # we can use better logging here
            exit "$exit_code"      # we could also check to make sure
                                   # error code is numeric when passed
        }
}
then invoke it after running your test case:
run_test_case test_case_x
exit_if_error $? "Test case x failed"
or
run_test_case test_case_x || exit_if_error $? "Test case x failed"
The advantages of having an error handler like exit_if_error are:
we can standardize all the error handling logic such as logging, printing a stack trace, notification, doing cleanup etc., in one place
by making the error handler get the error code as an argument, we can spare the caller from the clutter of if blocks that test exit codes for errors
if we have a signal handler (using trap), we can invoke the error handler from there (see the sketch below)
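A minimal sketch of that last point, reusing the exit_if_error handler defined above:

# Route every unhandled failing command through the same error handler.
trap 'exit_if_error $? "command failed: $BASH_COMMAND"' ERR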
Error handling and logging library
Here is a complete implementation of error handling and logging:
https://github.com/codeforester/base/blob/master/lib/stdlib.sh
Related posts
Error handling in Bash
The 'caller' builtin command on Bash Hackers Wiki
Are there any standard exit status codes in Linux?
BashFAQ/105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
Equivalent of __FILE__, __LINE__ in Bash
Is there a TRY CATCH command in Bash
To add a stack trace to the error handler, you may want to look at this post: Trace of executed programs called by a Bash script
Ignoring specific errors in a shell script
Catching error codes in a shell pipe
How do I manage log verbosity inside a shell script?
How to log function name and line number in Bash?
Is double square brackets [[ ]] preferable over single square brackets [ ] in Bash?
There are a couple more ways to approach this problem, assuming your requirement is to run a shell script/function containing a few shell commands, check whether the script ran successfully, and throw errors in case of failures.
Shell commands generally rely on the exit codes they return to let the shell know whether they succeeded or failed due to some unexpected event.
So what you want to do falls into one of these two categories:
exit on error
exit and clean-up on error
Depending on which one you want, there are shell options available. For the first case, the shell provides the set -e option, and for the second you can set a trap on EXIT.
Should I use exit in my script/function?
Using exit generally enhances readability. In certain routines, once you know the answer, you want to exit to the calling routine immediately. If the routine is defined in such a way that it doesn't require any further cleanup once it detects an error, not exiting immediately means that you have to write more code.
So, in cases where you need clean-up actions to make the termination of the script clean, it is preferable not to use exit.
Should I use set -e for error on exit?
No!
set -e was an attempt to add "automatic error detection" to the shell. Its goal was to cause the shell to abort any time an error occurred, but it comes with a lot of potential pitfalls. For example:
Commands that are part of an if test are immune. In the example below, if you expect the script to abort on the test of the non-existent directory, it won't; execution carries on and survived is printed:
set -e
f() { test -d nosuchdir && echo no dir; }
f
echo survived
Commands in a pipeline other than the last one are immune. In the example below, only the exit code of the most recently executed (rightmost) command (cat) is considered, and it was successful. This can be avoided with the set -o pipefail option, but it's still a caveat.
set -e
somecommand that fails | cat -
echo survived
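For reference, this is what the pipefail fix mentioned above looks like; survived is no longer printed, because the pipeline as a whole now reports the failure:

set -e
set -o pipefail
somecommand that fails | cat -    # the pipeline status is now non-zero
echo survived                     # never reached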
Recommended for use - trap on exit
The verdict: if you want to be able to handle an error instead of blindly exiting, don't use set -e; use a trap on the ERR pseudo-signal instead.
The ERR trap does not run code when the shell itself exits with a non-zero error code, but when any command run by that shell that is not part of a condition (like in if cmd, or cmd ||) exits with a non-zero exit status.
The general practice is to define a trap handler that provides additional debug information about which line, and which command, caused the exit. Remember that the exit code of the last command, the one that raised the ERR signal, is still available at this point.
cleanup() {
    exitcode=$?
    printf 'error condition hit\n' 1>&2
    printf 'exit code returned: %s\n' "$exitcode"
    printf 'the command executing at the time of the error was: %s\n' "$BASH_COMMAND"
    printf 'command present on line: %d\n' "${BASH_LINENO[0]}"
    # Some more clean-up code can be added here before exiting
    exit $exitcode
}
and we simply install this handler, as below, at the top of the failing script:
trap cleanup ERR
Putting this together in a simple script that contained false on line 15, the information you would get is:
error condition hit
exit code returned: 1
the command executing at the time of the error was: false
command present on line: 15
trap can also run cleanup code regardless of errors, on shell completion (e.g. when your shell script exits), by trapping the EXIT pseudo-signal. You can also trap multiple signals at the same time. The list of supported signals to trap on can be found in the trap.1p Linux manual page.
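A typical EXIT-trap cleanup looks like this (a sketch; the temp file is just an illustration):

tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT    # runs however the script ends, except SIGKILL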
Another thing to be aware of is that none of the provided methods works when sub-shells are involved, in which case you might need to add your own error handling.
In a sub-shell, set -e doesn't work as you might expect. The false is restricted to the sub-shell and never gets propagated to the parent shell. To handle the error here, add your own logic: (false) || false. A sketch follows the two examples below.
set -e
(false)
echo survived
The same happens with trap also. The logic below wouldn't work for the reasons mentioned above.
trap 'echo error' ERR
(false)
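Putting the (false) || false idea from above into the set -e example, the failure now propagates to the parent shell and the script aborts as expected:

set -e
(false) || false    # the parent's own `false` fails, so set -e kicks in
echo survived       # never printed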
Here's a simple trap that prints the last argument of whatever failed to STDERR, reports the line it failed on, and exits the script with the line number as the exit code. Note these are not always great ideas, but this demonstrates some creative application you could build on.
trap 'echo >&2 "$_ at $LINENO"; exit $LINENO;' ERR
I put that in a script with a loop to test it. I just check for a hit on some random numbers; you might use actual tests. If I need to bail, I call false (which triggers the trap) with the message I want to throw.
For more elaborate functionality, have the trap call a processing function. You can always use a case statement on your arg ($_) if you need to do more cleanup, etc. Assign to a var for a little syntactic sugar:
trap 'echo >&2 "$_ at $LINENO"; exit $LINENO;' ERR
throw=false
raise=false
while :
do  x=$(( $RANDOM % 10 ))
    case "$x" in
    0) $throw "DIVISION BY ZERO" ;;
    3) $raise "MAGIC NUMBER" ;;
    *) echo got $x ;;
    esac
done
Sample output:
# bash tst
got 2
got 8
DIVISION BY ZERO at 6
# echo $?
6
Obviously, you could
runTest1 "Test1 fails" # message not used if it succeeds
Lots of room for design improvement.
The drawbacks include the fact that false isn't pretty (thus the sugar), and other things tripping the trap might look a little stupid. Still, I like this method.
You have two options: redirect the output of the script to a file, or introduce a log file in the script.
Redirecting output to a file:
Here you assume that the script outputs all necessary info, including warning and error messages. You can then redirect the output to a file of your choice.
./runTests &> output.log
The above command redirects both the standard output and the error output to your log file.
Using this approach you don't have to introduce a log file in the script, and so the logic is a tiny bit easier.
Introduce a log file to the script:
In your script add a log file either by hard coding it:
logFile='./path/to/log/file.log'
or passing it by a parameter:
logFile="${1}" # This assumes the first parameter to the script is the log file
It's a good idea to add the timestamp at the time of execution to the log file at the top of the script:
date '+%Y%m%d-%H%M%S' >> "${logFile}"
You can then redirect your error messages to the log file
if [ condition ]; then
    echo "Test cases failed!!" >> "${logFile}"
fi
This will append the error to the log file and continue execution. If you want to stop execution when critical errors occur, you can exit the script:
if [ condition ]; then
    echo "Test cases failed!!" >> "${logFile}"
    # Clean up if needed
    exit 1
fi
Note that exit 1 indicates that the program stopped execution due to an unspecified error. You can customize this if you like.
Using this approach you can customize your logs and have a different log file for each component of your script.
If you have a relatively small script, or you want to execute somebody else's script without modifying it, the first approach is more suitable.
If you always want the log file to be at the same location, the second is the better option of the two. Also, if you have created a big script with multiple components, then you may want to log each part differently, and the second approach is your only option.
I often find it useful to write a function to handle error messages so the code is cleaner overall.
# Usage: die [exit_code] [error message]
die() {
    local code=$? now=$(date +%T.%N)
    if [ "$1" -ge 0 ] 2>/dev/null; then  # assume $1 is an error code if numeric
        code="$1"
        shift
    fi
    echo "$0: ERROR at ${now%???}${1:+: $*}" >&2
    exit $code
}
This takes the error code from the previous command and uses it as the default error code when exiting the whole script. It also notes the time, with microseconds where supported (GNU date's %N is nanoseconds, which we truncate to microseconds later).
If the first argument is zero or a positive integer, it becomes the exit code and we remove it from the list of arguments. We then report the message to standard error, with the name of the script, the word "ERROR", and the time (we use parameter expansion to truncate nanoseconds to microseconds, or, on non-GNU dates where %N is not expanded, to truncate e.g. 12:34:56.%N to 12:34:56). A colon and space are added after the word ERROR, but only when there is a provided error message. Finally, we exit the script using the previously determined exit code, triggering any traps as normal.
Some examples (assume the code lives in script.sh):
if [ condition ]; then die 123 "condition not met"; fi
# exit code 123, message "script.sh: ERROR at 14:58:01.234564: condition not met"
$command |grep -q condition || die 1 "'$command' lacked 'condition'"
# exit code 1, "script.sh: ERROR at 14:58:55.825626: 'foo' lacked 'condition'"
$command || die
# exit code comes from command's, message "script.sh: ERROR at 14:59:15.575089"

Are there any standard exit status codes in Linux?

A process is considered to have completed correctly in Linux if its exit status was 0.
I've seen that segmentation faults often result in an exit status of 11, though I don't know if this is simply the convention where I work (the applications that failed like that have all been internal) or a standard.
Are there standard exit codes for processes in Linux?
Part 1: Advanced Bash Scripting Guide
As always, the Advanced Bash Scripting Guide has great information:
(This was linked in another answer, but to a non-canonical URL.)
1: Catchall for general errors
2: Misuse of shell builtins (according to Bash documentation)
126: Command invoked cannot execute
127: "command not found"
128: Invalid argument to exit
128+n: Fatal error signal "n"
255: Exit status out of range (exit takes only integer args in the range 0 - 255)
Part 2: sysexits.h
The ABSG references sysexits.h.
On Linux:
$ find /usr -name sysexits.h
/usr/include/sysexits.h
$ cat /usr/include/sysexits.h
/*
* Copyright (c) 1987, 1993
* The Regents of the University of California. All rights reserved.
(A whole bunch of text left out.)
#define EX_OK 0 /* successful termination */
#define EX__BASE 64 /* base value for error messages */
#define EX_USAGE 64 /* command line usage error */
#define EX_DATAERR 65 /* data format error */
#define EX_NOINPUT 66 /* cannot open input */
#define EX_NOUSER 67 /* addressee unknown */
#define EX_NOHOST 68 /* host name unknown */
#define EX_UNAVAILABLE 69 /* service unavailable */
#define EX_SOFTWARE 70 /* internal software error */
#define EX_OSERR 71 /* system error (e.g., can't fork) */
#define EX_OSFILE 72 /* critical OS file missing */
#define EX_CANTCREAT 73 /* can't create (user) output file */
#define EX_IOERR 74 /* input/output error */
#define EX_TEMPFAIL 75 /* temp failure; user is invited to retry */
#define EX_PROTOCOL 76 /* remote error in protocol */
#define EX_NOPERM 77 /* permission denied */
#define EX_CONFIG 78 /* configuration error */
#define EX__MAX 78 /* maximum listed value */
Eight bits of the return code and eight bits of the number of the killing signal are mixed into the single value returned from wait(2) & co.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <signal.h>

int main() {
    int status;

    pid_t child = fork();
    if (child <= 0)
        exit(42);
    waitpid(child, &status, 0);
    if (WIFEXITED(status))
        printf("first child exited with %u\n", WEXITSTATUS(status));
    /* prints: "first child exited with 42" */

    child = fork();
    if (child <= 0)
        kill(getpid(), SIGSEGV);
    waitpid(child, &status, 0);
    if (WIFSIGNALED(status))
        printf("second child died with %u\n", WTERMSIG(status));
    /* prints: "second child died with 11" */
}
How are you determining the exit status? Traditionally, the shell only stores an 8-bit return code, but sets the high bit if the process was abnormally terminated.
$ sh -c 'exit 42'; echo $?
42
$ sh -c 'kill -SEGV $$'; echo $?
Segmentation fault
139
$ expr 139 - 128
11
If you're seeing anything other than this, then the program probably has a SIGSEGV signal handler which then calls exit normally, so it isn't actually getting killed by the signal. (Programs can choose to handle any signals aside from SIGKILL and SIGSTOP.)
None of the older answers describe exit status 2 correctly. Contrary to what they claim, status 2 is what your command line utilities actually return when called improperly. (Yes, an answer can be nine years old, have hundreds of upvotes, and still be wrong.)
Here is the real, long-standing exit status convention for normal termination, i.e. not by signal:
Exit status 0: success
Exit status 1: "failure", as defined by the program
Exit status 2: command line usage error
For example, diff returns 0 if the files it compares are identical, and 1 if they differ. By long-standing convention, unix programs return exit status 2 when called incorrectly (unknown options, wrong number of arguments, etc.) For example, diff -N, grep -Y or diff a b c will all result in $? being set to 2. This is and has been the practice since the early days of Unix in the 1970s.
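This is easy to confirm from a shell (behavior as documented for GNU diff and grep; the option used to provoke the usage error is deliberately bogus):

printf 'a\n' > f1; printf 'b\n' > f2
diff f1 f1;             echo $?   # 0: identical
diff f1 f2 >/dev/null;  echo $?   # 1: files differ
diff --bogus f1 f2 2>/dev/null;      echo $?   # 2: usage error
grep -q a f1;           echo $?   # 0: match found
grep -q z f1;           echo $?   # 1: no match
grep -q a /no/such/file 2>/dev/null; echo $?   # 2: genuine error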
The accepted answer explains what happens when a command is terminated by a signal. In brief, termination due to an uncaught signal results in exit status 128 + <signal number>. E.g., termination by SIGINT (signal 2) results in exit status 130.
Notes
Several answers define exit status 2 as "Misuse of bash builtins". This applies only when bash (or a bash script) exits with status 2. Consider it a special case of incorrect usage error.
In sysexits.h, mentioned in the most popular answer, exit status EX_USAGE ("command line usage error") is defined to be 64. But this does not reflect reality: I am not aware of any common Unix utility that returns 64 on incorrect invocation (examples welcome). Careful reading of the source code reveals that sysexits.h is aspirational, rather than a reflection of true usage:
* This include file attempts to categorize possible error
* exit statuses for system programs, notably delivermail
* and the Berkeley network.
* Error numbers begin at EX__BASE [64] to reduce the possibility of
* clashing with other exit statuses that random programs may
* already return.
In other words, these definitions do not reflect the common practice at the time (1993) but were intentionally incompatible with it. More's the pity.
'1': Catch-all for general errors
'2': Misuse of shell builtins (according to Bash documentation)
'126': Command invoked cannot execute
'127': "command not found"
'128': Invalid argument to exit
'128+n': Fatal error signal "n"
'130': Script terminated by Ctrl + C
'255': Exit status out of range
This is for Bash. However, for other applications, there are different exit codes.
There are no standard exit codes, aside from 0 meaning success. Non-zero doesn't necessarily mean failure either.
Header file stdlib.h does define EXIT_FAILURE as 1 and EXIT_SUCCESS as 0, but that's about it.
The 11 on segmentation fault is interesting, as 11 is the signal number that the kernel uses to kill the process in the event of a segmentation fault. There is likely some mechanism, either in the kernel or in the shell, that translates that into the exit code.
Header file sysexits.h has a list of standard exit codes. It seems to date back to at least 1993 and some big projects like Postfix use it, so I imagine it's the way to go.
From the OpenBSD man page:
According to style(9), it is not good practice to call exit(3) with arbitrary values to indicate a failure condition when ending a program. Instead, the predefined exit codes from sysexits should be used, so the caller of the process can get a rough estimation about the failure class without looking up the source code.
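If you want to follow that advice from a shell script, you can simply mirror the values from the header (a sketch; the number comes from the sysexits.h listing above):

EX_USAGE=64    # command line usage error, per sysexits.h
if [ $# -lt 1 ]; then
    echo "usage: $0 <file>" >&2
    exit "$EX_USAGE"
fi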
To a first approximation, 0 is success, non-zero is failure, with 1 being general failure, and anything larger than one being a specific failure. Aside from the trivial exceptions of false and test, which are both designed to give 1 for success, there are a few other exceptions I found.
More realistically, 0 means success or maybe failure, 1 means general failure or maybe success, 2 means general failure if 1 and 0 are both used for success, but maybe success as well.
The diff command gives 0 if files compared are identical, 1 if they differ, and 2 if binaries are different. 2 also means failure. The less command gives 1 for failure unless you fail to supply an argument, in which case, it exits 0 despite failing.
The more command and the spell command give 1 for failure, unless the failure is a result of permission denied, nonexistent file, or attempt to read a directory. In any of these cases, they exit 0 despite failing.
Then the expr command gives 1 for success unless the output is the empty string or zero, in which case, 0 is success. 2 and 3 are failure.
Then there's cases where success or failure is ambiguous. When grep fails to find a pattern, it exits 1, but it exits 2 for a genuine failure (like permission denied). klist also exits 1 when it fails to find a ticket, although this isn't really any more of a failure than when grep doesn't find a pattern, or when you ls an empty directory.
So, unfortunately, the Unix powers that be don't seem to enforce any logical set of rules, even on very commonly used executables.
The status reported by wait(2) for a program is a 16-bit value. If the program was killed with a signal, then the low-order byte contains the signal number; otherwise the high-order byte contains the exit status the programmer returned.
How that status is mapped to the variable $? is then up to the shell. Bash reports the exit status directly after a normal exit, and uses 128 + (signal number) to indicate death by signal.
The only "standard" convention for programs is 0 for success, non-zero for error. Another convention used is to return errno on error.
Standard Unix exit codes are defined by sysexits.h, as David mentioned.
The same exit codes are used by portable libraries such as Poco - here is a list of them:
Class Poco::Util::Application, ExitCode
A signal 11 is a SIGSEGV (segmentation violation) signal, which is different from a return code. This signal is generated by the kernel in response to a bad page access, and it causes the program to terminate. A list of signals can be found in the signal man page (run "man signal").
When Linux returns 0, it means success. Anything else means failure. Each program has its own exit codes, so it would be quite long to list them all!
About the 11 error code, it is indeed the segmentation fault number, mostly meaning that the program accessed a memory location that was not assigned to it.
Some are convention, but some other reserved ones are part of POSIX standard.
126 -- A file to be executed was found, but it was not an executable utility.
127 -- A utility to be executed was not found.
>128 -- A command was interrupted by a signal.
See the section RATIONALE of man 1p exit.
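Those reserved values are easy to observe (a quick check; /etc/passwd stands in for any readable but non-executable file):

sh -c /etc/passwd;     echo $?   # 126: found, but not executable
sh -c no_such_command; echo $?   # 127: not found
sh -c 'kill -INT $$';  echo $?   # 130: 128 + SIGINT (signal 2)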
