Core is not generated while running with valgrind - ubuntu-14.04

I am using valgrind (valgrind-3.10.1) on my Ubuntu 14.04 machine to test a C++ application.
I added some code which causes the application to crash and generate a core file, and that works perfectly fine.
But when I run the same application with valgrind, it fails to generate a core file.
Possible fixes which I tried that did not help:
Verified the core file size using ulimit -a (it is unlimited).
Verified the kernel.core_pattern (kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P).
What other explanation could there be for this issue?

But when I run the same application with valgrind, it fails to generate a core file.
Valgrind runs your application on a "virtual" CPU. When it detects that the app performs an undefined operation that would normally cause the process to be terminated, it prints a message to that effect and exits.
If ulimit -c allows it, and the current directory is writable, Valgrind also produces vgcore.$pid, which is a memory dump of the simulated application in core dump format. That is the core file you want to analyze with GDB.
The actual operation that would have caused the core dump never executes on the real CPU, so the Linux kernel never sees the application crash.
Even if Valgrind did execute that operation and the kernel produced a core dump, that core would be useless, because it would represent the state of Valgrind itself, not the state of the application.
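A minimal sketch of that workflow (the binary name myapp and the PID in the vgcore file name are placeholders):
ulimit -c unlimited           # allow core files in this shell
valgrind ./myapp              # on a fatal error, Valgrind writes vgcore.<pid> to the current directory
gdb ./myapp vgcore.12345      # analyze the Valgrind-produced core with GDB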

Related

Linux core dump for current stack only

I want to resolve an application crash on linux-3.10.85 for which I am generating a core dump. Due to space constraints, I just want the current process stack to be present in the core dump (the memory which is referred to as RSS in Linux). I have found something useful for Solaris but have been unable to find anything relevant for Linux. Is this possible in Linux? If yes, is there any other way to analyse the core dump file apart from gdb?
Link to coreadm utility for Solaris which solves this problem.
I have already tried setting the coredump_filter in the /proc file system but it does not seem to be working.
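For reference, coredump_filter is normally adjusted per process through procfs; the mask value below (keep only anonymous private mappings, which covers the stack and heap) is illustrative:
echo 0x01 > /proc/<pid>/coredump_filter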

Core dump of main application on small embedded system

I'm trying to dump the core of a segfaulting main application on a small embedded system running on Linux. The main application essentially handles the complete execution and functionality of the device, and causes the system to reboot upon receiving a SIGSEGV signal.
I have made sure that:
the core dump is allowed to be of unlimited size
ulimit -c unlimited
an output path to a writeable directory with sufficient free space (mounted SD-card) is set
sysctl -w kernel.core_pattern='/path/to/dir/core_%e.%p'
I have permission to read and execute the binary
I have tried to dump the core of a dummy process like so:
sleep 10 &
killall -SIGSEGV sleep
And it works as expected, generating a core dump of the process at the desired location.
However, the main application does not create a core dump and just causes the system to reboot. I have tried segfaulting the application both manually through my telnet-provided shell and by exploiting a stack buffer overflow remotely (which is what I'm trying to investigate).
Since this is a small embedded system, I don't have access to common utilities such as gdb, ptrace, pstack etc.
Is there any workaround here that would allow me to view the stack of the process, either while still running or after receiving a SIGSEGV signal?
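One possible workaround, assuming the toolchain's libc provides <execinfo.h> (glibc does; some embedded libcs do not), would be to install a SIGSEGV handler that prints the stack itself. This is only a sketch, and the names are placeholders:
#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void segv_handler(int sig)
{
    void *frames[32];
    int n = backtrace(frames, 32);                  /* capture raw return addresses */
    backtrace_symbols_fd(frames, n, STDERR_FILENO); /* print them without calling malloc */
    _exit(1);                                       /* skip the normal reboot path */
}

int main(void)
{
    signal(SIGSEGV, segv_handler);
    /* ... rest of the application ... */
    return 0;
}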

How can I get all the memory of the core dump with google breakpad

I can get all the memory of a minidump on the Windows platform by using MiniDumpWithFullMemory. But how can I do that on the Linux platform?
The original question was how to create a gdb-compatible coredump using google breakpad on Linux.
This is actually feasible; follow Google's instructions to create the minidump and symbol files:
https://chromium.googlesource.com/breakpad/breakpad/+/master/docs/linux_starter_guide.md
and then use this tool to transform the minidump to a coredump:
https://chromium.googlesource.com/chromium/src/+/master/docs/linux_minidump_to_core.md
Please note that a coredump generated from a minidump will not contain the full memory dump, only a "slim" version of it.
The kernel may (under certain conditions) dump a core(5) file. See also this question. You may need to call the setrlimit syscall to enable core dumping, perhaps through the ulimit bash builtin.
Many things can be queried or configured through /proc; notably, /proc/1234/maps shows you the address map of process 1234 and /proc/1234/mem gives you access to its address space.
gdb often gives you a gcore command to force a core dump.
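For example, the setrlimit call mentioned above might look like this inside the program (a minimal sketch; raising the limit to unlimited is one common choice):
#include <sys/resource.h>

int enable_core_dumps(void)
{
    struct rlimit rl;
    rl.rlim_cur = RLIM_INFINITY;        /* soft limit: no cap on core file size */
    rl.rlim_max = RLIM_INFINITY;        /* hard limit */
    return setrlimit(RLIMIT_CORE, &rl); /* 0 on success, -1 on error */
}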
Breakpad does not currently support writing full-memory dumps on Linux, sorry. If you wanted, you could write out full core dumps and use the core2md tool in the Breakpad tree to turn them into minidumps:
http://code.google.com/p/google-breakpad/source/browse/trunk/src/tools/linux/core2md/core2md.cc

core dump not generated

I am working on a PC running CentOS as its operating system.
I also work on an embedded system with the same OS.
On my PC, I succeeded in creating a core dump file on segmentation fault by changing:
core pattern
core_uses_pid
ulimit -c unlimited
sysctl -p
But on the embedded system nothing works - the core dump is not generated! What could be the reason?
If it matters, the application that I would like a dump of is written in C++.
What can I do to get a core dump on the embedded system?
I've made a little crash program, and a core dump is generated for the crash program but not for the one I need!
So the problem is not with the OS, but with the specific program.
I discovered that we run strip -g on the executable/library files before sending them to the embedded system. I did the same for my crash program, and it still produces a core dump.
Are you certain the kernel on your embedded system supports core dumps? The feature can be disabled in the kernel build (ref), in which case you may have to fake it yourself using something like google-coredumper.
OK,
I made a little mistake when I checked the program on my computer: I checked it with a different signal than the one on the embedded system. There was still the problem of why, with the custom signal handler, there is no core dump.
The solution is in one of the links:
Unfortunately, if your application is equipped with a customized signal handler, no core dump will be generated, because it is generated only by the default signal handlers. If your application has a custom signal handler, disable it before starting to debug, otherwise no core dump will be generated. Some sources on the Internet mention that restoring the default signal handler inside the signal handler after the exception has occurred, and sending the signal again in a loopback, can trigger a core dump. In the tests I did, it did generate a core dump, but the only thing I saw in the core dump was the code that my handler executed (i.e. the calls to signal and kill), so this did not help me. Perhaps on other platforms this trick works better.
On my platform it does work. Another solution would be to generate the core dump inside the signal handler. I hear that gcore can do it; with the Windows core I got an incompatibility error.
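A minimal sketch of the re-raise trick described above (handler and variable names are placeholders; the handler is assumed to be installed for SIGSEGV):
#include <signal.h>

static void crash_handler(int sig)
{
    /* do any custom logging here, then ... */
    signal(sig, SIG_DFL);   /* restore the default handler */
    raise(sig);             /* re-raise so the default action writes a core */
}

int main(void)
{
    signal(SIGSEGV, crash_handler);
    volatile int *p = 0;
    *p = 42;                /* deliberate crash to exercise the handler */
    return 0;
}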
I've seen two sources of possible information, both of which point to the /etc/security/limits.conf file:
Linux Disable Core Dumps - Yes, I know you want to enable core dumps, but this could help in reverse
CentOS enabling core dumps - Another source pointing at limits.conf.
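For reference, the /etc/security/limits.conf entries that allow core dumps typically look like this (the wildcard domain is illustrative; a specific user or group can be used instead):
*    soft    core    unlimited
*    hard    core    unlimited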

GDB not breaking on SIGSEGV

I'm trying to debug an application for an ARM processor from my x86 box. I followed some instructions from someone who came before me on getting a development environment set up. I've got a version of gdbserver that has been cross-compiled for the ARM processor and appears to allow me to connect to it via my ARM-aware gdb on my box.
I'm expecting that when the process I've got gdb attached to crashes (from a SIGSEGV or similar), it will break so that I can check out the call stack.
Is that a poor assumption? I'm new to the ARM world and cross-compiling things, is there possibly a good resource to get started on this stuff that I'm missing?
It depends on the target system (the one which uses an ARM processor). Some embedded systems detect invalid memory accesses (e.g. dereferencing NULL) but react with unconditional, uncatchable system termination (I have done development on such a system). What kind of OS is the target system running?
So I assume that the gdb client is able to connect to gdbserver and you are able to put a breakpoint on the running process, right?
If all the above steps are successful, then you should put a breakpoint before the instruction which crashes. If you don't know where it is crashing, then once the application has crashed a core will be generated; take that core from the board. Then compile the source code again with the debug option (-g, if the binaries are stripped) and do offline analysis of the core, something like below:
gdb binary-name core_file
Then once you get the gdb prompt, give the below command:
(gdb) thread apply all bt
The above command will give you the complete backtrace of all the threads. Remember that the binaries should not be stripped, and the proper paths to all the source code and shared libraries should be available.
You can switch between threads using the below command at the gdb prompt:
(gdb) thread thread_number
If the core file is not getting generated on the board, then try the below command on the board before executing the application:
ulimit -c unlimited

Resources