core dump not generated - linux

I am working on a PC running CentOS as its operating system.
I also work on an embedded system running the same OS.
On my PC, I succeeded in getting a core dump generated on a segmentation fault by changing:
core pattern
core_uses_pid
ulimit -c unlimited
sysctl -p
But on the embedded system nothing works - the core dump is not generated! What could be the reason?
If it matters, the application that I would like a dump of is written in C++.
What can I do to get a core dump on the embedded system?
I've made a little crash program, and a core dump is generated for the crash program - but not for the one I need!
So the problem is not with the OS, but with the specific program.
I discovered that we run strip -g on executable/library files before sending them to the embedded system. I did the same for my crash program, and it still produces a core dump.

Are you certain the kernel on your embedded system supports core dumps? The feature can be disabled in the kernel build (ref), in which case you may have to fake it yourself using something like google-coredumper.

OK,
I made a little mistake when I checked the program on my computer: I tested it with a different signal than the one used on the embedded system. There was still the problem that, with the custom signal handler, no core dump is generated.
The solution is in one of the links:
Unfortunately, if your application is equipped with a customized signal handler, no core dump will be generated, because it is generated only by the default signal handlers. In case your application has a custom signal handler, disable it before starting to debug, otherwise, no core dump will be generated. Some sources in the Internet mention that restoring the default signal handler inside the signal handler after the exception has occurred, and sending it again in a loopback can trigger a core dump. In the tests I did, it did generate a core dump, but the only thing I saw in the core dump was the code that my handler executed (i.e. the calls to signal and kill), so this did not help me. Perhaps on other platforms this trick works better.
On my platform this trick does work. Another solution would be to generate the core dump from inside the signal handler; I hear that gcore can do it, though when I tried I got an incompatibility error.

I've seen two sources of possible information, both of which point to the /etc/security/limits.conf file:
Linux Disable Core Dumps - Yes, I know you want to enable core dumps, but this could help in reverse
CentOS enabling core dumps - Another source pointing at limits.conf.

Related

Linux core dump for current stack only

I want to resolve an application crash on linux-3.10.85, for which I am generating a core dump. Due to space constraints, I just want the current process stack to be present in the core dump (the memory which is referred to as RSS in Linux). I have found something useful for Solaris but am unable to find anything relevant for Linux. Is this possible in Linux? If yes, is there any other way to analyse the core dump file apart from gdb?
Link to coreadm utility for Solaris which solves this problem.
I have already tried setting the coredump_filter in the /proc file system but it does not seem to be working.

Core dump of main application on small embedded system

I'm trying to dump the core of a segfaulting main application on a small embedded system running on Linux. The main application essentially handles the complete execution and functionality of the device, and causes the system to reboot upon receiving a SIGSEGV signal.
I have made sure that:
the core dump is allowed to be of unlimited size
ulimit -c unlimited
an output path to a writeable directory with sufficient free space (mounted SD-card) is set
sysctl -w kernel.core_pattern='/path/to/dir/core_%e.%p'
I have permission to read and execute the binary
I have tried to dump the core of a dummy process like so:
sleep 10 &
killall -SIGSEGV sleep
And it works as expected, generating a core dump of the process at the desired location.
However, the main application does not create a core dump and just causes the system to reboot. I have tried segfaulting the application both manually through my telnet-provided shell, and by exploiting a stack buffer overflow remotely (which is what I'm trying to investigate).
Since this is a small embedded system, I don't have access to common utilities such as gdb, ptrace, pstack etc.
Is there any workaround here that would allow me to view the stack of the process, either while still running or after receiving a SIGSEGV signal?

Core is not generated while running with valgrind

I am using Valgrind (valgrind-3.10.1) on my Ubuntu machine to test a C++ application.
I added some code that causes the application to crash and generate a core file, which works perfectly fine.
But when I run the same application with valgrind, it fails to generate a core file.
Possible fixes I tried that did not help:
Verified the core file size using ulimit -a (it is unlimited)
Verified kernel.core_pattern (kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P)
What other explanation could there be for this issue?
But when I run the same application with valgrind, it fails to generate a core file.
Valgrind runs your application on a "virtual" CPU. When it detects that the app performs an undefined operation that would normally cause the process to be terminated, it prints a message to that effect, and exits.
If ulimit -c allows it, and the current directory is writable, Valgrind also produces vgcore.$pid, which is the memory dump in core dump format of the simulated application. That is the core file you want to analyze with GDB.
The actual operation that would have caused the core dump never executes on the real CPU, so the Linux kernel never sees the application crash.
Even if Valgrind did execute that operation and the kernel core dump were produced, that core would be useless, because it would represent the state of Valgrind itself, not the state of the application.

What tool for debugging a Linux kernel?

I am new to the Linux kernel.
I am wondering how to browse the complete flow, right from CPU power-up, and to get a basic idea of the BIOS/ROM code.
Is there a tool I can use to debug the complete kernel, or is raw code browsing preferable?
The following tools may help you to debug the Linux kernel.
Dynamic Probes is one of the popular debugging tools for Linux, developed by IBM. This tool allows the placement of a "probe" at almost any place in the system, in both user and kernel space. The probe consists of some code (written in a specialized, stack-oriented language) that is executed when control hits the given point. Resources regarding dprobes / kprobes are listed below:
http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaax/dprobesltt.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.107.6212&rep=rep1&type=pdf
https://www.redhat.com/magazine/005mar05/features/kprobes/
https://sourceware.org/systemtap/kprobes/
http://www.ibm.com/developerworks/library/l-kprobes/index.html
https://doc.opensuse.org/documentation/html/openSUSE_121/opensuse-tuning/cha.tuning.kprobes.html
Linux Trace Toolkit is a kernel patch and a set of related utilities that allow the tracing of events in the kernel. The trace includes timing information and can create a reasonably complete picture of what happened over a given period of time. Resources of LTT, LTT Viewer and LTT Next Generation
http://elinux.org/Linux_Trace_Toolkit
http://www.linuxjournal.com/article/3829
http://multivax.blogspot.com/2010/11/introduction-to-linux-tracing-toolkit.html
MEMWATCH is an open source memory error detection tool. It works by defining MEMWATCH on the gcc command line and adding a header file to your code. With this you can track memory leaks and memory corruption. Resources regarding MEMWATCH:
http://www.linuxjournal.com/article/6059
ftrace is a good tracing framework for the Linux kernel. ftrace traces internal operations of the kernel. This tool was included in the Linux kernel in 2.6.27. With its various tracer plugins, ftrace can be targeted at different static tracepoints, such as scheduling events, interrupts, memory-mapped I/O, CPU power state transitions, and operations related to file systems and virtualization. Also, dynamic tracing of kernel function calls is available, optionally restrictable to a subset of functions by using globs, with the possibility to generate call graphs and report stack usage. You can find a good tutorial on ftrace at https://events.linuxfoundation.org/slides/2010/linuxcon_japan/linuxcon_jp2010_rostedt.pdf
ltrace is a debugging utility in Linux used to display the calls a user-space application makes to shared libraries. It can be used to trace any dynamic library function call: it intercepts and records the dynamic library calls made by the executed process and the signals received by that process. It can also intercept and print the system calls executed by the program.
http://www.ellexus.com/getting-started-with-ltrace-how-does-it-do-that/?doing_wp_cron=1425295977.1327838897705078125000
http://developerblog.redhat.com/2014/07/10/ltrace-for-rhel-6-and-7/
KDB is the in-kernel debugger of the Linux kernel. KDB follows a simple shell-style interface. We can use it to inspect memory, registers, process lists, dmesg, and even set breakpoints to stop at a certain location. Through KDB we can set breakpoints and execute some basic kernel run control (although KDB is not a source-level debugger). Several handy resources regarding KDB:
http://www.drdobbs.com/open-source/linux-kernel-debugging/184406318
http://elinux.org/KDB
http://dev.man-online.org/man1/kdb/
https://www.kernel.org/pub/linux/kernel/people/jwessel/kdb/usingKDB.html
KGDB is intended to be used as a source level debugger for the Linux kernel. It is used along with gdb to debug a Linux kernel. Two machines are required for using kgdb. One of these machines is a development machine and the other is the target machine. The kernel to be debugged runs on the target machine. The expectation is that gdb can be used to "break in" to the kernel to inspect memory, variables and look through call stack information similar to the way an application developer would use gdb to debug an application. It is possible to place breakpoints in kernel code and perform some limited execution stepping. Several handy resources regarding KGDB
http://landley.net/kdocs/Documentation/DocBook/xhtml-nochunks/kgdb.html
First, see related question Linux kernel live debugging, how it's done and what tools are used?. Try to use KDB or Ftrace.
If your intention is to understand the whole flow of the Linux kernel, running the Linux kernel on QEMU can be an easy way to learn how Linux works. In particular, you can emulate many CPU types without real hardware. Or how about User-Mode Linux?
This document can be helpful to debug kernel on QEMU.
Just adding: the Linux kernel is not very suitable for debugging. Linus Torvalds once stated that he is against supporting kernel debugging in Linux because it leads to badly written code.
I used kdbg, but I didn't find it very useful. What I suggest is to debug the kernel the old-school way, using printk.

GDB not breaking on SIGSEGV

I'm trying to debug an application for an ARM processor from my x86 box. I followed some instructions from someone who came before me on setting up a development environment. I've got a version of gdbserver that has been cross-compiled for the ARM processor, and it appears to let me connect via the ARM-aware gdb on my box.
I'm expecting that when the process I've got gdb attached to crashes (from a SIGSEGV or similar) it will break so that I can check out the call stack.
Is that a poor assumption? I'm new to the ARM world and cross-compiling things, is there possibly a good resource to get started on this stuff that I'm missing?
It depends on the target system (the one which uses an ARM processor). Some embedded systems detect invalid memory accesses (e.g. dereferencing NULL) but react with unconditional, uncatchable system termination (I have done development on such a system). What kind of OS is the target system running ?
So I assume that the gdb client is able to connect to gdbserver and you are able to put a breakpoint on the running process, right?
If all the above steps are successful, then you should put the breakpoint before the instruction which crashes. If you don't know where it is crashing, then once the application has crashed a core will be generated; take that core from the board. Then compile the source code again with the -g debug option (if the binaries are stripped) and do an offline analysis of the core, something like below:
gdb binary-name core_file
Then once you get the gdb prompt, give the below command:
(gdb) thread apply all bt
The above command will give you the complete backtrace of all the threads, remember that binaries should not be stripped and the proper path of all the source code and shared lib should be available.
You can switch between threads using the below command at the gdb prompt:
(gdb) thread thread_number
If the core file is not getting generated on the board, then try the below command on the board before executing the application:
ulimit -c unlimited
