How to collect minimum debug data from a truncated core of a Linux C program, with or without GDB?

How to collect minimum debug data from a truncated core of a Linux C program?
I spent several hours trying to reproduce the segmentation-fault signal with the same program, but I did not succeed; I only managed to trigger it once. I don't know how to extract even minimal information with GDB. I thought I was experienced with GDB, but now I think I still have a lot to learn...
Perhaps someone knows a debugger other than GDB worth trying. By minimum information I mean the name of the function that raised the segfault signal.
My core file is 32 MB, but GDB indicates the process's allocated memory was 700 MB. How can I tell whether the innermost stack frame involved in the segfault was captured inside the truncated core file?
From the name of the file I already know the name of the affected thread, but that is not enough to debug the program.
I collected the /proc/$PID/maps file of the main program, but I don't know whether it is useful for retrieving the faulting function.
Moreover, how can I tell whether the segfault signal was raised from inside the thread or delivered to it from outside?

Related

How to find reason for SEGFAULT in large multi-thread C program?

I am working with a multi-threaded program for Linux embedded systems that crashes very randomly with a SEGFAULT signal.
I need to find the issue WITHOUT using gdb, since the crash occurs only in the production environment, never during testing.
I know the symbol table of the program, and I'm using sigaction() and backtrace() in the main thread, but I don't get enough information: the backtraced frames come from the signal-handling code itself. I allow 50 frames to be captured, and I compile with gcc's -g flag:
Caught segfault at address 0xe76acc
Obtained 3 stack frames.
./mbca(_Z11print_tracev+0x20) [0x37530]
./mbca(_Z18segfault_sigactioniP7siginfoPv+0x34) [0x375f4]
/lib/libc.so.6(__default_rt_sa_restorer_v2+0) [0x40db5a60]
As the program runs 15 threads, I would like a clue about which one the signal is coming from, so I can narrow down the possibilities. FYI: the main thread forks, and that child process creates the remaining 14 threads.
How can I achieve this? What can I do with the information I already have?
Thank you all for your help.
PS: I also tried to get a core-dump file, but none is generated because that option was not included in the kernel compilation and I cannot modify it.

How to dump the heap of running C++ process to a file under Linux?

I've got a program that is running on a headless/embedded Linux box, and under certain circumstances that program seems to be using up quite a bit more memory (as reported by top, etc) than I would expect it to use.
Since the fault condition is difficult to reproduce outside of the actual working environment, and since the embedded box doesn't have niceties like valgrind or gdb installed, what I'd like to do is simply write out the process's heap-memory to a file, which I could then transfer to my development machine and look through at my leisure, to see if I can tell from the contents of the file what kind of data it is that is taking up the bulk of the heap. If I'm lucky there might be a smoking gun like a repeating string or magic-number that comes up a lot, that points me to the place in my code that is either leaking or perhaps just growing a data structure without bounds.
Is there a good way to do this? The only way I can think of would be to force the process to crash and then collect a core dump, but since the fault condition is rare it would be preferable if I could collect the information without crashing the process as a side effect.
You can read the entire memory space of the process via /proc/pid/mem, and you can read /proc/pid/maps to see what is where in the address space (so you can find the bounds of the heap and read just that range). You can attempt to read the data while the process is running (in which case it might change while you are reading it), or you can stop the process with a SIGSTOP signal and later resume it with SIGCONT.

Shared object file path that is executed by the current thread

Is there a way to get the file path/name of the .so that is currently being executed by a thread? The program is written in C++ and runs on a 64-bit Linux 3.0 machine.
You can sequentially read (from inside your process) the file /proc/self/maps to get the memory mapping (including those for shared objects) of the current process.
Then you could get your program counter (or the caller's) and find which segment it falls in. Perhaps backtrace or GCC's __builtin_return_address is relevant.
You might also use the dladdr function.
See proc(5), backtrace(3), dladdr(3) man pages, and also this answer.
Addendum
From a signal handler you can get the program counter at the moment the signal was delivered by using sigaction(2) with SA_SIGINFO. The sa_sigaction handler receives a ucontext_t from which you can read the program-counter register (using machine-dependent C code). Then you can handle it as needed.
I suggest looking in detail at what GCC is doing here.
I think the closest you can get is a list of all the shared libraries loaded by your process. You can obtain that with either pmap or lsof.

Generating core dumps

From time to time my Go program crashes.
I tried a few things in order to get core dumps generated for this program:
defining ulimit on the system: I tried both ulimit -c unlimited and ulimit -c 10000, just in case. After launching my panicking program, I get no core dump.
I also added recover() support in my program and added code to log to syslog in case of panic but I get nothing in syslog.
I am running out of ideas right now.
I must have overlooked something but I do not find what, any help would be appreciated.
Thanks ! :)
Note that a core dump is generated by the OS when a condition from a certain set is met. These conditions are pretty low-level — like trying to access unmapped memory or trying to execute an opcode the CPU does not know etc. Under a POSIX operating system such as Linux when a process does one of these things, an appropriate signal is sent to it, and some of them, if not handled by the process, have a default action of generating a core dump, which is done by the OS if not prohibited by setting a certain limit.
Now observe that this machinery treats a process at the lowest possible level (machine code), but the binaries a Go compiler produces are higher-level than those a C compiler (or assembler) produces, which means certain errors in a process produced by a Go compiler are handled by the Go runtime rather than the OS. For instance, a typical NULL-pointer dereference in a process produced by a C compiler usually results in the process being sent the SIGSEGV signal, which then typically results in an attempt to dump the process's core and terminate it. In contrast, when this happens in a process compiled by a Go compiler, the Go runtime kicks in and panics, producing a nice stack trace for debugging purposes.
With these facts in mind, I would try to do this:
Wrap your program in a shell script which first relaxes the limit for core dumps (but see below) and then runs your program with its standard error stream redirected to a file (or piped to the logger binary etc).
The limits a user can tweak form a hierarchy: there are soft and hard limits (see this and this for an explanation). So check that your system does not have the hard limit for core-dump size set to 0, as that would explain why your attempt to raise the limit has no effect.
At least on my Debian systems, when a program dies due to SIGSEGV, this fact is logged by the kernel and is visible in the syslog log files, so try grepping them for hints.
First, please make sure all errors are handled.
For core dumps, you can refer to "generate a core dump in linux".
You can use supervisor to reboot the program when it crashes.

Porting Unix ada app to Linux: Seg fault before program begins

I am an intern who was offered the task of porting a test application from Solaris to Red Hat. The application is written in Ada. It works just fine on the Unix side. I compiled it on the linux side, but now it is giving me a seg fault. I ran the debugger to see where the fault was and got this:
Warning: In non-Ada task, selecting an Ada task.
=> runtime tasking structures have not yet been initialized.
<non-Ada task> with thread id 0b7fe46c0
process received signal "Segmentation fault" [11]
task #1 stopped in _dl_allocate_tls
at 0870b71b: mov edx, [edi] ;edx := [edi]
This seg fault happens before any calls are made or anything is initialized. I have been told that 'tasks' in ada get started before the rest of the program, and the problem could be with a task that is running.
But here is the kicker. This program just generates some code for another program to use. The OTHER program, when compiled under linux gives me the same kind of seg fault with the same kind of error message. This leads me to believe there might be some little tweak I can use to fix all of this, but I just don't have enough knowledge about Unix, Linux, and Ada to figure this one out all by myself.
This is a total shot in the dark, but tasks can blow up like this at startup if they try to allocate too much local memory on the stack. Your main program can safely use the system stack, but each task has its stack allocated at startup from dynamic memory, so typically your runtime has a default stack size for tasks. If a task tries to allocate a large array, it can easily blow past that limit. I've had it happen to me before.
There are multiple ways to fix this. One way is to move all your task-local data into package global areas. Another is to dynamically allocate it all.
If you can figure out how much memory would be enough, you have a couple more options. You can make the task a task type, and then use a
for My_Task_Type_Name'Storage_Size use Some_Huge_Number;
statement. You can also use a "pragma Storage_Size(My_Task_Type_Name)", but I think the "for" statement is preferred.
Lastly, with Gnat you can also change the default task stack size with the -d flag to gnatbind.
Off the top of my head: if the code used to run on SPARC machines and you're now running on an x86 machine, you may be hitting endianness problems.
It's not much help, but it is a common gotcha when going multiplat.
Hunch: the linking step didn't go right. Perhaps the wrong run-time startup library got linked in?
(How likely to find out what the real trouble was, months after the question was asked?)