shared object file path that is executed by the current thread - linux

Is there a way to get the file path/file name of the .so that is currently being executed by a thread? The program is written in C++ and runs on a 64-bit Linux 3.0 machine.

You can sequentially read (from inside your process) the file /proc/self/maps to get the memory mappings (including those for shared objects) of the current process.
Then you could get your program counter (or that of the caller) and find which segment it falls in. Perhaps backtrace or GCC's __builtin_return_address is relevant.
You might also use the dladdr function.
See proc(5), backtrace(3), dladdr(3) man pages, and also this answer.
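For the simple case, dladdr alone may be enough: give it an address inside the currently executing code and it reports which object that address belongs to. A minimal sketch (glibc-specific; compile with -D_GNU_SOURCE and link with -ldl):
#include <dlfcn.h>   /* dladdr */
#include <stdio.h>
/* Print the path of the object containing this function: the main
   executable here, or the .so path if this code lives in a shared object. */
static void print_current_object(void)
{
    Dl_info info;
    /* the address of this very function is "an address in the code
       currently being executed"; __builtin_return_address(0) would
       give the caller's location instead */
    if (dladdr((void *)&print_current_object, &info) && info.dli_fname)
        printf("running inside: %s\n", info.dli_fname);
}
int main(void)
{
    print_current_object();
    return 0;
}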
Addenda
From a signal handler you could get the program counter at the point the signal was delivered by using sigaction(2) with SA_SIGINFO. The sa_sigaction function pointer receives a ucontext_t from which you can get the program counter register (using machine-dependent C code). Then you could handle it.
I suggest looking in detail at what GCC is doing.
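As a hedged illustration of the signal-handler route (x86-64 Linux assumed; REG_RIP and the gregs layout are machine dependent, and fprintf/dladdr are not async-signal-safe, so treat this as a sketch, not production code):
#define _GNU_SOURCE            /* REG_RIP from <ucontext.h> */
#include <dlfcn.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>
/* SA_SIGINFO handler: recover the faulting program counter from the
   ucontext_t, then map it back to a file with dladdr. */
static void segv_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)si;
    ucontext_t *uc = (ucontext_t *)ctx;
    void *pc = (void *)(uintptr_t)uc->uc_mcontext.gregs[REG_RIP];
    Dl_info info;
    if (dladdr(pc, &info) && info.dli_fname)
        fprintf(stderr, "fault at %p in %s\n", pc, info.dli_fname);
    _exit(1);
}
int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = segv_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
    *(volatile int *)0 = 42;   /* deliberately fault to demonstrate */
    return 0;
}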

I think the closest thing is to get a list of all shared libraries loaded by your process. You can do that with either pmap or lsof.
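If you need that list from inside the process rather than from pmap/lsof, glibc's dl_iterate_phdr can enumerate every loaded object. A minimal sketch:
#define _GNU_SOURCE
#include <link.h>    /* dl_iterate_phdr */
#include <stdio.h>
/* Callback invoked once per loaded object; dlpi_name is empty for the
   main executable. */
static int print_object(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size; (void)data;
    printf("%#lx  %s\n", (unsigned long)info->dlpi_addr,
           info->dlpi_name[0] ? info->dlpi_name : "(main executable)");
    return 0;   /* keep iterating */
}
int main(void)
{
    dl_iterate_phdr(print_object, NULL);
    return 0;
}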

Related

How to collect minimum debug data from a truncated core of a Linux C program? With or without GDB?

I spent some hours trying to reproduce the segmentation-fault signal with the same program, but did not succeed. I only managed to get it once. I don't know how to extract minimal information with GDB. I thought I was experienced with GDB, but now I think I have a lot to learn again...
Perhaps someone knows a debugger other than GDB to try something with. By minimum information I mean the calling function that produces the segfault signal.
My core file is 32 MB, but GDB indicates the allocated memory required for the core is 700 MB. How can I know whether the deepest stack function involved in the segfault is identified inside the core file or not?
From the name of the file I already know the name of the thread concerned, but that is not enough to debug the program.
I collected the /proc/$PID/maps file of the main program, but I don't know if it is useful for retrieving the function that segfaulted.
Moreover, how can I tell whether the segfault signal was produced from inside the thread or came from outside it?

What is a memory image in *nix systems?

In the book Advanced Programming in the Unix Environment, 3rd Edition, Chapter 10 (Signals), page 315, when talking about the actions taken by processes that receive a signal, the author says
When the default action is labeled "terminate+core", it means that a memory image of the process is left in the file named core of the current working directory of the process.
What is a memory image? When is this created, what's the content of it, and what is it used for?
A memory image is simply a copy of the process's virtual memory, saved in a file. It's used when debugging the program, as you can examine the values of the program's variables and determine which functions were being called at the time of the failure.
As the documentation you quoted says, this file is created when the process is terminated due to a signal that has the "terminate+core" default action.
A memory image is often called a core image. See core(5) and the core dump wikipage.
Roughly speaking, a core image describes the process's virtual address space (and its contents) at the time of the crash, including the call stacks of each active thread and the writable data segments for global data and heaps, but often excluding text or code segments, which are read-only and given in the executable ELF file or in shared libraries. It also contains the register state (for each thread).
The name core is understandable only to old guys like me (having seen computers built in the 1960s and 1970s, like the IBM/360, the PDP-10 and the early PDP-11, both used for developing the primordial Unix), since long ago (1950-1970) random-access memory was made of magnetic core memory.
If you have compiled all your source code with debug information (e.g. using gcc -g -Wall) you can do some post-mortem debugging (after your program crashed and dumped a core file!) using gdb as
gdb yourprogram core
and the first gdb command you'll try is probably bt to get the backtrace.
Don't forget to enable core dumps with the setrlimit(2) syscall, generally done in your shell with e.g. ulimit -c unlimited.
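A minimal sketch of doing that from C, assuming you only need to raise the soft limit up to whatever the hard limit already allows (the helper name is mine):
#include <stdio.h>
#include <sys/resource.h>
/* Raise the core-file size soft limit to the hard limit, the in-process
   equivalent of ulimit -c. */
int enable_core_dumps(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit(RLIMIT_CORE)");
        return -1;
    }
    return 0;
}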
Several signals can dump core, see signal(7). A common cause is a segmentation violation, like when you dereference a NULL or bad pointer, which gives a SIGSEGV signal that (often) dumps a core file in the current directory.
See also gcore(1).

ptrace: get imagebase of tracee?

I am on Ubuntu 13.10 and have this little stripped+packed ELF file. I need to dump various pieces of information from its process in an automated way, so I hacked together a tiny tracer that traces my process, similar to strace. Three questions arose:
1) After attaching to my process, how can I get its image base?
2) Where does the process break first? Apparently it is not the entry point (EP) of the program.
3) Is there any way I can be notified when a .so/.lib file is loaded? GDB can do this somehow, I think.
The first question really is the most important one. Any help is appreciated.
1) /proc/<PID>/maps contains a list of everything the process has mapped and from where, including pages mapped from the executable; a sketch of parsing it for the image base follows after this list. By reading the executable's ELF headers you should be able to figure out where .text is.
2) Execution of a dynamically linked binary typically starts in an interpreter. The INTERP program header in the ELF executable (dump it with readelf -e) holds its name, and it is the interpreter's entry point where execution starts. Typically it is the runtime linker ld-<some-variant>.so, which maps in the executable's sections and may also map required shared libraries; see the second sketch below.
3) GDB has fairly detailed knowledge of how the runtime linker is implemented, so it is able to intercept dynamic object loading by setting breakpoints in the right places. You can do the same; dlopen() seems like a good candidate for an interception point. As noted in #2, shared objects may have been pre-loaded before the executable gets control.
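For point 1), a rough sketch (error handling trimmed, helper name is mine) of recovering the image base by matching /proc/<pid>/exe against the entries of /proc/<pid>/maps, which are sorted by address:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
/* Return the lowest address at which the process's main executable is
   mapped, or 0 on failure. */
unsigned long get_image_base(pid_t pid)
{
    char exe_link[64], exe_path[4096], maps_path[64], line[4096];
    snprintf(exe_link, sizeof exe_link, "/proc/%d/exe", (int)pid);
    ssize_t n = readlink(exe_link, exe_path, sizeof exe_path - 1);
    if (n < 0)
        return 0;
    exe_path[n] = '\0';
    snprintf(maps_path, sizeof maps_path, "/proc/%d/maps", (int)pid);
    FILE *maps = fopen(maps_path, "r");
    if (!maps)
        return 0;
    unsigned long base = 0;
    while (fgets(line, sizeof line, maps)) {
        if (strstr(line, exe_path)) {        /* first mapping of the exe */
            base = strtoul(line, NULL, 16);  /* start of the address range */
            break;
        }
    }
    fclose(maps);
    return base;
}
For point 2), a similarly rough sketch (64-bit ELF assumed, error handling trimmed) of reading the entry point and the PT_INTERP interpreter name straight from the binary, which is what readelf -e shows:
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
/* Usage: ./a.out /path/to/binary */
int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;
    Elf64_Ehdr eh;
    fread(&eh, sizeof eh, 1, f);
    printf("e_entry: %#lx\n", (unsigned long)eh.e_entry);
    Elf64_Phdr *ph = malloc(eh.e_phnum * sizeof *ph);
    fseek(f, (long)eh.e_phoff, SEEK_SET);
    fread(ph, sizeof *ph, eh.e_phnum, f);
    for (int i = 0; i < eh.e_phnum; i++) {
        if (ph[i].p_type == PT_INTERP) {     /* interpreter pathname */
            char interp[256] = {0};
            fseek(f, (long)ph[i].p_offset, SEEK_SET);
            fread(interp, 1, sizeof interp - 1, f);
            printf("interpreter: %s\n", interp);
        }
    }
    free(ph);
    fclose(f);
    return 0;
}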

Address of instruction causing SIGSEGV in external program

I want to get the address of the instruction that causes an external program to SIGSEGV. I tried using ptrace for this, but I'm getting an EIP from kernel space (probably the default signal handler?). How is GDB able to get the correct EIP?
Is there a way to make GDB provide this information using some API?
edit:
I don't have the sources of the program, only the binary executable. I need automation, so I can't simply use "run" and "info registers" in GDB. I want to implement "info registers" in my own mini-debugger :)
You can attach to a process using ptrace. I found an article at Linux Gazette.
It looks like you will want PTRACE_GETREGS for the registers. You will want to look at some example code like strace to see how it manages signal handling and such. It looks to me from reading the documentation that the traced child will stop at every signal and the tracing parent must wait() for the signal from the child then command it to continue using PTRACE_CONT.
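A minimal sketch of that flow (x86-64 assumed, error handling omitted; a real tracer would loop, inspect each stop signal and forward uninteresting ones with PTRACE_CONT):
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
/* Attach to pid, let it run until the next signal stop, and print the
   stopping signal together with the instruction pointer. */
void report_rip(pid_t pid)
{
    struct user_regs_struct regs;
    int status;
    ptrace(PTRACE_ATTACH, pid, NULL, NULL);
    waitpid(pid, &status, 0);                 /* wait for the attach stop */
    ptrace(PTRACE_CONT, pid, NULL, NULL);     /* run until the next signal */
    waitpid(pid, &status, 0);
    if (WIFSTOPPED(status)) {
        ptrace(PTRACE_GETREGS, pid, NULL, &regs);
        printf("stopped by signal %d at rip=%llx\n",
               WSTOPSIG(status), (unsigned long long)regs.rip);
    }
    ptrace(PTRACE_DETACH, pid, NULL, NULL);
}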
Compile your program with -g, run gdb <your_app>, type run and the error will occur. After that use info registers and look in the rip register.
You can use objdump -D <your_app> to get some more information about the code at that position.
You can enable core dumps with ulimit -c unlimited before running your external program.
Then you can examine the core dump file after a crash using gdb /path/to/program corefile
Because it is a binary not compiled with debugging options, you will have to view details at the register and machine-code level.
Try making a core dump, then analyse it with gdb. If by 'automate' you meant making gdb run all your commands at one touch of a key, gdb can do that too. Type your commands into a file and look into the user-defined section of the manual; gdb can handle canned command sequences.

What is a good way to dump a Linux core file from inside a process?

We have a server (written in C and C++) that currently catches a SEGV and dumps some internal info to a file. I would like to generate a core file and write it to disk at the time we catch the SEGV, so our support reps and customers don't have to fuss with ulimit and then wait for the crash to happen again in order to get a core file. We have used the abort function in the past, but it is subject to the ulimit rules and doesn't help.
We have some legacy code that reads /proc/pid/map and manually generates a core file, but it is out of date, and doesn't seem very portable (for example, I'm guessing it would not work in our 64 bit builds). What is the best way to generate and dump a core file in a Linux process?
Google has a library, google-coredumper, for generating core dumps from inside a running process. This should ignore ulimit and other mechanisms.
The documentation for the call that generates the core file is here. According to the documentation, it seems that it is feasible to generate a core file in a signal handler, though it is not guaranteed to always work.
I saw pmbrett's post and thought "hey, that's cool", but couldn't find that utility anywhere on my system (Gentoo).
So I did a bit of prodding, and discovered GDB has this option in it.
gdb --pid=4049 --batch -ex gcore
Seemed to work fine for me.
It's not, however, very useful because it traps the very lowest function that was in use at the time, but it still does a good job apart from that (with no memory limitations, I dumped a 350 MB snapshot of a Firefox process with it).
Try using the Linux command gcore
usage: gcore [-o filename] pid
You'll need to use system() (or exec()) and getpid() to build up the right command line to call it from within your process.
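A minimal sketch of that, assuming the gcore utility (shipped with gdb) is installed and that /tmp/crash is an acceptable output prefix (both the helper name and the path are placeholders):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
/* Ask gcore to dump the calling process. Note that system() is not
   async-signal-safe, so calling this from a SEGV handler is best-effort. */
void dump_self_with_gcore(void)
{
    char cmd[128];
    snprintf(cmd, sizeof cmd, "gcore -o /tmp/crash %d", (int)getpid());
    system(cmd);
}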
Some possible solutions^W ways of dealing with this situation:
Fix the ulimit!!!
Accept that you don't get a core file and run inside gdb, scripted to do a "thread apply all bt" on SIGSEGV
Accept that you don't get a core file and acquire a stack trace from within the application. The Stack Backtracing Inside Your Program article is pretty old, but it should be possible these days too.
You can also change the ulimit from within your program with setrlimit(2). Like the ulimit shell command, this can lower limits, or raise them as far as the hard limit allows. At startup, call setrlimit() to allow core dumping, and you're fine.
I assume you have a signal handler that catches SEGV, for example, and does something like print a message and call _exit(). (Otherwise, you'd have a core file in the first place!) You could do something like the following.
void my_handler(int sig)
{
    /* ... existing logging / cleanup ... */
    /* needs <signal.h>, <sys/resource.h>, <unistd.h> */
    if (wantCore_ && !fork()) {
        /* in the child: raise the soft core limit to the hard limit,
           restore the default disposition and re-raise the signal */
        struct rlimit rl;
        getrlimit(RLIMIT_CORE, &rl);
        rl.rlim_cur = rl.rlim_max;   /* ulimit -Sc unlimited */
        setrlimit(RLIMIT_CORE, &rl);
        sigset(sig, SIG_DFL);        /* reset default handler */
        raise(sig);                  /* doesn't return, generates a core file */
    }
    _exit(1);
}
Use the backtrace and backtrace_symbols glibc calls to get the trace; just keep in mind that backtrace_symbols uses malloc internally, so in case of heap corruption it might fail.
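A minimal sketch; backtrace_symbols_fd is used instead of backtrace_symbols precisely because it does not call malloc, which matters inside a crash handler:
#include <execinfo.h>
#include <unistd.h>
/* Write the current call stack to stderr without allocating. */
void dump_backtrace(void)
{
    void *frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
}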
system("kill -6 <pid>") -- that is, send yourself SIGABRT (signal 6), building the command string with your own PID from getpid().
I'd give it a try if you are still looking for something.
