addr2line translates addresses into file names and line numbers. I am still a beginner in debugging and have some questions about addr2line.
If I am debugging a certain .so (binary) file, how does the tool locate its source file (where does it get it from?), and what happens if the source doesn't exist?
What is the relation between an address in a binary and a line number in its source that lets addr2line do this kind of mapping?
In general, addr2line works best on ELF executables or shared libraries that contain debug information. That debug information is emitted by the compiler when you pass -g (or -g2, etc.) to GCC. It notably provides a mapping between machine addresses and source locations (source file name, line number, column number), along with function names, variable names, call stack frame organization, and so on. Today the debug information is in DWARF format (and is also processed by the gdb debugger, the libbacktrace library, and other tools). Notice that the debug information contains source file paths, not the source files themselves, so addr2line will happily print a path that no longer exists on your machine.
In practice, you can (and often should) pass the -g (or -g2) debugging option to GCC even together with optimization flags like -O2. In that case the debug information is slightly less precise but still practically usable. In some cases stack frames may disappear (inlined function calls, tail call optimizations, ...).
You can use the strip(1) utility to remove debug information (and other symbol tables, etc.) from an ELF executable.
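For instance, a minimal session could look like this (the program name, the address, and the output shown are illustrative):
$ gcc -g -O0 -o myprog myprog.c
$ addr2line -e myprog -f -C 0x1149
main
/home/user/myprog.c:4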
Related
I am on Ubuntu 13.10 and have this little stripped+packed ELF file. I need to dump various pieces of information from its process in an automated way, so I hacked together a tiny tracer, similar to strace, that traces my progress. Three questions arose:
1) After attaching to my process, how can I get its image base?
2) Where does the process break first? Apparently it is not the EP (entry point) of the program.
3) Is there any way I can be notified when a .so/.lib file is loaded? GDB can do this somehow, I think.
The first question really is the most important one. Any help is appreciated.
1) /proc/<PID>/maps contains a list of everything the process has mapped and from where, including the pages mapped from the executable itself. By reading the executable's ELF headers you should be able to figure out where .text is; see the sketch after this list.
2) Execution of a dynamically linked binary typically starts in an interpreter. The INTERP program header of the ELF executable (dump it with readelf -e) names it, and execution starts at the interpreter's entry point. Typically the interpreter is the runtime linker, ld-<some-variant>.so. It maps in the executable's sections and may also map required shared libraries.
3) GDB has fairly detailed knowledge of how the runtime linker is implemented, so it is able to intercept dynamic object loading by setting breakpoints in the right places. You can do the same; dlopen() seems like a good candidate for an interception point. As I noted in #2, shared objects may have been pre-loaded before the executable gets control.
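For #1, a minimal sketch of the maps-scanning approach (the PID 1234 and the binary path are placeholders):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/1234/maps", "r");   /* 1234 is a placeholder PID */
    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f)) {
        /* lines look like: 00400000-00452000 r-xp 00000000 08:02 173521 /usr/bin/target */
        if (strstr(line, "/usr/bin/target"))   /* path of the traced binary */
            fputs(line, stdout);               /* start address of the first hit is the image base */
    }
    fclose(f);
    return 0;
}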
The Wikipedia page on Core dump says
In Unix-like systems, core dumps generally use the standard executable image format:
a.out in older versions of Unix,
ELF in modern Linux, System V, Solaris, and BSD systems,
Mach-O in OS X, etc.
Does this mean a core dump is executable by itself? If not, why not?
Edit: Since @WumpusQ.Wumbley mentions coredump_filter in a comment, perhaps the above question should be: can a core dump be produced such that it is executable by itself?
In older Unix variants the default was to include both the text and the data in the core dump, but the dump was in the a.out format, not ELF. Today's default behavior (on Linux for sure; I'm not 100% certain about the BSD variants, Solaris, etc.) is to produce the core dump in ELF format without the text sections, but that behavior can be changed.
However, a core dump cannot be executed directly in any case without some help. The reason is that two things are missing from a plain core file: one is the entry point, the other is code that restores the CPU state to the state at (or just before) the moment the dump occurred (and by default the text sections are missing too).
In AIX there used to be a utility called undump, but I have no idea what happened to it. It doesn't exist in any standard Linux distribution I know of. As @WumpusQ mentioned, there is an attempt at a similar project for Linux, but that project is not complete and doesn't restore the CPU state to the original state. It is, however, still good enough for some specific debugging cases.
It is also worth mentioning that there are other ELF-formatted files that cannot be executed either and which are not core files, such as object files (compiler output) and .so (shared object) files. Those require a linking stage before they can be run, to resolve external addresses.
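On Linux, which mappings end up in the dump is controlled per process through coredump_filter, as documented in core(5). A rough sketch (the PID is a placeholder, and 0x33 is merely the usual default):
$ cat /proc/1234/coredump_filter
00000033
$ echo 0x3f > /proc/1234/coredump_filter   # bits 2 and 3 add the file-backed (text) mappings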
I emailed this question to the creator of the undump utility for his expertise, and got the following reply:
As mentioned in some of the answers there, it is possible to include the code sections by setting the coredump_filter, but it's not the default for Linux (and I'm not entirely sure about BSD variants and Solaris). If the various code sections are saved in the original core dump, there is really nothing missing in order to create the new executable. It does, however, require some changes in the original core file (such as including an entry point and pointing that entry point to code that will restore CPU registers). If the core file is modified in this way it will become an executable and you'll be able to run it. Unfortunately, though, some of the state is not going to be saved, so the new executable will not be able to run directly. Open files, sockets, pipes, etc. are not going to be open and may even point to other FDs (which could cause all sorts of weird things). However, it will most probably be enough for most debugging tasks, such as running small functions from gdb (so that you don't get a "not running an executable" error).
As others have said, I don't think you can execute a core dump file without the original binary.
In case you're interested in debugging the binary (and it has debugging symbols included, in other words it is not stripped), then you can run gdb binary core.
Inside gdb you can use the bt command (backtrace) to get the stack trace from the moment the application crashed.
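A typical session might look like this (the program name is a placeholder; some distributions name the core file differently or route it through systemd-coredump):
$ ulimit -c unlimited   # allow core files to be written
$ ./myprog              # crashes and leaves a core file
$ gdb ./myprog core
(gdb) bt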
I want to open a shared object as a data file and perform a verification check on it. The verification is a signature check, and I sign the shared object. If the verification is successful, I would like to load the currently opened shared object as a proper shared object.
First question: is it possible to call dlopen and load the shared object as a data file during the signature check so that code is not executed? According to the man pages, I don't believe so since I don't see a flag similar to RTLD_DATA.
Since I have the shared object open as a data file, I have the descriptor available. Upon successful verification, I would like to pass the descriptor to dlopen so the dynamic loader loads the shared object properly. I don't want to close the file then re-open it via dlopen because it could introduce a race condition (where the file verified is not the same file opened and executed).
Second question: how does one pass an open file to dlopen using a file descriptor so that dlopen performs customary initialization of the shared object?
On Linux, you probably could dlopen some /proc/self/fd/15 file (for file descriptor 15).
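A minimal sketch of that trick, assuming the descriptor was opened earlier for the signature check (plugin.so and the verification step are placeholders; link with -ldl on older glibc):
#include <dlfcn.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("plugin.so", O_RDONLY);       /* path is illustrative */
    if (fd < 0) { perror("open"); return 1; }

    /* ... run the signature verification on fd here ... */

    char path[64];
    snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
    void *h = dlopen(path, RTLD_NOW);           /* the loader opens the same inode */
    if (!h) { fprintf(stderr, "dlopen: %s\n", dlerror()); close(fd); return 1; }

    /* ... look up symbols with dlsym(h, "name") as usual ... */
    dlclose(h);
    close(fd);
    return 0;
}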
RTLD_DATA does not seem to exist, so if you want it you would have to patch your own dynamic loader. Doing that within musl libc might be less hard. I still don't understand why you need it.
You have to trust the dlopen-ed plugin somehow (and it will run its constructor functions at dlopen time).
You could analyze the shared object plugin before dlopen-ing it by using some ELF parsing library, perhaps libelf or libbfd (from binutils), but I still don't understand what kind of analysis you want to make, and you really should explain that; in particular, what happens if the plugin is indirectly linked to some badly behaving software? In other words, you should explain more about your verification step. Notice that a shared object could overwrite itself...
Alternatively, don't use dlopen and just mmap your file (you'll need to parse some ELF and process relocations; see elf(5) and Levine's Linkers and Loaders for details, and look into the source code of your ld.so, e.g. in GNU glibc).
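A hedged sketch of just the first step of that route, mapping the file and sanity-checking the ELF header (64-bit only; the file name is illustrative):
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("plugin.so", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    Elf64_Ehdr *eh = base;                      /* the ELF header sits at offset 0 */
    if (memcmp(eh->e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        return 1;
    }
    printf("type=%u machine=%u entry=%#lx\n",
           eh->e_type, eh->e_machine, (unsigned long)eh->e_entry);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}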
Perhaps using some JIT generation techniques might be useful (you would JIT generate code from some validated data), e.g. with GCCJIT, LLVM, or libjit or asmjit (or even LuaJit or SBCL) etc...
And if you have two file descriptors to the same shared object you probably won't have any race conditions.
Another option is to build your own ad-hoc static C or C++ source code analyzer (perhaps as a GCC plugin that you provide). Such a plugin might (with months, or perhaps years, of development effort) check some property of the user's C++ code. Beware of Rice's theorem, which limits what any static source code analyzer can decide. Your program could then (like my manydl.c does, or like RefPerSys will soon do, in mid 2020, or like the obsolete GCC MELT did a few years ago) take the user's C++ code as input, run some static analysis on it (e.g. using your GCC plugin), compile it into a temporary shared object, and dlopen that shared object. Also read Drepper's paper How to Write Shared Libraries.
I have been reading John R. Levine's Linkers and Loaders, and I read that the properties of an object file will include one or more of the following:
file should be linkable
file should be loadable
file should be executable
Now, considering this example:
#include <stdio.h>

int main() {
    printf("testing\n");
    return 0;
}
Which I would compile and link with:
$ gcc -c t.c
$ gcc -o t t.o
I tried inspecting t.o using objdump, and its type shows up as REL. Which of these properties does t.o satisfy? I believe it is linkable but not executable. I would have believed that it is not loadable (unless you create an .so file from the .o file); however, the type REL means it is supposed to be relocated, and relocation occurs only in the context of loading, so I'm confused here.
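For reference, the file types can be checked like this (output abridged; on a PIE-by-default toolchain the linked binary reports DYN instead of EXEC):
$ readelf -h t.o | grep Type
  Type: REL (Relocatable file)
$ readelf -h t | grep Type
  Type: EXEC (Executable file)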
My doubts summarized:
Are ".o" files loadable?
What are good resources on the sections present in ".o" and ".so" files, and the differences between them?
An object file (i.e., a file with the .o extension) is not loadable. This is because it lacks critical information about how to resolve all the symbols within it: in this case, the printf symbol in particular would need additional information. (C compilers do not bind library identities into the object files they create, which is occasionally even useful.)
When you link the object file into a shared library (.so), you are adding that binding. Typically, you're also grouping a number of object files together and resolving references between them (plus a few more esoteric things). That then makes the result possible to load, since the loader can then just do resolution of references and loading of dependencies that it doesn't already know about.
Going from there to an executable is typically just a matter of adding the OS-defined program bootstrap. This is a small piece of code that the OS calls to start the program running; it typically loads the rest of the program and its dependencies and then calls main() with information about the arguments. (It is also responsible for exiting cleanly if main returns.)
Just to set the context, this link states something similar (emphasis for readability only):
A file may be linkable, used as input by a link editor or linking loader. It may be executable, capable of being loaded into memory and run as a program; loadable, capable of being loaded into memory as a library along with a program; or any combination of the three.
A .o file is a linker object file, which according to this definition is not executable and is definitely linkable. Loadable is a tougher call, but since .o files are not loadable without some decidedly non-portable trickery, I'd say the spirit is that they are not loadable.
I have a stripped ld.so that I want to replace with the unstripped version (so that valgrind works). I have made sure that I have the same version of glibc and the same cross compiler.
I have compiled the shared object; calling 'file' on it shows that it is compiled correctly (the only differences with the original being the 'unstripped' tag and it being about 15% bigger). Unfortunately, it then causes a kernel panic (unable to init) on start up. Stripping the newly compiled .so, readelf-ing it, and diff-ing it against the original shows that there are extra symbols in the new version of the .so. All of the old symbols are still present, so what I don't understand is why the kernel panics with those extra symbols there.
I would expect the extra symbols to have no effect on kernel start up, as they should never be called, so why do I get a kernel panic?
NB: To be clear - I will still need to investigate why there are extra symbols, but my question is about why these unused symbols cause problems.
The kernel (assuming Linux) doesn't depend on or use ld.so in any way, shape or form. The reason it panics is most likely that it can't exec any of the user-level programs (such as /bin/init and /bin/sh), which do use ld.so.
As to why your init doesn't like your new ld.so, it's hard to tell. One common mistake is to try to replace ld.so with the contents of /usr/lib/debug/ld-X.Y.so. While that file looks like it's not much different from the original /lib/ld-X.Y.so, it is in fact very different, and can't be used to replace the original, only to debug the original (/usr/lib/debug/ld-X.Y.so usually contains only debug sections, but none of code and data sections of /lib/ld-X.Y.so, and so attempts to run it usually cause immediate SIGSEGV).
Perhaps you can set up a chroot mimicking your embedded environment and run /bin/ls in it? The error (or core dump) this produces will likely tell you what's wrong with your ld.so.
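A rough sketch of such a chroot (library names and directories vary by architecture and distribution):
$ mkdir -p /tmp/jail/bin /tmp/jail/lib
$ cp /bin/ls /tmp/jail/bin/
$ ldd /bin/ls        # lists the libraries and the ld.so that /bin/ls needs
$ cp <each listed library and your new ld.so> /tmp/jail/lib/
$ sudo chroot /tmp/jail /bin/ls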