How to modify a binary while it is running in gdb - linux

Edit: The actual problem is with the method by which the binary is updated and isn't due to an issue with gdb. Please see the answer below for details.
Original question:
Somewhat recently, I became unable to compile a program while gdb is running it and stopped at a breakpoint. Trying to write to the binary results in a "text file busy" error.
This is on Ubuntu 16.04 LTS 64-bit, kernel 4.4.0-75.
I don't think I'm looking for the right thing, as a few searches for "gdb text file busy" and similar aren't yielding any results. The gdb manual specifically mentions that this behavior (recompiling while gdb is running) is supported, and indeed I have done this many times previously.
Would appreciate any pointers on what has changed and how to prevent this from happening.

Some further searching turned up this excellent post https://unix.stackexchange.com/a/188041/10847 which explains that the method by which the binary is updated is what matters here. In this case, the build system copies the binary using cp a b, which fails: plain cp truncates and rewrites the destination in place, and the kernel refuses to write to a file that is currently being executed ("text file busy", ETXTBSY). cp -f a b, by contrast, unlinks b when the open fails and then creates a fresh file, allowing gdb to continue debugging the old binary (the old inode) while the new one is written to disk.

Related

How to recover a gas assembly file that was accidentally overwritten

This is an open ended question, but essentially, I wrote a program in x86 assembly and compiled it to an executable. I accidentally ran the command cp program program.s while attempting to move files around, and I overwrote my asm source code with the binary. I am trying to recover the source file in its original form
I want to note that I'm also working in a WSL Linux environment and using VS Code, so maybe it's possible to recover the file directly through other means; I don't know if VS Code keeps some kind of cache for cases like this, so please let me know if that train of thought is viable. Still, I figure my best bet is to disassemble the binary.
I know I can do objdump -d program and get the assembly, but it is not in gas form, and there is a lot of other information in this output, so recovering the original .s file from it would take a lot of manual labor. Are there any better ways to disassemble an executable back into something close to the original assembly file?
Thanks

Investigating why an installed binary hangs [duplicate]

This question already has answers here:
How should strace be used?
(12 answers)
Closed 4 years ago.
I installed a package on my linux machine. When I run the installed binary, it hangs:
$ installedBinary --help
is supposed to return a list of command-line options. Instead, the program hangs and doesn't respond; it only exits when I press Ctrl+C.
How can I investigate this problem?
Start with strace -ffo traces ./installedBinary --help, then inspect the traces.* log files, in particular the last lines, which may show what the program is blocked on. See strace(1).
You can also do that from htop. Locate the blocked thread and press s for strace and l for lsof.
Maxim Egorushkin's answer is a good one. But on Linux, most programs have some documentation (often, at least a man page, see man(1) & man(7)), and most programs are free software. And the documentation should tell a lot more than --help does (the output of --help is a short summary of the documentation; for example sed(1) explains a lot more than sed --help). Maybe the behavior of your program is explained in the documentation (e.g. depends upon some environment variable).
So you should also read the documentation of your installedBinary, and you probably can get its source code to study and recompile it. If you have the source code and have built it yourself, you can usually compile it with DWARF debug information (e.g. by adding -g to some CFLAGS in a Makefile) and run it under gdb.
Notice that even on Linux you might have malware (e.g. for Debian or Ubuntu you might have found a .deb source which is used to publish malware; this is unlikely, but not impossible). Trusting a binary package provider is a social issue, not a technical one. Your installedBinary might (in principle) be bad enough to put you in trouble, but it is most probably just an ordinary executable.
Perhaps your installedBinary is always waiting for some input from its stdin (such a behavior might be unusual but is not forbidden) or from some other source. Then you might try installedBinary < /dev/null and even installedBinary --help < /dev/null

Where the heck is that core dump?

TLDR: Can't find the core dump even after setting ulimit and looking into apport. Sick of working so hard to get a single backtrace. Questions on the bottom.
I'm having a little nightmare here. I'm currently doing some C coding, which in my case always means a metric ton of segfaults. Most of the time I'm able to reproduce a bug with little to no trouble, but today I hit a wall.
My code segfaults inconsistently, and I need the core dump that the "Segmentation fault (core dumped)" message is talking about.
So I'm going on a hunt for the core dump of my precious little a.out. And that is when I start to pull my hair out.
My intuition tells me that core dump files should be stored somewhere in the working directory, which obviously isn't the case. After reading this, I happily typed:
ulimit -c 750000
And... nothing. The shell's output told me that the program did dump core, but I can't find the file in the cwd. So after reading this I learnt that I should look into apport and core_pattern.
Changing core_pattern seems a bit much just to get one core dump; I really don't want to mess with it, because I know I will forget about it later, and I tend to mess these things up really badly.
Apport has this magical property of choosing which core dumps are valuable and which are not. Its log told me...
ERROR: apport (pid 7306) Sun Jan 3 14:42:12 2016: executable does not belong to a package, ignoring
...that my program isn't good enough for it.
Where is this core dump file?
Is there a way to get a core dump a single time manually, without having to set everything up? I rarely need those as files per se, GDB alone is enough most of the time. Something like let_me_look_at_the_core_dump <program name> would be great.
I'm already balding a little, so any help would be appreciated.
So, today I learnt:
ulimit resets when the shell is reopened.
I had made a big mistake in my .zshrc: zsh nested and reopened itself after certain commands, silently resetting the limit.
After fiddling a bit with this I also found a solution to the second problem: a shell script.
ulimit -c 750000
./a.out
gdb ./a.out ./core
ulimit -c 0
echo "profit"

Is a core dump executable by itself?

The Wikipedia page on Core dump says
In Unix-like systems, core dumps generally use the standard executable
image-format:
a.out in older versions of Unix,
ELF in modern Linux, System V, Solaris, and BSD systems,
Mach-O in OS X, etc.
Does this mean a core dump is executable by itself? If not, why not?
Edit: Since @WumpusQ.Wumbley mentions coredump_filter in a comment, perhaps the above question should be: can a core dump be produced such that it is executable by itself?
In older Unix variants the default was to include the text as well as the data in the core dump, and the dump was also written in the a.out format, not ELF. Today's default behavior (on Linux for sure; not 100% sure about the BSD variants, Solaris, etc.) is to write the core dump in ELF format without the text sections, but that behavior can be changed.
However, a core dump cannot be executed directly in any case without some help. The reason is that two things are missing from a simple core file: one is the entry point, the other is code to restore the CPU state to the state at, or just before, the dump occurred (and by default the text sections are missing too).
In AIX there used to be a utility called undump, but I have no idea what happened to it, and it doesn't exist in any standard Linux distribution I know of. As mentioned in the comments (@WumpusQ.Wumbley), there is an attempt at a similar project for Linux, but that project is not complete and doesn't restore the CPU state to the original state. It is, however, still good enough for some specific debugging cases.
It is also worth mentioning that there are other ELF-formatted files that cannot be executed and yet are not core files, such as object files (compiler output) and .so (shared object) files. Those require a linking stage before they can be run, to resolve external addresses.
I emailed this question to the creator of the undump utility for his expertise, and got the following reply:
As mentioned in some of the answers there, it is possible to include
the code sections by setting the coredump_filter, but it's not the
default for Linux (and I'm not entirely sure about BSD variants and
Solaris). If the various code sections are saved in the original
core-dump, there is really nothing missing in order to create the new
executable. It does, however, require some changes in the original
core file (such as including an entry point and pointing that entry
point to code that will restore CPU registers). If the core file is
modified in this way it will become an executable and you'll be able
to run it. Unfortunately, though, some of the state is not going to
be saved, so the new executable will not be able to run directly. Open
files, sockets, pipes, etc. are not going to be open and may even point
to other FDs (which could cause all sorts of weird things). However,
it will most probably be enough for most debugging tasks, such as running
small functions from gdb (so that you don't get the "not running an
executable" errors).
As others have said, I don't think you can execute a core dump file without the original binary.
If you're interested in debugging the binary (and it has debugging symbols included, in other words it is not stripped), you can run gdb binary core.
Inside gdb you can use the bt (backtrace) command to get the stack trace from when the application crashed.

Execute code in process's stack, on recent Linux

I want to use ptrace to write a piece of binary code in a running process's stack.
However, this causes segmentation fault (signal 11).
I have made sure the %eip register stores a pointer to the first instruction I want to execute on the stack. I guess there is some mechanism by which Linux protects stack data from being executed.
So, does anyone know how to disable this protection for the stack? Specifically, I'm trying this on Fedora 15.
Thanks a lot!
After reading all the replies, I tried execstack, which really does make code on the stack executable. Thank you all!
This is probably due to the NX bit on modern processors. You may be able to disable this for your program using execstack.
http://advosys.ca/viewpoints/2009/07/disabling-the-nx-bit-for-specific-apps/
http://linux.die.net/man/8/execstack
As already mentioned, it is due to the NX bit. But it is possible. I know for sure that gcc itself uses this for trampolines (a workaround used to implement, e.g., function pointers to nested functions). I haven't looked at the details, but I would recommend a look at the gcc code: search the sources for the architecture-specific macro TARGET_ASM_TRAMPOLINE_TEMPLATE and you should see how they do it.
EDIT: A quick google for that macro gave me the hint: mprotect is used to change the permissions of the memory page. Also be careful when you generate code and then execute it: you may additionally have to flush the instruction cache.
