I'm debugging a remote Linux process with gdbserver, and I want to put a breakpoint in some function.
The problem is that the process uses ASLR, so it loads at a different address on every run. I can look up the base address in /proc/PID/maps and calculate where the function is located, but this is tedious.
Is there a way to set a breakpoint in GDB at a rebased address, so that GDB automatically accounts for the process's base address?
Is there a way to put a breakpoint
All the ways you can put a breakpoint in GDB are documented here.
You want something like $image_base(myprogram) + image_offset, which is not a supported address location.
What you could do is write a shell wrapper which computes the desired address and invokes GDB. Something along the lines of:
#!/bin/bash
PID="$1"  # process we'll attach to
IMAGE_BASE="0x$(grep myprogram "/proc/$PID/maps" | head -n 1 | sed -e 's/-.*//')"
IMAGE_OFFSET=0x1234 # use whatever offset corresponds to your function
exec gdb -p "$PID" -ex "break *($IMAGE_BASE+$IMAGE_OFFSET)"
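To find IMAGE_OFFSET, you can usually read the function's link-time address straight from the symbol table; for a typical PIE linked at base 0 that value is exactly the offset from the mapped base (a sketch, assuming an unstripped binary and a function named myfunc, both of which are placeholders):
# prints myfunc's link-time address, i.e. its offset within the image for a base-0 PIE
nm myprogram | awk '$3 == "myfunc" { print "0x" $1 }'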
I'm new to exploit development and looking for advice.
My question is: how can I keep giving input from one terminal and debug my program on another?
I usually use gdb.debug from pwntools when I have a graphical interface, but now I can only SSH into the machine that runs the binary, which means gdb.debug cannot open a new terminal.
I saw a video demonstrating this technique in Vim. How can I achieve that?
gdb.debug should still work if you're using SSH as long as you set context.terminal to the right value (e.g. tmux).
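For example, running your script inside tmux on the remote machine, something like the following sketch should work (./vuln and the breakpoint are placeholders):
from pwn import *

context.terminal = ["tmux", "splitw", "-h"]  # open gdb in a new tmux pane

io = gdb.debug("./vuln", gdbscript="break main\ncontinue")
io.interactive()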
How to use pwnlib.gdb
Here's a copy and paste of a response to a similar question:
You can use the pwnlib.gdb module to interface with gdb.
You can use the gdb.attach() function:
From the docs:
from pwn import *

bash = process('bash')
# Attach the debugger
gdb.attach(bash, '''
set follow-fork-mode child
break execve
continue
''')
# Interact with the process
bash.sendline('whoami')
or you can use gdb.debug():
# Create a new process, and stop it at 'main'
io = gdb.debug('bash', '''
# Wait until we hit the main executable's entry point
break _start
continue
# Now set breakpoint on shared library routines
break malloc
break free
continue
''')
# Send a command to Bash
io.sendline("echo hello")
# Interact with the process
io.interactive()
The pwntools template contains code to get you started with gdb debugging. You can create the template by running pwn template ./binary_name > template.py. Then pass the GDB arg when you run template.py to debug: ./template.py GDB.
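The generated template follows roughly this pattern (a simplified sketch, not the exact generated code):
from pwn import *

exe = context.binary = ELF('./binary_name')

gdbscript = '''
break main
continue
'''

def start(argv=[], *a, **kw):
    # args.GDB is truthy when the script is run as ./template.py GDB
    if args.GDB:
        return gdb.debug([exe.path] + argv, gdbscript=gdbscript, *a, **kw)
    return process([exe.path] + argv, *a, **kw)

io = start()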
If you get [ERROR] Could not find a terminal binary to use., you might need to set context.terminal before you use gdb.
If you're using tmux, the following will automatically open up a gdb debugging session in a new horizontally split window:
context.terminal = ["tmux", "splitw", "-h"]
And to split the screen with the new gdb session window vertically:
context.terminal = ["tmux", "splitw", "-v"]
(To use tmux, install tmux on your machine, then just type tmux to start it. Then run python template.py GDB.)
If none of the above works, you can always just start your script, find the PID with ps aux, and then use gdb -p PID to attach to the running process.
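That last step can also be done from inside your script, since gdb.attach() accepts a PID (1234 below is a placeholder for the PID of the running target):
from pwn import *

context.terminal = ["tmux", "splitw", "-h"]
gdb.attach(1234, gdbscript="break main\ncontinue")  # 1234: PID of the running process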
Vim Explanation
You don't need to use vim to use pwntools's gdb features like the guy did in the video you linked, but here's an explanation of what he did (vim is also a nice tool regardless):
While editing his pwn script in vim, the guy first executed the following command:
:!./%
: enters command mode in vim
! executes a shell command
% is basically the name of the file you're currently editing in vim
So if your exploit script was named template.py, running :!./% in vim would be the same as running ./template.py in your terminal. This just runs the exploit and enters interactive mode.
It's just a shortcut for executing your script from within vim.
Later, the guy also uses :!./% GDB to actually launch the pwntools gdb session. It's the same thing as running python template.py GDB.
I have a memory error that is quite difficult to debug: it only happens once in every few command-line runs, and each run takes about two hours to complete. Because of that, I thought it would be a good idea to create logs like this:
while true; do
    # log each run; stop as soon as valgrind reports an invalid access
    valgrind ./command 2>&1 | tee command.log
    grep -q Invalid command.log && break
done
The problem is that my debug logs and stack traces produced by Valgrind aren't enough, so I decided to add --vgdb-error=0 to the command line. Unfortunately, since Valgrind now stops at startup and waits for a debugger, I need to run the following:
$ gdb ./command
...gdb init string follows...
(gdb) target remote | /usr/lib/valgrind/../../bin/vgdb
Remote debugging using | /usr/lib/valgrind/../../bin/vgdb
relaying data between gdb and process 4361
Reading symbols from /lib/ld-linux.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/ld-linux.so.2
[Switching to Thread 4361]
0x04000840 in ?? () from /lib/ld-linux.so.2
(gdb) continue
Continuing.
How can I script the process so that either Valgrind does not break on startup, or a script keeps attaching to the vgdb processes and tells them to continue until one of the processes terminates abnormally?
The argument to --vgdb-error is the number of errors after which valgrind freezes the program and waits for a debugger to attach. If you set it to 0, it will stop immediately, before running your program. You want --vgdb-error=1, which will run until the first error and then stop; at that point you can attach a debugger.
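If you want the attach step scripted too, one approach (a sketch built from the pieces above; adjust the log name, paths, and polling interval to your setup) is to start valgrind with --vgdb-error=1 in the background, wait until the first error shows up in the log, and then attach a batch-mode gdb through vgdb:
# freeze the program on the first error and wait for a debugger
valgrind --vgdb=yes --vgdb-error=1 ./command > command.log 2>&1 &

# poll the log until the first "Invalid ..." error appears
# (a run that finishes without errors would need to be restarted; not handled here)
until grep -q Invalid command.log; do sleep 10; done

# attach a non-interactive gdb through vgdb and dump a full backtrace
gdb -batch ./command -ex 'target remote | vgdb' -ex 'bt full'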
I have several Linux (Ubuntu 10.04) processes running on my computer. If I start one of them from a terminal, I can see its output there. I have another process that starts a dozen of those processes so that they run in the background. However, I would like to watch the output of one of these processes to see whether it still looks OK and there are no error messages. I know I could send everything to a log file, but that would use too much disk space. So, is there a way to "catch" the output of a running process in Linux using its process ID?
You can attach gdb to your running process and redirect stdout, stderr, or both to a log file whenever you feel like looking at the output. The open() calls below use O_WRONLY (flag value 1) without O_CREAT, so make sure the files exist before redirecting:
gdb -p PID
(gdb) p dup2(open("/tmp/mylogfile-stdout", 1), 1)
(gdb) p dup2(open("/tmp/mylogfile-stderr", 1), 2)
(gdb) detach
(gdb) quit
When you want them to go back to being silent, just do:
gdb -p PID
(gdb) p dup2(open("/dev/null", 1), 1)
(gdb) p dup2(open("/dev/null", 1), 2)
(gdb) detach
(gdb) quit
The 'detach' part is important, otherwise you will kill the process you're attached to when gdb exits. For more details see this question.
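If you do this often, the same redirect can be scripted non-interactively (a sketch; the same caveat about the log file already existing applies, and $PID is a placeholder):
# redirect fd 1 and fd 2 of a running process to a log file without an interactive gdb session
gdb -batch -p "$PID" \
    -ex 'call (int)dup2((int)open("/tmp/mylogfile", 1), 1)' \
    -ex 'call (int)dup2((int)open("/tmp/mylogfile", 1), 2)'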
Use redirection like
yourprogram arg1 arg2 > yourprog.out
or probably even (to redirect both stderr & stdout and run in the background)
yourprogram arg1 arg2 > yourprog.out 2>&1 &
In a different terminal, do
tail -f yourprog.out
With the -f option, the tail command will notice when the file is growing and will display its newest lines.
But I can't see a portable way to redirect the output after the process has started. Maybe screen, batch, at, or cron might help you. Or opening /proc/1234/fd/1 ...
BTW, I am surprised you don't have enough temporary disk space for your output...
And I do like running M-x shell under emacs, and running my program there.
The best way to do this is to check the fd directory, /proc/<PID>/fd.
In my case I have a Python process running on the system, started from a different terminal.
I can see its output like this:
1. Get the process ID:
ps aux | grep python
2. cd into /proc/504951/fd and view (or tail) the files you need:
cd /proc/504951/fd && tail -f *
Is there a way to automatically start a process under gdb on Linux? An equivalent of setting the Image File Execution Options on Windows.
I am trying to debug the start-up phase of a process that is launched from another one.
I would normally move the real program out of the way, and replace it with a script that launches the program under GDB with the same parameters.
#!/bin/bash
exec gdb -args <realprog> "$@"
If that doesn't work due to the output being redirected to file, or something, then try this:
#!/bin/bash
exec xterm -e gdb -args <realprog> "$@"
That should give you a pop-up terminal with GDB running inside.
You don't have to go through all that registry voodoo on Linux :)
Simply:
1) Rename your program
2) Write a shell script that calls gdb with your (renamed) program and passes any arguments you want. Make sure you "chmod +rx" your script.
3) Name the shell script the original name of your program, and put it in the same directory as your program
4) Execute!
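Concretely, the wrapper from steps 2) and 3) might look like this (a sketch, assuming you renamed the real binary to myprog.real; the name and path are placeholders):
#!/bin/bash
# stands in for the original myprog and forwards all arguments to gdb
exec gdb -args /usr/local/bin/myprog.real "$@"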
I've just tried using gdb on BackTrack Linux and I must say that it's awesome. I wonder how gdb in BackTrack is configured to act this way.
When I set a breakpoint, all the register values, a part of the stack, a part of the data section and the next 10-15 instructions to be executed are printed. The same happens when I step or next through the instructions.
I find this amazing and would love to have this on my Ubuntu machine too; how could I go about doing this?
They seem to be using this .gdbinit file:
https://github.com/gdbinit/Gdbinit/blob/master/gdbinit
I'm guessing that this is done using a post command hook:
http://sourceware.org/gdb/current/onlinedocs/gdb/Hooks.html#Hooks
inside of a system wide gdbinit:
http://sourceware.org/gdb/onlinedocs/gdb/System_002dwide-configuration.html
which may or may not reference shell commands and/or use gdb python scripts.
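As a rough illustration of the hook mechanism (a minimal sketch, not the actual gdbinit linked above), a stop hook in ~/.gdbinit can dump the registers, part of the stack, and the upcoming instructions every time execution stops:
# runs automatically every time execution stops (breakpoint hit, step, next, ...)
define hook-stop
    info registers
    x/8xw $sp
    x/15i $pc
end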
To see which gdbinit files your gdb actually reads, try:
strace gdb /bin/echo 2>&1 | grep gdbinit