Write hex in GDB - Linux

I'm in a software security class where we are currently learning about buffer overflows and how they are exploited. I have a program that I know how to exploit, but I can't actually do it because I need to feed it raw hex bytes that my terminal won't let me enter.
I need to write the data generated from:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";'
However, I cannot redirect that output into command line arguments because the program runs interactively. Historically, I have used xclip to copy the payload to the clipboard and then paste it into the running application, but for some reason this particular byte sequence does not survive the trip through xclip (it behaves as though nothing was copied).
For example:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' | xclip -sel clip
If I press Ctrl+V after that, nothing gets pasted. If I simply copy and paste the output from the terminal window, the wrong bytes are pasted (I assume because the payload isn't printable ASCII).
My question is: does GDB have some way for me to insert generated text like this into an interactive, running program?
I'm aware that if the exploitable program took command line arguments, I could do:
run $(perl -e 'print "A"x48; print "\x1b\x88\x04\x08";')
But since it doesn't take its input from CLI arguments, this isn't usable.
Any help would be awesome!

My question is: does GDB have some way for me to insert generated text like this into an interactive, running program?
Your question is based on a misunderstanding: you appear to be under the impression that GDB is somehow intercepting the "paste" you are performing and preventing the characters from being read by the target program.
However, GDB does not intercept any input until and unless you are stopped at a breakpoint (or due to a signal). While your program is running (and reading input), GDB itself is blocked (in the waitpid system call) waiting for something to happen.
So what prevents your program from receiving the control characters? Your terminal emulator does.
OK, how can you arrange for the non-ASCII input? One of three ways (the first two are very similar):
use input from file
use input from named pipe
use gdbserver
Method #1:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' > /tmp/input
gdb ./a.out
(gdb) run < /tmp/input # voila: GDB reads terminal,
# your program reads /tmp/input
Method #2:
mkfifo /tmp/pipe
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' > /tmp/pipe &
# perl will block, waiting for someone to read the pipe
gdb ./a.out
(gdb) run < /tmp/pipe
Both of the above methods will work for "normal" programs (ones that read STDIN), but will fail for programs that read the terminal directly (such as sudo, passwd, gpg).
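A quick way to see the difference (a sketch; passwd here stands in for any program that opens /dev/tty):
echo secret | passwd
# passwd still prompts on the terminal: it reads /dev/tty, not STDIN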
Method #3:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' |
gdbserver :0 ./a.out # gdbserver will print a TCP port, e.g. 4321
# and stop the program at start
# in another window,
gdb ./a.out
(gdb) target remote :4321
# gdb will now attach to gdbserver, you can set breakpoints and continue.
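For example, once attached (a minimal sketch; main is just a placeholder for whichever symbol you care about):
(gdb) break main
(gdb) continue
# the program's STDIN comes from the perl pipeline feeding gdbserver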

Related

Record Bash Interaction, saving STDIN, STDOUT separately

So I want to record my bash interaction, which I know I can do with script or ttyrec, except I want one more feature than they have: saving input (i.e. STDIN) and output (i.e. STDOUT) separately.
So something like the following (where I typed the first "Hello World!"), except of course script takes one [file] argument, not two:
user@pc:~$ script input.txt output.txt
Script started
user@pc:~$ paste > textfile.txt
Hello World!
user@pc:~$ cat textfile.txt
Hello World!
user@pc:~$ exit
Script done
So input.txt looks like:
user@pc:~$ paste > textfile.txt
Hello World!
user@pc:~$ cat textfile.txt
user@pc:~$ exit
And output.txt looks like:
Hello World!
exit
So I want a program like script which saves STDIN and STDOUT separately. Currently, this would be the normal output of script (which I do not want, and need separated):
Script started
user@pc:~$ paste > textfile.txt
Hello World!
user@pc:~$ cat textfile.txt
Hello World!
user@pc:~$ exit
exit
Script done
Does this exist, or is this possible?
Note the usage of the paste command: I had considered filtering the output file on the user@pc:~$ prompt, but in my case (as with paste) that would not work.
empty
empty is packaged for various Linux distributions (it is empty-expect on Ubuntu).
open two terminals
terminal 1: run empty -f -i in.fifo -o out.fifo bash
terminal 1: run tee stdout.log <out.fifo
terminal 2: run stty -icanon -isig eol \001; tee stdin.log >in.fifo
type commands into terminal 2, watch the output in terminal 1
fix the terminal settings afterwards with stty icanon isig -echo
log stderr separately from stdout with exec 2>stderr.log
when finished, exit the bash shell; both tee commands will quit
stdout.log and stdin.log then contain the logs
Some other options:
peekfd
You could try peekfd (part of the psmisc package). It probably needs to be run as root:
peekfd -c pid fd fd ... > logfile
where pid is the process to attach to, -c says to attach to children too, and fd is a list of file descriptors to watch (basically 0, 1, 2). There are various other options to tweak the output.
The logfile will need postprocessing to match your requirements.
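For example, to watch a shell with PID 12345 (a hypothetical PID; substitute your target's) on stdin, stdout and stderr:
sudo peekfd -c 12345 0 1 2 > logfile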
SystemTap and similar
Over on Unix & Linux Stack Exchange, use of the SystemTap tool has been proposed.
However, it is not trivial to configure, and you'll still have to write a module that separates stdin and stdout.
sysdig and bpftrace also look interesting.
LD_PRELOAD / strace / ltrace
Using LD_PRELOAD, you can wrap low-level calls such as write(2).
You can run your shell under strace or ltrace and record the data passed to system calls and library functions (such as write). Lots of postprocessing is needed:
ltrace -f -o ltrace.log -s 10000000000 -e write bash
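As a rough postprocessing sketch (assuming ltrace's default output format, where calls appear as write(1, "...", n); capturing stdin would additionally require tracing read):
grep 'write(1,' ltrace.log > stdout.raw   # writes to stdout
grep 'write(2,' ltrace.log > stderr.raw   # writes to stderr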
patch ttyrec
ttyrec.c is only about 500 lines of fairly straightforward code, and it looks like it would be easy to patch to use multiple logfiles.

BASH Reading prompt from GDB

My intentions are the following. I am debugging an executable compiled with gcc from a .c source file. Let's call the compiled program "foo". When I run this command from my terminal on Mac:
gdb -q ./foo
I get the output:
Reading symbols from ./foo...Reading symbols from /Users/john/Documents....done.
done.
And immediately after that I get a prompt from the shell, looking like so:
(gdb) "Shell waiting for my input command here from keyboard"
At this point I want to automate the input of certain commands like:
break, list, x/x "symbol in .c file", x/s "symbol in .c file", and many more. For this automation I want to use a little bash script; so far I have the following:
#!/bin/bash
# Run gdb with line-buffered output and echo every line it prints
SCRIPT=$1
gstdbuf -oL gdb -q "$SCRIPT" |
while read -r LINE
do
    echo "$LINE"
done
When I execute this bash script, I see the following output in my terminal:
Reading symbols from ./foo...Reading symbols from /Users/john/Documents....done.
done.
But I do not see the:
(gdb) "Shell waiting for my input command here from keyboard"
How can I detect this prompt from the gdb process in my shell script in order to be able to automate the commands I want instead of inputting them manually?
Many Thanks!
(The prompt never shows up in your read loop because "(gdb) " is not followed by a newline, so read keeps waiting for the line to complete.) Rather than scraping the prompt, you can create a file .gdbinit and put the initial commands there. gdb will execute them on startup if you add the following line to $HOME/.gdbinit:
add-auto-load-safe-path /path/to/project/.gdbinit
Now you can place commands into /path/to/project/.gdbinit, like this:
break main
run --foo=bar
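With that in place, starting gdb from the project directory picks the commands up automatically:
cd /path/to/project
gdb -q ./foo    # auto-loads ./.gdbinit: sets the breakpoint, then runs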

Pass File Input and Stdin to gdb

So I want to run a program in gdb with the contents of a file as its input. Then, when EOF is hit, I want to be able to enter user input again. For a normal program in a terminal I can do this with the following command:
(cat input.txt; cat) | ./program
In gdb I can redirect the file like this, but the program just keeps receiving newlines forever after the end of the file has been reached.
(gdb) run < input.txt
It is almost as if stdin was not passed back to the program, similar to what happens if I simply do
(cat input.txt) | ./program
without the second cat. Is this even possible to do in gdb?
You can run the program in one console and attach to it with gdb from another one while it is waiting for input. That way you can enter the program's input in the first console and debug it in the second.
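A minimal sketch of the two-console setup (pgrep -n picks the most recently started process with that name):
# console 1: run the program outside gdb, with the cat trick from the question
(cat input.txt; cat) | ./program
# console 2: attach once the program is blocked reading stdin
gdb -p "$(pgrep -n program)"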

How does Linux/Unix Piping decide when to output? Can it be changed? [duplicate]

Is there a way to run shell commands without output buffering?
For example, hexdump file | ./my_script will only pass input from hexdump to my_script in buffered chunks, not line by line.
Actually, I want to know a general solution for making any command unbuffered.
Try stdbuf, included in GNU coreutils and thus in virtually any Linux distro. This sets the buffer length for input, output and error to zero:
stdbuf -i0 -o0 -e0 command
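Applied to the pipeline from the question (a sketch; -oL, i.e. line buffering, is usually enough and cheaper than fully unbuffered -o0):
stdbuf -oL hexdump file | ./my_script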
The unbuffer command from the expect package disables output buffering:
Ubuntu Manpage: unbuffer - unbuffer output
Example usage:
unbuffer hexdump file | ./my_script
AFAIK, you can't do it without ugly hacks. Writing to a pipe (or reading from it) automatically turns on full buffering, and there is nothing you can do about it :-(. "Line buffering" (which is what you want) is only used when reading from/writing to a terminal. The ugly hacks do exactly this: they connect a program to a pseudo-terminal, so that the other tools in the pipe read/write from that terminal in line-buffered mode. The whole problem is described here:
http://www.pixelbeat.org/programming/stdio_buffering/
The page also has some suggestions (the aforementioned "ugly hacks") for what to do, i.e. using unbuffer or pulling some tricks with LD_PRELOAD.
You could also use the script command to make the output of hexdump line-buffered (hexdump will run in a pseudo-terminal, which tricks it into thinking it is writing its stdout to a terminal rather than to a pipe).
# cf. http://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe/
stty -echo -onlcr
script -q /dev/null hexdump file | ./my_script # FreeBSD, Mac OS X
script -q -c "hexdump file" /dev/null | ./my_script # Linux
stty echo onlcr
If grep is the buffering stage in your pipeline, its (or egrep's) --line-buffered option solves this; no other tools are needed.
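For example (a sketch; note this only line-buffers grep's own output, not the producer's):
./producer | grep --line-buffered pattern | ./my_script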

process exits after getting command from pipe

On Ubuntu, I start a command-line program (GNU Backgammon) and let it read its commands from a named pipe (commandpipe), like so:
$ gnubg -t < commandpipe
from another terminal, I do
$ echo "new game" > commandpipe
This works fine, and a new game is started, but after the program has finished processing that command, the process exits.
How can I prevent the backgammon process from exiting? I would like to continue sending commands to it via the commandpipe.
This is only because you used echo, which exits immediately after echoing; when a program exits, its file descriptors are closed. (OK, echo is not an actual program in bash but a builtin, though that doesn't matter here.) If you wrote an actual interactive program, e.g. with a GUI, and redirected its stdout to the named pipe, you would not have this problem.
The reader gets EOF and thus closes the FIFO, so you need a loop, like this:
$ while (true); do cat myfifo; done | ts
jan 05 23:01:56 a
jan 05 23:01:58 b
And in another terminal:
$ echo a > myfifo
$ echo b > myfifo
Substitute gnubg -t for ts.
The problem is that the write file descriptor is closed, and when the last write descriptor on a pipe is closed, the reading process sees end-of-file.
As a quick hack, you can do this:
cat < z # read process in one terminal
cat > z & # Keep write File Descriptor open. Put in background (or run in new terminal)
echo hi > z # This will close the FD, but not signal the end of input
But you should really be writing in a real programming language where you can control your file descriptors.
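You can get the same effect in the shell by holding a write descriptor open for the whole session (a sketch reusing the question's commandpipe; assumes gnubg -t < commandpipe is already running in another terminal):
exec 3> commandpipe             # hold a write end open in this shell
echo "new game" > commandpipe   # gnubg keeps running: fd 3 still holds the pipe open
exec 3>&-                       # close it when done; gnubg then sees EOF and exits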
To avoid EOF, you could use tail:
tail -f commandpipe | gnubg -t
