Pass File Input and Stdin to gdb - linux

So I want to run a program in gdb with the contents of a file fed to its standard input. Then, once EOF is reached, I want to be able to enter user input again. For a normal program in a terminal I can do this with the following command:
(cat input.txt; cat) | ./program
In gdb I can redirect the file to stdin like this, but the program keeps reading newlines forever after the end of the file has been reached.
(gdb) run < input.txt
It is almost as if stdin was not passed back to the program, similar to what happens if I simply do
(cat input.txt) | ./program
without the second cat. Is this even possible to do in gdb?

You can run the program in one console and attach to it with gdb from another one while it is waiting for input. That way you can enter the program's input in the first console and debug it in the second.
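A minimal sketch of that workflow (the breakpoint location is only illustrative, and pidof assumes the process name is unique):
# Terminal 1: run the program; after input.txt is consumed it waits for typed input
(cat input.txt; cat) | ./program
# Terminal 2: attach to the running process
gdb -p "$(pidof program)"
(gdb) break some_function      # hypothetical function name
(gdb) continue
# Keep typing input in terminal 1; gdb in terminal 2 takes control when the breakpoint hits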

Related

bash script stdin not detected clarification

This kind of got me confused.
This is my bash script:
filename: reader.sh
READ=$(cat)
echo "$READ"
So the first line reads stdin and the second line prints it.
Nevertheless, I understand that when I open my terminal and press keys on my keyboard, they show up in the terminal because the shell connects stdin and stdout to e.g. /dev/pts/0, meaning that file is used as stdin and also as stdout.
Afterwards the shell (once the tty driver sees a return) takes the first word of the command line, which is the utility, looks at the rest of the command line, and passes a list of the arguments to the program being called so it can use them.
Why is it that the above bash script can print the contents of a file through redirected stdin, e.g. ./reader.sh < otherfile, but not just ./reader.sh? I would expect that in the second case whatever is on stdin would be read from /dev/pts/0, since that is also just stdin.
Is it because when the arguments are parsed into a list, the /dev/pts/0 file gets emptied?
When you use
./reader.sh < otherfile
stdin in the script is connected to the file, not /dev/pts/0. cat inherits this stdin, so it reads from the file. With plain ./reader.sh, cat inherits the terminal as stdin instead, so it sits there reading whatever you type until you signal EOF with Ctrl-D; only then does echo print it.
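A quick way to see both behaviours with the corrected reader.sh above (otherfile is any text file you have lying around):
# stdin is the file; the contents are printed immediately
./reader.sh < otherfile
# stdin is the terminal; type a few lines, press Ctrl-D, and they are echoed back
./reader.sh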

Bash Shell Scripting: How can I write a script to run a program which asks for input?

I have a program, porodry, which needs to read a parameters file to run. Suppose the file is called test1. In bash I can run
./porodry
The terminal will show:
please input your parameters file name:
I will type
test1
then the program starts to run and shows some output on the terminal, like
Please input an int number:
then I will type something like:
1110
then the program will keep running.
I want to write a script that will supply the input automatically and save the terminal output to a test1.terminal file.
Please help out!
It depends on what the program really does. If it just reads from standard input, you can simply pipe the input to it and redirect the output to a file:
echo test1 | ./porodry > test1.terminal
If it messes with the terminal, you might need expect.
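If porodry also reads the integer from the example above from stdin (an assumption about how it takes its input), both answers can be supplied as separate lines, for example:
printf '%s\n' test1 1110 | ./porodry > test1.terminal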

How to show full output on linux shell?

I have a program that runs and shows a GUI window. It also prints a lot of things on the shell. I need to view the first thing printed and the last thing printed. The problem is that by the time the program terminates, the scrollback has filled up: if I scroll to the top of the window, the output printed at the start is gone and output printed later in the run is now at the top, so I can't view the first thing printed.
I also tried redirecting with > out.txt, but the file doesn't become readable until I manually close the GUI window. And when the output goes to a file, nothing gets printed on the screen, so I have no way to know whether the program has finished. I can't modify any of the code either.
Is there a way I can see the whole list of text printed on the shell?
Thanks
You can just use the tee command to get output/errors in a file as well as on the terminal:
your-command |& tee out.log
Just keep in mind that output written into a pipe is block-buffered by default (typically around 4 KiB), so it may not show up immediately.
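If that buffering gets in the way and the program uses ordinary stdio buffering (an assumption), one workaround is to force line buffering with stdbuf from GNU coreutils:
stdbuf -oL your-command |& tee out.log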
When the output of a program goes to your terminal window, the program generally flushes its output after each newline. This is why you see the output interactively.
When you redirect the output of the program to out.txt, it only flushes its output when its internal buffer is full, which is probably after every 8KiB of output. This is why you don't see anything in the file right away, and you don't see the last things printed by the program until it exits (and flushes its last, partially-full buffer).
You can trick a program into thinking it's sending its output to a terminal using the script command:
script -q -f -c myprogram out.txt
This script command runs myprogram connected to a newly-allocated “pseudo-terminal” (or pty for short). This tricks myprogram into thinking it's talking to a terminal, so it flushes its output on every newline. The script command copies myprogram's output to your terminal window and to the file out.txt.
Note that script will write a header line to out.txt. I can't find a way to disable that on my test Linux system.
In the example above, I assumed your program takes no arguments. If it does, you either need to put the program and arguments in quotes:
script -q -f -c 'myprogram arg1 arg2 arg3' out.txt
Or put the program command line in a shell script and pass that shell script to the script command.
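For example, a wrapper script (run_myprogram.sh and the arguments here are placeholders) could look like:
#!/bin/sh
exec myprogram arg1 arg2 arg3
and then be passed to script as before:
script -q -f -c ./run_myprogram.sh out.txt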

Write hex in GDB

I'm in a software security class and we are currently learning about buffer overflows and how they are exploited. I have a program that I know how to exploit, but I appear to be unable to do so because I have to write hex that it is not allowing me to write.
I need to write the data generated from:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";'
However, I cannot redirect that output into the command line arguments because the program runs interactively. Historically, I have used xclip to copy such data to the clipboard and then paste it into the running application, but for some reason this sequence of hex does not let me do that (it looks as if nothing has been copied).
For example:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' | xclip -sel clip
If I ctrl+V after that, nothing gets pasted. If I simply copy and paste the output from the terminal window, the wrong hex is pasted (I'm assuming this is because the hex isn't visible ASCII).
My question is: does GDB have some way for me to insert generated text like this into an interactive, running program?
I'm aware that if the exploitable program took command line arguments, I could do:
run $(perl -e 'print "A"x48; print "\x1b\x88\x04\x08";')
But since it doesn't run via cli arguments, this isn't usable.
Any help would be awesome!
My question is: does GDB have some way for me to insert generated text like this into an interactive, running program?
Your question is based on a misunderstanding: you appear to be under the impression that GDB is somehow intercepting the "paste" you are performing and not letting the characters be read by the target program.
However, GDB does not intercept any input until and unless you are stopped at a breakpoint (or due to a signal). So while your program is running (and reading input), GDB itself is blocked (in the waitpid system call) waiting for something to happen.
So what prevents your program from receiving the control characters? Your terminal emulator does.
OK, so how can you arrange for the non-ASCII input? In one of three ways (the first two are very similar):
use input from file
use input from named pipe
use gdbserver
For method#1:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' > /tmp/input
gdb ./a.out
(gdb) run < /tmp/input # voila: GDB reads terminal,
# your program reads /tmp/input
Method#2:
mkfifo /tmp/pipe
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' > /tmp/pipe &
# perl will block, waiting for someone to read the pipe
gdb ./a.out
(gdb) run < /tmp/pipe
Both of the above methods will work for "normal" programs (ones that read stdin), but will fail for programs that read the terminal directly (such as sudo, passwd, gpg).
Method#3:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' |
gdbserver :0 ./a.out # gdbserver will print a TCP port, e.g. 4321
# and stop the program at start
# in another window,
gdb ./a.out
(gdb) target remote :4321
# gdb will now attach to gdbserver, you can set breakpoints and continue.

Program dumps data to stdout fast. Looking for way to write commands without getting flooded

The program is dumping to stdout, and while I try to type new commands I can't see what I'm writing because it gets mixed in with the output. Is there a shell that separates commands from output? Or can I use two shells, where I run commands in one and make the program dump to the stdout of the other?
You can redirect the output of the program to another terminal window. For example:
program > /dev/pts/2 &
The style of terminal name may depend on how your system is organized.
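To find out which device name to redirect to, run tty in the window that should receive the output (the /dev/pts/2 above is just an example):
tty     # prints something like /dev/pts/2
and use that path in the redirection.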
There's 'more' to let you paginate through output, and 'tee', which lets you split a program's output so it goes both to stdout and to a file.
$ yourapp | more                     # show in page-sized chunks
$ yourapp | tee output.txt           # flood to stdout, but also save a copy in output.txt
and best of all
$ yourapp | tee output.txt | more    # paginate + save a copy
Either redirect standard output and error when you run the program, so it doesn't bother you:
./myprog >myprog.out 2>&1
or, alternatively, run a different terminal to do your work in. That leaves your program free to output whatever it likes to its terminal without bothering you.
Having said that, I'd still capture the information from the program to a file in case you have to go back and look at it.
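One way to combine the two suggestions (myprog and the log name are placeholders) is to capture everything to a file and follow it from a second window only when you want to watch:
# Window 1: run the program, capturing stdout and stderr
./myprog >myprog.out 2>&1
# Window 2: follow the log while you are interested
tail -f myprog.out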
