Record Bash Interaction, saving STDIN and STDOUT separately - linux

So I want to record my bash interaction, which I know I can do with script or ttyrec, except I want one feature they lack: saving input (i.e. STDIN) and output (i.e. STDOUT) separately.
So something like the following (where I typed the first "Hello World!"), except of course script takes one [file] arg, not two:
user#pc:~$ script input.txt output.txt
Script started
user#pc:~$ paste > textfile.txt
Hello World!
user#pc:~$ cat textfile.txt
Hello World!
user#pc:~$ exit
Script done
So input.txt looks like:
user#pc:~$ paste > textfile.txt
Hello World!
user#pc:~$ cat textfile.txt
user#pc:~$ exit
And output.txt looks like:
Hello World!
exit
So I want a program like script which saves STDIN and STDOUT separately. Currently, this would be the normal output of script (which I do not want, and need separated):
Script started
user#pc:~$ paste > textfile.txt
Hello World!
user#pc:~$ cat textfile.txt
Hello World!
user#pc:~$ exit
exit
Script done
Does this exist, or is this possible?
Note the use of the paste command: I had considered filtering the output file based on user#pc:~$, but in my case (as with paste) this would not work.

empty
empty is packaged for various Linux distributions (it is empty-expect on Ubuntu).
open two terminals
terminal 1: run empty -f -i in.fifo -o out.fifo bash
terminal 1: run tee stdout.log <out.fifo
terminal 2: run stty -icanon -isig eol \001; tee stdin.log >in.fifo
type commands into terminal 2, watch the output in terminal 1
fix terminal settings with stty icanon isig -echo
log stderr separately from stdout with exec 2>stderr.log
when finished, exit the bash shell; both tee commands will quit
stdout.log and stdin.log contain the logs (see the single-terminal sketch below)
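For convenience, the whole two-terminal recipe can also be squeezed into one script. This is an untested sketch; it assumes empty creates the FIFOs itself when started with -f, and ^a is the caret notation for the \001 eol character used above:
#!/bin/bash
empty -f -i in.fifo -o out.fifo bash   # -f forks empty into the background
tee stdout.log < out.fifo &            # mirror and log everything the session prints
stty -icanon -isig eol ^a              # pass keystrokes through uninterpreted
tee stdin.log > in.fifo                # forward (and log) everything typed
stty sane                              # restore terminal settings once bash exits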
Some other options:
peekfd
You could try peekfd (part of the psmisc package). It probably needs to be run as root:
peekfd -c pid fd fd ... > logfile
where pid is the process to attach to, -c says to attach to children too, and the fd arguments are the file descriptors to watch (basically 0, 1 and 2). There are various other options to tweak the output.
The logfile will need postprocessing to match your requirements.
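For example, to watch descriptors 0-2 of an already-running shell and its children (1234 here is a placeholder PID):
sudo peekfd -c 1234 0 1 2 > logfile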
SystemTap and similar
Over on Unix & Linux Stack Exchange, use of the SystemTap tool has been proposed.
However, it is not trivial to configure and you'll still have to write a module that separates stdin and stdout.
sysdig and bpftrace also look interesting.
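As a taste of the latter, here is an untested bpftrace one-liner that dumps everything a given process writes to stdout (1234 is a placeholder for the shell's PID; use args->fd == 0 for stdin instead):
bpftrace -e 'tracepoint:syscalls:sys_enter_write /pid == 1234 && args->fd == 1/ { printf("%s", str(args->buf, args->count)); }'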
LD_PRELOAD / strace / ltrace
Using LD_PRELOAD, you can wrap low-level calls such as write(2).
You can run your shell under strace or ltrace and record data passed to system and library functions (such as write). Lots of postprocessing needed:
ltrace -f -o ltrace.log -s 10000000000 -e write bash
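An strace equivalent, as a sketch: -f follows forked children and -s raises the string limit so written data is not truncated. In the resulting log, read entries on fd 0 are your input and write entries on fds 1 and 2 are the output, so splitting them is still postprocessing work:
strace -f -e trace=read,write -s 100000 -o strace.log bash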
patch ttyrec
ttyrec.c is only 500 lines of fairly straightforward code and looks like it would be fairly easy to patch to use multiple logfiles.

Related

Run bash commands and simulate key strokes remotely

In NodeJS, I can spawn a process and listen to its stdout using spawn().stdout. I'm trying to create an online terminal which will interact with this shell. I can't figure out how to send keystrokes to the shell process.
I've tried this:
echo -e "ls\n" > /proc/{bash pid}/fd/0
This doesn't really do anything other than output ls and a line break. And when I try to tail -f /proc/{bash pid}/fd/0, I can no longer even send keystrokes to the open bash terminal.
I'm really just messing around trying to understand how the bash process will interpret ENTER keys. I don't know if that is done through stdin or not since line breaks don't work.
I don't believe you can "remote control" an already started normal Bash session in any meaningful way. What you can do is start a new shell which reads from a named pipe; you can then write to that pipe to run commands:
$ cd "$(mktemp --directory)"
$ mkfifo pipe1
$ bash < pipe1 &
$ echo touch foo > pipe1
[1]+ Done bash < pipe1
$ ls
foo pipe1
See How to write several times to a fifo without having to reopen it? for more details.
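Note that in the transcript above, bash exits as soon as the first echo closes the pipe's write end. To send several commands, keep a write descriptor open; a minimal sketch:
cd "$(mktemp --directory)"
mkfifo pipe1
bash < pipe1 &
exec 3> pipe1    # hold the write end open so bash keeps reading
echo 'touch foo' >&3
echo 'ls' >&3
exec 3>&-        # closing fd 3 sends EOF; the background bash then exits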

Why does running gradlew in a non-interactive bash session close the session's stdin?

I've noticed this strange issue in a CI system where scripts using gradlew exit early, yet successfully. The following steps outline it.
Create a file called script with the contents:
./gradlew
echo DONE
Get a random gradlew from somewhere
Run cat script | bash
Notice that DONE never appears
AFAICT, running bash non-interactively causes the exec java blah at the end of gradlew to somehow let java take over stdin, so the echo DONE is never read from the script being fed in via stdin from cat. Supporting facts:
Changing the script to ./gradlew; echo DONE will print DONE
Replacing ./gradlew with ./gradlew < /dev/null will print DONE
If you have an exec something somewhere (within gradlew in your case), you are replacing the current process image (bash) with something else (java).
From help exec:
exec [-cl] [-a name] [command [arguments ...]] [redirection ...]
Replace the shell with the given command.
So the problem is not that stdin is getting closed; what is happening is that the new process (java) becomes the one reading that input ("echo DONE"), probably doing nothing with it.
Explanation with example
Consider this script.sh:
#!/bin/bash
echo Hello
exec cat
echo World
If you execute it providing some input for cat:
$ ./script.sh <<< "Nice"
Hello
Nice
You may expect the word World to also be printed on the screen... WRONG!
Here nothing else happens, because nothing is executed after the exec command.
Now, if you pipe the script to bash:
$ cat script.sh | bash
Hello <- bash interpreted "echo Hello" and printed Hello
echo World <- cat read "echo World" and printed it (no interpretation occurred)
Here you can clearly see the process image replacement in action.
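This also explains the workarounds noted in the question: giving gradlew its own stdin stops the exec'd java from reading ahead in the piped script. A sketch of the fixed script:
./gradlew < /dev/null   # java inherits /dev/null instead of the pipe carrying the script
echo DONE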

shell prompt seemingly does not reappear after running a script that uses exec with tee to send stdout output to both the terminal and a file

I have a shell script which writes all output to a logfile
and the terminal; this part works fine, but when I execute the script
a new shell prompt only appears if I press Enter. Why is that and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a group command:
{
your-code-here
} | tee logfile
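Applied to the script from the question, that looks as follows; because tee is now part of the foreground pipeline, the interactive shell waits for it before printing the next prompt:
#!/bin/bash
{
    echo "output"
} | tee logfile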
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as #chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you want to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note that this suppresses stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Write hex in GDB

I'm in a software security class and we are currently learning about buffer overflows and how they are exploited. I have a program that I know how to exploit, but I appear to be unable to do so because I have to enter hex bytes that the terminal is not letting me type.
I need to write the data generated from:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";'
However, I cannot redirect that output into the command line arguments because the program runs interactively. Historically, I have used xclip to copy it to the clipboard and then paste it into the running application, but for some reason this sequence of hex does not let xclip copy it (it shows as if nothing has been copied).
For example:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' | xclip -sel clip
If I ctrl+V after that, nothing gets pasted. If I simply copy and paste the output from the terminal window, the wrong hex is pasted (I'm assuming this is because the hex isn't visible ASCII).
My question is: does GDB have some way for me to insert generated text like this into an interactive, running program?
I'm aware that if the exploitable program took command line arguments, I could do:
run $(perl -e 'print "A"x48; print "\x1b\x88\x04\x08";')
But since it doesn't run via cli arguments, this isn't usable.
Any help would be awesome!
My question is: does GDB have some way for me to insert generated text like this into an interactive, running program?
Your question is based on a misunderstanding: you appear to be under the impression that GDB is somehow intercepting the "paste" you are performing and not letting the characters be read by the target program.
However, GDB does not intercept any input until and unless you are stopped at a breakpoint (or due to a signal). So while your program is running (and reading input), GDB itself is blocked (in the waitpid system call) waiting for something to happen.
So what prevents your program from receiving the control characters? Your terminal emulator does.
Ok, how can you arrange for the non-ASCII input? One of three ways (two of which are very similar):
use input from file
use input from named pipe
use gdbserver
Method #1:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' > /tmp/input
gdb ./a.out
(gdb) run < /tmp/input # voila: GDB reads terminal,
# your program reads /tmp/input
Method #2:
mkfifo /tmp/pipe
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' > /tmp/pipe &
# perl will block, waiting for someone to read the pipe
gdb ./a.out
(gdb) run < /tmp/pipe
Both of the above methods will work for "normal" programs (ones that read STDIN), but will fail for programs that read terminal directly (such as sudo, passwd, gpg).
Method #3:
perl -e 'print "A"x48; print "\x1b\x88\x04\x08";' |
gdbserver :0 ./a.out # gdbserver will print a TCP port, e.g. 4321
# and stop the program at start
# in another window,
gdb ./a.out
(gdb) target remote :4321
# gdb will now attach to gdbserver, you can set breakpoints and continue.
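A hedged variant of methods 1 and 2: GDB starts the program through your shell, so if $SHELL is bash you can use process substitution directly on the run line (this will not work if GDB falls back to a shell without <() support):
(gdb) run < <(perl -e 'print "A"x48; print "\x1b\x88\x04\x08";')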

How to pipe the output of a command to a file on Linux

I am running a task on the CLI, which prompts me for a yes/no input.
After selecting a choice, a large amount of info scrolls by on the screen - including several errors. I want to pipe this output to a file so I can see the errors. A simple '>' is not working since the command expects keyboard input.
I am running on Ubuntu 9.1.
command &> output.txt
You can use &> to redirect both stdout and stderr to a file. This is shorthand for command > output.txt 2>&1 where the 2>&1 means "send stderr to the same place as stdout" (stdout is file descriptor 1, stderr is 2).
For interactive commands I usually don't bother saving to a file if I can use less and read the results right away:
command 2>&1 | less
echo yes | command > output.txt
Depending on how the command reads its input (some programs discard whatever was on stdin before displaying their prompt, but most don't), this should work in any sane CLI environment.
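If the prompt can repeat, the standard yes utility keeps answering for you (a sketch; pass yes an argument if the prompt expects something other than y):
yes | command &> output.txt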
Use 2> rather than just >.
If the program was written by a sane person, what you probably want is the stderr, not the stdout. You would achieve this by using something like
foo 2> errors.txt
You can use the 2> operator to send errors to a file.
example:
command 2> error.txt
With 2>, any errors that occur while the command executes are sent to the file error.txt.
