Can "tee" command in linux print both the input and the output of a C program? - linux

I have a simple C program that asks the user for an integer and then prints that integer.
#include <stdio.h>
int main() {
    int number;
    printf("Enter an integer: ");
    scanf("%d", &number);
    printf("You entered: %d", number);
    return 0;
}
When I use this command:
gcc program.c -o test
./test | tee text.txt
The program running in the terminal does not print the "Enter an integer" line; instead it waits for input, and when I provide that input it prints everything, both on the terminal and into the text.txt file. I want to run the program as it is and store everything shown on the terminal, including both the input and the output, in the text.txt file. Is there any way to do that?

The tee command works with one stream, but you want to capture two. With some care, you could use two separate tee commands to copy both the input and the output to the same file, but you would be better off with a utility designed for your purpose, such as script.
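For instance, a minimal sketch using the util-linux script utility (-c runs a single command; the last argument is the transcript file):
script -c ./test text.txt
Because script runs the program under a pseudo-terminal, the program behaves exactly as it does interactively (the prompt appears immediately), and everything that appears on the screen, including what you type, ends up in text.txt.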

For Debian-based Linuxes, run apt install devscripts, and then try the annotate-output utility. For example, run cat on a process substitution and a file that doesn't exist:
annotate-output cat <(echo hello) /bin/nosuchfile
...which shows what would otherwise be the input, the output, and the standard error output, each labelled and all sent to standard output, which could then be piped to a file:
13:01:03 I: Started cat /dev/fd/63 /bin/nosuchfile
13:01:03 O: hello
13:01:03 E: cat: /bin/nosuchfile: No such file or directory
13:01:03 I: Finished with exitcode 1

Related

Pipe Input on a timer

I have a program that asks for input, but it takes a while to load up.
I need a bash script that pipes the program's output into a named pipe.
I need a command that sends my echoed input only after the program prompts for it. This is my command right now, but it pipes in the input before the prompt appears.
echo "R" | nc localhost 123 > fifo
This will result in the following output:
usernname#name:
R
Please enter in an input (R, Q, T):
So my command needs to "wait" until my program prompts, then pipe in the input. Any ideas? This needs to be in a bash script.
You can use sleep:
(sleep 3; echo "R") | nc localhost 123 > fifo
Obviously this has a race condition, and so for industrial applications you should use expect instead.
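For example, a rough sketch with expect driven from the shell (the prompt string is taken from the output above; adjust it to whatever the program actually prints):
expect -c 'spawn nc localhost 123; expect "input (R, Q, T):"; send "R\r"; expect eof'
Here the input is only sent once the prompt has actually arrived, so there is no race.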

How to make an expect script to input commands into GDB?

I want to write an Expect script that will simply input commands into GDB regardless of its output. Then I want to take certain parts of the output of GDB and extract information from it using shell commands such as grep and sed. Then I want to use this information to input more commands into GDB.
For example, I would initiate a back trace by sending the command "bt" to GDB from the expect script. Then I would grep for a word such as "pardrivr" and get the line number associated with it. Then I would input "f lineNumberOfPardrivr" into GDB. This process would be repeated until the correct information is eventually extracted.
Is this possible? If so, what is the best way to go about doing this?
Thanks
My $0.02: I'd use a coprocess or named pipe under ksh/bash/zsh. Much easier. See: https://unix.stackexchange.com/questions/86270/how-do-you-use-the-command-coproc-in-bash
Also, consider tee'ing the output of gdb into a named pipe that you cat in another xterm. Makes it much easier to debug what your script is reading if you can see a copy of the gdb output.
Edited to add:
Still can't post comments. *sigh*
gdb in batch mode, or via a simple shell redirect, won't let us define commands on the fly based upon current gdb output. A coprocess or named pipe approach is much the same technique, but it lets us create new input dynamically at will based upon gdb's output processed through grep/sed/awk/perl/whatever. Python or Perl might be even easier to use with their facilities for regular expressions and subprocesses. E.g. (perl) open("|gdb ...")
http://perldoc.perl.org/functions/open.html
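As a rough bash (4+) sketch of the coprocess idea, assuming a binary named hello to debug:
coproc GDB { gdb -q hello 2>&1; }
echo "info functions" >&"${GDB[1]}"                        # send a command to gdb's stdin
while read -t 1 line <&"${GDB[0]}"; do echo "$line"; done  # drain whatever gdb printed
echo "quit" >&"${GDB[1]}"
Each line read back can be run through grep/sed/awk to decide which command to send next.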
Edited again to add:
A named pipe is a FIFO (first in first out) that exists much like a file in the filesystem. It's not really a file of course. It's just something that can be used like a file. Anything that you write to it can be read back out, within the limits of the OS buffering. (Otherwise writes will block.)
FIFOs are available under Unix, Linux, and Macs, but not Windows. You create them with mkfifo. Any process can write to it; any process can read from it. From the link I posted up above:
mkfifo in out
cmd <in >out &
exec 3> in 4< out
echo data >&3
read var <&4
From my own playing around to demo this...
#in BASH
mkfifo IN OUT
#or mkfifo IN OUT ERR
gdb < IN > OUT 2>&1 &
#or gdb < IN > OUT 2> ERR &
#or gdb < IN > OUT &
exec 3> IN
exec 4< OUT
echo "help bt" >&3
while read -t 0.001 var <&4 ; do echo $var; done
echo "help stack" >&3
while read -t 0.001 var <&4 ; do echo $var; done
#don't forget to kill the gdb process when you are done...
echo "quit" >&3
while read -t 0.001 var <&4 ; do echo $var; done
I want to write an Expect script that will simply input commands into GDB regardless of its output.
For non-interactive control you don't need expect, as gdb has a -batch mode and is able to read commands from a file (-x).
Moreover, as gdb reads input from stdin and writes output to stdout, standard redirection might do the trick.
For example, I wrote a simple C program:
sh$ cat hello.c
#include <stdio.h>
int main() {
    char msg[] = "Hello world";
    printf("%s\n", msg);
    return 0;
}
sh$ gcc -ggdb hello.c -o hello
I'm able to "script" the gdb session like that:
sh$ gdb -q hello <<EOF | awk '$2=="$1" { print "Var was #" $NF }'
br 6
r
print &msg
c
quit
EOF
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7ffff7ffa000
Var was #0x7fffffffe230
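The -batch/-x form mentioned above works the same way; here is a sketch, with cmds.gdb being a hypothetical file holding the same commands:
sh$ printf 'br 6\nr\nprint &msg\nc\nquit\n' > cmds.gdb
sh$ gdb -q -batch -x cmds.gdb ./hello
With -batch, gdb exits after processing the command file instead of dropping into an interactive prompt.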

Some strange output on my command in linux shell

I've run the following commands:
$ cat /bin/ls > blah
$ cat blah blah blah > bbb
$ chmod u+x bbb
$ ./bbb
And it printed all the file names in the current working directory.
My question is: why? And why not 3 times?
Because the Linux executable file format (ELF) is not a script that you can concatenate three times in a row to have it run three times. To be more precise, the header contains a single entry point (think of it as the address where int main() has been stored), which is where the instructions are read from. Once you reach the final return 0; or whatever, the program stops, even if more (nicely structured) binary garbage follows in the binary file.
TL;DR: Don't forget - /bin/ls is a compiled binary and not a shell script.
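You can inspect that single entry point yourself with readelf from binutils (the address below is elided, as it varies per build):
$ readelf -h bbb | grep 'Entry point'
  Entry point address:               0x...
The loader maps the file and jumps to that one address; the two appended copies are just trailing bytes that are never reached.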

readline chops my console input off when I do ctrl+C / ctrl+V on gnome terminal

Environment:
Ubuntu 10.04 LTS
Gnome Desktop v2.30.2
gcc/g++ 4.4.3
libreadline 6.1
I was building an application that reads multiple lines of input and processes them, and I found that if the input is large, readline skips several bytes of characters. To make sure, I made a simple program like this:
#include <stdio.h>
#include <readline/readline.h>
int main() {
    while (1) {
        char *p = readline("> ");
        if (!p) break;
        fprintf(stderr, "%s\n", p);
    }
    return 0;
}
and generated 20000 lines of input, which total 120000 bytes:
seq -f "%05g" 1 20000 >gen.txt
and ran the test program on gnome terminal and performed copy-and-paste of the content of gen.txt:
g++ test.cpp -lreadline
./a.out 2>out.txt
[copy-and-paste the content of gen.txt]
I could see that out.txt was smaller than gen.txt, with many bytes omitted.
wc -c out.txt
119966 out.txt
I want to know which component is at fault, gnome-terminal or readline, and how many bytes of clipboard content they guarantee can be copy-and-pasted without problems.

on-the-fly output redirection, seeing the file redirection output while the program is still running

If I use a command like this one:
./program >> a.txt &
, and the program is a long-running one, then I can only see the output once the program has ended. That means I have no way of knowing whether the computation is going well until it actually stops. I want to be able to read the redirected output in the file while the program is still running.
What I want is similar to opening the file, appending to it, and closing it after every write. If the file is only closed at the end of the program, then no data can be read from it until the program ends. The only redirection I know behaves as if the file were closed at the end of the program.
You can test it with this little python script. The language doesn't matter. Any program that writes to standard output has the same problem.
l = range(0, 100000)
for i in l:
    if i % 1000 == 0:
        print i
    for j in l:
        s = i + j
One can run this with:
python program.py >> a.txt &
Then cat a.txt: you will only get results once the script is done computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of the standard output. For Python there is sys.stdout.flush(), or even some of the suggestions here.
Use a utility that can record from a PTY, rather than outright stdout redirections. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
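For instance, assuming the default logfile name, you can follow the log from another terminal and reattach when needed:
tail -f screenlog.0    # watch the output as screen flushes it to the log
screen -r              # reattach to the session to provide input or terminate it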
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override selected functions of the C library, and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>
char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;
    if (getenv_real == NULL) {
        /* First call: look up the real getenv() and, while we are at it,
           switch stdout to line-buffered mode. */
        getenv_real = dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program try stdbuf (part of coreutils starting with version 7.5 apparently).
This makes stdout line-buffered:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
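Applied to the original example, one might combine this with the redirection and then follow the file as it grows (note that stdbuf only affects programs that use C stdio; for Python, python -u is the more direct switch):
stdbuf -oL ./program >> a.txt &
tail -f a.txt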
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check whether their output is a terminal. If the output is a terminal, it is buffered one line at a time, so each line appears as it is generated; if the output is not a terminal, it is buffered in larger chunks (4096 bytes at a time is typical). This is the normal behaviour of the C library (when using printf(), for example) and of the C++ library (when using cout, for example), so any program written in C or C++ will do this.
The interpreters of most other scripting languages (perl, python, etc.) are written in C or C++, and so they have exactly the same buffering behaviour.
The answer above (using LD_PRELOAD) can be made to work on perl or python scripts, since the interpreters are themselves written in C.
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>
