Shell script to call an external program which has a user interface - Linux

I have an external program, say a.out, which while running asks for an input parameter, i.e.,
./a.out
Please select either 1 or 2:
1 - this will do something
2 - this will do something else
Then when I enter '1', it will do its job. I don't have the source code, just the binary, so I can't change the program.
I want to write a shell script which runs a.out and also feeds the '1' to it.
I tried many things including silly things like:
./a.out 1
./a.out << 1
./a.out < 1
etc.
but none of them work.
Could you please let me know if there is any way to write such a shell script?
Thanks,
dbm368

I think you just need a pipe. For example:
echo 1 | ./a.out
In general terms, a pipe takes whatever the program on the left writes to stdout and redirects it to the stdin of the program on the right.
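If the binary asks more than one question, the same idea extends to several lines of input. A sketch, assuming a.out reads each answer from stdin (the second answer here is hypothetical):
printf '1\n' | ./a.out    # like echo 1, but explicit about the trailing newline
./a.out << EOF
1
2
EOF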

Related

Reading /proc/PID/maps of short-lived process

I have a binary that exits/segfaults shortly after executing. I am trying to read /proc/PID/maps of this binary before it exits but with no luck.
I've tried cat /proc/$(./binary [input] & echo $!)/maps, which should expand to /proc/<pid of process>/maps, but I always get No such file or directory.
My goal is to see where things get loaded to on subsequent runs.
Your solution cat /proc/$(<pipeline>)/maps won't call cat until the whole <pipeline> has ended (even if you use & within it), so you will never get your maps.
On the other hand,
<pipeline> & cat /proc/${!}/maps
will return immediately. Of course, as your binary exits immediately, cat may still run too late to capture /proc/${!}/maps.
But you could try:
while true; do ./binary [input] & cat /proc/${!}/maps; done
This restarts the race between your binary and cat all day long, and sometimes cat wins (in my case, with ls <nonexisting file> in place of ./binary, it won 1 run out of 30).
This prints a lot of garbage on your terminal, but you can collect the successful maps by redirecting cat's output:
while true; do ./binary [input] & cat /proc/${!}/maps >> mymaps; done
Elegant? No. Effective? I hope so!
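If you would rather have the loop stop by itself once a capture succeeds, a small variation on the same race (a sketch; ./binary [input] is still your command):
while ! [ -s mymaps ]; do
    ./binary [input] & cat /proc/${!}/maps > mymaps 2>/dev/null
done
The -s test is true once mymaps exists and is non-empty, so the loop ends on the first run where cat wins.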

What is the order of redirection in terminal?

I want to take input from the file input.txt and write the output of the execution to output.txt. What is the right order? The command below does not work.
./a.out < input.txt > output.txt
EDIT
Do I have to wait for execution to complete for the output to be written? I usually interrupt it in the middle to see whether output is being written, as the run time is very high.
CLARIFICATION:
This C program (P1) iterates through a loop and feeds the loop value x to a system() call which calls another C program (P2) using ./P2 < x. Program P2 executes for each value of x and outputs to the screen. I want the complete output of both programs in output.txt.
If you're killing the command before it finishes, this is probably a buffering issue. Line-buffered terminal output and block-buffered file output are default behaviors in the C stdio library, so redirection can cause output to be buffered until a few kilobytes have been written.
Some programs have a command line option to force line-buffered or unbuffered output. They do this by calling setvbuf. If that a.out is a program you wrote, you could add setvbuf(stdout, NULL, _IOLBF, 0);
If the program is not yours and you can't recompile it, there is a utility called stdbuf that might help, as in stdbuf -oL ./a.out < in > out
stdbuf is kind of a kludge though. I wouldn't use it unless there is no other option.
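Put together for the P1/P2 setup described in the clarification, one plausible invocation (a sketch; the names come from the question) is:
stdbuf -oL ./P1 < input.txt > output.txt &
tail -f output.txt
Running P1 in the background and watching output.txt with tail -f lets you check on the computation without interrupting it.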

Redirect stdout to fifo immediately

I have, for example, a C program that prints three lines, two seconds apart, that is:
printf("Wait 2 seconds...\n");
sleep(2);
printf("Two more\n");
sleep(2);
printf("Quitting in 2 seconds...\n");
sleep(2);
I execute the program and redirect it to a pipe:
./printer > myPipe
On another terminal
cat < myPipe
The second terminal prints all three lines at once, 6 seconds later! I would like it to print the available lines immediately. How can I do it?
Note: I can't change the source code. It's actually the output of a boardgame algorithm; I have to get it immediately so that I can plug it into another algorithm, get the answer back, and plug that into the first one...
If you can change the program, flush stdout after each line:
printf("Wait 2 seconds...\n");
fflush (stdout);
sleep(2);
printf("Two more\n");
fflush (stdout);
sleep(2);
printf("Quitting in 2 seconds...\n");
fflush (stdout);
sleep(2);
Additional:
If you can't change the program, there really is no way to affect the program's built-in buffering without hacking it.
If you can relink the program, you could substitute a printf() function which flushes after each call, or change the startup initialization of stdout to be unbuffered, or at least line-buffered.
If you can't change the source, you might want to try some of the solutions to this related question:
bash: force exec'd process to have unbuffered stdout
Basically, you have to make the program run as if it were interactive, so that stdout stays line-buffered.
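Two common ways to do that without touching the binary are stdbuf (from coreutils) and unbuffer (from the expect package). A sketch, using the printer program and the fifo from the question:
stdbuf -oL ./printer > myPipe    # force line-buffered stdout
unbuffer ./printer > myPipe      # run under a pseudo-terminal instead
Either way, the cat < myPipe in the other terminal should then show each line as soon as it is printed.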
I'm assuming that the actual source file is complete. If so, then you have to compile the source and run it to get it to do anything; using cat will just print the contents of the file, not run it.
If it were written in bash, it would need the executable bit set (chmod +x), which would make it executable, allowing you to run it from a terminal with ./script.
No need to worry about the syntax, since you've stated that changing it is not an option and... it's correctly written in C.

redirecting stdin _and_ stdout to a pipe

I would like to run a program "A", have its output go to the input of another program "B", and also have stdin go to the input of "B". If program "A" closes, I'd like "B" to continue running.
I can redirect A output to B input easily:
./a | ./b
And I can combine stderr into the output if I'd like:
./a 2>&1 | ./b
But I can't figure out how to combine stdin into the output. My guess would be:
./a 0>&1 | ./b
but it doesn't work.
Here's a test that doesn't require us to write any test programs:
$ echo ls 0>&1 | /bin/sh -i
$ a b info.txt
$
/bin/sh: Cannot set tty process group (No such process)
If possible, I'd like to do this using only bash redirection on the command line (I don't want to write a C program to fork off child processes and do anything complicated every time I want to do some redirection of stdin to a pipe).
This cannot be done without writing an auxiliary program.
In general, stdin could be a read-only file descriptor (heck, it might refer to a read-only file). So you cannot "insert" anything into it.
You will need to write a "helper" program that monitors two file descriptors (say, 0 and 3) in order to read from both and "merge" them. A simple select or poll loop would be sufficient, and you could write it in most scripting languages, but not the shell, I don't think.
Then you can use shell redirection to feed your program's output to descriptor 3 of the "helper".
Since what you want is basically the opposite of "tee", I might call it "eet"...
[edit]
If only you could launch "cat" in the background...
But that will fail because background processes with a controlling terminal cannot read from stdin. So if you could just detach "cat" from its controlling terminal and run it in the background...
On Linux, "setsid cat" should do it, roughly. But (a) I could not get it to work very well and (b) I really do not have time for this today and (c) it is non-standard anyway.
I would just write the helper program.
[edit 2]
OK, this seems to work:
{ seq 5 ; sleep 2 ; seq 5 ; } | /bin/bash -c 'set -m ; setsid cat ; echo HELLO'
The set -m thing forces bash to enable job control, which apparently is needed to prevent the shell from redirecting stdin from /dev/null.
Here, the echo HELLO represents your "program A". The seq commands (with the sleep in the middle) are just to provide some input. And yes, you can pipe this whole thing to process B.
About as ugly and non-portable a solution as you could ask for...
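Spelled out for the original question, the same trick would look something like this (a heavily hedged sketch; ./a and ./b stand in for programs A and B, and note that all of stdin is forwarded to B before A runs at all):
/bin/bash -c 'set -m ; setsid cat ; ./a' | ./b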
A pipe has two ends. One is for writing, and that which gets written appears in the other end, which is for reading.
It's a pipe, not a T or Y junction.
I don't think your scenario is possible. Having "stdin going to input of" anything doesn't make sense.
If I understand your requirements correctly, you want this set up (ASCII art to the fore):
o----+----->| A |----+---->| B |---->o
     |               ^
     |               |
     +---------------+
with the additional constraint that if process A closes up shop, process B should be able to continue with the input stream going to B.
This is a non-standard setup, as you realize, and can only be achieved by using an auxiliary program to drive the input to A and B. You end up with some interesting synchronization issues but it will all work remarkably well as long as your messages are short enough.
The plumbing necessary to achieve this is notable - you'll need two pipes, one for the input to A and the other for the input to B, and the output of A will be connected to the input of B as well.
o---->| C |---------->| A |----+---->| B |---->o
        |                      ^
        |                      |
        +----------------------+
Note that C will be writing the data twice, once to A and once to B. Note, too, that the pipe from A to B is the same pipe as the pipe from C to B.
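In bash, tee with process substitution can play the role of C (a sketch; ./a and ./b stand in for A and B). tee copies its stdin both to the >(./a) process and to its own stdout, and because the >(./a) process inherits tee's stdout, A's output lands in the same pipe that feeds B. As noted above, the interleaving is unsynchronized:
tee >(./a) | ./b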
To make the given test case work, you have to while ... read from the controlling terminal device /dev/tty inside an sh -c '...' construct.
Note the use of eval (could it be avoided here?) and that multi-line commands at the input> prompt will fail.
echo 'ls; export var=myval' | (
    stdin="$(</dev/stdin)"
    /bin/sh -i -c '
        eval "$1";
        while IFS="" read -e -r -p "input> " line; do
            history -s "${line}"
            eval "${line}";
        done </dev/tty
    ' argv0 "${stdin}"
)
input> echo $var
For a similar problem and the use of named pipes see here:
BASH: Best architecture for reading from two input streams
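Since a fifo accepts multiple writers, a named pipe can serve as the merge point. A sketch for use inside a non-interactive script, as an illustration rather than the linked answer's exact setup (./a and ./b stand in for A and B; data from the two writers interleaves unpredictably):
mkfifo merged
./a > merged &    # A's output goes into the fifo
cat > merged &    # and so does a copy of the script's stdin
./b < merged
rm merged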
This can't be done exactly as shown, but to perform your example you can make use of cat's ability to join files together:
cat <(echo ls) - | /bin/sh
(You can do -i, but then you'll have to have another process kill the /bin/sh, as your attempts to Ctrl-C and Ctrl-D out will fail.)
This assumes that you want to pass in your piped input and then accept from stdin. You can also make it so that it does something after stdin is done, or on both sides -- but it won't merge input character-by-character or line-by-line.
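To run something after your interactive input instead of before it, swap the arguments (a sketch):
cat - <(echo ls) | /bin/sh
Here cat forwards whatever you type until you press Ctrl-D, and only then appends the output of echo ls.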
This seems to do what you want:
$ ( ./a <&-; cat ) | ./b
(It's not clear to me if you want a to get input... this solution sends all input to b.)
Of course, in this case the inputs to b are strictly ordered: all of the output of a is sent to b first, then a terminates, then input goes to b. If you want things interleaved, try:
$ ( ./a <&- & cat ) | ./b

on-the-fly output redirection, seeing the file redirection output while the program is still running

If I use a command like this one:
./program >> a.txt &
and the program is a long-running one, then I can only see the output once the program has ended. That means I have no way of knowing if the computation is going well until it actually stops computing. I want to be able to read the redirected output in the file while the program is running.
Ideally this would be like opening the file, appending to it, and closing it back after every write. But if the file is only closed at the end of the program, then no data can be read from it until the program ends, and the only redirection I know of behaves like closing the file at the end of the program.
You can test it with this little python script. The language doesn't matter. Any program that writes to standard output has the same problem.
l = range(0, 100000)
for i in l:
    if i % 1000 == 0:
        print i
    for j in l:
        s = i + j
One can run this with:
python program.py >> a.txt &
Then cat a.txt ... you will only get results once the script is done computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of the standard output. For Python there is sys.stdout.flush(), or even some of the suggestions here (see the Python sketch after this list).
Use a utility that can record from a PTY, rather than outright stdout redirections. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
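For Python specifically there is also the interpreter's -u switch, which disables stdout buffering without modifying the script at all. A sketch, reusing the test script from the question:
python -u program.py >> a.txt &
tail -f a.txt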
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override select functions of the C library, and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>
char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;
    if (getenv_real == NULL) {
        getenv_real = dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program try stdbuf (part of coreutils starting with version 7.5 apparently).
This buffers stdout up to a line:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check to see whether the output is a terminal or not. If the output is a terminal, then output is buffered one line at a time (so each line is output as it is generated); if the output is not a terminal, then the output is buffered in larger chunks (4096 bytes at a time is typical). This behaviour is normal in the C library (when using printf, for example) and in the C++ library (when using cout, for example), so any program written in C or C++ will do this.
Most other scripting languages (like perl, python, etc.) are written in C or C++ and so they have exactly the same buffering behaviour.
The answer above (using LD_PRELOAD) can be made to work on perl or python scripts, since the interpreters are themselves written in C.
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>
