On Ubuntu, I start a command-line program (GNU Backgammon) and let it get its commands from a named pipe (commandpipe), like so:
$ gnubg -t < commandpipe
from another terminal, I do
$ echo "new game" > commandpipe
This works fine: a new game is started, but after the program has finished processing that command, the process exits.
How can I prevent the backgammon process from exiting? I would like to continue sending commands to it via the commandpipe.
This happens only because you used echo, which exits immediately after writing. When a program exits, its file descriptors are closed, and when the last writer closes a named pipe, the reader sees end-of-file. (In bash, echo is a builtin rather than a separate program, but that makes no difference here.) If you instead wrote a long-running interactive program, e.g. one with a GUI, and redirected its stdout to the named pipe, you would not have this problem.
The reader gets EOF when the last writer closes the FIFO, so you need a loop that keeps reopening it, like this (ts, from moreutils, prepends a timestamp to each line):
$ while true; do cat myfifo; done | ts
jan 05 23:01:56 a
jan 05 23:01:58 b
And in another terminal:
$ echo a > myfifo
$ echo b > myfifo
Substitute gnubg -t for ts.
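Applied to the question, that would be (assuming commandpipe already exists):
$ while true; do cat commandpipe; done | gnubg -t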
The problem is that the write file descriptor is closed, and when the last write file descriptor on a pipe is closed, the reading process sees end-of-file.
As a quick hack, you can do this:
cat < z      # read process, in one terminal
cat > z &    # keep a write file descriptor open; put it in the background (or run it in a new terminal)
echo hi > z  # this writer closes its FD, but the background cat still holds one open, so no EOF is delivered
But you should really be writing in a real programming language where you can control your file descriptors.
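As a variation on the same idea, the shell itself can hold the write end open with exec rather than a background cat; a minimal sketch, assuming commandpipe already exists and using fd 3 as an arbitrary spare descriptor:
gnubg -t < commandpipe &        # start the reader in the background
exec 3> commandpipe             # the shell now holds a write end open
echo "new game" > commandpipe   # gnubg keeps running afterwards
exec 3>&-                       # closing fd 3 is what finally delivers EOF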
To avoid the reader seeing EOF, you could use tail -f, which keeps the pipe open and keeps reading as writers come and go:
tail -f commandpipe | gnubg -t
Related
In NodeJS, I can spawn a process and listen to its stdout using spawn().stdout. I'm trying to create an online terminal which will interact with this shell. I can't figure out how to send keystrokes to the shell process.
I've tried this:
echo -e "ls\n" > /proc/{bash pid}/fd/0
This doesn't really do anything other than output ls and a line break. And when I try to tail -f /proc/{bash pid}/fd/0, I can no longer even send keystrokes to the open bash terminal.
I'm really just messing around trying to understand how the bash process will interpret ENTER keys. I don't know if that is done through stdin or not since line breaks don't work.
I don't believe you can "remote control" an already started normal Bash session in any meaningful way. What you can do is start a new shell which reads from a named pipe; you can then write to that pipe to run commands:
$ cd "$(mktemp --directory)"
$ mkfifo pipe1
$ bash < pipe1 &
$ echo touch foo > pipe1
[1]+ Done bash < pipe1
$ ls
foo pipe1
See How to write several times to a fifo without having to reopen it? for more details.
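The [1]+ Done line above shows that the background bash exited as soon as echo closed the pipe and it saw end-of-file. To send several commands, hold a write end open; a minimal sketch, with fd 3 as an arbitrary spare descriptor:
$ bash < pipe1 &
$ exec 3> pipe1
$ echo 'touch foo' > pipe1
$ echo 'touch bar' > pipe1
$ exec 3>&-   # only now does bash see EOF and exit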
So, I have this interactive program running on an embedded Linux ARM platform with no screen, which I cannot modify. To interact with it, I have to SSH into the embedded Linux distro and run the program, which is some sort of custom command line with builtin commands; it does not exit, and only SIGINT will quit it.
I'm trying to automate it by letting it run in the background and communicating with it using pipes, by sending SSH commands like ssh user@host "echo command > stdinpipe". This part works; I've been provided with an example like this in a shell script (I cannot use bash, I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything in the terminal. The problem is that while this works when I have an SSH session open, it doesn't when I only send commands one by one, as I said earlier. (I'm using Paramiko in Python with the exec_command() method, just in case, but I don't think that is relevant; I could use invoke_session() but I don't want to have to deal with recv().)
So I figured I'd redirect the output of the program to a pipe. That's where the problems arise. My first attempt was this one (please ignore the fact that everything is run as root and stored in the root home folder; that's how I got it, I don't have time to make it cleaner right now, and I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. Now if I leave the SSH session open, open another terminal, and type ssh root@host "echo command > /root/binary/inpipe", the code gets executed, but it then outputs the command I just typed and its result into the other terminal that stayed open. So that is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts, and I have no idea why. I know this because $command_pid is empty and I cannot find the process with ps -A.
Now if instead I replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts and I can write to inpipe just fine with echo "command" > inpipe once the script has finished running. However, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried using a normal file for the output redirection instead of a fifo, with exactly the same results.
I've spent the entire day on this and I cannot get it to work. Why is this not working? I am probably also going about it in an awful way; is there a better one? The only mandatory things here are that I have to connect to the board through SSH, and that the command line utility has to stay open, because it is communicating with onboard devices (using I2C, OneWire and other protocols).
To keep it simple: I want to be able to write to the program's stdin whenever I want, and to have its stdout go somewhere else (a file, a buffer, I do not care) that I can easily retrieve later, after an arbitrary amount of time, with cat, tail or some other command over SSH.
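For what it's worth, a minimal sketch of one approach, combining the fifo techniques from the answers above; output.log is a hypothetical name, the other paths are the question's own, and this is untested on the target device. Opening a fifo for writing blocks until a reader opens it, which is probably why ./command_bin >outpipe & never appeared to start (nothing was reading outpipe); a plain log file avoids that. Opening the input fifo read-write with the POSIX <> redirection means the process itself holds a write end, so it never sees EOF between commands:
#!/bin/sh
cd /root/binary
[ -p inpipe ] || mkfifo inpipe
# <> opens inpipe read-write: it does not block at startup, and the
# process keeps a write end open, so writers can come and go without EOF.
./command_bin <> inpipe > output.log 2>&1 &
echo "process ./command_bin running on PID $!"   # $! avoids the ps/grep dance
Commands could then be sent one at a time with ssh root@host "echo command > /root/binary/inpipe", and the output retrieved later with ssh root@host "tail /root/binary/output.log".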
I want to write to the stdin of a running process (not a Java one). How can I get the Process object or an OutputStream for it directly? Runtime.getRuntime() only helps me spawn new processes, not find existing ones.
This looks possible on Linux; I have no idea about other platforms. Searching for "get stdin of running process" turned up several promising-looking discussions:
Writing to stdin of background process
Write to stdin of a running process using pipe
Can I send some text to the STDIN of an active process running in a screen session?
Essentially, you can write to the 0th file descriptor of a process via /proc/$pid/fd/0. From there, you just have to open an OutputStream to that path.
I just tested this (not the Java part, which is presumably straightforward) and it worked as advertised:
Shell-1 $ cat
This blocks, waiting on stdin
Shell-2 $ ps aux | grep 'cat$' | awk '{ print $2 }'
1234
Shell-2 $ echo "Hello World" > /proc/1234/fd/0
Now back in Shell-1:
Shell-1 $ cat
Hello World
Note that this does not close the process's stdin; you can keep writing to the file descriptor. One caveat: if the process's stdin is a terminal, /proc/$pid/fd/0 points at the terminal device, so what you write appears on that screen but is not necessarily delivered to the process as input; this trick reliably feeds input when stdin is a pipe or similar.
I am a newbie at shell scripting, using Ubuntu 11.10. After I run exec 1>file in a terminal, the output of whatever commands I give no longer shows up in the terminal. I know that STDOUT is being redirected to the file, and the output of those commands goes there.
My questions:
Once I use exec 1>file, how can I get rid of this? i.e. How can I stop the redirection of STDOUT to file and restore the normal operation of STDOUT (i.e. redirection to terminal rather than file)?
I tried using exec 1>&- but it didn’t work as this closes the STDOUT file descriptor.
Please shed some light on this entire operation of exec 1>file and exec 1>&-.
What will happen if we close the standard file descriptors 0, 1, 2 by using exec 0>&- exec 1>&- exec 2>&-?
Q1
You have to prepare for the recovery before you do the initial exec:
exec 3>&1 1>file
To recover the original standard output later:
exec 1>&3 3>&-
The first exec copies the original file descriptor 1 (standard output) to file descriptor 3, then redirects standard output to the named file. The second exec copies file descriptor 3 to standard output again, and then closes file descriptor 3.
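Putting the two together (log.txt is an arbitrary name):
exec 3>&1 1>log.txt   # save the terminal's stdout on fd 3, redirect fd 1 to log.txt
echo "this line goes into log.txt"
exec 1>&3 3>&-        # restore stdout from fd 3, then close fd 3
echo "this line appears on the terminal again"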
Q2
This is a bit open ended. It can be described at a C code level or at the shell command line level.
exec 1>file
simply redirects the standard output (1) of the shell to the named file. File descriptor one now references the named file; any output written to standard output will go to the file. (Note that prompts in an interactive shell are written to standard error, not standard output.)
exec 1>&-
simply closes the standard output of the shell. Now there is no open file for standard output. Programs may get upset if they are run with no standard output.
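A harmless way to see this, using a subshell so your interactive shell keeps its own stdout:
( exec 1>&- ; echo hi )   # echo fails because file descriptor 1 is closed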
Q3
If you close all three of standard input, standard output and standard error, an interactive shell will exit as soon as you close standard input (because the shell gets EOF when it reads the next command). A shell script will continue running, but the programs it runs may get upset: by convention they are guaranteed three open file channels (standard input, standard output, standard error), and if your shell runs them with no other I/O redirection, they do not get the channels they were promised and all hell may break loose. The only sign will probably be that the exit status of the command is not zero (success).
Q1: There is a simple way to restore stdout to the terminal after it has been redirected to a file:
exec >/dev/tty
While saving the original stdout file descriptor is commonly required so it can be restored later, in this particular case you want stdout to be /dev/tty, so there is no need to do more.
$ date
Mon Aug 25 10:06:46 CEST 2014
$ exec > /tmp/foo
$ date
$ exec > /dev/tty
$ date
Mon Aug 25 10:07:24 CEST 2014
$ ls -l /tmp/foo
-rw-r--r-- 1 jlliagre jlliagre 30 Aug 25 10:07 /tmp/foo
$ cat /tmp/foo
Mon Aug 25 10:07:05 CEST 2014
Q2: exec 1>file is a slightly more verbose way of writing exec >file which, as already stated, redirects stdout to the given file, provided you have permission to create and write to it. The file is created if it doesn't exist, and truncated if it does.
exec 1>&- is closing stdout, which is probably a bad idea in most situations.
Q3: The commands should be
exec 0<&-
exec 1>&-
exec 2>&-
Note the reversed redirection for stdin.
It can be simplified this way:
exec <&- >&- 2>&-
This command closes all three standard file descriptors. This is a very bad idea. Should you want a script to be disconnected from these channels, I would suggest this more robust approach:
exec </dev/null >/dev/null 2>&1
In that case, all output will be discarded instead of triggering an error, and reads from standard input will immediately return end-of-file instead of failing.
The accepted answer is too verbose for me.
So, I decided to sum up an answer to your original question.
Using Bash version 4.3.39(2)-release
On an x86 32-bit Cygwin machine
GIVEN:
Stdin is fd #0.
Stdout is fd #1.
Stderr is fd #2.
ANSWER (written in bash):
exec 1> ./output.test.txt
echo -e "First Line: Hello World!"
printf "%s\n" "2nd Line: Hello Earth!" "3rd Line: Hello Solar System!"
# This is unnecessary, but
# it would stop or close stdout.
# exec 1>&-
# Point stdout back at the terminal by duplicating fd 0
# (this works because fd 0 is still connected to the terminal)
exec 1>&0
# Oops... I need to append some more data.
# So, lets append to the file.
exec 1>> ./output.test.txt
echo -e "# Appended this line and the next line is empty.\n"
# Point stdout back at the terminal again
exec 1>&0
# Output the contents to stdout
cat ./output.test.txt
USEFUL-KEYWORDS:
There are also here-docs, here-strings, and process substitution for I/O redirection in Bourne shell, Bash, tcsh, zsh, and others, on Linux, BSD, AIX, HP-UX, BusyBox, Toybox, et cetera.
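Quick illustrations of those three (bash syntax):
cat <<EOF                   # here-doc: stdin comes from the following lines, up to the EOF marker
hello
EOF
cat <<< "hello"             # here-string: stdin is this single string
diff <(ls dir1) <(ls dir2)  # process substitution: a command's output readable as a file name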
While I completely agree with Jonathan's answer to Q1, some systems have /dev/stdout, so you might be able to do exec 1>file; ...; exec 1>/dev/stdout
The program is dumping to stdout, and while I try to type new commands I can't see what I'm writing because it gets mixed in with the output. Is there a shell that separates commands from output? Or could I use two shells, running commands in one and having the program dump to the stdout of the other?
You can redirect the output of the program to another terminal window. For example:
program > /dev/pts/2 &
The style of terminal name may depend on how your system is organized.
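To find the destination terminal's device name, run tty in it (the output shown is illustrative):
$ tty
/dev/pts/2
Then, in the working terminal:
$ program > /dev/pts/2 &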
There's 'more', which lets you paginate through output, and 'tee', which splits a program's output so it goes both to stdout and to a file.
$ yourapp | more                    # show in page-sized chunks
$ yourapp | tee output.txt          # flood to stdout, but also save a copy in output.txt
and best of all
$ yourapp | tee output.txt | more   # paginate + save a copy
Either redirect standard output and error when you run the program, so it doesn't bother you:
./myprog >myprog.out 2>&1
or, alternatively, run a different terminal to do your work in. That leaves your program free to output whatever it likes to its terminal without bothering you.
Having said that, I'd still capture the information from the program to a file in case you have to go back and look at it.