I want to write to the stdin of a running process (not Java). How can I get the Process object or the OutputStream directly? Runtime.getRuntime() only helps me spawn new things, not find existing processes.
This looks possible on Linux; I have no idea about other platforms. Searching for "get stdin of running process" turned up several promising-looking discussions:
Writing to stdin of background process
Write to stdin of a running process using pipe
Can I send some text to the STDIN of an active process running in a screen session?
Essentially, you can write to file descriptor 0 (stdin) of a process via /proc/$pid/fd/0. From there, you just have to open an OutputStream to that path.
I just tested this (not the Java part; a sketch of that follows the transcript) and it worked as advertised:
Shell-1 $ cat
This blocks, waiting on stdin
Shell-2 $ ps aux | grep 'cat$' | awk '{ print $2 }'
1234
Shell-2 $ echo "Hello World" > /proc/1234/fd/0
Now back in Shell-1:
Shell-1 $ cat
Hello World
Note this does not close the process's stdin. You can keep writing to the file descriptor.
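For the Java side, here is a minimal sketch, assuming a Linux procfs and that you have already found the target PID (1234 below, matching the transcript; substitute your own). Opening /proc/&lt;pid&gt;/fd/0 generally requires running as the same user as the target process, or as root.

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class WriteToStdin {
    public static void main(String[] args) throws IOException {
        long pid = 1234; // hypothetical PID, found via ps as shown above
        // Open the target process's stdin through procfs (Linux only).
        try (OutputStream out = new FileOutputStream("/proc/" + pid + "/fd/0")) {
            out.write("Hello World\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
        // Closing our stream only closes our own descriptor, not the
        // target's stdin, so you can reopen and write again later.
    }
}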
So, I have this interactive program running on an embedded Linux ARM platform with no screen, and I cannot modify it. To interact with it I have to ssh into the embedded Linux distro and run the program, which is a sort of custom command line with built-in commands; it does not exit, and only SIGINT will quit it.
I'm trying to automate it by letting it run in the background and communicating with it through pipes, sending SSH commands like ssh user@host 'echo "command" > stdinpipe'. This part works; I've been given an example like this in a shell script (I cannot use bash, I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything in the terminal. The problem is that while this works when I have an SSH session open, it won't when I only send commands one by one as described earlier. (I'm using paramiko in Python with the exec_command() method, in case that matters, though I don't think it's relevant; I could use invoke_session(), but I don't want to have to deal with recv().)
So I figured I'd redirect the output of the program to a pipe. That's where the problems start. My first attempt was this (please ignore the fact that everything runs as root and lives in root's home folder; that's how I got it, I don't have time to clean it up now, and I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. But if I leave the SSH session open, open another terminal, and type ssh root@host "echo command > /root/binary/inpipe", the command gets executed, but it then prints the command I just typed and its result in the other terminal that stayed open. So that is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts. I have no idea why; I know this because $command_pid is empty and I cannot find the process with ps -A.
If instead I replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts, and I can write to inpipe just fine with echo "command" > inpipe once the script has finished running. However, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried a normal file instead of a FIFO for the output redirection, with exactly the same results.
I've spent the entire day on this and cannot get it to work. Why is this not working? I'm probably also going about this in an awful way; is there another way? The only hard requirements are that I connect to the board through ssh and that the command line utility stays open, because it is communicating with onboard devices (using I2C, OneWire, and other protocols).
To keep it simple: I want to be able to write to the program's stdin whenever I want, and have its stdout go somewhere else (some file, buffer, I do not care) that I can easily retrieve later, after an arbitrary amount of time, with cat, tail or some other command over ssh.
Background: I have to revive my old program, which unfortunately fails when it comes to communicating with a subprocess. The program is written in C++ and creates a subprocess for writing, with a pipe opened for reading. Nothing crashes, but there is no data to read.
My idea is to recreate the entire scenario in bash so I can interactively check what is going on.
Things I used in C++:
mkfifo -- for creating the pipe; there is a bash equivalent
popen -- for creating the subprocess (in my case, for writing):
espeak -x -q -z 1> /dev/null 2> /tmp/my-pipe
open and read -- for opening the pipe and then reading; I hope a simple cat will suffice
fwrite -- for writing to the subprocess; will plain redirection work?
So I hope open, read and fwrite will be straightforward, but how do I launch a program as a subprocess (what is bash's popen)?
bash naturally makes piping between processes very easy, so commands to create and open pipes are not normally needed:
program1 | program2
This is the equivalent of program1 running popen("program2","w");
It could also be achieved by program2 running popen("program1","r");
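For comparison with the Java question above, the analogue of popen("program2", "w") would be to start the child with ProcessBuilder and write to its stdin through getOutputStream(). A minimal sketch, using cat as a stand-in for program2:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class PopenWrite {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Start the child; its stdout/stderr go to our terminal,
        // while its stdin is a pipe we hold the write end of.
        Process child = new ProcessBuilder("cat")
                .inheritIO()
                .redirectInput(ProcessBuilder.Redirect.PIPE)
                .start();
        try (OutputStream toChild = child.getOutputStream()) {
            toChild.write("hello via pipe\n".getBytes(StandardCharsets.UTF_8));
        } // closing the stream sends EOF, much like pclose()
        child.waitFor();
    }
}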
If you explicitly want to use a named pipe:
mkfifo /tmp/mypipe
program1 >/tmp/mypipe &
program2 </tmp/mypipe
rm /tmp/mypipe
A thought that might solve your original problem (and is a consideration for using pipes in shell):
Stdio calls such as popen, fwrite, etc. involve buffering. If a program on the write end of the pipe writes only a small amount of data, the program on the reading end won't see any of it until a full block of data has been written, after which the block is pushed along the pipe. If you want the data to arrive sooner, you need to either call fflush() on the writing end, or fclose() if you are not planning to send any more data. Note that with bash, I don't believe there is any equivalent of fflush.
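To see the buffering effect from code, here is a small Java illustration (Java buffers explicitly via BufferedOutputStream rather than through stdio, but the behaviour is the same). It assumes a FIFO created with mkfifo /tmp/mypipe and a reader such as cat /tmp/mypipe already running; opening a FIFO for writing blocks until a reader opens the other end.

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class FlushDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        try (BufferedOutputStream out =
                new BufferedOutputStream(new FileOutputStream("/tmp/mypipe"))) {
            out.write("first line\n".getBytes(StandardCharsets.UTF_8));
            out.flush();        // the reader sees "first line" immediately
            Thread.sleep(2000); // without the flush, nothing would have arrived yet
            out.write("second line\n".getBytes(StandardCharsets.UTF_8));
        } // close() flushes the remainder, and the reader then gets EOF
    }
}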
You simply run the process in the background.
espeak -x -q -z >/dev/null 2>/tmp/mypipe &
On Ubuntu, I start a command-line program (GNU Backgammon) and let it get its commands from a pipe (commandpipe), like so:
$ gnubg -t < commandpipe
from another terminal, I do
$ echo "new game" > commandpipe
This works fine: a new game is started, but after the program has finished processing that command, the process exits.
How can I prevent the backgammon process from exiting? I would like to continue sending commands to it via the commandpipe.
This is only because you used echo, which quits immediately after echoing. When a program quits, its file descriptors are closed. (OK, echo is not an actual program in bash, it's a builtin, but I don't think that matters.) If you wrote an actual interactive program, e.g. with a GUI, and redirected its stdout to the named pipe, you would not have this problem.
The reader gets EOF once the writer closes the FIFO, and then exits. So you need a loop, like this:
$ while true; do cat myfifo; done | ts
jan 05 23:01:56 a
jan 05 23:01:58 b
And in another terminal:
$ echo a > myfifo
$ echo b > myfifo
Substitute ts with gnubg -t.
The problem is that the file descriptor is closed: when the last write file descriptor on a pipe is closed, the reading process sees end-of-file.
As a quick hack, you can do this:
cat < z # read process in one terminal
cat > z & # Keep write File Descriptor open. Put in background (or run in new terminal)
echo hi > z # This will close the FD, but not signal the end of input
But you should really be writing in a real programming language where you can control your file descriptors.
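For example, a minimal Java sketch of the same idea (the FIFO path and the commands are made up for illustration): open the FIFO's write end once and keep it open across many writes, so the reader never sees EOF between commands.

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class FifoWriter {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hold one write descriptor open for the life of the program,
        // so the reading process never sees EOF between messages.
        try (OutputStream fifo = new FileOutputStream("/tmp/commandpipe")) {
            for (String cmd : new String[] {"new game", "roll", "hint"}) {
                fifo.write((cmd + "\n").getBytes(StandardCharsets.UTF_8));
                fifo.flush();
                Thread.sleep(1000); // arbitrary gap between commands
            }
        } // only now, on close, does the reader finally get EOF
    }
}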
To avoid the EOF, you could use tail -f, which keeps running forever and so never closes gnubg's stdin:
tail -f commandpipe | gnubg -t
The program is dumping to stdout, and while I try to type new commands I can't see what I'm typing because it gets mixed in with the output. Is there a shell that separates commands from output? Or can I use two shells, where I run commands in one and have the program dump to the stdout of the other?
You can redirect the output of the program to another terminal window. For example:
program > /dev/pts/2 &
The style of terminal name may depend on how your system is organized; running tty in the target terminal will print its device name.
There's 'more' to let you paginate through output, and 'tee', which lets you split a program's output so it goes both to stdout and to a file.
$ yourapp | more                    # show in page-sized chunks
$ yourapp | tee output.txt          # flood to stdout, but also save a copy in output.txt
and best of all
$ yourapp | tee output.txt | more   # paginate + save a copy
Either redirect standard output and error when you run the program, so it doesn't bother you:
./myprog >myprog.out 2>&1
or, alternatively, run a different terminal to do your work in. That leaves your program free to output whatever it likes to its terminal without bothering you.
Having said that, I'd still capture the information from the program to a file in case you have to go back and look at it.
I am confused about how Linux lets an application read from a pipe, as in "cat /etc/hosts | grep 'localhost'". I know that within a single program you can fork a child and communicate with it through a pipe. But I don't know how two independent programs communicate through a pipe.
In the example "cat /etc/hosts | grep 'localhost'", how does grep know which file descriptor it should read to get the input from cat /etc/hosts? Is there a "conventional" pipe provided by the OS to let grep know where to get its input? I want to know the mechanism behind this.
grep in your example gets its input from stdin (file descriptor 0). It is the shell's responsibility to call pipe(2) to create the pipe, then dup2(2) in each of the fork(2) children to assign their end of the pipe to stdin or stdout, before calling one of the exec(3) functions to actually run the two executables.
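At the system-call level this is C territory, but you can reproduce the same mechanism from Java: ProcessBuilder.startPipeline() (Java 9+) creates the pipes and attaches them to each child's stdin and stdout before the programs are executed, just as the shell does with pipe(2) and dup2(2) before exec(3). A minimal sketch of the example pipeline:

import java.io.IOException;
import java.util.List;

public class ShellPipe {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Equivalent of: cat /etc/hosts | grep 'localhost'
        List<Process> procs = ProcessBuilder.startPipeline(List.of(
                new ProcessBuilder("cat", "/etc/hosts"),
                new ProcessBuilder("grep", "localhost")
                        .redirectOutput(ProcessBuilder.Redirect.INHERIT)));
        // grep never "knows" about cat: it simply reads fd 0, which the
        // pipeline has already connected to cat's fd 1.
        procs.get(procs.size() - 1).waitFor();
    }
}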