Linux shell: wrap a program's stdin and stdout using pipes

So, I have an interactive program running on an embedded Linux ARM platform with no screen, and I cannot modify it. To interact with it I have to SSH into the embedded Linux distro and run the program, which is some sort of custom command line with built-in commands; it does not exit, and only SIGINT will quit it.
I'm trying to automate it by letting it run in the background and communicating with it through pipes, sending SSH commands like ssh user@host "echo command > stdinpipe". This part works; I've been provided with an example like this in a shell script (I cannot use bash, I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything to the terminal. The problem is that while this works if I keep an SSH session open, it won't if I only send commands one by one as described earlier (I'm using paramiko in Python with the exec_command() method, just in case, but I don't think that is relevant; I could use invoke_session() but I don't want to have to deal with recv()).
So I figured I'd redirect the output of the program to a pipe as well. That's where the problems arise. My first attempt was this (please ignore the fact that everything is run as root and stored in the root home folder; that's how I got it and I don't have time to make it cleaner now, plus I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. Now if I leave the SSH session open, open another terminal, and type ssh root@host "echo command > /root/binary/inpipe", the command gets executed, but it then prints the command I just typed and its result into the other terminal that stayed open. So that is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts, and I have no idea why. I know that because $command_pid is empty and I cannot find the process with ps -A.
Now if instead I replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts and I can write to inpipe just fine with echo "command" > inpipe once the script has finished running. However, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried redirecting the output to a normal file instead of a fifo, with exactly the same results.
I've spent the entire day on this and I cannot get it to work. Why is this not working? I am probably also going about this in an awful way; is there another way? The only mandatory constraints are that I have to connect to the board through SSH and that the command-line utility has to stay open, because it is communicating with onboard devices (using I2C, OneWire, and other protocols).
To keep it simple: I want to be able to write to the program's stdin whenever I want, and have its stdout go somewhere else (a file, a buffer, I do not care) that I can easily retrieve over SSH after an arbitrary amount of time with cat, tail, or some other command.
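For what it's worth, one pattern that avoids the hang is to keep the fifo only on the input side and send the output to a regular log file: opening a regular file never blocks, whereas each end of a fifo blocks in open() until the other end is opened too. A minimal sketch under the same ash setup, skipping the /proc/<PID>/exe indirection and redirecting the program's stdio directly at launch (outlog is a made-up name):
#!/bin/sh
cd /root/binary
mkfifo inpipe
# stdout/stderr go to a plain file, so the program can start without waiting for a reader
./command_bin < inpipe > outlog 2>&1 &
# keep a writer attached to the fifo so the program never sees EOF on stdin
tail -f /dev/null > inpipe &
Commands can then be sent and the accumulated output fetched with independent SSH calls, e.g. ssh root@host 'echo "command" > /root/binary/inpipe' and later ssh root@host 'tail /root/binary/outlog'.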

Related

Linux server: How do I use nohup and make sure the job is working?

What I know and what I've tried: I have a script in R (called GAM.R) that I want to run in the background that outputs .rdata, .pdf, and .jpg files. Running this from the command line is relatively simple:
$ Rscript GAM.R
However, this code takes a very long time to run so I would love to send it to the background and let it work even after I have logged out and turned the computer off. I understand this is pretty easy, as well, and my code would look like this:
$ nohup Rscript GAM.R >/dev/null 2>&1 &
I used this to see if it was working:
$ fg
nohup Rscript GAM.R > /dev/null 2>&1
The problem: I don't know how to check if the code is working (is there a way I can see its progress?) and I don't know where the outputs are going. I can see the progress and output with the first code so I must not be too far off. It doesn't seem that the second code's outputs are going where the first code's outputs went.
Your command line is diverting all output to /dev/null, a.k.a. the bit bucket.
Consider diverting it to a temporary file:
$ nohup Rscript GAM.R >/tmp/GAM.R.output 2>&1 &
Then you can tail /tmp/GAM.R.output to see the results; it will show the last 10 lines of the file by default. You can use tail -f to show the end of the file plus new output in real time.
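For example, to follow the log while the job runs (same file name as above):
$ tail -f /tmp/GAM.R.output
Press Ctrl-C to stop following; the nohup'd job keeps running in the background.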
Note that the contents of /tmp/ are not guaranteed to survive a reboot. You can put the file somewhere else (like ~/GAM.R.output) if you need to be sure.
Note, however, that if you turn your computer off, all processing gets aborted. For this to work your machine must continue to run and not go to sleep or shut down.
With the > you are redirecting the output of your script to /dev/null, and with 2>&1 you are redirecting the error output to the same place. Finally, nohup executes your process and detaches it from the current terminal.
So, to sum up, you are creating a process and redirecting its output and error output to the null device, /dev/null.
To answer your question, I suggest you redirect your outputs to a file in a folder that you can access as a normal user rather than as the superuser. Then, to make sure everything is OK, you can print that file.
For example you can do :
nohup Rscript GAM.R >/home/username/documents/output_file 2>&1 &
and then to see the file from a terminal you can do:
cat /home/username/documents/output_file
Lastly, I don't think your program will keep running if you turn off your PC, and I don't think there is a way to make it do that.
If you want to run your program in the background and access its output, you can easily do that by writing
exec 3< <(Rscript GAM.R)
And then when you wish to check the output of the program run
cat <&3 # or you can use 'cat /dev/fd/3'
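Note that the <(...) process substitution used here is a bash feature, not POSIX sh. A slightly fuller sketch of the same idea, closing the descriptor when you are done (fd 3 is an arbitrary choice):
exec 3< <(Rscript GAM.R)   # start the script; fd 3 is connected to its stdout
cat <&3                    # read the output (blocks until the script closes its stdout)
exec 3<&-                  # close the descriptor afterwards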
Excellent! Thanks everyone for your helpful answers, particularly @Greg Tarsa. Ultimately I needed to use:
$ nohup Rscript GAM.R >/usr/emily/gams/2017_03_14 2>&1 &
The above runs the script and saves the screen output to /usr/emily/gams (in a file called "2017_03_14", created by the command, not a folder as I had originally thought). This also outputs my .rdata, .pdf, and .jpg output files from the script to /usr/emily.
Then I can see progress and running programs using:
$ tail -f 2017_03_14 #Shows the last 10 lines of the program's progress
$ ps #shows your running projects
$ ps -fu emily #see running projects regardless of session, where username==emily
In the spirit of completeness, I can also note here that to cancel a process, you can use:
$ kill -HUP processid #https://kb.iu.edu/d/adqw

Running Multiple Remote Commands Consecutively Through Matlab system()

I'd like to execute multiple commands consecutively using a MATLAB system call. I would first like to ssh into a remote machine, then run a program on that machine. After the program starts, I would like to enter another command into this program's console. Here is what I would like to do in code:
system('ssh othermachine')
system('command on other machine')
%%WAIT FOR PROGRAM TO START RUNNING ON OTHER MACHINE
system('run command on other machine')
The problem is that MATLAB will hang on the first system call and won't proceed to the next one until the process from the first has exited. Is there a way around this?
Thanks for your help!
Prologue: Your problem is general and not just related to matlab.
When you want to run remote commands via ssh, they have to be issued in the ssh call. In a (linux) shell, you'd have
$ ssh remotemachine command1
for a single command. Hence, using a matlab system call you would have
>> system('ssh remotemachine command1').
When you want multiple commands to be executed sequentially, in a shell you'd write
$ ssh remotemachine "command1; command2"
i.e., in matlab, you'd write something like
>> system('ssh remotemachine "command1; command2"').
In general, it is more elegant to group your commands in a shell script, say script.sh, and pipe it into the ssh call
$ cat script.sh | ssh remotemachine
which, in matlab shell, sounds like
>> system('cat script.sh | ssh remotemachine').
There are a number of flags you can add in order to specify which behavior you want (e.g. in terms of session detachment/background execution, output collection,... look e.g. here).
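To make the script-based approach concrete, here is a hypothetical script.sh (command1 and command2 are the placeholders from the answer, not real programs):
#!/bin/sh
# script.sh: group the remote commands so a single ssh call runs them in order
command1
command2
From MATLAB that becomes the single call system('cat script.sh | ssh remotemachine'). If the remote program has to keep running after the call returns, one common pattern is to detach it on the remote side, e.g. ssh -f remotemachine "nohup command1 > /tmp/command1.log 2>&1 &", so that the local system() call does not block.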

bash how to close /dev/tty?

I want my interactive bash to run a program that will ultimately do things like:
echo Error: foobar >/dev/tty
and that, in another (Python) component, tries to prompt for and read a password from /dev/tty.
I want such reads and writes to fail, but not block.
Is there some way to close /dev/tty in the parent script and then run the program?
I tried
foo >&/tmp/outfile
which does not work.
What does sort of work is the 'at' command:
at now
at> foobar >&/tmp/outfile
/dev/tty is not open in your parent script. /dev/tty isn't a file descriptor but a path in the filesystem.
A line of script such as:
echo foobar > /dev/tty
opens a new descriptor for itself. To make that fail, we have to remove /dev/tty, or otherwise not make it work: change the permissions, or replace it with a nonexistent device. Needless to say, these are bad ideas.
If we want to run a script which does some useful things for us, but also does I/O to and from /dev/tty that we don't want (and we cannot change the script), the solutions range from creating an environment for that script in which the controlling terminal is some pseudo-tty (whose master side just throws data away), to doing a chroot to an environment in which /dev/tty is the same device as /dev/null.
Regarding the first option, there are utilities which create a pseudo tty, such as Expect.
For instance:
$ expect -c "spawn ./badscript ; expect"
will run badscript in an environment where its /dev/tty is a pseudo-tty connected to the expect interpreter. An echo foo > /dev/tty issued by badscript will still show up on your terminal, but now it only gets there because expect reads it from badscript via the pseudo-tty device and repeats it; badscript cannot write to your tty directly. Of course, with some scripting in expect's language, you can prevent this.
/dev/tty refers to the controlling terminal of the process. If by "closing" it you mean getting rid of it, you must invoke setsid(), which leaves the process without a controlling TTY; but you must then be careful to always pass O_NOCTTY when opening files, because an opened file that happens to be a TTY can otherwise become the controlling TTY.[2]
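From a shell script the same effect can be had with the setsid(1) utility instead of the system call. A minimal sketch, reusing the badscript/outfile names from above:
setsid ./badscript < /dev/null > /tmp/outfile 2>&1
Run this way the script has no controlling terminal, so its attempts to open /dev/tty should fail immediately rather than block, which appears to be the behavior the question asks for.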

Linux: using the tee command via ssh

I have written a Fortran program (let's call it program.exe) which does some simulations for me. Via ssh I log into some faraway computers to start runs there, whose results I collect after a few days. To stay up to date on how the program is proceeding, I also want to write the shell output into a text file output.txt (since I can't be logged into the faraway computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This enables me to have a look at output.txt to see the current status even though the program hasn't finished its run yet. The above command works fine on my local machine. I first tried plain '>' redirection, but the problem there was that nothing was written to the text file until the whole program had finished (maybe related to the pipe buffer?). So I used the workaround with 'tee'.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command, and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit 'nohup' and '&', I do not even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility (http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen) instead of nohup. That way I can detach my program (^A ^D), reconnect to the host, retrieve my screen session (screen -r), and monitor my output as if I had never logged out.
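A typical session along those lines might look like this (the session name "sim" is arbitrary; program.exe and output.txt are the names from the question):
$ ssh user@machine
$ screen -S sim                    # start a named screen session
$ ./program.exe | tee output.txt   # run the simulation inside it
Detach with Ctrl-A d, log out, and days later reattach with:
$ ssh user@machine
$ screen -r sim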

How do I launch an editor from a shell script?

I would like my tcsh script to launch an editor (e.g., vi, emacs):
#!/bin/tcsh
vi my_file
This starts up vi with my_file but first displays a warning "Vim: Warning: Output is not to a terminal" and my keystrokes don't appear on the screen. After I kill vi, my terminal window is messed up (no newlines), requiring a "reset". I tried "emacs -nw", "xemacs -nw", and pico with similar results. "xemacs" works but launches a separate window. I want to reuse the same terminal window.
Is there a way to launch an editor from a script so that it reuses the same terminal window?
I answered my own question! You have to redirect terminal input and output:
#!/bin/tcsh
vi my_file < `tty` > `tty`
The reason you're getting the error is that when you start a shell in your environment, it's starting in a subshell that has STDIN and STDOUT not connected to a TTY — probably because this is in something like a pipeline. When you redirect, you're opening a new connection directly to the device. So, for example, your command line turns
$ vi < `tty` > `tty`
into
$ vi < /dev/ttys000 > /dev/ttys000
So you're not really using your old STDIN/STDOUT, you're creating two new files and mapping them to your vi process's STDIN/STDOUT.
Now, tell us what you're doing with this and we'll tell you how to avoid this kludge.
I wanted to do something similar. I wanted an alias that would find the last file I was working on, and open it in vi(1) for editing. Anyway, I couldn't figure out how to do it as a readable alias (in tcsh) so I just created an ugly shell script (csh because I'm old) instead:
#!/bin/csh
set DIR = "~/www/TooMuchRock/shows/"
set file = $DIR`ls -t $DIR | head -1`
set tty = `tty`
vi $file <$tty >$tty
(1) kraftwerk:bin> which vi
vi: aliased to /usr/local/bin/vim -u ~/.exrc
Absolutely. :-)
Write your script and have it invoke the editor named in the EDITOR environment variable, which you will have set to "emacsclient". Then start up Emacs, execute M-x server-start, switch to a shell buffer (M-x shell), and execute your script. Emacsclient will pop up the thing to be edited, and C-x # will act as a "done" command, taking you back to your script with the edits completed or aborted, as you choose.
Enjoy.
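A minimal sketch of the script side, assuming emacsclient is on your PATH and an Emacs server is already running (the file name is just an example):
#!/bin/sh
: "${EDITOR:=emacsclient}"     # fall back to emacsclient if EDITOR is unset
"$EDITOR" notes.txt            # blocks until you press C-x # in Emacs
echo "editing finished, carrying on with notes.txt"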
Edit: I meant to add that these days Emacs IS my terminal program. I have dozens of shell buffers and never have to worry about losing output and can use all the power of Emacs to manipulate and analyse the terminal output. And have Emacs scripts generate input to the shells. Awesome actually. For example, watching Tomcat output scroll by in a shell buffer while editing sources or processing mail or doing most any Emacs thing is very convenient. When a Tomcat stack trace appears I can quickly respond to it.
I had the same trouble with 'pinfo' in a shell script 'while' loop. The following line can be used in the script; it uses 'ps' to find the tty of the current process number, "$$", and stores that tty in $KEY_TTY:
KEY_TTY=/dev/`ps | grep $$ | tr -s '[:blank:]' | cut -d " " -f 3`
Later in the script, just call the tty-only proggie, with $KEY_TTY as input, in my case it was:
pinfo -m $s $page < $KEY_TTY
For 'vi' it'd be:
vi $a < $KEY_TTY > $KEY_TTY
The advantage is that the script as a whole can still accept STDIN input, and 'vi' (or whatever) should work fine, without having to remember to set any environment variables before running the script.
Set your terminal tty to a variable, and then redirect the editor i/o through that variable.
In your script:
#!/bin/sh
ls | while read a; do vi $a < $MYTTY >$MYTTY; done
And then execute the script with:
$ MYTTY=`tty` ./myscript >/tmp/log
I was able to get the desired behavior under bash+Cygwin+Terminator:
#!/bin/bash
vim foo
Run the script, vim loads, no error messages, behaves as normal. There are undoubtedly dozens of variations between our setups, however, so I can't hazard a guess as to what makes the difference. I'm curious what it is, but you got it working, which is the important part.
