Running Multiple Remote Commands Consecutively Through Matlab system() - linux

I'd like to execute multiple commands consecutively using a matlab system call. I would first like to ssh into a remote machine, then run a program on that machine. After the program starts, I would like to enter another command into this program's console. Here is what I would like to do in code:
system('ssh othermachine')
system('command on other machine')
%%WAIT FOR PROGRAM TO START RUNNING ON OTHER MACHINE
system('run command on other machine')
The problem is that Matlab will hang on the first system call and won't proceed to the next system call until the process from the first has exited. Is there a way around this?
Thanks for your help!

Prologue: Your problem is general and not just related to matlab.
When you want to run remote commands via ssh, they have to be issued in the ssh call. In a (linux) shell, you'd have
$ ssh remotemachine command1
for a single command. Hence, using a matlab system call you would have
>> system('ssh remotemachine command1').
When you want multiple commands to be executed sequentially, in a shell you'd write
$ ssh remotemachine "command1; command2"
i.e., in matlab, you'd write something like
>> system('ssh remotemachine "command1; command2"').
In general, it is more elegant to group your commands in a shell script, say script.sh, and pipe it into the ssh call
$ cat script.sh | ssh remotemachine
which, from matlab, reads as
>> system('cat script.sh | ssh remotemachine').
There are a number of flags you can add in order to specify which behavior you want (e.g. in terms of session detachment/background execution, output collection, and so on).
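For instance, grouping the two steps from the question (a minimal sketch; command1 and command2 are placeholders for whatever needs to run remotely), script.sh could be:
#!/bin/sh
# runs sequentially on the remote machine
command1
command2   # starts only after command1 has returned
and the matlab side stays a single blocking call:
>> system('cat script.sh | ssh remotemachine')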

Related

Running commands on a server through a local python script

I would like to run a batch of bash commands (all together) in a server shell through a python3 script on my local machine.
The reason why I'm not running the python3 script directly on the server is that I can't create the same environment there, and I want to keep the settings I have on my machine while executing the script.
What I would like to do is:
- Run python commands locally
- At a certain point, run those commands on the server
- Wait for the server-side execution to finish
- Continue running the python script
(This will be done in a loop)
What I'm trying to do is put all the commands in a bash script ssh_commands.sh and use the following command:
subprocess.call('cat ssh_commands.sh | ssh -T -S {} -p {} {}'.format(socket, port, user_host).split(),shell=True)
But when the execution of the script reaches that line, it gets stuck until the subprocess.call timeout expires. The commands themselves shouldn't take anywhere near that long to run. The only way to stop the script before that is with Ctrl+C.
I've also tried to set up the ssh connection in the ~/.ssh/config file but I'm getting the same result.
I know that the ssh connection works fine, and if I run ssh_commands.sh on the server manually, it runs without any problem.
Can somebody suggest:
- A way to fix what I'm trying to do
- A better way to achieve the final result described above
- A way to debug and find out what the problem could be
Thank you in advance
To expand on my comment (I haven't tested your specific case with ssh; it could be that there are other complications there): this is actually copy/pasted from my own code, from a situation that I already know works.
from subprocess import Popen, PIPE
from shlex import split as sh_split

# file_cmd1 and file_cmd2 hold the two command strings
proc1 = Popen(sh_split(file_cmd1), stdout=PIPE)  # first command, stdout captured
proc2 = Popen(file_cmd2, shell=True, stdin=proc1.stdout, stdout=PIPE)  # second command reads the first's output
proc1.stdout.close()  # let proc1 receive SIGPIPE if proc2 exits first
I have a specific reason to use shell=True in the second, but you should probably be able to use shlex.split there too I'm guessing.
Basically you're running one command, sending its output to PIPE, and then using that as input for the second command, i.e. the Python equivalent of the shell pipeline file_cmd1 | file_cmd2.

Linux shell: wrap a program's stdin and stdout using pipes

So, I have this interactive program running on an embedded linux ARM platform with no screen, and I cannot modify it. To interact with it I have to ssh into the embedded linux distro and run the program, which is some sort of custom command line with builtin commands; it does not exit, and only SIGINT will quit it.
I'm trying to automate it by letting it run in the background and communicating with it using pipes, sending SSH commands like ssh user@host "echo command > stdinpipe". This part works; I've been provided with an example like this in a shell script (I cannot use bash, I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything inside the terminal. The problem is that while this works if I have an SSH session open, it won't if I only send commands one by one as I said earlier. (I'm using paramiko in python with the exec_command() method, just in case, but I don't think that is relevant; I could use invoke_session() but I don't want to have to deal with recv().)
So I figured I'd redirect the output of the program to a pipe. That's where the problems arise. My first attempt was this one (please ignore the fact that everything is run as root and stored in the root home folder; that's how I got it, and I don't have the time to make it cleaner now, plus I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. Now if I leave the SSH session open, open another terminal and type ssh root@host "echo command > /root/binary/inpipe", the code gets executed, but it then outputs the command I just typed and its result into the other terminal that stayed open. So it is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts, and I have no idea why. I know that because $command_pid is empty and I cannot find the process with ps -A.
Now if instead I replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts and I can write to inpipe just fine with echo "command" > inpipe once the script has finished running. However, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried using a normal file for redirecting the output instead of a fifo, with exactly the same results.
I've spent the entire day on this thing and I cannot get it to work. Why is this not working? Also, I am probably just using an awful way to do this; is there any other way? The only thing that's mandatory here is that I have to connect through ssh to the board, and the command line utility has to stay open because it is communicating with onboard devices (using I2C, OneWire protocols, etc.).
To keep it simple: I want to be able to write to the program's stdin whenever I want, and to have its stdout go somewhere else (some file, buffer, I do not care) from which I can easily retrieve it later, after an arbitrary amount of time, with cat, tail or some other command over ssh.
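A likely culprit for both symptoms: opening a FIFO blocks until the other end is opened too, so ./command_bin >outpipe & sits blocked before the program even starts, and cat outpipe hangs for the same reason once no writer has the pipe open. Below is a sketch of an alternative that keeps the FIFO only for input and captures output in a regular file instead; it assumes coreutils' stdbuf is available on the board (busybox builds often lack it, in which case this won't work as-is), and it starts the program directly with its redirections in place rather than re-attaching through /proc/<PID>/exe:
#!/bin/sh
cd /root/binary
mkfifo inpipe
# line-buffer stdout so output.log fills as the program prints,
# instead of waiting for stdio's block buffer to fill
stdbuf -oL ./command_bin < inpipe > output.log 2>&1 &
echo "started, PID $!"
# note: the program blocks on opening inpipe until the first writer appears
Commands then go in through the FIFO, and the output can be fetched at any time:
$ ssh root@host 'echo command > /root/binary/inpipe'
$ ssh root@host 'tail /root/binary/output.log'
If the program quits when stdin hits EOF, keep one long-lived writer holding inpipe open in the background so that each echo doesn't deliver EOF.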

Can you prompt for user input in a shell script that is running remotely?

Say I have a script that will be run on a remote machine.
While running, the script computes some value.
I want to prompt the user so she can change this value if needed.
Is this possible?
I am running the script like: ssh $usr@$machine 'bash -s' < a.sh "param1" "param2"
In a.sh the read alternateValue call seems to be ignored.
Or can anyone suggest a different approach?
The read statement reads data from stdin, but you are redirecting stdin in your command line with the < operator, so read isn't going to do anything useful.
What if you were first to copy the script over to the remote host, and then run:
ssh $usr@$machine 'bash /path/to/a.sh param1 param2'
Because there is no redirection happening here, read would work without a problem.
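A minimal sketch of that two-step approach (the /tmp path is just an illustration):
$ scp a.sh $usr@$machine:/tmp/a.sh
$ ssh $usr@$machine 'bash /tmp/a.sh param1 param2'
Since stdin is no longer consumed by the < redirection, read alternateValue can receive input typed into the ssh session; add the -t flag to ssh if the script needs a proper terminal for prompting.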

Robot Framework parallel command execution

I have a testcase containing multiple Execute Command calls (SSHLibrary) which run different commands in a Linux environment. The main thing I would like to do is to run some of them in parallel. By default Robot performs one command and, after it finishes, performs the next one.
This is not the behavior I want; I would like to have my command executed while the previous one is still running. For example:
Execute Command ./script.sh
Execute Command ./script_parallel.sh
What I would like Robot to do:
Execute script.sh
During execution perform script_parallel.sh (which will finish before script.sh finishes)
Finish script.sh
Will it be possible to use GNU Parallel?
Execute Command parallel ::: ./script.sh ./script_parallel.sh
Have you tried the Start Command keyword? It starts the command in the background and returns immediately. To verify successful execution of commands you need Read Command Output, as sketched below.
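A sketch of how that could look with the SSHLibrary keywords (script names taken from the question):
Start Command    ./script.sh
Execute Command    ./script_parallel.sh
${output}=    Read Command Output
Start Command returns immediately, Execute Command then runs and blocks as usual while script.sh keeps going in the background, and Read Command Output afterwards returns the output of the most recently started command.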

Linux: using the tee command via ssh

I have written a Fortran program (let's call it program.exe) which does some simulation for me. Via ssh I log into some faraway computers to start runs there, whose results I collect after a few days. To stay up to date on how the program is proceeding, I also want to write the shell output into a text file output.txt (since I can't be logged in to the faraway computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This enables me to have a look at output.txt to see the current status even though the program hasn't finished its run yet. The above command works fine on my local machine. I first tried with plain '>' redirection, but the problem there was that nothing was written into the text file until the whole program had finished (maybe related to the pipe buffer?). So I used the workaround with 'tee'.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit the 'nohup' and '&', I do not even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility (http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen) instead of nohup. That way I can put my session into a detached state (^A ^D), reconnect to the host later, retrieve my screen session (screen -r), and monitor my output as if I had never logged out.
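The delay you see is very likely stdio block buffering rather than ssh: when stdout is a pipe instead of a terminal, most programs flush output only in large blocks, so both '>' and tee lag far behind. Inside screen the program keeps a pseudo-terminal, so output appears immediately. A sketch of the workflow (the session name sim is arbitrary):
$ ssh user@machine
$ screen -S sim        # start a named session on the remote machine
$ ./program.exe        # output appears live in the screen window
(detach with ^A ^D, then log out)
$ ssh user@machine
$ screen -r sim        # reattach later and check progress
If you also want the output in a file, starting the session as screen -L additionally logs everything to screenlog.0.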
