Running commands on a server through a local python script - python-3.x

I would like to run a batch of bash commands (all together) in a server shell through a python3 script in my local machine.
The reason why I'm not running the python3 script directly on the server is that I can't create the same environment there, and I want to keep the settings I have on my machine while executing the script.
What I would like to do is:
- Run python commands locally
- At a certain point, run those commands on the server
- Wait for the server-side execution to end
- Continue running the python script
(This will be done in a loop)
What I'm trying is to put all the commands in a bash script ssh_commands.sh and use the following command:
subprocess.call('cat ssh_commands.sh | ssh -T -S {} -p {} {}'.format(socket, port, user_host).split(),shell=True)
But when execution reaches that line, the script gets stuck until the subprocess.call timeout expires, even though the server-side execution shouldn't take anywhere near that long. The only way to stop the script earlier is with Ctrl+C.
I've also tried to set up the ssh connection in the ~/.ssh/config file but I'm getting the same result.
I know the ssh connection works fine, and if I run ssh_commands.sh manually on the server it runs without any problem.
Can somebody suggest:
- A way to fix what I'm trying to do
- A better way to achieve the result described above
- A way to debug what the problem could be
Thank you in advance

To expand on my comment (I haven't tested your specific case with ssh, so there could be other complications there): this is actually copy/pasted from my own code, from a situation that I already know works.
from subprocess import Popen, PIPE, DEVNULL
from shlex import split as sh_split
proc1 = Popen(sh_split(file_cmd1), stdout=PIPE)
proc2 = Popen(file_cmd2, shell=True, stdin=proc1.stdout, stdout=PIPE)
proc1.stdout.close()
I have a specific reason to use shell=True in the second, but you should probably be able to use shlex.split there too I'm guessing.
Basically you're running one command, outputting to `PIPE`, then using this as input for the second command.
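Applied to your ssh case, here is an untested sketch along the same lines (assuming the socket, port and user_host values from your call): feed the script to ssh on stdin and let communicate() block until the remote commands finish. One likely reason your original line hangs is that the command string is both split() and passed with shell=True, so the shell only ever receives cat, which then sits waiting for input on stdin.
from subprocess import Popen, PIPE

with open('ssh_commands.sh', 'rb') as script:
    proc = Popen(['ssh', '-T', '-S', socket, '-p', str(port), user_host],
                 stdin=script, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()  # blocks here until the server-side run ends
Whatever the remote commands print ends up in out and err, so you can log it before your loop moves on.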

Related

Trigger a script when nohup process terminates

I have some fairly time-consuming python scripts to run, ~3 hours or so each on my machine. I don't want to run them concurrently since it might crash my machine. For a single run I have more than enough memory, but running 5 or so at once might cause an issue. I am running them remotely, so I ssh into my server and run them like this:
nohup python my_script.py > my_output.txt &
That way if my connection gets interrupted I can re-establish it and my result is right there. I want to run the same python script a couple of times with different command line arguments, sequentially, so I can run everything I need without having to set up the next one every few hours. I could manually code all of the arguments into a python script and do it that way, but that seems inelegant and I don't want to have to fiddle with my python script every time I do this. Is there some sort of listener I could use to trigger the next one when one of them finishes?
I'd suggest writing a bash script that runs the python jobs sequentially:
#!/bin/bash
python3 my_script1.py > my_output1.txt
python3 my_script2.py > my_output2.txt
Then nohup that:
nohup ./driver.sh &
You really want to read up on utilities like tmux or screen and just script the whole thing.
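If you would rather keep everything in Python, here is a hedged sketch of an equivalent driver (the script name and argument sets below are placeholders for your own) that runs each job to completion before starting the next; you would still launch it once with nohup, e.g. nohup python3 driver.py &:
import subprocess

# each entry is one set of command line arguments for my_script.py
runs = [
    ["--alpha", "0.1"],
    ["--alpha", "0.5"],
]

for i, args in enumerate(runs, start=1):
    with open("my_output{}.txt".format(i), "w") as out:
        # blocks until this run finishes, then the loop starts the next one
        subprocess.run(["python3", "my_script.py"] + args, stdout=out, check=True)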

Calling a shell script using subprocess doesn't run all the commands in shell script

I have a Python script which loops through a folder, creating a shell command for each file.
Each command is written to a shell script and this script is then run using subprocess.Popen. (I need to do this because I also need to set up the environment beforehand for the commands to work.)
Here is some pseudocode:
def create_shell_script(self):
    '''loop through a folder, create a command for each file and write this to a shell script'''
    # command to run
    base_command = "run this"
    # list of commands, one per file
    command_list = []
    # loop through the files to create a command for each
    for file in my_folder:
        command_list.append(base_command + " " + file)
    # open the shell script
    scriptname = "shell.sh"
    shellscript = open(scriptname, 'w')
    # set the environment using '.' as bash is used below to run the shell script
    shellscript.write("#!/bin/bash\n. /etc/profile.d/set_environment\n")
    # loop through the commands and write them to the shell script
    for command in command_list:
        shellscript.write(command + "\n")
    # use subprocess to run the shell script; use bash to interpret the environment
    cmd = "bash " + scriptname
    proc = subprocess.Popen([cmd], stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
When I run this python script only the first 6 commands within the shell script are executed. The error message from the command suggests the command is truncated as it is read by subprocess.
When I run the shell script manually all commands are executed as expected so I know the shell script is correct.
Each command is pretty instantaneous but I can't imagine the speed causing an issue.
I did try running a subprocess command for each file but I ran into difficulties setting the environment and I like the approach of creating a single sh script as it also serves as a log file.
I have read the subprocess docs but haven't spotted anything and google hasn't helped.
You should close the shellscript file object after writing the commands to it and before running it via Popen. Otherwise, the file might not be written completely before you execute it.
The most elegant solution is to use a context manager, which closes the file automatically:
with open(scriptname, "w") as f:
    f.write(...)
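Applied to the function in the question, a minimal sketch (reusing the names my_folder, scriptname and the set_environment line from the question) could look like this, with the with-block guaranteeing the file is flushed and closed before bash reads it:
import subprocess

def create_shell_script(my_folder):
    scriptname = "shell.sh"
    with open(scriptname, "w") as shellscript:
        shellscript.write("#!/bin/bash\n. /etc/profile.d/set_environment\n")
        for file in my_folder:
            shellscript.write("run this " + file + "\n")
    # the file is flushed and closed at this point, so it is safe to execute
    proc = subprocess.Popen(["bash", scriptname],
                            stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    out, err = proc.communicate()  # wait for all commands to finish
    return out, err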
Don't use Popen if you don't understand what it does. It creates a process, but it will not necessarily run to completion until you take additional steps.
You are probably looking simply for subprocess.check_call. Also, storing the commands in a file is unnecessary and somewhat fragile. Just run subprocess.check_call(['bash', '-c', string_of_commands]) where string_of_commands has the commands separated by newlines or semicolons.
If you do want or need to use Popen, you will need to call communicate() or at least wait() on the Popen object.
Finally, avoid shell=True if you are passing in a list of commands; the purpose of the shell is to parse a string argument, and does nothing (useful) if that has already been done, or isn't necessary.
Here is an attempt at refactoring your script.
def create_shell_script(self):
    '''loop through a folder, create a command for each file and run them all in a single bash call'''
    command_list = ['. /etc/profile.d/set_environment']
    for file in my_folder:
        command_list.append("run this " + file)
    subprocess.check_call(['bash', '-c', '\n'.join(command_list)],
                          stderr=subprocess.PIPE, stdout=subprocess.PIPE)
If you want check_call to catch an error in any individual command, pass the -e option to bash, or include set -e before the commands which must not fail (but be aware that many innocent-looking constructs technically produce an error, such as false or grep nomatch file).
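For example (a sketch reusing command_list from the refactor above):
# -e makes bash abort at the first failing command, and check_call then raises
subprocess.check_call(['bash', '-e', '-c', '\n'.join(command_list)])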
The majority of functions in the subprocess module are simple wrappers which use and manipulate a Popen object behind the scenes. You should only resort to the fundamental Popen if none of the other functions are suitable for your scenario.
If you genuinely need both stderr and stdout, perhaps you do need Popen, but then your question needs to describe how exactly your program manipulates the input and output of the shell.
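In that case a minimal Popen sketch (again reusing command_list from above) would be:
proc = subprocess.Popen(['bash', '-c', '\n'.join(command_list)],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # reads both streams to the end and waits for exit
if proc.returncode != 0:
    raise RuntimeError(err.decode())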

Programmatically running a shell script in matlab

I am trying to open a terminal and run a script using matlab. The script will open an ssh connection. The matlab command is:
system(['lxterminal -e "bash ' scriptName '" &'],'-echo');
When I execute the matlab command the script runs but fails to validate SSL credentials.
The script is running ssh through the python paramiko package.
The error arises from the cli.py module.
The problem is solved if I run
system(['lxterminal -e "sudo bash ' scriptName '" &'],'-echo');
but then I have to enter the user password each time I execute the script.
If I open an lxterminal and run the same command:
bash scriptName
it works without the sudo.
I think it is related to some environment variables / configuration which are not loaded in lxterminal before running the script, but I cannot figure it out.
Using xterm instead of lxterminal has the same behavior.
Any ideas?
The fix, which might be dirty, was to clear LD_LIBRARY_PATH from the matlab environment variables before calling the system command, using the following command in the matlab script:
setenv('LD_LIBRARY_PATH');
Probably the matlab LD_LIBRARY_PATH points to obsolete libraries compared to the ones python needs.
A better approach might be to remove the paths one by one until you find the one causing the problem.

Running Multiple Remote Commands Consecutively Through Matlab system()

I'd like to execute multiple commands consecutively using a matlab system call. I would first like to ssh into a remote machine then run a program on that machine. After the program starts I would like to then enter another command into this program's console. Here is what I would like to do in code:
system('ssh othermachine')
system('command on other machine')
%%WAIT FOR PROGRAM TO START RUNNING ON OTHER MACHINE
system('run command on other machine')
The problem is that Matlab will hang on the first system call and won't proceed to the next system call until the process from the first has exited. Is there a way around this?
Thanks for your help!
Prologue: Your problem is general and not just related to matlab.
When you want to run remote commands via ssh, they have to be issued in the ssh call. In a (linux) shell, you'd have
$ ssh remotemachine command1
for a single command. Hence, using a matlab system call you would have
>> system('ssh remotemachine command1').
When you want multiple commands to be executed sequentially, in a shell you'd write
$ ssh remotemachine "command1; command2"
i.e., in matlab, you'd write something like
>> system('ssh remotemachine "command1; command2"').
In general, it is more elegant to group your commands in a shell script, say script.sh, and pipe it into the ssh call
$ cat script.sh | ssh remotemachine
which, in matlab, reads as
>> system('cat script.sh | ssh remotemachine').
There are a number of flags you can add in order to specify which behavior you want (e.g. in terms of session detachment/background execution, output collection,... look e.g. here).

Linux: using the tee command via ssh

I have written a Fortran program (let's call it program.exe) which does some simulation for me. Via ssh I'm logging into some faraway computers to start runs there whose results I collect after a few days. To stay up to date on how the program is proceeding, I want to write the shell output into a text file output.txt as well (since I can't be logged into the faraway computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This enables me to have a look at output.txt to see the current status even though the program hasn't ended its run yet. The above command works fine on my local machine. I first tried with '>' alone, but the problem there was that nothing was written into the text file until the whole program had finished (maybe related to the pipe buffer?). So I used the workaround with 'tee'.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit the 'nohup' and '&' I will not even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen instead of nohup. That way I would be able to put my program into a detached state (^A^D), reconnect to the host, retrieve my screen session (screen -r), and monitor my output as if I had never logged out.
