When I use os.system in my Python script, the following Lua command runs. The script does not wait for the Lua process to finish, though.
os.system("cd ~/code/CNNMRF; qlua cnnmrf.lua -max_size 750 -content_name test -style_name style_img")
My understanding is that I need to use subprocess. How would I map this os.system command to subprocess?
When I look at the examples, I see subprocess.run(["ls", "-l"]), but I'm not sure how to adapt this to my scenario.
os.system passes its argument to a shell, which is why the cd ...; prefix works there: it is shell syntax, not a program.
With subprocess you don't need the cd at all. What you need is subprocess.run(["qlua", "cnnmrf.lua", ...], cwd='/home/<your user>/code/CNNMRF'). Note there is no shell=True, because you are passing a list of arguments rather than a shell command string.
subprocess.run (like subprocess.call) accepts a cwd argument for the working directory. You cannot use ~ there; pass a regular path starting with /home/.../code/CNNMRF, or expand it with os.path.expanduser.
The first argument of subprocess.run is a list, so split your command on spaces: one list item per argument. A full sketch follows below.
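Put together, a minimal sketch of the full mapping (expanding ~ in Python, since cwd= does not do it):
import os
import subprocess

workdir = os.path.expanduser("~/code/CNNMRF")  # cwd replaces the cd; ~ is not expanded by cwd=
subprocess.run(["qlua", "cnnmrf.lua",
                "-max_size", "750",
                "-content_name", "test",
                "-style_name", "style_img"],
               cwd=workdir)
subprocess.run waits for the process to finish by default, which also solves your original problem.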
I was recently introduced to the useful subprocess library and its Popen and call methods for forking out processes from the Python process. However, I am somewhat confused by the general rules of how to form the list which is passed as an argument to Popen. The first question is: why can't I just pass a single string, just like what I would type into a terminal, instead of separating the elements into list items?
And then, what are the general rules? As an example, if the shell command looks like
/usr/bin/env python3 script.py arg1 arg2
then how should the Popen argument list look? Can I pass any shell command? And where can I find the general rules for splitting any shell command into list items?
This might be a duplicate of:
Using a Python subprocess call to invoke a Python script
You want to start a Python script in a new subprocess?
Note that forking works totally differently on Linux than it does on Windows.
Generally you would pass a list of arguments like this:
p = subprocess.Popen([command, arg1, arg2],
                     stdout=subprocess.PIPE)
where
command = "python" # or "python3"
and you can optionally get the output or handle the error:
output, error = p.communicate()
This alias must be on your PATH; otherwise, use the full path to the binary or to the interpreter inside your virtualenv.
Consider using asyncio if you are on Python 3, though it may offer no benefit for your use case.
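To make the mapping concrete, here is a sketch for the command above (script.py, arg1 and arg2 being the asker's placeholders): each whitespace-separated token of the shell command becomes one list item.
import subprocess

p = subprocess.Popen(["/usr/bin/env", "python3", "script.py", "arg1", "arg2"],
                     stdout=subprocess.PIPE)
output, error = p.communicate()
If you already have the command as a single string, the standard shlex module tokenizes it following the shell's own quoting rules:
import shlex

shlex.split("/usr/bin/env python3 script.py arg1 arg2")
# ['/usr/bin/env', 'python3', 'script.py', 'arg1', 'arg2']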
I'm on Python 3 and Debian.
I would like to have a function that uses os.system. For simplicity's sake, something along the lines of:
import os

def notxt():
    command = "rm *.txt"
    os.system(command)

notxt()
but when I run the script, it hangs without carrying out the command. Is there a way around this or am I approaching it incorrectly?
I would use subprocess here and then use run, like this:
import subprocess
subprocess.run(["rm", "1.txt"])
You might need another way to delete all the .txt files, since subprocess.run with an argument list does not perform the shell's * glob expansion; see the sketch after the docs link.
Here are the docs:
Python 3 docs for subprocess
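Since subprocess.run(["rm", "*.txt"]) would not work (no shell, so no glob expansion), here is a sketch of one way to remove every .txt file in the current directory, doing the expansion in Python:
import glob
import os

# glob.glob performs the *.txt expansion the shell would normally do
for path in glob.glob("*.txt"):
    os.remove(path)
Alternatively, subprocess.run("rm *.txt", shell=True) hands the whole string to a shell, which then expands the glob, at the cost of the usual shell=True caveats.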
I have a Python script which loops through a folder, creating a shell command for each file.
Each command is written to a shell script, and this script is then run using subprocess.Popen. (I need to do this because I also have to set up the environment beforehand for the commands to work.)
Here is some pseudocode:
def create_shell_script(self):
    '''loop through a folder, create a command for each file and write this to a shell script'''
    # command to run
    base_command = "run this"
    # list of commands
    command_list = []
    # loop through the files, creating a command for each
    for file in my_folder:
        command_list.append(base_command + " " + file)
    # open the shell script
    scriptname = "shell.sh"
    shellscript = open(scriptname, 'w')
    # set the environment using '.' since bash is used below to run the shell script
    shellscript.write("#!/bin/bash\n. /etc/profile.d/set_environment\n")
    # loop through the commands and write them to the shell script
    for command in command_list:
        shellscript.write(command + "\n")
    # use subprocess to run the shell script; use bash to interpret the environment
    cmd = "bash " + scriptname
    proc = subprocess.Popen([cmd], stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
When I run this Python script, only the first six commands within the shell script are executed. The error message from the command suggests it is truncated as it is read by subprocess.
When I run the shell script manually all commands are executed as expected so I know the shell script is correct.
Each command is pretty much instantaneous, so I can't imagine speed being the issue.
I did try running a subprocess command for each file, but I ran into difficulties setting the environment, and I like the approach of creating a single .sh script since it also serves as a log file.
I have read the subprocess docs but haven't spotted anything, and Google hasn't helped.
You should close the shellscript file object after writing the commands to it and before running it via Popen. Otherwise, the file might not be written completely before you execute it.
The most elegant solution is to use a context manager, which closes the file automatically:
with open(scriptname, "w") as f:
    f.write(...)
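Applied to the script above (keeping the asker's variable names), a sketch might look like this:
with open(scriptname, "w") as shellscript:
    shellscript.write("#!/bin/bash\n. /etc/profile.d/set_environment\n")
    for command in command_list:
        shellscript.write(command + "\n")
# the file is flushed and closed here, so bash sees the complete script
proc = subprocess.Popen(["bash", scriptname],
                        stderr=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = proc.communicate()  # wait for the script to finish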
Don't use Popen if you don't understand what it does. It creates a process, but it will not necessarily run to completion until you take additional steps.
You are probably looking simply for subprocess.check_call. Also, storing the commands in a file is unnecessary and somewhat fragile. Just run subprocess.check_call(['bash', '-c', string_of_commands]) where string_of_commands has the commands separated by newlines or semicolons.
If you do want or need to use Popen, you will need to call communicate() or at least wait() on the Popen object.
Finally, avoid shell=True if you are passing in a list of arguments; the purpose of the shell is to parse a string argument, and it does nothing useful if that parsing has already been done or isn't necessary.
Here is an attempt at refactoring your script.
def create_shell_script(self):
    '''loop through a folder, create a command for each file and run them in one bash invocation'''
    command_list = ['. /etc/profile.d/set_environment']
    for file in my_folder:
        command_list.append("run this " + file)
    subprocess.check_call(['bash', '-c', '\n'.join(command_list)])
If you want check_call to catch any error in any individual command, pass in the -e option, or include set -e before the commands which must not fail (but be aware that many innocent-looking constructs technically produce an error, such as false or grep nomatch file).
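For example, a sketch of the -e variant applied to the call above:
subprocess.check_call(['bash', '-e', '-c', '\n'.join(command_list)])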
The majority of functions in the subprocess module are simple wrappers which use and manipulate a Popen object behind the scenes. You should only resort to the fundamental Popen if none of the other functions are suitable for your scenario.
If you genuinely need both stderr and stdout, perhaps you do need Popen, but then your question needs to describe how exactly your program manipulates the input and output of the shell.
I have to write a module that sends a command to the system.
For example: The "ls" command, as you know, is a common bash command.
Since I am new at this module thing, I need some help.
As far as I know, ls is not a bash builtin command.
You can call any program (for example ls) with the complete path (/usr/bin/ls).
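If your module is in Python (an assumption; the question does not say), a minimal sketch using the standard subprocess module:
import subprocess

# run ls by its full path; on some systems it lives at /bin/ls,
# which `which ls` will tell you
result = subprocess.run(["/usr/bin/ls", "-l"], capture_output=True, text=True)
print(result.stdout)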
I was wondering if there is a way to run Linux commands from a Perl script. I am talking about commands such as cd, ls, ll, clear, and cp.
You can execute system commands in a variety of ways, some better than others.
Using system(), which prints the output of the command but does not return it to the Perl script.
Using backticks (``), which print nothing but return the output to the Perl script. An alternative to actual backticks is the qx() operator, which is easier to read and accomplishes the same thing.
Using exec(), which does the same thing as system() but never returns to the Perl script at all, unless the command doesn't exist or fails to execute.
Using open(), which allows you to either pipe input from your script into the command or read the command's output into your script.
It's important to mention that the system commands you listed, like cp and ls, are much better done with Perl's built-in functions. Any system call is a slow process, so use native functions when the desired result is something simple, like copying a file.
Some examples:
# Prints the output. Don't do this.
system("ls");
# Saves the output to a variable. Don't do this.
$lsResults = `ls`;
# Something like this is more useful.
system("imgcvt", "-f", "sgi", "-t", "tiff", "Image.sgi", "NewImage.tiff");
You can, as voithos says, use either system() or backticks. However, take into account that this is not recommended and that, for instance, cd won't work (it won't actually change the directory): those commands are executed in a new shell and won't affect the running Perl script.
I would not rely on those commands; try to implement your script in Perl itself (if you've decided to use Perl, anyway). In fact, Perl was originally designed to be a powerful substitute for sh and the other UNIX shells for sysadmins.
You can surround the command in backticks:
`command`
The problem is that Perl tries to execute bash builtins (e.g. source) as if they were real files, and it can't find them because they don't exist. The answer is to tell Perl explicitly what to execute. For bash builtins like source, do the following and it works just fine.
my $XYZZY=`bash -c "source SOME-FILE; DO_SOMETHING_ELSE; ..."`;
or, for the case of cd, do something like the following:
my $LOCATION=`bash -c "cd /etc/init.d; pwd"`;