How does one create a function containing os.system in python3? - python-3.x

I'm using Python 3 on Debian.
I would like to have a function that uses os.system. For simplicity's sake, something along the lines of:
import os

def notxt():
    command = "rm *.txt"
    os.system(command)

notxt()
but when I run the script, it hangs without carrying out the command. Is there a way around this or am I approaching it incorrectly?

I would use subprocess here and then use run, like this:
import subprocess
subprocess.run(["rm", "1.txt"])
You might need to find another way to delete all the .txt files, since the argument-list form doesn't expand shell globs like *.txt.
Here are the docs:
Python 3 docs for subprocess
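If the goal really is every .txt file, a minimal sketch without involving a shell glob (assuming the files live in the current working directory) could use pathlib instead:

from pathlib import Path

# Delete every .txt file in the current directory without invoking a shell.
for txt_file in Path(".").glob("*.txt"):
    txt_file.unlink()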

Related

How to execute a shell program taking inputs with python?

First of all, I'm using Ubuntu 20.04 and Python 3.8.
I would like to run a program that takes command line inputs. I managed to start the program from python with the os.system() command, but after starting the program it is impossible to send the inputs. The program in question is a product interface application that uses the CubeSat Space Protocol (CSP) as a language. However, the inputs used are encoded in a .c file with their corresponding .h header.
In the shell, it looks like this:
[screenshot: starting the program]
In python, it looks like this:
import os
os.chdir('/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1')
os.system('./waf')
os.system('./build/csp-client -k/dev/ttyUSB1')
os.system('cmp ident') #cmp ident is typically the kind of command that does not work on python
The output is the same as in the shell but without the "cmp ident" output; that is to say, it's impossible for me to interact with the csp-client# prompt.
As you can probably see, I'm a real beginner trying to be as clear and precise as possible. I can of course try to give more information if needed. Thanks for your help !
It sounds like the pexpect module might be what you're looking for rather than os.system. It's designed for controlling other applications and interacting with them the way a human would. The documentation for it is available here. What you want will probably look something like this:
import pexpect
p = pexpect.spawnu("/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client -k/dev/ttyUSB1")
p.expect("csp-client")
p.sendline("cmp indent")
print(p.read())
p.close()
I'll try and give you some hints to get you started - though bear in mind I do not know any of your tools, i.e. waf or csp-client, but hopefully that will not matter.
I'll number my points so you can refer to the steps easily.
Point 1
If waf is a build system, I wouldn't keep running that every time you want to run your csp-client. Just use waf to rebuild when you have changed your code - that should save time.
Point 2
When you change directory to /home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1 and then run ./build/csp-client you are effectively running:
/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client -k/dev/ttyUSB1
But that is rather annoying, so I would make a symbolic link to it in /usr/local/bin so that you can run it with just:
csp-client -k/dev/ttyUSB1
So, I would make that symlink with:
ln -s /home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client /usr/local/bin/csp-client
You MAY need to put sudo at the start of that command. Once you have that, you should be able to just run:
csp-client -k/dev/ttyUSB1
Point 3
Your Python code doesn't work because every os.system() starts a completely new shell, unrelated to the previous line or shell. And the shell that it starts then exits before your next os.system() command.
As a result, the cmp ident command never goes to the csp-client. You really need to send the cmp ident command on the stdin or "standard input" of csp-client. You can do that in Python, it is described here, but it's not all that easy for a beginner.
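For reference, a minimal sketch of that stdin approach with subprocess.run (assuming csp-client accepts the same commands on standard input that it accepts interactively):

import subprocess

# Path and flags taken from the question; "cmp ident" and "exit" are fed to csp-client's stdin.
result = subprocess.run(
    ["/home/augustin/workspaceGS/gs-sw-nanosoft-product-interface-application-2.5.1/build/csp-client",
     "-k/dev/ttyUSB1"],
    input="cmp ident\nexit\n",   # commands separated by newlines
    capture_output=True,
    text=True,
)
print(result.stdout)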
Instead of that, if you just have a few limited commands you need to send, such as "take a picture", I would make and test complete bash scripts in the terminal till I got them right, and then just call those from Python. So, I would make a bash script in your HOME directory called, say, csp-snap and put something like this in it:
#!/bin/bash
# Extend PATH so we can find "/usr/local/bin/csp-client"
PATH=$PATH:/usr/local/bin
{
# Tell client to take picture
echo "nanoncam snap"
# Exit csp-client
echo exit
} | csp-client -k/dev/ttyUSB1
Now make that executable (only necessary once) with:
chmod +x $HOME/csp-snap
And then you can test it with:
$HOME/csp-snap
If that works, you can copy the script to /usr/local/bin with:
cp $HOME/csp-snap /usr/local/bin
You may need sudo at the start again.
Then you should be able to take photos from anywhere just with:
csp-snap
Then your Python code becomes easy:
os.system('/usr/local/bin/csp-snap')
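If you later want the Python side to notice failures, a roughly equivalent subprocess call (a sketch, assuming the script is installed at /usr/local/bin/csp-snap as above) would be:

import subprocess

# check=True raises CalledProcessError if csp-snap exits with a non-zero status.
subprocess.run(["/usr/local/bin/csp-snap"], check=True)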

Usage of subprocess to run a LUA script

When I use os.system in my python script the following LUA command runs. The script does not wait for this LUA process to finish though.
os.system("cd ~/code/CNNMRF; qlua cnnmrf.lua -max_size 750 -content_name test -style_name style_img")
My understanding is that I need to use subprocess. How would I map this os.system command to subprocess?
When I look at the examples I see subprocess.run(["ls", "-l"]) but I'm not sure how to modify this for my scenario.
os.system runs the command through a shell, which is why the cd ...; prefix works there.
What you need is something like subprocess.run(["qlua", "cnnmrf.lua", "-max_size", "750", "-content_name", "test", "-style_name", "style_img"], cwd='/home/<your user>/code/CNNMRF') - don't combine shell=True with a list of arguments.
subprocess.run (like subprocess.call) lets you pass the working directory via cwd. You cannot use ~ there; you need to pass the full path, /home/.../code/CNNMRF.
The first argument of subprocess.run is a list, so you need to split your command into separate items.
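Putting that together, a sketch of the full mapping (qlua is the interpreter from the original command; os.path.expanduser stands in for the ~):

import os
import subprocess

# subprocess.run waits for the process to finish before returning.
subprocess.run(
    ["qlua", "cnnmrf.lua",
     "-max_size", "750",
     "-content_name", "test",
     "-style_name", "style_img"],
    cwd=os.path.expanduser("~/code/CNNMRF"),
    check=True,
)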

What is the listing format of Popen and call in Python subprocess?

I was recently introduced to the useful subprocess library and its Popen and call methods for forking out processes from the Python process. However, I am somewhat confused by the general rules for forming the list which is passed as an argument to Popen. The first question is: why can't I just pass a single string, exactly as I would type it into a terminal, instead of separating the elements into list items?
And then, what are the general rules? As an example, if the shell command looks like
/usr/bin/env python3 script.py arg1 arg2
then what should the Popen argument list look like? Can I pass any shell command? And where can I find the general rules for splitting any shell command into list items?
This might be a duplicate of:
Using a Python subprocess call to invoke a Python script
You want to start a Python script in a new subprocess?
Note that forking is totally different on Linux than it is on Windows.
generally you would pass a list of arguments like this:
p = subprocess.Popen([command, arg1, arg2],
                     stdout=subprocess.PIPE)
where
command = "python" # or "python3"
and you can optionally get the output or handle the error:
output, error = p.communicate()
The python command must be on your PATH; otherwise you should use the full path to that binary or to your virtualenv.
Consider using asyncio's subprocess support if you are on Python 3, but of course this may have no benefit for your use case.
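As for the general rule for splitting a shell command into list items: the standard library's shlex.split follows POSIX shell quoting rules, so a sketch for the example command would be:

import shlex
import subprocess

# shlex.split applies shell-style quoting rules to build the argument list.
args = shlex.split("/usr/bin/env python3 script.py arg1 arg2")
# args == ['/usr/bin/env', 'python3', 'script.py', 'arg1', 'arg2']

p = subprocess.Popen(args, stdout=subprocess.PIPE)
output, error = p.communicate()

Note this only covers a single simple command; pipes, redirections and other shell syntax still need shell=True with a string.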

Calling a shell script using subprocess doesn't run all the commands in shell script

I have a Python script which loops through a folder, creating a shell command for each file.
Each command is written to a shell script and this script is then run using subprocess.Popen. (I need to do this because I also need to set up the environment before for the commands to work).
Here is some pseudocode:
def create_shell_script(self):
    '''loop through a folder, create a command for each file and write this to a shell script'''
    # command to run
    base_command = "run this"
    # list of commands, one per file
    command_list = []
    # loop through the files to create a command for each
    for file in my_folder:
        command_list.append(base_command + " " + file)
    # open the shell script
    scriptname = "shell.sh"
    shellscript = open(scriptname, 'w')
    # set the environment using '.' as using bash below to run shell script
    shellscript.write("#!/bin/bash\n. /etc/profile.d/set_environment\n")
    # loop through commands and write to shellscript
    for command in command_list:
        shellscript.write(command + "\n")
    # use subprocess to run the shell script. Use bash to interpret the environment
    cmd = "bash " + scriptname
    proc = subprocess.Popen([cmd], stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
When I run this python script only the first 6 commands within the shell script are executed. The error message from the command suggests the command is truncated as it is read by subprocess.
When I run the shell script manually all commands are executed as expected so I know the shell script is correct.
Each command is pretty instantaneous but I can't imagine the speed causing an issue.
I did try running a subprocess command for each file but I ran into difficulties setting the environment and I like the approach of creating a single sh script as it also serves as a log file.
I have read the subprocess docs but haven't spotted anything and google hasn't helped.
You should close the shellscript file object after writing the commands to it and before running it via Popen. Otherwise, the file might not be written completely before you execute it.
The most elegant solution is to use a context manager, which closes the file automatically:
with open(scriptname, "w") as f:
f.write(...)
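Applied to the script in the question, a sketch of the write-then-run sequence (keeping the same shell.sh name and a placeholder command list) could look like:

import subprocess

scriptname = "shell.sh"
command_list = ["run this file1", "run this file2"]  # built from my_folder as in the question

# The "with" block guarantees the file is flushed and closed before bash reads it.
with open(scriptname, "w") as shellscript:
    shellscript.write("#!/bin/bash\n. /etc/profile.d/set_environment\n")
    for command in command_list:
        shellscript.write(command + "\n")

subprocess.run(["bash", scriptname], check=True)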
Don't use Popen if you don't understand what it does. It creates a process, but it will not necessarily run to completion until you take additional steps.
You are probably looking simply for subprocess.check_call. Also, storing the commands in a file is unnecessary and somewhat fragile. Just run subprocess.check_call(['bash', '-c', string_of_commands]) where string_of_commands has the commands separated by newlines or semicolons.
If you do want or need to use Popen, you will need to call communicate() or at least wait() on the Popen object.
Finally, avoid shell=True if you are passing in a list of commands; the purpose of the shell is to parse a string argument, and does nothing (useful) if that has already been done, or isn't necessary.
Here is an attempt at refactoring your script.
def create_shell_script(self):
    '''loop through a folder, create a command for each file and run them all in one bash invocation'''
    command_list = ['. /etc/profile.d/set_environment']
    for file in my_folder:
        command_list.append("run this " + file)
    subprocess.check_call(['bash', '-c', '\n'.join(command_list)],
                          stderr=subprocess.PIPE, stdout=subprocess.PIPE)
If you want check_call to catch any error in any individual command, pass the -e option to bash, or include set -e before the commands which must not fail (but be aware that many innocent-looking constructs technically produce an error, such as false or grep nomatch file).
The majority of functions in the subprocess module are simple wrappers which use and manipulate a Popen object behind the scenes. You should only resort to the fundamental Popen if none of the other functions are suitable for your scenario.
If you genuinely need both stderr and stdout, perhaps you do need Popen, but then your question needs to describe how exactly your program manipulates the input and output of the shell.
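If capturing both streams really is required, a minimal Popen sketch (reusing the shell.sh name from the question) might be:

import subprocess

proc = subprocess.Popen(["bash", "shell.sh"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        text=True)
stdout, stderr = proc.communicate()  # blocks until the script exits
if proc.returncode != 0:
    print("script failed:", stderr)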

Is it possible to use "py" instead of "python" at the command line in Linux?

Do you think it's possible in Ubuntu instead of typing python3 test.py every time I want to run a script, to use a shorthand equivalent such like this: py test.py? In other words, what I want is to make a shorthand command for python3 that should look like this: py. Can I do that?
Sure. You can use the bash alias command to do something like this:
alias py=python3
Put it somewhere like .bashrc and IIRC it will be set up for each (interactive) session.
Why not ensure the top line of your Python code is
#!/bin/python3
Then make sure the file is executable
chmod +x my_python_code.py
Now just run it
./my_python_code.py
The previous alias command will also work - even if you do the above steps.
