I'm experimenting with subprocess.run in Python 3.5. To chain two commands together, I would have thought that the following should work:
import subprocess
ps1 = subprocess.run(['ls'], universal_newlines=True, stdout=subprocess.PIPE)
ps2 = subprocess.run(['cowsay'], stdin=ps1.stdout)
However, this fails with:
AttributeError: 'str' object has no attribute 'fileno'
ps2 was expecting a file-like object, but the output of ps1 is a simple string.
Is there a way to chain commands together with subprocess.run?
subprocess.run() can't be used to implement ls | cowsay without the shell because it doesn't allow the individual commands to run concurrently: each subprocess.run() call waits for its process to finish, which is why it returns a CompletedProcess object (notice the word "completed" there). ps1.stdout in your code is therefore a plain string, which is why you have to pass it via the input parameter rather than stdin, which expects a file/pipe (something with a valid .fileno()).
Either use the shell:
subprocess.run('ls | cowsay', shell=True)
Or use subprocess.Popen to run the child processes concurrently:
from subprocess import Popen, PIPE

# Start cowsay first, with a pipe for its stdin
cowsay = Popen('cowsay', stdin=PIPE)
# Start ls concurrently, sending its output straight into cowsay's stdin
ls = Popen('ls', stdout=cowsay.stdin)
# Close the parent's copy of the pipe and wait for cowsay to finish
cowsay.communicate()
ls.wait()
See How do I use subprocess.Popen to connect multiple processes by pipes?
It turns out that subprocess.run has an input argument to handle this:
ps1 = subprocess.run(['ls'], universal_newlines=True, stdout=subprocess.PIPE)
ps2 = subprocess.run(['cowsay'], universal_newlines=True, input=ps1.stdout)
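If you also want to capture cowsay's output instead of letting it print straight to the terminal, a small sketch of the same idea:
ps1 = subprocess.run(['ls'], universal_newlines=True, stdout=subprocess.PIPE)
ps2 = subprocess.run(['cowsay'], universal_newlines=True, input=ps1.stdout, stdout=subprocess.PIPE)
print(ps2.stdout)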
Alternatively, the following also works; instead of using input, it passes the output of ls to cowsay as a command-line argument:
ps1 = subprocess.run(['ls'], universal_newlines=True, stdout=subprocess.PIPE)
ps2 = subprocess.run(['cowsay', ps1.stdout], universal_newlines=True)
Related
I know this may seem weird, but I'm trying to understand why the following happens:
I'm editing a Python program at work and when I run the following Python function:
import subprocess
import sys

def execute_shell_cmd(cmd):
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    for c in iter(lambda: process.stdout.read(1), b''):
        print("type_c = ", type(c))
        sys.stdout.write(c)
    for e in iter(lambda: process.stderr.read(1), b''):
        sys.stdout.write(e)

execute_shell_cmd("ls -l")
In the output I see that type_c is bytes, and sys.stdout.write(c) runs fine, printing one byte at a time.
But when I run this function from a standalone program, I get the following error:
TypeError: write() argument must be str, not bytes
How is that possible?
In Python 3, sys.stdout is always a text (str) stream; its encoding is taken from the locale and can be overridden with the PYTHONIOENCODING environment variable (or by enabling UTF-8 mode via PYTHONUTF8).
sys.stdout.buffer (the io.TextIOBase.buffer attribute) is the underlying binary stream beneath the text-encoded stdout stream.
Since you're reading bytes from the subprocess, you'll need to also write to the byte-typed stream.
for c in iter(lambda: process.stdout.read(1), b''):
sys.stdout.buffer.write(c)
If, on the other hand, you do expect to be working with text, you may wish to configure the subprocess object to decode output to strings.
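For example, a minimal sketch of that second option (the function name is illustrative, not from the question): passing universal_newlines=True (or text=True on Python 3.7+) makes Popen decode the pipes to str, so writing to sys.stdout works as-is:
import subprocess
import sys

def execute_shell_cmd_text(cmd):
    # Text mode: stdout is decoded to str, so the EOF sentinel is '' rather than b''
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               shell=True, universal_newlines=True)
    for c in iter(lambda: process.stdout.read(1), ''):
        sys.stdout.write(c)

execute_shell_cmd_text("ls -l")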
I'm trying to run a function after making a selection in npyscreen; I've tried a few things and am still stuck. It just exits npyscreen and returns to a bash prompt. The function is supposed to start a watchdog/rsync watch folder waiting for files to back up.
#!/usr/bin/env python
# encoding: utf-8
import npyscreen as np
from nextscript import want_to_run_this_function

class WuTangClan(np.NPSAppManaged):
    def onStart(self):
        self.addForm('MAIN', FormMc, name="36 Chambers")

class FormMc(np.ActionFormExpandedV2):
    def create(self):
        self.rza_gfk = self.add(np.TitleSelectOne, max_height=4, name="Better MC:", value=[0], values=["RZA", "GhostFace Killah"], scroll_exit=True)

    def after_editing(self):
        if self.rza_gfk.value == [0]:
            want_to_run_this_function()
            self.parentApp.setNextForm(None)
        else:
            self.parentApp.setNextForm(None)

if __name__ == "__main__":
    App = WuTangClan()
    App.run()
I'm not sure if I understood correctly what you want.
For executing any kind of shell command I like to use the subprocess module; it has the Popen constructor, which you can use to run anything you could run from a shell.
E.g., on Windows:
import subprocess
process = subprocess.Popen(['ipconfig','/all'])
On a Unix-like system:
import subprocess
process = subprocess.Popen(['ip','a'])
If you have a ".py" file, you can pass the parameters as if you were running it from the terminal,
e.g.:
import subprocess
process = subprocess.Popen(['python3','sleeper.py'])
You can even retrieve the process PID and kill the process whenever you want; you can look at the subprocess module documentation here.
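For instance, a minimal sketch (the sleeper.py script is just a stand-in for any long-running command):
import subprocess

# Start a long-running command without blocking
process = subprocess.Popen(['python3', 'sleeper.py'])
print("started process with pid", process.pid)

# ... later, stop it whenever you want
process.terminate()  # ask it to exit (SIGTERM on Unix); process.kill() is more forceful
process.wait()       # reap the child so it doesn't linger as a zombie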
I'm using the subprocess module to run a bash command. I want to display the result in real time, including when no new line is added but the output is still modified (as with a progress bar updated in place).
I'm using Python 3. My code runs with subprocess, but I'm open to any other module. I have some code that returns a generator yielding every new line added.
import subprocess
import shlex
def run(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    while True:
        line = process.stdout.readline().rstrip()
        if not line:
            break
        yield line.decode('utf-8')

cmd = 'ls -al'
for l in run(cmd):
    print(l)
The problem comes with commands of the form rsync -P file.txt file2.txt for example, which shows a progress bar.
For example, we can start by creating a big file in bash:
base64 /dev/urandom | head -c 1000000000 > file.txt
Then try to use python to display the rsync command:
cmd = 'rsync -P file.txt file2.txt'
for l in run(cmd):
print(l)
With this code, the progress bar is only printed at the end of the process, but I want to print the progress in real time.
From this answer, you can disable buffering when printing in Python:
You can skip buffering for a whole python process using "python -u"
(or #!/usr/bin/env python -u etc) or by setting the environment
variable PYTHONUNBUFFERED.
You could also replace sys.stdout with some other stream like wrapper
which does a flush after every call.
Something like this (not really tested) might work...but there are
probably problems that could pop up. For instance, I don't think it
will work in IDLE, since sys.stdout is already replaced with some
funny object there which doesn't like to be flushed. (This could be
considered a bug in IDLE though.)
import sys

class Unbuffered:
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def __getattr__(self, attr):
        return getattr(self.stream, attr)

sys.stdout = Unbuffered(sys.stdout)
print('Hello')  # -> Hello
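Applying the same flush-after-every-write idea to the subprocess output gives something like this minimal sketch (untested; the run_realtime name is illustrative, and it assumes rsync still emits its progress when stdout is a pipe). Reading byte-by-byte instead of line-by-line means carriage-return progress updates are forwarded as soon as they arrive:
import shlex
import subprocess
import sys

def run_realtime(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    # Forward raw bytes as they arrive, without waiting for a newline,
    # and flush after every write so in-place progress updates show up immediately
    for chunk in iter(lambda: process.stdout.read(1), b''):
        sys.stdout.buffer.write(chunk)
        sys.stdout.buffer.flush()
    process.wait()

run_realtime('rsync -P file.txt file2.txt')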
I am trying to build a script that accepts a tracking number as input, builds the URL, and then gets the HTML response. I want to display this response in the terminal using the html2text program, emulating the command "html2text filename" that I would normally type in the terminal. However, my script displays the raw HTML instead of the usual html2text output. Where am I going wrong here?
#!/usr/bin/python3
#trial using bash calls no html2text library
import requests
import subprocess # to execute bash commands
try:
    check_for_package = subprocess.Popen(("dpkg", "-s", "html2text"), stdout=subprocess.PIPE)
    output = subprocess.check_output(("grep", "Status"), stdin=check_for_package.stdout)
    check_for_package.wait()
    opstr = str(output, 'utf-8')
    print(opstr)
    if opstr == "Status: install ok installed\n":
        print("Package installed")
except:
    print("installing html2text..............................")
    install_pkg = subprocess.check_call("sudo apt install html2text", shell=True)

r = requests.get("http://ipsweb.ptcmysore.gov.in/ipswebtracking/IPSWeb_item_events.asp?itemid=RT404715658HK&Submit=Submit")
print(r.status_code)
raw_html = r.text
#print(raw_html)
#raw_html = str(raw_html , 'utf-8')
view_html = subprocess.Popen(["html2text", raw_html])
output = view_html.communicate()
view_html.wait()
#view_html = subprocess.Popen("html2text template", shell=True)
print(output)
Update: I have worked around the issue for now by storing the output of r.text in a file and then calling html2text on that file.
The version of html2text you're using expects the argument to be a filename, not the HTML. To provide the HTML to it, you need to run the command with no argument, and provide the HTML on its standard input.
view_html = subprocess.Popen(["html2text"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
view_html.stdin.write(raw_html.encode('utf-8'))  # The pipe expects bytes, so encode the string
view_html.stdin.close()  # Close the pipe so html2text will get EOF
output = view_html.stdout.read()
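Alternatively, a minimal sketch using subprocess.run (available since Python 3.5), which handles the write/close/read sequence for you:
import subprocess

# input= feeds raw_html to html2text's stdin; universal_newlines=True lets us pass/receive str
result = subprocess.run(["html2text"], input=raw_html, universal_newlines=True,
                        stdout=subprocess.PIPE)
print(result.stdout)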
I have looked at other similar questions but was not able to find an answer to my question.
This is what I want to execute:
gagner -arg1 < file1
This is my code so far:
import subprocess
import tkinter.filedialog
from os.path import basename

filePath = tkinter.filedialog.askopenfilename(filetypes=[("All files", "*.*")])
fileNameStringForm = basename(filePath)
fileNameByteForm = fileNameStringForm.encode(encoding='utf-8')
process = subprocess.Popen(['gagner', '-arg1'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate(fileNameByteForm)
stringOutput = stdout.decode('utf-8')
print(stringOutput)
Currently, if I run this code, nothing happens: I get no errors, but no output is printed either.
Could someone please show me how I can execute the Linux command above using Python?
You shouldn't pass the name to the process (it expects to read file data from stdin, not a file name); instead, pass the file handle itself (or using PIPE, the raw file data).
So you could just do:
with open(fileNameStringForm, 'rb') as f:
    process = subprocess.Popen(['gagner', '-arg1'], stdin=f, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
or somewhat less efficiently (since Python has to read it, then write it, instead of letting the process read directly):
process = subprocess.Popen(['gagner', '-arg1'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
with open(fileNameStringForm, 'rb') as f:
    stdout, stderr = process.communicate(f.read())
Note that I removed shell=True from both invocations; using the list command form removes the need for it, and avoiding the shell wrapping is faster, safer, and more stable.
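For completeness, a minimal sketch of the same gagner -arg1 < file1 redirect using subprocess.run (assuming Python 3.5+ and reusing fileNameStringForm from the question):
import subprocess

# Equivalent of: gagner -arg1 < file1
with open(fileNameStringForm, 'rb') as f:
    result = subprocess.run(['gagner', '-arg1'], stdin=f,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)

print(result.stdout.decode('utf-8'))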