subprocess in python3 and its parameters - python-3.x

How is subprocess.Popen different from os.fork()? The doc just says it creates a child program in a new process.
And what does this code specifically do?
import subprocess
from subprocess import PIPE

subprocess.Popen("ls", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
I'm specifically confused about the parameters stdin=PIPE, stdout=PIPE and stderr=PIPE. Again, the doc says that these are special values that can be passed to Popen and indicate that a pipe to the standard stream should be opened.
Does this mean that it's setting the default standard stream to stdin, stdout or stderr respectively?
Plus, I didn't understand the documentation listing something like:
process = Popen(some_command, shell=True, stdin=PIPE, ...)  # same code as above
process.stdin.write(b"some binary data")
We're not accessing the arguments here, are we?
How is this code all working?
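A minimal sketch of what those parameters do, using cat as a stand-in command (it simply echoes its stdin back, so it makes the plumbing visible). Each stream set to PIPE gets a fresh OS pipe, and the parent-side end of that pipe is exposed as a file object on the Popen instance; that file object is what process.stdin.write() is accessing, not the command's arguments:

```python
from subprocess import Popen, PIPE

# Each stream set to PIPE gets a new OS pipe; the parent-side end is
# exposed as a file object on the Popen instance (process.stdin etc.).
process = Popen(["cat"], stdin=PIPE, stdout=PIPE, stderr=PIPE)

# communicate() writes these bytes to the child's stdin, closes it,
# then reads the child's stdout and stderr until EOF.
out, err = process.communicate(b"hello\n")
print(out)  # b'hello\n'
```

Unlike os.fork(), which duplicates the current Python process, Popen starts a separate program and only wires up its standard streams.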


Subprocess stdout: remove unnecessary char

I'm using the subprocess module and it works fine; the only thing is that the stdout comes back with a "b'" prefix, or in some cases longer text like "user config - ignore ...". Is it possible to remove this first part of the stdout without using str.substring() or similar methods?
output = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
In the above example, stdout.decode() can be used and the result will be saved as <str>:
decoded_output = output.stdout.decode()
And if some type of commands support json output(for example pvesh in proxmox) you could use the string and load it as json.
json_output = json.loads(decoded_output)
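Put together, a self-contained sketch of the decode-then-parse pattern, using echo as a stand-in for a JSON-emitting command (pvesh is specific to Proxmox and not generally available):

```python
import json
import subprocess

# `echo` stands in for a command that prints JSON (e.g. pvesh in Proxmox).
result = subprocess.run(["echo", '{"status": "ok"}'],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
decoded_output = result.stdout.decode()  # bytes -> str, no more b'' prefix
json_output = json.loads(decoded_output)
print(json_output)  # {'status': 'ok'}
```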

asyncio readline from c subprocess stdout seems to block on windows [duplicate]

Ok so I'm trying to run a C program from a python script. Currently I'm using a test C program:
#include <stdio.h>
#include <unistd.h>  /* for sleep() */

int main() {
    while (1) {
        printf("2000\n");
        sleep(1);
    }
    return 0;
}
To simulate the program that I will be using, which takes readings from a sensor constantly.
Then I'm trying to read the output (in this case "2000") from the C program with subprocess in python:
#!/usr/bin/python
import subprocess

process = subprocess.Popen("./main", stdout=subprocess.PIPE)
while True:
    for line in iter(process.stdout.readline, ''):
        print line,
but this is not working. From using print statements, it runs the .Popen line then waits at for line in iter(process.stdout.readline, ''):, until I press Ctrl-C.
Why is this? This is exactly what most examples that I've seen have as their code, and yet it does not read the file.
Is there a way of making it run only when there is something to be read?
It is a block buffering issue.
What follows is a version of my answer to the question Python: read streaming input from subprocess.communicate(), extended for your case.
Fix stdout buffer in C program directly
stdio-based programs are, as a rule, line buffered when running interactively in a terminal and block buffered when their stdout is redirected to a pipe. In the latter case, you won't see new lines until the buffer overflows or is flushed.
To avoid calling fflush() after each printf() call, you could force line-buffered output by calling, at the very beginning of the C program:
setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */
As soon as a newline is printed the buffer is flushed in this case.
Or fix it without modifying the source of C program
There is stdbuf utility that allows you to change buffering type without modifying the source code e.g.:
from subprocess import Popen, PIPE
process = Popen(["stdbuf", "-oL", "./main"], stdout=PIPE, bufsize=1)
for line in iter(process.stdout.readline, b''):
    print(line.decode(), end='')
process.communicate()  # close process' stream, wait for it to exit
There are also other utilities available, see Turn off buffering in pipe.
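The effect can also be sketched without stdbuf by using a child that is unbuffered from the start. Here a short Python child started with -u (Python's own unbuffered-output flag) plays the role of the C program:

```python
import sys
from subprocess import Popen, PIPE

# A Python child started with -u is unbuffered, much like a C program
# after setvbuf(stdout, NULL, _IOLBF, 0) or one wrapped in `stdbuf -oL`.
child_code = (
    "import time\n"
    "for _ in range(3):\n"
    "    print(2000)\n"
    "    time.sleep(0.1)\n"
)
process = Popen([sys.executable, "-u", "-c", child_code], stdout=PIPE)
lines = list(iter(process.stdout.readline, b""))  # b"" sentinel means EOF
process.wait()
print(lines)  # [b'2000\n', b'2000\n', b'2000\n']
```

Without -u, the same child writing to a pipe would hold the lines in its block buffer until exit.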
Or use pseudo-TTY
To trick the subprocess into thinking that it is running interactively, you could use pexpect module or its analogs, for code examples that use pexpect and pty modules, see Python subprocess readlines() hangs. Here's a variation on the pty example provided there (it should work on Linux):
#!/usr/bin/env python
import os
import pty
import sys
from select import select
from subprocess import Popen, STDOUT
master_fd, slave_fd = pty.openpty()  # provide tty to enable line buffering
process = Popen("./main", stdin=slave_fd, stdout=slave_fd, stderr=STDOUT,
                bufsize=0, close_fds=True)
timeout = .1  # ugly but otherwise `select` blocks on process' exit
# code is similar to _copy() from pty.py
with os.fdopen(master_fd, 'r+b', 0) as master:
    input_fds = [master, sys.stdin]
    while True:
        fds = select(input_fds, [], [], timeout)[0]
        if master in fds:  # subprocess' output is ready
            data = os.read(master_fd, 512)  # <-- doesn't block, may return less
            if not data:  # EOF
                input_fds.remove(master)
            else:
                os.write(sys.stdout.fileno(), data)  # copy to our stdout
        if sys.stdin in fds:  # got user input
            data = os.read(sys.stdin.fileno(), 512)
            if not data:
                input_fds.remove(sys.stdin)
            else:
                master.write(data)  # copy it to subprocess' stdin
        if not fds:  # timeout in select()
            if process.poll() is not None:  # subprocess ended
                # and no output is buffered <-- timeout + dead subprocess
                assert not select([master], [], [], 0)[0]  # race is possible
                os.close(slave_fd)  # subprocess doesn't need it anymore
                break
rc = process.wait()
print("subprocess exited with status %d" % rc)
Or use pty via pexpect
pexpect wraps pty handling into higher level interface:
#!/usr/bin/env python
import pexpect
child = pexpect.spawn("./main")
for line in child:
    print(line.decode(), end='')
child.close()
Q: Why not just use a pipe (popen())? explains why pseudo-TTY is useful.
Your program isn't hung, it just runs very slowly. It is using buffered output; the "2000\n" data is not written to stdout immediately, but will eventually make it. In your case, it might take BUFSIZ/strlen("2000\n") seconds (probably 1638 seconds) to fill the buffer.
After this line:
printf("2000\n");
add
fflush(stdout);
See the readline docs.
Your code:
process.stdout.readline
is waiting for EOF or a newline.
I cannot tell what you are ultimately trying to do, but your printf already ends in a newline; what is missing is a flush of stdout after it, which should at least get you started.
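To see the newline condition in isolation, here is a small sketch with a Python child standing in for the C program; because the child flushes after printing, readline() returns as soon as the newline reaches the pipe:

```python
import sys
from subprocess import Popen, PIPE

# The child flushes after printing, so the line reaches the pipe at once.
child_code = 'import sys; sys.stdout.write("2000\\n"); sys.stdout.flush()'
process = Popen([sys.executable, "-c", child_code], stdout=PIPE)
line = process.stdout.readline()  # returns once the newline is seen
process.wait()
print(line)  # b'2000\n'
```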

Why does input flush stdout

When I create a subprocess and communicate with it through stdin and stdout, the messages don't arrive unless I either flush the buffer or call input().
So I wonder whether input() flushes the buffer, and if so, I want to know why.
# file1
import subprocess
import time
import select
process = subprocess.Popen(['python3', 'file2.py'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
read_ready, _, _ = select.select([process.stdout], [], [])
message = read_ready[0].readline().decode()
print(message)
time.sleep(11)
process.kill()
-
# file2
import sys
import time
print('1')
message = input()
# I added the sleep because the buffer gets flushed if the program stops
time.sleep(10)
If I execute this code it prints 1 immediately. If I comment out the line with input(), then I need to wait until the program exits.
Yes, the input() function flushes the buffer. It has to, if you think about it - the purpose of the function is to present a prompt to the user and then ask for their input, and in order to make sure the user sees the prompt, the print buffer needs to be flushed.
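If you don't want to rely on that side effect, the child can flush explicitly. Here is a sketch of the same parent/child pair with print(..., flush=True) in place of input():

```python
import sys
from subprocess import Popen, PIPE

# The child flushes explicitly, so no input() call is needed to push the
# line through; the sleep stands in for the rest of a long-lived program.
child_code = (
    "import time\n"
    "print('1', flush=True)\n"
    "time.sleep(10)\n"
)
process = Popen([sys.executable, "-c", child_code],
                stdin=PIPE, stdout=PIPE)
message = process.stdout.readline().decode()  # arrives before the sleep ends
process.kill()
process.wait()
print(message)  # 1
```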

How can I redirect a file in a subprocess using "<"?

I have looked at other similar questions but was not able to find an answer to my question.
This is what I want to execute:
gagner -arg1 < file1
This is my code so far:
filePath = tkinter.filedialog.askopenfilename(filetypes=[("All files", "*.*")])
fileNameStringForm = basename(filePath)
fileNameByteForm = fileNameStringForm.encode(encoding='utf-8')
process = subprocess.Popen(['gagner','-arg1'], shell = True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate(fileNameByteForm)
stringOutput = stdout.decode('utf-8')
print(stringOutput)
Currently if I run this code, nothing happens. I get no errors, but no output is also printed.
Could someone please show me how I can execute the linux command above using python.
You shouldn't pass the name to the process (it expects to read file data from stdin, not a file name); instead, pass the file handle itself (or using PIPE, the raw file data).
So you could just do:
with open(fileNameStringForm, 'rb') as f:
    process = subprocess.Popen(['gagner', '-arg1'], stdin=f,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
or somewhat less efficiently (since Python has to read it, then write it, instead of letting the process read directly):
process = subprocess.Popen(['gagner', '-arg1'], stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
with open(fileNameStringForm, 'rb') as f:
    stdout, stderr = process.communicate(f.read())
Note that I removed shell=True from both invocations; using the list form of the command removes the need for it, and avoiding the shell wrapper is faster, safer, and more stable.
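For completeness, the same `cmd < file` pattern works with the simpler subprocess.run as well. This sketch uses wc -l as a stand-in for gagner -arg1 (which is specific to the question) and creates the input file first so it is self-contained:

```python
import subprocess

# Create a small input file; file1.txt stands in for the question's file1.
with open("file1.txt", "w") as f:
    f.write("a\nb\nc\n")

# The shell redirection `wc -l < file1.txt` becomes stdin=f; no shell involved.
with open("file1.txt", "rb") as f:
    result = subprocess.run(["wc", "-l"], stdin=f,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
line_count = int(result.stdout.split()[0])
print(line_count)  # 3
```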

Terminating process.stdin in node.js

The program below simply reads a string and outputs it. When I run this on cmd, the program doesn't print out the string. It keeps reading inputs until I terminate with Ctrl+C. How do I tell the program when my input string is over, so it can print the output?
var concat=require('concat-stream');
var str=[];
process.stdin.pipe(concat(function (buff) {
  console.log(buff.toString());
}));
concat-stream is waiting to receive a finish event. In your example that will happen when you close stdin. If you’re running this in a shell you can close stdin by pressing Ctrl+D. If you’re piping something to your process, make sure it closes its stdout when it’s done.
If you’re trying to make your script interactive in the shell, try split:
process.stdin
  .pipe(require('split')())
  .on('data', function (line) {
    console.log('got “%s”', line);
  });
Obviously the answer by Todd Yandell is the right one, and I have already upvoted it, but I wanted to add that besides split, you may also consider using through, which creates a sort of transform stream; it also works in an interactive way, since it is not an aggregating pipe.
Like this example in which everything you write in the standard input gets uppercased in standard output interactively:
var through = require('through');
function write(buffer) {
  var text = buffer.toString();
  this.queue(text.toUpperCase());
}

function end() {
  this.queue(null);
}

var transform = through(write, end);
process.stdin.pipe(transform).pipe(process.stdout);
You may even combine it with split by doing:
process.stdin
  .pipe(split())
  .pipe(transform)
  .pipe(process.stdout);
