Subprocess is blocking print from executing - python-3.x

I have the following loop with a print and a subprocess call:
for file in os.listdir(dir):
    print(file)
    subprocess.call(['python', 'otherscript.py', file])
otherscript.py prints some stuff as well. So when I execute my main script, everything that my main script should print before calling otherscript.py is printed only after otherscript.py has been called for the last time:
output from subprocess 1
output from subprocess 2
output from subprocess 3
output from main 1
output from main 2
output from main 3
How can I make it print before calling the subprocess?

The child Python script's stdout buffer is flushed when it exits, while the content buffered by the print() function in the parent may still be sitting in the parent's own buffer.
The solution is to make sure that nothing is left buffered before running subprocess.call(). See How to flush output of Python print?
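A minimal sketch of the fix, assuming Python 3.3+ where print() accepts a flush keyword (on older versions, call sys.stdout.flush() after the print instead):
import os
import subprocess

for file in os.listdir(dir):
    # flush=True pushes the line out of the parent's stdout buffer
    # before the child process starts writing its own output
    print(file, flush=True)
    subprocess.call(['python', 'otherscript.py', file])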

Related

examine output of shell command using subprocess in Python

I'm running a shell command in a Jupyter Notebook using subprocess or os.system(). The actual output is a dump of thousands of lines of code which takes at least a minute to print to stdout in the terminal. In my notebook, I just want to know if the output is more than a couple of lines, because if it was an error, the output would only be 1 or 2 lines. What's the best way to check if I'm receiving 20+ lines, and then stop the process and move on to the next?
You could read line by line using subprocess.Popen and count the lines (redirecting and merging the output and error streams; merging may not be needed, depending on the process):
- If the number of lines exceeds 20, kill the process and break the loop.
- If the loop ends before the number of lines reaches 20, print/handle an error.
Code:
import subprocess

# cmd is the shell command from the question
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for lineno, line in enumerate(iter(p.stdout.readline, b'')):
    if lineno == 20:
        print("process okay")
        p.kill()
        break
else:
    # for/else: the break wasn't reached, so the output was too short
    print("process failed, return code: {}".format(p.wait()))
Note that checking p.poll() is not None can also help to figure out whether the process has ended prematurely.
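For instance, a small hypothetical check along those lines:
if p.poll() is not None:
    # the child already exited; returncode holds its exit status
    print("process ended prematurely, return code: {}".format(p.returncode))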

isatty() always returning False?

I want to pipe data via stdin to a Python script for onwards processing. The command is:
tail -f /home/pi/ALL.TXT | python3 ./logcheck.py
And the Python code is:
import sys
while 1:
    if (sys.stdin.isatty()):
        for line in sys.stdin:
            print(line)
I want the code to continuously watch stdin and then process each row as it is received. The tail command works when run on its own, but the Python script never outputs anything.
Checking isatty(), it appears to always return False.
Help!
A TTY is when you use your regular interactive terminal - as in opening up python in your shell and typing:
BASH> python
>>> from sys import stdin
>>> stdin.isatty()  # True
In your case the standard input is coming from a pipe, which is not a TTY. Just add a not to the if statement.
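A corrected sketch of logcheck.py (the while loop also isn't needed, since iterating over sys.stdin already blocks until new lines arrive and ends at EOF):
import sys

# stdin is a pipe here, so isatty() returns False; invert the test
if not sys.stdin.isatty():
    for line in sys.stdin:
        print(line.rstrip())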

Subprocess.Popen vs .call: What is the correct way to call a C-executable from shell script using python where all 6 jobs can run in parallel

Using subprocess.Popen is producing incomplete results, whereas subprocess.call is giving correct output.
This is related to a regression script which has 6 jobs, where each job performs the same task but on different input files, and I'm running everything in parallel using subprocess.Popen.
The task is performed using a shell script which calls a bunch of C-compiled executables, whose job is to generate some text reports and then convert the text report info into jpg images.
Sample of the shell script (runit is the file name) calling the C-compiled executables:
#!/bin/csh -f
#file name : runit
#C - Executable 1
clean_spgs
#C - Executable 2
scrub_spgs_all file1
scrub_spgs_all file2
#C - Executable 3
scrub_pick file1 1000
scrub_pick file2 1000
While using subprocess.Popen, both scrub_spgs_all and scrub_pick try to run in parallel, causing the script to generate incomplete results, i.e. the output text files don't contain complete information, and some of the output text reports are missing.
The subprocess.Popen call is:
resrun_proc = subprocess.Popen("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
where runrescompare is a shell script containing:
#!/bin/csh
#some other text
./runit
Whereas using subprocess.call generates all the output text files and jpg images correctly, but I can't run all 6 jobs in parallel:
resrun_proc = subprocess.call("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
What is the correct way to call a C-executable from a shell script using Python subprocess calls, where all 6 jobs can run in parallel (using Python 3.5.1)?
Thanks.
You tried to simulate multiprocessing with subprocess.Popen(), which does not work the way you want: the output is blocked after a while unless you consume it, for instance with communicate() (but that is blocking) or by reading the output yourself; and with 6 concurrent handles in a loop, you are bound to get deadlocks.
The best way is to run the subprocess.call lines in separate threads.
There are several ways to do it. Small simple example with locking:
import threading, time

lock = threading.Lock()

def func1(a, b, c):
    lock.acquire()
    print(a, b, c)
    lock.release()
    time.sleep(10)

tl = []
t = threading.Thread(target=func1, args=[1, 2, 3])
t.start()
tl.append(t)
t = threading.Thread(target=func1, args=[4, 5, 6])
t.start()
tl.append(t)
# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
I took the time to create an example more suited to your needs: 2 threads executing a command and getting the output, then printing it within a lock to avoid mix-ups. I have used the check_output method for this. I'm on Windows, and I list the C and D drives in parallel:
import threading, time, subprocess

lock = threading.Lock()

def func1(runrescompare, rescompare_dir):
    resrun_proc = subprocess.check_output(runrescompare, shell=True,
                                          cwd=rescompare_dir,
                                          stderr=subprocess.PIPE,
                                          universal_newlines=True)
    lock.acquire()
    print(resrun_proc)
    lock.release()

tl = []
t = threading.Thread(target=func1, args=["ls", "C:/"])
t.start()
tl.append(t)
t = threading.Thread(target=func1, args=["ls", "D:/"])
t.start()
tl.append(t)
# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
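For completeness, a tidier variant of the same idea (a sketch, not the poster's code) using the standard library's thread pool, available since Python 3.2; the job directories here are hypothetical:
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_job(script, workdir):
    # check_output blocks this worker thread until the job finishes,
    # then returns the job's stdout as a string
    return subprocess.check_output(script, shell=True, cwd=workdir,
                                   universal_newlines=True)

# hypothetical: the same runit wrapper run from 6 different job directories
jobs = [("./runit", "job%d_dir" % i) for i in range(1, 7)]
with ThreadPoolExecutor(max_workers=6) as pool:
    for out in pool.map(lambda job: run_job(*job), jobs):
        print(out)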

python subprocess.readline() blocking when calling another python script

I've been playing with using the subprocess module to run Python scripts as sub-processes, and have come across a problem with reading output line by line.
The documentation I have read indicates that you should be able to use subprocess and call readline() on stdout, and this does indeed work if the script I am calling is a bash script. However, when I run a Python script, readline() blocks until the whole script has completed.
I have written a couple of test scripts that reproduce the problem. In the test scripts I attempt to run a Python script (tst1.py) as a sub-process from within a Python script (tst.py) and then read the output of tst1.py line by line.
tst.py starts tst1.py and tries to read the output line by line:
#!/usr/bin/env python
import sys, subprocess, multiprocessing, time

cmdStr = 'python ./tst1.py'
print(cmdStr)
cmdList = cmdStr.split()

subProc = subprocess.Popen(cmdList, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
while(1):
    # this call blocks until tst1.py has completed, then reads all the output
    # it then reads empty lines (seemingly for ever)
    ln = subProc.stdout.readline()
    if ln:
        print(ln)
tst1.py simply loops printing out a message:
#!/usr/bin/env python
import sys, time

if __name__ == "__main__":
    x = 0
    while(x < 20):
        print("%d: sleeping ..." % x)
        # flushing stdout here fixes the problem
        #sys.stdout.flush()
        time.sleep(1)
        x += 1
If tst1.py is written as a shell script tst1.sh:
#!/bin/bash
x=0
while [ $x -lt 20 ]
do
    echo $x: sleeping ...
    sleep 1
    let x++
done
readline() works as expected.
After some playing about, I discovered the situation can be resolved by flushing stdout in tst1.py, but I do not understand why this is required. I was wondering if anyone had an explanation for this behaviour?
I am running Red Hat 4 Linux:
Linux lb-cbga-05 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:33:05 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Because if the output is buffered somewhere, the parent process won't see it until the child process exits; at that point the output is flushed and all fds are closed. As for why it works with bash without explicitly flushing the output: when you use echo in most shells, it actually forks a process that executes echo (which prints something) and exits, so the output is flushed at that point too.
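If you would rather not modify the child, one option (a sketch, relying on the interpreter's standard -u flag, which disables output buffering) is to launch it unbuffered from the parent:
import subprocess

# -u makes the child interpreter write stdout unbuffered, so
# readline() in the parent sees each line as soon as it is printed
subProc = subprocess.Popen(['python', '-u', './tst1.py'],
                           stdout=subprocess.PIPE)
for ln in iter(subProc.stdout.readline, b''):
    print(ln.rstrip())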

How to pause bash script from python script?

I want to pause a bash script from a Python script. The steps look like this:
1. I start the script writer.sh from the Python script reader.py.
2. When writer.sh outputs its third line, I want execution of the script to be paused using some command in reader.py.
3. I want to resume execution of writer.sh using some command in reader.py.
Here are those two scripts. The problem is that writer.sh doesn't pause when the sleep command is executed in reader.py. So my question is, how can I pause writer.sh when it outputs the string "third"? To be exact (this is my practical problem in my job), I want writer.sh to stop because reader.py has stopped reading the output of writer.sh.
reader.py:
import subprocess
from time import sleep
print 'One line at a time:'
proc = subprocess.Popen('./writer.sh',
                        shell=False,
                        stdout=subprocess.PIPE,
                        )
try:
    for i in range(4):
        output = proc.stdout.readline()
        print output.rstrip()
        if i == 2:
            print "sleeping"
            sleep(200000000000000)
except KeyboardInterrupt:
    remainder = proc.communicate()[0]
    print "remainder:"
    print remainder
writer.sh:
#!/bin/sh
echo first;
echo second;
echo third;
echo fourth;
echo fith;
touch end_file;
A related question is: will using pipes on Linux pause script1 if script1 outputs lines of text (e.g. script1 | script2) and script2 pauses after reading the third line of input?
To pause the bash script, you can send the SIGSTOP signal to the PID.
If you want it to resume, you can send the SIGCONT signal.
You can get the pid of the subprocess with pid = proc.pid.
See man 7 signal
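A minimal sketch of reader.py along those lines. One caveat worth noting: pipes only block the writer once the pipe buffer fills (64 KiB on modern Linux), so a short script like writer.sh may already have written all of its lines before the stop signal lands; that also answers the related question, since a pipe alone pauses script1 only after its output exceeds the buffer size.
import os
import signal
import subprocess
from time import sleep

proc = subprocess.Popen('./writer.sh', stdout=subprocess.PIPE,
                        universal_newlines=True)
for i in range(4):
    print(proc.stdout.readline().rstrip())
    if i == 2:
        os.kill(proc.pid, signal.SIGSTOP)   # pause writer.sh
        sleep(5)                            # reader does other work here
        os.kill(proc.pid, signal.SIGCONT)   # resume writer.sh
proc.wait()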
