I want to pause a bash script from a Python script. The steps look like this:
1. I start the script writer.sh from the Python script reader.py.
2. When writer.sh outputs its third line, I want execution of writer.sh to be paused by some command in reader.py.
3. I want to resume execution of writer.sh using some command in reader.py.
Here are the two scripts. The problem is that writer.sh doesn't pause when the sleep command is executed in reader.py. So my question is: how can I pause writer.sh when it outputs the string "third"? To be exact (this is my practical problem at my job), I want writer.sh to stop because reader.py has stopped reading its output.
reader.py:
import subprocess
from time import sleep

print 'One line at a time:'
proc = subprocess.Popen('./writer.sh',
                        shell=False,
                        stdout=subprocess.PIPE,
                        )
try:
    for i in range(4):
        output = proc.stdout.readline()
        print output.rstrip()
        if i == 2:
            print "sleeping"
            sleep(200000000000000)
except KeyboardInterrupt:
    remainder = proc.communicate()[0]
    print "remainder:"
    print remainder
writer.sh:
#!/bin/sh
echo first;
echo second;
echo third;
echo fourth;
echo fifth;
touch end_file;
A related question: will using pipes on Linux pause script1 if script1 outputs lines of text, e.g. script1 | script2, and script2 pauses after reading the third line of input?
To pause the bash script, you can send the SIGSTOP signal to the PID.
If you want it to resume, you can send the SIGCONT signal.
You can get the pid of the subprocess with pid = proc.pid.
See man 7 signal
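For example, a minimal sketch of how reader.py could do this (untested; it reuses the proc object from the question and os.kill from the standard library):
import os
import signal
import subprocess

proc = subprocess.Popen('./writer.sh', shell=False, stdout=subprocess.PIPE)

for i in range(4):
    output = proc.stdout.readline()
    print output.rstrip()
    if i == 2:
        # pause writer.sh once the third line has been read
        os.kill(proc.pid, signal.SIGSTOP)
        # ... reader.py does whatever made it stop reading ...
        # resume writer.sh when reader.py is ready to read again
        os.kill(proc.pid, signal.SIGCONT)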
I have a program that asks for input but it takes a while to load up.
I need a bash script that will pipe the program's output into a named pipe.
I need a command that will cause my echo to insert my input after the program prompts for input. This is my command right now, but it pipes in the input before the prompt appears.
echo "R" | nc localhost 123 > fifo
This will result in the following output:
usernname#name:
R
Please enter in an input (R, Q, T):
So my command needs to "wait" until the program prompts, then pipe in the input. Any ideas? This needs to be in a bash script.
You can use sleep:
(sleep 3; echo "R") | nc localhost 123 > fifo
Obviously this has a race condition, and so for industrial applications you should use expect instead.
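If you end up needing more control than a fixed sleep, the same "wait for the prompt, then answer" idea can be sketched in Python with the third-party pexpect library (an assumption, not part of the question; the prompt text is taken from the question and would need to match your program exactly):
import pexpect  # third-party package: pip install pexpect

child = pexpect.spawn('nc localhost 123')
child.expect(r'Please enter in an input \(R, Q, T\):')  # block until the real prompt appears
child.sendline('R')
print(child.read())  # whatever the program prints afterwards; redirect to the fifo if needed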
Using subprocess.Popen is producing incomplete results whereas subprocess.call is giving correct output
This is related to a regression script which has 6 jobs; each job performs the same task but on different input files, and I'm running everything in parallel using subprocess.Popen.
The task is performed using a shell script which calls a bunch of C-compiled executables whose job is to generate some text reports and then convert the text report info into jpg images.
Sample of the shell script (the file name is runit) calling the C-compiled executables:
#!/bin/csh -f
#file name : runit
#C - Executable 1
clean_spgs
#C - Executable 2
scrub_spgs_all file1
scrub_spgs_all file2
#C - Executable 3
scrub_pick file1 1000
scrub_pick file2 1000
While using subprocess.Popen, both scrub_spgs_all and scrub_pick try to run in parallel, causing the script to generate incomplete results, i.e. the output text files don't contain complete information and some of the output text reports are missing.
The subprocess.Popen call is:
resrun_proc = subprocess.Popen("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
where runrescompare is a shell script containing:
#!/bin/csh
#some other text
./runit
Whereas using subprocess.call generates all the output text files and jpg images correctly, but then I can't run all 6 jobs in parallel.
resrun_proc = subprocess.call("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
What is the correct way to call a C-executable from a shell script using Python subprocess calls so that all 6 jobs can run in parallel (using Python 3.5.1)?
Thanks.
You tried to simulate multiprocessing with subprocess.Popen(), which does not work like you want: the output is blocked after a while unless you consume it, for instance with communicate() (but that is blocking) or by reading the output, and with 6 concurrent handles in a loop you are bound to get deadlocks.
The best way is to run the subprocess.call lines in separate threads.
There are several ways to do it. A small, simple example with locking:
import threading,time

lock=threading.Lock()

def func1(a,b,c):
    lock.acquire()
    print(a,b,c)
    lock.release()
    time.sleep(10)

tl=[]

t = threading.Thread(target=func1,args=[1,2,3])
t.start()
tl.append(t)

t=threading.Thread(target=func1,args=[4,5,6])
t.start()
tl.append(t)

# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
I took the time to create an example more suitable to your needs:
two threads executing a command and getting the output, then printing it within a lock to avoid mix-ups. I have used the check_output method for this. I'm on Windows, and I list the C and D drives in parallel.
import threading,time,subprocess

lock=threading.Lock()

def func1(runrescompare,rescompare_dir):
    resrun_proc = subprocess.check_output(runrescompare, shell=True, cwd=rescompare_dir, stderr=subprocess.PIPE, universal_newlines=True)
    lock.acquire()
    print(resrun_proc)
    lock.release()

tl=[]

t=threading.Thread(target=func1,args=["ls","C:/"])
t.start()
tl.append(t)

t=threading.Thread(target=func1,args=["ls","D:/"])
t.start()
tl.append(t)

# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
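Applied to the question's setup, the same pattern might look roughly like this (a sketch only; the job directory names are hypothetical and runrescompare is assumed to live in each of them):
import threading, subprocess

lock = threading.Lock()

def run_job(rescompare_dir):
    # check_output waits for the job to finish and collects everything it printed
    out = subprocess.check_output("./runrescompare", shell=True,
                                  cwd=rescompare_dir, stderr=subprocess.STDOUT,
                                  universal_newlines=True)
    with lock:  # avoid interleaving the output of the 6 jobs
        print(rescompare_dir, out)

threads = []
for job_dir in ["job1", "job2", "job3", "job4", "job5", "job6"]:  # hypothetical directories
    t = threading.Thread(target=run_job, args=[job_dir])
    t.start()
    threads.append(t)

for t in threads:
    t.join()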
I have the following loop with a print and a subprocess call:
for file in os.listdir(dir):
    print(file)
    subprocess.call(['python', 'otherscript.py', file])
otherscript.py prints some stuff as well, so when I execute my main script, everything that my main script should print before calling otherscript.py is printed only after otherscript.py has been called for the last time:
output from subprocess 1
output from subprocess 2
output from subprocess 3
output from main 1
output from main 2
output from main 3
How can I make it print before calling the subprocess?
The subprocess' stdout buffer is flushed when the child Python script exits, while the content buffered by the print() function in the parent may still be sitting in the parent's buffer.
The solution is to make sure that nothing is buffered before running subprocess.call(). See How to flush output of Python print?
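A minimal sketch of the loop with an explicit flush (sys.stdout.flush() works on Python 2 and 3; on Python 3.3+ print(..., flush=True) has the same effect):
import os, sys, subprocess

for file in os.listdir(dir):   # 'dir' as in the question
    print(file)
    sys.stdout.flush()         # push the parent's output out before the child starts printing
    subprocess.call(['python', 'otherscript.py', file])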
I am having a problem getting bash to do exactly what I want. It's not a major issue, but it is annoying.
1.) I have a third-party program I run that produces some output on stderr. Some of it is useful, some of it is stuff I regularly don't care about and don't want dumped to screen; however, I do want the useful parts of the stderr dumped to screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I have implemented dumps out my errors at the right time but then returns a bash prompt, and I want to summarise the status of the errors at the end of the function; echoing there prints the text after the prompt, meaning that I have to press Enter to get back to a clean prompt. It should become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
    while read data;
    do
        echo Line was:"$data"
    done
    sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
    echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimal working example, and it's contrived. While other solutions to my error stream problem are welcome, I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
The release of stdin occurs at the end of TestErrorStream.sh, so your prompt is back almost immediately compared to what remains to be processed in the function.
I suggest you wrap the command inside a script so you'll be able to control how long to wait before your prompt is back (I suggest 1 second more than the time the function is expected to need to process the remaining lines).
I successfully managed to do this like this:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
    while read data;
    do
        echo Line was:"$data"
    done
    sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
    echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (It works fine with or without the additional "time" command.)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution ">(ProcessErrors)" is a subprocess of the script "./TestErrorStream.sh". So when the script ends, the subprocess is no longer tied to it nor to the wrapper. That's why we need that final "sleep 6".
#!/bin/bash

function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}

# Open subprocess
exec 60> >(ProcessErrors)
P=$!

# Do the work
2>&60 ./TestErrorStream.sh

# Close connection or else subprocess would keep on reading
exec 60>&-

# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.
I've been playing with using the subprocess module to run Python scripts as subprocesses and have come across a problem with reading output line by line.
The documentation I have read indicates that you should be able to use subprocess and call readline() on stdout, and this does indeed work if the script I am calling is a bash script. However, when I run a Python script, readline() blocks until the whole script has completed.
I have written a couple of test scripts that reproduce the problem. In the test scripts I attempt to run a Python script (tst1.py) as a subprocess from within a Python script (tst.py) and then read the output of tst1.py line by line.
tst.py starts tst1.py and tries to read the output line by line:
#!/usr/bin/env python
import sys, subprocess, multiprocessing, time

cmdStr = 'python ./tst1.py'
print(cmdStr)
cmdList = cmdStr.split()

subProc = subprocess.Popen(cmdList, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

while(1):
    # this call blocks until tst1.py has completed, then reads all the output
    # it then reads empty lines (seemingly for ever)
    ln = subProc.stdout.readline()
    if ln:
        print(ln)
tst1.py simply loops printing out a message:
#!/usr/bin/env python
import time

if __name__ == "__main__":
    x = 0
    while(x<20):
        print("%d: sleeping ..." % x)
        # flushing stdout here fixes the problem
        #sys.stdout.flush()
        time.sleep(1)
        x += 1
If tst1.py is written as a shell script tst1.sh:
#!/bin/bash
x=0
while [ $x -lt 20 ]
do
    echo $x: sleeping ...
    sleep 1
    let x++
done
readline() works as expected.
After some playing about I discovered the situation can be resolved by flushing stdout in tst1.py, but I do not understand why this is required. I was wondering if anyone had an explanation for this behaviour?
I am running Red Hat 4 Linux:
Linux lb-cbga-05 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:33:05 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Because if the output is buffered somewhere, the parent process won't see it until the child process exits; at that point the output is flushed and all fds are closed. As for why it works with bash without explicitly flushing the output: when you type echo in most shells, it actually forks a process that executes echo (which prints something) and exits, so the output is flushed as well.
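For completeness, a sketch of how tst.py could disable the child's buffering instead of flushing inside tst1.py (the -u switch and the PYTHONUNBUFFERED environment variable are standard CPython options; file names are taken from the question):
import subprocess

# run the child interpreter unbuffered (-u) so readline() returns each line as it is printed;
# setting PYTHONUNBUFFERED=1 in the child's environment has the same effect
subProc = subprocess.Popen(['python', '-u', './tst1.py'],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)

while True:
    ln = subProc.stdout.readline()
    if not ln:
        break              # readline() returns an empty string once the child exits
    print(ln.rstrip())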