Executing popen with timeout - python-3.x

So I am trying to execute a Linux command via subprocess.Popen(). I want to wait only 30 seconds for this command to execute, because in certain scenarios my command hangs and the program waits forever.
Below are the 2 approaches I used.
Approach 1
cmd = "google-chrome --headless --timeout=30000 --ignore-certificate-errors --print-to-pdf out.pdf https://www.google.com/
process = subprocess.call(cmd, shell=True)
process.wait() # Here I want to wait only till 30 secs and not untill process completes
Approach 2
from multiprocessing import Process

p1 = Process(target=subprocess.call, args=(cmd,))
processTimeout = 50
p1.start()
p1.join(processTimeout)
if p1.is_alive():
    p1.terminate()
In the 2nd approach the file is not even being created. Please suggest an option.

Popen.wait takes an optional timeout parameter. You can use this to wait for completion only for a specific time. If the timeout triggers, you can terminate the process.
process = subprocess.Popen(cmd, shell=True)
try:
    # if this returns, the process completed
    process.wait(timeout=30)
except subprocess.TimeoutExpired:
    process.terminate()
Since Python 3.5, you can also use the subprocess.run convenience function.
subprocess.run(cmd, timeout=30)
Note that this will still raise TimeoutExpired but automatically terminate the subprocess.
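For completeness, here is a minimal sketch combining both points, with the asker's Chrome command split into an argument list so shell=True isn't needed (the 30-second values are just examples):
import subprocess

cmd = ["google-chrome", "--headless", "--timeout=30000",
       "--ignore-certificate-errors", "--print-to-pdf", "out.pdf",
       "https://www.google.com/"]

try:
    # run() blocks until the command finishes or the timeout expires;
    # on timeout it kills the child and raises TimeoutExpired
    completed = subprocess.run(cmd, timeout=30)
    print("exit code:", completed.returncode)
except subprocess.TimeoutExpired:
    print("command did not finish within 30 seconds")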

Related

examine output of shell command using subprocess in Python

I'm running a shell command in a Jupyter Notebook using subprocess or os.system(). The actual output is a dump of thousands of lines which takes at least a minute to print to stdout in a terminal. In my notebook, I just want to know whether the output is more than a couple of lines, because if it were an error the output would only be 1 or 2 lines. What's the best way to check whether I'm receiving 20+ lines, then stop the process and move on to the next?
You could read line by line using subprocess.Popen and count the lines (redirecting and merging the output and error streams; merging may not be needed, depending on the process).
If the number of lines exceeds 20, kill the process and break the loop.
If the loop ends before the number of lines reaches 20, print/handle an error.
code:
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for lineno, line in enumerate(iter(p.stdout.readline, b'')):
    if lineno == 20:
        print("process okay")
        p.kill()
        break
else:
    # too short, break wasn't reached
    print("process failed, return code: {}".format(p.wait()))
Note that checking whether p.poll() is not None can also help to figure out if the process has ended prematurely.
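If you also want to notice the command dying early, a tiny addition (same p as above):
# after the loop, poll() reports the exit status, or None if the child is still running
if p.poll() is not None:
    print("process already exited with return code {}".format(p.returncode))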

File not reading textfile after running Popen

I am trying to write something that checks a specific service and puts the result into a text file. Afterwards I am trying to determine whether it's stopped or running and do other things.
The file gets created and looks like this. I tried parsing this out individually or using .readlines(), but no dice. Any help/tips would be appreciated.
SERVICE_NAME: fax
TYPE : 10 WIN32_OWN_PROCESS
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1077 (0x435)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
but my code below reads back nothing
from subprocess import Popen
import datetime

today = datetime.datetime.now()
squery = ['sc', 'query', 'fax']
proc = Popen(['sc', 'query', 'fax'], stdout=open(str(today.date())+'_ServiceCheck.txt', 'w'))
if 'STOPPED' in open(str(today.date())+'_ServiceCheck.txt').read():
    print("Uh Oh")
    # Do something about it
As written, there's a good chance the parent process will open the file, check for 'STOPPED' and close long before the subprocess even starts running. You can use subprocess.call to force the parent process to block until the child finishes executing, which fits the idea of waiting for your Selenium script's process to finish execution.
Consider this:
# some_script.py
from time import sleep

print("subprocess running!")
for i in range(5):
    print("subprocess says %s" % i)
    sleep(1)
print("subprocess stopping!")
# main.py
import subprocess

while True:
    print("parent process starting child...")
    proc = subprocess.call(["python", "some_script.py"])
    print("parent process noticed child stopped running")
Output excerpt from running python main.py:
parent process starting child...
subprocess running!
subprocess says 0
subprocess says 1
subprocess says 2
subprocess says 3
subprocess says 4
subprocess stopping!
parent process noticed child stopped running
parent process starting child...
subprocess running!
subprocess says 0
subprocess says 1
subprocess says 2
subprocess says 3
subprocess says 4
subprocess stopping!
parent process noticed child stopped running
...
This seems much better. The parent blocks completely until the child stops execution, then immediately restarts the child.
Otherwise, to do what you're doing, it sounds like you'll need to poll the file periodically like:
import datetime
from subprocess import Popen
from time import sleep

delay = 10
while True:
    today = datetime.datetime.now()
    fname = '%s_ServiceCheck.txt' % today.date()
    file_content = open(fname).read()
    if 'STOPPED' in file_content:
        print('Uh oh')
    proc = Popen(['sc', 'query', 'fax'], stdout=open(fname, 'w'))
    sleep(delay)
But be careful: what if the Selenium process stops at 11:59:59? The filename is based on the date, so polling this text file is pretty brittle, and this script is probably nowhere near robust enough to handle all cases. If you can redirect your Selenium script's output directly to the parent process, that would make it a lot more reliable. The parent process can also write the log to disk on behalf of the script if needed.
Either way, a lot of it depends on details about your environment and what you're trying to accomplish.
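As a rough sketch of reading a child's output directly rather than via a file (shown here with the question's sc query fax command; check_output is in the standard library):
import subprocess

# capture the output of `sc query fax` directly instead of round-tripping
# through a dated text file
output = subprocess.check_output(['sc', 'query', 'fax'], universal_newlines=True)
if 'STOPPED' in output:
    print("Uh Oh")
    # Do something about it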

Subprocess.Popen vs .call: What is the correct way to call a C-executable from shell script using python where all 6 jobs can run in parallel

Using subprocess.Popen is producing incomplete results, whereas subprocess.call is giving correct output.
This is related to a regression script which has 6 jobs, each performing the same task but on different input files, and I'm running everything in parallel using subprocess.Popen.
The task is performed by a shell script that calls a bunch of C-compiled executables whose job is to generate some text reports and then convert the text report info into jpg images.
Sample of the shell script (runit is the file name) calling the C-compiled executables:
#!/bin/csh -f
#file name : runit
#C - Executable 1
clean_spgs
#C - Executable 2
scrub_spgs_all file1
scrub_spgs_all file2
#C - Executable 3
scrub_pick file1 1000
scrub_pick file2 1000
While using subprocess.Popen, both scrub_spgs_all and scrub_pick try to run in parallel, causing the script to generate incomplete results, i.e. the output text files don't contain complete information and some of the output text reports are missing.
The subprocess.Popen call is
resrun_proc = subprocess.Popen("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
where runrescompare is a shell script and has
#!/bin/csh
#some other text
./runit
Whereas using subprocess.call generates all the output text files and jpg images correctly, but I can't run all 6 jobs in parallel.
resrun_proc = subprocess.call("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
What is the correct way to call a C executable from a shell script using Python subprocess calls so that all 6 jobs can run in parallel (using Python 3.5.1)?
Thanks.
You tried to simulate multiprocessing with subprocess.Popen(), which does not work like you want: the child blocks once its output pipe fills up unless you consume the output, for instance with communicate() (but that is blocking) or by reading it yourself, and with 6 concurrent handles in a loop you are bound to get deadlocks.
The best way is to run the subprocess.call lines in separate threads.
There are several ways to do it. A small, simple example with locking:
import threading, time

lock = threading.Lock()

def func1(a, b, c):
    lock.acquire()
    print(a, b, c)
    lock.release()
    time.sleep(10)

tl = []
t = threading.Thread(target=func1, args=[1, 2, 3])
t.start()
tl.append(t)
t = threading.Thread(target=func1, args=[4, 5, 6])
t.start()
tl.append(t)

# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
I took the time to create an example more suited to your needs:
2 threads executing a command and getting the output, then printing it within a lock to avoid mixups. I used the check_output method for this. I'm on Windows, and I list the C and D drives in parallel.
import threading, time, subprocess

lock = threading.Lock()

def func1(runrescompare, rescompare_dir):
    resrun_proc = subprocess.check_output(runrescompare, shell=True, cwd=rescompare_dir, stderr=subprocess.PIPE, universal_newlines=True)
    lock.acquire()
    print(resrun_proc)
    lock.release()

tl = []
t = threading.Thread(target=func1, args=["ls", "C:/"])
t.start()
tl.append(t)
t = threading.Thread(target=func1, args=["ls", "D:/"])
t.start()
tl.append(t)

# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
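Applied back to the original problem, a minimal sketch might run each regression job's script via subprocess.call in its own thread; the job directory names below are placeholders, not taken from the question:
import subprocess, threading

def run_job(job_dir, script="./runrescompare"):
    # subprocess.call blocks this thread until the shell script
    # (and the C executables it runs sequentially) has finished
    rc = subprocess.call(script, shell=True, cwd=job_dir)
    print("{} finished with return code {}".format(job_dir, rc))

# hypothetical directories, one per regression job
job_dirs = ["job1", "job2", "job3", "job4", "job5", "job6"]

threads = [threading.Thread(target=run_job, args=(d,)) for d in job_dirs]
for t in threads:
    t.start()
for t in threads:
    t.join()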

Python 3.4 script keeps calling same function

I am new to Python and created a script which, when called, will set my computer to shut down in x seconds; if called again it will add x seconds to the shutdown.
My problem is with how I check the arguments the script is called with. If I call the script with '-s', which should shut down, it first calls the shutdown function and then proceeds to enter the elif statement an infinite number of times until I exit...
if arg == '-s':
    shutdown()
elif arg == '-a':
    abort()
else:
    sys.exit("Error: '%s' isn't a valid argument." % arg)
The full script is here: http://pastebin.com/VnxANLZ5, as the problem might be elsewhere. Other input for making the script better is welcome as well.
If your script is named shutdown.py, then it's possible that executing os.system("shutdown -s -t %s" % timeToShutdown) or os.system("shutdown -a") will cause the script to execute itself, rather than invoking Windows' built-in shutdown command.
Try renaming your script to something other than shutdown.py.
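Alternatively, as a sketch assuming the standard location of the Windows built-in, you could call shutdown.exe by its full path so a local shutdown.py can never shadow it:
import subprocess

timeToShutdown = 60  # hypothetical value; the real script computes this
# full path so the built-in is used even if a shutdown.py sits on the PATH
subprocess.call([r"C:\Windows\System32\shutdown.exe", "-s", "-t", str(timeToShutdown)])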

Simple Multithreading in Python

I am new to Python and am trying to execute two tasks simultaneously. These tasks just fetch pages from a web server, and one can terminate before the other. I want to display the result only when all requests have been served. Easy in a Linux shell, but I get nowhere with Python, and all the how-tos I read look like black magic to a beginner like me. They all look overcomplicated compared with the simplicity of the bash script below.
Here is the bash script I would like to emulate in python:
# First request (in background). Result stored in file /tmp/p1
wget -q -O /tmp/p1 "http://ursule/test/test.php?p=1&w=5" &
PID_1=$!
# Second request. Result stored in file /tmp/p2
wget -q -O /tmp/p2 "http://ursule/test/test.php?p=2&w=2"
PID_2=$!
# Wait for the two processes to terminate before displaying the result
wait $PID_1 && wait $PID_2 && cat /tmp/p1 /tmp/p2
The test.php script is simply:
<?php
printf('Process %s (sleep %s) started at %s ', $_GET['p'], $_GET['w'], date("H:i:s"));
sleep($_GET['w']);
printf('finished at %s', date("H:i:s"));
?>
The bash script returns the following:
$ ./multiThread.sh
Process 1 (sleep 5) started at 15:12:59 finished at 15:12:04
Process 2 (sleep 2) started at 15:12:59 finished at 15:12:01
What I have tried so far in python 3:
#!/usr/bin/python3.2
import urllib.request, threading

def wget(address):
    url = urllib.request.urlopen(address)
    mybytes = url.read()
    mystr = mybytes.decode("latin_1")
    print(mystr)
    url.close()

thread1 = threading.Thread(None, wget, None, ("http://ursule/test/test.php?p=1&w=5",))
thread2 = threading.Thread(None, wget, None, ("http://ursule/test/test.php?p=1&w=2",))

thread1.run()
thread2.run()
This doesn't work as expected as it returns:
$ ./c.py
Process 1 (sleep 5) started at 15:12:58 finished at 15:13:03
Process 1 (sleep 2) started at 15:13:03 finished at 15:13:05
Instead of using threading, it would be better to use the multiprocessing module, as each task is independent. You may like to read more about the GIL (http://wiki.python.org/moin/GlobalInterpreterLock).
Following your advice I dived into the doc pages about multithreading and multiprocessing and, after running a couple of benchmarks, I came to the conclusion that multiprocessing was better suited for the job. It scales up much better as the number of threads/processes increases. Another problem I was confronted with was how to store the results of all these processes; using multiprocessing.Queue did the trick. Here is the solution I came up with:
This snippet sends concurrent HTTP requests to my test rig, which pauses for one second before sending the answer back (see the PHP script above).
import urllib.request
from multiprocessing import Process, Queue

# function wget, args: (queue, address)
def wget(resultQueue, address):
    url = urllib.request.urlopen(address)
    mybytes = url.read()
    url.close()
    resultQueue.put(mybytes.decode("latin_1"))

numberOfProcesses = 20

# initialisation
proc = []
results = []
resultQueue = Queue()

# creation of the processes and their result queue
for i in range(numberOfProcesses):
    # The url just passes the process number (p) to my testing web server
    proc.append(Process(target=wget, args=(resultQueue, "http://ursule/test/test.php?p="+str(i)+"&w=1",)))
    proc[i].start()

# Wait for a process to terminate and get its result from the queue
for i in range(numberOfProcesses):
    proc[i].join()
    results.append(resultQueue.get())

# display results
for result in results:
    print(result)
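For comparison, here is a shorter sketch of the same idea using multiprocessing.Pool, which collects the results itself so no explicit Queue is needed (same test URLs as above):
import urllib.request
from multiprocessing import Pool

def wget(address):
    url = urllib.request.urlopen(address)
    mybytes = url.read()
    url.close()
    return mybytes.decode("latin_1")

if __name__ == "__main__":
    addresses = ["http://ursule/test/test.php?p=%d&w=1" % i for i in range(20)]
    pool = Pool(20)
    # map blocks until every request has completed and preserves the order of the inputs
    results = pool.map(wget, addresses)
    pool.close()
    pool.join()
    for result in results:
        print(result)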
