I have a script that downloads images from URLs, but I would like to parallelise it; otherwise it will take hours. With this code:
import requests
from math import floor, log10
import urllib
import time
import multiprocessing
with open('images.csv', 'r') as f:
    images = f.readlines()

num_position = floor(log10(len(images)) + 1)

a = time.time()
for i, image in enumerate(images[1:10]):
    if (i+1) % 1000 == 0:
        print('Downloading {} image'.format(i+1))
    # a = time.time()
    with open(str(i).zfill(num_position)+'a.jpg', 'wb') as file:
        try:
            writing = file.write(requests.get(image.split(',')[2]).content)
            p = multiprocessing.Process(target=writing, args=(image,))
            p.start()
            p.join()
        except:
            print('Skipping an image!')
            pass
b = time.time()
print('multiple process -- {}'.format(b-a))
I get an error:
Process Process-9:
Traceback (most recent call last):
File "/usr/lib/python3.4/multiprocessing/process.py", line 254, in _bootstrap
self.run()
File "/usr/lib/python3.4/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
TypeError: 'int' object is not callable
Why am I getting an error, yet the task still completes and the code doesn't break? (By that I mean the piece inside try:.)
What would be the easiest way to add some kind of parallelism here?
You get the error because, AFAIK, this line
writing = file.write(requests.get(image.split(',')[2]).content)
produces an integer: write returns the number of bytes written, i.e. the length of the downloaded content. You assign that result to the variable writing, so writing becomes a number.
p = multiprocessing.Process(target=writing, args=(image,))
then uses writing as the target function, which raises the error, since you are not passing a function but an integer, which is not callable. The task still completes because your workers have nothing to do and exit immediately, and the file has already been written by the line above.
To get things working, you would have to define a function that takes your image (and perhaps the file name) as arguments and pass that function to your workers. Something like this:
def write_file(image, filename):
    # requests' .content is bytes, so open the file in binary mode
    with open(filename, mode="wb") as filestream:
        filestream.write(requests.get(image.split(',')[2]).content)
And in your application
p = multiprocessing.Process(target=write_file, args=(image, filename,))
However, that is just the writing part. If you want to do the downloads in separate tasks too, then you have to move that code into your worker function as well.
def download_write(urls):
    for url in iter(urls.get, 'STOP'):
        # download code here #
        # each item is assumed to be one CSV line; derive a file name for it somehow
        filename = url.split(',')[0] + '.jpg'  # placeholder naming scheme
        with open(filename, mode="wb") as filestream:
            filestream.write(requests.get(url.split(',')[2]).content)
And your main application:
list_urls = []  # your list of CSV lines / URLs to download
urls = multiprocessing.Queue()
for element in list_urls:
    urls.put(element)
p = multiprocessing.Process(target=download_write, args=(urls,))
urls.put("STOP")  # signals end of tasks for your worker
p.start()  # start worker
p.join()   # wait for worker to finish
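To actually download in parallel you would normally start several such workers reading from the same queue, with one STOP sentinel per worker. Here is a minimal sketch of that idea (the file-naming scheme and the CSV column index are assumptions carried over from the question, so adjust them to your data):
import multiprocessing
import requests

def download_write(urls):
    # each queue item is (index, csv_line); the loop ends when the sentinel arrives
    for index, line in iter(urls.get, 'STOP'):
        filename = str(index).zfill(6) + 'a.jpg'  # assumed naming scheme
        with open(filename, 'wb') as f:
            f.write(requests.get(line.split(',')[2]).content)

if __name__ == '__main__':
    with open('images.csv') as f:
        images = f.readlines()

    urls = multiprocessing.Queue()
    for i, line in enumerate(images):
        urls.put((i, line))

    num_workers = 8
    for _ in range(num_workers):
        urls.put('STOP')  # one sentinel per worker

    workers = [multiprocessing.Process(target=download_write, args=(urls,))
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()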
I am really new to multiprocessing!
What I was trying to do:
Run a particular instance method, i.e. wait_n_secs() (which is slow!), as a separate process so that other work can run alongside it.
Once the instance method is done, retrieve its output and use it via the shared array provided by the multiprocessing module.
Here is the code I was trying to run.
import cv2
import time
from multiprocessing import Array
import concurrent.futures
import copyreg as copy_reg
import types

def _pickle_method(m):
    if m.im_self is None:
        return getattr, (m.im_class, m.im_func.func_name)
    else:
        return getattr, (m.im_self, m.im_func.func_name)

copy_reg.pickle(types.MethodType, _pickle_method)

class Testing():
    def __init__(self):
        self.executor = concurrent.futures.ProcessPoolExecutor()
        self.futures = None
        self.shared_array = Array('i', 4)

    def wait_n_secs(self, n):
        print(f"I wait for {n} sec")
        cv2.waitKey(n*1000)
        wait_array = (n, n, n, n)
        return wait_array

def function(waittime):
    bbox = Testing().wait_n_secs(waittime)
    return bbox

if __name__ == "__main__":
    testing = Testing()
    waittime = 5

    # Not working!
    testing.futures = testing.executor.submit(testing.wait_n_secs, waittime)
    # Working!
    # testing.futures = testing.executor.submit(function, waittime)

    stime = time.time()
    while 1:
        if not testing.futures.running():
            print("Checking for results")
            testing.shared_array = testing.futures.result()
            print("Shared_array received = ", testing.shared_array)
            break
        time_elapsed = time.time() - stime
        if ((time_elapsed % 1) < 0.001):
            print(f"Time elapsed since some time = {time_elapsed:.2f} sec")
Problems I faced:
1) Error on Python 3.6:
Traceback (most recent call last):
File "C:\Users\haide\AppData\Local\Programs\Python\Python36\lib\multiprocessing\queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\Users\haide\AppData\Local\Programs\Python\Python36\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\Users\haide\AppData\Local\Programs\Python\Python36\lib\multiprocessing\queues.py", line 58, in __getstate__
context.assert_spawning(self)
File "C:\Users\haide\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 356, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through inheritance
2) Error on Python 3.8:
testing.shared_array = testing.futures.result()
File "C:\Users\haide\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\_base.py", line 437, in result
return self.__get_result()
File "C:\Users\haide\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "C:\Users\haide\AppData\Local\Programs\Python\Python38\lib\multiprocessing\queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\Users\haide\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: cannot pickle 'weakref' object
As others, like Amby and falviussn, have previously asked.
Problem:
We get a pickling error specifically for instance methods in multiprocessing because they are unpicklable.
Solution I tried (partially):
The solution mentioned most often is to use copy_reg to pickle the instance method.
I don't fully understand copy_reg. I tried adding the lines of code provided by Nabeel to the top of mp.py, but I haven't got it to work.
(Important consideration): I am on Python 3 using copyreg, while the existing solutions seem to target Python 2, since they import copy_reg (Python 2).
(I haven't tried):
Using dill, because those answers either weren't about multiprocessing, or, if they were, they weren't using the concurrent.futures module.
Workaround:
Passing a function that calls the instance method (instead of the instance method directly) to the submit() method.
testing.futures = testing.executor.submit(function, waittime)
This does work, but it does not seem like an elegant solution.
What I want:
Please guide me on how to correctly use copyreg, as I clearly don't understand its workings.
Or
If it's a Python 3 issue, suggest another solution where I can pass instance methods to concurrent.futures.ProcessPoolExecutor.submit() for multiprocessing. :)
Update #1:
@Aaron Can you share example code for your solution, "passing a module level function that takes the instance as an argument"?
or
Correct my mistake here:
This was my attempt. :(
Passing the instance to the module level function along with the arguments
inp_args = [waittime]
testing.futures = testing.executor.submit(wrapper_func,testing,inp_args)
And this was the module wrapper function I created,
def wrapper_func(ins, *args):
    ins.wait_n_secs(args)
This got me back to...
TypeError: cannot pickle 'weakref' object
We get a pickling error specifically for instance methods in multiprocessing because they are unpicklable.
This is not true: instance methods are perfectly picklable in Python 3 (unless they are defined locally, e.g. inside factory functions). You get the error because some of the other instance attributes (specific to your code) are not picklable.
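For example, this minimal check (my own sketch, not from the question) shows that a bound method of a plain class pickles fine on Python 3:
import pickle

class Foo:
    def bar(self):
        return 42

f = Foo()
m = pickle.loads(pickle.dumps(f.bar))  # the bound method round-trips without copyreg
print(m())  # 42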
Please guide me on how to correctly use copyreg as I clearly don't understand its workings.
It's not required here.
If it's a Python 3 issue, suggest another solution where I can pass instance methods to concurrent.futures.ProcessPoolExecutor.submit() for multiprocessing. :)
It's not really a Python issue; it has to do with what data you're sending to be pickled. Specifically, all three attributes (once they are populated), self.executor, self.futures and self.shared_array, cannot be put on a multiprocessing.Queue (which ProcessPoolExecutor uses internally) and pickled.
So the problem happens because you are passing an instance method as the target function, which means the whole instance, and therefore all instance attributes, are implicitly pickled and sent to the other process. Since some of these attributes are not picklable, this error is raised. This is also the reason your workaround works: there the target function is not an instance method, so the instance attributes are never pickled. There are a couple of things you can do; the best choice depends on whether there are other attributes you need to send as well.
Method #1
Judging from the sample code, your wait_n_secs function is not really using any instance attributes. Therefore, you can convert it into a staticmethod and pass that as the target function directly instead:
import time
from multiprocessing import Array
import concurrent.futures

class Testing():
    def __init__(self):
        self.executor = concurrent.futures.ProcessPoolExecutor()
        self.futures = None
        self.shared_array = Array('i', 4)

    @staticmethod
    def wait_n_secs(n):
        print(f"I wait for {n} sec")
        # Have your own implementation here
        time.sleep(n)
        wait_array = (n, n, n, n)
        return wait_array

if __name__ == "__main__":
    testing = Testing()
    waittime = 5
    testing.futures = testing.executor.submit(type(testing).wait_n_secs, waittime)  # Notice the type(testing)
    stime = time.time()
    while 1:
        if not testing.futures.running():
            print("Checking for results")
            testing.shared_array = testing.futures.result()
            print("Shared_array received = ", testing.shared_array)
            break
        time_elapsed = time.time() - stime
        if ((time_elapsed % 1) < 0.001):
            print(f"Time elapsed since some time = {time_elapsed:.2f} sec")
Method #2
If your instance has attributes that the target function does use (so it cannot be converted to a staticmethod), then you can explicitly exclude the unpicklable attributes from pickling with the __getstate__ method. This means the instance recreated inside the other process will not have those attributes either (since we did not send them), so keep that in mind:
import time
from multiprocessing import Array
import concurrent.futures

class Testing():
    def __init__(self):
        self.executor = concurrent.futures.ProcessPoolExecutor()
        self.futures = None
        self.shared_array = Array('i', 4)

    def wait_n_secs(self, n):
        print(f"I wait for {n} sec")
        # Have your own implementation here
        time.sleep(n)
        wait_array = (n, n, n, n)
        return wait_array

    def __getstate__(self):
        d = self.__dict__.copy()
        # Delete all unpicklable attributes.
        del d['executor']
        del d['futures']
        del d['shared_array']
        return d

if __name__ == "__main__":
    testing = Testing()
    waittime = 5
    testing.futures = testing.executor.submit(testing.wait_n_secs, waittime)
    stime = time.time()
    while 1:
        if not testing.futures.running():
            print("Checking for results")
            testing.shared_array = testing.futures.result()
            print("Shared_array received = ", testing.shared_array)
            break
        time_elapsed = time.time() - stime
        if ((time_elapsed % 1) < 0.001):
            print(f"Time elapsed since some time = {time_elapsed:.2f} sec")
I'm writing a program that runs one script which takes pictures and writes a number into a txt file; after it's done, it should tell the other script that it can read that txt file. I can't seem to import this "Perrasytas" variable into the other script. It just says it's not defined.
Script1
if line == ('echo:SD card ok'):
    Perrasytas = 0
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    GPIO.output(4, GPIO.LOW)
    with open(cnt, 'r') as f:
        line = f.read()
        num = int((line.split())[0]) + 1
    with open(cnt, 'w') as f:
        f.write(str(num))
    Perrasytas = 1
Script2
import Script1

if Script1.Perrasytas == 1:
    cnt2 = '/home/pi/Prints_photos/counter.txt'
    with open(cnt2, 'r') as f:
        num2 = f.read()
If I put "Perrasytas=0" on the first line of the script it does import, but it never changes its state.
Is it even possible to do this kind of communication between scripts?
Importing a module only runs its code once -- the first time you import it. Since the value of line is probably not equal to "echo:..." when the import happens, the if block is never entered, and the Perrasytas variable is never set.
You could put all of that code into a function, and return the value of Perrasytas from that function. That way, you can execute that code whenever you call the function.
def get_perrasytas(line):
    Perrasytas = 0  # default, so the function always has a value to return
    if line == ('echo:SD card ok'):
        cap = cv2.VideoCapture(0)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
        GPIO.output(4, GPIO.LOW)
        with open(cnt, 'r') as f:
            line = f.read()
            num = int((line.split())[0]) + 1
        with open(cnt, 'w') as f:
            f.write(str(num))
        Perrasytas = 1
    return Perrasytas
Then, you could call it like so:
import Script1

if Script1.get_perrasytas(line) == 1:
    cnt2 = '/home/pi/Prints_photos/counter.txt'
    with open(cnt2, 'r') as f:
        num2 = f.read()
Note that you will need to have line before you call the function, or include a way to get line inside the function.
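As an illustration of the second option, here is a rough sketch that obtains line inside the function. It assumes Script1 reads the printer status over a serial port (a guess based on the echo:SD card ok message), so treat the import, port name and baud rate as placeholders:
# Script1 (sketch)
import serial  # pyserial, assumed to be how the status line arrives

ser = serial.Serial('/dev/ttyUSB0', 115200)  # hypothetical port and baud rate

def get_perrasytas():
    # read the status line here instead of expecting the caller to pass it in
    line = ser.readline().decode(errors='ignore').strip()
    if line == 'echo:SD card ok':
        # ... same camera/counter code as in the function above ...
        return 1
    return 0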
I am making an app which returns one random line from a .txt file. I made a class to implement this behaviour. The idea was to use one method to open the file (which would stay open) and another method to close it after the app exits. I do not have much experience working with files, hence the following behaviour is strange to me:
In __init__ I call self.open_file() just to open the file, and that works fine for computing self.len. I then assumed I would not need to call self.open_file() again, but when I call file.get_term() (which returns a random line) it raises IndexError (as if the file were empty). However, if I call file.open_file() again first, everything works as expected.
In addition to this, the close call raises AttributeError - object has no attribute 'close', so I assumed the file somehow closes automatically, even though I did not use with open.
import random
import os

class Pictionary_file:
    def __init__(self, file):
        self.file = file
        self.open_file()
        self.len = self.get_number_of_lines()

    def open_file(self):
        self.opened = open(self.file, "r", encoding="utf8")

    def get_number_of_lines(self):
        i = -1
        for i, line in enumerate(self.opened):
            pass
        return i + 1

    def get_term_index(self):
        term_line = random.randint(0, self.len-1)
        return term_line

    def get_term(self):
        term_line = self.get_term_index()
        term = self.opened.read().splitlines()[term_line]

    def close_file(self):
        self.opened.close()

if __name__ == "__main__":
    print(os.getcwd())
    file = Pictionary_file("pictionary.txt")
    file.open_file()  # WITHOUT THIS -> IndexError
    file.get_term()
    file.close()  # AttributeError
Where is my mistake and how can I correct it?
Here in __init__:
self.open_file()
self.len = self.get_number_of_lines()
self.get_number_of_lines() actually consumes the whole file because it iterates over it:
def get_number_of_lines(self):
    i = -1
    for i, line in enumerate(self.opened):
        # read all lines of the file
        pass
    # at this point, `self.opened` has been read to the end
    return i + 1
So when get_term calls self.opened.read(), it gets an empty string, so self.opened.read().splitlines() is an empty list.
file.close() is an AttributeError, because the Pictionary_file class doesn't have the close method. It does have close_file, though.
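One straightforward fix (a sketch of one option, not the only one): either call self.opened.seek(0) before reading again, or simply read all lines once in __init__ and pick from the list, so no file handle needs to stay open at all:
import random

class Pictionary_file:
    def __init__(self, file):
        with open(file, "r", encoding="utf8") as f:
            self.lines = f.read().splitlines()  # read once, keep in memory

    def get_term(self):
        return random.choice(self.lines)  # pick and return a random line

if __name__ == "__main__":
    file = Pictionary_file("pictionary.txt")
    print(file.get_term())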
I'm having some trouble with ProcessPoolExecutor.
The following code is trying to find the shortest path in a WikiRace game: it gets 2 titles and navigates from one to the other.
Here is my code:
class AsyncSearch:
    def __init__(self, start, end):
        self.start = start
        self.end = end
        # self.manager = multiprocessing.Manager()
        self.q = multiprocessing.Queue()
        # self.q = self.manager.Queue()

    def _add_starting_node_page_to_queue(self):
        start_page = WikiGateway().page(self.start)
        return self._check_page(start_page)

    def _is_direct_path_to_end(self, page):
        return (page.title == self.end) or (page.links.get(self.end) is not None)

    def _add_tasks_to_queue(self, pages):
        for page in pages:
            self.q.put(page)

    def _check_page(self, page):
        global PATH_WAS_FOUND_FLAG
        logger.info('Checking page "{}"'.format(page.title))
        if self._is_direct_path_to_end(page):
            logger.info('##########\n\tFound a path!!!\n##########')
            PATH_WAS_FOUND_FLAG = True
            return True
        else:
            links = page.links
            logger.info("Couldn't find a direct path form \"{}\", "
                        "adding {} pages to the queue.".format(page.title, len(links)))
            self._add_tasks_to_queue(links.values())
            return "Couldn't find a direct path form " + page.title

    def start_search(self):
        global PATH_WAS_FOUND_FLAG
        threads = []
        logger.debug(f'Running with concurrent processes!')
        if self._add_starting_node_page_to_queue() is True:
            return True
        with concurrent.futures.ProcessPoolExecutor(max_workers=AsyncConsts.PROCESSES) as executor:
            threads.append(executor.submit(self._check_page, self.q.get()))
I'm getting the following exception:
Traceback (most recent call last):
File "c:\users\tomer smadja\appdata\local\programs\python\python36-32\lib\multiprocessing\queues.py", line 241, in _feed
obj = _ForkingPickler.dumps(obj)
File "c:\users\tomer smadja\appdata\local\programs\python\python36-32\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "c:\users\tomer smadja\appdata\local\programs\python\python36-32\lib\multiprocessing\queues.py", line 58, in __getstate__
context.assert_spawning(self)
File "c:\users\tomer smadja\appdata\local\programs\python\python36-32\lib\multiprocessing\context.py", line 356, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through inheritance
It's weird, since I'm using multiprocessing.Queue(), which should be shareable between processes, as the exception itself mentions.
I found this similar question but couldn't find the answer there.
I tried to use self.q = multiprocessing.Manager().Queue() instead of self.q = multiprocessing.Queue(), I'm not sure if this takes me anywhere but the exception I'm getting is different:
Traceback (most recent call last):
File "c:\users\tomer smadja\appdata\local\programs\python\python36-32\lib\multiprocessing\queues.py", line 241, in _feed
obj = _ForkingPickler.dumps(obj)
File "c:\users\tomer smadja\appdata\local\programs\python\python36-32\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "c:\users\tomer smadja\appdata\local\programs\python\python36-32\lib\multiprocessing\process.py", line 282, in __reduce__
'Pickling an AuthenticationString object is '
TypeError: Pickling an AuthenticationString object is disallowed for security reasons
Also, when I try to use multiprocessing.Process() instead of ProcessPoolExecutor, I'm unable to stop the processes once I do find a path. I set up a global variable PATH_WAS_FOUND_FLAG to stop further process creation, but still with no success. What am I missing here?
ProcessPoolExecutor.submit(...) will not pickle multiprocessing.Queue instances, as well as other shared multiprocessing.* class instances. You can do two things: one is to use a SyncManager, or you can initialize the worker with the multiprocessing.Queue instance at ProcessPoolExecutor construction time. Both are shown below.
Following is your original variation with a couple of fixes applied (see note at end)... with this variation, multiprocessing.Queue operations are slightly faster than in the SyncManager variation below...
global_page_queue = multiprocessing.Queue()

def set_global_queue(q):
    global global_page_queue
    global_page_queue = q

class AsyncSearch:
    def __init__(self, start, end):
        self.start = start
        self.end = end
        # self.q = multiprocessing.Queue()
        ...

    def _add_tasks_to_queue(self, pages):
        for page in pages:
            # self.q.put(page)
            global_page_queue.put(page)

    @staticmethod
    def _check_page(self, page):
        ...

    def start_search(self):
        ...
        print(f'Running with concurrent processes!')
        with concurrent.futures.ProcessPoolExecutor(
                max_workers=5,
                initializer=set_global_queue,
                initargs=(global_page_queue,)) as executor:
            f = executor.submit(AsyncSearch._check_page, self, global_page_queue.get())
            r = f.result()
            print(f"result={r}")
Following is the SyncManager variation, where queue operations are slightly slower than in the multiprocessing.Queue variation above...
import multiprocessing
import concurrent.futures

class AsyncSearch:
    def __init__(self, start, end):
        self.start = start
        self.end = end
        self.q = multiprocessing.Manager().Queue()
        ...

    @staticmethod
    def _check_page(self, page):
        ...

    def start_search(self):
        global PATH_WAS_FOUND_FLAG
        worker_process_futures = []
        print(f'Running with concurrent processes!')
        with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
            worker_process_futures.append(executor.submit(AsyncSearch._check_page, self, self.q.get()))
            r = worker_process_futures[0].result()
            print(f"result={r}")
Note, for some shared objects, SyncManager can be anywhere from slightly to noticeably slower compared to the multiprocessing.* equivalents. For example, a multiprocessing.Value lives in shared memory, whereas a SyncManager.Value lives in the sync manager's process, so interacting with it carries extra overhead.
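A rough single-process timing sketch (my own illustration, not from the original answer) makes that overhead visible: updating a multiprocessing.Value is a direct shared-memory write, while updating a Manager().Value is a round trip to the manager process:
import time
import multiprocessing

if __name__ == "__main__":
    n = 100_000
    shm_val = multiprocessing.Value('i', 0)            # lives in shared memory
    mgr_val = multiprocessing.Manager().Value('i', 0)  # lives in the manager process

    t0 = time.time()
    for _ in range(n):
        shm_val.value += 1
    print("multiprocessing.Value:", time.time() - t0)

    t0 = time.time()
    for _ in range(n):
        mgr_val.value += 1
    print("Manager().Value:      ", time.time() - t0)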
An aside, unrelated to your question: your original code was calling _check_page with incorrect parameters, passing the dequeued item as self and leaving the page parameter None. I resolved this by changing _check_page to a static method and passing self explicitly.
I've been programming in Python for a while, but this is my first time using multiprocessing.
I made a program that scrapes a local weather station for the ambient temperature using beautifulsoup4 every minute. The program also reads temperatures from several sensors and uploads everything to a MySQL database. This all works fine, but occasionally (about once a day) fetching the page from the local weather station fails, which leaves the BeautifulSoup step stuck in an endless loop and effectively stops all functionality of the program. To combat this I decided to try my hand at multiprocessing.
I've coded a check that kills the extra process if it is still running after 10 seconds. Here is where things go wrong: normally the BeautifulSoup process finishes and closes after 2-4 seconds, but in the case where it gets stuck in its loop, not only is the process terminated, the entire program stops doing anything altogether.
I've copied the relevant snippets of code. Please note that some variables are declared outside of the snippets; the code works except for the problem described above. By the way, I am very much aware that there are plenty of ways to make my code more efficient. Refining the code is something I'll do once it's running stably :) Thanks in advance for your help!
Imports:
...
from multiprocessing import Process, Queue
import multiprocessing
from bs4 import BeautifulSoup #sudo apt-get install python3-bs4
BeautifulSoup section:
def get_ZWS_temp_out(temp):
    try:
        if 1 == 1:
            response = requests.get(url)
            responsestr = str(response)
            if "200" in responsestr:
                soup = BeautifulSoup(response.content, 'html.parser')
                tb = soup.findAll("div", {"class": "elementor-element elementor-element-8245410 elementor-widget__width-inherit elementor-widget elementor-widget-wp-widget-live_weather_station_widget_outdoor"})
                tb2 = tb[0].findAll("div", {"class": "lws-widget-big-value"})
                string = str(tb2[0])[-10:][:4]
                stringt = string[:1]
                if stringt.isdigit() == True:
                    # print("number ok")
                    string = string
                elif stringt == '-':
                    # print("minus sign")
                    string = string
                elif stringt == '>':
                    # print("temp < 10")
                    string = string[-3:]
                temp = float(string)
    except Exception as error:
        print(error)
    Q.put(temp)
    return temp
Main program:
Q = Queue()
while 1 == 1:
    strings = time.strftime("%Y,%m,%d,%H,%M,%S")
    t = strings.split(',')
    time_numbers = [int(x) for x in t]
    if last_min != time_numbers[4]:
        targettemp = get_temp_target(targettemp)
        p = Process(target=get_ZWS_temp_out, name="get_ZWS_temp_out", args=(ZWS_temp_out,))
        p.start()
        i = 0
        join = True
        while i < 10:
            i = i + 1
            time.sleep(1)
            if p.is_alive() and i == 10:  # checks to quit early otherwise another iteration
                print(datetime.datetime.fromtimestamp(time.time()).strftime("%Y-%m-%d %H:%M:%S"), ": ZWS getter is running for too long... let's kill it...")
                # Terminate ZWS query
                p.terminate()
                i = 10
                join = False
        if join == True:
            p.join()
Thanks in advance for your time :)
I have to manually stop the program which gives the following output:
pi@Jacuzzi-pi:~ $ python3 /home/pi/Jacuzzi/thermometer.py
temperature sensors observer and saving program, updates every 3,5 seconds
2019-10-28 03:50:11 : ZWS getter is running for too long... let's kill it...
^CTraceback (most recent call last):
File "/home/pi/Jacuzzi/thermometer.py", line 283, in <module>
ZWS_temp_out = Q.get()
File "/usr/lib/python3.5/multiprocessing/queues.py", line 94, in get
res = self._recv_bytes()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
I believe your program is waiting infinitely to pull items from the queue you've created. I can't see the line in the code you've posted, but it appears in the error message:
ZWS_temp_out = Q.get()
Since the get_ZWS_temp_out process is the one that adds items to the queue, you need to make sure that the process is running before you call Q.get(). I suspect this line of code gets executed between the act of terminating the timed-out process and restarting a new process, where instead it should be called after the new process is created.
Based on what Rob found, this is the updated (working) code for the main program; the other parts are unchanged:
Q = Queue()
while 1 == 1:
    strings = time.strftime("%Y,%m,%d,%H,%M,%S")
    t = strings.split(',')
    time_numbers = [int(x) for x in t]
    if last_min != time_numbers[4]:
        targettemp = get_temp_target(targettemp)
        p = Process(target=get_ZWS_temp_out, name="get_ZWS_temp_out", args=(ZWS_temp_out,))
        p.start()
        i = 0
        completion = True
        while i < 10:
            i = i + 1
            time.sleep(1)
            if p.is_alive() and i == 10:  # checks to quit early otherwise another iteration
                print(datetime.datetime.fromtimestamp(time.time()).strftime("%Y-%m-%d %H:%M:%S"), ": ZWS getter is running for too long... let's kill it...")
                # Terminate ZWS query
                p.terminate()
                i = 10
                completion = False
        if completion == True:
            p.join()
            ZWS_temp_out = Q.get()
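An alternative worth mentioning (my own sketch, not part of Rob's answer): call Q.get() with a timeout so the main loop can never block forever, even if the worker was terminated before it could put a result on the queue:
from queue import Empty  # multiprocessing.Queue raises queue.Empty on timeout

try:
    ZWS_temp_out = Q.get(timeout=12)  # a little longer than the 10 second watchdog
except Empty:
    print("No outside temperature received this cycle; keeping the previous value")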