I have a script that receives temperature data using requests. Since I had to make many requests (around 13,000), I decided to explore multi-threading, which I am new to.
The program works by grabbing longitude/latitude data from a CSV file and then making a request to retrieve the temperature data.
The problem I am facing is that the script does not finish after the last temperature value is retrieved.
Here is the code; I have shortened it so it is easy to see what I am doing:
num_threads = 16
q = Queue(maxsize=0)

def get_temp(q):
    while not q.empty():
        work = q.get()
        if work is None:
            break
        ## rest of my code here
        q.task_done()
At main:
def main():
    for o in range(num_threads):
        logging.debug('Starting Thread %s', o)
        worker = threading.Thread(target=get_temp, args=(q,))
        worker.setDaemon(True)
        worker.start()

    logging.info("Main Thread Waiting")
    q.join()
    logging.info("Job complete!")
I do not see any errors on the console, and the temperature is successfully being written to another file. I have tried running a test CSV file with only a few longitude/latitude references, and the script seems to finish executing fine.
So is there a way of shedding light on what might be happening in the background? I am using Python 3.7.3 with PyCharm 2019.1 on Linux Mint 19.1.
q.join() waits until every item that was put on the queue has been marked done with task_done() before the main thread continues to the next line; it waits on the queue's items, not on the threads themselves, so if any item is never marked done, q.join() never returns.
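For completeness, a minimal sketch of that pattern, assuming every item put on the queue gets exactly one matching task_done() call (the actual temperature lookup is elided):

import queue, threading

num_threads = 16
q = queue.Queue()

def get_temp(q):
    while True:
        work = q.get()
        if work is None:          # sentinel: no more work for this worker
            q.task_done()
            break
        # ... process `work` (elided) ...
        q.task_done()             # exactly one task_done() per get()

for _ in range(num_threads):
    threading.Thread(target=get_temp, args=(q,), daemon=True).start()

# ... q.put(row) for every longitude/latitude row ...

for _ in range(num_threads):      # one sentinel per worker so every loop exits
    q.put(None)

q.join()                          # returns once every item (and sentinel) is marked done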
It's my first time asking a question on here so bear with me.
I'm trying to make a Python 3 program that runs an executable for x amount of time and creates a log of all its output in a text file. For some reason, the code I have so far works only with some executables. I'm new to Python, and especially to subprocess, so any help is appreciated.
import time
import subprocess

def CreateLog(executable, timeout=5):
    time_start = time.time()
    process = subprocess.Popen(executable, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, text=True)
    f = open("log.txt", "w")
    while process.poll() is None:
        output = process.stdout.readline()
        if output:
            f.write(output)
        if time.time() > time_start + timeout:
            process.kill()
            break
I was recently experimenting with crypto mining and came across nanominer. I tried using this Python code on nanominer and the log file was empty. I am aware that nanominer already logs its output, but the point is: why does the Python code fail?
You are interacting through .poll() (R U dead yet?) and .readline().
It's not clear you want to do that.
There seem to be two cases for your long-lived child:
it runs "too long" silently
it runs forever, regularly producing output text at e.g. one-second intervals
The 2nd case is the easy one.
Just use for line in process.stdout:, consume the line,
peek at the clock, and maybe send a .kill() just as you're already doing.
No need for .poll(), as the child exiting will produce EOF on that pipe.
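For that second case, a minimal sketch, assuming the child keeps producing lines; the function and file names mirror the question but are otherwise illustrative:

import subprocess, time

def create_log(executable, timeout=5):
    deadline = time.time() + timeout
    process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                               stderr=subprocess.DEVNULL, text=True)
    with open("log.txt", "w") as f:
        for line in process.stdout:      # loop ends on EOF when the child exits
            f.write(line)
            if time.time() > deadline:   # "too long": stop the child ourselves
                process.kill()
                break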
For the 1st case, you will want to set an alarm.
See https://docs.python.org/3/library/signal.html#example
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)
After "too long" (here, five seconds), your handler will run.
It can do anything you desire.
You'll want it to have access to the process handle,
which will let you send a .kill().
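A sketch of that arrangement, assuming a Unix platform (SIGALRM does not exist on Windows); the command is a placeholder:

import signal, subprocess

process = subprocess.Popen(["./some_executable"],     # placeholder command
                           stdout=subprocess.PIPE, text=True)

def handler(signum, frame):
    process.kill()                # the handler can reach the process handle

signal.signal(signal.SIGALRM, handler)
signal.alarm(5)                   # "too long" = five seconds

for line in process.stdout:       # killing the child produces EOF here
    print(line, end="")
signal.alarm(0)                   # cancel the alarm if the child finished early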
Working with Python 3.6, I'm looking to create a function that continuously scrapes dynamic/changing data from a webpage while the rest of the script executes, and to be able to reference the data returned by that continuous function.
I know this is likely a threading task, but I'm not very knowledgeable about it yet. The pseudo-code I have in mind looks something like this:
def continuous_scraper():
    # Pull data from webpage
    scraped_table = pd.read_html(url)
    return scraped_table

# start the continuous scraper function here, to run either indefinitely, or preferably stop after a predefined amount of time
scraped_table = thread(continuous_scraper)

# the rest of the script is run here, making use of the updating "scraped_table"
while True:
    print(scraped_table["Col_1"].iloc[0])
Here is a fairly simple example using some stock market page that seems to update every couple of seconds.
import threading, time
import pandas as pd

# A lock is used to ensure only one thread reads or writes the variable at any one time
scraped_table_lock = threading.Lock()

# Initially set to None so we know when its value has changed
scraped_table = None

# This bad-boy will be called only once in a separate thread
def continuous_scraper():
    # Tell Python this is a global variable, so it rebinds scraped_table
    # instead of creating a local variable that is also named scraped_table
    global scraped_table
    url = r"https://tradingeconomics.com/australia/stock-market"
    while True:
        # Pull data from webpage
        result = pd.read_html(url, match="Dow Jones")[0]
        # Acquire the lock to ensure thread-safety, then assign the new result
        # This is done after read_html returns so it doesn't hold the lock for so long
        with scraped_table_lock:
            scraped_table = result
        # You don't wanna flog the server, so wait 2 seconds after each
        # response before sending another request
        time.sleep(2)

# Make the thread daemonic, so the thread doesn't continue to run once the
# main script and any other non-daemonic threads have ended
scraper_thread = threading.Thread(target=continuous_scraper, daemon=True)

# start the continuous scraper function here, to run either indefinitely, or
# preferably stop after a predefined amount of time
scraper_thread.start()

# the rest of the script is run here, making use of the updating "scraped_table"
for _ in range(100):
    print("Time:", time.time())
    # Acquire the lock to ensure thread-safety
    with scraped_table_lock:
        # Check if it has been changed from the default value of None
        if scraped_table is not None:
            print("  ", scraped_table)
        else:
            print("scraped_table is None")
    # You probably don't wanna flog your stdout, either, dawg!
    time.sleep(0.5)
Be sure to read about multithreaded programming and thread safety. It's easy to make mistakes. If there is a bug, it often only manifests in rare and seemingly random occasions, making it difficult to debug.
I recommend looking into the multiprocessing library and its Pool class.
The docs have multiple examples of how to use it.
The question itself is too general for a simple answer.
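For reference, a minimal Pool sketch in the style of the examples in the docs; square() is just a stand-in for whatever work each process should do:

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # four worker processes
        print(pool.map(square, range(10)))     # [0, 1, 4, 9, ...]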
I am new to Python and I am trying to use a thread pool to run this script in parallel. However, it does not run in parallel, just in sequence. The script basically iterates through an Excel file to pick up the IP addresses of devices and then sends an XML request based on an input file. I have spent multiple hours on this; what am I not getting?
def do_upload(xml_file):
    for ip in codecIPs:
        try:
            request = open(xml_file, "r").read()
            h = httplib2.Http(".cache")
            h.add_credentials(username, password)
            url = "http://{}/putxml".format(ip)
            print('-'*40)
            print('Uploading Wall Paper to {}'.format(ip))
            resp, content = h.request(url, "POST", body=request,
                                      headers={'content-type': 'text/xml; charset=UTF-8'})
            print(content)
        except (socket.timeout, socket.error, httpexception) as e:
            print('failed to connect to {}'.format(codecIPs), e)

pool = ThreadPool(3)
results = pool.map(do_upload('brandinglogo.xml'), codecIPs)
pool.close()
pool.join()
Python has no parallelism in its threading model due to the so-called Global Interpreter Lock (GIL): only one thread executes Python bytecode at a time, so all threads effectively share one core. Threading still enables concurrent execution, though. For I/O-bound tasks, like downloading files from the web or database access, you will get some speedup by using threads to overlap those waits. But for CPU-bound tasks you need processes, so use the multiprocessing library.
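To illustrate the CPU-bound half of that advice, a hedged sketch that hands a purely computational function to a process pool; the function is illustrative only:

from multiprocessing import Pool

def cpu_heavy(n):
    # pure Python arithmetic: a thread pool would serialize this on the GIL
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(cpu_heavy, [2_000_000] * 4))   # runs on four cores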
I'm writing an optimization routine to brute force search a solution space for optimal hyper parameters; and apply_async does not appear to be doing anything at all. Ubuntu Server 16.04, Python 3.5, PyCharm CE 2018. Also, I'm doing this on an Azure virtual machine. My code looks like this:
class optimizer(object):
    def __init__(self,n_proc,frame):
        # Set Class Variables

    def prep(self):
        # Get Data and prepare for optimization

    def ret_func(self,retval):
        self.results = self.results.append(retval)
        print('Something')

    def search(self):
        p = multiprocessing.Pool(processes=self.n_proc)
        for x, y in zip(repeat(self.data),self.grid):
            job = p.apply_async(self.bot.backtest,(x,y),callback=self.ret_func)
        p.close()
        p.join()
        self.results.to_csv('OptimizationResults.csv')
        print('***************************')
        print('Exiting, Optimization Complete')

if __name__ == '__main__':
    multiprocessing.freeze_support()
    opt = optimizer(n_proc=4,frame='ytd')
    opt.prep()
    print('Data Prepped, beginning search')
    opt.search()
I was running this exact setup on a Windows Server VM, and I switched over due to issues with multiprocessing not utilizing all cores. Today I configured my machine and was able to run the optimization exactly once. After that, it mysteriously stopped working with no change from me. Also, I should mention that it only produces output about one in every ten times I run it. Very odd behavior. I expect to see:
Something
Something
Something
.....
Which would typically be the best "to-date" results of the optimization (omitted for clarity). Instead I get:
Data Prepped, beginning search
***************************
Exiting, Optimization Complete
If I call get() on the async object, the results are printed as expected, but only one core is utilized because the results are being gathered in the for loop. Why isn't apply_async doing anything at all? I should mention that I use the "stop" button in PyCharm to terminate the process; I'm not sure if that has something to do with it.
Let me know if you need more details about prep(), or bot.backtest()
I found the error! Basically I was converting a dict() to a list() and passing the values from the list into my function! The list parameter order was different every time I ran the function, and one of the parameters needed to be an integer, not a float.
For some reason, on Windows, the order of the dict was preserved when converting to a list; that was not the case on Ubuntu! Very interesting.
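One way to make the call robust to that, sketched under the assumption that the target function accepts keyword arguments (the names here are made up): pass the dict through apply_async's kwds parameter instead of flattening it into a positional list.

import multiprocessing

def backtest(data, window, threshold):
    # stand-in for the real bot.backtest()
    return len(data) * window + threshold

if __name__ == "__main__":
    params = {"window": 10, "threshold": 3}    # argument order can no longer matter
    with multiprocessing.Pool(processes=4) as pool:
        job = pool.apply_async(backtest, args=([1, 2, 3],), kwds=params)
        print(job.get())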
I have a simple watchdog in python 3 that reboots my server if something goes wrong:
import time, os
from multiprocessing import Pool

def watchdog(x):
    time.sleep(x)
    os.system('reboot')
    return

def main():
    while True:
        p = Pool(processes=1)
        p.apply_async(watchdog, (60, )) # start watchdog with 60s interval
        # here some code that has a little chance to block permanently...
        # reboot is ok because of many other files running independently
        # that will get problems too if this one blocks too long and
        # this will reset all together and autostart everything back
        # block is happening 1-2 times a month, mostly within a http-request
        p.terminate()
        p.join()
    return

if __name__ == '__main__':
    main()
p = Pool(processes=1) is declared every time the while loop starts.
Now here is the question: is there a smarter way?
If I call p.terminate() to prevent the process from rebooting, the Pool becomes closed for any other work. Or is there actually nothing wrong with declaring a new Pool every time, because of garbage collection?
Use a process. Processes support all of the features you are using, so you don't need to make a pool with size one. While processes do have a warning about using the terminate() method (since it can corrupt pipes, sockets, and locking primitives), you are not using any of those items and don't need to care. (In any event, Pool.terminate() probably has the same issues with pipes etc. even though it lacks a similar warning.)
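A sketch of the watchdog rewritten around a bare Process, keeping the same 60-second interval; terminate() and a fresh Process per iteration replace the Pool bookkeeping:

import time, os
from multiprocessing import Process

def watchdog(x):
    time.sleep(x)
    os.system('reboot')            # would reboot the machine, as in the question

def main():
    while True:
        p = Process(target=watchdog, args=(60,))   # start the 60 s watchdog
        p.start()
        time.sleep(1)              # placeholder for the code that might block
        p.terminate()              # cancel the reboot if we got this far
        p.join()

if __name__ == '__main__':
    main()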