Python Tornado: consuming an external Queue from a non-coroutine - python-3.x

I have the following situation: I am using Python 3.6 and Tornado 5.1 to receive client requests over WebSocket. Some of these requests require invoking external processing, which returns a queue and then periodically deposits results in it. These results are transmitted to the clients over the WebSocket.
The external processing is NOT a coroutine, so I invoke it using run_in_executor.
My problem:
When the response time of the external processing is very long, run_in_executor reaches the maximum number of workers (default: number of processors x 5)!
Is it safe to increase the maximum number of workers?
Or is another solution recommended?
Below is a simplified version of the code.
Thank you very much in advance!
#########################
## SERVER CODE ##
#########################
from random import randint
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
from tornado import gen
import threading
import asyncio
import queue
import time
class WSHandler(tornado.websocket.WebSocketHandler):
    """entry point for all WS requests"""

    def open(self):
        print('new connection. Request: ' + str(self.request))

    async def on_message(self, message):
        # Emulates the subscription to an external object
        # that returns a queue to listen on
        producer = Producer()
        q = producer.q
        while True:
            rta = await tornado.ioloop.IOLoop.current().run_in_executor(None, self.loop_on_q, q)
            if rta is not None:
                await self.write_message(str(rta))
            else:
                break

    def on_close(self):
        print('connection closed. Request: ' + str(self.request) +
              '. close_reason: ' + str(self.close_reason) +
              '. close_code: ' + str(self.close_code) +
              '. get_status: ' + str(self.get_status()))

    def loop_on_q(self, q):
        rta = q.get()
        return rta


class Producer:
    def __init__(self):
        self.q = queue.Queue()
        t = threading.Thread(target=self.start)
        t.daemon = True
        t.start()

    def start(self):
        count = 1
        while True:
            # time.sleep(randint(1,5))
            if count < 100:
                self.q.put(count)
            else:
                self.q.put(None)
                break
            time.sleep(50)
            count += 1


application = tornado.web.Application([
    (r'/ws', WSHandler),
])

if __name__ == "__main__":
    asyncio.set_event_loop(asyncio.new_event_loop())
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    print('SRV START')
    tornado.ioloop.IOLoop.instance().start()
#########################
## CLIENT CODE ##
#########################
# If you run it more than 20 times in less than 50 seconds ==> Block
# (number of processors x 5), I have 4 cores
from websocket import create_connection
def conect():
    url = 'ws://localhost:8888/ws'
    ws = create_connection(url)
    print('Connecting')
    return ws

print('Connecting to srv')
con_ws = conect()
print('Established connection. Sending msg ...')
msj = '{"type":"Socket"}'
con_ws.send(msj)
print('Package sent. Waiting for answer...')
while True:
    result = con_ws.recv()
    print('Answer: ' + str(result))

Is it safe to increase the maximum number of workers? Yes, up to a certain fixed amount, which can be determined with load testing.
Or is another solution recommended? If you reach the worker limit, you can move the workers to multiple separate servers (this approach is called horizontal scaling) and pass jobs to them with a message queue. See Celery as a batteries-included solution, or RabbitMQ, Kafka, etc. if you prefer to write everything yourself.
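As a minimal sketch of the first option (assuming Tornado 5.x and the standard concurrent.futures module; the value of 50 below is an arbitrary placeholder to be tuned by load testing), you can pass an explicit ThreadPoolExecutor to run_in_executor instead of relying on the default pool size:

from concurrent.futures import ThreadPoolExecutor
import tornado.ioloop
import tornado.websocket

# Shared pool with an explicitly chosen size (placeholder value; tune via load testing)
executor = ThreadPoolExecutor(max_workers=50)

class WSHandler(tornado.websocket.WebSocketHandler):
    async def on_message(self, message):
        producer = Producer()
        q = producer.q
        while True:
            # q.get() blocks a pool thread, not the IOLoop
            rta = await tornado.ioloop.IOLoop.current().run_in_executor(executor, q.get)
            if rta is not None:
                await self.write_message(str(rta))
            else:
                break

Note that each open connection still ties up one pool thread while it waits on q.get(), so the pool size effectively caps the number of concurrent clients; the horizontal-scaling route described above removes that cap.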

Related

Python: Callback on the worker-queue not working

Apologies for the long post. I am trying to subscribe to a rabbitmq queue and then create a worker queue to execute tasks. This is required since the incoming volume on rabbitmq would be high and the processing task on each item from the queue would take 10-15 minutes to execute, hence necessitating the worker queue. Now I am trying to initiate only 4 items in the worker queue and register a callback method for processing the items in the queue. The expectation is that my code handles the case where all 4 instances in the worker queue are busy: new incoming items would be blocked until a free slot is available.
The rabbitmq piece is working well. The problem is that I cannot figure out why the items from my worker queue are not executing the task, i.e. the callback is not working. In fact, the item from the worker queue gets executed only once, when the program execution starts. For the rest of the time, tasks keep getting added to the worker queue without being consumed. I would appreciate it if somebody could help me understand this.
I am attaching the code for rabbitmqConsumer, driver, and slaveConsumer. Some information has been redacted in the code for privacy issues.
# This is the driver
#!/usr/bin/env python
import time
from rabbitmqConsumer import BasicMessageReceiver
basic_receiver_object = BasicMessageReceiver()
basic_receiver_object.declare_queue()
while True:
    basic_receiver_object.consume_message()
    time.sleep(2)
#This is the rabbitmqConsumer
#!/usr/bin/env python
import pika
import ssl
import json
from slaveConsumer import slave
class BasicMessageReceiver:
    def __init__(self):
        # SSL Context for TLS configuration of Amazon MQ for RabbitMQ
        ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
        url = <url for the queue>
        parameters = pika.URLParameters(url)
        parameters.ssl_options = pika.SSLOptions(context=ssl_context)
        self.connection = pika.BlockingConnection(parameters)
        self.channel = self.connection.channel()
        # worker-queue object
        self.slave_object = slave()
        self.slave_object.start_task()

    def declare_queue(self, queue_name="abc"):
        print(f"Trying to declare queue inside consumer({queue_name})...")
        self.channel.queue_declare(queue=queue_name, durable=True)

    def close(self):
        print("Closing Receiver")
        self.channel.close()
        self.connection.close()

    def _consume_message_setup(self, queue_name):
        def message_consume(ch, method, properties, body):
            print(f"I am inside the message_consume")
            message = json.loads(body)
            self.slave_object.execute_task(message)
            ch.basic_ack(delivery_tag=method.delivery_tag)
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(on_message_callback=message_consume,
                                   queue=queue_name)

    def consume_message(self, queue_name="abc"):
        print("I am starting the rabbitmq start_consuming")
        self._consume_message_setup(queue_name)
        self.channel.start_consuming()
#This is the slaveConsumer
#!/usr/bin/env python
import pika
import ssl
import json
import requests
import threading
import queue
import os
class slave:
    def __init__(self):
        self.job_queue = queue.Queue(maxsize=3)
        self.job_item = ""

    def start_task(self):
        def _worker():
            while True:
                json_body = self.job_queue.get()
                self._parse_object_from_queue(json_body)
                self.job_queue.task_done()
        threading.Thread(target=_worker, daemon=True).start()

    def execute_task(self, obj):
        print("Inside execute_task")
        self.job_item = obj
        self.job_queue.put(self.job_item)
        # print(self.job_queue.queue)

    def _parse_object_from_queue(self, json_body):
        if bool(json_body['entity']):
            if json_body['entity'] == 'Hello':
                print("Inside Slave: Hello")
            elif json_body['entity'] == 'World':
                print("Inside Slave: World")
            self.job_queue.join()
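For reference, here is a minimal sketch of the standard producer/consumer pattern with queue.Queue and a fixed pool of worker threads. The worker count of 4 and the process_item function are placeholders, not part of the code above; each worker blocks on get(), so new items simply wait once the bounded queue is full:

import queue
import threading

NUM_WORKERS = 4                      # placeholder: desired number of concurrent workers
job_queue = queue.Queue(maxsize=4)   # bounded queue: put() blocks when it is full

def process_item(item):
    # placeholder for the long-running task (10-15 minutes per item)
    print(f"processing {item}")

def worker():
    while True:
        item = job_queue.get()       # blocks until an item is available
        try:
            process_item(item)
        finally:
            job_queue.task_done()

for _ in range(NUM_WORKERS):
    threading.Thread(target=worker, daemon=True).start()

# producer side (e.g. the rabbitmq callback) only needs to do:
# job_queue.put(message)   # blocks if all workers are busy and the queue is full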

Trying to get data using Websocket & run some parallel code on the value returned using Asyncio

I am just getting started with asyncio in Python. What I am trying to do is below:
A websocket connects to a data provider and keeps listening for new data (runs forever).
In parallel, work on this data and maybe save it to a file, or make buy/sell decisions (any parallel operation is okay).
I am trying to use asyncio to achieve this (will threads be better? although threads seem more complicated than asyncio).
I am using a Jupyter notebook (so an event loop is already created; maybe that's the problem?).
Code 1 works, but it blocks my event loop and only keeps printing the data. All other code is blocked, since the loop is always busy in the websocket, I guess.
import ssl
import websocket
import json
from IPython.display import display, clear_output
from threading import Thread

def on_message(ws, message):
    resp1 = json.loads(message)
    clear_output(wait=True)
    print(resp1['events'][0]['price'])

def run():
    ws = websocket.WebSocketApp("wss://api.gemini.com/v1/marketdata/BTCUSD", on_message=on_message)
    ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})

ws_run = Thread(target=run)
ws_run.start()
print("ok")  # this prints once, but then it gets blocked by the websocket code.
I tried code 2, but it hangs forever and doesn't do anything.
import asyncio
import ssl
import websocket
import json
from IPython.display import display, clear_output

async def on_message(ws, message):
    resp1 = json.loads(message)
    clear_output(wait=True)
    #print(resp1,sort_keys=True, indent=4)
    #print(resp1['events'][0]['side'])
    print(resp1['events'][0]['price'])

async def count2(x):
    x = 0
    for i in range(5):
        await asyncio.sleep(0.01)
        print('second', x)
    y = x + 10
    return y

async def main():
    await asyncio.gather(count(x), on_message(ws, message))

if __name__ == "__main__":
    import time
    ws = websocket.WebSocketApp("wss://api.gemini.com/v1/marketdata/BTCUSD", on_message=on_message)
    asyncio.get_event_loop().run_forever(ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE}))
    s = time.perf_counter()
    await main()
    elapsed = time.perf_counter() - s
    print(f" executed in {elapsed:0.2f} seconds.")
I tried this variant in main(), still no response:
if __name__ == "__main__":
    import time
    ws = websocket.WebSocketApp("wss://api.gemini.com/v1/marketdata/BTCUSD", on_message=on_message)
    ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
    s = time.perf_counter()
    await main()
    elapsed = time.perf_counter() - s
    print(f" executed in {elapsed:0.2f} seconds.")
Update: I got this to work, but I don't know if it's the right way to do it without using run_forever():
import ssl
import websocket
import asyncio
import time

async def MySock():
    while True:
        print(ws.recv())
        await asyncio.sleep(0.5)

async def MyPrint():
    while True:
        print("-------------------------------------------")
        await asyncio.sleep(0.5)

async def main():
    await asyncio.gather(MySock(), MyPrint())

if __name__ == "__main__":
    ws = websocket.WebSocket()
    ws.connect("wss://api.gemini.com/v1/marketdata/btcusd?top_of_book=true&offers=true")
    s = time.perf_counter()
    await main()
    elapsed = time.perf_counter() - s
    print(f" executed in {elapsed:0.2f} seconds.")

websockets, asyncio and PyQt5 together at last. Is Quamash necessary?

I've been working on a client that uses PyQt5 and the websockets module, which is built around asyncio. I thought that something like the code below would work, but I'm finding that the incoming data (from the server) is not updated in the GUI until I press enter in the line edit box. Those incoming messages are intended to set the pulse for the updates to the GUI and will carry data to be used for updating. Is quamash a better way to approach this? By the way, I will be using processes for some other aspects of this code, so I don't consider it overkill (at this point).
This is Python 3.6, PyQt5.6(or higher) and whatever version of websockets that currently installs with pip. https://github.com/aaugustin/websockets
The client:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import asyncio
import websockets
import sys
import time
from multiprocessing import Process, Pipe, Queue
from PyQt5 import QtCore, QtGui, QtWidgets

class ComBox(QtWidgets.QDialog):
    def __init__(self):
        QtWidgets.QDialog.__init__(self)
        self.verticalLayout = QtWidgets.QVBoxLayout(self)
        self.groupBox = QtWidgets.QGroupBox(self)
        self.groupBox.setTitle("messages from beyond")
        self.gridLayout = QtWidgets.QGridLayout(self.groupBox)
        self.label = QtWidgets.QLabel(self.groupBox)
        self.gridLayout.addWidget(self.label, 0, 0, 1, 1)
        self.verticalLayout.addWidget(self.groupBox)
        self.lineEdit = QtWidgets.QLineEdit(self)
        self.verticalLayout.addWidget(self.lineEdit)
        self.lineEdit.editingFinished.connect(self.enterPress)

    #QtCore.pyqtSlot()
    def enterPress(self):
        mytext = str(self.lineEdit.text())
        self.inputqueue.put(mytext)

    #QtCore.pyqtSlot(str)
    def updategui(self, message):
        self.label.setText(message)

class Websocky(QtCore.QThread):
    updatemaingui = QtCore.pyqtSignal(str)

    def __init__(self):
        super(Websocky, self).__init__()

    def run(self):
        while True:
            time.sleep(.1)
            message = self.outputqueue.get()
            try:
                self.updatemaingui[str].emit(message)
            except Exception as e1:
                print("updatemaingui problem: {}".format(e1))

async def consumer_handler(websocket):
    while True:
        try:
            message = await websocket.recv()
            outputqueue.put(message)
        except Exception as e1:
            print(e1)

async def producer_handler(websocket):
    while True:
        message = inputqueue.get()
        await websocket.send(message)
        await asyncio.sleep(.1)

async def handler():
    async with websockets.connect('ws://localhost:8765') as websocket:
        consumer_task = asyncio.ensure_future(consumer_handler(websocket))
        producer_task = asyncio.ensure_future(producer_handler(websocket))
        done, pending = await asyncio.wait(
            [consumer_task, producer_task],
            return_when=asyncio.FIRST_COMPLETED, )
        for task in pending:
            task.cancel()

def start_websockets():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(handler())

inputqueue = Queue()
outputqueue = Queue()

app = QtWidgets.QApplication(sys.argv)
comboxDialog = ComBox()
comboxDialog.inputqueue = inputqueue
comboxDialog.outputqueue = outputqueue
comboxDialog.show()

webster = Websocky()
webster.outputqueue = outputqueue
webster.updatemaingui[str].connect(comboxDialog.updategui)
webster.start()

p2 = Process(target=start_websockets)
p2.start()

sys.exit(app.exec_())
The server:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import asyncio
import time
import websockets

# here we'll store all active connections to use for sending periodic messages
connections = []

##asyncio.coroutine
async def connection_handler(connection, path):
    connections.append(connection)  # add connection to pool
    while True:
        msg = await connection.recv()
        if msg is None:  # connection lost
            connections.remove(connection)  # remove connection from pool when the client disconnects
            break
        else:
            print('< {}'.format(msg))

##asyncio.coroutine
async def send_periodically():
    while True:
        await asyncio.sleep(2)  # switch to other code and continue execution in 2 seconds
        for connection in connections:
            message = str(round(time.time()))
            print('> Periodic event happened.')
            await connection.send(message)  # send message to each connected client

start_server = websockets.serve(connection_handler, 'localhost', 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.ensure_future(send_periodically())  # before the blocking call we schedule our coroutine for sending periodic messages
asyncio.get_event_loop().run_forever()
Shortly after posting this question I realized the problem. The line
message = inputqueue.get()
in the producer_handler function is blocking. This causes what should be an async function to hang everything in that process until it sees something in the queue. My workaround was to use the aioprocessing module, which provides asyncio-compatible queues. So it looks more like this:
import aioprocessing

async def producer_handler(websocket):
    while True:
        message = await inputqueue.coro_get()
        await websocket.send(message)
        await asyncio.sleep(.1)

inputqueue = aioprocessing.AioQueue()
The aioprocessing module provides some nice options and documentation, and in this case it is a rather simple solution to the issue. https://github.com/dano/aioprocessing
So, to answer my question: No, you don't have to use quamash for this kind of thing.
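A sketch of an alternative (an assumption on my part, not the fix the author tested): keep the plain multiprocessing queue and hand the blocking get() to a thread via run_in_executor, which likewise keeps producer_handler from stalling the loop:

import asyncio

async def producer_handler(websocket):
    loop = asyncio.get_event_loop()
    while True:
        # run the blocking Queue.get() in the default executor thread pool
        message = await loop.run_in_executor(None, inputqueue.get)
        await websocket.send(message)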

Processing huge CSV file using Python and multithreading

I have a function that yields lines from a huge CSV file lazily:
def get_next_line():
    with open(sample_csv, 'r') as f:
        for line in f:
            yield line

def do_long_operation(row):
    print('Do some operation that takes a long time')
I need to use threads such that for each record I get from the above function, I can call do_long_operation.
Most places on the Internet have examples like the one below, and I am not sure whether I am on the right path.
import threading

thread_list = []
for i in range(8):
    t = threading.Thread(target=do_long_operation, args=(get_next_row from get_next_line))
    thread_list.append(t)

for thread in thread_list:
    thread.start()

for thread in thread_list:
    thread.join()
My questions are:
How do I start only a finite number of threads, say 8?
How do I make sure that each of the threads will get a row from get_next_line?
You could use a thread pool from multiprocessing and map your tasks to a pool of workers:
from multiprocessing.pool import ThreadPool as Pool
# from multiprocessing import Pool
from random import randint
from time import sleep

def process_line(l):
    print(l, "started")
    sleep(randint(0, 3))
    print(l, "done")

def get_next_line():
    with open("sample.csv", 'r') as f:
        for line in f:
            yield line

f = get_next_line()
t = Pool(processes=8)

for i in f:
    t.map(process_line, (i,))

t.close()
t.join()
This will create eight workers and submit your lines to them, one by one. As soon as a process is "free", it will be allocated a new task.
There is a commented out import statement, too. If you comment out the ThreadPool and import Pool from multiprocessing instead, you will get subprocesses instead of threads, which may be more efficient in your case.
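One caveat worth noting: calling t.map(process_line, (i,)) with a single-element tuple waits for that one task to finish before the loop continues, so the eight workers are not actually busy in parallel. A hedged sketch of an alternative, assuming per-line ordering does not matter (the chunksize of 16 is an arbitrary placeholder), uses imap_unordered to stream the generator into the pool:

from multiprocessing.pool import ThreadPool as Pool

pool = Pool(processes=8)
# imap_unordered pulls lines from the generator lazily and
# hands each one to whichever worker is free next
for result in pool.imap_unordered(process_line, get_next_line(), chunksize=16):
    pass  # process_line's return value (None here) arrives as tasks complete
pool.close()
pool.join()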
Using a Pool/ThreadPool from multiprocessing to map tasks to a pool of workers and a Queue to control how many tasks are held in memory (so we don't read too far ahead into the huge CSV file if worker processes are slow):
from multiprocessing.pool import ThreadPool as Pool
# from multiprocessing import Pool
from random import randint
import time, os
from multiprocessing import Queue

def process_line(l):
    print("{} started".format(l))
    time.sleep(randint(0, 3))
    print("{} done".format(l))

def get_next_line():
    with open(sample_csv, 'r') as f:
        for line in f:
            yield line

# use for testing
# def get_next_line():
#     for i in range(100):
#         print('yielding {}'.format(i))
#         yield i

def worker_main(queue):
    print("{} working".format(os.getpid()))
    while True:
        # Get item from queue, block until one is available
        item = queue.get(True)
        if item is None:
            # Shutdown this worker and requeue the item so other workers can shut down as well
            queue.put(None)
            break
        else:
            # Process item
            process_line(item)
    print("{} done working".format(os.getpid()))

f = get_next_line()

# Use a multiprocessing queue with maxsize
q = Queue(maxsize=5)

# Start workers to process queue items
t = Pool(processes=8, initializer=worker_main, initargs=(q,))

# Enqueue items. This blocks if the queue is full.
for l in f:
    q.put(l)

# Enqueue the shutdown message (i.e. None)
q.put(None)

# We need to first close the pool before joining
t.close()
t.join()
Hannu's answer is not the best method.
I ran the code on a CSV file with 100M rows, and it took me forever to perform the operation.
However, prior to reading his answer, I had written the following code:
def call_processing_rows_pickably(row):
    process_row(row)

import csv
from multiprocessing import Pool
import time
import datetime

def process_row(row):
    row_to_be_printed = str(row) + str("hola!")
    print(row_to_be_printed)

class process_csv():
    def __init__(self, file_name):
        self.file_name = file_name

    def get_row_count(self):
        with open(self.file_name) as f:
            for i, l in enumerate(f):
                pass
        self.row_count = i

    def select_chunk_size(self):
        if(self.row_count > 10000000):
            self.chunk_size = 100000
            return
        if(self.row_count > 5000000):
            self.chunk_size = 50000
            return
        self.chunk_size = 10000
        return

    def process_rows(self):
        list_de_rows = []
        count = 0
        with open(self.file_name, 'rb') as file:
            reader = csv.reader(file)
            for row in reader:
                print(count + 1)
                list_de_rows.append(row)
                if(len(list_de_rows) == self.chunk_size):
                    p.map(call_processing_rows_pickably, list_de_rows)
                    del list_de_rows[:]

    def start_process(self):
        self.get_row_count()
        self.select_chunk_size()
        self.process_rows()

initial = datetime.datetime.now()
p = Pool(4)
ob = process_csv("100M_primes.csv")
ob.start_process()
final = datetime.datetime.now()
print(final - initial)
This took 22 minutes. Obviously, I need to make more improvements. For example, the fread function in R takes at most 10 minutes for this task.
The difference is: I first create a chunk of 100k rows, and then pass it to a function which is mapped by the pool (here, 4 workers).

Python Tweepy streaming with multitasking

In Python 2.7 I am successfully using the following code to listen to a direct message stream on an account:
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy import API
from tweepy.streaming import StreamListener

# These values are appropriately filled in the code
consumer_key = '######'
consumer_secret = '######'
access_token = '######'
access_token_secret = '######'

class StdOutListener( StreamListener ):
    def __init__( self ):
        self.tweetCount = 0

    def on_connect( self ):
        print("Connection established!!")

    def on_disconnect( self, notice ):
        print("Connection lost!! : ", notice)

    def on_data( self, status ):
        print("Entered on_data()")
        print(status, flush = True)
        return True

    # I can add code here to execute when a message is received, such as slicing the message and activating something else

    def on_direct_message( self, status ):
        print("Entered on_direct_message()")
        try:
            print(status, flush = True)
            return True
        except BaseException as e:
            print("Failed on_direct_message()", str(e))

    def on_error( self, status ):
        print(status)

def main():
    try:
        auth = OAuthHandler(consumer_key, consumer_secret)
        auth.secure = True
        auth.set_access_token(access_token, access_token_secret)
        api = API(auth)

        # If the authentication was successful, you should
        # see the name of the account print out
        print(api.me().name)

        stream = Stream(auth, StdOutListener())
        stream.userstream()
    except BaseException as e:
        print("Error in main()", e)

if __name__ == '__main__':
    main()
This is great, and I can also execute code when I receive a message, but the jobs I'm adding to a work queue need to be able to stop after a certain amount of time. I'm using the popular approach of saving start = time.time() and subtracting the current time to determine elapsed time, but this streaming code does not loop to check the time. It just waits for a new message, so the clock is never checked, so to speak.
My question is this: How can I get streaming to occur and still track time elapsed? Do I need to use multithreading as described in this article? http://www.tutorialspoint.com/python/python_multithreading.htm
I am new to Python and having fun playing around with hardware attached to a Raspberry Pi. I have learned so much from Stackoverflow, thank you all :)
I'm not sure exactly how you want to decide when to stop, but you can pass a timeout argument to the stream to give up after a certain delay.
stream = Stream(auth, StdOutListener(), timeout=30)
That will call your listener's on_timeout() method. If you return True, it will continue streaming. Otherwise, it will stop.
Between the stream's timeout argument and your listener's on_timeout(), you should be able to decide when to stop streaming.
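A minimal sketch of that combination (the 30-second timeout and the 300-second stop condition are placeholders, and exact listener hooks vary between tweepy versions; auth, Stream, StreamListener and time are assumed from the code above):

class StdOutListener(StreamListener):
    def __init__(self):
        self.start = time.time()

    def on_timeout(self):
        # called when no data arrives within the stream's timeout window
        if time.time() - self.start > 300:   # placeholder stop condition
            return False                     # stop streaming
        return True                          # keep streaming

stream = Stream(auth, StdOutListener(), timeout=30)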
I found I was able to get some multithreading code working the way I wanted. Unlike the Tutorialspoint tutorial, which gives an example of launching multiple instances of the same code with varying timing parameters, I was able to get two different blocks of code to run in their own instances:
One block of code constantly adds 10 to a global variable (var).
Another block checks when 5 seconds have elapsed, then prints var's value.
This demonstrates two different tasks executing and sharing data using Python multithreading.
See the code below.
import threading
import time

exitFlag = 0
var = 10

class myThread1 (threading.Thread):
    def __init__(self, threadID, name, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.counter = counter

    def run(self):
        # var counting block begins here
        print "addemup starting"
        global var
        while (var < 100000):
            if var > 90000:
                var = 0
            var = var + 10

class myThread2 (threading.Thread):
    def __init__(self, threadID, name, counter):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.counter = counter

    def run(self):
        # time checking block begins here and prints var every 5 secs
        print "checkem starting"
        global var
        start = time.time()
        elapsed = time.time() - start
        while (elapsed < 10):
            elapsed = time.time() - start
            if elapsed > 5:
                print "var = ", var
                start = time.time()
                elapsed = time.time() - start

# Create new threads
thread1 = myThread1(1, "Thread-1", 1)
thread2 = myThread2(2, "Thread-2", 2)

# Start new Threads
thread1.start()
thread2.start()

print "Exiting Main Thread"
My next task will be breaking my Twitter streaming into its own thread and passing the direct messages it receives as variables to a task-queueing program, while the first thread hopefully continues to listen for more direct messages.
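A rough sketch of that next step (my assumption of how it could look, not tested code from this post; it assumes Python 2.7 and the auth, Stream and StdOutListener objects from the earlier code): run the stream in a daemon thread and push incoming direct messages onto a Queue that the main thread consumes.

import Queue          # Python 2.7 queue module
import threading

dm_queue = Queue.Queue()

class QueueingListener(StdOutListener):
    def on_direct_message(self, status):
        # hand the message to the main thread instead of just printing it
        dm_queue.put(status)
        return True

def stream_worker():
    stream = Stream(auth, QueueingListener(), timeout=30)
    stream.userstream()

t = threading.Thread(target=stream_worker)
t.daemon = True
t.start()

# main thread: pull messages and dispatch them to the task queue
while True:
    dm = dm_queue.get()
    print "got direct message:", dm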
