Apologies, because I've seen this question asked a bunch, but having looked through all of those questions, none seem to fix my problem. My code looks like this:
TDSession = TDClient()
TDSession.grab_refresh_token()
q = queue.Queue(10)

asyncio.run(listener.startStreaming(TDSession, q))

while True:
    message = q.get()
    print('oh shoot!')
    print(message)
    orderEntry.placeOrder(TDSession=TDSession)
I have tried doing asyncio.create_task(listener.startStreaming(TDSession, q)); the problem is that I get

RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'startStreaming' was never awaited

which confused me, because this seemed to work in "Can an asyncio event loop run in the background without suspending the Python interpreter?", which is what I'm trying to do.
The listener.startStreaming function looks like this:
async def startStreaming(TDSession, q):
    streamingClient = TDSession.create_streaming_session()
    streamingClient.account_activity()
    await streamingClient.build_pipeline()
    while True:
        message = await streamingClient.start_pipeline()
        message = parseMessage(message)
        if message != None:
            print('putting message into q')
            print(dict(message))
            q.put(message)
Is there a way to make this work where I can run the listener in the background?
EDIT: I've tried this as well, but it only runs the consumer function instead of running both at the same time:
TDSession.grab_refresh_token()
q = queue.Queue(10)
loop = asyncio.get_event_loop()
loop.create_task(listener.startStreaming(TDSession, q))
loop.create_task(consumer(TDSession, q))
loop.run_forever()
As you found out, the asyncio.run function runs the given coroutine until it is complete. In other words, it waits for the coroutine returned by listener.startStreaming to finish before proceeding to the next line.

asyncio.create_task, on the other hand, requires the caller to already be running inside an asyncio loop. From the docs:
The task is executed in the loop returned by get_running_loop(), RuntimeError is raised if there is no running loop in current thread.
What you need is to combine the two: create an async function, and call create_task inside that async function.
For example:
async def main():
    TDSession = TDClient()
    TDSession.grab_refresh_token()
    q = asyncio.Queue(10)

    streaming_task = asyncio.create_task(listener.startStreaming(TDSession, q))

    while True:
        message = await q.get()
        print('oh shoot!')
        print(message)
        orderEntry.placeOrder(TDSession=TDSession)

    await streaming_task  # If you want to wait for `startStreaming` to complete after the while loop


if __name__ == '__main__':
    asyncio.run(main())
Edit: From your comment I realized you want to use the producer-consumer pattern, so I also updated the example above to use asyncio.Queue instead of queue.Queue, so that the (single) thread can alternate between the producer (startStreaming) and the consumer (the while loop).
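For completeness, here is a self-contained sketch of that producer-consumer shape. The producer below is a made-up stand-in for startStreaming (the real TD Ameritrade client isn't needed to show the pattern), and it pushes a None sentinel so the consumer knows when to stop:

```python
import asyncio


async def producer(q):
    # Stand-in for startStreaming: push a few fake messages, then a sentinel.
    for i in range(3):
        await asyncio.sleep(0)   # yield control, as awaiting the stream would
        await q.put(f"message-{i}")
    await q.put(None)            # sentinel so the consumer knows to stop


async def main():
    q = asyncio.Queue(10)
    producer_task = asyncio.create_task(producer(q))
    received = []
    while True:
        message = await q.get()  # suspends here, letting the producer run
        if message is None:
            break
        received.append(message)
    await producer_task
    return received


result = asyncio.run(main())
print(result)  # ['message-0', 'message-1', 'message-2']
```

The key point is that `await q.get()` suspends the consumer, which is what gives the producer task a chance to run on the same thread; a blocking `queue.Queue.get()` would never yield control back to the loop.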
Related
The two coroutines in the code below, running in different threads, cannot communicate with each other via asyncio.Queue. After the producer inserts a new item into the asyncio.Queue, the consumer cannot get this item from that asyncio.Queue; it gets blocked in the method await self.n_queue.get().

I tried printing the ids of the asyncio.Queue in both consumer and producer, and I find that they are the same.
import asyncio
import threading
import time


class Consumer:
    def __init__(self):
        self.n_queue = None
        self._event = None

    def run(self, loop):
        loop.run_until_complete(asyncio.run(self.main()))

    async def consume(self):
        while True:
            print("id of n_queue in consumer:", id(self.n_queue))
            data = await self.n_queue.get()
            print("get data ", data)
            self.n_queue.task_done()

    async def main(self):
        loop = asyncio.get_running_loop()
        self.n_queue = asyncio.Queue(loop=loop)
        task = asyncio.create_task(self.consume())
        await asyncio.gather(task)

    async def produce(self):
        print("id of queue in producer ", id(self.n_queue))
        await self.n_queue.put("This is a notification from server")


class Producer:
    def __init__(self, consumer, loop):
        self._consumer = consumer
        self._loop = loop

    def start(self):
        while True:
            time.sleep(2)
            self._loop.run_until_complete(self._consumer.produce())


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    print(id(loop))
    consumer = Consumer()
    threading.Thread(target=consumer.run, args=(loop,)).start()
    producer = Producer(consumer, loop)
    producer.start()
id of n_queue in consumer: 2255377743176
id of queue in producer 2255377743176
id of queue in producer 2255377743176
id of queue in producer 2255377743176
I tried to debug step by step into asyncio.Queue, and I find that after the method self._getters.append(getter) is invoked in asyncio.Queue, the item is inserted into the queue self._getters. The following snippets are all from asyncio.Queue.
async def get(self):
    """Remove and return an item from the queue.

    If queue is empty, wait until an item is available.
    """
    while self.empty():
        getter = self._loop.create_future()
        self._getters.append(getter)
        try:
            await getter
        except:
            # ...
            raise
    return self.get_nowait()
When a new item is inserted into the asyncio.Queue in the producer, the methods below are invoked. The variable self._getters has no items, although it has the same id in put() and get().
def put_nowait(self, item):
    """Put an item into the queue without blocking.

    If no free slot is immediately available, raise QueueFull.
    """
    if self.full():
        raise QueueFull
    self._put(item)
    self._unfinished_tasks += 1
    self._finished.clear()
    self._wakeup_next(self._getters)

def _wakeup_next(self, waiters):
    # Wake up the next waiter (if any) that isn't cancelled.
    while waiters:
        waiter = waiters.popleft()
        if not waiter.done():
            waiter.set_result(None)
            break
Does anyone know what's wrong with the demo code above? If the two coroutines are running in different threads, how could they communicate with each other by asyncio.Queue?
Short answer: no!
Because both sides of an asyncio.Queue need to share the same event loop, but
An event loop runs in a thread (typically the main thread) and executes all callbacks and Tasks in its thread. While a Task is running in the event loop, no other Tasks can run in the same thread. When a Task executes an await expression, the running Task gets suspended, and the event loop executes the next Task.
see
https://docs.python.org/3/library/asyncio-dev.html#asyncio-multithreading
Even though you can pass the event loop to other threads, it can be dangerous to mix the different concurrency concepts. Note that passing the loop just means that you can add tasks to the loop from different threads, but they will still be executed in the main thread. However, adding tasks from other threads can lead to race conditions in the event loop, because
Almost all asyncio objects are not thread safe, which is typically not a problem unless there is code that works with them from outside of a Task or a callback. If there’s a need for such code to call a low-level asyncio API, the loop.call_soon_threadsafe() method should be used
see
https://docs.python.org/3/library/asyncio-dev.html#asyncio-multithreading
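If you do need to hand items from a plain thread to a coroutine, the safe shape is to keep the queue owned by one running loop and let the other thread submit the put to that loop via asyncio.run_coroutine_threadsafe. A minimal sketch (all function names here are mine, not from the question):

```python
import asyncio
import threading


async def consume(q, n):
    # Runs on the event loop's thread; collects n items.
    return [await q.get() for _ in range(n)]


def produce_from_thread(q, loop):
    # Plain (non-async) worker thread: never touch the queue directly;
    # hand the put() coroutine to the loop that owns the queue instead.
    for i in range(3):
        fut = asyncio.run_coroutine_threadsafe(q.put(i), loop)
        fut.result()  # block this thread until the loop has enqueued it


async def main():
    q = asyncio.Queue()
    loop = asyncio.get_running_loop()
    worker = threading.Thread(target=produce_from_thread, args=(q, loop))
    worker.start()
    items = await consume(q, 3)
    worker.join()
    return items


result = asyncio.run(main())
print(result)  # [0, 1, 2]
```

This works because the queue is only ever mutated on the loop's own thread; the worker thread merely schedules work there, which is exactly what run_coroutine_threadsafe is for.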
Typically, you should not need to run async functions in different threads, because they should be IO-bound, so a single thread should be sufficient to handle the workload. If you still have some CPU-bound tasks, you can dispatch them to different threads and make the result awaitable using asyncio.to_thread; see https://docs.python.org/3/library/asyncio-task.html#running-in-threads.
There are many questions already about this topic, see e.g. Send asyncio tasks to loop running in other thread or How to combine python asyncio with threads?
If you want to learn more about the concurrency concepts, I recommend reading https://medium.com/analytics-vidhya/asyncio-threading-and-multiprocessing-in-python-4f5ff6ca75e8
I want to stop awaiting the ainput function at the 6th iteration of the for loop:
import asyncio
from aioconsole import ainput


class Test():
    def __init__(self):
        self.read = True

    async def read_from_concole(self):
        while self.read:
            command = await ainput('$>')
            if command == 'exit':
                self.read = False
            if command == 'greet':
                print('greetings :J')

    async def count(self):
        console_task = asyncio.create_task(self.read_from_concole())
        for c in range(10):
            await asyncio.sleep(.5)
            print(f'number: {c}')
            if c == 5:  # 6th iteration
                # What should I do here?
                # The following code doesn't meet my expectations
                self.read = False
                console_task.cancel()
                await console_task

    # async def run_both(self):
    #     await asyncio.gather(
    #         self.read_from_concole(),
    #         self.count()
    #     )


if __name__ == '__main__':
    o1 = Test()
    loop = asyncio.new_event_loop()
    loop.run_until_complete(o1.count())
Of course, this code is simplified, but it covers the idea: write a program where one coroutine can cancel another that is awaiting something (in this example, ainput).

asyncio.Task.cancel() is not the solution, because it won't make the coroutine stop awaiting (I'd still need to type an arbitrary character into the console and press Enter, which is not what I want).

I don't even know whether my approach makes sense; I'm a fresh asyncio user and, for now, I know only the basics. In my real project, the situation is very similar: I have a GUI application and a console window. By clicking the 'X' button I want to close the window and terminate ainput (which reads commands from the console) to completely finish the program. The console part is running on a different thread, and because of that I can't close my program completely: that thread will run until ainput receives some input from a user.
Recently, I moved my REST server code from express.js to FastAPI. So far, I've been successful in the transition, until recently. Based on the firebase python admin sdk documentation, I've noticed that, unlike node.js, the python sdk is blocking. The documentation says here:
In Python and Go Admin SDKs, all write methods are blocking. That is, the write methods do not return until the writes are committed to the database.
I think this behavior is having a certain effect on my code; it could also be how I've structured my code. Some code from one of my files is below:
from app.services.new_service import nService
from firebase_admin import db
import json
import redis


class TryNewService:
    async def tryNew_func(self, request):
        # I've already initialized everything in another file for firebase
        ref = db.reference()
        r = redis.Redis()
        holdingData = await nService().dialogflow_session(request)
        fulfillmentText = json.dumps(holdingData[-1])
        body = await request.json()
        if ("user_prelimInfo_address" in holdingData):
            holdingData.append("session")
            holdingData.append(body["session"])
            print(holdingData)
            return(holdingData)
        else:
            if (("Default Welcome Intent" in holdingData)):
                pass
            else:
                UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
                ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]: holdingData[1]})
                print(holdingData)
                return(fulfillmentText)
Is there any workaround for the blocking effect of the ref.set() line in my code, kind of like adding a callback in node.js? I'm new to the asyncio world of python 3.
Update as of 06/13/2020: I added the following code and am now getting RuntimeError: Task attached to a different loop. In my second else statement I do the following:
loop = asyncio.new_event_loop()
UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    result = await loop.run_in_executor(pool, ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]:holdingData[1]}))
    print("custom thread pool:{}".format(result))
With this new RuntimeError, I would appreciate some help in figuring out.
If you want to run synchronous code inside an async coroutine, then the steps are:

loop = asyncio.get_event_loop()

Note: get, not new. get_event_loop provides the current event loop, while new_event_loop returns a new one.

await loop.run_in_executor(None, sync_method)

First parameter = None -> use the default executor instance.
Second parameter (sync_method) is the synchronous code to be called.

Remember that resources used by sync_method need to be properly synchronized:

a) either using asyncio.Lock
b) or using the asyncio.run_coroutine_threadsafe function (see an example below)

Forget in this case about ThreadPoolExecutor (which provides a way to achieve I/O parallelism, versus the concurrency provided by asyncio).
You can try the following code:
loop = asyncio.get_event_loop()
UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
result = await loop.run_in_executor(None, sync_method, ref, UserVal, holdingData)
print("custom thread pool:{}".format(result))
With a new function:
def sync_method(ref, UserVal, holdingData):
    result = ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]: holdingData[1]})
    return result
Please let me know your feedback
Note: the previous code is untested. I have only tested the following minimal example (using pytest & pytest-asyncio):
import asyncio
import time

import pytest


@pytest.mark.asyncio
async def test_1():
    loop = asyncio.get_event_loop()
    delay = 3.0
    result = await loop.run_in_executor(None, sync_method, delay)
    print(f"Result = {result}")


def sync_method(delay):
    time.sleep(delay)
    print(f"dddd {delay}")
    return "OK"
Answer to @jeff-ridgeway's comment:

Let's change the previous answer to clarify how to use run_coroutine_threadsafe to execute, from a sync worker thread, a coroutine that gathers these shared resources:

Add loop as an additional parameter in run_in_executor.
Move all shared resources from sync_method to a new async_method, which is executed with run_coroutine_threadsafe.
loop = asyncio.get_event_loop()
UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
result = await loop.run_in_executor(None, sync_method, ref, UserVal, holdingData, loop)
print("custom thread pool:{}".format(result))


def sync_method(ref, UserVal, holdingData, loop):
    coro = async_method(ref, UserVal, holdingData)
    future = asyncio.run_coroutine_threadsafe(coro, loop)
    return future.result()


async def async_method(ref, UserVal, holdingData):
    result = ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]: holdingData[1]})
    return result
Note: the previous code is untested. And here is my tested minimal example, updated:
@pytest.mark.asyncio
async def test_1():
    loop = asyncio.get_event_loop()
    delay = 3.0
    result = await loop.run_in_executor(None, sync_method, delay, loop)
    print(f"Result = {result}")


def sync_method(delay, loop):
    coro = async_method(delay)
    future = asyncio.run_coroutine_threadsafe(coro, loop)
    return future.result()


async def async_method(delay):
    time.sleep(delay)
    print(f"dddd {delay}")
    return "OK"
I hope this can be helpful
Run blocking database calls on the event loop using a ThreadPoolExecutor. See https://medium.com/@hiranya911/firebase-python-admin-sdk-with-asyncio-d65f39463916
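A hedged sketch of that idea, where blocking_db_write is a made-up stand-in for the real ref.set(...) call: run_in_executor moves the blocking call off the loop's thread, and asyncio.gather lets several of them run concurrently:

```python
import asyncio
import concurrent.futures
import time


def blocking_db_write(value):
    # Stand-in for a blocking SDK call such as ref.set(...)
    time.sleep(0.01)
    return f"wrote {value}"


async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        # Each blocking call runs on a pool thread; the coroutine just awaits.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, blocking_db_write, v) for v in (1, 2)))
    return results


result = asyncio.run(main())
print(result)  # ['wrote 1', 'wrote 2']
```

Note that the callable and its arguments are passed separately to run_in_executor; calling `ref.set(...)` at the call site (as in the question's update) executes the blocking call immediately and passes only its return value to the executor.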
I'm playing about with a personal project in python3.6 and I've run into the following issue, which results in the my_queue.join() call blocking indefinitely. Note this isn't my actual code but a minimal example demonstrating the issue.
import threading
import queue


def foo(stop_event, my_queue):
    while not stop_event.is_set():
        try:
            item = my_queue.get(timeout=0.1)
            print(item)  # Actual logic goes here
        except queue.Empty:
            pass
    print('DONE')


stop_event = threading.Event()
my_queue = queue.Queue()
thread = threading.Thread(target=foo, args=(stop_event, my_queue))
thread.start()

my_queue.put(1)
my_queue.put(2)
my_queue.put(3)
print('ALL PUT')
my_queue.join()
print('ALL PROCESSED')
stop_event.set()
print('ALL COMPLETE')
I get the following output (it's actually been consistent, but I understand that the output order may differ due to threading):
ALL PUT
1
2
3
No matter how long I wait I never see ALL PROCESSED output to the console, so why is my_queue.join() blocking indefinitely when all the items have been processed?
From the docs:
The count of unfinished tasks goes up whenever an item is added to the
queue. The count goes down whenever a consumer thread calls
task_done() to indicate that the item was retrieved and all work on it
is complete. When the count of unfinished tasks drops to zero, join()
unblocks.
You're never calling task_done() inside your foo function. The foo function should be something like the worker example from the docs:
def worker():
    while True:
        item = q.get()
        if item is None:
            break
        do_work(item)
        q.task_done()
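Applied to the question's code, the fix is a single task_done() call after each successful get(). A runnable sketch (I collect items into a list instead of printing, so the result is easy to check; otherwise it mirrors the original):

```python
import queue
import threading


def foo(stop_event, my_queue, seen):
    while not stop_event.is_set():
        try:
            item = my_queue.get(timeout=0.1)
            seen.append(item)       # actual logic goes here
            my_queue.task_done()    # the missing call: lets join() unblock
        except queue.Empty:
            pass


stop_event = threading.Event()
my_queue = queue.Queue()
seen = []
thread = threading.Thread(target=foo, args=(stop_event, my_queue, seen))
thread.start()

for i in (1, 2, 3):
    my_queue.put(i)
my_queue.join()      # now returns once all three items are marked done
stop_event.set()
thread.join()
print(seen)  # [1, 2, 3]
```

Without the task_done() call the queue's unfinished-task counter stays at 3 forever, which is exactly why join() never returned in the original.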
I am trying to run an asyncio subprocess in a pyramid view, but the view hangs and the async task appears to never complete. I can run this example outside of a pyramid view and it works.

That said, I originally tested using loop = asyncio.get_event_loop(), but this tells me RuntimeError: There is no current event loop in thread 'Dummy-2'.

There are certainly things I don't fully understand here, like maybe the view thread is different from the main thread, so get_event_loop doesn't work.

So does anybody know why my async task might not yield its result in this scenario? This is a naive example.
@asyncio.coroutine
def async_task(dir):
    # This task can be of varying length for each handled directory
    print("Async task start")
    create = asyncio.create_subprocess_exec(
        'ls',
        '-l',
        dir,
        stdout=asyncio.subprocess.PIPE)
    proc = yield from create
    # Wait for the subprocess exit
    data = yield from proc.stdout.read()
    exitcode = yield from proc.wait()
    return (exitcode, data)
@view_config(
    route_name='test_async',
    request_method='GET',
    renderer='json'
)
def test_async(request):
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    dirs = ['/tmp/1/', '/tmp/2/', '/tmp/3/']
    tasks = []
    for dir in dirs:
        tasks.append(asyncio.ensure_future(async_task(dir), loop=loop))
    loop.run_until_complete(asyncio.gather(*tasks))
    loop.close()
    return
You are invoking loop.run_until_complete in your view, so of course it is going to block until complete!

If you want to use asyncio in a WSGI app then you need to do so in another thread. For example, you could spin up a thread that contains the event loop and executes your async code. WSGI code is all synchronous, so any async code must be run this way, with its own issues, or you can just live with it blocking the request thread like you're doing now.
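A sketch of that "event loop in its own thread" approach (the handler below is a made-up stand-in for the pyramid view, and async_task is simplified to avoid the real subprocess call): start the loop once in a daemon thread, then submit coroutines to it from synchronous code with asyncio.run_coroutine_threadsafe:

```python
import asyncio
import threading

# One background event loop for the whole process, running in its own thread.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()


async def async_task(dir):
    # Stand-in for the subprocess call in the question.
    await asyncio.sleep(0.01)
    return (0, f"listing of {dir}")


def test_async_view():
    # Synchronous WSGI-style code: submit coroutines to the background loop
    # and block only this request thread while waiting for the results.
    dirs = ['/tmp/1/', '/tmp/2/', '/tmp/3/']
    futures = [asyncio.run_coroutine_threadsafe(async_task(d), loop)
               for d in dirs]
    return [f.result() for f in futures]


print(test_async_view())
```

The request thread still waits for the results, but all the tasks run concurrently on the shared background loop instead of each request spinning up and tearing down its own loop.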