I want to use a timeout to stop the blocking functions of MQTT. I am using the timeout_decorator module; it can stop an ordinary function, but it cannot stop the blocking function subscribe.simple.
The following code runs successfully:
import time
import timeout_decorator

@timeout_decorator.timeout(5, timeout_exception=StopIteration)
def mytest():
    print("Start")
    for i in range(1, 10):
        time.sleep(1)
        print("{} seconds have passed".format(i))

if __name__ == '__main__':
    mytest()
The result is as follows:
Start
1 seconds have passed
2 seconds have passed
3 seconds have passed
4 seconds have passed
Traceback (most recent call last):
File "timeutTest.py", line 12, in <module>
mytest()
File "/home/gyf/.local/lib/python3.5/site-packages/timeout_decorator/timeout_decorator.py", line 81, in new_function
return function(*args, **kwargs)
File "timeutTest.py", line 8, in mytest
time.sleep(1)
File "/home/gyf/.local/lib/python3.5/site-packages/timeout_decorator/timeout_decorator.py", line 72, in handler
_raise_exception(timeout_exception, exception_message)
File "/home/gyf/.local/lib/python3.5/site-packages/timeout_decorator/timeout_decorator.py", line 45, in _raise_exception
raise exception()
timeout_decorator.timeout_decorator.TimeoutError: 'Timed Out'
But I failed with the subscribe.simple API:
import timeout_decorator

@timeout_decorator.timeout(5)
def sub():
    # print(type(msg))
    print("----before simple")
    # threading.Timer(5, operateFail, args=)
    msg = subscribe.simple("paho/test/simple", hostname=MQTT_IP, port=MQTT_PORT)
    print("----after simple")
    return msg

publish.single("paho/test/single", "cloud to device", qos=2, hostname=MQTT_IP, port=MQTT_PORT)
try:
    print("pub")
    msg = sub()
    print(msg)
except StopIteration as identifier:
    print("error")
The result waits infinitely:
pub
----before simple
I want the function that includes the subscribe.simple API call to stop after 5 seconds.
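One possible alternative worth trying first (a sketch, untested with paho): timeout_decorator also has a use_signals=False mode that runs the decorated function in a separate process, which can be terminated even while blocked in C code.

import timeout_decorator
from paho.mqtt import subscribe

MQTT_IP = "localhost"
MQTT_PORT = 1883

# use_signals=False runs the decorated function in a separate process,
# so a call blocked inside C code can still be killed on timeout. Note
# the return value must be picklable, which may not hold for MQTTMessage.
@timeout_decorator.timeout(5, use_signals=False)
def sub():
    return subscribe.simple("paho/test/simple", hostname=MQTT_IP, port=MQTT_PORT)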
Asyncio can't handle a blocking function in the same thread, so using asyncio.wait_for directly failed. However, inspired by this blog post, I used loop.run_in_executor to keep control over the blocking call.
from paho.mqtt import subscribe
import asyncio

MQTT_IP = "localhost"
MQTT_PORT = 1883
msg = None

def possibly_blocking_function():
    global msg
    print("listening for message")
    msg = subscribe.simple(
        "paho/test/simple",
        hostname=MQTT_IP,
        port=MQTT_PORT,
    )
    print("message received!")

async def main():
    print("----before simple")
    try:
        await asyncio.wait_for(
            loop.run_in_executor(None, possibly_blocking_function), timeout=5
        )
    except asyncio.TimeoutError:
        pass
    print("----after simple")

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Output:
----before simple
listening for message
----after simple
Please note this is not perfect: the program won't end, since the executor thread is still running. You can exit it using various solutions, but that is out of scope here since I am still looking for a clean way to close that stuck thread.
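One workaround, sketched below under the assumption that abandoning the stuck call at shutdown is acceptable (run_blocking_with_timeout is my name, not a paho or asyncio API): run the blocking call in a daemon thread instead of the default executor, so it cannot block interpreter exit.

import asyncio
import threading

def run_blocking_with_timeout(func, timeout):
    # Bridge a blocking call into asyncio via a daemon thread. Daemon
    # threads do not prevent interpreter shutdown, so a call stuck in
    # subscribe.simple no longer keeps the process alive.
    loop = asyncio.get_event_loop()
    fut = loop.create_future()

    def _deliver(action, value):
        # Runs in the event-loop thread; skip if wait_for already
        # cancelled the future on timeout.
        if not fut.done():
            action(value)

    def _target():
        try:
            result = func()
        except Exception as exc:
            loop.call_soon_threadsafe(_deliver, fut.set_exception, exc)
        else:
            loop.call_soon_threadsafe(_deliver, fut.set_result, result)

    threading.Thread(target=_target, daemon=True).start()
    return asyncio.wait_for(fut, timeout)

Inside main(), the call would then be await run_blocking_with_timeout(possibly_blocking_function, 5).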
Related: After the asyncio.wait_for timeout, the task was not cancelled
The script below is a minimized reproduction. The TCP server just sends two chars 100 seconds after the client connects.
import sys
import asyncio
import socket

async def test_single_call():
    reader, writer = await asyncio.open_connection(host='127.0.0.1', port=8888)
    try:
        msg = await asyncio.wait_for(reader.read(1), timeout=3)
        print("Unexpected message received:", msg, file=sys.stderr)
        assert False
    except asyncio.TimeoutError:
        pass
    msg = await reader.read(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(test_single_call())
loop.close()
The TCP client (code above) is expected to time out after 3 seconds and read again after that; but it seems the read task was not cancelled after the timeout. My Python version is 3.6.9.
Traceback (most recent call last):
File "tcpclient.py", line 17, in <module>
loop.run_until_complete(test_single_call())
File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
return future.result()
File "tcpclient.py", line 14, in test_single_call
msg = await reader.read(1)
File "/usr/lib/python3.6/asyncio/streams.py", line 634, in read
yield from self._wait_for_data('read')
File "/usr/lib/python3.6/asyncio/streams.py", line 452, in _wait_for_data
'already waiting for incoming data' % func_name)
RuntimeError: read() called while another coroutine is already waiting for incoming data
I also uploaded the tcp server here
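For reference, a minimal sketch of such a server (my reconstruction from the description above, not the uploaded original):

import asyncio

async def handle(reader, writer):
    # Send two bytes 100 seconds after the client connects.
    await asyncio.sleep(100)
    writer.write(b"ab")
    await writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
server = loop.run_until_complete(asyncio.start_server(handle, '127.0.0.1', 8888))
loop.run_forever()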
Linux Python 3.6 had this issue. Two options:
Upgrade to Python 3.8 or 3.9
Replace open_connection and StreamReader with loop.sock_connect, loop.sock_recv, and loop.sock_sendall
e.g.:
import sys
import asyncio
import socket

server_address = ('127.0.0.1', 8888)

async def test_single_call(loop):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)  # required by the loop.sock_* APIs
    await loop.sock_connect(sock, server_address)  # returns None; don't rebind sock
    try:
        msg = await asyncio.wait_for(loop.sock_recv(sock, 1), timeout=3)
        print("Unexpected message received:", msg, file=sys.stderr)
        assert False
    except asyncio.TimeoutError:
        pass
    msg = await loop.sock_recv(sock, 1)

loop = asyncio.get_event_loop()
loop.run_until_complete(test_single_call(loop))
loop.close()
The asyncio.Future documentation states:
Future objects are used to bridge low-level callback-based code with high-level async/await code.
Is there a canonical example of how this is done?
To make the example more concrete, suppose we want to wrap the following function, which is typical of a callback-based API. To be clear: this function cannot be modified - pretend it is some complex third party library (that probably uses threads internally that we can't control) that wants a callback.
import threading
import time

def callback_after_delay(secs, callback, *args):
    """Call callback(*args) after sleeping for secs seconds"""
    def _target():
        time.sleep(secs)
        callback(*args)
    thread = threading.Thread(target=_target)
    thread.start()
We would like to be able to use our wrapper function like:
async def main():
    await aio_callback_after_delay(10., print, "Hello, World")
Just use a ThreadPoolExecutor. The code doesn't change except for how you kick off the thread. If you remove return_exceptions from the gather call, you'll see the exception with the full traceback printed, so it's up to you which you want (see the variant sketched after the output below).
import time
import random
from concurrent.futures import ThreadPoolExecutor
import asyncio

def cb():
    print("cb called")

def blocking():
    if random.randint(0, 3) == 1:
        raise ValueError("Random Exception!")
    time.sleep(1)
    cb()
    return 5

async def run(loop):
    futs = []
    executor = ThreadPoolExecutor(max_workers=5)
    for x in range(5):
        future = loop.run_in_executor(executor, blocking)
        futs.append(future)
    res = await asyncio.gather(*futs, return_exceptions=True)
    for r in res:
        if isinstance(r, Exception):
            print("Exception:", r)

loop = asyncio.get_event_loop()
loop.run_until_complete(run(loop))
loop.close()
Output
cb called
cb called
cb called
Exception: Random Exception!
Exception: Random Exception!
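As mentioned above, here is the variant without return_exceptions (a sketch reusing blocking and the executor from the code above): gather then propagates the first exception, so you catch it around the await instead:

async def run_strict(loop):
    executor = ThreadPoolExecutor(max_workers=5)
    futs = [loop.run_in_executor(executor, blocking) for _ in range(5)]
    try:
        await asyncio.gather(*futs)
    except ValueError as exc:
        # The first exception propagates with its full traceback;
        # the remaining futures keep running in the executor.
        print("Exception:", exc)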
Below is a complete self-contained example to demonstrate one approach. It takes care of running the callback on the asyncio thread, and handles exceptions raised from the callback.
This works in Python 3.6.6. I wonder about using asyncio.get_event_loop() here. We need a loop, as loop.create_future() is the preferred way to create futures in asyncio. However, in 3.7 we should prefer asyncio.get_running_loop(), which raises an exception if no loop is running (see the 3.7+ variant sketched after the traceback below). Perhaps the best approach is to pass the loop into aio_callback_after_delay explicitly, but this does not match existing asyncio code, which often makes the loop an optional keyword argument. Clarification on this point, or any other improvements, would be appreciated!
import asyncio
import threading
import time

# This is the callback code we are trying to bridge
def callback_after_delay(secs, callback, *args):
    """Call callback(*args) after sleeping for secs seconds"""
    def _target():
        time.sleep(secs)
        callback(*args)
    thread = threading.Thread(target=_target)
    thread.start()

# This is our wrapper
async def aio_callback_after_delay(secs, callback, *args):
    loop = asyncio.get_event_loop()
    f = loop.create_future()

    def _inner():
        try:
            f.set_result(callback(*args))
        except Exception as ex:
            f.set_exception(ex)

    callback_after_delay(secs, loop.call_soon_threadsafe, _inner)
    return await f

#
# Below is test code to demonstrate things work
#

async def test_aio_callback_after_delay():
    print('Before!')
    await aio_callback_after_delay(1., print, "Hello, World!")
    print('After!')

async def test_aio_callback_after_delay_exception():
    def callback():
        raise RuntimeError()
    print('Before!')
    await aio_callback_after_delay(1., callback)
    print('After!')

if __name__ == '__main__':
    loop = asyncio.get_event_loop()

    # Basic test
    print('Basic Test')
    loop.run_until_complete(test_aio_callback_after_delay())

    # Test our implementation is truly async
    print('Truly Async!')
    loop.run_until_complete(
        asyncio.gather(
            *(test_aio_callback_after_delay() for i in range(0, 5))
        )
    )

    # Test exception handling
    print('Exception Handling')
    loop.run_until_complete(test_aio_callback_after_delay_exception())
The output is something like:
Basic Test
Before!
Hello, World
After!
Truly Async!
Before!
Before!
Before!
Before!
Before!
Hello, World
Hello, World
Hello, World
Hello, World
Hello, World
After!
After!
After!
After!
After!
Exception Handling
Before!
Traceback (most recent call last):
File "./scratch.py", line 60, in <module>
loop.run_until_complete(test_aio_callback_after_delay_exception())
File "\lib\asyncio\base_events.py", line 468, in run_until_complete
return future.result()
File "./scratch.py", line 40, in test_aio_callback_after_delay_exception
await aio_callback_after_delay(1., callback)
File "./scratch.py", line 26, in aio_callback_after_delay
return await f
File "./scratch.py", line 21, in _inner
f.set_result(callback(*args))
File "./scratch.py", line 37, in callback
raise RuntimeError()
RuntimeError
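For completeness, here is a minimal 3.7+ variant of the wrapper (a sketch assuming Python 3.7 or later and the same callback_after_delay as above; only the loop lookup changes):

import asyncio

async def aio_callback_after_delay(secs, callback, *args):
    # get_running_loop() raises RuntimeError if called outside a running
    # event loop, instead of implicitly creating a new one.
    loop = asyncio.get_running_loop()
    f = loop.create_future()

    def _inner():
        try:
            f.set_result(callback(*args))
        except Exception as ex:
            f.set_exception(ex)

    callback_after_delay(secs, loop.call_soon_threadsafe, _inner)
    return await f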
I am coding for Python > 3.5.
I am using the Websockets 6.0 library that is here:
https://github.com/aaugustin/websockets
I have been calling them the asyncio websockets since they are based on asyncio.
In my search there was a lot about "lost connections", but I am looking at how to cancel a current ws.recv().
A call to .start() creates a helper thread to start the asyncio event loop. Then the receive function starts, calls the connect function, and the websocket ws is instantiated. The receive function then waits for all messages. When I am ready to stop, .stop() is called. I was expecting the stop function to stop the awaited ws.recv(): with the keep_running flag set to False and ws.close() run, I would expect ws.recv() to end and then the keep_running loop to end. That is not what is happening. I see all three stops, but never the receive stop.
command is: stop
Do command is stopped
Stop 1
Stop 2
Stop 3
^CException ignored in: <module 'threading' from '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py'>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 1294, in _shutdown
t.join()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 1056, in join
self._wait_for_tstate_lock()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 1072, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
(pyalmondplus) Pauls-MBP:pyalmondplus paulenright$
Code for reference:
import threading
import asyncio
import websockets
import json

class PyAlmondPlus:
    def __init__(self, api_url, event_callback=None):
        self.api_url = api_url
        self.ws = None
        self.loop = asyncio.get_event_loop()
        self.receive_task = None
        self.event_callback = event_callback
        self.keep_running = False

    async def connect(self):
        print("connecting")
        if self.ws is None:
            print("opening socket")
            self.ws = await websockets.connect(self.api_url)
            print(self.ws)

    async def disconnect(self):
        pass

    async def send(self, message):
        pass

    async def receive(self):
        print("receive started")
        while self.keep_running:
            if self.ws is None:
                await self.connect()
            recv_data = await self.ws.recv()
            print(recv_data)
        print("receive ended")

    def start(self):
        self.keep_running = True
        print("Start 1")
        print("Start 2")
        t = threading.Thread(target=self.start_loop, args=())
        print("Start 3")
        t.start()
        print("Receiver running")

    def start_loop(self):
        print("Loop helper 1")
        policy = asyncio.get_event_loop_policy()
        policy.set_event_loop(policy.new_event_loop())
        self.loop = asyncio.get_event_loop()
        self.loop.set_debug(True)
        asyncio.set_event_loop(self.loop)
        self.loop.run_until_complete(self.receive())
        print("Loop helper 2")

    def stop(self):
        print("Stop 1")
        self.keep_running = False
        print("Stop 2")
        self.ws.close()
        print("Stop 3")
I am looking at how to cancel a current ws.recv() [...] I see all three stops, but never the receive stop.
Your receive coroutine is likely suspended waiting for some data to arrive, so it's not in a position to check the keep_running flag.
The easy and robust way to stop a running coroutine is to cancel the asyncio Task that drives it. That will immediately un-suspend the coroutine and make whatever it was waiting for raise a CancelledError. When using cancel you don't need a keep_running flag at all, the exception will terminate the loop automatically.
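To illustrate, a self-contained sketch (asyncio.sleep stands in for the awaited ws.recv()):

import asyncio

async def worker():
    try:
        while True:
            await asyncio.sleep(3600)  # stands in for the awaited ws.recv()
    except asyncio.CancelledError:
        print("worker cancelled")
        raise

loop = asyncio.get_event_loop()
task = loop.create_task(worker())
loop.call_later(1, task.cancel)  # cancel() un-suspends the coroutine
try:
    loop.run_until_complete(task)
except asyncio.CancelledError:
    pass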
A call to the .start() creates a helper thread to start the asynico event loop.
This works, but you don't really need a new thread and a whole new event loop for each instance of PyAlmondPlus. Asyncio is designed to run inside a single thread, so one event loop instance can host any number of coroutines.
Here is a possible design that implements both ideas (not tested with actual web sockets):
# pre-start a single thread that runs the asyncio event loop
bgloop = asyncio.new_event_loop()
_thread = threading.Thread(target=bgloop.run_forever)
_thread.daemon = True
_thread.start()

class PyAlmondPlus:
    def __init__(self, api_url):
        self.api_url = api_url
        self.ws = None

    async def connect(self):
        if self.ws is None:
            self.ws = await websockets.connect(self.api_url)

    async def receive(self):
        # keep_running is not needed - cancel the task instead
        while True:
            if self.ws is None:
                await self.connect()
            recv_data = await self.ws.recv()

    async def init_receive_task(self):
        self.receive_task = bgloop.create_task(self.receive())

    def start(self):
        # use run_coroutine_threadsafe to safely submit a coroutine
        # to the event loop running in a different thread
        init_done = asyncio.run_coroutine_threadsafe(
            self.init_receive_task(), bgloop)
        # wait for the init coroutine to actually finish
        init_done.result()

    def stop(self):
        # Cancel the running task. Since the event loop is in a
        # background thread, request cancellation with
        # call_soon_threadsafe.
        bgloop.call_soon_threadsafe(self.receive_task.cancel)
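A usage sketch (the URL is a placeholder of mine, not from the original):

client = PyAlmondPlus("ws://localhost:8765")
client.start()  # submits receive() to the background loop and waits for it to be scheduled
# ... the main thread does other work while messages arrive ...
client.stop()   # thread-safely cancels the receive task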
I am using aiozmq for a simple RPC program.
I have created a client and server.
When the server is running, the client runs just fine.
I have a timeout set in the client to raise an exception in the event of no server being reachable.
The client code is below. When I run it without the server running, I get an expected exception but the script doesn't actually return to the terminal. It still seems to be executing.
Could someone firstly explain how this is happening and secondly how to fix it?
import asyncio
from asyncio import TimeoutError
from aiozmq import rpc
import sys
import os
import signal
import threading
import traceback

# signal.signal(signal.SIGINT, signal.SIG_DFL)

async def client():
    print("waiting for connection..")
    client = await rpc.connect_rpc(
        connect='tcp://127.0.0.1:5555',
        timeout=1
    )
    print("got client")
    for i in range(100):
        print("{}: calling simple_add".format(i))
        ret = await client.call.simple_add(1, 2)
        assert 3 == ret
        print("calling slow_add")
        ret = await client.call.slow_add(3, 5)
        assert 8 == ret
    client.close()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.set_debug(True)
    future = asyncio.ensure_future(client())
    try:
        loop.run_until_complete(future)
    except TimeoutError:
        print("Timeout occurred...")
        future.cancel()
        loop.stop()
        # loop.run_forever()
        main_thread = threading.currentThread()
        for t in threading.enumerate():
            if t is main_thread:
                print("skipping main_thread...")
                continue
            print("Thread is alive? {}".format({True: 'yes',
                                                False: 'no'}[t.is_alive()]))
            print("Waiting for thread...{}".format(t.getName()))
            t.join()
        print(sys._current_frames())
        traceback.print_stack()
        for thread_id, frame in sys._current_frames().items():
            name = thread_id
            for thread in threading.enumerate():
                if thread.ident == thread_id:
                    name = thread.name
            traceback.print_stack(frame)
        print("exiting..")
        sys.exit(1)
        # os._exit(1)
    print("eh?")
The result of running the above is below. Note again that the program was still running; I had to Ctrl-C to exit.
> python client.py
waiting for connection..
got client
0: calling simple_add
Timeout occurred...
skipping main_thread...
{24804: <frame object at 0x00000000027C3848>}
File "client.py", line 54, in <module>
traceback.print_stack()
File "client.py", line 60, in <module>
traceback.print_stack(frame)
exiting..
^C
I also tried sys.exit() which also didn't work:
try:
    loop.run_until_complete(future)
except:
    print("exiting..")
    sys.exit(1)
I can get the program to die, but only if I use os._exit(1); sys.exit() doesn't seem to cut it. It doesn't appear that there are any other threads preventing the interpreter from dying (unless I'm mistaken?). What else could be stopping the program from exiting?
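One generic check (a diagnostic sketch, not specific to aiozmq): sys.exit() only raises SystemExit in the calling thread, and the interpreter still joins every live non-daemon thread at shutdown, so listing them shows what can keep the process alive:

import threading

for t in threading.enumerate():
    # Live non-daemon threads are joined at interpreter shutdown and
    # can keep the process running after sys.exit().
    print(t.name, "daemon:", t.daemon, "alive:", t.is_alive())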
I've just started using the asyncio libs from Python 3.4 and wrote a small program that attempts to fetch 50 webpages concurrently at a time. The program blows up after a few hundred requests with a 'Too many open files' exception.
I thought that my fetch method closes the connections with the 'response.read_and_close()' method call.
Any ideas what's going on here? Am I going about this problem the right way?
import asyncio
import aiohttp

@asyncio.coroutine
def fetch(url):
    response = yield from aiohttp.request('GET', url)
    response = yield from response.read_and_close()
    return response.decode('utf-8')

@asyncio.coroutine
def print_page(url):
    page = yield from fetch(url)
    # print(page)

@asyncio.coroutine
def process_batch_of_urls(round, urls):
    print("Round starting: %d" % round)
    coros = []
    for url in urls:
        coros.append(asyncio.Task(print_page(url)))
    yield from asyncio.gather(*coros)
    print("Round finished: %d" % round)

@asyncio.coroutine
def process_all():
    api_url = 'https://google.com'
    for i in range(10):
        urls = []
        for url in range(50):
            urls.append(api_url)
        yield from process_batch_of_urls(i, urls)

loop = asyncio.get_event_loop()
loop.run_until_complete(process_all())
The error I'm getting is:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/aiohttp/client.py", line 106, in request
File "/usr/local/lib/python3.4/site-packages/aiohttp/connector.py", line 135, in connect
File "/usr/local/lib/python3.4/site-packages/aiohttp/connector.py", line 242, in _create_connection
File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/base_events.py", line 424, in create_connection
File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/base_events.py", line 392, in create_connection
File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/socket.py", line 123, in __init__
OSError: [Errno 24] Too many open files
During handling of the above exception, another exception occurred:
Aha, I grok your problem.
An explicit connector can definitely solve the issue.
https://github.com/KeepSafe/aiohttp/pull/79 should fix it for implicit connectors too.
Thank you very much for finding the resource leak in aiohttp.
UPD: aiohttp 0.8.2 no longer has this problem.
OK, I finally got it to work.
It turns out I had to use a TCPConnector, which pools connections.
So I made this variable:
connector = aiohttp.TCPConnector(share_cookies=True, loop=loop)
and passed it to each GET request. My new fetch routine looks like this:
@asyncio.coroutine
def fetch(url):
    data = ""
    try:
        yield from asyncio.sleep(1)
        response = yield from aiohttp.request('GET', url, connector=connector)
    except Exception as exc:
        print('...', url, 'has error', repr(str(exc)))
    else:
        data = (yield from response.read()).decode('utf-8', 'replace')
        response.close()
    return data
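A complementary way to bound the number of open sockets (my sketch, not from the original answer) is to cap concurrency with a semaphore, using the same generator-based coroutine style:

sem = asyncio.Semaphore(10)  # allow at most 10 requests in flight

@asyncio.coroutine
def bounded_fetch(url):
    # The pre-3.5 'with (yield from sem):' form acquires the semaphore
    # and releases it when the block exits.
    with (yield from sem):
        return (yield from fetch(url))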