Okay, so I created a DataStream object, which is just a wrapper class around asyncio.Queue. I am passing it around all over the place and everything works fine up until the following functions. I am calling ensure_future to run two infinite loops: one that replicates the data in one DataStream object, and one that sends data to a websocket.
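(For context, here is a minimal sketch of what such a wrapper might look like, inferred from the attributes used below, a .length property and an awaitable .get(); the actual DataStream class isn't shown in the question:)

import asyncio

class DataStream:
    """Hypothetical sketch of the wrapper described above."""

    def __init__(self):
        self._queue = asyncio.Queue()

    @property
    def length(self):
        # Number of items currently queued.
        return self._queue.qsize()

    async def put(self, item):
        await self._queue.put(item)

    async def get(self):
        return await self._queue.get()

Here is the code that starts the two loops: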
def start(self):
    # Make sure that we set the event loop before we run our async requests.
    print("Starting WebsocketProducer on ", self.host, self.port)
    RUNTIME_LOGGER.info(
        "Starting WebsocketProducer on %s:%i", self.host, self.port)
    # Get the event loop and add a task to it.
    asyncio.set_event_loop(self.loop)
    asyncio.get_event_loop().create_task(self._mirror_stream())
    asyncio.ensure_future(self._serve(self.ssl_context))
And here is the method that is failing with the error 'Task was destroyed but it is pending!'. Keep in mind, if I do not include the lines with 'data_stream.get()', the function runs fine. I made sure the objects in both locations have the same memory address and the same id() value. If I print the data that comes from await self.data_stream.get(), I get the correct data; however, after that it seems to just return and break. Here is the code:
async def _mirror_stream(self):
    while True:
        stream_length = self.data_stream.length
        try:
            if stream_length > 1:
                for _ in range(0, stream_length):
                    data = await self.data_stream.get()
            else:
                data = await self.data_stream.get()
        except Exception as e:
            print(str(e))
        # If the data is null, keep the last known value
        if self._is_json_serializable(data) and data is not None:
            self.payload = json.dumps(data)
        else:
            RUNTIME_LOGGER.warning(
                "Mirroring stream encountered a Null payload in WebsocketProducer!")
        await asyncio.sleep(self.poll_rate)
The issue has been resolved by implementing my own async Queue wrapper around the normal queue.Queue object. For some reason the application would only work if I awaited queue.get(), even though it wasn't an asyncio.Queue object... I'm not entirely sure why this behavior occurred, but the application is running well and still performing as if the Queue were from the asyncio lib. Thanks to those who looked!
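The replacement class isn't shown, but as a hedged sketch, an awaitable get() can be layered over a plain, thread-safe queue.Queue roughly like this (all names here are illustrative, not from the original code):

import asyncio
import queue

class SyncBackedStream:
    """Illustrative async facade over a thread-safe queue.Queue."""

    def __init__(self):
        self._queue = queue.Queue()

    @property
    def length(self):
        return self._queue.qsize()

    def put(self, item):
        # Safe to call from any thread.
        self._queue.put(item)

    async def get(self, poll_rate=0.01):
        # Poll the synchronous queue without blocking the event loop.
        while True:
            try:
                return self._queue.get_nowait()
            except queue.Empty:
                await asyncio.sleep(poll_rate)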
I'm trying to understand if it's possible to set up a retry loop inside a Try/Except block, or if I'd need to restructure the code to use functions. Long story short, after spending a few hours learning Python and BeautifulSoup, I managed to frankenstein some code together to scrape a list of URLs, pull that data out to CSV (and now update it to a MySQL db). The code is now working as planned, except that I occasionally run into a 10054 error (connection reset by peer), either because my VPN hiccups, or possibly because the source host server is occasionally bouncing me (I have a 30 second delay in my loop but it still kicks me on occasion).
I get the general idea of Try/Except structure, but I'm not quite sure how I would (or if I could) loop inside it to try again. My base code to grab the URL, clean it and parse the table I need looks like this:
for url in contents:
    print('Processing record', (num+1), 'of', len(contents))
    if url:
        print('Retrieving data from ', url[0])
        html = requests.get(url[0]).text
        soup = BeautifulSoup(html, 'html.parser')
        for span in soup('span'):
            span.decompose()
        trs = soup.select('div#collapseOne tr')
        if trs:
            print('Processing')
            for t in trs:
                for header, value in zip(t.select('td')[0], t.select('td:nth-child(2)')):
                    if num == 0:
                        headers.append(' '.join(header.split()))
                    values.append(re.sub(' +', ' ', value.get_text(' ', strip=True)))
After that, it's just processing the data to CSV and running an update SQL statement.
What I'd like to do is: if the HTML request call fails, wait 30 seconds and try the request again, then process; or, if the retry fails X number of times, go ahead and exit the script (assuming at that point I have a full connection failure).
Is it possible to do something like that inline, or would I need to make the request statement into a function and set up a loop to call it? I have to admit I'm not familiar with how Python works with function returns yet.
You can add an inner loop for the retries and put your try/except block in that. Here is a sketch of what it would look like. You could put all of this into a function and put that function call in its own try/except block to catch other errors that cause the loop to exit.
Looking at the requests exception hierarchy, Timeout covers multiple recoverable exceptions and is a good start for everything you may want to catch. Other things like SSLError aren't going to get better just because you retry, so skip them. You can go through the list to see what is reasonable for you.
import itertools
import time

import requests

# requests exceptions at
# https://requests.readthedocs.io/en/master/_modules/requests/exceptions/
for url in contents:
    print('Processing record', (num+1), 'of', len(contents))
    if url:
        print('Retrieving data from ', url[0])
        retry_count = itertools.count()
        # loop for retries
        while True:
            try:
                # get with timeout and convert http errors to exceptions
                resp = requests.get(url[0], timeout=10)
                resp.raise_for_status()
            # the things you want to recover from
            except requests.Timeout as e:
                if next(retry_count) <= 5:
                    print("timeout, wait and retry:", e)
                    time.sleep(30)
                    continue
                else:
                    print("timeout, exiting")
                    raise  # reraise exception to exit
            except Exception as e:
                print("unrecoverable error", e)
                raise
            break
        html = resp.text
        etc…
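As suggested above, the whole retry loop can also be packaged into a function; here is a sketch under the same assumptions (fetch_with_retries is an illustrative name, not from the original):

import time

import requests

def fetch_with_retries(url, max_retries=5, wait=30, timeout=10):
    """Return a Response, retrying timeouts up to max_retries times."""
    for attempt in range(max_retries + 1):
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()  # convert http errors to exceptions
            return resp
        except requests.Timeout:
            if attempt == max_retries:
                print("timeout, exiting")
                raise
            print("timeout, wait and retry")
            time.sleep(wait)

Calling html = fetch_with_retries(url[0]).text then replaces the inline loop, and the call site can sit in its own try/except to catch the unrecoverable errors.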
I've put together a little example myself to illustrate this, and yes, you can put loops inside try/except blocks.
from sys import exit

def example_func():
    try:
        while True:
            num = input("> ")
            try:
                int(num)
                if num == "10":
                    print("Let's go!")
                else:
                    print("Not 10")
            except ValueError:
                exit(0)
    except:
        exit(0)

example_func()
This is a fairly simple program that takes input; if it's 10, it says "Let's go!", otherwise it tells you it's not 10 (and if the input isn't a valid number, it just kicks you out).
Notice that inside the while loop I put a try/except block, taking into account the necessary indentation. You can take this program as a model and use it in your favor.
I'm writing a Python program to interact with a device based on a CAN Bus. I'm using the python-can module successfully for this purpose. I'm also using asyncio to react to asynchronous events. I have written a "CanBusManager" class that is used by the "CanBusSequencer" class. The "CanBusManager" class takes care of generating/sending/receiving messages, and the CanBusSequencer drives the sequence of messages to be sent.
At some point in the sequence I want to wait until a specific message is received to "unlock" the remaining messages to be sent in the sequence. Overview in code:
main.py
import asyncio

async def main():
    event = asyncio.Event()
    sequencer = CanBusSequencer(event)
    task = asyncio.create_task(sequencer.doSequence())
    await task

asyncio.run(main(), debug=True)
canBusSequencer.py
from canBusManager import CanBusManager

class CanBusSequencer:

    def __init__(self, event):
        self.event = event
        self.canManager = CanBusManager(event)

    async def doSequence(self):
        for index, row in self.df_sequence.iterrows():
            if ...:
                self.canManager.sendMsg(...)
            else:
                self.canManager.sendMsg(...)
            await self.event.wait()
            self.event.clear()
canBusManager.py
import can

class CanBusManager():

    def __init__(self, event):
        self.event = event
        self.startListening()

    def startListening(self):
        self.msgNotifier = can.Notifier(self.canBus, self.receivedMsgCallback)

    def receivedMsgCallback(self, msg):
        if msg == ...:
            self.event.set()
For now my program hangs at the await self.event.wait(), even though the relevant message is received and self.event.set() is executed. Running the program with debug=True reveals a
RuntimeError: Non-thread-safe operation invoked on an event loop other than the current one
that I don't really get. It has to do with the asyncio event loop somehow not being properly defined/managed. I'm coming from the C++ world and I'm currently writing my first large program with Python. Any guidance would be really appreciated :)
Your question doesn't explain how you arrange for receivedMsgCallback to be invoked.
If it is invoked by a classic "async" API which uses threads behind the scenes, then it will be invoked from outside the thread that runs the event loop. According to the documentation, asyncio primitives are not thread-safe, so invoking event.set() from another thread doesn't properly synchronize with the running event loop, which is why your program doesn't wake up when it should.
If you want to do anything asyncio-related, such as invoke Event.set, from outside the event loop thread, you need to use call_soon_threadsafe or equivalent. For example:
def receivedMsgCallback(self, msg):
    if msg == ...:
        self.loop.call_soon_threadsafe(self.event.set)
The event loop object should be made available to the CanBusManager object, perhaps by passing it to its constructor and assigning it to self.loop.
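A hedged sketch of that wiring, reusing the classes and the placeholder condition from the question (only the loop handling is new; CanBusSequencer.__init__ would likewise accept the loop and forward it as CanBusManager(event, loop), and asyncio.get_running_loop() requires Python 3.7+):

import asyncio
import can

class CanBusManager():

    def __init__(self, event, loop):
        self.event = event
        self.loop = loop  # the loop that runs doSequence()
        self.startListening()

    def startListening(self):
        self.msgNotifier = can.Notifier(self.canBus, self.receivedMsgCallback)

    def receivedMsgCallback(self, msg):
        if msg == ...:
            # Runs in the Notifier's thread, so hand set() over to the
            # event loop thread instead of calling it directly.
            self.loop.call_soon_threadsafe(self.event.set)

async def main():
    event = asyncio.Event()
    sequencer = CanBusSequencer(event, asyncio.get_running_loop())
    await sequencer.doSequence()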
On a side note, if you are creating a task only to await it immediately, you don't need a task in the first place. In other words, you can replace task = asyncio.create_task(sequencer.doSequence()); await task with the simpler await sequencer.doSequence().
I recently finished a project using a mix of Django and Twisted and realized it's overkill for what I need which is basically just a way for my servers to communicate via TCP sockets. I turned to Trio and so far I'm liking what I see as it's way more direct (for what I need). That said though, I just wanted to be sure I was doing this the right way.
I followed the tutorial which taught the basics but I need a server that could handle multiple clients at once. To this end, I came up with the following code
import trio
from itertools import count

PORT = 12345
BUFSIZE = 16384
CONNECTION_COUNTER = count()

class ServerProtocol:

    def __init__(self, server_stream):
        self.ident = next(CONNECTION_COUNTER)
        self.stream = server_stream

    async def listen(self):
        while True:
            data = await self.stream.receive_some(BUFSIZE)
            if data:
                print('{} Received\t {}'.format(self.ident, data))
                # Process data here

class Server:

    def __init__(self):
        self.protocols = []

    async def receive_connection(self, server_stream):
        sp: ServerProtocol = ServerProtocol(server_stream)
        self.protocols.append(sp)
        await sp.listen()

async def main():
    await trio.serve_tcp(Server().receive_connection, PORT)

trio.run(main)
My issue here seems to be that each ServerProtocol runs listen on every cycle instead of waiting for data to be available to be received.
I get the feeling I'm using Trio wrong in which case, is there a Trio best practices that I'm missing?
Your overall structure looks fine to me. The issue that jumps out at me is:
while True:
    data = await self.stream.receive_some(BUFSIZE)
    if data:
        print('{} Received\t {}'.format(self.ident, data))
        # Process data here
The guarantee that receive_some makes is: if the other side has closed the connection already, then it immediately returns an empty byte-string. Otherwise, it waits until there is some data to return, and then returns it as a non-empty byte-string.
So your code should work fine... until the other end closes the connection. Then it starts doing an infinite loop, where it keeps checking for data, getting an empty byte-string back (data = b""), so the if data: ... block doesn't run, and it immediately loops around to do it again.
One way to fix this would be (last 3 lines are new):
while True:
    data = await self.stream.receive_some(BUFSIZE)
    if data:
        print('{} Received\t {}'.format(self.ident, data))
        # Process data here
    else:
        # Other side has gone away
        break
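To see the close path in action, here is a minimal Trio client sketch (assuming the server above is running locally on PORT 12345) that sends one message and disconnects:

import trio

async def client():
    stream = await trio.open_tcp_stream('127.0.0.1', 12345)
    await stream.send_all(b'hello')  # server prints '0 Received ...'
    await stream.aclose()            # server's listen() now breaks out

trio.run(client)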
I'm having difficulty understanding the behaviour of my altered echo server, which attempts to take advantage of Python 3's asyncio module.
Essentially I have an infinite loop (let's say I want to stream some data from the server to the client indefinitely whilst the connection is up), e.g. MyServer.py:
#! /usr/bin/python3
import asyncio
import os
import time

class MyProtocol(asyncio.Protocol):

    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def connection_lost(self, exc):
        asyncio.get_event_loop().stop()

    def data_received(self, data):
        i = 0
        while True:
            self.transport.write(b'>> %i' % i)
            time.sleep(2)
            i += 1

loop = asyncio.get_event_loop()
coro = loop.create_server(MyProtocol,
                          os.environ.get('MY_SERVICE_ADDRESS', 'localhost'),
                          os.environ.get('MY_SERVICE_PORT', 8100))
server = loop.run_until_complete(coro)
try:
    loop.run_forever()
except:
    loop.run_until_complete(server.wait_closed())
finally:
    loop.close()
Next when I connect with nc ::1 8100 and send some text (e.g. "testing") I get the following:
user#machine$ nc ::1 8100
*** Connection from('::1', 58503, 0, 0) ***
testing
>> 1
>> 2
>> 3
^C
Now when I attempt to connect using nc again, I do not get any welcome message and after I attempt to send some new text to the server I get an endless stream of the following error:
user#machine$ nc ::1 8100
Is there anybody out there?
socket.send() raised exception
socket.send() raised exception
...
^C
Just to add salt to the wound, the socket.send() raised exception message continues to spam my terminal until I kill the Python server process...
As I'm new to web technologies (been a desktop dinosaur for far too long!), I'm not sure why I am getting the above behaviour and I haven't got a clue on how to produce the intended behaviour, which loosely looks like this:
1. server starts
2. client 1 connects to server
3. server sends welcome message to client
4. client 1 sends an arbitrary message
5. server sends messages back to client 1 for as long as the client is connected
6. client 1 disconnects (let's say the cable is pulled out)
7. client 2 connects to server
8. repeat steps 3-6 for client 2
Any enlightenment would be extremely welcome!
There are multiple problems with the code.
First and foremost, data_received never returns. At the transport/protocol level, asyncio programming is single-threaded and callback-based. Application code is scattered across callbacks like data_received, and the event loop runs the show, monitoring file descriptors and invoking the callbacks as needed. Each callback is only allowed to perform a short calculation, invoke methods on transport, and arrange for further callbacks to be executed. What the callback cannot do is take a lot of time to complete or block waiting for something. A while loop that never exits is especially bad because it doesn't allow the event loop to run at all.
This is why the code only spits out exceptions once the client disconnects: connection_lost is never called. It's supposed to be called by the event loop, and the never-returning data_received is not giving the event loop a chance to resume. With the event loop blocked, the program is unable to respond to other clients, and data_received keeps trying to send data to the disconnected client, and logs its failure to do so.
The correct way to express the idea can look like this:
def data_received(self, data):
    self.i = 0
    loop.call_soon(self.write_to_client)

def write_to_client(self):
    self.transport.write(b'>> %i' % self.i)
    self.i += 1
    loop.call_later(2, self.write_to_client)
Note how both data_received and write_to_client do very little work and quickly return. No calls to time.sleep(), and definitely no infinite loops - the "loop" is hidden inside the kind-of-recursive call to write_to_client.
This change reveals the second problem in the code. Its MyProtocol.connection_lost stops the whole event loop and exits the program. This renders the program unable to respond to the second client. The fix could be to replace loop.stop() with setting a flag in connection_lost:
def data_received(self, data):
    self._done = False
    self.i = 0
    loop.call_soon(self.write_to_client)

def write_to_client(self):
    if self._done:
        return
    self.transport.write(b'>> %i' % self.i)
    self.i += 1
    loop.call_later(2, self.write_to_client)

def connection_lost(self, exc):
    self._done = True
This allows multiple clients to connect.
Unrelated to the above issues, the callback-based code is a bit tiresome to write, especially when taking into account complicated code paths and exception handling. (Imagine trying to express nested loops with callbacks, or propagating an exception occurring inside a deeply embedded callback.) asyncio supports coroutines-based streams as alternative to callback-based transports and protocols.
Coroutines allow writing natural-looking code that contains loops and looks like it contains blocking calls, which under the hood are converted into suspension points that enable the event loop to resume. Using streams the code from the question would look like this:
async def talk_to_client(reader, writer):
    peername = writer.get_extra_info('peername')
    print('Connection from {}'.format(peername))
    data = await reader.read(1024)
    i = 0
    while True:
        writer.write(b'>> %i' % i)
        await writer.drain()
        await asyncio.sleep(2)
        i += 1

loop = asyncio.get_event_loop()
coro = asyncio.start_server(talk_to_client,
                            os.environ.get('MY_SERVICE_ADDRESS', 'localhost'),
                            os.environ.get('MY_SERVICE_PORT', 8100))
server = loop.run_until_complete(coro)
loop.run_forever()
talk_to_client looks very much like the original implementation of data_received, but without the drawbacks. At each point where it uses await the event loop is resumed if the data is not available. time.sleep(n) is replaced with await asyncio.sleep(n) which does the equivalent of loop.call_later(n, <resume current coroutine>). Awaiting writer.drain() ensures that the coroutine pauses when the peer cannot process the output it gets, and that it raises an exception when the peer has disconnected.
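To see those suspension points in action, here is a minimal client sketch for the server above (hedged; it assumes the default localhost:8100 address):

import asyncio

async def client():
    reader, writer = await asyncio.open_connection('localhost', 8100)
    writer.write(b'hello')  # any message starts the server's counter
    await writer.drain()
    for _ in range(3):
        # Each read resumes when the server sends its next b'>> %i'.
        print(await reader.read(1024))
    writer.close()

asyncio.get_event_loop().run_until_complete(client())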
I am hoping someone can help me here.
I have an object that has the ability to have attributes that return coroutine objects. This works beautifully, however I have a situation where I need to get the results of the coroutine object from synchronous code in a separate thread, while the event loop is currently running. The code I came up with is:
def get_sync(self, key: str, default: typing.Any=None) -> typing.Any:
    """
    Get an attribute synchronously and safely.

    Note:
        This does nothing special if an attribute is synchronous. It only
        really has a use for asynchronous attributes. It processes
        asynchronous attributes synchronously, blocking everything until
        the attribute is processed. This helps when running SQL code that
        cannot run asynchronously in coroutines.

    Args:
        key (str): The Config object's attribute name, as a string.
        default (Any): The value to use if the Config object does not have
            the given attribute. Defaults to None.

    Returns:
        Any: The value of the Config object's attribute, or the default
            value if the Config object does not have the given attribute.
    """
    ret = self.get(key, default)
    if asyncio.iscoroutine(ret):
        if loop.is_running():
            loop2 = asyncio.new_event_loop()
            try:
                ret = loop2.run_until_complete(ret)
            finally:
                loop2.close()
        else:
            ret = loop.run_until_complete(ret)
    return ret
What I am looking for is a safe way to synchronously get the result of a coroutine object in a multithreaded environment. self.get() can return a coroutine object for attributes I have set up to provide one. The issue I have found is handling whether the event loop is running or not. After searching for a few hours on Stack Overflow and a few other sites, my (broken) solution is above. If the loop is running, I make a new event loop and run my coroutine in the new event loop. This works, except that the code hangs forever on the ret = loop2.run_until_complete(ret) line.
Right now, I have the following scenarios with results:
1. The result of self.get() is not a coroutine: returns results. [Good]
2. The result of self.get() is a coroutine and the event loop is not running (basically in the same thread as the event loop): returns results. [Good]
3. The result of self.get() is a coroutine and the event loop is running (basically in a different thread than the event loop): hangs forever waiting for results. [Bad]
Does anyone know how I can go about fixing the bad result so I can get the value I need? Thanks.
I hope I made some sense here.
I do have a good and valid reason to be using threads; specifically, I am using SQLAlchemy, which is not async, and I punt the SQLAlchemy code to a ThreadPoolExecutor to handle it safely. However, I need to be able to query these asynchronous attributes from within those threads for the SQLAlchemy code to get certain configuration values safely. And no, I won't switch away from SQLAlchemy to another system just in order to accomplish what I need, so please do not offer alternatives to it. The project is too far along to switch something so fundamental to it.
I tried using asyncio.run_coroutine_threadsafe() and loop.call_soon_threadsafe() and both failed. So far, this has gotten the farthest on making it work, I feel like I am just missing something obvious.
When I get a chance, I will write some code that provides an example of the problem.
Ok, I implemented an example case, and it worked the way I would expect, so the problem is likely elsewhere in my code. I'm leaving this open and will change the question to fit my real problem if needed.
Does anyone have any possible ideas as to why a concurrent.futures.Future from asyncio.run_coroutine_threadsafe() would hang forever rather than return a result?
My example code, which unfortunately does not duplicate my error, is below:
import asyncio
import typing

loop = asyncio.get_event_loop()

class ConfigSimpleAttr:
    __slots__ = ('value', '_is_async')

    def __init__(
        self,
        value: typing.Any,
        is_async: bool = False
    ):
        self.value = value
        self._is_async = is_async

    async def _get_async(self):
        return self.value

    def __get__(self, inst, cls):
        if self._is_async and loop.is_running():
            return self._get_async()
        else:
            return self.value

class BaseConfig:
    __slots__ = ()
    attr1 = ConfigSimpleAttr(10, True)
    attr2 = ConfigSimpleAttr(20, True)

    def get(self, key: str, default: typing.Any = None) -> typing.Any:
        return getattr(self, key, default)

    def get_sync(self, key: str, default: typing.Any = None) -> typing.Any:
        ret = self.get(key, default)
        if asyncio.iscoroutine(ret):
            if loop.is_running():
                fut = asyncio.run_coroutine_threadsafe(ret, loop)
                print(fut, fut.running())
                ret = fut.result()
            else:
                ret = loop.run_until_complete(ret)
        return ret

config = BaseConfig()

def example_func():
    return config.get_sync('attr1')

async def main():
    a1 = await loop.run_in_executor(None, example_func)
    a2 = await config.attr2
    val = a1 + a2
    print('{a1} + {a2} = {val}'.format(a1=a1, a2=a2, val=val))
    return val

loop.run_until_complete(main())
This is the stripped down version of exactly what my code is doing, and the example works, even if my actual application doesn't. I am stuck as far as where to look for answers. Suggestions are welcome as to where to try to track down my "stuck forever" problem, even if my code above doesn't actually duplicate the problem.
It is very unlikely that you need to run several event loops at the same time, so this part looks quite wrong:
if loop.is_running():
    loop2 = asyncio.new_event_loop()
    try:
        ret = loop2.run_until_complete(ret)
    finally:
        loop2.close()
else:
    ret = loop.run_until_complete(ret)
Even testing whether the loop is running or not doesn't seem to be the right approach. It's probably better to explicitly pass the (only) running loop to get_sync and schedule the coroutine using run_coroutine_threadsafe:
def get_sync(self, key, default, loop):
    ret = self.get(key, default)
    if not asyncio.iscoroutine(ret):
        return ret
    future = asyncio.run_coroutine_threadsafe(ret, loop)
    return future.result()
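For example, with the loop running in the main thread, a worker thread could use it like this (a sketch reusing the config/attr1 names from the question's example code):

import asyncio

loop = asyncio.get_event_loop()

async def main():
    # get_sync blocks the worker thread, not the event loop, while
    # run_coroutine_threadsafe runs the coroutine back on `loop`.
    value = await loop.run_in_executor(
        None, lambda: config.get_sync('attr1', None, loop))
    print(value)

loop.run_until_complete(main())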
EDIT: Hanging problems can be related to tasks being scheduled on the wrong loop (e.g. forgetting the optional loop argument when calling a coroutine). This kind of problem should be easier to debug with PR 303 (now merged): a RuntimeError is raised instead when the loop and the future don't match. So you might want to run your tests with the latest version of asyncio.
Ok, I got my code working by taking a different approach to it. The problem was tied to something doing file IO, which I was converting into a coroutine using loop.run_in_executor() on the file IO components. Then I was trying to use this in a sync function being called from another thread, processed using another loop.run_in_executor() on that function. This is a very important routine in my code (called probably a million times or more during execution), and I decided that my logic was just getting too complicated. So... I uncomplicated it. Now, if I want to use the file IO components asynchronously, I explicitly use my get_async() method; otherwise, I use my attribute through normal attribute access.
By removing the complexity from my logic, the code became cleaner and easier to understand, and, even more importantly, it actually works. While I am not 100% certain I know the root cause of the issue (I believe it had something to do with a thread processing an attribute, which in turn started another thread that tried to read the attribute before it was processed, causing something like a race condition that halted my code, though I could unfortunately never duplicate the error outside of my application to fully prove it), I was able to get past it and continue with my development efforts.
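A minimal sketch of the simplified shape described above (names are illustrative; the author's real code isn't shown):

import asyncio

class Config:

    def __init__(self):
        self._values = {}

    def get(self, key, default=None):
        # Plain synchronous access; no hidden event-loop hops.
        return self._values.get(key, default)

    async def get_async(self, key, default=None):
        # Explicit opt-in async path for values that involve file IO.
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, self.get, key, default)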