Wait on an ordinary function which calls an async function - python-3.x

For a project I want to maintain a sync and an async version of the same library simultaneously; the sync version holds most of the logic, and the async version must call into it in an async way. For example, I have a class that receives an HTTP requester in its constructor, and this requester handles sync or async internally:
.
├── async
│   └── foo.py
├── foo.py
├── main.py
└── requester.py
# requester.py
class Requester:
    def connect(self):
        return self._connect('some_address')

class AsyncRequester:
    async def connect(self):
        return await self._connect('some_address')
# foo.py
class Foo:
    def __init__(self, requester):
        self._requester = requester

    def connect(self):
        # in the async version this would have to be awaited internally
        self.connect_info = self._requester.connect('host')
# async/foo.py
from foo import Foo as RawFoo

class Foo(RawFoo):
    async def connect(self):
        return await super().connect()
# main.py
from async.foo import Foo              # from foo import Foo
from requester import AsyncRequester   # from requester import Requester

async def main():
    f = Foo(AsyncRequester())  # or Foo(Requester()) where we use the sync requester
    await f.connect()          # or f.connect() if we are using the sync methods
But the async connect ultimately calls the sync connect of the sync Foo (the parent class), which internally calls requester.connect. That cannot work: in async mode requester.connect is a coroutine that has to be awaited, yet here it is called without any await.
All of my tests are written for the sync version, because async tests are not as efficient as they should be; I also want to write tests for one version and be sure that both versions work correctly. How can I have both versions at the same time, sharing the same logic, with only the I/O calls separated?

the sync version has most logic parts and the async must call the sync version in a async way
It is possible, but it's a lot of work, as you're effectively fighting the function color mismatch. Your logic will have to be written in an async fashion, with some hacks to allow it to work in sync mode.
For example, a logic method would look like this:
# common code that doesn't assume it's either sync or async
class FooRaw:
    async def connect(self):
        self.connect_info = await self._to_async(self._requester.connect(ENDPOINT))

    async def hello_logic(self):
        await self._to_async(self.connect())
        self.sock.write('hello %s %s\n' % (USERNAME, PASSWORD))
        resp = await self._to_async(self.sock.readline())
        assert resp.startswith('OK')
        return resp
When running under asyncio, methods like connect and readline are coroutines, so their return values must be awaited. In blocking code, on the other hand, self.connect and sock.readline are sync functions that return concrete values. But await is a syntactic construct that is either present or absent; you cannot switch it off at run time without duplicating code.
To allow the same code to work in sync and async modes, FooRaw.hello_logic always awaits, leaving it to the _to_async method to wrap the result in an awaitable when running outside asyncio. In async classes _to_async awaits its argument and returns the result; it's basically a no-op. In sync classes it returns the received object without awaiting it, but it is still defined as async def, so it can be awaited. In that case FooRaw.hello_logic is still a coroutine, but one that never suspends (because the "coroutines" it awaits are all instances of _to_async, which doesn't suspend outside asyncio).
With that in place, the async implementation of hello_logic doesn't need to do anything except choose the right requester and provide the correct _to_async; its connect and hello_logic inherited from FooRaw do the right thing automatically:
class FooAsync(FooRaw):
    def __init__(self):
        self._requester = AsyncRequester()

    @staticmethod
    async def _to_async(x):
        # we're running async: await x and return the result
        result = await x
        return result
The sync version will, in addition to implementing _to_async, need to wrap the logic methods to "run" the coroutine:
class FooSync(FooRaw):
    def __init__(self):
        self._requester = SyncRequester()

    @staticmethod
    async def _to_async(x):
        # we're running sync: x already is the result we want
        return x

    # the following can be easily automated by a decorator
    def connect(self):
        return _run_sync(super().connect())

    def hello_logic(self):
        return _run_sync(super().hello_logic())
Note that it is possible to run the coroutine outside the event loop only because FooSync.hello_logic is a coroutine in name only; the underlying requester uses blocking calls, so FooRaw.connect and the others never really suspend, they complete their execution in a single run. (This is similar to a generator that does some work without ever yielding anything.) This property makes the _run_sync helper straightforward:
def _run_sync(coro):
    try:
        # start running the coroutine
        coro.send(None)
    except StopIteration as e:
        # the coroutine has finished; return the result
        # stored in the `StopIteration` exception
        return e.value
    else:
        raise AssertionError("coroutine suspended")
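Putting it together, usage would look roughly like this (a sketch; it assumes requester classes along the lines of the question's Requester/AsyncRequester):
import asyncio

# blocking variant: a plain call, _run_sync drives the coroutine to completion
foo = FooSync()
print(foo.hello_logic())

# async variant: a genuine coroutine that suspends on real I/O
async def main():
    foo = FooAsync()
    print(await foo.hello_logic())

asyncio.run(main())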

Related

Pytest+FastAPI+SQLAlchemy+Postgres InterfaceError

I've run into a problem running tests with FastAPI+SQLAlchemy and PostgreSQL, which leads to lots of errors (however, it works well on SQLite). I've created a repo with an MVP app and Pytest on Docker Compose for testing.
The basic error is sqlalchemy.exc.InterfaceError('cannot perform operation: another operation is in progress'). This may be related to the app/DB initialization, though I checked that all the operations are performed sequentially. I also tried using a single TestClient instance for all the tests, but got no better results. I hope to find a solution, a correct way for testing such apps 🙏
Here are the most important parts of the code:
app.py:
app = FastAPI()
some_items = dict()

@app.on_event("startup")
async def startup():
    await create_database()
    # Extract some data from env, local files, or S3
    some_items["pi"] = 3.1415926535
    some_items["eu"] = 2.7182818284

@app.post("/{name}")
async def create_obj(name: str, request: Request):
    data = await request.json()
    if data.get("code") in some_items:
        data["value"] = some_items[data["code"]]
        async with async_session() as session:
            async with session.begin():
                await create_object(session, name, data)
        return JSONResponse(status_code=200, content=data)
    else:
        return JSONResponse(status_code=404, content={})

@app.get("/{name}")
async def get_connected_register(name: str):
    async with async_session() as session:
        async with session.begin():
            objects = await get_objects(session, name)
    result = []
    for obj in objects:
        result.append({
            "id": obj.id, "name": obj.name, **obj.data,
        })
    return result
tests.py:
@pytest.fixture(scope="module")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()

@pytest_asyncio.fixture(scope="module")
@pytest.mark.asyncio
async def get_db():
    await delete_database()
    await create_database()

@pytest.mark.parametrize("test_case", test_cases_post)
def test_post(get_db, test_case):
    with TestClient(app) as client:
        response = client.post(f"/{test_case['name']}", json=test_case["data"])
        assert response.status_code == test_case["res"]

@pytest.mark.parametrize("test_case", test_cases_get)
def test_get(get_db, test_case):
    with TestClient(app) as client:
        response = client.get(f"/{test_case['name']}")
        assert len(response.json()) == test_case["count"]
db.py:
DATABASE_URL = environ.get("DATABASE_URL", "sqlite+aiosqlite:///./test.db")
engine = create_async_engine(DATABASE_URL, future=True, echo=True)
async_session = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
Base = declarative_base()

async def delete_database():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all)

async def create_database():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

class Model(Base):
    __tablename__ = "smth"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    data = Column(JSON, nullable=False)
    idx_main = Index("name", "id")

async def create_object(db: Session, name: str, data: dict):
    connection = Model(name=name, data=data)
    db.add(connection)
    await db.flush()

async def get_objects(db: Session, name: str):
    raw_q = select(Model) \
        .where(Model.name == name) \
        .order_by(Model.id)
    q = await db.execute(raw_q)
    return q.scalars().all()
At the moment the testing code is quite coupled, so the test suite seems to work as follows:
- the database is created once for all tests
- the first set of tests runs and populates the database
- the second set of tests runs (and will only succeed if the database is fully populated)
This has value as an end-to-end test, but I think it would work better if the whole thing were placed in a single test function.
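For instance, a sketch that reuses the question's fixtures and test data (test_cases_post, test_cases_get):
def test_end_to_end(get_db):
    with TestClient(app) as client:
        # populate the database, then read everything back in one test
        for test_case in test_cases_post:
            response = client.post(f"/{test_case['name']}", json=test_case["data"])
            assert response.status_code == test_case["res"]
        for test_case in test_cases_get:
            response = client.get(f"/{test_case['name']}")
            assert len(response.json()) == test_case["count"]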
As far as unit testing goes, it is a bit problematic. I'm not sure whether pytest-asyncio makes guarantees about test running order (there are pytest plugins that exist solely to make tests run in a deterministic order), and certainly the principle is that unit tests should be independent of each other.
The testing is coupled in another important way too - the database I/O code and the application logic are being tested simultaneously.
A practice that FastAPI encourages is to make use of dependency injection in your routes:
from typing import Callable

from fastapi import Depends, FastAPI, Request
...

def get_sessionmaker() -> Callable:
    # this is a bit baroque, but x = Depends(y) assigns x = y()
    # so that's why it's here
    return async_session

@app.post("/{name}")
async def create_obj(name: str, request: Request, get_session = Depends(get_sessionmaker)):
    data = await request.json()
    if data.get("code") in some_items:
        data["value"] = some_items[data["code"]]
        async with get_session() as session:
            async with session.begin():
                await create_object(session, name, data)
        return JSONResponse(status_code=200, content=data)
    else:
        return JSONResponse(status_code=404, content={})
When it comes to testing, FastAPI then allows you to swap out your real dependencies so that you can e.g. mock the database and test the application logic in isolation from database I/O:
from app import app, get_sessionmaker
from mocks import mock_sessionmaker
...
client = TestClient(app)
...

async def override_sessionmaker():
    return mock_sessionmaker

app.dependency_overrides[get_sessionmaker] = override_sessionmaker

# now we can run some tests
This means that when you run your tests, whatever you put in mocks.mock_sessionmaker will be used as get_session in your routes, rather than the real sessionmaker. We could have our mock_sessionmaker return a function called get_mock_session.
In other words, rather than with async_session() as session:, in the tests we'd have with get_mock_session() as session:.
Unfortunately this get_mock_session has to return something a little complicated (let's call it mock_session), because the application code then does an async with session.begin().
I'd be tempted to refactor the application code for simplicity, but failing that, the mock must not throw errors when you call .begin, .add, and .flush on it (in this example); .begin has to return an async context manager and .flush has to be awaitable. But they don't have to do anything, so it's not too bad...
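A minimal sketch of such a mock (names are illustrative; mock_sessionmaker here plays the role of the get_mock_session function described above):
from contextlib import asynccontextmanager

class MockSession:
    def __init__(self):
        self.added = []  # lets tests assert on what the app tried to persist

    @asynccontextmanager
    async def begin(self):
        # no-op "transaction"; satisfies `async with session.begin():`
        yield self

    def add(self, obj):
        self.added.append(obj)

    async def flush(self):
        pass  # awaited by create_object, does nothing

@asynccontextmanager
async def mock_sessionmaker():
    # satisfies `async with get_session() as session:`
    yield MockSession()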
The FastAPI docs have an alternative example of databases + dependencies that does leave the code a little coupled, but uses SQLite strictly for the purpose of unit tests, leaving you free to do something different for an end-to-end test and in the application itself.
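As a rough sketch of that style, assuming the get_sessionmaker dependency from above, the tests could point the app at a throwaway SQLite database instead of mocking the session (the unit_test.db filename is arbitrary):
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker

from app import app, get_sessionmaker

test_engine = create_async_engine("sqlite+aiosqlite:///./unit_test.db", future=True)
test_session = sessionmaker(test_engine, expire_on_commit=False, class_=AsyncSession)

def override_sessionmaker():
    return test_session

app.dependency_overrides[get_sessionmaker] = override_sessionmaker
# remember to create the schema on test_engine first,
# e.g. await conn.run_sync(Base.metadata.create_all) as in db.py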

Using asyncio for doing a/b testing in Python

Let's say there's some API running in production already, and you created another API which you kinda want to A/B test using the incoming requests that hit the production API. Now I was wondering: is it possible to do something like this? (I am aware of people doing traffic splits by keeping two different API versions for A/B testing, etc.)
As soon as you get the incoming request for your production API, you make an async request to your new API, carry on with the rest of the production API's code, and then, just before returning the final response to the caller, you check whether the result of that async task you created earlier is available. If it is, you return that instead of the current API's response.
I am wondering what's the best way to do something like this. Do we write a decorator for it, or something else? I am a bit worried about the many edge cases that can happen if we use async here. Does anyone have any pointers on making the code or the whole approach better?
Thanks for your time!
Some pseudo-code for the approach above,
import asyncio

def call_old_api():
    pass

async def call_new_api():
    pass

async def main():
    task = asyncio.Task(call_new_api())
    oldResp = call_old_api()
    resp = await task
    if task.done():
        return resp
    else:
        task.cancel()  # maybe
        return oldResp

asyncio.run(main())
You can't just execute call_old_api() inside an asyncio coroutine. There's a detailed explanation why here. Please ensure you understand it, because depending on how your server works you may not be able to do what you want (for example, running an async API on a sync server defeats the point of writing async code).
If you understand what you're doing and you have an async server, you can call the old sync API in a thread and use a task to run the new API:
task = asyncio.Task(call_new_api())
oldResp = await in_thread(call_old_api)  # pass the function, not its result
if task.done():
    # task.result() may raise an exception if the new API request failed,
    # but that's probably ok for you
    return task.result()
else:
    # cancelling needs some care, see https://stackoverflow.com/a/43810272/1113207
    task.cancel()
    return oldResp
I think you can go even further: instead of always waiting for the old API to complete, you can run both APIs concurrently and return whichever is done first (in case the new API works faster than the old one). With all the checks and suggestions above, it would look something like this:
import asyncio
import random
import time
from contextlib import suppress

def call_old_api():
    time.sleep(random.randint(0, 2))
    return "OLD"

async def call_new_api():
    await asyncio.sleep(random.randint(0, 2))
    return "NEW"

async def in_thread(func):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, func)

async def ensure_cancelled(task):
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task

async def main():
    old_api_task = asyncio.Task(in_thread(call_old_api))
    new_api_task = asyncio.Task(call_new_api())
    done, pending = await asyncio.wait(
        [old_api_task, new_api_task], return_when=asyncio.FIRST_COMPLETED
    )
    if pending:
        for task in pending:
            await ensure_cancelled(task)
    finished_task = done.pop()
    res = finished_task.result()
    print(res)

asyncio.run(main())

Running periodically background function in class initializer

I am trying to interface with an API that requires an auth token, using asyncio.
I have a method inside the ApiClient class for obtaining this token.
class ApiClient:
    def __init__(self):
        self._auth_token = None
        # how to invoke _keep_auth_token_alive in the background?

    async def _keep_auth_token_alive(self):
        while True:
            await self._set_auth_token()
            await asyncio.sleep(3600)
The problem is that I need to re-invoke this function every hour in order to maintain a valid token, because the token expires every hour (without this I get a 401 after one hour).
How can I have this method invoked every hour in the background, starting when ApiClient is initialized?
(The _set_auth_token method makes an HTTP request and then sets self._auth_token = res.id_token.)
To use an asyncio library, your program needs to run inside the asyncio event loop. Assuming that's already the case, you need to use asyncio.create_task:
class ApiClient:
    def __init__(self):
        self._auth_token = None
        asyncio.create_task(self._keep_auth_token_alive())
Note that the auth token will not be available upon a call to ApiClient(); it will get filled no earlier than the first await, after the background task has had a chance to run. To fix that, you can make _set_auth_token public and await it explicitly:
client = ApiClient()
await client.set_auth_token()
To make the usage more ergonomic, you can implement an async context manager. For example:
class ApiClient:
    def __init__(self):
        self._auth_token = None

    async def __aenter__(self):
        await self._set_auth_token()
        self._keepalive_task = asyncio.create_task(self._keep_auth_token_alive())
        return self

    async def __aexit__(self, *_):
        self._keepalive_task.cancel()

    async def _keep_auth_token_alive(self):
        # modified to sleep first and then re-acquire the token
        while True:
            await asyncio.sleep(3600)
            await self._set_auth_token()
Then you use ApiClient inside an async with:
async with ApiClient() as client:
    ...
This has the advantage of reliably canceling the task once the client is no longer needed. Since Python is garbage-collected and doesn't support deterministic destructors, this would not be possible if the task were created in the constructor.
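If you additionally want __aexit__ to wait until the cancellation has actually been processed, a common refinement (a sketch) is to await the cancelled task and suppress the resulting CancelledError:
from contextlib import suppress

class ApiClient:
    ...
    async def __aexit__(self, *_):
        self._keepalive_task.cancel()
        with suppress(asyncio.CancelledError):
            # wait for the task to actually finish unwinding
            await self._keepalive_task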

Python 3.8 mock for coroutines

I'm trying to write a unit test that checks whether a class method was awaited.
class Application:
    async def func1(self):
        await self.func2(self.func3())

    async def func2(self, val):
        pass

    async def func3(self):
        pass
And unittest for it:
@pytest.mark.asyncio
async def test_method():
    app = Application()
    with patch.object(Application, 'func2') as mock:
        await app.func1()
    mock.assert_awaited_with(app.func3())
But I get an error:
AssertionError: expected await not found.
Expected: func2(<coroutine object Application.func3 at 0x7f1ecf8557c0>)
Actual: func2(<coroutine object Application.func3 at 0x7f1ecf855540>)
Why? I called the same method. What can I do about it?
There is an asynctest library that makes it easier to mock (patch) async functions.
Patching
Patching is a mechanism allowing you to temporarily replace a symbol
(class, object, function, attribute, …) with a mock, in-place. It is
especially useful when one needs a mock but can't pass it as a
parameter of the function to be tested.
For instance, if cache_users() didn't accept the client argument, but
instead created a new client, it would not be possible to replace it
with a mock like in all the previous examples.
When an object is hard to mock, it sometimes shows a limitation in the
design: a coupling that is too tight, the use of a global variable (or
a singleton), etc. However, it's not always possible or desirable to
change the code to accommodate the tests. A common situation where
tight coupling is almost invisible is when performing logging or
monitoring. In this case, patching will help.
A patch() can be used as a context manager. It will replace the target
(logging.debug()) with a mock during the lifetime of the with block.
async def test_with_context_manager(self):
    client = asynctest.Mock(AsyncClient())
    cache = {}

    with asynctest.patch("logging.debug") as debug_mock:
        await cache_users_async(client, cache)

    debug_mock.assert_called()
Notice that the path to the function is given as a string to asynctest.patch, not as an object.
In addition, in func1, func2 receives func3() as a coroutine object and never awaits it (there is no await on func3).
You could try something like this:
import asynctest

@pytest.mark.asyncio
async def test_method():
    app = Application()
    # note: the patch target should be a full importable path,
    # e.g. 'your_module.Application.func2'
    with asynctest.patch('Application.func2') as mock:
        await app.func1()
    mock.assert_awaited_with(app.func3())
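On Python 3.8+ you can also stay within the standard library: unittest.mock's patch creates an AsyncMock for async def targets automatically. A sketch that sidesteps comparing two distinct coroutine objects (which are never equal) by asserting only that func2 was awaited:
import pytest
from unittest.mock import patch

@pytest.mark.asyncio
async def test_method():
    app = Application()
    with patch.object(Application, 'func2') as mock:  # an AsyncMock on Python 3.8+
        await app.func1()
    mock.assert_awaited()  # avoids the coroutine-object comparison problem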

Failure on unit tests with pytest, tornado and aiopg, any query fails

I have a REST API running on Python 3.7 + Tornado 5, with PostgreSQL as the database, using aiopg with SQLAlchemy Core (via the aiopg.sa binding). For the unit tests, I use py.test with pytest-tornado.
All tests go OK as long as no database query is involved; as soon as one is, I get this:
RuntimeError: Task <Task pending coro=<...> cb=[IOLoop.add_future.<locals>.<lambda>() at venv/lib/python3.7/site-packages/tornado/ioloop.py:719]> got Future <Future pending> attached to a different loop
The same code works fine outside the tests; I'm able to handle hundreds of requests so far.
This is part of an @auth decorator which checks the Authorization header for a JWT token, decodes it, gets the user's data and attaches it to the request; this is the part that performs the query:
partner_id = payload['partner_id']
provided_scopes = payload.get("scope", [])
for scope in scopes:
    if scope not in provided_scopes:
        logger.error(
            'Authentication failed, scopes are not compliant - '
            'required: {} - '
            'provided: {}'.format(scopes, provided_scopes)
        )
        raise ForbiddenException(
            "insufficient permissions or wrong user."
        )

db = self.settings['db']
partner = await Partner.get(db, username=partner_id)

# The user is authenticated at this stage, let's add
# the user info to the request so it can be used
if not partner:
    raise UnauthorizedException('Unknown user from token')

p = Partner(**partner)
setattr(self.request, "partner_id", p.uuid)
setattr(self.request, "partner", p)
The .get() async method from Partner comes from the Base class for all models in the app. This is the .get method implementation:
@classmethod
async def get(cls, db, order=None, limit=None, offset=None, **kwargs):
    """
    Get one instance that will match the criteria
    :param db:
    :param order:
    :param limit:
    :param offset:
    :param kwargs:
    :return:
    """
    if len(kwargs) == 0:
        return None
    if not hasattr(cls, '__tablename__'):
        raise InvalidModelException()
    tbl = cls.__table__
    instance = None
    clause = cls.get_clause(**kwargs)
    query = (tbl.select().where(text(clause)))
    if order:
        query = query.order_by(text(order))
    if limit:
        query = query.limit(limit)
    if offset:
        query = query.offset(offset)
    logger.info(f'GET query executing:\n{query}')
    try:
        async with db.acquire() as conn:
            async with conn.execute(query) as rows:
                instance = await rows.first()
    except DataError as de:
        [...]
    return instance
The .get() method above will either return a model instance (row representation) or None.
It uses the db.acquire() context manager, as described in aiopg's doc here: https://aiopg.readthedocs.io/en/stable/sa.html.
As described in that same doc, the sa.create_engine() method returns a connection pool, so db.acquire() just takes one connection from the pool. I share this pool with every request in Tornado; they use it to perform queries when they need to.
So this is the fixture I've set up in my conftest.py:
@pytest.fixture
async def db():
    dbe = await setup_db()
    return dbe

@pytest.fixture
def app(db, event_loop):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    return app
I can't find an explanation of why this is happening; Tornado's docs and source make it clear that the asyncio event loop is used by default, and by debugging I can see that the event loop is indeed the same one, but for some reason it seems to get closed or stopped abruptly.
This is one test that fails:
@pytest.mark.gen_test(timeout=2)
def test_score_returns_204_empty(app, http_server, http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = yield http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
This test fails by returning 401 instead of 204: the query in the auth decorator fails with the RuntimeError above, which then produces an Unauthorized response.
Any idea from the async experts here will be very appreciated, I'm quite lost on this!!!
Well, after a lot of digging, testing and, of course, learning quite a lot about asyncio, I made it work myself. Thanks for the suggestions so far.
The issue was that the asyncio event_loop was not running; as @hoefling mentioned, pytest itself does not support coroutines, but pytest-asyncio brings such a useful feature to your tests. This is very well explained here: https://medium.com/ideas-at-igenius/testing-asyncio-python-code-with-pytest-a2f3628f82bc
So, without pytest-asyncio, your async code that needs to be tested will look like this:
def test_this_is_an_async_test():
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(my_async_function(param1, param2, param3))
    assert result == 'expected'
We use loop.run_until_complete() because, otherwise, the loop will never run: that is how asyncio works by default (and pytest does nothing to make it work differently).
With pytest-asyncio, your test works with the well-known async / await parts:
@pytest.mark.asyncio
async def test_this_is_an_async_test(event_loop):
    result = await my_async_function(param1, param2, param3)
    assert result == 'expected'
pytest-asyncio in this case wraps the run_until_complete() call above (summarizing it heavily), so the event loop will run and be available for your async code to use.
Please note: the event_loop parameter in the second case is not even necessary; pytest-asyncio makes one available for your test.
On the other hand, when you are testing your Tornado app, you usually need an HTTP server up and running during your tests, listening on a known port, etc., so the usual way goes by writing fixtures to get an HTTP server, a base_url (usually http://localhost: with an unused port), and so on.
pytest-tornado comes up as a very useful one, as it offers several of these fixtures for you: http_server, http_client, unused_port, base_url, etc.
Also worth mentioning, it provides a pytest marker, gen_test(), which converts any standard test to use coroutines via yield, and even to assert that it will run within a given timeout, like this:
@pytest.mark.gen_test(timeout=3)
def test_fetch_my_data(http_client, base_url):
    result = yield http_client.fetch('/'.join([base_url, 'result']))
    assert len(result) == 1000
But this way it does not support async / await, and actually only Tornado's IOLoop will be available via the io_loop fixture (although Tornado's IOLoop uses asyncio underneath from Tornado 5.0 on), so you'd need to combine both pytest.mark.gen_test and pytest.mark.asyncio, in the right order (which I initially got wrong!).
Once I better understood what the problem could be, this was the next approach:
@pytest.mark.gen_test(timeout=2)
@pytest.mark.asyncio
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
But this is utterly wrong, if you understand how Python's decorator wrappers work. With the code above, pytest-asyncio's coroutine gets wrapped in pytest-tornado's yield-based gen.coroutine, which doesn't get the event loop running... so my tests were still failing with the same problem. Any query to the database returned a Future waiting for an event loop to be running.
My updated code, once I realized my silly mistake:
@pytest.mark.asyncio
@pytest.mark.gen_test(timeout=2)
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
In this case, the gen.coroutine is wrapped inside the pytest-asyncio coroutine, and the event_loop runs the coroutines as expected!
But there was still a minor issue that took me a little while to realize, too: pytest-asyncio's event_loop fixture creates a new event loop for every test, while pytest-tornado also creates a new IOLoop. The tests were still failing, but this time with a different error.
The conftest.py file now looks like this; please note I've re-declared the event_loop fixture to use the event loop from pytest-tornado's io_loop fixture itself (recall that pytest-tornado creates a new io_loop for each test function):
@pytest.fixture(scope='function')
def event_loop(io_loop):
    loop = io_loop.current().asyncio_loop
    yield loop
    loop.stop()

@pytest.fixture(scope='function')
async def db():
    dbe = await setup_db()
    yield dbe

@pytest.fixture
def app(db):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    yield app
Now all my tests work, I'm back a happy man and very proud of my now better understanding of the asyncio way of life. Cool!
