Python 3.8 mock for coroutines - python-3.x

I am trying to write a unit test that checks whether a class method was awaited.
class Application:
    async def func1(self):
        await self.func2(self.func3())

    async def func2(self, val):
        pass

    async def func3(self):
        pass
And unittest for it:
@pytest.mark.asyncio
async def test_method():
    app = Application()
    with patch.object(Application, 'func2') as mock:
        await app.func1()
        mock.assert_awaited_with(app.func3())
But I get error:
AssertionError: expected await not found.
Expected: func2(<coroutine object Application.func3 at 0x7f1ecf8557c0>)
Actual: func2(<coroutine object Application.func3 at 0x7f1ecf855540>)
Why? I called the same method. What can I do with that?

There is the asynctest library, which makes it easier to mock (patch) async functions.
Patching
Patching is a mechanism allowing one to temporarily replace a symbol
(class, object, function, attribute, …) with a mock, in place. It is
especially useful when one needs a mock but can't pass it as a
parameter of the function to be tested.
For instance, if cache_users() didn’t accept the client argument, but
instead created a new client, it would not be possible to replace it
by a mock like in all the previous examples.
When an object is hard to mock, it sometimes shows a limitation in the
design: coupling that is too tight, the use of a global variable (or
a singleton), etc. However, it's not always possible or desirable to
change the code to accommodate the tests. A common situation where
tight coupling is almost invisible is when performing logging or
monitoring. In this case, patching will help.
A patch() can be used as a context manager. It will replace the target
(logging.debug()) with a mock during the lifetime of the with block.
async def test_with_context_manager(self):
    client = asynctest.Mock(AsyncClient())
    cache = {}
    with asynctest.patch("logging.debug") as debug_mock:
        await cache_users_async(client, cache)
    debug_mock.assert_called()
Notice that the path to the function is given as a string to asynctest.patch, not as an object.
In addition, in func1, func2 receives func3() as a coroutine object and does not wait for it to run (there is no await on it).
You could try something like this:
import asynctest

@pytest.mark.asyncio
async def test_method():
    app = Application()
    # the patch target should include the module path,
    # e.g. 'mymodule.Application.func2'
    with asynctest.patch('Application.func2') as mock:
        await app.func1()
        mock.assert_awaited_with(app.func3())
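On Python 3.8+, unittest.mock itself ships AsyncMock, so asynctest is not strictly required. A minimal sketch (repeating the Application class from the question so the snippet is self-contained) that sidesteps the coroutine-identity problem by asserting that the await happened, rather than comparing coroutine objects:

```python
import asyncio
from unittest.mock import patch

class Application:
    async def func1(self):
        await self.func2(self.func3())
    async def func2(self, val):
        pass
    async def func3(self):
        pass

async def test_method():
    app = Application()
    # patching an async def automatically produces an AsyncMock on 3.8+
    with patch.object(Application, 'func2') as mock:
        await app.func1()
        # two calls to func3() create two distinct coroutine objects, so
        # assert_awaited_with(app.func3()) can never match; assert the
        # await itself instead
        mock.assert_awaited_once()

asyncio.run(test_method())
```

This is why the original assertion failed: each call to app.func3() builds a fresh coroutine object, and coroutine objects compare by identity.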

Related

Using asyncio for doing a/b testing in Python

Let's say there's some API that's running in production already, and you created another API which you want to A/B test using the incoming requests that are hitting the production API. I was wondering, is it possible to do something like this? (I am aware of people doing traffic splits by keeping two different API versions for A/B testing, etc.)
As soon as you get the incoming request for your production API, you make an async request to your new API, then carry on with the rest of the code for the production API, and just before returning the final response to the caller, you check whether the result of that async task is available. If it is, you return that instead of the current API's response.
I am wondering, what's the best way to do something like this? Do we write a decorator for this, or something else? I am a bit worried about the many edge cases that can happen if we use async here. Does anyone have any pointers on making the code or the whole approach better?
Thanks for your time!
Some pseudo-code for the approach above,
import asyncio

def call_old_api():
    pass

async def call_new_api():
    pass

async def main():
    task = asyncio.Task(call_new_api())
    oldResp = call_old_api()
    resp = await task
    if task.done():
        return resp
    else:
        task.cancel()  # maybe
        return oldResp

asyncio.run(main())
You can't just execute call_old_api() inside asyncio's coroutine. There's detailed explanation why here. Please, ensure you understand it, because depending on how your server works you may not be able to do what you want (to run async API on a sync server preserving the point of writing an async code, for example).
In case you understand what you're doing, and you have an async server, you can call the old sync API in thread and use a task to run the new API:
task = asyncio.Task(call_new_api())
oldResp = await in_thread(call_old_api)
if task.done():
    return task.result()  # keep in mind task.result() may raise if the new API request failed, but that's probably OK for you
else:
    task.cancel()  # yes, but you should take care of the cancelling, see https://stackoverflow.com/a/43810272/1113207
    return oldResp
I think you can go even further and instead of always waiting for the old API to be completed, you can run both APIs concurrently and return the first that's done (in case new api works faster than the old one). With all checks and suggestions above, it should look something like this:
import asyncio
import random
import time
from contextlib import suppress

def call_old_api():
    time.sleep(random.randint(0, 2))
    return "OLD"

async def call_new_api():
    await asyncio.sleep(random.randint(0, 2))
    return "NEW"

async def in_thread(func):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, func)

async def ensure_cancelled(task):
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task

async def main():
    old_api_task = asyncio.Task(in_thread(call_old_api))
    new_api_task = asyncio.Task(call_new_api())
    done, pending = await asyncio.wait(
        [old_api_task, new_api_task], return_when=asyncio.FIRST_COMPLETED
    )
    if pending:
        for task in pending:
            await ensure_cancelled(task)
    finished_task = done.pop()
    res = finished_task.result()
    print(res)

asyncio.run(main())

Running periodically background function in class initializer

I am trying to interface some Api which requires an auth token using asyncio.
I have a method inside the ApiClient class for obtaining this token.
class ApiClient:
    def __init__(self):
        self._auth_token = None
        # how to invoke _keep_auth_token_alive in the background?

    async def _keep_auth_token_alive(self):
        while True:
            await self._set_auth_token()
            await asyncio.sleep(3600)
The problem is, that every hour I need to recall this function in order to maintain a valid token because it refreshes every hour (without this I will get 401 after one hour).
How can I make this method run every hour in the background, starting when ApiClient is initialized?
(The _set_auth_token method makes an HTTP request and then self._auth_token = res.id_token)
To use an asyncio library, your program needs to run inside the asyncio event loop. Assuming that's already the case, you need to use asyncio.create_task:
class ApiClient:
    def __init__(self):
        self._auth_token = None
        asyncio.create_task(self._keep_auth_token_alive())
Note that the auth token will not be available upon a call to ApiClient(); it will get filled no earlier than the first await, after the background task has had a chance to run. To fix that, you can make _set_auth_token public and await it explicitly:
client = ApiClient()
await client.set_auth_token()
To make the usage more ergonomic, you can implement an async context manager. For example:
class ApiClient:
    def __init__(self):
        self._auth_token = None

    async def __aenter__(self):
        await self._set_auth_token()
        self._keepalive_task = asyncio.create_task(self._keep_auth_token_alive())
        return self

    async def __aexit__(self, *_):
        self._keepalive_task.cancel()

    async def _keep_auth_token_alive(self):
        # modified to sleep first and then re-acquire the token
        while True:
            await asyncio.sleep(3600)
            await self._set_auth_token()
Then you use ApiClient inside an async with:
async with ApiClient() as client:
...
This has the advantage of reliably canceling the task once the client is no longer needed. Since Python is garbage-collected and doesn't support deterministic destructors, this would not be possible if the task were created in the constructor.
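A runnable sketch of the pattern, with _set_auth_token stubbed out (the real one would make an HTTP request) so the whole lifecycle can be seen end to end:

```python
import asyncio

class ApiClient:
    def __init__(self):
        self._auth_token = None

    async def _set_auth_token(self):
        # stand-in for the real HTTP request that fetches the token
        self._auth_token = "token"

    async def __aenter__(self):
        await self._set_auth_token()
        self._keepalive_task = asyncio.create_task(self._keep_auth_token_alive())
        return self

    async def __aexit__(self, *_):
        # reliably stop the refresh loop when the client is no longer needed
        self._keepalive_task.cancel()

    async def _keep_auth_token_alive(self):
        while True:
            await asyncio.sleep(3600)
            await self._set_auth_token()

async def main():
    async with ApiClient() as client:
        # the token is guaranteed to be set inside the block
        print(client._auth_token)

asyncio.run(main())
```

Note that __aenter__ returns self, so `async with ApiClient() as client` binds the client instance.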

call from method in class to @class.method using python

I am trying to make a class that will eventually be turned into a library. To do this, I am trying to do something like what discord.py does; the idea comes from it.
The code that discord.py uses is:
@bot.event
async def on_ready():
    print('discord bot is ready')
Where '@bot' is just an object that I created before by doing
bot = discord()
and '.event' is a pre-programmed, ready-to-use method.
on_ready() is a function that then gets called automatically.
I want to have a way to create this from my own class, and from there mannage the entire code using async functions.
How to do it in my own code?
You need to implement a class whose public methods are decorators. For example, this class implements a scheduler that exposes scheduling through a decorator:
import asyncio

class Scheduler:
    def __init__(self):
        self._torun = []

    def every_second(self, fn):
        self._torun.append(fn)
        return fn  # return fn so the decorated name stays bound

    async def _main(self):
        while True:
            for fn in self._torun:
                asyncio.create_task(fn())
            await asyncio.sleep(1)

    def run(self):
        asyncio.run(self._main())
You'd use it like this:
sched = Scheduler()

@sched.every_second
async def hello():
    print('hello')

@sched.every_second
async def world():
    print('world')

sched.run()
The class mimics discord.py in that it has a run method that calls asyncio.run for you. I would prefer to expose an async main instead, so the user can call asyncio.run(bot.main()) and have control over which event loop is used. But the example follows discord.py's convention, to make the API more familiar to its users.
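That alternative can be sketched like this, reusing the Scheduler above but with a hypothetical public `main` coroutine in place of `run` (the short sleep and cancel exist only so the demo terminates):

```python
import asyncio

class Scheduler:
    def __init__(self):
        self._torun = []

    def every_second(self, fn):
        self._torun.append(fn)
        return fn

    async def main(self):
        # the caller decides how, and on which loop, this runs
        while True:
            for fn in self._torun:
                asyncio.create_task(fn())
            await asyncio.sleep(1)

sched = Scheduler()
results = []

@sched.every_second
async def hello():
    results.append("hello")

async def demo():
    task = asyncio.create_task(sched.main())
    await asyncio.sleep(0.1)  # let the first tick fire
    task.cancel()

asyncio.run(demo())
```

Here the library user owns the event loop: they could just as easily await sched.main() inside a larger application instead of handing control to a run() method.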

Make method of derived class async

I have to create and use a class derived from an upstream package (not modifiable).
I want/need to add/modify a method in the derived class that should be async, because I need to await a websocket send/recv in the method.
I tried just adding async to the method, but then I get the message (from the base class method) RuntimeWarning: coroutine MyCopyProgressHandler.end was never awaited.
Is there a way to "convert" a derived class method to async?
When you need to convert a sync method to async you have several different options; running it in a thread pool (run_in_executor) is probably the easiest.
For example, this is how you can make sync function requests.get to run asynchronously:
import asyncio
import requests
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(10)

async def get(url):
    loop = asyncio.get_running_loop()
    response = await loop.run_in_executor(
        executor,
        requests.get,
        url
    )
    return response.text

async def main():
    res = await get('http://httpbin.org/get')
    print(res)

asyncio.run(main())
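On Python 3.9+, the same pattern is available as asyncio.to_thread, which runs the call in the default executor for you. A minimal sketch with a local blocking function standing in for requests.get, so the snippet needs no third-party packages:

```python
import asyncio
import time

def slow_sync(x):
    # stand-in for a blocking call such as requests.get
    time.sleep(0.05)
    return x * 2

async def main():
    # runs slow_sync in a worker thread without blocking the event loop
    result = await asyncio.to_thread(slow_sync, 21)
    print(result)  # 42

asyncio.run(main())
```

Either way, the blocking call itself still runs synchronously in a thread; this only keeps it from stalling the event loop.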

Wait on an ordinary function which calls an async function

For a project, I want to have a sync and an async version of a library simultaneously. The sync version has most of the logic, and the async version must call the sync version in an async way. For example, I have a class which gets an HTTP requester in its constructor, and the requester handles sync or async internally:
.
├── async
│   └── foo.py
├── foo.py
├── main.py
└── requester.py
# requester.py
class Requester:
    def connect(self):
        return self._connect('some_address')

class AsynRequester:
    async def connect(self):
        return await self._connect('some_address')
# foo.py
class Foo:
    def __init__(self, requester):
        self._requester = requester

    def connect(self):
        # in the async version this would be awaited internally
        self.connect_info = self._requester.connect('host')
# async/foo.py
from foo import Foo as RawFoo

class Foo(RawFoo):
    async def connect(self):
        return await super().connect()
# main.py
# note: 'async' is a reserved keyword in Python 3.7+, so the package
# would need a different name for this import to actually work
from async.foo import Foo  # from foo import Foo
from requester import AsynRequester  # from requester import Requester

async def main():
    f = Foo(AsynRequester())  # or Foo(Requester()) where we use the sync requester
    await f.connect()  # or f.connect() if we are using sync methods
But the async connect ultimately calls the sync connect of the sync Foo class (the parent of the async class), which internally calls the requester's connect function. That cannot work, because in async mode requester.connect is a coroutine that must be awaited, yet here it is called without any await.
All of my tests have been written for the sync version, because async tests are not as efficient as they should be; I also want to write tests for one version and be sure that both versions work correctly. How can I have both versions at the same time, sharing the same logic, with only the I/O calls separated?
the sync version has most logic parts and the async must call the sync version in a async way
It is possible, but it's a lot of work, as you're effectively fighting the function color mismatch. Your logic will have to be written in an async fashion, with some hacks to allow it to work in sync mode.
For example, a logic method would look like this:
# common code that doesn't assume it's either sync or async
class FooRaw:
    async def connect(self):
        self.connect_info = await self._to_async(self._requester.connect(ENDPOINT))

    async def hello_logic(self):
        await self._to_async(self.connect())
        self.sock.write('hello %s %s\n' % (USERNAME, PASSWORD))
        resp = await self._to_async(self.sock.readline())
        assert resp.startswith('OK')
        return resp
When running under asyncio, methods like connect and readline are coroutines, so their return value must be awaited. On the other hand, in blocking code self.connect and sock.readline are sync functions that return concrete values. But await is a syntactic construct which is either present or missing, you cannot switch it off at run-time without duplicating code.
To allow the same code to work in sync and async modes, FooRaw.hello_logic always awaits, leaving it to the _to_async method to wrap the result in an awaitable when running outside asyncio. In async classes _to_async awaits its argument and returns the result; it's basically a no-op. In sync classes it returns the received object without awaiting it, but is still defined as async def, so it can be awaited. In that case FooRaw.hello_logic is still a coroutine, but one that never suspends (because the "coroutines" it awaits are all instances of _to_async, which doesn't suspend outside asyncio).
With that in place, the async implementation of hello_logic doesn't need to do anything except choose the right requester and provide the correct _to_async; its connect and hello_logic inherited from FooRaw do the right thing automatically:
class FooAsync(FooRaw):
    def __init__(self):
        self._requester = AsyncRequester()

    @staticmethod
    async def _to_async(x):
        # we're running async: await x and return the result
        result = await x
        return result
The sync version will, in addition to implementing _to_async, need to wrap the logic methods to "run" the coroutine:
class FooSync(FooRaw):
    def __init__(self):
        self._requester = SyncRequester()

    @staticmethod
    async def _to_async(x):
        # we're running sync: x is already the result we want
        return x

    # the following can be easily automated by a decorator
    def connect(self):
        return _run_sync(super().connect())

    def hello_logic(self):
        return _run_sync(super().hello_logic())
Note that it is possible to run the coroutine outside the event loop only because the FooSync.hello_logic is a coroutine in name only; the underlying requester uses blocking calls, so FooRaw.connect and others never really suspend, they complete their execution in a single run. (This is similar to a generator that does some work without ever yielding anything.) This property makes the _run_sync helper straightforward:
def _run_sync(coro):
    try:
        # start running the coroutine
        coro.send(None)
    except StopIteration as e:
        # the coroutine has finished; return the result
        # stored in the `StopIteration` exception
        return e.value
    else:
        raise AssertionError("coroutine suspended")
