Python: Monkey patch leaking to other unit test cases - python-3.x

Here is my unit test case, written using pytest's monkeypatch fixture in Python:
@pytest.mark.asyncio
async def test_my_method(monkeypatch):
    app.dependency_overrides[check_token] = override_token_expired

    async def mock_cosmos_query(*args, **kwargs):
        return fake_spaces

    monkeypatch.setattr('app.routers.space.cosmos_query', mock_cosmos_query)
    response = client.get("/api/spaces/TestSpace", headers={"authorization": "fake user"}, params={"expand": "false"})
    jsonObj = response.json()
    assert response.status_code == 200
    assert jsonObj['name'] == 'TestSpace'
    monkeypatch.undo()  # NOT WORKING
The problem is that when I call monkeypatch.undo(), it doesn't undo monkeypatch.setattr(), and hence the rest of my test cases fail.
My expectation is that monkeypatch.undo() should reset the monkeypatch so that its behaviour does not leak into other test cases.
Can someone please guide me here?
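For reference, the way this kind of leak is usually avoided (a sketch based on the code above; the override_dependencies fixture name is illustrative, not from the original code): the pytest monkeypatch fixture undoes its setattr() calls at teardown on its own, whereas app.dependency_overrides is a plain dict that persists between tests unless it is cleared explicitly:
import pytest

@pytest.fixture
def override_dependencies():
    # hypothetical fixture: set the FastAPI override for this test only
    app.dependency_overrides[check_token] = override_token_expired
    yield
    # clear it so it cannot leak into the next test;
    # monkeypatch.setattr() changes are undone automatically by pytest
    app.dependency_overrides.clear()


@pytest.mark.asyncio
async def test_my_method(monkeypatch, override_dependencies):
    async def mock_cosmos_query(*args, **kwargs):
        return fake_spaces

    monkeypatch.setattr('app.routers.space.cosmos_query', mock_cosmos_query)
    response = client.get(
        "/api/spaces/TestSpace",
        headers={"authorization": "fake user"},
        params={"expand": "false"},
    )
    assert response.status_code == 200
    assert response.json()['name'] == 'TestSpace'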

Related

Pytest+FastAPI+SQLAlchemy+Postgres InterfaceError

I've run into a problem running tests with FastAPI + SQLAlchemy and PostgreSQL, which leads to lots of errors (however, it works well on SQLite). I've created a repo with an MVP app and pytest tests running on Docker Compose.
The basic error is sqlalchemy.exc.InterfaceError('cannot perform operation: another operation is in progress'). This may be related to the app/DB initialization, though I checked that all the operations are performed sequentially. I also tried using a single instance of TestClient for all the tests, but got no better results. I hope to find a solution, a correct way to test such apps 🙏
Here are the most important parts of the code:
app.py:
app = FastAPI()
some_items = dict()

@app.on_event("startup")
async def startup():
    await create_database()
    # Extract some data from env, local files, or S3
    some_items["pi"] = 3.1415926535
    some_items["eu"] = 2.7182818284

@app.post("/{name}")
async def create_obj(name: str, request: Request):
    data = await request.json()
    if data.get("code") in some_items:
        data["value"] = some_items[data["code"]]
        async with async_session() as session:
            async with session.begin():
                await create_object(session, name, data)
        return JSONResponse(status_code=200, content=data)
    else:
        return JSONResponse(status_code=404, content={})

@app.get("/{name}")
async def get_connected_register(name: str):
    async with async_session() as session:
        async with session.begin():
            objects = await get_objects(session, name)
    result = []
    for obj in objects:
        result.append({
            "id": obj.id, "name": obj.name, **obj.data,
        })
    return result
tests.py:
@pytest.fixture(scope="module")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()

@pytest_asyncio.fixture(scope="module")
@pytest.mark.asyncio
async def get_db():
    await delete_database()
    await create_database()

@pytest.mark.parametrize("test_case", test_cases_post)
def test_post(get_db, test_case):
    with TestClient(app) as client:
        response = client.post(f"/{test_case['name']}", json=test_case["data"])
        assert response.status_code == test_case["res"]

@pytest.mark.parametrize("test_case", test_cases_get)
def test_get(get_db, test_case):
    with TestClient(app) as client:
        response = client.get(f"/{test_case['name']}")
        assert len(response.json()) == test_case["count"]
db.py:
DATABASE_URL = environ.get("DATABASE_URL", "sqlite+aiosqlite:///./test.db")
engine = create_async_engine(DATABASE_URL, future=True, echo=True)
async_session = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
Base = declarative_base()

async def delete_database():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all)

async def create_database():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

class Model(Base):
    __tablename__ = "smth"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    data = Column(JSON, nullable=False)
    idx_main = Index("name", "id")

async def create_object(db: Session, name: str, data: dict):
    connection = Model(name=name, data=data)
    db.add(connection)
    await db.flush()

async def get_objects(db: Session, name: str):
    raw_q = select(Model) \
        .where(Model.name == name) \
        .order_by(Model.id)
    q = await db.execute(raw_q)
    return q.scalars().all()
At the moment the testing code is quite coupled, so the test suite seems to work as follows:
the database is created once for all tests
the first set of tests runs and populates the database
the second set of tests runs (and will only succeed if the database is fully populated)
This has value as an end-to-end test, but I think it would work better if the whole thing were placed in a single test function.
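For illustration, such a single end-to-end test could look roughly like this (a sketch reusing the get_db fixture and the test_cases_post / test_cases_get data from tests.py above):
def test_end_to_end(get_db):
    # run the POST cases and then the GET cases inside one test,
    # so the ordering is explicit instead of relying on collection order
    with TestClient(app) as client:
        for case in test_cases_post:
            response = client.post(f"/{case['name']}", json=case["data"])
            assert response.status_code == case["res"]
        for case in test_cases_get:
            response = client.get(f"/{case['name']}")
            assert len(response.json()) == case["count"]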
As far as unit testing goes, it is a bit problematic. I'm not sure whether pytest-asyncio makes guarantees about test running order (there are pytest plugins that exist solely to make tests run in a deterministic order), and certainly the principle is that unit tests should be independent of each other.
The testing is coupled in another important way too - the database I/O code and the application logic are being tested simultaneously.
A practice that FastAPI encourages is to make use of dependency injection in your routes:
from fastapi import Depends, FastAPI, Request
...
def get_sessionmaker() -> Callable:
    # this is a bit baroque, but x = Depends(y) assigns x = y()
    # so that's why it's here
    return async_session
@app.post("/{name}")
async def create_obj(name: str, request: Request, get_session=Depends(get_sessionmaker)):
    data = await request.json()
    if data.get("code") in some_items:
        data["value"] = some_items[data["code"]]
        async with get_session() as session:
            async with session.begin():
                await create_object(session, name, data)
        return JSONResponse(status_code=200, content=data)
    else:
        return JSONResponse(status_code=404, content={})
When it comes to testing, FastAPI then allows you to swap out your real dependencies so that you can e.g. mock the database and test the application logic in isolation from database I/O:
from app import app, get_sessionmaker
from mocks import mock_sessionmaker
...
client = TestClient(app)
...
async def override_sessionmaker():
    return mock_sessionmaker

app.dependency_overrides[get_sessionmaker] = override_sessionmaker
# now we can run some tests
This means that when you run your tests, whatever you put in mocks.mock_sessionmaker is what get_session will be in your tests, rather than get_sessionmaker. We could have our mock_sessionmaker return a function called get_mock_session.
In other words, rather than with async_session() as session:, in the tests we'd have with get_mock_session() as session:.
Unfortunately this get_mock_session has to return something a little complicated (let's call it mock_session), because the application code then does an async with session.begin().
I'd be tempted to refactor the application code for simplicity, but if not then it will have to not throw errors when you call .begin, .add, and .flush on it, in this example, and those methods have to be async. But they don't have to do anything, so it's not too bad...
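For illustration, a minimal sketch of what that could look like (MockSession here is just an illustrative name; it only needs to survive the calls the route above actually makes):
class MockSession:
    # bare-minimum stand-in for the session: the methods the route calls
    # exist, but do nothing
    def begin(self):
        # used as "async with session.begin():", so it must return
        # an async context manager
        return self

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        return False

    def add(self, obj):
        pass

    async def flush(self):
        pass


def get_mock_session():
    # used as "async with get_mock_session() as session:", so calling it
    # must produce an async context manager that yields the mock session
    return MockSession()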
The FastAPI docs have an alternative example of databases + dependencies that does leave the code a little coupled, but uses SQLite strictly for the purpose of unit tests, leaving you free to do something different for an end-to-end test and in the application itself.

functional programming with coroutines python

I have async call, for example
from httpx import AsyncClient, Response
client = AsyncClient()
my_call = client.get(f"{HOST}/api/my_method") # async call
And I want to pass it to some retry logic like
async def retry_http(http_call):
    count = 5
    status, response = None, None
    while count > 0:
        response: Response = await http_call
        if response.status_code == 200:
            break
        count -= 1
        if response.status_code in (502, 504):
            await asyncio.sleep(2)
        else:
            break
    if response.status_code != 200:
        return {
            "success": False,
            "result": {
                "error": "Response Error",
                "response_code": response.status_code,
                "response": response.text,
            }
        }
    return response.json()

await retry_http(my_call)
but I got:
RuntimeError: cannot reuse already awaited coroutine
Is there any method to make my_call a reusable coroutine?
It is not possible in Python - a co-routine, once created, has internal state that can't be easily duplicated - so once it runs, that internal state changes, including the internal line of code currently in execution, and there is no way to "rewind" it.
The simplest approach is to do as in @RyabchenkoAlexander's answer: accept the co-routine function and its parameters separately, and create the co-routine inside your retry function.
An alternative that is a nice Python idiom is to decorate the co-routine function - you make your retry_http a decorator instead, which wraps the underlying co-routine function in the retrying code.
Then, if the functions you want this behaviour for are in your own code, you can use the decorator syntax (@name prefixing the function definition) so that all calls get the retry behaviour, or you can apply it as a plain expression to get a new, retriable, co-routine function. Your final call could be:
result = await (retry_http(client.get) (f"{HOST}/api/my_method"))
(note the extra pair of parentheses around client.get, decorating it)
The decorator itself could be:
def retry_http(coro_func):
async def wrapper(*args, **kw):
# your original code - just replacing the await expression
...
while count > 5:
...
result = await coro_func(*args, **kw)
...
...
return result
return wrapper
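Filled in with the retry logic from the question, the decorator could look something like this (a sketch assuming httpx-style responses, as in the original code):
import asyncio

def retry_http(coro_func):
    # wraps a coroutine function so that every retry creates a fresh coroutine
    async def wrapper(*args, **kwargs):
        count = 5
        response = None
        while count > 0:
            response = await coro_func(*args, **kwargs)
            if response.status_code == 200:
                break
            count -= 1
            if response.status_code in (502, 504):
                await asyncio.sleep(2)
            else:
                break
        if response.status_code != 200:
            return {
                "success": False,
                "result": {
                    "error": "Response Error",
                    "response_code": response.status_code,
                    "response": response.text,
                },
            }
        return response.json()
    return wrapper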
As for your original intent: it would actually be possible to introspect a coroutine object, its internal variables and passed parameters, to recreate a co-routine object that has not yet started - however, that would involve using introspection to locate the original callable and making the call again - it would be cumbersome, could be slow, and for little gain. I will outline the requirements, nonetheless:
A co-routine object has the cr_code and cr_frame attributes - you'd need to retrieve the function associated with the code object in cr_code (probably using the garbage collector API), or recreate a new function re-using the same code object by calling types.FunctionType with the same parameters - and the local and global variables can be retrieved from the frame object in cr_frame.
This can be fixed in the following way:
async def retry_http(http_call, *args, **kwargs):
    count = 5
    status, response = None, None
    while count > 0:
        response: Response = await http_call(*args, **kwargs)
        if response.status_code == 200:
            break
        count -= 1
        if response.status_code in (502, 504):
            await asyncio.sleep(2)
        else:
            break
    if response.status_code != 200:
        return {
            "success": False,
            "result": {
                "error": "Response Error",
                "response_code": response.status_code,
                "response": response.text,
            }
        }
    return response.json()

client = AsyncClient()
await retry_http(client.get, f"{HOST}/api/my_method")

How to assert a monkey patch was called in pytest?

Consider the following:
class MockResponse:
    status_code = 200

    @staticmethod
    def json():
        return {'key': 'value'}


# where api_session is a fixture
def test_api_session_get(monkeypatch, api_session) -> None:
    def mock_get(*args, **kwargs):
        return MockResponse()

    monkeypatch.setattr(requests.Session, 'get', mock_get)
    response = api_session.get('endpoint/')  # My wrapper around requests.Session
    assert response.status_code == 200
    assert response.json() == {'key': 'value'}
    monkeypatch.assert_called_with(
        'endpoint/',
        headers={
            'user-agent': 'blah',
        },
    )
How can I assert that the get I am patching gets called with 'endpoint/' and the headers? When I run the test now, I get the following failure message:
FAILED test/utility/test_api_session.py::test_api_session_get - AttributeError: 'MonkeyPatch' object has no attribute 'assert_called_with'
What am I doing wrong here? Thanks in advance to all who reply.
Going to add another response that uses monkeypatch, rather than "you can't use monkeypatch".
Since Python has closures, here is my poor man's way of doing such things with monkeypatch:
patch_called = False

def _fake_delete(keyname):
    nonlocal patch_called
    patch_called = True
    assert ...

monkeypatch.setattr("mymodule._delete", _fake_delete)
res = client.delete(f"/.../{delmeid}")  # this is a flask client
assert res.status_code == 200
assert patch_called
In your case, since we are doing similar things with patching an HTTP server's method handler, you could do something like this (not saying this is pretty):
param_called = None

def _fake_delete(param):
    nonlocal param_called
    param_called = param
    assert ...

monkeypatch.setattr("mymodule._delete", _fake_delete)
res = client.delete(f"/.../{delmeid}")
assert res.status_code == 200
assert param_called == "whatever this should be"
You need a Mock object to call assert_called_with - monkeypatch does not provide that out of the box. You can use unittest.mock.patch with side_effect instead to achieve this:
from unittest import mock
import requests
...

@mock.patch('requests.Session.get')
def test_api_session_get(mocked, api_session) -> None:
    def mock_get(*args, **kwargs):
        return MockResponse()

    mocked.side_effect = mock_get
    response = api_session.get('endpoint/')
    ...
    mocked.assert_called_with(
        'endpoint/',
        headers={
            'user-agent': 'blah',
        },
    )
Using side_effect is needed to still get a mock object (mocked in this case, of type MagicMock), instead of just setting your own object in patch; otherwise you won't be able to use the assert_called_... methods.
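If you'd rather keep monkeypatch for the patching itself, the two ideas can also be combined - a sketch, reusing the MockResponse class from the question: patch in a MagicMock whose return value is the fake response, then assert against that mock:
from unittest import mock

import requests


def test_api_session_get(monkeypatch, api_session) -> None:
    # a MagicMock provides assert_called_with; monkeypatch still does the
    # patching and undoes it automatically at test teardown
    mocked_get = mock.MagicMock(return_value=MockResponse())
    monkeypatch.setattr(requests.Session, 'get', mocked_get)

    response = api_session.get('endpoint/')

    assert response.status_code == 200
    assert response.json() == {'key': 'value'}
    mocked_get.assert_called_with(
        'endpoint/',
        headers={'user-agent': 'blah'},
    )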

Failure on unit tests with pytest, tornado and aiopg, any query fails

I have a REST API running on Python 3.7 + Tornado 5, with PostgreSQL as the database, using aiopg with SQLAlchemy core (via the aiopg.sa binding). For the unit tests, I use py.test with pytest-tornado.
All the tests go OK as long as no query to the database is involved; as soon as one is, I get this:
RuntimeError: Task cb=[IOLoop.add_future..() at venv/lib/python3.7/site-packages/tornado/ioloop.py:719]> got Future attached to a different loop
The same code works fine outside of the tests; I'm able to handle hundreds of requests so far.
This is part of an @auth decorator which checks the Authorization header for a JWT token, decodes it, gets the user's data and attaches it to the request; this is the part with the query:
partner_id = payload['partner_id']
provided_scopes = payload.get("scope", [])
for scope in scopes:
    if scope not in provided_scopes:
        logger.error(
            'Authentication failed, scopes are not compliant - '
            'required: {} - '
            'provided: {}'.format(scopes, provided_scopes)
        )
        raise ForbiddenException(
            "insufficient permissions or wrong user."
        )

db = self.settings['db']
partner = await Partner.get(db, username=partner_id)

# The user is authenticated at this stage, let's add
# the user info to the request so it can be used
if not partner:
    raise UnauthorizedException('Unknown user from token')

p = Partner(**partner)
setattr(self.request, "partner_id", p.uuid)
setattr(self.request, "partner", p)
The .get() async method from Partner comes from the Base class for all models in the app. This is the .get method implementation:
@classmethod
async def get(cls, db, order=None, limit=None, offset=None, **kwargs):
    """
    Get one instance that will match the criteria
    :param db:
    :param order:
    :param limit:
    :param offset:
    :param kwargs:
    :return:
    """
    if len(kwargs) == 0:
        return None
    if not hasattr(cls, '__tablename__'):
        raise InvalidModelException()
    tbl = cls.__table__
    instance = None
    clause = cls.get_clause(**kwargs)
    query = (tbl.select().where(text(clause)))
    if order:
        query = query.order_by(text(order))
    if limit:
        query = query.limit(limit)
    if offset:
        query = query.offset(offset)
    logger.info(f'GET query executing:\n{query}')
    try:
        async with db.acquire() as conn:
            async with conn.execute(query) as rows:
                instance = await rows.first()
    except DataError as de:
        [...]
    return instance
The .get() method above will either return a model instance (row representation) or None.
It uses the db.acquire() context manager, as described in aiopg's doc here: https://aiopg.readthedocs.io/en/stable/sa.html.
As described in that same doc, the sa.create_engine() method returns a connection pool, so db.acquire() just takes one connection from the pool. I'm sharing this pool with every request in Tornado; they use it to perform queries when they need to.
So this is the fixture I've set up in my conftest.py:
@pytest.fixture
async def db():
    dbe = await setup_db()
    return dbe

@pytest.fixture
def app(db, event_loop):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    return app
I can't find an explanation of why this is happening; Tornado's docs and source make it clear that the asyncio event loop is used by default, and by debugging I can see the event loop is indeed the same one, but for some reason it seems to get closed or stopped abruptly.
This is one test that fails:
@pytest.mark.gen_test(timeout=2)
def test_score_returns_204_empty(app, http_server, http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = yield http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
This test fails, returning 401 instead of 204: the query in the auth decorator fails due to the RuntimeError, which then results in an Unauthorized response.
Any idea from the async experts here will be very much appreciated; I'm quite lost on this!
Well, after a lot of digging, testing and, of course, learning quite a lot about asyncio, I made it work myself. Thanks for the suggestions so far.
The issue was that the asyncio event loop was not running; as @hoefling mentioned, pytest itself does not support coroutines, but pytest-asyncio brings that useful feature to your tests. This is very well explained here: https://medium.com/ideas-at-igenius/testing-asyncio-python-code-with-pytest-a2f3628f82bc
So, without pytest-asyncio, your async code that needs to be tested will look like this:
def test_this_is_an_async_test():
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(my_async_function(param1, param2, param3))
    assert result == 'expected'
We use loop.run_until_complete() since, otherwise, the loop will never run; this is how asyncio works by default (and pytest does nothing to make it work differently).
With pytest-asyncio, your test works with the well-known async / await parts:
async def test_this_is_an_async_test(event_loop):
    result = await my_async_function(param1, param2, param3)
    assert result == 'expected'
pytest-asyncio in this case wraps the run_until_complete() call above (summarizing it heavily), so the event loop will run and be available for your async code to use.
Please note: the event_loop parameter in the second case is not even necessary here, as pytest-asyncio makes one available for your test.
On the other hand, when you are testing your Tornado app, you usually need to get an http server up and running during your tests, listening on a well-known port, etc., so the usual way is to write fixtures that provide an http server, a base_url (usually http://localhost: plus an unused port), and so on.
pytest-tornado comes up as a very useful one, as it offers several of these fixtures for you: http_server, http_client, unused_port, base_url, etc.
Also worth mentioning, it provides a pytest mark, gen_test(), which converts any standard test to use coroutines via yield, and can even assert that it will run within a given timeout, like this:
@pytest.mark.gen_test(timeout=3)
def test_fetch_my_data(http_client, base_url):
    result = yield http_client.fetch('/'.join([base_url, 'result']))
    assert len(result) == 1000
But this way it does not support async / await, and actually only Tornado's ioloop will be available via the io_loop fixture (although Tornado's ioloop uses asyncio underneath by default since Tornado 5.0), so you'd need to combine both pytest.mark.gen_test and pytest.mark.asyncio, but in the right order! (which I got wrong).
Once I understood better what could be the problem, this was the next approach:
@pytest.mark.gen_test(timeout=2)
@pytest.mark.asyncio
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
But this is utterly wrong, if you understand how Python's decorator wrappers work. With the code above, pytest-asyncio's coroutine is wrapped in a pytest-tornado yield-based gen.coroutine, which won't get the event loop running... so my tests were still failing with the same problem. Any query to the database was returning a Future waiting for an event loop to be running.
My updated code, once I realized my silly mistake:
@pytest.mark.asyncio
@pytest.mark.gen_test(timeout=2)
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
In this case, the gen.coroutine is wrapped inside the pytest-asyncio coroutine, and the event_loop runs the coroutines as expected!
But there was still a minor issue that took me a little while to realize, too: pytest-asyncio's event_loop fixture creates a new event loop for every test, while pytest-tornado also creates a new IOLoop. So the tests were still failing, but this time with a different error.
The conftest.py file now looks like this; please note I've re-declared the event_loop fixture so that it takes its loop from pytest-tornado's io_loop fixture itself (recall that pytest-tornado creates a new io_loop for each test function):
@pytest.fixture(scope='function')
def event_loop(io_loop):
    loop = io_loop.current().asyncio_loop
    yield loop
    loop.stop()

@pytest.fixture(scope='function')
async def db():
    dbe = await setup_db()
    yield dbe

@pytest.fixture
def app(db):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    yield app
Now all my tests work, I'm back a happy man and very proud of my now better understanding of the asyncio way of life. Cool!

Soap UI: Groovy Script to call an API if the response is true

I am very new to using SoapUI, and am writing test cases for my project's APIs.
My requirement is to run a Groovy script after an API call, and if the response text of that API is "true", another API should be called.
I am stuck on how to do this. Can anyone guide me?
Thanks in advance!!!
I found the answer but forgot to report back here. I added a script assertion like this in the TestStep:
def slurper = new groovy.json.JsonSlurper()
def responseJson = slurper.parseText(messageExchange.getResponseContent())
assert responseJson instanceof Map
assert responseJson.containsKey('authToken')
def id = "Bearer "+responseJson['authToken']
log.info(id.toString())
testRunner = new com.eviware.soapui.impl.wsdl.testcase.WsdlTestCaseRunner(context.testCase.testSuite.project.getTestSuiteByName("TestSuite").getTestCaseByName("TestCas"), null)
def tcase = testRunner.testCase
def tstep = tcase.getTestStepByName("TestStep")
testContext = new com.eviware.soapui.impl.wsdl.testcase.WsdlTestRunContext(tstep)
runner = tstep.run(testRunner, testContext)
A little idea for this:
def response = context.expand( '${TestRequest1#Response}' )
if ( response == 'true' )
{
    testRunner.runTestStepByName( "TestRequest2")
}
Disable your first test step (TestRequest1).
