Can I use mocks in testing Flask app functions?

I'm trying to test some routes on my Flask app that call external APIs, which I want to mock.
The routes are set up like this:
@app.route('/url/<string:arg>')
def route_function(arg):
    data = external_api(arg)
    response = make_response(data)
    # configure response
    return response
I originally tried something like this:
class TestFlaskApp(unittest.TestCase):
    def setUp(self):
        self.app = app.test_client()

    @patch('external_api', side_effect=mock_api)
    def test_flask_route(self, api):
        result = self.app.get('/url/arg')
        self.assertEqual(result.status_code, 200)
        api.assert_called_once_with('arg')
...which failed: the mock API function wasn't called. I assume the mock does not apply in the app context.
I also tried this, thinking I might be able to test the route functions directly and thus avoid having to use the app context:
class TestAppFunctions(unittest.TestCase):
    @patch('external_api', side_effect=mock_api)
    def test_flask_function(self, api):
        result = my_flask_app.route_function('arg')
        self.assertEqual(result.status_code, 200)
        api.assert_called_once_with('arg')
...but this didn't work either, since to make a response, route_function needs the app context.
So is there a way to mock inside the app context? How else can I test these routes without triggering external API calls?

Oluwafemi Sule was right...I just needed to patch the function where it was used, not where it was defined.
You need to pass patch the path at which the object can be resolved and replaced with the mock at runtime. For example, if the external_api function is called in a module named routes, which is in turn contained in a package named my_shining_app, the path passed to patch would be my_shining_app.routes.external_api.
Note that the path should point to where the function is called (i.e. where it is to be replaced with the mock), not where it is defined.
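To illustrate, here is a minimal sketch of the fixed test, assuming the route lives in a my_shining_app.routes module as in the example above (the package layout and the mock_api stub are assumptions, not from the original post):
import unittest
from unittest.mock import patch

from my_shining_app import app  # assumed package layout


def mock_api(arg):
    # stand-in for the real external API (assumption)
    return 'mock data for ' + arg


class TestFlaskApp(unittest.TestCase):
    def setUp(self):
        self.app = app.test_client()

    # Patch the name in the module where it is *used*, not where it is defined.
    @patch('my_shining_app.routes.external_api', side_effect=mock_api)
    def test_flask_route(self, api):
        result = self.app.get('/url/arg')
        self.assertEqual(result.status_code, 200)
        api.assert_called_once_with('arg')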

Related

Class wide mock in pytest (for all methods in the whole TestClass)

I am unit-testing my new library, which is basically a database interface. Our apps use it to access our database. That means I want to test all methods, but I do not want the DB commands to be called for real; I only check that they are called with the correct arguments.
For that purpose, I am mocking the database library. This is the actual code that DOES work:
import pytest
from unittests.conftest import PyTestConfig as Tconf
from my_lib.influx_interface import InfluxInterface

class TestInfluxInterface:
    def test_connect(self, mocker):
        """
        Create InfluxConnector object and call connect();
        check that __init__() arguments are passed / used correctly.
        """
        influx_client = mocker.patch('my_lib.influx_interface.InfluxDBClient', autospec=True)
        test_connector = InfluxInterface(Tconf.test_id)
        # Call connect with no input (influx_client should be called with no arguments)
        test_connector.connect()
        influx_client.assert_called_once()
        influx_client.reset_mock()
        # Call connect with custom, correct input (influx_client should be called with the custom values)
        test_connector.connect(Tconf.custom_conf)
        influx_client.assert_called_once_with(url=Tconf.custom_conf["url"], token=Tconf.custom_conf["token"],
                                              org=Tconf.custom_conf["org"], timeout=Tconf.custom_conf["timeout"],
                                              debug=Tconf.custom_conf["debug"])
        influx_client.reset_mock()
        # Call connect with incorrect input (influx_client should be called with the default values)
        test_connector.connect(Tconf.default_conf)
        influx_client.assert_called_once_with(url=Tconf.default_conf["url"], token=Tconf.default_conf["token"],
                                              org=Tconf.default_conf["org"], timeout=Tconf.default_conf["timeout"],
                                              debug=Tconf.default_conf["debug"])
Now, what I do next is add more test methods to the TestInfluxInterface class to cover the rest of the code, one test method for each method in my library. That's how I usually do it.
The problem is that this part of the code:
influx_client = mocker.patch('my_lib.influx_interface.InfluxDBClient', autospec=True)
test_connector = InfluxInterface(Tconf.test_id)
will be the same for every test method, so I will be copy-pasting it over and over. As you can already see, that's not a good solution.
In unittest, I would do this:
import unittest
import unittest.mock as mock
from unittests.conftest import PyTestConfig as Tconf
from my_lib.influx_interface import InfluxInterface

@mock.patch('my_lib.influx_interface.InfluxDBClient', autospec=True)
class TestInfluxInterface(unittest.TestCase):
    def setUp(self):
        self.test_connector = InfluxInterface(Tconf.test_id)

    def test_connect(self, influx_client):
        """
        Create InfluxConnector object and call connect();
        check that __init__() arguments are passed / used correctly.
        """
        # Call connect with no input (influx_client should be called with no arguments)
        self.test_connector.connect()
        influx_client.assert_called_once()
Then in each test method, I would use self.test_connector to call whatever method I want to test and check that it called the correct influx_client method with the correct parameters.
However, we are moving from unittest to pytest. And while I am wrapping my head around the pytest docs, reading stuff here and there (fixtures, mockers, etc.), I cannot figure out how to do this correctly.
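One possible way to remove the repetition in pytest (a sketch of one approach, not from the original post, reusing the module names above) is an autouse fixture: pytest runs it before every test in the class, so the patch and the InfluxInterface construction are written only once:
import pytest
from unittests.conftest import PyTestConfig as Tconf
from my_lib.influx_interface import InfluxInterface

class TestInfluxInterface:
    @pytest.fixture(autouse=True)
    def _setup(self, mocker):
        # Runs before every test in this class: patch the client and
        # build a fresh connector, keeping both as attributes on self.
        self.influx_client = mocker.patch(
            'my_lib.influx_interface.InfluxDBClient', autospec=True)
        self.test_connector = InfluxInterface(Tconf.test_id)

    def test_connect(self):
        self.test_connector.connect()
        self.influx_client.assert_called_once()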

Run Flask on demand to visualize computed data

Using Python 3.7, I made a CLI utility which prints some results to stdout. Depending on an option, the results should be visualized in a browser (single user, no sessions). Flask seems to be a good choice for this. However, this is not a standard use case described in the docs or in tutorials.
I am looking for a best-practice way to pass the data (e.g. a Python list) to the Flask app so that I can return it from view functions. Basically, it would be immutable application data. The following seems to work, but I don't like using globals:
main.py:
import myapp

result = compute_stuff()
if show_in_browser:
    myapp.data = result
    myapp.app.run()
myapp.py:
from flask import Flask
from typing import List

app = Flask(__name__)
data: List

@app.route("/")
def home():
    return f"items: {len(data)}"
Reading the Flask docs I get the impression I should use an application context. On the other hand, its lifetime does not span across requests and I would not know how to populate it. Reading other questions I might use a Flask config object because it seems to be available on every request. But this is not really about configuration. Or maybe I should use Klein, inspired by this answer?
There does not seem to be a best-practice way, so I am going with a modification of my original approach:
from flask import Flask

class MyData:
    pass

class MyApp(Flask):
    def __init__(self) -> None:
        super().__init__(__name__)
        self.env = "development"
        self.debug = True

    def getData(self) -> MyData:
        return self._my_data

    def setMyData(self, my_data: MyData) -> None:
        self._my_data = my_data

app = MyApp()
This way I can set the data after the app instance has already been created, which is necessary to be able to use it in routing decorators defined outside of the class. It would be nice to have more encapsulation: use app methods for routing (with decorators) instead of module-global functions accessing a module-global app object. Apparently that is not flaskic.
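A hypothetical usage sketch tying the pieces together (the module layout and route are assumptions; only MyApp and MyData come from the answer above):
# main.py (assumed layout): routes are declared against the module-global
# app object, and the data is set before the server starts.
from myapp import app, MyData

@app.route("/")
def home():
    data = app.getData()  # same instance set below, shared by all requests
    return f"data: {data!r}"

if __name__ == "__main__":
    app.setMyData(MyData())  # populate before serving any request
    app.run()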

Trigger middleware within the call method using Python and uWSGI

I got confused at one point while developing an application to run on WSGI, more specifically on uWSGI.
After building my example application:
class MyCustomApp():
    def __call__(self, environ, start_response):
        start_response('200 OK', [('Content-Type', 'application/json')])
        return "a".encode('utf-8')

application = MyCustomApp()
Everything works perfectly as expected. (I use a class instead of a function; I need it that way for other reasons.)
Now let's get to the problem. I am using middleware called Beaker, and if I replace my application with:
application = SessionMiddleware(MyCustomApp(), options)
everything behaves normally. But for the sake of study and understanding, I don't want to wrap the application like this from the outside.
I would like to do the following:
class MyCustomApp():
    def __call__(self, environ, start_response):
        start_response('200 OK', [('Content-Type', 'application/json')])
        ...
        # here I want to apply SessionMiddleware without modifying
        # "application = MyCustomApp()" below
        ...
        return "a".encode('utf-8')

application = MyCustomApp()
But with everything I try, the middleware replaces my environment with its defaults. I wish someone could guide me to understand and implement this while keeping the logic above.
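A minimal sketch of one way to keep the module-level assignment unchanged (my own suggestion, not from the original post): build the middleware stack once inside the class and delegate each WSGI call to it, so the wrapping happens internally. The options dict is only an example Beaker configuration:
from beaker.middleware import SessionMiddleware

class MyCustomApp():
    def __init__(self):
        # Wrap the real app logic (a bound method) with the middleware once.
        options = {'session.type': 'memory', 'session.auto': True}
        self._wrapped = SessionMiddleware(self._handle, options)

    def _handle(self, environ, start_response):
        # environ now carries the 'beaker.session' key injected by the middleware
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [b'"a"']

    def __call__(self, environ, start_response):
        # Delegate to the wrapped stack; callers still see a plain MyCustomApp()
        return self._wrapped(environ, start_response)

application = MyCustomApp()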

APscheduler and Pyramid python

I'm trying to use the wonderful apscheduler in a Pyramid API. The idea is to have a background job run regularly, while we still query the API for the result from time to time. Basically, I use the job in a class like this:
class MyClass(object):
    def __init__(self):
        self.current_result = 0
        scheduler = BackgroundScheduler()
        scheduler.start()
        scheduler.add_job(self.my_job, "interval", id="foo", seconds=5)

    def my_job(self):
        print("i'm updating result")
        self.current_result += 1
And outside of this class (a service, for me), the API has a POST endpoint that returns the MyClass instance's current result:
class MyApi(object):
    def __init__(self):
        self.my_class = MyClass()

    @view_config(request_method='POST')
    def my_post(self):
        return self.my_class.current_result
When everything runs, I see the prints and the incrementing value inside the service. But current_result stays at 0 when fetched through the POST endpoint.
From what I know of threading, I guess that the update I make is not applied to the same my_class object, but to a copy passed to the thread.
One solution I see would be to update the variable through a shared intermediary (written to disk, or to a database). But I wondered whether it is possible to do this in memory.
I managed to do exactly this in a regular script, and with one script plus a very simple Flask API (no class for the API there), but I can't get this logic to work inside the Pyramid API.
It must be linked to some internals of Pyramid spawning my API endpoint on a different thread, but I can't pin down the problem.
Thanks!
=== EDIT ===
I have tried several things to solve the issue. First, the instance of MyClass is initialized in another script, following a container pattern. That container is by default available in all MyApi instances of the Pyramid app, and is supposed to hold all global variables linked to my project.
I also defined a global instance of MyClass just to be sure, and print its current_result value to compare:
global_my_class = MyClass()

class MyApi(object):
    def __init__(self):
        pass

    @view_config(request_method='POST')
    def my_post(self):
        print(global_my_class.current_result)
        return self.container.my_class.current_result
Using the debugger, I checked that MyClass is only instantiated twice during the API's execution (once for the global variable, once inside the container). Yet what I see in the logs are two current_result values being incremented, while each call to my_post still returns only 0.
An instance of a view class only lives for the duration of the request: a request comes in, a view class instance is created, produces the result, and is disposed of. As such, each request gets a new copy of MyClass() which is separate from those of previous requests.
As a very simple solution, you may try defining a global instance which will be shared process-wide:
my_class = MyClass()

class MyApi(object):
    @view_config(request_method='POST')
    def my_post(self):
        return my_class.current_result
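An alternative sketch (my own suggestion, not from the answer above, reusing MyClass from the question): Pyramid's registry is a conventional home for such process-wide state, which keeps the shared instance reachable from any view without a module-level global:
from pyramid.config import Configurator
from pyramid.view import view_config

def main(global_config, **settings):
    config = Configurator(settings=settings)
    # One instance per process, created at startup and shared by all views.
    config.registry.my_class = MyClass()
    config.scan()
    return config.make_wsgi_app()

@view_config(request_method='POST', renderer='json')
def my_post(request):
    # Every request sees the same object, so the scheduler's updates show up.
    return request.registry.my_class.current_result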

How to spy on Apollo Client cache calls for unit testing?

I am using enzyme to "mount" a component wrapped with withApollo, so it has a client object in context that it finds available in the props. The component conditionally writes to the client cache using writeQuery, and I am writing a unit test that simulates those conditions. I would like to be able to assert that this cache method has been called with the expected arguments.
Following the Apollo Client guidelines, I am using the MockedProvider. I think this would be a good place to intercept the client object and replace its writeQuery function with a mock function, but I do not know how, or whether it is even possible.
Alternatively, I could ditch the MockedProvider and simulate the context entirely, but I do not know the expected object inside the context, nor its shape/schema.
