I am using:
python = "3.8.3",
django="3.0.5"
I have written a Django test with APITestCase. I run other test functions from inside my test class; to do this I have a list of dictionaries mapped like this:
[
    {
        "class_name": class_name,
        "func_name": "test_add_product",  # Test method name
        "trigger_type": trigger_type,     # Comes from a model choices field
        "request_type": RequestTypes.POST,
        "success": True,
    },
    {
        ...
    },
]
I loop over these with a for loop and run each one. After each iteration the DB should be cleared so the next one doesn't fail. I tried to do this using:
# Let's say we have a Foo model
Foo.objects.all().delete()
This method works, but I want a better solution.
How can I manually clear the test db before the test finishes?
You can do this:
from django.test import Client, TestCase

from myapp.models import Foo  # hypothetical import path for the Foo model


class TestsDB(TestCase):
    def setUp(self):
        # TestCase.__init__ takes extra arguments, so per-test setup belongs in setUp
        self.e = Client()

    def test_delete_db(self):
        foo = Foo.objects.all()
        foo.delete()
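If the loop runs inside a single test method, another option is to clear the table at the end of each iteration and isolate the cases with subTest. A minimal sketch, assuming a Foo model and case entries shaped like the ones in the question (the import path and helper name are made up):
from django.test import TestCase

from myapp.models import Foo  # hypothetical import path


class LoopedCasesTest(TestCase):
    cases = [
        {"func_name": "add_product", "success": True},
        {"func_name": "add_product", "success": False},
    ]

    def add_product(self, success):
        # stand-in for the real per-case logic
        Foo.objects.create()

    def test_all_cases(self):
        for case in self.cases:
            with self.subTest(**case):
                getattr(self, case["func_name"])(case["success"])
            # clear the table so the next iteration starts with an empty DB
            Foo.objects.all().delete()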
I'm trying to add a new screen to Subiquity, but Subiquity gets stuck every time I run it, hanging at:
running server pid 40812
connecting... /
To create my controller I used an example from DESIGN.md
My server controller class is as below:
import logging

from subiquity.common.apidef import API
from subiquity.server.controller import SubiquityController

log = logging.getLogger('subiquity.server.controllers.example')


class ExampleController(SubiquityController):

    endpoint = API.example

    model_name = 'example'

    async def GET(self) -> str:
        return self.model.thing

    async def POST(self, data: str):
        self.model.thing = data
        await self.configured()
Also, I added my Example controller class to controllers[] in subiquity/server/server.py:
controllers = [
    # other classes
    "Kernel",
    "Keyboard",
    "Example",
    "Zdev",
    "Source",
    # some other classes
]
And added example = simple_endpoint(str) to apidef.py
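For reference, this is roughly how that addition looks in subiquity/common/apidef.py; the exact placement inside the @api-decorated API class is my assumption, following the other endpoint definitions in that file:
@api
class API:
    # ...existing endpoint definitions...
    example = simple_endpoint(str)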
My model is as simple as possible:
class ExampleModel:
    thing = "example_var"
What might cause the problem? If I remove my Example controller from controllers[], Subiquity works but obviously doesn't use my controller.
I have a process that executes a method, let's say called 'save_details', which runs a loop to go out to an external source twice. I want to be able to mock the two responses that would be returned to 'save_details', to give me two different IDs. I can do it when I have one response, but it doesn't seem to work when I need two mock responses to be returned.
So I have the following example fixtures.
Mock 1
@pytest.fixture
def test_201_mock1(mocker) -> None:
    """
    Test return data for the 201 response.
    """
    mocker.patch(
        "save_details",
        return_value=[
            "201",
            {
                "ACK": {
                    "id": "e944126b-db78-4711-9f97-83c2cd1e09a4",
                }
            },
        ],
    )
Mock 2
@pytest.fixture
def test_201_mock2(mocker) -> None:
    """
    Test return data for the 201 response.
    """
    mocker.patch(
        "save_details",
        return_value=[
            "201",
            {
                "ACK": {
                    "id": "4758a428-8f33-4dc8-bb64-a500855f9a8c",
                }
            },
        ],
    )
And I have my test:
async def test_create_valid_resource_201(
    test_201_mock1,
    test_201_mock2,
    ...
) -> None:
    """
    Tests if a valid POST request returns a 201 response.
    """
    # rest of test
    ...
The 'test_create_valid_resource_201' test will simply run the 'save_details' method I have. What happens is I basically get the ID from test_201_mock2 duplicated.
4758a428-8f33-4dc8-bb64-a500855f9a8c
4758a428-8f33-4dc8-bb64-a500855f9a8c
How would I get pytest to recognise that there are two mocks to be returned, one for each iteration of my loop in 'save_details'? Do I create one mock with the two responses for example?
side_effect is meant for exactly this purpose - it can be used to return different values from the same mock on consecutive calls. This example is taken from the documentation - three consecutive calls to the mock produce different results, as specified in side_effect:
>>> mock = MagicMock()
>>> mock.side_effect = [5, 4, 3, 2, 1]
>>> mock(), mock(), mock()
(5, 4, 3)
In your example you can have a single fixture to set the side_effect like so:
@pytest.fixture
def test_201_mock(mocker) -> None:
    """
    Test return data for the 201 response.
    """
    mocker.patch(
        "save_details",
        side_effect=[
            [
                "201",
                {
                    "ACK": {
                        "id": "e944126b-db78-4711-9f97-83c2cd1e09a4",
                    }
                },
            ],
            [
                "201",
                {
                    "ACK": {
                        "id": "4758a428-8f33-4dc8-bb64-a500855f9a8c",
                    }
                },
            ],
        ],
    )
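The test then only needs to request this one fixture; each call to the patched save_details consumes the next item from side_effect, so the two iterations of your loop see the two different IDs. For illustration, the same behaviour with a bare MagicMock (standard library only, no pytest-mock):
from unittest.mock import MagicMock

save_details = MagicMock(side_effect=[
    ["201", {"ACK": {"id": "e944126b-db78-4711-9f97-83c2cd1e09a4"}}],
    ["201", {"ACK": {"id": "4758a428-8f33-4dc8-bb64-a500855f9a8c"}}],
])

print(save_details()[1]["ACK"]["id"])  # e944126b-db78-4711-9f97-83c2cd1e09a4
print(save_details()[1]["ACK"]["id"])  # 4758a428-8f33-4dc8-bb64-a500855f9a8c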
I have some rather contrived code here:
backend_data = {
    "admins": ["Leo", "Martin", "Thomas", "Katrin"],
    "members": [
        "Leo",
        "Martin",
        "Thomas",
        "Katrin",
        "Subhayan",
        "Clemens",
        "Thoralf",
    ],
    "juniors": ["Orianne", "Antonia", "Sarah"],
}


class Backend:
    def __init__(self, data):
        self.backend_data = data

    def get_all_admins(self):
        return self.backend_data.get("admins")

    def get_all_members(self):
        return self.backend_data.get("members")

    def get_all_juniors(self):
        return self.backend_data.get("juniors")


class BackendAdaptor:
    # Does some conversion and validation
    def __init__(self, backend):
        self.backend = backend

    def get_all_admins(self):
        return (admin for admin in self.backend.get_all_admins())

    def get_all_members(self):
        return (member for member in self.backend.get_all_members() if member not in self.backend.get_all_admins())

    def get_all_juniors(self):
        return (junior for junior in self.backend.get_all_juniors())


if __name__ == "__main__":
    backend = Backend(data=backend_data)
    adaptor = BackendAdaptor(backend=backend)

    print(f"All admins are : {list(adaptor.get_all_admins())}")
    print(f"All members are : {list(adaptor.get_all_members())}")
    print(f"All juniors are : {list(adaptor.get_all_juniors())}")
So the BackendAdaptor class would basically be used to do some validation and conversion of the data we get from Backend.
The client should only interact with the API of BackendAdaptor, which is identical to that of Backend. The adaptor sits in the middle, gets data from Backend, does some validation if required, and then gives the data back to the client.
The issue is that this validation is not done for every method (for example, there is validation in get_all_members but not in get_all_admins or get_all_juniors). Those methods just give back a generator over whatever data they get from Backend.
As things stand, I still have to implement one-liner methods for them.
Is there a way in Python to avoid this? I am thinking along the lines of magic methods like __getattribute__, but I have no idea how to do this for methods.
So the best-case scenario is this:
I implement the methods for which I know I have to do some validation on the Backend data.
The rest of the methods are automatically delegated to Backend, and I just return a generator over whatever I get back.
Any help would be greatly appreciated.
You can implement __getattr__. It is only called when an attribute is not found through normal lookup, so it will not shadow the methods you define explicitly, and it can return a generic function with the desired delegating behaviour.
class BackendAdaptor:
    def __init__(self, backend):
        self.backend = backend

    def __getattr__(self, name):
        if not hasattr(self.backend, name):
            raise AttributeError(f"'{name}' not in backend.")
        return lambda: (i for i in getattr(self.backend, name)())

    def get_all_members(self):
        return (member for member in self.backend.get_all_members() if member not in self.backend.get_all_admins())
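With the main block from the question, get_all_admins and get_all_juniors are no longer defined on the adaptor, so __getattr__ builds the delegating generator on the fly, while the explicitly defined get_all_members keeps its validation:
if __name__ == "__main__":
    adaptor = BackendAdaptor(backend=Backend(data=backend_data))

    print(f"All admins are : {list(adaptor.get_all_admins())}")    # delegated via __getattr__
    print(f"All members are : {list(adaptor.get_all_members())}")  # explicit method with validation
    print(f"All juniors are : {list(adaptor.get_all_juniors())}")  # delegated via __getattr__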
I'm trying to automate UI tests for a Django app with Selenium WebDriver, and I want to integrate pytest because we already use it for unit tests. I would like to have a folder of profiles for specific environments (local, staging, production) with a bunch of variables, so we can run pytest with a specific profile.
I want different files such as local, staging and production with different variable values but the same schema, and to use them inside each test as if they were global variables.
I tried to use a fixture inside the conftest.py file, but it gives me an error saying that I cannot import a whole module.
Is there a way to have something like profiles for all tests in pytest and switch between them with an argument?
I would like to do something like:
pytest --profile=local
And have all tests automatically run with the variables defined in a local.py file. This is just an idea I had; any advice is welcome, so I can implement this some other way.
The config can have the following format:
class Config:
    def __init__(self, env):
        self.base_url = {
            'local': 'https://local-env.com',
            'prod': 'https://prod-env.com',
            'stage': 'https://stage-env.com',
        }[env]
        self.app_port = {
            'local': 8080,
            'prod': 80,
            'stage': 1111,
        }[env]
In that case the conftest.py will be:
from pytest import fixture

from config import Config


def pytest_addoption(parser):
    parser.addoption(
        "--env",
        action="store",
        help="Environment to run tests against"
    )


@fixture(scope='session')
def env(request):
    return request.config.getoption("--env")


@fixture(scope='session')
def app_config(env):
    cfg = Config(env)
    return cfg
And the test:
def test_environment_is_staging(app_config):
    base_url = app_config.base_url
    port = app_config.app_port
    assert base_url == 'https://stage-env.com'
The usage will be like:
pytest --env=stage
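If you prefer the per-file layout from the question (local.py, staging.py, production.py with the same variable names), the option can instead select a module to import. A minimal sketch, assuming a profiles/ package next to the tests (the package name and the --profile option are assumptions):
# conftest.py
import importlib

from pytest import fixture


def pytest_addoption(parser):
    parser.addoption(
        "--profile",
        action="store",
        default="local",
        help="Profile module to load from the profiles package",
    )


@fixture(scope='session')
def profile(request):
    name = request.config.getoption("--profile")
    # e.g. profiles/local.py defining base_url, app_port, ...
    return importlib.import_module(f"profiles.{name}")
The tests then read profile.base_url and so on, and you run them with pytest --profile=staging.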
I want to use transactions for tests with pytest fixtures, as follows:
import pytest

from app.db.clients import (
    get_database,  # Returns an instance of the class PooledMySQLDatabase
    BaseManager,   # Extends the class peewee_async.Manager
)

db = get_database()
db.set_allow_sync(False)
objects = BaseManager(database=db)


@pytest.yield_fixture
async def transactional():
    async with objects.transaction() as trx:
        yield trx
        await trx.rollback()


@pytest.mark.usefixtures('transactional')
@pytest.mark.asyncio
async def test_transactional_fixture():
    pass  # Do something with the objects
However, the code above doesn't work as expected: the tests are collected but never execute. It looks like pytest tries to yield the tests infinitely. I have no idea how to run transactional tests with this technology stack. Could someone help me, please?
P.S. The code snippet above is just a representation of the project's workflow (taken from the real project).