Mock a few lines of a function for unit tests - python-3.x

I have the function below, for which I would like to write unit tests.
def fetch_latest_topics(request):
    username = request.META['username']
    try:
        client = create_discourse_connection(username)
        response = client.latest_topics()
        topic_list = response['topic_list']['topics']
        filtered_topics_data = []
        for topic in topic_list:
            filtered_topics_data.append(
                {
                    "title": topic["title"],
                    "created_at": topic["created_at"],
                    "topic_id": topic["id"]
                }
            )
    except Exception as e:
        return sendResponse(formatErrorResponse(badreq_err, str(e)))
    return sendResponse({"post_replies": filtered_topics_data})
I want to mock the lines below:
    client = create_discourse_connection(username)
    response = client.latest_topics()
Basically, these lines create a connection to an external server and fetch a response. I don't want to do this in a unit test (since this is a library, it will have been tested already, and it is not my code's responsibility to test it). I just need to pass a mock response JSON and validate the response formatting and keys in my unit test.
I explored the mock library for mocking functions, but this is a library object and its method call rather than a plain function. It would be a great help if someone could point me in the right direction for testing this.
Thank you for giving time to this question.
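One possible direction, as a hedged sketch only (the module path myapp.views and the identity stub for sendResponse are my assumptions, not taken from the question): patch create_discourse_connection where the view looks it up, and make the mocked client's latest_topics() return a canned payload.

from unittest import mock
from unittest.mock import MagicMock

from myapp.views import fetch_latest_topics  # assumed module path

# Canned payload shaped like the Discourse response the view expects.
FAKE_RESPONSE = {
    "topic_list": {
        "topics": [
            {"title": "Hello", "created_at": "2020-01-01T00:00:00Z", "id": 1},
        ]
    }
}

@mock.patch("myapp.views.sendResponse", side_effect=lambda payload: payload)  # assumed path
@mock.patch("myapp.views.create_discourse_connection")  # assumed path
def test_fetch_latest_topics_formats_topics(mock_create_connection, mock_send_response):
    # The patched factory returns a mock client whose latest_topics()
    # hands back the canned payload instead of calling the external server.
    mock_client = MagicMock()
    mock_client.latest_topics.return_value = FAKE_RESPONSE
    mock_create_connection.return_value = mock_client

    request = MagicMock()
    request.META = {"username": "alice"}

    result = fetch_latest_topics(request)
    # With sendResponse stubbed to return its argument, we can assert on the
    # formatted keys directly.
    assert result == {
        "post_replies": [
            {"title": "Hello", "created_at": "2020-01-01T00:00:00Z", "topic_id": 1}
        ]
    }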

Related

How to correctly call queryStringParameters for AWS Lambda + API Gateway?

I'm following a tutorial on setting up AWS API Gateway with a Lambda function to create a RESTful API. I have the following code:
import json

def lambda_handler(event, context):
    # 1. Parse query string parameters
    transactionId = event['queryStringParameters']['transactionid']
    transactionType = event['queryStringParameters']['type']
    transactionAmounts = event['queryStringParameters']['amount']
    # 2. Construct the body of the response object
    transactionResponse = {}
    # returning values originally passed in, then add a separate field at the bottom
    transactionResponse['transactionid'] = transactionId
    transactionResponse['type'] = transactionType
    transactionResponse['amount'] = transactionAmounts
    transactionResponse['message'] = 'hello from lambda land'
    # 3. Construct http response object
    responseObject = {}
    responseObject['StatusCode'] = 200
    responseObject['headers'] = {}
    responseObject['headers']['Content-Type'] = 'application/json'
    responseObject['body'] = json.dumps(transactionResponse)
    # 4. Return the response object
    return responseObject
When I link the API Gateway to this function and try to call it using query parameters, I get the error:
{
    "message": "Internal server error"
}
When I test the Lambda function, it returns the error:
{
    "errorMessage": "'transactionid'",
    "errorType": "KeyError",
    "stackTrace": [
        "  File \"/var/task/lambda_function.py\", line 5, in lambda_handler\n    transactionId = event['queryStringParameters']['transactionid']\n"
    ]
}
Does anybody have any idea of what's going on here/how to get it to work?
I recommend adding a couple of diagnostics, as follows:
import json

def lambda_handler(event, context):
    print('event:', json.dumps(event))
    print('queryStringParameters:', json.dumps(event['queryStringParameters']))
    transactionId = event['queryStringParameters']['transactionid']
    transactionType = event['queryStringParameters']['type']
    transactionAmounts = event['queryStringParameters']['amount']
    # remainder of code ...
That way you can see what is in event and event['queryStringParameters'] to be sure that it matches what you expected to see. These will be logged in CloudWatch Logs (and you can see them in the AWS Lambda console if you are testing events using the console).
In your case, it turns out that your test event included transactionId when your code expected to see transactionid (note the different capitalization). Hence the KeyError exception.
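For reference, a hedged local check (the module name lambda_function and the sample values below are assumptions): invoke the handler directly with an event whose queryStringParameters keys match the lowercase names the code reads, and the KeyError disappears.

# Minimal local invocation; API Gateway's Lambda proxy integration sends an
# event containing a "queryStringParameters" dict, so the keys must match exactly.
from lambda_function import lambda_handler  # assumed module name

sample_event = {
    "queryStringParameters": {
        "transactionid": "12345",   # lowercase, as the handler expects
        "type": "PURCHASE",
        "amount": "500",
    }
}

print(lambda_handler(sample_event, None))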
Just remove ['queryStringParameters']; the printed event shows the event is only an array, not a key-value pair. I happen to be following the same tutorial. I'm still on the API Gateway part, so I'll update once mine is completed.
When you test from the Lambda function there is no queryStringParameters in the event, but it is there when called from the API Gateway. You can also test from the API Gateway, where queryStringParameters is required to get the values passed.
The problem is not your code; it is the Lambda function integration setting. Please do not enable the Lambda function integration setting. You can still attach the Lambda function without it. Leave this unchecked.
It's because of the typo in responseObject['StatusCode'] = 200.
'StatusCode' should be 'statusCode'.
I got the same issue, and that was the cause.

How do I create a callable object to mimic an API call?

How do I create an object that I can invoke to mimic the following API call and response? I am aware of the mock library, but the use case prohibits me from using it.
response = client.users.create(email='test@gmail.com', phone=123)
outcome = response.ok
My current solution below works; however, I feel like there is a more Pythonic and generic way to do this, so I can mimic other calls without having to rewrite different inner classes.
class Client:
    ok = True

    class users:
        class create():
            ok = True

            def __init__(self, email, phone):
                pass
Input
client = Client()
response = client.users.create(email='test@gmail.com', phone=123)
response.ok
Output
True
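As a hedged sketch of one more generic approach (not taken from the thread): build the fake around __getattr__ and __call__, so any attribute chain resolves without hand-written nested classes and the final call returns a canned response object.

class FakeResponse:
    """Simple bag of attributes standing in for the real response."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

class FakeClient:
    def __init__(self, response=None):
        self._response = response if response is not None else FakeResponse(ok=True)

    def __getattr__(self, name):
        # Any attribute access (users, users.create, ...) yields another fake
        # carrying the same canned response.
        return FakeClient(self._response)

    def __call__(self, *args, **kwargs):
        # The terminal call, e.g. .create(email=..., phone=...), returns it.
        return self._response

client = FakeClient()
response = client.users.create(email='test@gmail.com', phone=123)
print(response.ok)  # True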

How to pass params to an HTTP request (test step) in a SoapUI test case using Groovy and run it

I am writing a Groovy script to execute/automate my test suite. In one test case I have an HTTP request with a request URL, parameters (username and password) and method (GET) to get a token ID, and then I would pass that token ID to my next step (a SOAP request) to get the data.
I am stuck at the point where I need to pass the params (username and password), request URL and method (GET) using Groovy.
I have a test step created manually under a test case; I just need to pass the params.
From searching online, I learned how to pass headers and a URL to a SOAP request, like below:
def headers = new StringToStringMap()
testRunner = new com.eviware............WsdlTestCaseRunner(myTestCase, null);
testStepContext = new com.eviware.soapui........WsdlTestRunContext(teststep);
headers.put("apikey", "abcd")
teststep.getTestRequest().setRequestHeaders(headers)
teststep.getHttpRequest().setEndpoint(endpointUrl);
teststep.run(testRunner, testStepContext)
But I am looking to find out how to pass params to an HTTP request (test step) and run it.
Add a Properties test step to your test case. Just let it keep the default "Properties" name.
Add the properties that you need to transfer to the Properties test step.
Inside your Groovy test step, you may set the properties using something like:
def properties = testRunner.testCase.getTestStepByName("Properties");
properties.setPropertyValue("name", "value");
Add the parameters directly in your request using variables in the format ${Properties#name} and replace "name" with the actual parameter name. This can be done both in the request body and in the URL if you should wish to do so.
It can be done completely in Groovy by using the groovy.json.JsonBuilder class.
import groovy.json.JsonBuilder

def body = new StringToStringMap()
def jsonbildr = new JsonBuilder()
body.put("username", "Hackme")
body.put("password", "LockUout")
def root = jsonbildr body
jsonbildr = jsonbildr.toPrettyString()
log.info(jsonbildr)
testStep.setPropertyValue("Request", jsonbildr)
Output:
{
    "password": "LockUout",
    "username": "Hackme"
}

Failure in unit tests with pytest, Tornado and aiopg: any query fails

I have a REST API running on Python 3.7 + Tornado 5, with PostgreSQL as the database, using aiopg with SQLAlchemy Core (via the aiopg.sa binding). For the unit tests, I use py.test with pytest-tornado.
All the tests pass as long as no query to the database is involved; otherwise I get this:
RuntimeError: Task <... cb=[IOLoop.add_future..() at venv/lib/python3.7/site-packages/tornado/ioloop.py:719]> got Future attached to a different loop
The same code works fine outside the tests; it has handled hundreds of requests so far.
This is part of an @auth decorator which checks the Authorization header for a JWT token, decodes it, gets the user's data and attaches it to the request; this is the part with the query:
partner_id = payload['partner_id']
provided_scopes = payload.get("scope", [])
for scope in scopes:
    if scope not in provided_scopes:
        logger.error(
            'Authentication failed, scopes are not compliant - '
            'required: {} - '
            'provided: {}'.format(scopes, provided_scopes)
        )
        raise ForbiddenException(
            "insufficient permissions or wrong user."
        )
db = self.settings['db']
partner = await Partner.get(db, username=partner_id)
# The user is authenticated at this stage, let's add
# the user info to the request so it can be used
if not partner:
    raise UnauthorizedException('Unknown user from token')
p = Partner(**partner)
setattr(self.request, "partner_id", p.uuid)
setattr(self.request, "partner", p)
The .get() async method from Partner comes from the Base class for all models in the app. This is the .get method implementation:
@classmethod
async def get(cls, db, order=None, limit=None, offset=None, **kwargs):
    """
    Get one instance that will match the criteria
    :param db:
    :param order:
    :param limit:
    :param offset:
    :param kwargs:
    :return:
    """
    if len(kwargs) == 0:
        return None
    if not hasattr(cls, '__tablename__'):
        raise InvalidModelException()
    tbl = cls.__table__
    instance = None
    clause = cls.get_clause(**kwargs)
    query = (tbl.select().where(text(clause)))
    if order:
        query = query.order_by(text(order))
    if limit:
        query = query.limit(limit)
    if offset:
        query = query.offset(offset)
    logger.info(f'GET query executing:\n{query}')
    try:
        async with db.acquire() as conn:
            async with conn.execute(query) as rows:
                instance = await rows.first()
    except DataError as de:
        [...]
    return instance
The .get() method above will either return a model instance (row representation) or None.
It uses the db.acquire() context manager, as described in aiopg's doc here: https://aiopg.readthedocs.io/en/stable/sa.html.
As described in that same doc, the sa.create_engine() method returns a connection pool, so db.acquire() just takes one connection from the pool. I'm sharing this pool with every request in Tornado; the handlers use it to perform queries when they need to.
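For context, a hedged sketch of what setup_db might look like (the helper name comes from the fixtures below; the connection parameters are made up), using aiopg.sa.create_engine, which returns the engine/pool that db.acquire() draws from:

from aiopg.sa import create_engine

async def setup_db():
    # create_engine() is a coroutine; the returned engine holds the
    # connection pool shared across requests (and tests).
    engine = await create_engine(
        user='app',
        database='appdb',
        host='127.0.0.1',
        password='secret',
    )
    return engine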
So this is the fixture I've set up in my conftest.py:
@pytest.fixture
async def db():
    dbe = await setup_db()
    return dbe

@pytest.fixture
def app(db, event_loop):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    return app
I can't find an explanation for why this is happening; Tornado's docs and source make it clear that the asyncio event loop is used by default, and by debugging I can see the event loop is indeed the same one, but for some reason it seems to get closed or stopped abruptly.
This is one test that fails:
@pytest.mark.gen_test(timeout=2)
def test_score_returns_204_empty(app, http_server, http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = yield http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
This test fails, returning 401 instead of 204, because the query in the auth decorator fails due to the RuntimeError, which then produces an Unauthorized response.
Any idea from the async experts here will be very much appreciated; I'm quite lost on this!
Well, after a lot of digging, testing and, of course, learning quite a lot about asyncio, I made it work myself. Thanks for the suggestions so far.
The issue was that the event_loop from asyncio was not running; as @hoefling mentioned, pytest itself does not support coroutines, but pytest-asyncio brings such a useful feature to your tests. This is very well explained here: https://medium.com/ideas-at-igenius/testing-asyncio-python-code-with-pytest-a2f3628f82bc
So, without pytest-asyncio, your async code that needs to be tested will look like this:
import asyncio

def test_this_is_an_async_test():
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(my_async_function(param1, param2, param3))
    assert result == 'expected'
We use loop.run_until_complete() because otherwise the loop will never run; this is how asyncio works by default (and pytest does nothing to make it work differently).
With pytest-asyncio, your test works with the well-known async / await syntax:
async def test_this_is_an_async_test(event_loop):
    result = await my_async_function(param1, param2, param3)
    assert result == 'expected'
pytest-asyncio in this case wraps the run_until_complete() call above (summarizing heavily), so the event loop will run and be available for your async code to use.
Please note: the event_loop parameter in the second case is not even necessary here; pytest-asyncio makes one available for your test.
On the other hand, when you are testing your Tornado app, you usually need an HTTP server up and running during your tests, listening on a well-known port, etc., so the usual approach is to write fixtures that provide an HTTP server, a base_url (usually http://localhost: plus an unused port), and so on.
pytest-tornado comes up as a very useful one, as it offers several of these fixtures for you: http_server, http_client, unused_port, base_url, etc.
Also worth mentioning: it provides the pytest.mark.gen_test() feature, which converts any standard test to use coroutines via yield, and can even assert that it will run within a given timeout, like this:
@pytest.mark.gen_test(timeout=3)
def test_fetch_my_data(http_client, base_url):
    result = yield http_client.fetch('/'.join([base_url, 'result']))
    assert len(result) == 1000
But this way it does not support async / await, and actually only Tornado's IOLoop will be available via the io_loop fixture (although Tornado's IOLoop uses asyncio underneath by default since Tornado 5.0), so you'd need to combine both pytest.mark.gen_test and pytest.mark.asyncio, but in the right order! (which is where I failed).
Once I better understood what the problem could be, this was the next approach:
@pytest.mark.gen_test(timeout=2)
@pytest.mark.asyncio
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
But this is utterly wrong, if you understand how Python's decorator wrappers work. With the code above, pytest-asyncio's coroutine is wrapped in a pytest-tornado yield-based gen.coroutine, which won't get the event loop running... so my tests were still failing with the same problem. Any query to the database was returning a Future waiting for an event loop to be running.
My updated code, once I realized the silly mistake:
@pytest.mark.asyncio
@pytest.mark.gen_test(timeout=2)
async def test_score_returns_204_empty(http_client, base_url):
    score_url = '/'.join([base_url, URL_PREFIX, 'score'])
    token = create_token('test', scopes=['score:get'])
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
    }
    response = await http_client.fetch(score_url, headers=headers, raise_error=False)
    assert response.code == 204
In this case, the gen.coroutine is wrapped inside the pytest-asyncio coroutine, and the event_loop runs the coroutines as expected!
But there was still a minor issue that took me a little while to realize, too: pytest-asyncio's event_loop fixture creates a new event loop for every test, while pytest-tornado also creates a new IOLoop. So the tests were still failing, but this time with a different error.
The conftest.py file now looks like this; note that I've re-declared the event_loop fixture to use the event loop from pytest-tornado's io_loop fixture itself (recall that pytest-tornado creates a new io_loop for each test function):
@pytest.fixture(scope='function')
def event_loop(io_loop):
    loop = io_loop.current().asyncio_loop
    yield loop
    loop.stop()

@pytest.fixture(scope='function')
async def db():
    dbe = await setup_db()
    yield dbe

@pytest.fixture
def app(db):
    """
    Returns a valid testing Tornado Application instance.
    :return:
    """
    app = make_app(db)
    settings.JWT_SECRET = 'its_secret_one'
    yield app
Now all my tests work. I'm a happy man again, and very proud of my better understanding of the asyncio way of life. Cool!

Can't extract data from RESTClient response

I am writing my first Groovy script, where I am calling a REST API.
I have the following call:
def client = new RESTClient( 'http://myServer:9000/api/resources/?format=json' )
That returns:
[[msr:[[data:{"level":"OK"}]], creationDate:2017-02-14T16:44:11+0000, date:2017-02-14T16:46:39+0000, id:136]]
I am trying to get the level field, like this:
def level_value = client.get( path : 'msr/data/level' )
However, when I print the value of the variable obtained:
println level_value.getData()
I get the whole JSON object instead of the field:
[[msr:[[data:{"level":"OK"}]], creationDate:2017-02-14T16:44:11+0000, date:2017-02-14T16:46:39+0000, id:136]]
So, what am I doing wrong?
I haven't looked at the docs for RESTClient, but as Tim notes, you seem to have a bit of confusion around the REST client instance vs. the response object vs. the JSON data. Something along the lines of:
def client = new RESTClient('http://myServer:9000/api/resources/?format=json')
def response = client.get(path: 'msr/data/level')
def level = response.data[0].msr[0].data.level
might get you your value. The main point here is that client is an instance of RESTClient, response is a response object representing the HTTP response from the server, and response.data contains the parsed JSON payload of the response.
You would need to experiment with the expression on the last line to pull out the 'level' value.
