Should python unittest mock calls copy passed arguments by reference? - python-3.x

The following code snippets were all run with python 3.7.11.
I came across some unexpected behavior with the unittest.mock Mock class. I wrote a unit test in which a Mock was called with the specific arguments that the real object being mocked was expected to receive at runtime. The tests passed, so I pushed a build to a real running device, only to discover a bug: not all of the arguments were actually being passed to the real object's method. I found my bug quickly, but it initially looked as though my unit test should have failed when it instead succeeded. Below are some simplified examples of the situation. What I am curious about is whether this behavior should be considered a bug, or an error in my understanding.
from unittest.mock import Mock
mock = Mock()
l = [1]
mock(l)
l.append(2)
mock.call_args
# Output: call([1, 2])
# rather than call([1])
id(l) == id(mock.call_args[0][0])
# Output: True
# i.e. l and the argument recorded in call_args are the same object in memory
This store-by-reference behavior is confusing because when a function is called in the same procedure, you would not expect the function to see items appended to the argument object after the call.
def print_var(x):
    print(x)

l = [1]
print_var(l)
# Output: [1]
l.append(2)
# print_var is never called with [1, 2]
Would it make sense for call_args to use deepcopy to emulate the behavior I was expecting?

Found a solution in the unittest.mock documentation examples
from unittest.mock import MagicMock
from copy import deepcopy

class CopyingMock(MagicMock):
    def __call__(self, *args, **kwargs):
        args = deepcopy(args)
        kwargs = deepcopy(kwargs)
        return super().__call__(*args, **kwargs)

c = CopyingMock(return_value=None)
arg = set()
c(arg)
arg.add(1)
c.assert_called_with(set())
c.assert_called_with(arg)
# Traceback (most recent call last):
# ...
# AssertionError: Expected call: mock({1})
# Actual call: mock(set())
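If subclassing MagicMock feels heavier than needed, another option along the same lines is to snapshot the arguments yourself at call time via side_effect. This is only a minimal sketch of the idea (the recorded_calls list and snapshotting_mock name are illustrative, not from the original answer):

from copy import deepcopy
from unittest.mock import Mock

recorded_calls = []

# side_effect runs with the same arguments as the call itself, so it can
# deep-copy them before the caller gets a chance to mutate anything.
snapshotting_mock = Mock(
    side_effect=lambda *a, **kw: recorded_calls.append(deepcopy((a, kw))))

l = [1]
snapshotting_mock(l)
l.append(2)
print(recorded_calls)
# [(([1],), {})] -- the snapshot taken at call time, not [1, 2]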

Related

Using singledispatch with custom class (CPython 3.8.2)

Let's say I want to set up functions for each class in a module named 'MacroMethods'. So I set up singledispatch after seeing it in 'Fluent Python', like this:
@singledispatch
def addMethod(self, obj):
    print(f'Wrong Object {str(obj)} supplied.')
    return obj
...

@addMethod.register(MacroMethods.Wait)
def _(self, obj):
    print('adding object wait')
    obj.delay = self.waitSpin.value
    obj.onFail = None
    obj.onSuccess = None
    return obj
The desired behavior is that when an instance of class 'MacroMethods.Wait' is given as the argument, singledispatch runs the function registered for that class type.
Instead, it runs the default function rather than the registered one.
>>> Wrong Object <MacroMethods.Wait object at 0x0936D1A8> supplied.
However, type() clearly shows the instance is class 'MacroMethods.Wait', and the registry's dict_keys also contains it.
>>> dict_keys([<class 'object'>, ..., <class 'MacroMethods.Wait'>])
I suspect all the custom classes I made are being dispatched as plain 'object', so the desired functions never run.
Is there any way to solve this problem? The entire code is here.
Update
I've managed to mimic singledispatch's behavior as follows:
from functools import wraps

def state_deco(func_main):
    """
    Decorator that mimics singledispatch for ease of interaction expansions.
    """
    # assuming no args are needed for interaction functions.
    func_main.dispatch_list = {}  # collect decorated functions

    @wraps(func_main)
    def wrapper(target):
        # dispatch target to destination interaction function.
        nonlocal func_main
        try:
            # find and run callable for target
            return func_main.dispatch_list[type(target)]()
        except KeyError:
            # If no matching case is found, the main decorated function runs instead.
            return func_main()

    def register(target):
        # A decorator that registers the decorated function with the main decorated function.
        def decorate(func_sub):
            nonlocal func_main
            func_main.dispatch_list[target] = func_sub

            def register_wrapper(*args, **kwargs):
                return func_sub(*args, **kwargs)

            return register_wrapper
        return decorate

    wrapper.register = register
    return wrapper
Used like:
@state_deco
def general():
    return "A's reaction to undefined others."

@general.register(StateA)
def _():
    return "A's reaction of another A"

@general.register(StateB)
def _():
    return "A's reaction of B"
But it's still not singledispatch, so I feel it might be inappropriate to post this as an answer.
I wanted to do something similar and had the same trouble. It looks like we have bumped into a Python bug. I found a write-up that describes this situation.
Here is the link to the Python Bug Tracker.
Python 3.7 breaks on singledispatch_function.register(pseudo_type), which Python 3.6 accepted
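For reference, here is a minimal sketch (using a hypothetical stand-in class rather than the asker's MacroMethods module) of how functools.singledispatch is documented to behave on Python 3.7+. Note that plain singledispatch always dispatches on the type of the first positional argument, so when it is applied to methods that take self, it is self that gets inspected rather than obj:

from functools import singledispatch

class Wait:  # hypothetical stand-in for MacroMethods.Wait
    pass

@singledispatch
def add_method(obj):
    print(f'Wrong Object {obj!r} supplied.')
    return obj

@add_method.register(Wait)
def _(obj):
    print('adding object wait')
    return obj

add_method(Wait())  # -> adding object wait
add_method(123)     # -> Wrong Object 123 supplied.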

python3 mock member variable get multiple times

I have a use case where I need to mock a member variable but I want it to return a different value every time it is accessed.
Example:
def run_test():
    myClass = MyDumbClass()
    for i in range(2):
        print(myClass.response)

class MyDumbClass():
    def __init__(self):
        self.response = None

@pytest.mark.parametrize("responses", [[200, 201]])
@patch("blah.MyDumbClass")
def test_stuff(mockMyDumbClass, responses):
    run_test()
    assert stuff
What I am hoping for here is that in run_test the first iteration will print 200 and the next will print 201. Is this possible? I've been looking through the unittest and pytest documentation but can't find anything about mocking a member variable in this fashion.
Just started learning pytest and unittest with python3 so forgive me if the style isn't the best.
If you wrap MyDumbClass.response in a getter - say get_response() - then you can use the side_effect parameter of the mock class.
When side_effect is set to an iterable, the mocked method returns the next value from that iterable each time it is called.
For example you can do
import pytest
from unittest import mock

def run_test():
    myClass = MyDumbClass()
    for i in range(2):
        print(myClass.get_response())

class MyDumbClass():
    def __init__(self):
        self.response = None

    def get_response(self):
        return self.response

@pytest.mark.parametrize("responses", [([200, 201])])
def test_stuff(responses):
    with mock.patch('blah.MyDumbClass.get_response', side_effect=responses):
        run_test()
    assert False
Result
----------------------------------- Captured stdout call ------------------------------------------------------------
200
201
Edit
There is no need to patch via a context manager (e.g. with mock.patch); you can patch via a decorator in pretty much the same way. For example, this works fine:
@patch('blah.MyDumbClass.get_response', side_effect=[200, 201])
def test_stuff(mockMyDumbClass):
    run_test()
    assert False
----------------------------------- Captured stdout call ------------------------------------------------------------
200
201
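If you would rather not introduce a getter, a PropertyMock with side_effect can also return a different value on each attribute access. This is only a minimal sketch under the assumption that the attribute can be patched at class level (the Thing class and run function below are hypothetical stand-ins, not the asker's code):

from unittest.mock import PropertyMock, patch

class Thing:
    @property
    def response(self):
        return None  # real implementation would go here

def run():
    t = Thing()
    return [t.response for _ in range(2)]

def test_run():
    # Each read of t.response consumes the next value from side_effect.
    with patch.object(Thing, "response", new_callable=PropertyMock,
                      side_effect=[200, 201]):
        assert run() == [200, 201]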

Clarification requested for decorators

I have some big doubts about decorators.
Below is some simple test code, taken from a beginner's book.
# -*- coding: Latin-1 -*-
import sys

# The decorator.
def my_decorator(modified_function):
    """My first decorator."""
    # Modifying modified_function.
    def modifying_function():
        print("--> before")
        ret = modified_function()
        print("<-- after={}".format(ret))
        return ret
    print("Decorator is called with modified_function '{0}'".format(modified_function))
    # Return the modifying function.
    return modifying_function

@my_decorator
def hello_world():
    """Decorated function."""
    print("!! That's all folks !!")
    return (14)

print("Python version = {}".format(sys.version))
# We try to call hello_world(), but the decorator is called.
hello_world()
print("--------------------------------------------------------------")
my_decorator(hello_world)
print("--------------------------------------------------------------")
# Found this other way on the WEB, but does not work for me
my_hello = my_decorator(hello_world)
my_hello()
print("--------------------------------------------------------------")
For this code, the output is rather strange to me.
Maybe it's a stupid question, but ...
Decorator is called with modified_function '<function hello_world at 0x0000011D5FDCDEA0>'
Python version = 3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:57:36) [MSC v.1900 64 bit (AMD64)]
--> before
!! That's all folks !!
<-- after=14
--------------------------------------------------------------
Decorator is called with modified_function '<function my_decorator.<locals>.modifying_function at 0x0000011D5FDCDF28>'
--------------------------------------------------------------
Decorator is called with modified_function '<function my_decorator.<locals>.modifying_function at 0x0000011D5FDCDF28>'
--> before
--> before
!! That's all folks !!
<-- after=14
<-- after=14
--------------------------------------------------------------
- the Python version is printed after the decorator trace.
- in the 2nd and 3rd calls, print("Decorator is called with modified_function ...") gives me some strange value for the function. At least, not what I expected.
- the traces ("--> ...") and ("<-- ...") are doubled.
Any clarification for a newbie is welcome.
Your confusion has to do with the fact that you're using the decorator manually, but you also used Python's special syntax to apply decorators to functions as you define them: @my_decorator. You usually don't want to do both of those for the same function.
Try this example and it might help you understand things better:
# after defining my_decorator as before...
def foo():  # define a function for testing with manual decorator application
    print("foo")

print("----")
foo()  # test unmodified function
print("----")
decorated_foo = my_decorator(foo)  # manually apply the decorator (assigning to a new name)
print("----")
decorated_foo()  # test decorated function
print("----")
foo()  # confirm that the original function is still the same
print("----")
foo = my_decorator(foo)  # apply the decorator again, replacing the original name this time
print("----")
foo()  # see that the original function has been replaced
print("----")

@my_decorator  # this line applies the decorator for you (like the "foo = ..." line above)
def bar():
    print("bar")

print("----")
bar()  # note that the decorator has already been applied to bar
print("----")
double_decorated_bar = my_decorator(bar)  # apply the decorator again, just for fun
print("----")
double_decorated_bar()  # get doubled output from the doubled decorators
Oh ... OK.
Light came when I saw:
double_decorated_bar = my_decorator(bar)  # apply the decorator again, just for fun
I did not pay attention that the @decorator was not there in the example.
Thanks again for the clarification.
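One small follow-up, in case it helps: because the decorator replaces hello_world with modifying_function, the decorated function's __name__ and docstring change too. functools.wraps (which other snippets on this page already use) preserves them. A minimal sketch, not part of the original exchange:

import functools

def my_decorator(modified_function):
    @functools.wraps(modified_function)  # copy __name__, __doc__, etc. onto the wrapper
    def modifying_function():
        print("--> before")
        ret = modified_function()
        print("<-- after={}".format(ret))
        return ret
    return modifying_function

@my_decorator
def hello_world():
    """Decorated function."""
    return 14

print(hello_world.__name__)  # 'hello_world' rather than 'modifying_function'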

Python, mocking and wrapping methods without instantiating objects

I want to mock a method of a class and use wraps, so that it is actually called, but I can inspect the arguments passed to it. I have seen in several places (here, for example) that the usual way to do that is as follows (adapted to show my point):
from unittest import TestCase
from unittest.mock import patch

class Potato(object):
    def foo(self, n):
        return self.bar(n)

    def bar(self, n):
        return n + 2

class PotatoTest(TestCase):
    spud = Potato()

    @patch.object(Potato, 'foo', wraps=spud.foo)
    def test_something(self, mock):
        forty_two = self.spud.foo(n=40)
        mock.assert_called_once_with(n=40)
        self.assertEqual(forty_two, 42)
However, this instantiates the class Potato, in order to bind the mock to the instance method spud.foo.
What I need is to mock the method foo in all instances of Potato, wrapping the mock around the original method. I.e., I need the following:
from unittest import TestCase
from unittest.mock import patch

class Potato(object):
    def foo(self, n):
        return self.bar(n)

    def bar(self, n):
        return n + 2

class PotatoTest(TestCase):
    @patch.object(Potato, 'foo', wraps=Potato.foo)
    def test_something(self, mock):
        self.spud = Potato()
        forty_two = self.spud.foo(n=40)
        mock.assert_called_once_with(n=40)
        self.assertEqual(forty_two, 42)
This of course doesn't work. I get the error:
TypeError: foo() missing 1 required positional argument: 'self'
It works however if wraps is not used, so the problem is not in the mock itself, but in the way it calls the wrapped function. For example, this works (but of course I had to "fake" the returned value, because now Potato.foo is never actually run):
from unittest import TestCase
from unittest.mock import patch

class Potato(object):
    def foo(self, n):
        return self.bar(n)

    def bar(self, n):
        return n + 2

class PotatoTest(TestCase):
    @patch.object(Potato, 'foo', return_value=42)  # , wraps=Potato.foo
    def test_something(self, mock):
        self.spud = Potato()
        forty_two = self.spud.foo(n=40)
        mock.assert_called_once_with(n=40)
        self.assertEqual(forty_two, 42)
This works, but it does not run the original function, which I need to run because the return value is used elsewhere (and I cannot fake it from the test).
Can it be done?
Note: the actual reason behind my needs is that I'm testing a REST API with webtest. From the tests I perform some WSGI requests to some paths, and my framework instantiates some classes and uses their methods to fulfill the request. I want to capture the parameters sent to those methods to make some assertions about them in my tests.
In short, you can't do this using Mock instances alone.
patch.object creates a Mock for the specified class attribute (Potato.foo), i.e. it replaces Potato.foo with a single Mock the moment it is called. Therefore, there is no way to pass instances to the Mock, as the mock is created before any instances are. To my knowledge, getting instance information to the Mock at runtime is also very difficult.
To illustrate:
from unittest.mock import MagicMock, patch

class MyMock(MagicMock):
    def __init__(self, *a, **kw):
        super(MyMock, self).__init__(*a, **kw)
        print('Created Mock instance a={}, kw={}'.format(a, kw))

with patch.object(Potato, 'foo', new_callable=MyMock, wrap=Potato.foo):
    print('no instances created')
    spud = Potato()
    print('instance created')
The output is:
Created Mock instance a=(), kw={'name': 'foo', 'wrap': <function Potato.foo at 0x7f5d9bfddea0>}
no instances created
instance created
I would suggest monkey-patching your class in order to add the Mock to the correct location.
from unittest.mock import MagicMock

class PotatoTest(TestCase):
    def test_something(self):
        old_foo = Potato.foo
        try:
            mock = MagicMock(wraps=Potato.foo, return_value=42)
            Potato.foo = lambda *a, **kw: mock(*a, **kw)
            self.spud = Potato()
            forty_two = self.spud.foo(n=40)
            mock.assert_called_once_with(self.spud, n=40)  # Now needs self instance
            self.assertEqual(forty_two, 42)
        finally:
            Potato.foo = old_foo
Note that using called_with is now slightly different, because the wrapped function is called with the instance as its first argument.
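For completeness, a sketch of an alternative that stays inside patch.object: with autospec=True the patched method keeps its descriptor behaviour, so the recorded call includes the instance, and side_effect can forward to the real implementation. This is only a sketch based on the autospeccing behaviour described in the unittest.mock docs (assuming the Potato class from the question), not the answer's suggested approach:

from unittest import TestCase
from unittest.mock import patch

class PotatoAutospecTest(TestCase):
    def test_something(self):
        # autospec'd methods receive self, and side_effect runs the real code.
        with patch.object(Potato, 'foo', autospec=True,
                          side_effect=Potato.foo) as mock_foo:
            spud = Potato()
            forty_two = spud.foo(n=40)
        mock_foo.assert_called_once_with(spud, n=40)
        self.assertEqual(forty_two, 42)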
Do you control creation of Potato instances, or at least have access to these instances after creating them? You should, else you'd not be able to check particular arg lists.
If so, you can wrap methods of individual instances using
spud = dig_out_a_potato()
with mock.patch.object(spud, "foo", wraps=spud.foo) as mock_spud:
    # do your thing.
    mock_spud.assert_called...
Your question looks identical to python mock - patching a method without obstructing implementation to me. https://stackoverflow.com/a/72446739/9230828 implements what you want (except that it uses a with statement instead of a decorator). wrap_object.py:
# Copyright (C) 2022, Benjamin Drung <bdrung@posteo.de>
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import contextlib
import typing
import unittest.mock

@contextlib.contextmanager
def wrap_object(
    target: object, attribute: str
) -> typing.Generator[unittest.mock.MagicMock, None, None]:
    """Wrap the named member on an object with a mock object.

    wrap_object() can be used as a context manager. Inside the
    body of the with statement, the attribute of the target is
    wrapped with a :class:`unittest.mock.MagicMock` object. When
    the with statement exits the patch is undone.

    The instance argument 'self' of the wrapped attribute is
    intentionally not logged in the MagicMock call. Therefore
    wrap_object() can be used to check all calls to the object,
    but not differentiate between different instances.
    """
    mock = unittest.mock.MagicMock()
    real_attribute = getattr(target, attribute)

    def mocked_attribute(self, *args, **kwargs):
        mock.__call__(*args, **kwargs)
        return real_attribute(self, *args, **kwargs)

    with unittest.mock.patch.object(target, attribute, mocked_attribute):
        yield mock
Then you can write the following unit test:
from unittest import TestCase
from wrap_object import wrap_object

class Potato:
    def foo(self, n):
        return self.bar(n)

    def bar(self, n):
        return n + 2

class PotatoTest(TestCase):
    def test_something(self):
        with wrap_object(Potato, 'foo') as mock:
            self.spud = Potato()
            forty_two = self.spud.foo(n=40)
            mock.assert_called_once_with(n=40)
            self.assertEqual(forty_two, 42)

Wrapping all possible method calls of a class in a try/except block

I'm trying to wrap all methods of an existing Class (not of my creation) into a try/except suite. It could be any Class, but I'll use the pandas.DataFrame class here as a practical example.
So if the invoked method succeeds, we simply move on. But if it should generate an exception, it is appended to a list for later inspection/discovery (although the below example just issues a print statement for simplicity).
(Note that the kinds of data-related exceptions that can occur when a method on the instance is invoked aren't yet known; that's the reason for this exercise: discovery.)
This post was quite helpful (particularly @martineau's Python 3 answer), but I'm having trouble adapting it. Below, I expected the second call to the (wrapped) info() method to emit print output but, sadly, it doesn't.
#!/usr/bin/env python3
import functools, sys, types, pandas

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('Exception: %r' % (sys.exc_info(),))  # Something trivial.
            # <Actual code would append that exception info to a list>.
    return wrapper

class MetaClass(type):
    def __new__(mcs, class_name, base_classes, classDict):
        newClassDict = {}
        for attributeName, attribute in classDict.items():
            if type(attribute) == types.FunctionType:  # Replace it with a
                attribute = method_wrapper(attribute)  # decorated version.
            newClassDict[attributeName] = attribute
        return type.__new__(mcs, class_name, base_classes, newClassDict)

class WrappedDataFrame2(MetaClass('WrappedDataFrame',
                                  (pandas.DataFrame, object,), {}),
                        metaclass=type):
    pass

print('Unwrapped pandas.DataFrame().info():')
pandas.DataFrame().info()

print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
print()
This outputs:
Unwrapped pandas.DataFrame().info():
<class 'pandas.core.frame.DataFrame'>
Index: 0 entries
Empty DataFrame
Wrapped pandas.DataFrame().info(): <-- Missing print statement after this line.
<class '__main__.WrappedDataFrame2'>
Index: 0 entries
Empty WrappedDataFrame2
In summary,...
>>> unwrapped_object.someMethod(...)
# Should be mirrored by ...
>>> wrapping_object.someMethod(...)
# Including signature, docstring, etc. (i.e. all attributes); except that it
# executes inside a try/except suite (so I can catch exceptions generically).
long time no see. ;-) In fact it's been such a long time you may no longer care, but in case you (or others) do...
Here's something I think will do what you want. I've never answered your question before now because I don't have pandas installed on my system. However, today I decided to see if there was a workaround for not having it and created a trivial dummy module to mock it (only as far as I needed). Here's the only thing in it:
mockpandas.py:
""" Fake pandas module. """
class DataFrame:
    def info(self):
        print('pandas.DataFrame.info() called')
        raise RuntimeError('Exception raised')
Below is code that seems to do what you need by implementing @Blckknght's suggestion of iterating through the MRO (but it ignores the limitations noted in his answer that could arise from doing it that way). It ain't pretty, but as I said, it seems to work with at least the mocked pandas library I created.
import functools
import mockpandas as pandas  # mock the library
import sys
import traceback
import types

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('An exception occurred in the wrapped method {}.{}()'.format(
                args[0].__class__.__name__, method.__name__))
            traceback.print_exc(file=sys.stdout)
            # (Actual code would append that exception info to a list)
    return wrapper

class MetaClass(type):
    def __new__(meta, class_name, base_classes, classDict):
        """ See if any of the base classes were created by with_metaclass() function. """
        marker = None
        for base in base_classes:
            if hasattr(base, '_marker'):
                marker = getattr(base, '_marker')  # remember class name of temp base class
                break  # quit looking

        if class_name == marker:  # temporary base class being created by with_metaclass()?
            return type.__new__(meta, class_name, base_classes, classDict)

        # Temporarily create an unmodified version of class so its MRO can be used below.
        TempClass = type.__new__(meta, 'TempClass', base_classes, classDict)

        newClassDict = {}
        for cls in TempClass.mro():
            for attributeName, attribute in cls.__dict__.items():
                if isinstance(attribute, types.FunctionType):
                    # Convert it to a decorated version.
                    attribute = method_wrapper(attribute)
                newClassDict[attributeName] = attribute

        return type.__new__(meta, class_name, base_classes, newClassDict)

def with_metaclass(meta, classname, bases):
    """ Create a class with the supplied bases and metaclass, that has been tagged with a
        special '_marker' attribute.
    """
    return type.__new__(meta, classname, bases, {'_marker': classname})

class WrappedDataFrame2(
        with_metaclass(MetaClass, 'WrappedDataFrame', (pandas.DataFrame, object))):
    pass

print('Unwrapped pandas.DataFrame().info():')
try:
    pandas.DataFrame().info()
except RuntimeError:
    print('  RuntimeError exception was raised as expected')

print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
Output:
Unwrapped pandas.DataFrame().info():
pandas.DataFrame.info() called
RuntimeError exception was raised as expected
Wrapped pandas.DataFrame().info():
Calling: WrappedDataFrame2.info()...
pandas.DataFrame.info() called
An exception occurred in the wrapped method WrappedDataFrame2.info()
Traceback (most recent call last):
File "test.py", line 16, in wrapper
return method(*args, **kwargs)
File "mockpandas.py", line 9, in info
raise RuntimeError('Exception raised')
RuntimeError: Exception raised
As the above illustrates, the method_wrapper() decorated version is being used by the methods of the wrapped class.
Your metaclass only applies your decorator to the methods defined in classes that are instances of it. It doesn't decorate inherited methods, since they're not in the classDict.
I'm not sure there's a good way to make it work. You could try iterating through the MRO and wrapping all the inherited methods as well as your own, but I suspect you'd get into trouble if there were multiple levels of inheritance after you start using MetaClass (as each level will decorate the already decorated methods of the previous class).
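A quick illustration of that first point (a sketch with throwaway class names, not the pandas case): the classDict a metaclass receives only contains names defined directly in the class body, so inherited methods never pass through the wrapping loop.

class Base:
    def inherited(self):
        return "defined on Base"

class ShowDict(type):
    def __new__(mcs, name, bases, class_dict):
        # Only names from the class body show up here -- 'inherited' does not.
        print(sorted(k for k in class_dict if not k.startswith('__')))
        return super().__new__(mcs, name, bases, class_dict)

class Child(Base, metaclass=ShowDict):
    def own(self):
        return "defined on Child"

# prints: ['own']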
