Clarification requested for decorators - python-3.x

I have some big doubts about decorators.
Below is a simple test code, taken from a beginner's book.
# -*- coding: Latin-1 -*-
import sys

# The decorator.
def my_decorator(modified_function):
    """My first decorator."""
    # Modifying modified_function.
    def modifying_function():
        print("--> before")
        ret = modified_function()
        print("<-- after={}".format(ret))
        return ret
    print("Decorator is called with modified_function '{0}'".format(modified_function))
    # Return the modifying function.
    return modifying_function

@my_decorator
def hello_world():
    """Decorated function."""
    print("!! That's all folks !!")
    return 14
print("Python version = {}".format(sys.version))
# We try to call hello_world(), but the decorator is called.
hello_world()
print("--------------------------------------------------------------")
my_decorator(hello_world)
print("--------------------------------------------------------------")
# Found this other way on the WEB, but does not work for me
my_hello = my_decorator(hello_world)
my_hello()
print("--------------------------------------------------------------")
For this code, the output is rather strange to me.
Maybe these are stupid questions, but ...
Decorator is called with modified_function '<function hello_world at 0x0000011D5FDCDEA0>'
Python version = 3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:57:36) [MSC v.1900 64 bit (AMD64)]
--> before
!! That's all folks !!
<-- after=14
--------------------------------------------------------------
Decorator is called with modified_function '<function my_decorator.<locals>.modifying_function at 0x0000011D5FDCDF28>'
--------------------------------------------------------------
Decorator is called with modified_function '<function my_decorator.<locals>.modifying_function at 0x0000011D5FDCDF28>'
--> before
--> before
!! That's all folks !!
<-- after=14
<-- after=14
--------------------------------------------------------------
The Python version is printed after the decorator trace.
In the 2nd and 3rd calls, print("Decorator is called with modified_function ...") gives me a strange value for the function; at least, not what I expected.
The "--> ..." and "<-- ..." traces are doubled.
Any clarification for a newbie is welcome.

Your confusion comes from the fact that you're applying the decorator manually, but you also used Python's special syntax to apply it when defining the function: @my_decorator. You usually don't want to do both of those for the same function.
Try this example and it might help you understand things better:
# after defining my_decorator as before...

def foo():  # define a function for testing with manual decorator application
    print("foo")

print("----")
foo()  # test unmodified function
print("----")
decorated_foo = my_decorator(foo)  # manually apply the decorator (assigning to a new name)
print("----")
decorated_foo()  # test decorated function
print("----")
foo()  # confirm that the original function is still the same
print("----")
foo = my_decorator(foo)  # apply the decorator again, replacing the original name this time
print("----")
foo()  # see that the original function has been replaced
print("----")

@my_decorator  # this line applies the decorator for you (like the "foo = ..." line above)
def bar():
    print("bar")

print("----")
bar()  # note that the decorator has already been applied to bar
print("----")
double_decorated_bar = my_decorator(bar)  # apply the decorator again, just for fun
print("----")
double_decorated_bar()  # get doubled output from the doubled decorators
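To condense the idea demonstrated above into a self-contained sketch (shout and greet are hypothetical names, not from this thread): the @ syntax is nothing more than shorthand for rebinding a name to whatever the decorator returns.

```python
def shout(fn):
    # Decorator: returns a new function that upper-cases fn's result.
    def wrapper():
        return fn().upper()
    return wrapper

@shout
def greet():
    return "hi"

def greet2():
    return "hi"
greet2 = shout(greet2)  # exactly what @shout did for greet

print(greet())   # HI
print(greet2())  # HI
```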

Oh ... OK.
Light came when I saw:
double_decorated_bar = my_decorator(bar) # apply the decorator again, just for fun
I did not pay attention that the @my_decorator line was absent from that example.
Thanks again for the clarification.

Related

Should python unittest mock calls copy passed arguments by reference?

The following code snippets were all run with Python 3.7.11.
I came across some unexpected behavior with the unittest.mock Mock class. I wrote a unit test where a Mock was called with the specific arguments that the real object being mocked was expected to be called with at runtime. The tests passed, so I pushed a build to a real device, only to discover a bug where not all of the arguments were being passed to the real object's method. I quickly found my bug, but at first it looked like my unit test should have failed when it instead succeeded. Below are some simplified examples of that situation. What I am curious about is whether this behavior should be considered a bug, or an error in understanding on my part.
from unittest.mock import Mock

mock = Mock()
l = [1]
mock(l)
l.append(2)
mock.call_args
# Output: call([1, 2])
# rather than call([1])
id(l) == id(mock.call_args[0][0])
# Output: True
# This means the l object and the latest call_args reference occupy the same space in memory
This store-by-reference behavior is confusing because, when a function is called directly, one does not expect it to see values that were appended to an argument after the call.
def print_var(x):
    print(x)

l = [1]
print_var(l)
# Output: [1]
l.append(2)
# print_var is never called with [1, 2]
Would it make sense for call_args to use deepcopy to emulate the behavior I was expecting?
Found a solution in the unittest.mock examples:
from unittest.mock import MagicMock
from copy import deepcopy

class CopyingMock(MagicMock):
    def __call__(self, *args, **kwargs):
        args = deepcopy(args)
        kwargs = deepcopy(kwargs)
        return super().__call__(*args, **kwargs)

c = CopyingMock(return_value=None)
arg = set()
c(arg)
arg.add(1)
c.assert_called_with(set())
c.assert_called_with(arg)
# Traceback (most recent call last):
# ...
# AssertionError: Expected call: mock({1})
# Actual call: mock(set())

Confusion about UnboundLocalError: local variable 'number_sum' referenced before assignment when manually applying decorator

I was experimenting with applying a memoization decorator to recursive functions using the standard Python @decorator notation, which worked beautifully. According to much of the documentation I've read, these two code examples are supposed to be equivalent:
# letting Python's decorator mechanism take care of wrapping your function with the decorator function
@decorator
def func(): . . .

print(func(arg))

AND

# manually wrapping the decorator function around the function
func = decorator(func)
print(func(arg))
When I do this, however, I get an UnboundLocalError on func.
If I change the code to
new_name = decorator(func)
print(new_name(arg))
The code runs, but the decorator is only applied to the first call and not to any of the recursive calls (this did NOT surprise me), but I don't get any error messages either.
What seems weird to me, however, is the original error message itself.
If I experiment further and try the following code:
new_name = decorator(func)
func = new_name
print(func(arg))
I get the same error ON THE SAME LINE AS BEFORE (????)
In fact, if I form a chain of these assignments, with func = several_names_later
I still get the same error on the ORIGINAL line
Can anyone explain what is going on, and why I'm getting the error, and why it seems that the error is disconnected from the location of the variable in question?
As requested, here is most of the actual code (with just one of the recursive functions), hope it's not too much . . .
import functools

def memoize(fn):
    cache = dict()
    @functools.wraps(fn)
    def memoizer(*args):
        print(args)
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return memoizer

#@memoize
def number_sum(n):
    '''Returns the sum of the first n numbers'''
    assert(n >= 0), 'n must be >= 0'
    if n == 0:
        return 0
    else:
        return n + number_sum(n-1)

def main():
    # Book I'm reading claims this can be done but I get an error instead:
    number_sum = memoize(number_sum) # this is the flagged line
    print(number_sum(300))
    # UnboundLocalError: local variable 'number_sum' referenced before assignment

    # When I do this instead, it only applies the decorator to the first call, as I suspected,
    # but works otherwise: no errors and correct - but slow - results.
    # No complaints about number_sum
    wrapped_number_sum = memoize(number_sum)
    print(wrapped_number_sum(300))

    # This is what is odd:
    # When I do any version of this, I get the same error as above, but always on
    # the original line, flagging number_sum as the problem
    wrapped_number_sum = memoize(number_sum) # the flagged line, no matter what
    number_sum = wrapped_number_sum
    print(number_sum(300))

OR even:

    wrapped_number_sum = memoize(number_sum) # still the flagged line
    another_variable = wrapped_number_sum
    number_sum = another_variable
    print(number_sum(300))

if __name__ == '__main__':
    main()
I am more than a little mystified at this.
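For readers landing here, the behavior can be reproduced with a minimal sketch (hypothetical names, not the book's code): an assignment anywhere inside a function makes that name local to the entire function, so Python flags the first *read* of the name, which is why the reported line never moves no matter how many aliases are chained in afterwards.

```python
def memoize_stub(x):  # hypothetical stand-in for a decorator like memoize
    return x

def main():
    wrapped = memoize_stub(data)  # first read of 'data': the error is reported here ...
    data = [1, 2, 3]              # ... because this assignment makes 'data' local to main()

data = [1, 2, 3]  # a module-level 'data' exists, but the local name shadows it

try:
    main()
except UnboundLocalError as exc:
    print(exc)  # wording varies by Python version
```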

Using singledispatch with custom class (CPython 3.8.2)

Let's say I want to register functions for each class in a module named 'MacroMethods'. So I've set up singledispatch after seeing it in 'Fluent Python', like this:
@singledispatch
def addMethod(self, obj):
    print(f'Wrong Object {str(obj)} supplied.')
    return obj
...
@addMethod.register(MacroMethods.Wait)
def _(self, obj):
    print('adding object wait')
    obj.delay = self.waitSpin.value
    obj.onFail = None
    obj.onSuccess = None
    return obj
The desired behavior is: when an instance of class 'MacroMethods.Wait' is given as the argument, singledispatch runs the function registered for that class type.
Instead, it runs the default function rather than the registered one.
>>> Wrong Object <MacroMethods.Wait object at 0x0936D1A8> supplied.
However, type() clearly shows the instance is of class 'MacroMethods.Wait', and the registry's dict_keys also contains it.
>>> dict_keys([<class 'object'>, ..., <class 'MacroMethods.Wait'>])
I suspect all the custom classes I made count as the 'object' type and so don't run the desired functions.
Any way to solve this problem? The entire code is here.
Update
I've managed to mimic singledispatch's behavior as follows:
from functools import wraps

def state_deco(func_main):
    """
    Decorator that mimics singledispatch for ease of interaction expansions.
    """
    # assuming no args are needed for interaction functions.
    func_main.dispatch_list = {}  # collect decorated functions

    @wraps(func_main)
    def wrapper(target):
        # dispatch target to the destination interaction function.
        try:
            # find and run the callable registered for target's type
            return func_main.dispatch_list[type(target)]()
        except KeyError:
            # If no matching case is found, the main decorated function runs instead.
            return func_main()

    def register(target):
        # A decorator that registers the decorated function with the main decorated function.
        def decorate(func_sub):
            func_main.dispatch_list[target] = func_sub
            def register_wrapper(*args, **kwargs):
                return func_sub(*args, **kwargs)
            return register_wrapper
        return decorate

    wrapper.register = register
    return wrapper
Used like:
@state_deco
def general():
    return "A's reaction to undefined others."

@general.register(StateA)
def _():
    return "A's reaction of another A"

@general.register(StateB)
def _():
    return "A's reaction of B"
But it's still not singledispatch, so I figured it might be inappropriate to post this as an answer.
I wanted to do something similar and had the same trouble. It looks like we have bumped into a Python bug. I found a write-up that describes this situation.
Here is the link to the Python bug tracker:
Python 3.7 breaks on singledispatch_function.register(pseudo_type), which Python 3.6 accepted
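A side note on the original symptom, offered as my own reading rather than the thread's conclusion: plain functools.singledispatch dispatches on the type of the *first* positional argument, and for a method that is self, so every call can fall through to the default. Since the question mentions CPython 3.8, a sketch using functools.singledispatchmethod (which dispatches on the first argument after self) may be the simpler route; Wait and MethodAdder here are hypothetical stand-ins for MacroMethods.Wait and the OP's class:

```python
from functools import singledispatchmethod

class Wait:  # hypothetical stand-in for MacroMethods.Wait
    pass

class MethodAdder:  # hypothetical container class
    @singledispatchmethod
    def addMethod(self, obj):
        # default: unknown type
        print(f'Wrong Object {obj} supplied.')
        return obj

    @addMethod.register
    def _(self, obj: Wait):
        # dispatched on the type of obj, not on self
        print('adding object wait')
        return obj

adder = MethodAdder()
adder.addMethod(Wait())  # runs the Wait-specific registration
adder.addMethod(42)      # falls back to the default
```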

python3 mock member variable get multiple times

I have a use case where I need to mock a member variable but I want it to return a different value every time it is accessed.
Example:
import pytest
from unittest.mock import patch

def run_test():
    myClass = MyDumbClass()
    for i in range(2):
        print(myClass.response)

class MyDumbClass():
    def __init__(self):
        self.response = None

@pytest.mark.parametrize("responses", [[200, 201]])
@patch("blah.MyDumbClass")
def test_stuff(mockMyDumbClass, responses):
    run_test()
    assert stuff
What I am hoping for here is that, in the run_test method, the first iteration will print 200 and the next will print 201. Is this possible? I've been looking through the unittest and pytest documentation but can't find anything about mocking a member variable in this fashion.
I just started learning pytest and unittest with Python 3, so forgive me if the style isn't the best.
If you wrap myDumbClass.response in a get function - say get_response() - then you can use the side_effect parameter of the mock class.
When side_effect is set to an iterable, each successive call to the mocked method returns the next value from it.
For example you can do
import pytest
from unittest import mock

def run_test():
    myClass = MyDumbClass()
    for i in range(2):
        print(myClass.get_response())

class MyDumbClass():
    def __init__(self):
        self.response = None

    def get_response(self):
        return self.response

@pytest.mark.parametrize("responses", [([200, 201])])
def test_stuff(responses):
    with mock.patch('blah.MyDumbClass.get_response', side_effect=responses):
        run_test()
    assert False
Result
----------------------------------- Captured stdout call ------------------------------------------------------------
200
201
Edit
No need to patch via a context manager (e.g. with mock.patch). You can patch via a decorator in pretty much the same way. For example, this works fine:

@patch('blah.MyDumbClass.get_response', side_effect=[200, 201])
def test_stuff(mockMyDumbClass):
    run_test()
    assert False
----------------------------------- Captured stdout call ------------------------------------------------------------
200
201
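For completeness, a sketch of an alternative that avoids adding a getter, using unittest.mock.PropertyMock. This restructures MyDumbClass slightly (a class-level response attribute, so patch.object can find the name to replace); treat the names as hypothetical:

```python
from unittest.mock import PropertyMock, patch

class MyDumbClass:
    response = None  # class-level attribute so patch.object can replace it

def run_test():
    myClass = MyDumbClass()
    return [myClass.response for _ in range(2)]

def test_stuff():
    # extra keyword arguments to patch.object are passed on to the PropertyMock
    with patch.object(MyDumbClass, "response",
                      new_callable=PropertyMock, side_effect=[200, 201]):
        assert run_test() == [200, 201]

test_stuff()
```

Each read of .response goes through the PropertyMock, so side_effect yields 200 on the first access and 201 on the second; once the patch exits, the plain class attribute is restored.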

Wrapping all possible method calls of a class in a try/except block

I'm trying to wrap all methods of an existing Class (not of my creation) into a try/except suite. It could be any Class, but I'll use the pandas.DataFrame class here as a practical example.
So if the invoked method succeeds, we simply move on. But if it should generate an exception, it is appended to a list for later inspection/discovery (although the below example just issues a print statement for simplicity).
(Note that the kinds of data-related exceptions that can occur when a method on the instance is invoked, isn't yet known; and that's the reason for this exercise: discovery).
This post was quite helpful (particularly @martineau's Python 3 answer), but I'm having trouble adapting it. Below, I expected the second call to the (wrapped) info() method to emit print output but, sadly, it doesn't.
#!/usr/bin/env python3
import functools, sys, types, pandas

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('Exception: %r' % (sys.exc_info(),))  # Something trivial.
            # <Actual code would append that exception info to a list>.
    return wrapper

class MetaClass(type):
    def __new__(mcs, class_name, base_classes, classDict):
        newClassDict = {}
        for attributeName, attribute in classDict.items():
            if type(attribute) == types.FunctionType:  # Replace it with a
                attribute = method_wrapper(attribute)  # decorated version.
            newClassDict[attributeName] = attribute
        return type.__new__(mcs, class_name, base_classes, newClassDict)

class WrappedDataFrame2(MetaClass('WrappedDataFrame',
                                  (pandas.DataFrame, object,), {}),
                        metaclass=type):
    pass

print('Unwrapped pandas.DataFrame().info():')
pandas.DataFrame().info()
print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
print()
This outputs:
Unwrapped pandas.DataFrame().info():
<class 'pandas.core.frame.DataFrame'>
Index: 0 entries
Empty DataFrame
Wrapped pandas.DataFrame().info(): <-- Missing print statement after this line.
<class '__main__.WrappedDataFrame2'>
Index: 0 entries
Empty WrappedDataFrame2
In summary,...
>>> unwrapped_object.someMethod(...)
# Should be mirrored by ...
>>> wrapping_object.someMethod(...)
# Including signature, docstring, etc. (i.e. all attributes); except that it
# executes inside a try/except suite (so I can catch exceptions generically).
long time no see. ;-) In fact it's been such a long time you may no longer care, but in case you (or others) do...
Here's something I think will do what you want. I've never answered your question before now because I don't have pandas installed on my system. However, today I decided to see if there was a workaround for not having it and created a trivial dummy module to mock it (only as far as I needed). Here's the only thing in it:
mockpandas.py:

""" Fake pandas module. """

class DataFrame:
    def info(self):
        print('pandas.DataFrame.info() called')
        raise RuntimeError('Exception raised')
Below is code that seems to do what you need by implementing @Blckknght's suggestion of iterating through the MRO (but it ignores the limitations noted in his answer that could arise from doing it that way). It ain't pretty, but as I said, it seems to work with at least the mocked pandas library I created.
import functools
import mockpandas as pandas  # mock the library
import sys
import traceback
import types

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('An exception occurred in the wrapped method {}.{}()'.format(
                args[0].__class__.__name__, method.__name__))
            traceback.print_exc(file=sys.stdout)
            # (Actual code would append that exception info to a list)
    return wrapper

class MetaClass(type):
    def __new__(meta, class_name, base_classes, classDict):
        """ See if any of the base classes were created by the with_metaclass() function. """
        marker = None
        for base in base_classes:
            if hasattr(base, '_marker'):
                marker = getattr(base, '_marker')  # remember class name of temp base class
                break  # quit looking
        if class_name == marker:  # temporary base class being created by with_metaclass()?
            return type.__new__(meta, class_name, base_classes, classDict)
        # Temporarily create an unmodified version of the class so its MRO can be used below.
        TempClass = type.__new__(meta, 'TempClass', base_classes, classDict)
        newClassDict = {}
        for cls in TempClass.mro():
            for attributeName, attribute in cls.__dict__.items():
                if isinstance(attribute, types.FunctionType):
                    # Convert it to a decorated version.
                    attribute = method_wrapper(attribute)
                newClassDict[attributeName] = attribute
        return type.__new__(meta, class_name, base_classes, newClassDict)

def with_metaclass(meta, classname, bases):
    """ Create a class with the supplied bases and metaclass, that has been tagged with a
        special '_marker' attribute.
    """
    return type.__new__(meta, classname, bases, {'_marker': classname})

class WrappedDataFrame2(
        with_metaclass(MetaClass, 'WrappedDataFrame', (pandas.DataFrame, object))):
    pass

print('Unwrapped pandas.DataFrame().info():')
try:
    pandas.DataFrame().info()
except RuntimeError:
    print('  RuntimeError exception was raised as expected')

print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
Output:
Unwrapped pandas.DataFrame().info():
pandas.DataFrame.info() called
RuntimeError exception was raised as expected
Wrapped pandas.DataFrame().info():
Calling: WrappedDataFrame2.info()...
pandas.DataFrame.info() called
An exception occurred in the wrapped method WrappedDataFrame2.info()
Traceback (most recent call last):
File "test.py", line 16, in wrapper
return method(*args, **kwargs)
File "mockpandas.py", line 9, in info
raise RuntimeError('Exception raised')
RuntimeError: Exception raised
As the above illustrates, the method_wrapper() decorated version is being used by the methods of the wrapped class.
Your metaclass only applies your decorator to the methods defined in classes that are instances of it. It doesn't decorate inherited methods, since they're not in the classDict.
I'm not sure there's a good way to make it work. You could try iterating through the MRO and wrapping all the inherited methods as well as your own, but I suspect you'd get into trouble if there were multiple levels of inheritance after you start using MetaClass (as each level will decorate the already decorated methods of the previous class).
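One possible mitigation for that double-decoration problem, offered as a sketch rather than a tested fix for the metaclass above: tag each wrapper with a marker attribute and have method_wrapper() return already-tagged functions unchanged.

```python
import functools

def method_wrapper(method):
    # If this function is already one of our wrappers, don't wrap it again.
    if getattr(method, "_already_wrapped", False):
        return method

    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        try:
            return method(*args, **kwargs)
        except Exception as exc:
            print('Exception in {}(): {!r}'.format(method.__name__, exc))

    wrapper._already_wrapped = True  # marker checked above
    return wrapper

def risky():
    raise RuntimeError('boom')

once = method_wrapper(risky)
twice = method_wrapper(once)  # second application is a no-op
print(once is twice)  # True
```

The marker must be set after @functools.wraps runs, since wraps copies the wrapped function's __dict__ into the wrapper.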
