I'd like to force certain methods in child classes to invoke the method they're overriding.
@abstractmethod can require certain methods to be implemented; I'd like a similar behavior (i.e., if the overriding method doesn't call super(), don't execute it and complain to the user).
Example:
class Foo:
@must_call_super
def i_do_things(self):
print('called')
class Good(Foo):
def i_do_things(self):
# super().i_do_things() is called; will run.
super().i_do_things()
print('called as well')
class Bad(Foo):
def i_do_things(self):
# should complain that super().i_do_things isn't called here
print('called as well')
# should work fine
good = Good()
# should error
bad = Bad()
Thanks for sending me down the rabbit hole.
Below is my solution to this problem. It uses a metaclass, the ast module, and some hacking to detect whether a child class calls super().some_func() in its overridden version of some_func.
Core classes
These should be controlled by the developer.
import inspect
import ast
import textwrap
class Analyzer(ast.NodeVisitor):
def __init__(self, ast_sig: str):
self.func_exists = False
self.sig = ast_sig
def visit_Call(self, node):
"""Traverse the ast tree. Once a node's signature matches the given
method call's signature, we consider that the method call exists.
"""
# print(ast.dump(node))
if ast.dump(node) == self.sig:
self.func_exists = True
self.generic_visit(node)
class FooMeta(type):
# _ast_sig_super_methods stores the ast signature of each method whose
# overridden version in an inherited child class must contain a
# `super().method()` call. One can add more methods and their associated
# ast signatures to this dict.
_ast_sig_super_methods = {
'i_do_things': "Call(func=Attribute(value=Call(func=Name(id='super', ctx=Load()), args=[], keywords=[]), attr='i_do_things', ctx=Load()), args=[], keywords=[])",
}
def __new__(cls, name, bases, dct):
# cls = FooMeta
# name = current class name
# bases = any parents of the current class
# dct = namespace dict of the current class
for method, ast_sig in FooMeta._ast_sig_super_methods.items():
if name != 'Foo' and method in dct: # desired method in subclass
source = inspect.getsource(dct[method]) # get source code
formatted_source = textwrap.dedent(source) # correct indentation
tree = ast.parse(formatted_source) # obtain ast tree
analyzer = Analyzer(ast_sig)
analyzer.visit(tree)
if not analyzer.func_exists:
raise RuntimeError(f'super().{method} is not called in {name}.{method}!')
return super().__new__(cls, name, bases, dct)
class Foo(metaclass=FooMeta):
def i_do_things(self):
print('called')
Usage and Effect
This part is written by other people, on whom we want to impose the rule that super().i_do_things() must be called in the overridden version in their inherited classes.
Good
class Good(Foo):
def i_do_things(self):
# super().i_do_things() is called; will run.
super().i_do_things()
print('called as well')
good = Good()
good.i_do_things()
# output:
# called
# called as well
Bad
class Bad(Foo):
def i_do_things(self):
# should complain that super().i_do_things isn't called here
print('called as well')
# Error output:
# RuntimeError: super().i_do_things is not called in Bad.i_do_things!
Secretly Bad
class Good(Foo):
def i_do_things(self):
# super().i_do_things() is called; will run.
super().i_do_things()
print('called as well')
class SecretlyBad(Good):
def i_do_things(self):
# should also complain that super().i_do_things() isn't called
print('called as well')
# Error output:
# RuntimeError: super().i_do_things is not called in SecretlyBad.i_do_things!
Note
Since FooMeta runs when the inherited classes are defined, not when they are instantiated, the error is raised before Bad().i_do_things() or SecretlyBad().i_do_things() is ever called. This is not exactly what the OP asked for, but it achieves the same end goal.
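A quick way to see this, assuming the Foo/FooMeta definitions from above are in scope in a script (inspect.getsource needs a real source file):
try:
    class Bad(Foo):
        def i_do_things(self):
            print('called as well')
except RuntimeError as exc:
    # the class body itself triggers the check; no instance is needed
    print(exc)  # super().i_do_things is not called in Bad.i_do_things!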
To obtain the ast signature of super().i_do_things(), we can uncomment the print statement in Analyzer, analyze the source code of Good.i_do_things, and inspect from there.
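For example, a minimal sketch that dumps every Call node in Good.i_do_things (assuming the Good class from the example above; note the exact ast.dump format can vary slightly between Python versions):
import ast
import inspect
import textwrap

source = textwrap.dedent(inspect.getsource(Good.i_do_things))
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Call):
        # one of these lines is the signature of the super().i_do_things() call
        print(ast.dump(node))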
Related
I am interested in patching a classmethod called _validate in a Schema class, and in having the replacement function use the value of cls and the other arguments.
For context, ArrayHoldingAnyType inherits from Schema, and _validate is called when it is instantiated.
When I try it with the code below, the value for cls is not a class. How do I fix the cls variable?
def test_validate_called_n_times(self):
def replacement_validate(cls, *args):
# code which will return the correct values
with patch.object(Schema, '_validate', new=replacement_validate) as mock_validate:
path_to_schemas = ArrayHoldingAnyType(['a'])
# I will check that the mock was called a certain number of times here with specific inputs
So the problem here was that the classmethod decorator was missing from replacement_validate.
This fixes it:
def test_validate_called_n_times(self):
@classmethod
def replacement_validate(cls, *args):
# code which will return the correct values
with patch.object(Schema, '_validate', new=replacement_validate) as mock_validate:
path_to_schemas = ArrayHoldingAnyType(['a'])
# I will check that the mock was called a certain number of times here with specific inputs
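The underlying reason is ordinary descriptor behaviour rather than anything specific to mock: a plain function assigned to a class attribute binds like an instance method, so its first argument is not the class. A minimal illustration, independent of the Schema/webtest setup in the question (class A and replacement are just for demonstration):
class A:
    pass

def replacement(cls, *args):
    return cls

# A plain function on the class binds as an instance method: cls gets the instance.
A._validate = replacement
print(A()._validate())  # <__main__.A object at 0x...>

# Wrapping it in classmethod restores the expected binding: cls is the class.
A._validate = classmethod(replacement)
print(A()._validate())  # <class '__main__.A'>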
I would like to define a decorator that registers classes by a name given as an argument of my decorator. I have read many examples on Stack Overflow and other sources that show how to derive such (tricky) code, but when adapted to my needs, my code fails to produce the expected result. Here is the code:
import functools
READERS = {}
def register(typ):
def decorator_register(kls):
@functools.wraps(kls)
def wrapper_register(*args, **kwargs):
READERS[typ] = kls
return wrapper_register
return decorator_register
#register(".pdb")
class PDBReader:
pass
#register(".gro")
class GromacsReader:
pass
print(READERS)
This code produces an empty dictionary, while I would expect a dictionary with two entries. Do you have any idea about what is wrong with my code?
Taking arguments (via (...)) and decoration (via @) both result in calls of functions. Each "stage" of taking arguments or decoration maps to one call and thus one nested function in the decorator definition. register is a three-stage decorator and takes as many calls to trigger its innermost code. Of these,
the first is the argument ((".pdb")),
the second is the class definition (@... class), and
the third is the class call/instantiation (PDBReader(...)).
This last stage is broken, as it does not instantiate the class (see the sketch below).
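To see this with the original code from the question: the decorator rebinds the name PDBReader to wrapper_register, so nothing lands in READERS until that wrapper is actually called:
print(PDBReader)   # <function PDBReader at 0x...> -- a function (the wrapper), not the class
print(READERS)     # {}
PDBReader()        # third stage: only now is the entry added (and None is returned)
print(READERS)     # {'.pdb': <class '__main__.PDBReader'>}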
In order to store the class itself in the dictionary, store it at the second stage. As the instances are not to be stored, remove the third stage.
def register(typ): # first stage: file extension
"""Create a decorator to register its target for the given `typ`"""
def decorator_register(kls): # second stage: Reader class
"""Decorator to register its target `kls` for the previously given `typ`"""
READERS[typ] = kls
return kls # <<< return class to preserve it
return decorator_register
Take note that the result of a decorator replaces its target. Thus, you should generally return the target itself or an equivalent object. Since in this case the class is returned immediately, there is no need to use functools.wraps.
READERS = {}
def register(typ): # first stage: file extension
"""Create a decorator to register its target for the given `typ`"""
def decorator_register(kls): # second stage: Reader class
"""Decorator to register its target `kls` for the previously given `typ`"""
READERS[typ] = kls
return kls # <<< return class to preserve it
return decorator_register
#register(".pdb")
class PDBReader:
pass
#register(".gro")
class GromacsReader:
pass
print(READERS) # {'.pdb': <class '__main__.PDBReader'>, '.gro': <class '__main__.GromacsReader'>}
If you don't actually call the code that the decorator is "wrapping", then the "inner" function will not fire, and you will not create an entry inside READERS. However, even if you create instances of PDBReader or GromacsReader, the values inside READERS will be the classes themselves, not instances of them.
If you want to do the latter, you have to change wrapper_register to something like this:
def register(typ):
def decorator_register(kls):
@functools.wraps(kls)
def wrapper_register(*args, **kwargs):
READERS[typ] = kls(*args, **kwargs)
return READERS[typ]
return wrapper_register
return decorator_register
I added a simple __init__/__repr__ inside the classes to visualize it better:
#register(".pdb")
class PDBReader:
def __init__(self, var):
self.var = var
def __repr__(self):
return f"PDBReader({self.var})"
#register(".gro")
class GromacsReader:
def __init__(self, var):
self.var = var
def __repr__(self):
return f"GromacsReader({self.var})"
And then we initialize some objects:
x = PDBReader("Inside of PDB")
z = GromacsReader("Inside of Gromacs")
print(x) # Output: PDBReader(Inside of PDB)
print(z) # Output: GromacsReader(Inside of Gromacs)
print(READERS) # Output: {'.pdb': PDBReader(Inside of PDB), '.gro': GromacsReader(Inside of Gromacs)}
If you don't want to store the initialized object in READERS, however, you still need to return an initialized object, otherwise trying to instantiate the class will return None.
You can then simply change wrapper_register to:
def wrapper_register(*args, **kwargs):
READERS[typ] = kls
return kls(*args, **kwargs)
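With that version, instantiation still returns a normal instance while READERS keeps the classes themselves (reusing the classes with __init__/__repr__ from above):
x = PDBReader("Inside of PDB")
print(x)        # PDBReader(Inside of PDB)
print(READERS)  # {'.pdb': <class '__main__.PDBReader'>}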
I have a class that, in principle, carries all the information about itself in its class body. When instantiated, it receives additional information that, together with the class attributes, forms a regular instance. My problem now lies in the fact that I need to implement a method which should behave as a class method when called on the class, but as a regular instance method when called on an instance:
e.g. something like
class MyClass(object):
attribs = 1, 2, 3
def myMethod(self, args):
if isclass(self):
"do class stuff"
else:
"do instance stuff"
MyClass.myMethod(2)  # should now be called as a class method, e.g. I would normally use @classmethod
MyClass().myMethod(2)  # should now be called as an instance method
Of course I could declare it as a staticmethod and pass either the instance or the class object explicitly, but that seems rather unpythonic and also user-unfriendly.
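For reference, this is roughly what that staticmethod workaround would look like; the caller has to pass the class or the instance explicitly, which is exactly what I'd like to avoid:
class MyClass(object):
    attribs = 1, 2, 3

    @staticmethod
    def myMethod(obj, args):
        # caller must pass MyClass or an instance of it explicitly
        if isinstance(obj, type):
            return "do class stuff"
        return "do instance stuff"

MyClass.myMethod(MyClass, 2)    # class behaviour
MyClass.myMethod(MyClass(), 2)  # instance behaviour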
If the methods are to behave differently, you could simply change which one is exposed by that name at initialization time:
class MyCrazyClass:
@classmethod
def magicmeth(cls):
print("I'm a class")
def _magicmeth(self):
print("I'm an instance")
def __init__(self):
self.magicmeth = self._magicmeth
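A quick check of the behaviour:
MyCrazyClass.magicmeth()    # prints: I'm a class
MyCrazyClass().magicmeth()  # prints: I'm an instance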
You can define a decorator that makes a method behave like a regular method when called on an instance, and like a class method when called on the class. This requires a descriptor:
from functools import partial
class anymethod:
"""Transform a method into both a regular and class method"""
def __init__(self, call):
self.__wrapped__ = call
def __get__(self, instance, owner):
if instance is None: # called on class
return partial(self.__wrapped__, owner)
else: # called on instance
return partial(self.__wrapped__, instance)
class Foo:
@anymethod
def bar(first):
print(first)
Foo.bar() # <class '__main__.Foo'>
Foo().bar() # <__main__.Foo object at 0x106f86610>
Note that this behaviour will not be obvious to most programmers. Only use it if you really need it.
I want to mock a method of a class and use wraps, so that it is actually called, but I can inspect the arguments passed to it. I have seen in several places (here, for example) that the usual way to do that is as follows (adapted to show my point):
from unittest import TestCase
from unittest.mock import patch
class Potato(object):
def foo(self, n):
return self.bar(n)
def bar(self, n):
return n + 2
class PotatoTest(TestCase):
spud = Potato()
@patch.object(Potato, 'foo', wraps=spud.foo)
def test_something(self, mock):
forty_two = self.spud.foo(n=40)
mock.assert_called_once_with(n=40)
self.assertEqual(forty_two, 42)
However, this instantiates the class Potato, in order to bind the mock to the instance method spud.foo.
What I need is to mock the method foo in all instances of Potato, and have the mock wrap the original method. I.e., I need the following:
from unittest import TestCase
from unittest.mock import patch
class Potato(object):
def foo(self, n):
return self.bar(n)
def bar(self, n):
return n + 2
class PotatoTest(TestCase):
@patch.object(Potato, 'foo', wraps=Potato.foo)
def test_something(self, mock):
self.spud = Potato()
forty_two = self.spud.foo(n=40)
mock.assert_called_once_with(n=40)
self.assertEqual(forty_two, 42)
This of course doesn't work. I get the error:
TypeError: foo() missing 1 required positional argument: 'self'
It works however if wraps is not used, so the problem is not in the mock itself, but in the way it calls the wrapped function. For example, this works (but of course I had to "fake" the returned value, because now Potato.foo is never actually run):
from unittest import TestCase
from unittest.mock import patch
class Potato(object):
def foo(self, n):
return self.bar(n)
def bar(self, n):
return n + 2
class PotatoTest(TestCase):
@patch.object(Potato, 'foo', return_value=42)  # , wraps=Potato.foo)
def test_something(self, mock):
self.spud = Potato()
forty_two = self.spud.foo(n=40)
mock.assert_called_once_with(n=40)
self.assertEqual(forty_two, 42)
This works, but it does not run the original function, which I need to run because the return value is used elsewhere (and I cannot fake it from the test).
Can it be done?
Note: The actual reason behind my needs is that I'm testing a REST API with webtest. From the tests I perform some WSGI requests to some paths, and my framework instantiates some classes and uses their methods to fulfill the request. I want to capture the parameters sent to those methods to do some asserts about them in my tests.
In short, you can't do this using Mock instances alone.
patch.object creates a Mock for the specified attribute on the class (Potato), i.e. it replaces Potato.foo with a single Mock the moment it is called. Therefore, there is no way to pass instances to the Mock, as the mock is created before any instances are. To my knowledge, getting instance information to the Mock at runtime is also very difficult.
To illustrate:
from unittest.mock import MagicMock, patch
class MyMock(MagicMock):
def __init__(self, *a, **kw):
super(MyMock, self).__init__(*a, **kw)
print('Created Mock instance a={}, kw={}'.format(a,kw))
with patch.object(Potato, 'foo', new_callable=MyMock, wraps=Potato.foo):
print('no instances created')
spud = Potato()
print('instance created')
The output is:
Created Mock instance a=(), kw={'name': 'foo', 'wraps': <function Potato.foo at 0x7f5d9bfddea0>}
no instances created
instance created
I would suggest monkey-patching your class in order to add the Mock to the correct location.
from unittest.mock import MagicMock
class PotatoTest(TestCase):
def test_something(self):
old_foo = Potato.foo
try:
mock = MagicMock(wraps=Potato.foo, return_value=42)
Potato.foo = lambda *a,**kw: mock(*a, **kw)
self.spud = Potato()
forty_two = self.spud.foo(n=40)
mock.assert_called_once_with(self.spud, n=40) # Now needs self instance
self.assertEqual(forty_two, 42)
finally:
Potato.foo = old_foo
Note that using assert_called_once_with is now slightly different, as your function is called with the instance as the first argument.
Do you control creation of Potato instances, or at least have access to these instances after creating them? You should, else you'd not be able to check particular arg lists.
If so, you can wrap methods of individual instances using
spud = dig_out_a_potato()
with mock.patch.object(spud, "foo", wraps=spud.foo) as mock_spud:
# do your thing.
mock_spud.assert_called...
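For example, with the Potato class from the question, assuming the test can get hold of the instance:
from unittest import mock

spud = Potato()
with mock.patch.object(spud, "foo", wraps=spud.foo) as mock_spud:
    forty_two = spud.foo(n=40)
    mock_spud.assert_called_once_with(n=40)
    assert forty_two == 42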
Your question looks identical to python mock - patching a method without obstructing implementation to me. https://stackoverflow.com/a/72446739/9230828 implements what you want (except that it uses a with statement instead of a decorator). wrap_object.py:
# Copyright (C) 2022, Benjamin Drung <bdrung@posteo.de>
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import contextlib
import typing
import unittest.mock
@contextlib.contextmanager
def wrap_object(
target: object, attribute: str
) -> typing.Generator[unittest.mock.MagicMock, None, None]:
"""Wrap the named member on an object with a mock object.
wrap_object() can be used as a context manager. Inside the
body of the with statement, the attribute of the target is
wrapped with a :class:`unittest.mock.MagicMock` object. When
the with statement exits the patch is undone.
The instance argument 'self' of the wrapped attribute is
intentionally not logged in the MagicMock call. Therefore
wrap_object() can be used to check all calls to the object,
but not differentiate between different instances.
"""
mock = unittest.mock.MagicMock()
real_attribute = getattr(target, attribute)
def mocked_attribute(self, *args, **kwargs):
mock.__call__(*args, **kwargs)
return real_attribute(self, *args, **kwargs)
with unittest.mock.patch.object(target, attribute, mocked_attribute):
yield mock
Then you can write the following unit test:
from unittest import TestCase
from wrap_object import wrap_object
class Potato:
def foo(self, n):
return self.bar(n)
def bar(self, n):
return n + 2
class PotatoTest(TestCase):
def test_something(self):
with wrap_object(Potato, 'foo') as mock:
self.spud = Potato()
forty_two = self.spud.foo(n=40)
mock.assert_called_once_with(n=40)
self.assertEqual(forty_two, 42)
class Class1(object):
def __init__(self, parameter1):
# action with parameter
def method1(self, parameter1):
# method actions
So what I want is to be able to create a Class1 object without having loaded parameter1 yet, and then, once it is available, use method1 to set parameter1 and run its actions, since __init__ will use the results of method1. This is a Python tutorial practice exam, by the way, so it has to be done this way.
EDIT:
>>>object1 = Class1()
>>>object1.method1(parameter1)
In order to allow a later initialization, you want to move all your actual initialization stuff into the method and make the parameter to __init__ optional. Then __init__ can call the method only if the parameter is specified.
class SomeClass (object):
def __init__ (self, param = None):
# do some general initialization, like initializing instance members
self.foo = 'bar'
# if the parameter is specified, call the init method
if param is not None:
self.init(param)
def init (self, param):
# do initialization stuff
Then, both of the following ways to create the object are equivalent:
x = SomeClass('param value')
y = SomeClass()
y.init('param value')
If the idea is to be able to assign a value for the attribute at the method level and not in the initialization of the Class, I would suggest the following implementation:
class Class1:
def __init__(self, parameter=None):
self.parameter = parameter
def method(self, parameter):
self.parameter = parameter
You can check that the attribute is indeed assigned through the method:
>>> c = Class1()
>>> c.method('whatever')
>>> print(c.parameter)
whatever
BTW, in Python 3 you don't need to explicitly inherit from object anymore, since all classes already inherit from object.