Wrapping all possible method calls of a class in a try/except block - python-3.x

I'm trying to wrap all methods of an existing Class (not of my creation) into a try/except suite. It could be any Class, but I'll use the pandas.DataFrame class here as a practical example.
So if the invoked method succeeds, we simply move on. But if it should generate an exception, it is appended to a list for later inspection/discovery (although the below example just issues a print statement for simplicity).
(Note that the kinds of data-related exceptions that can occur when a method on the instance is invoked aren't yet known; that's the reason for this exercise: discovery.)
This post was quite helpful (particularly @martineau's Python 3 answer), but I'm having trouble adapting it. Below, I expected the second call to the (wrapped) info() method to emit print output but, sadly, it doesn't.
#!/usr/bin/env python3
import functools, sys, types, pandas

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('Exception: %r' % (sys.exc_info(),))  # Something trivial.
            # <Actual code would append that exception info to a list>.
    return wrapper
class MetaClass(type):
    def __new__(mcs, class_name, base_classes, classDict):
        newClassDict = {}
        for attributeName, attribute in classDict.items():
            if type(attribute) == types.FunctionType:  # Replace it with a
                attribute = method_wrapper(attribute)  # decorated version.
            newClassDict[attributeName] = attribute
        return type.__new__(mcs, class_name, base_classes, newClassDict)

class WrappedDataFrame2(MetaClass('WrappedDataFrame',
                                  (pandas.DataFrame, object,), {}),
                        metaclass=type):
    pass
print('Unwrapped pandas.DataFrame().info():')
pandas.DataFrame().info()
print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
print()
This outputs:
Unwrapped pandas.DataFrame().info():
<class 'pandas.core.frame.DataFrame'>
Index: 0 entries
Empty DataFrame
Wrapped pandas.DataFrame().info(): <-- Missing print statement after this line.
<class '__main__.WrappedDataFrame2'>
Index: 0 entries
Empty WrappedDataFrame2
In summary,...
>>> unwrapped_object.someMethod(...)
# Should be mirrored by ...
>>> wrapping_object.someMethod(...)
# Including signature, docstring, etc. (i.e. all attributes); except that it
# executes inside a try/except suite (so I can catch exceptions generically).

long time no see. ;-) In fact it's been such a long time you may no longer care, but in case you (or others) do...
Here's something I think will do what you want. I've never answered your question before now because I don't have pandas installed on my system. However, today I decided to see if there was a workaround for not having it and created a trivial dummy module to mock it (only as far as I needed). Here's the only thing in it:
mockpandas.py:
""" Fake pandas module. """
class DataFrame:
    def info(self):
        print('pandas.DataFrame.info() called')
        raise RuntimeError('Exception raised')
Below is code that seems to do what you need by implementing @Blckknght's suggestion of iterating through the MRO (but it ignores the limitations noted in his answer that could arise from doing it that way). It ain't pretty, but as I said, it seems to work with at least the mocked pandas library I created.
import functools
import mockpandas as pandas  # mock the library
import sys
import traceback
import types

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('An exception occurred in the wrapped method {}.{}()'.format(
                args[0].__class__.__name__, method.__name__))
            traceback.print_exc(file=sys.stdout)
            # (Actual code would append that exception info to a list)
    return wrapper
class MetaClass(type):
    def __new__(meta, class_name, base_classes, classDict):
        """ See if any of the base classes were created by the with_metaclass() function. """
        marker = None
        for base in base_classes:
            if hasattr(base, '_marker'):
                marker = getattr(base, '_marker')  # remember class name of temp base class
                break  # quit looking

        if class_name == marker:  # temporary base class being created by with_metaclass()?
            return type.__new__(meta, class_name, base_classes, classDict)

        # Temporarily create an unmodified version of the class so its MRO can be used below.
        TempClass = type.__new__(meta, 'TempClass', base_classes, classDict)

        newClassDict = {}
        for cls in TempClass.mro():
            for attributeName, attribute in cls.__dict__.items():
                if isinstance(attribute, types.FunctionType):
                    # Convert it to a decorated version.
                    attribute = method_wrapper(attribute)
                newClassDict[attributeName] = attribute

        return type.__new__(meta, class_name, base_classes, newClassDict)

def with_metaclass(meta, classname, bases):
    """ Create a class with the supplied bases and metaclass, that has been tagged with a
        special '_marker' attribute.
    """
    return type.__new__(meta, classname, bases, {'_marker': classname})

class WrappedDataFrame2(
        with_metaclass(MetaClass, 'WrappedDataFrame', (pandas.DataFrame, object))):
    pass
print('Unwrapped pandas.DataFrame().info():')
try:
    pandas.DataFrame().info()
except RuntimeError:
    print('  RuntimeError exception was raised as expected')
print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
Output:
Unwrapped pandas.DataFrame().info():
pandas.DataFrame.info() called
RuntimeError exception was raised as expected
Wrapped pandas.DataFrame().info():
Calling: WrappedDataFrame2.info()...
pandas.DataFrame.info() called
An exception occurred in the wrapped method WrappedDataFrame2.info()
Traceback (most recent call last):
File "test.py", line 16, in wrapper
return method(*args, **kwargs)
File "mockpandas.py", line 9, in info
raise RuntimeError('Exception raised')
RuntimeError: Exception raised
As the above illustrates, the method_wrapper() decorated version is being used by the methods of the wrapped class.

Your metaclass only applies your decorator to the methods defined in classes that are instances of it. It doesn't decorate inherited methods, since they're not in the classDict.
I'm not sure there's a good way to make it work. You could try iterating through the MRO and wrapping all the inherited methods as well as your own, but I suspect you'd get into trouble if there were multiple levels of inheritance after you start using MetaClass (as each level will decorate the already-decorated methods of the previous class); one way to guard against that is sketched below.
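A minimal sketch of such a guard (my illustration, not from either answer): tag each wrapper function with a sentinel attribute and have the metaclass skip functions that already carry it, so repeated subclassing can't wrap a method twice:

import types

def method_wrapper(method):
    def wrapper(*args, **kwargs):
        try:
            return method(*args, **kwargs)
        except Exception as exc:
            print('Exception in {}(): {!r}'.format(method.__name__, exc))
    wrapper._already_wrapped = True  # sentinel: mark so we never wrap twice
    return wrapper

class MetaClass(type):
    def __new__(mcs, name, bases, classDict):
        newClassDict = {}
        for attr_name, attr in classDict.items():
            if (isinstance(attr, types.FunctionType)
                    and not getattr(attr, '_already_wrapped', False)):
                attr = method_wrapper(attr)  # only wrap not-yet-wrapped functions
            newClassDict[attr_name] = attr
        return type.__new__(mcs, name, bases, newClassDict)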

Related

Should python unittest mock calls copy passed arguments by reference?

The following code snippets were all run with Python 3.7.11.
I came across some unexpected behavior with the unittest.mock Mock class. I wrote a unit test in which a Mock was called with the specific arguments that the real object being mocked was expected to be called with at runtime. The tests passed, so I pushed a build to a real running device, only to discover a bug: not all of the arguments were being passed to the real object's method. I quickly found my bug, but initially it looked like my unit test should have failed where it instead succeeded. Below are some simplified examples of that situation. What I am curious about is whether this behavior should be considered a bug, or an error in understanding on my part.
from unittest.mock import Mock

mock = Mock()
l = [1]
mock(l)
l.append(2)
mock.call_args
# Output: call([1, 2])
# rather than call([1])
id(l) == id(mock.call_args[0][0])
# Output: True
# This means l and the latest call_args entry reference the same object in memory
This store-by-reference behavior is confusing, because with an ordinary function call you would not expect the function to see items appended to the argument after the call was made.
def print_var(x):
    print(x)

l = [1]
print_var(l)
# Output: [1]
l.append(2)
# print_var is never called with [1, 2]
Would it make sense for call_args to use deepcopy to emulate the behavior I was expecting?
I found a solution in the unittest.mock examples:
from unittest.mock import MagicMock
from copy import deepcopy

class CopyingMock(MagicMock):
    def __call__(self, *args, **kwargs):
        args = deepcopy(args)
        kwargs = deepcopy(kwargs)
        return super().__call__(*args, **kwargs)
c = CopyingMock(return_value=None)
arg = set()
c(arg)
arg.add(1)
c.assert_called_with(set())
c.assert_called_with(arg)
# Traceback (most recent call last):
# ...
# AssertionError: Expected call: mock({1})
# Actual call: mock(set())

MetaClass in Python

I am trying to create a metaclass for my class, and I have tried to print information about my class in the metaclass.
Now I have created two objects of my class, but the second object gets created without my metaclass being invoked.
Does the metaclass get called only once per class?
Any help will be appreciated.
Thanks
class Singleton(type):
    def __new__(cls, name, bases, attr):
        print(f"name {name}")
        print(f"bases {bases}")
        print(f"attr {attr}")
        print("Space Please")
        return super(Singleton, cls).__new__(cls, name, bases, attr)

class Multiply(metaclass=Singleton):
    pass
objA = Multiply()
objB = Multiply()
print (objA)
print (objB)
Yes - the metaclass's __new__ and __init__ methods are called only when the class is created. After that, in your example, the class is bound to the Multiply name. In many respects it is just an object like any other in Python. When you do objA = Multiply() you are not creating a new instance of type(Multiply), which is the metaclass - you are creating a new instance of Multiply itself: Multiply.__new__ and Multiply.__init__ are called.
Now, there is this: the mechanism in Python that makes __new__ and __init__ get called when creating an instance is the code in the metaclass's __call__ method. That is, just as calling an instance of any class that defines __call__ with the obj() syntax invokes type(obj).__call__(obj), when you do Multiply(), what is called (in this case) is Singleton.__call__(Multiply).
Since Singleton does not implement __call__, the method of its superclass - type.__call__ - is called instead, and it is in there that Multiply.__new__ and __init__ are called.
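A minimal sketch of that mechanism (my illustration, not the answerer's code): a metaclass __call__ intercepts every instantiation, before __new__ and __init__ run:

class Meta(type):
    def __call__(cls, *args, **kw):
        # Runs every time SomeClass() is invoked, before __new__/__init__.
        print(f"Meta.__call__ for {cls.__name__}")
        return super().__call__(*args, **kw)  # type.__call__ runs __new__, then __init__

class SomeClass(metaclass=Meta):
    def __init__(self):
        print("SomeClass.__init__")

obj = SomeClass()  # prints "Meta.__call__ for SomeClass", then "SomeClass.__init__"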
That said, there is nothing in the code above that would make your classes behave as "singletons". More importantly, you don't need a metaclass to have a singleton in Python. I don't know who invented this thing, but it keeps circulating around.
First, if you really need a singleton, all you need to do is write a plain class, with no special anything, create your single instance, and document that the instance is what should be used. Just as people use None - no one keeps getting a reference to NoneType and calling it to get a None reference:
class _Multiply:
    ...

# document that the code should use this instance:
Multiply = _Multiply()
Second: alternatively, if your code has a need, whatsoever, to instantiate the class that should be a singleton where it will be used, you can use the class's own __new__ method to control instantiation - no need for a metaclass:
class Multiply:
    _instance = None
    def __new__(cls):
        if not cls._instance:
            cls._instance = super().__new__(cls)
            # insert any code that would go in `__init__` here:
            ...
            ...
        return cls._instance
Third, just for demonstration purposes (please don't use this): the metaclass mechanism for singletons can be built into the __call__ method:
class Singleton(type):
    registry = {}
    def __new__(mcls, name, bases, attr):
        print(f"name {name}")
        print(f"bases {bases}")
        print(f"attr {attr}")
        print("Class created")
        print("Space Please")
        return super(Singleton, mcls).__new__(mcls, name, bases, attr)

    def __call__(cls, *args, **kw):
        registry = type(cls).registry
        if cls not in registry:
            print(f"{cls.__name__} being instantiated for the first time")
            registry[cls] = super().__call__(*args, **kw)
        else:
            print(f"Attempting to create a new instance of {cls.__name__}. Returning single instance instead")
        return registry[cls]

class Multiply(metaclass=Singleton):
    pass
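For illustration (my addition, not the answerer's code), instantiating twice shows the registry handing back the same object:

objA = Multiply()  # prints: Multiply being instantiated for the first time
objB = Multiply()  # prints: Attempting to create a new instance of Multiply. Returning single instance instead
assert objA is objB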

Using singledispatch with custom class (CPython 3.8.2)

Let's say I want to register a function for each of the classes in a module named MacroMethods. So I've set up singledispatch after seeing it in 'Fluent Python', like this:
@singledispatch
def addMethod(self, obj):
    print(f'Wrong Object {str(obj)} supplied.')
    return obj
...
@addMethod.register(MacroMethods.Wait)
def _(self, obj):
    print('adding object wait')
    obj.delay = self.waitSpin.value
    obj.onFail = None
    obj.onSuccess = None
    return obj
The desired behavior is: when an instance of class MacroMethods.Wait is given as the argument, singledispatch runs the function registered for that class type.
Instead, it runs the default function rather than the registered one.
>>> Wrong Object <MacroMethods.Wait object at 0x0936D1A8> supplied.
However, type() clearly shows the instance is of class MacroMethods.Wait, and the registry's dict_keys also contains it.
>>> dict_keys([<class 'object'>, ..., <class 'MacroMethods.Wait'>])
I suspect all the custom classes I made count as type object, so the desired functions never run as a result.
Any way to solve this problem? The entire code is here.
Update
I've managed to mimic singledispatch's behavior as follows:
from functools import wraps

def state_deco(func_main):
    """
    Decorator that mimics singledispatch for ease of interaction expansions.
    """
    # assuming no args are needed for interaction functions.
    func_main.dispatch_list = {}  # collect decorated functions

    @wraps(func_main)
    def wrapper(target):
        # dispatch target to destination interaction function.
        nonlocal func_main
        try:
            # find and run the callable registered for target's type
            return func_main.dispatch_list[type(target)]()
        except KeyError:
            # If no matching case is found, the main decorated function runs instead.
            return func_main()

    def register(target):
        # A decorator that registers the decorated function with the main decorated function.
        def decorate(func_sub):
            nonlocal func_main
            func_main.dispatch_list[target] = func_sub

            def register_wrapper(*args, **kwargs):
                return func_sub(*args, **kwargs)
            return register_wrapper
        return decorate

    wrapper.register = register
    return wrapper
Used like:
@state_deco
def general():
    return "A's reaction to undefined others."

@general.register(StateA)
def _():
    return "A's reaction of another A"

@general.register(StateB)
def _():
    return "A's reaction of B"
But it's still not singledispatch, so I find it might be inappropriate to post this as an answer.
I wanted to do something similar and had the same trouble. It looks like we have bumped into a Python bug. I found a write-up that describes this situation.
Here is the link to the Python Bug Tracker.
Python 3.7 breaks on singledispatch_function.register(pseudo_type), which Python 3.6 accepted
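One caveat worth adding here (my note, not from the answers above): plain functools.singledispatch dispatches on the type of the first positional argument, which for a method is self, so registrations keyed on the second argument's type never match. Since Python 3.8, functools.singledispatchmethod dispatches on the first argument after self instead. A minimal sketch, with Wait and Editor as hypothetical stand-ins:

from functools import singledispatchmethod

class Wait:  # hypothetical stand-in for MacroMethods.Wait
    pass

class Editor:
    @singledispatchmethod
    def addMethod(self, obj):
        print(f'Wrong Object {obj!r} supplied.')
        return obj

    @addMethod.register(Wait)
    def _(self, obj):
        print('adding object wait')
        return obj

Editor().addMethod(Wait())  # prints: adding object wait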

Can't pickle <class 'a class'>: attribute lookup inner class on a class failed

I was using PySpark to process some call data. As you can see, I added some inner classes to the class GetInfoFromCalls dynamically by using a metaclass.
The code below lives in a package for_test that exists on all nodes:
class StatusField(object):
    """
    some aliases.
    """
    failed = "failed"
    succeed = "succeed"
    status = "status"
    getNothingDefaultValue = "-999999"

class Result(object):
    """
    Result that stores a result and some info about it.
    """
    def __init__(self, result, status, message=None):
        self.result = result
        self.status = status
        self.message = message

structureList = [
    ("user_mobile", str, None),
    ("real_name", str, None),
    ("channel_attr", str, None),
    ("channel_src", str, None),
    ("task_data", dict, None),
    ("bill_info", list, "task_data"),
    ("account_info", list, "task_data"),
    ("payment_info", list, "task_data"),
    ("call_info", list, "task_data")
]
def inner_get(self, defaultValue=StatusField.getNothingDefaultValue):
    try:
        return self.holder.get(self)
    except Exception as e:
        print(e)
        return Result(defaultValue, StatusField.failed)
class call_meta(type):
    def __init__(cls, name, bases, attrs):
        for name_str, type_class, pLevel_str in structureList:
            setattr(cls, name_str, type(
                name_str,
                (object,),
                {})
            )
class GetInfoFromCalls(object, metaclass=call_meta):
    def __init__(self, call_details):
        for name_str, type_class, pLevel_str in structureList:
            inn = getattr(self.__class__, name_str)()
            object_dict = {
                "name": name_str,
                "type": type_class,
                "pLevel": None if pLevel_str is None else getattr(self, pLevel_str),
                "context": None,
                "get": inner_get,
                "holder": self,
            }
            for attr_str, real_attr in object_dict.items():
                setattr(inn, attr_str, real_attr)
            setattr(self, name_str, inn)
        self.call_details = call_details
When I ran
import pickle
pickle.dumps(GetInfoFromCalls("foo"))
it raised an error like this:
Traceback (most recent call last):
File "<ipython-input-11-b2d409e35eb4>", line 1, in <module>
pickle.dumps(GetInfoFromCalls("foo"))
PicklingError: Can't pickle <class '__main__.user_mobile'>: attribute lookup user_mobile on __main__ failed
It seemed that I couldn't pickle the inner classes because they were added dynamically by code: when the instance is unpickled, the inner classes won't exist. Is that right?
I really don't want to write out these classes, which are nearly identical to each other, by hand. Does someone have a good way to avoid this problem?
Python's pickle actually does not serialize classes: it serializes instances, and puts in the serialization a reference to each instance's class - and that reference is based on the class being bound to a name in a well-defined module. So instances of classes that don't have a module-level name, but instead live as attributes in other classes, or as data inside lists and dictionaries, typically will not work.
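A minimal sketch of that failure mode (my illustration, not the asker's code): a class whose __name__ doesn't match any module-level name produces exactly this kind of attribute-lookup error:

import pickle

# The class's __name__ is "user_mobile", but no module-level variable
# by that name exists, so pickle's lookup-by-reference fails.
Inner = type("user_mobile", (object,), {})

try:
    pickle.dumps(Inner())
except pickle.PicklingError as e:
    print(e)  # Can't pickle <class '__main__.user_mobile'>: attribute lookup ... failed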
One straightforward thing to try is using dill instead of pickle. It is a third-party package that works like pickle but has extensions to serialize arbitrary dynamic classes.
While dill may help other people reaching here, it is not your case, because in order to use it you'd have to monkey-patch the underlying RPC mechanism PySpark uses so that it uses dill instead of pickle, and that might not be trivial nor consistent enough for production use.
If the problem really is about dynamically created classes being unpicklable, what you can do is create an extra metaclass for the dynamic classes themselves, instead of using type, and on that metaclass create proper __getstate__ and __setstate__ methods (or the other helper methods described in the pickle documentation) - that might enable these classes to be pickled by ordinary pickle. That is, a separate metaclass with pickle helper methods, to be used instead of type(..., (object,), ...) in your code.
However, "unpicklable object" is not the error you are getting - it is an attribute-lookup error, which suggests the structure you are building is not good enough for pickle to introspect into it and get all the members from one of your instances - it is not related (yet) to the unpickleability of the class objects. Since your dynamic classes live as attributes on the class (which is not itself pickled) and not on the instance, it is very well possible that pickle does not care about them. Check the pickle docs mentioned above; maybe all you need is a proper helper method for pickling on your class, and nothing different on the metaclass, for all you have there to work properly.

Prevent instantiation of an abstract class

In Java, we can prevent instantiation of a class by making it an abstract class. I thought Python would behave the same way, but to my surprise I found that I can create an object of an abstract class:
from abc import ABCMeta

class Foo(metaclass=ABCMeta):
    pass

Foo()
Why does Python allow this, and how can I prevent it?
Python is for "consenting adults" - you could mark a class as abstract by a naming convention within a project if you want (or by module membership, for that matter). However, it is feasible to make a hard "uninstantiable" abstract class - though that would not increase the security or good practices in a project in itself, as the commenters on the question propose.
So, to keep the remaining machinery of ABC's abstract classes, you can inherit from the ABCMeta class and use it to wrap the __new__ method so the class won't instantiate - otherwise, do just the same, but inherit from type instead.
In other words, the code below wraps the __new__ method of classes created with it as the metaclass. When that method runs, it checks whether the class being instantiated is the class marked as abstract itself, and if it is, raises a TypeError instead.
from abc import ABCMeta

class ReallyAbstract(ABCMeta):
    def __new__(metacls, name, bases, namespace):
        outter_cls = super().__new__(metacls, name, bases, namespace)
        for base in outter_cls.__mro__:
            if getattr(getattr(base, "__new__", None), "_abstract", False):
                # Base class already marked as abstract. No need to do anything else.
                return outter_cls

        original_new = getattr(outter_cls, "__new__")
        if getattr(original_new, "_abstract", False):
            # If we get a method that has already been wrapped,
            # just return the class unchanged.
            # TODO: handle further classes in the hierarchy redefining __new__
            return outter_cls

        def __new__(cls, *args, **kw):
            if cls is outter_cls:
                raise TypeError
            return original_new(cls, *args, **kw)

        __new__._abstract = True
        outter_cls.__new__ = __new__
        return outter_cls
And on the console:
In [7]: class A(metaclass=ReallyAbstract):
   ...:     pass
   ...:

In [7]: A()
TypeError                        Traceback (most recent call last)
<ipython-input-7-...> in <module>()
----> 1 A()
....
Just for the sake of completeness: ABCs in Python are not instantiable if they contain at least one abstractmethod. Just like other O.O. features that are enforced in more static languages, the idea is to have this by convention. But yes, I agree that since they went to the work of creating an abstract-class mechanism at all, it should probably behave with fewer surprises, and that would mean not being instantiable by default.
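To illustrate that completeness note (standard-library behavior, not the answer's code): as soon as an ABC declares an abstract method, instantiation fails out of the box:

from abc import ABC, abstractmethod

class Foo(ABC):
    @abstractmethod
    def bar(self): ...

try:
    Foo()
except TypeError as e:
    print(e)  # e.g. "Can't instantiate abstract class Foo with abstract method bar"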
