Prevent instantiation of an abstract class - python-3.x

In Java, we can prevent instantiation of a class by making it an abstract class. I thought python would behave the same way. But to my surprise, I found that I can create an object of an abstract class:
from abc import ABCMeta

class Foo(metaclass=ABCMeta):
    pass

Foo()
Why does python allow this and how can I prevent this?

Python is for "consenting adults" - you could mark a class as abstract purely by naming convention within a project (or by module membership). It is, however, feasible to build a hard, "uninstantiable" abstract class - but, as the commenters on the question point out, that by itself would not improve security or good practices in a project.
So, to keep the rest of the machinery for ABC's abstract classes, you can inherit from ABCMeta and have it wrap the __new__ method so the class won't instantiate - otherwise, do the same thing but inherit from type instead.
In other words, the code below wraps the __new__ method of every class created with it as the metaclass. When that method runs, it checks whether the class being instantiated is the one that was declared with this metaclass directly; if it is, it raises a TypeError instead.
class ReallyAbstract(ABCMeta):
    def __new__(metacls, name, bases, namespace):
        outer_cls = super().__new__(metacls, name, bases, namespace)
        for base in outer_cls.__mro__:
            if getattr(getattr(base, "__new__", None), "_abstract", False):
                # A base class is already marked as abstract. No need to do anything else.
                return outer_cls
        original_new = getattr(outer_cls, "__new__")
        if getattr(original_new, "_abstract", False):
            # If we get a method that has already been wrapped,
            # just return it unchanged.
            # TODO: handle further classes in the hierarchy redefining __new__
            return outer_cls

        def __new__(cls, *args, **kw):
            if cls is outer_cls:
                raise TypeError
            return original_new(cls, *args, **kw)

        __new__._abstract = True
        outer_cls.__new__ = __new__
        return outer_cls
And on the console:
In [7]: class A(metaclass=ReallyAbstract):
   ...:     pass
   ...:

In [7]: A()
TypeError                                 Traceback (most recent call last)
<ipython-input-7-...> in <module>()
----> 1 A()
....
Just for the sake of completeness - classes using ABCMeta in Python are already not instantiable if they contain at least one abstractmethod. Just like other O.O. features that are enforced in more static languages, the idea is to have this by convention. But yes, I agree that since they went to the trouble of creating an abstract-class mechanism at all, it should probably behave with fewer surprises, and that would mean not being instantiable by default.
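For reference, here is a minimal sketch of that stock behavior (standard library only, nothing custom): once a class using ABCMeta declares at least one @abstractmethod, instantiating it raises a TypeError.

from abc import ABCMeta, abstractmethod

class Foo(metaclass=ABCMeta):
    @abstractmethod
    def bar(self):
        ...

Foo()  # raises TypeError: can't instantiate abstract class Foo with abstract method bar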

Related

MetaClass in Python

I am trying to create a metaclass for my class.
I have tried to print information about my class in the metaclass.
Now I have created two objects of my class,
but the second object gets created without my metaclass being invoked.
Does the metaclass get called only once per class?
Any help will be appreciated.
Thanks
class Singleton(type):
    def __new__(cls, name, bases, attr):
        print(f"name {name}")
        print(f"bases {bases}")
        print(f"attr {attr}")
        print("Space Please")
        return super(Singleton, cls).__new__(cls, name, bases, attr)

class Multiply(metaclass=Singleton):
    pass

objA = Multiply()
objB = Multiply()
print(objA)
print(objB)
Yes - the metaclass's __new__ and __init__ methods are called only when the class is created. After that, in your example, the class is bound to the Multiply name. In many respects it is just an object like any other in Python. When you do objA = Multiply() you are not creating a new instance of type(Multiply) (which is the metaclass) - you are creating a new instance of Multiply itself: Multiply.__new__ and Multiply.__init__ are called.
Now, there is this: the mechanism in Python that makes __new__ and __init__ be called when an instance is created lives in the metaclass's __call__ method. That is, just as creating any class with a __call__ method and calling an instance of it with obj() invokes type(obj).__call__(obj), when you do Multiply() what is actually called (in this case) is Singleton.__call__(Multiply).
Since Singleton does not implement __call__, the __call__ method of its superclass, type, runs instead - and it is in there that Multiply.__new__ and Multiply.__init__ are called.
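To see that equivalence directly, here is a tiny illustrative sketch (the metaclass name Verbose is made up for this example):

class Verbose(type):
    def __call__(cls, *args, **kw):
        print(f"{cls.__name__} is being called")
        return super().__call__(*args, **kw)  # this is where __new__ and __init__ run

class Multiply(metaclass=Verbose):
    pass

a = Multiply()                          # prints "Multiply is being called"
b = type(Multiply).__call__(Multiply)   # the same call, spelled out explicitly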
That said, there is nothing in the code above that would make your classes behave as "singletons". And more importantly you don't need a metaclass to have a singleton in Python. I don't know who invented this thing, but it keeps circulating around.
First, if you really need a singleton, all you need to do is write a plain class, nothing special, create your single instance, and document that that instance is the one to be used. Just as people use None - no one keeps getting a reference to NoneType and calling it to get a None reference:
class _Multiply:
    ...

# document that the code should use this instance:
Multiply = _Multiply()
Second: alternatively, if your code really needs to instantiate the class that should be a singleton at the point where it is used, you can use the class's own __new__ method to control instantiation - no need for a metaclass:
class Multiply:
    _instance = None

    def __new__(cls):
        if not cls._instance:
            cls._instance = super().__new__(cls)
            # insert any code that would go in `__init__` here:
            ...
            ...
        return cls._instance
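A quick usage sketch for this __new__ approach - both calls hand back the same object. (One caveat not mentioned above: if the class also defines __init__, it runs on every call, even when the cached instance is returned.)

a = Multiply()
b = Multiply()
print(a is b)  # True: both names point to the single cached instance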
Third, just for demonstration purposes - please don't use this - the metaclass mechanism for singletons can be built into the __call__ method:
class Singleton(type):
    registry = {}

    def __new__(mcls, name, bases, attr):
        print(f"name {name}")
        print(f"bases {bases}")
        print(f"attr {attr}")
        print("Class created")
        print("Space Please")
        return super(Singleton, mcls).__new__(mcls, name, bases, attr)

    def __call__(cls, *args, **kw):
        registry = type(cls).registry
        if cls not in registry:
            print(f"{cls.__name__} being instantiated for the first time")
            registry[cls] = super().__call__(*args, **kw)
        else:
            print(f"Attempting to create a new instance of {cls.__name__}. Returning single instance instead")
        return registry[cls]

class Multiply(metaclass=Singleton):
    pass
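And a short usage sketch for this metaclass version - the registry in __call__ guarantees a single instance per class:

objA = Multiply()    # prints "Multiply being instantiated for the first time"
objB = Multiply()    # prints the "Returning single instance instead" message
print(objA is objB)  # True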

Python: why do I need super().__init__() call in metaclasses?

I have one question: why do I need to call super().__init__() in a metaclass? Since a metaclass is a factory of classes, I thought we would not need to call the initialization to make objects of the class Shop. Or is it that with super().__init__() we are initializing the class itself? (My IDE says that I should call it, but even without super().__init__() nothing bad happens - my class works without mistakes.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict


class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)


class Shop():
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight


class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is that func. call
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):  # Here I rename the attr_name of the descriptor objects.
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to - and if your code uses no other custom metaclasses, not calling super().__init__() in the metaclass will work just the same.
But if someone needs to combine your metaclass with another one, through inheritance, it won't work "out of the box" without the super() call: the super() call is what ensures all methods in the inheritance chain are called.
And if at first it seems that metaclasses are extremely rare and combining them would never happen: quite a few libraries and frameworks have their own metaclasses, including Python's "abc"s (abstract base classes), PyQt, ORM frameworks, and so on. If every metaclass under your control is well behaved, with proper super() calls in __new__, __init__ and __call__ (if you override those), combining your metaclass with another one into a working metaclass can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, if for example you want to use the mechanisms of your metaclass in a class that also uses the ABCMeta functionality, you just do it: the __init__ method in your Meta will call the other metaclass's __init__. Otherwise that code would not run, some subtle unexpected thing would not be initialized in your classes, and it could be a very hard bug to find.
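As a concrete sketch of that one-liner (hedged: the class and attribute names below are illustrative, reusing the Meta and Descriptor defined in the question), combining it with ABCMeta keeps both behaviors working:

from abc import ABCMeta, abstractmethod

CompatibleMeta = type("CompatibleMeta", (Meta, ABCMeta), {})

class Product(metaclass=CompatibleMeta):
    price = Descriptor()

    @abstractmethod
    def buy(self):
        ...

# Product() now raises TypeError, because ABCMeta's machinery still enforces the
# abstract method, while Meta.__init__ also ran and renamed the Descriptor attribute.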
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, on Python 3.6 or newer: since that version, the dictionaries used as the "locals()" while executing class bodies are ordered by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically with super() - you have to check the code and decide which of the two __prepare__s should be used, or create a new mapping type with the features needed by both metaclasses.

typing.Protocol class `__init__` method not called during explicit subtype construction

Python's PEP 544 introduces typing.Protocol for structural subtyping, a.k.a. "static duck typing".
In this PEP's section on Merging and extending protocols, it is stated that
The general philosophy is that protocols are mostly like regular ABCs,
but a static type checker will handle them specially.
Thus, one would expect to inherit from a subclass of typing.Protocol in much the same way that one expects to inherit from a subclass of abc.ABC:
from abc import ABC
from typing import Protocol

class AbstractBase(ABC):
    def method(self):
        print("AbstractBase.method called")

class Concrete1(AbstractBase):
    ...

c1 = Concrete1()
c1.method()  # prints "AbstractBase.method called"

class ProtocolBase(Protocol):
    def method(self):
        print("ProtocolBase.method called")

class Concrete2(ProtocolBase):
    ...

c2 = Concrete2()
c2.method()  # prints "ProtocolBase.method called"
As expected, the concrete subclasses Concrete1 and Concrete2 inherit method from their respective superclasses. This behavior is documented in the Explicitly declaring implementation section of the PEP:
To explicitly declare that a certain class implements a given
protocol, it can be used as a regular base class. In this case a class
could use default implementations of protocol members.
...
Note that there is little difference between explicit and implicit
subtypes, the main benefit of explicit subclassing is to get some
protocol methods "for free".
However, when the protocol class implements the __init__ method, __init__ is not inherited by explicit subclasses of the protocol class. This is in contrast to subclasses of an ABC class, which do inherit the __init__ method:
from abc import ABC
from typing import Protocol

class AbstractBase(ABC):
    def __init__(self):
        print("AbstractBase.__init__ called")

class Concrete1(AbstractBase):
    ...

c1 = Concrete1()  # prints "AbstractBase.__init__ called"

class ProtocolBase(Protocol):
    def __init__(self):
        print("ProtocolBase.__init__ called")

class Concrete2(ProtocolBase):
    ...

c2 = Concrete2()  # NOTHING GETS PRINTED
We see that Concrete1 inherits __init__ from AbstractBase, but Concrete2 does not inherit __init__ from ProtocolBase. This is in contrast to the previous example, where Concrete1 and Concrete2 both inherited method from their respective superclasses.
My questions are:
What is the rationale behind not having __init__ inherited by explicit subtypes of a protocol class? Is there some type-theoretic reason for protocol classes not being able to supply an __init__ method "for free"?
Is there any documentation concerning this discrepancy? Or is it a bug?
You can't instantiate a protocol class directly. This is currently implemented by replacing a protocol's __init__ with a method whose sole function is to enforce this restriction:
def _no_init(self, *args, **kwargs):
    if type(self)._is_protocol:
        raise TypeError('Protocols cannot be instantiated')

...

class Protocol(Generic, metaclass=_ProtocolMeta):
    ...
    def __init_subclass__(cls, *args, **kwargs):
        ...
        cls.__init__ = _no_init
Your __init__ doesn't execute because it isn't there any more.
This is pretty weird and messes with even more stuff than it looks like at first glance - for example, it interacts poorly with multiple inheritance, interrupting super().__init__ chains.
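A practical workaround, sketched here under the same class names as the question (this is a suggestion, not something prescribed by the Protocol implementation): define __init__ on the concrete class itself, since only the protocol class's own __init__ gets swapped out.

from typing import Protocol

class ProtocolBase(Protocol):
    def method(self) -> None:
        print("ProtocolBase.method called")

class Concrete2(ProtocolBase):
    def __init__(self) -> None:
        # Runs normally: the _no_init replacement only targets protocol classes.
        print("Concrete2.__init__ called")

c2 = Concrete2()  # prints "Concrete2.__init__ called"
c2.method()       # prints "ProtocolBase.method called"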

How to write own metaclass?

How do I create a metaclass in Python? I tried to write one as shown in tutorials:
class Meta(type):
    def __new__(mcs, name, bases, attrs):
        attrs2 = {'field2': 'Test'}
        attrs2.update(attrs)
        return super(Meta, mcs).__new__(mcs, name, bases, attrs2)

class Test(object):
    __metaclass__ = Meta
    field1 = 10

test = Test()
print(test.field1)
print(test.field2)
But this code fails with error:
10
Traceback (most recent call last):
  File "main.py", line 18, in <module>
    print(test.field2)
AttributeError: 'Test' object has no attribute 'field2'
How to declare a metaclass in python 3.7+ correctly?
UPDATED
I've updated my question with the actual error.
The tutorials you are checking are covering Python 2.
In Python 3, one of the syntactic changes was exactly the way of declaring a metaclass for a class.
You don't need to change the metaclass code, just change your class declaration to:
class Test(metaclass=Meta):
    field1 = 10
and it will work.
So, in short: for a metaclass in Python 3, you have to pass the equivalent of a "keyword argument" in the class declaration, with the name "metaclass". (Also, in Python 3, there is no need to inherit explicitly from object)
In Python 2, this was accomplished by the presence of the special variable __metaclass__ in the body of the class, as is in your example. (Also, when setting a metaclass, inheriting from 'object' would be optional, since the metaclass, derived from type, would do that for you).
One of the main advantages of the new syntax is that it allows the special method __prepare__ in the metaclass, which can return a custom namespace object to be used while the class body itself is being built. It is seldom used, and a really "serious" use case would be hard to put together today. For toys and playing around it is great, allowing for "magic autonamed enumerations" and other tricks - but when Python 3 was designed, this was the way they found to allow an OrderedDict to be used as the class namespace, so that the metaclass's __new__ and __init__ methods could know the order in which the attributes were declared. Since Python 3.6, a class body's namespace is ordered by default, and there is no need for a __prepare__ method for that use alone.
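For completeness, here is a small illustrative sketch of what __prepare__ enables (the names RecordingDict and PrepMeta are made up for this example): the mapping it returns is used as the namespace while the class body executes, so the metaclass can observe every assignment as it happens - including implicit ones such as __module__ and __qualname__.

class RecordingDict(dict):
    def __setitem__(self, key, value):
        # Called for every name bound in the class body.
        print(f"class body defined {key!r}")
        super().__setitem__(key, value)

class PrepMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        return RecordingDict()

class Test(metaclass=PrepMeta):
    field1 = 10          # reported by RecordingDict.__setitem__

    def method(self):
        ...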

Wrapping all possible method calls of a class in a try/except block

I'm trying to wrap all methods of an existing Class (not of my creation) into a try/except suite. It could be any Class, but I'll use the pandas.DataFrame class here as a practical example.
So if the invoked method succeeds, we simply move on. But if it should generate an exception, it is appended to a list for later inspection/discovery (although the below example just issues a print statement for simplicity).
(Note that the kinds of data-related exceptions that can occur when a method on the instance is invoked, isn't yet known; and that's the reason for this exercise: discovery).
This post was quite helpful (particularly @martineau's Python 3 answer), but I'm having trouble adapting it. Below, I expected the second call to the (wrapped) info() method to emit print output but, sadly, it doesn't.
#!/usr/bin/env python3
import functools, sys, types, pandas

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('Exception: %r' % (sys.exc_info(),))  # Something trivial.
            # <Actual code would append that exception info to a list>.
    return wrapper

class MetaClass(type):
    def __new__(mcs, class_name, base_classes, classDict):
        newClassDict = {}
        for attributeName, attribute in classDict.items():
            if type(attribute) == types.FunctionType:  # Replace it with a
                attribute = method_wrapper(attribute)  # decorated version.
            newClassDict[attributeName] = attribute
        return type.__new__(mcs, class_name, base_classes, newClassDict)

class WrappedDataFrame2(MetaClass('WrappedDataFrame',
                                  (pandas.DataFrame, object,), {}),
                        metaclass=type):
    pass

print('Unwrapped pandas.DataFrame().info():')
pandas.DataFrame().info()

print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
print()
This outputs:
Unwrapped pandas.DataFrame().info():
<class 'pandas.core.frame.DataFrame'>
Index: 0 entries
Empty DataFrame
Wrapped pandas.DataFrame().info(): <-- Missing print statement after this line.
<class '__main__.WrappedDataFrame2'>
Index: 0 entries
Empty WrappedDataFrame2
In summary,...
>>> unwrapped_object.someMethod(...)
# Should be mirrored by ...
>>> wrapping_object.someMethod(...)
# Including signature, docstring, etc. (i.e. all attributes); except that it
# executes inside a try/except suite (so I can catch exceptions generically).
long time no see. ;-) In fact it's been such a long time you may no longer care, but in case you (or others) do...
Here's something I think will do what you want. I've never answered your question before now because I don't have pandas installed on my system. However, today I decided to see if there was a workaround for not having it and created a trivial dummy module to mock it (only as far as I needed). Here's the only thing in it:
mockpandas.py:
""" Fake pandas module. """
class DataFrame:
def info(self):
print('pandas.DataFrame.info() called')
raise RuntimeError('Exception raised')
Below is code that seems to do what you need by implementing @Blckknght's suggestion of iterating through the MRO (but ignoring the limitations noted in his answer that could arise from doing it that way). It ain't pretty, but as I said, it seems to work, at least with the mocked pandas library I created.
import functools
import mockpandas as pandas  # mock the library
import sys
import traceback
import types

def method_wrapper(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):  # Note: args[0] points to 'self'.
        try:
            print('Calling: {}.{}()... '.format(args[0].__class__.__name__,
                                                method.__name__))
            return method(*args, **kwargs)
        except Exception:
            print('An exception occurred in the wrapped method {}.{}()'.format(
                args[0].__class__.__name__, method.__name__))
            traceback.print_exc(file=sys.stdout)
            # (Actual code would append that exception info to a list)
    return wrapper

class MetaClass(type):
    def __new__(meta, class_name, base_classes, classDict):
        """ See if any of the base classes were created by the with_metaclass() function. """
        marker = None
        for base in base_classes:
            if hasattr(base, '_marker'):
                marker = getattr(base, '_marker')  # remember class name of temp base class
                break  # quit looking

        if class_name == marker:  # temporary base class being created by with_metaclass()?
            return type.__new__(meta, class_name, base_classes, classDict)

        # Temporarily create an unmodified version of the class so its MRO can be used below.
        TempClass = type.__new__(meta, 'TempClass', base_classes, classDict)

        newClassDict = {}
        for cls in TempClass.mro():
            for attributeName, attribute in cls.__dict__.items():
                if isinstance(attribute, types.FunctionType):
                    # Convert it to a decorated version.
                    attribute = method_wrapper(attribute)
                newClassDict[attributeName] = attribute

        return type.__new__(meta, class_name, base_classes, newClassDict)

def with_metaclass(meta, classname, bases):
    """ Create a class with the supplied bases and metaclass, that has been tagged with a
        special '_marker' attribute.
    """
    return type.__new__(meta, classname, bases, {'_marker': classname})

class WrappedDataFrame2(
        with_metaclass(MetaClass, 'WrappedDataFrame', (pandas.DataFrame, object))):
    pass

print('Unwrapped pandas.DataFrame().info():')
try:
    pandas.DataFrame().info()
except RuntimeError:
    print('  RuntimeError exception was raised as expected')

print('\n\nWrapped pandas.DataFrame().info():')
WrappedDataFrame2().info()
Output:
Unwrapped pandas.DataFrame().info():
pandas.DataFrame.info() called
  RuntimeError exception was raised as expected

Wrapped pandas.DataFrame().info():
Calling: WrappedDataFrame2.info()...
pandas.DataFrame.info() called
An exception occurred in the wrapped method WrappedDataFrame2.info()
Traceback (most recent call last):
  File "test.py", line 16, in wrapper
    return method(*args, **kwargs)
  File "mockpandas.py", line 9, in info
    raise RuntimeError('Exception raised')
RuntimeError: Exception raised
As the above illustrates, the method_wrapper()-decorated version is being used by the methods of the wrapped class.
Your metaclass only applies your decorator to the methods defined in classes that are instances of it. It doesn't decorate inherited methods, since they're not in the classDict.
I'm not sure there's a good way to make it work. You could try iterating through the MRO and wrapping all the inherited methods as well as your own, but I suspect you'd get into trouble if there were multiple levels of inheritance after you start using MetaClass (as each level will decorate the already decorated methods of the previous class).
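If you do go the MRO route anyway, one way to soften that double-decoration problem is to tag wrapped functions and skip anything already tagged on later passes - a rough sketch under the same assumptions as the answer above (the _wrapped attribute name is arbitrary):

import functools
import sys
import traceback

def method_wrapper(method):
    if getattr(method, '_wrapped', False):
        # Already decorated while processing a base class: leave it alone.
        return method

    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        try:
            return method(*args, **kwargs)
        except Exception:
            traceback.print_exc(file=sys.stdout)

    wrapper._wrapped = True
    return wrapper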
