Best way to implement abstract classes in Python - python-3.x

What is the best way to implement abstract classes in Python?
This is the main approach I have seen:
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def foo(self):
        pass
However, it does not prevent you from calling the abstract method when you extend that class.
In Java you get an error if you try to do something similar, but not in Python:
class B(A):
    def foo(self):
        super().foo()

B().foo()  # does not raise an error
In order to replicate Java's behaviour, you could adopt this approach:
class A(ABC):
    @abstractmethod
    def foo(self):
        raise NotImplementedError
However, in practice I have rarely seen this latter solution, even though it is apparently the more correct one. Is there a specific reason to prefer the first approach over the second?

If you really want an error to be raised when one of the subclasses calls the superclass's abstract method, then yes, you should raise it manually. (And prefer to raise an instance of the exception class, i.e. raise NotImplementedError(), even though the raise command also works with the class directly.)
However, the existing behavior is actually convenient: if your abstractmethod contains just a pass, then you can have any number of subclasses inheriting from your base class, and as long as at least one of them implements the abstractmethod, it will work, even if all of them call the equivalent super() method, without any extra checks.
If an error (NotImplementedError or any other) were raised, then in a complex hierarchy making use of mixins and the like, you would have to check at each super() call whether the error was raised, just to skip it. For the record, it is possible to check with a conditional whether super() would hit the class where the method is abstract, this way:
if not getattr(super().foo, "__isabstractmethod__", False):
    super().foo(...)
Since what you want when a method call reaches the base of the hierarchy is for it to do nothing, it is far simpler if nothing just happens!
I mean, check this:
import abc

class A(abc.ABC):
    @abc.abstractmethod
    def validate(self, **kwargs):
        pass

class B(A):
    def validate(self, *, first_arg_for_B, second_arg_for_B=None, **kwargs):
        super().validate(**kwargs)
        # perform validation:
        ...

class C(A):
    def validate(self, *, first_arg_for_C, **kwargs):
        super().validate(**kwargs)
        # perform validation:
        ...

class Final(B, C):
    ...
Neither B.validate nor C.validate needs to worry about any other class in the hierarchy: each just does its own validation and passes the call along.
If A.validate raised, both methods would have to call super().validate(...) inside a try: ... except: pass statement, or inside a weird if block, for a gain of... nothing.
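For illustration, here is how a call on Final threads through the hierarchy (a minimal sketch, using the placeholder argument names from the snippet above):
final = Final()
# Final.__mro__ is (Final, B, C, A, object): B.validate runs first, delegates to
# C.validate via super(), which delegates to A.validate, which silently does nothing.
final.validate(first_arg_for_B="b", first_arg_for_C="c")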
update - I just found this note on the official documentation:
Note: Unlike Java abstract methods, these abstract methods may have an implementation. This implementation can be called via the super() mechanism from the class that overrides it. This could be useful as an end-point for a super-call in a framework that uses cooperative multiple-inheritance.
https://docs.python.org/3/library/abc.html#abc.abstractmethod
I will even return you a personal question, if you can reply in the comments: I understand this is much less relevant in Java, where one can't have multiple inheritance, so even in a big hierarchy the first subclass to implement the abstract method is usually well known. But otherwise, in a Java project where one could pick one of various base concrete classes and proceed with others in an arbitrary order, since the abstract method raises, how is that resolved?

Related

How to make a singleton that inherits a normal class, with predefined values, and comparable by `is` without the need of rounded brackets?

My attempt was to create the default instance from inside of a metaclass, but to no avail. At least the reported class is the singleton in the example below.
EDIT: Clarifying requirements here: a singleton comparable by using the is keyword, without having to instantiate/call it. Unfortunately, this well known question-answer here doesn't seem to address that.
class MyNormalClass:
    def __init__(self, values):
        self.values = values

class MySingleton(MyNormalClass, type):
    def __new__(mcs, *args, **kwargs):
        return MyNormalClass(["default"])

print(MySingleton)
# <class '__main__.MySingleton'>
print(MySingleton.values)
# AttributeError: type object 'MySingleton' has no attribute 'values'
Metaclasses for singletons are overkill. (Search my answers for that, and there should be about 10 occurrences of this phrase).
In your example code in the question, the code inside the class and the metaclass methods is not even being run, not once. There is no black magic in Python: the program just runs, and there are a few special methods marked with __xx__ that are called intrinsically by the language runtime. In this case, the metaclass __new__ would be called whenever you create a new class using it as the metaclass, which your code does not show. And also, it would create a new instance of your "singleton" class each time it was used.
In this case, if all you need is a single instance, just create that instance and let that one be public, instead of its class. You can even use the instance itself to shadow the class in the module namespace, so no one can instantiate it again by accident. If you want values to be immutable, well, you can't ensure that with pure Python code in any way, but you can make changing values with = not work casually, so that people know they should not be changing it:
class MySingleton:
    __slots__ = ("values",)
    def __init__(self, values):
        self.values = values
        def lock(self, name, value):
            raise TypeError("Singleton can't change value")
        # note: __setattr__, not __setitem__, is what intercepts "obj.attr = value"
        self.__class__.__setattr__ = lock

MySingleton = MySingleton(["values"])
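Usage then looks like this (a quick sketch with the names above):
assert MySingleton is MySingleton   # identity comparison works, no call needed
print(MySingleton.values)           # ['values']
MySingleton.values = []             # raises TypeError: Singleton can't change value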

Inheriting __init_subclass__-parameters

Let's say I have a class that requires some arguments via __init_subclass__:
class AbstractCar:
    def __init__(self):
        self.engine = self.engine_class()

    def __init_subclass__(cls, *, engine_class, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.engine_class = engine_class

class I4Engine:
    pass

class V6Engine:
    pass

class Compact(AbstractCar, engine_class=I4Engine):
    pass

class SUV(AbstractCar, engine_class=V6Engine):
    pass
Now I want to derive another class from one of those derived classes:
class RedCompact(Compact):
    pass
The above does not work, because it expects me to re-provide the engine_class parameter. Now, I understand perfectly why that happens: Compact inherits __init_subclass__ from AbstractCar, which is then called when RedCompact inherits from Compact and is subsequently missing the expected argument.
I find this behavior rather non-intuitive. After all, Compact specifies all the required arguments for AbstractCar and should be usable as a fully realized class. Am I completely wrong to expect this behavior? Is there some other mechanism that allows me to achieve this kind of behavior?
I already have two solutions but I find both lacking. The first one adds a new __init_subclass__ to Compact:
class Compact(AbstractCar, engine_class=I4Engine):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(engine_class=I4Engine, **kwargs)
This works but it shifts responsibility for the correct working of the AbstractCar class from the writer of that class to the user. Also, it violates DRY as the engine specification is now in two places that must be kept in sync.
My second solution overrides __init_subclass__ in derived classes:
class AbstractCar:
    def __init__(self):
        self.engine = self.engine_class()

    def __init_subclass__(cls, *, engine_class, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.engine_class = engine_class

        @classmethod
        def evil_black_magic(cls, **kwargs):
            AbstractCar.__init_subclass__(engine_class=engine_class, **kwargs)

        if '__init_subclass__' not in cls.__dict__:
            cls.__init_subclass__ = evil_black_magic
While this works fine for now, it is purest black magic and bound to cause trouble down the road. I feel like this cannot be the solution to my problem.
Indeed, the way this works in Python is counter-intuitive, and I agree with your reasoning.
The way to fix it is to have some logic in the metaclass. Which is a pity, since avoiding the need for metaclasses is exactly what __init_subclass__ was created for.
Even with metaclasses it would not be easy: one would have to annotate the parameters given to __init_subclass__ somewhere in the class hierarchy, and then insert those back when creating new subclasses.
On second thought, that can work from within __init_subclass__ itself. That is: when __init_subclass__ "perceives" it did not receive a parameter that should have been mandatory, it checks for it in the classes in the mro ("method resolution order": a sequence of all base classes, in order).
In this specific case, it can just check for the attribute itself: if it is already defined for at least one class in the mro, it leaves it as is; otherwise it raises.
If the code in __init_subclass__ should do something more complex than simply annotating the parameter as passed, then, besides that, the parameter should be stored in an attribute in the new class, so that the same check can be performed downstream.
In short, for your code:
class AbstractCar:
    def __init__(self):
        self.engine = self.engine_class()

    def __init_subclass__(cls, *, engine_class=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if engine_class:
            cls.engine_class = engine_class
            return
        for base in cls.__mro__[1:]:
            if getattr(base, "engine_class", False):
                return
        raise TypeError("parameter 'engine_class' must be supplied as a class named argument")
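For illustration, a quick check that subclassing a fully realized class now works (a sketch, assuming the engine classes from the question):
class I4Engine:
    pass

class Compact(AbstractCar, engine_class=I4Engine):
    pass

class RedCompact(Compact):   # no TypeError: engine_class is found on Compact via the mro
    pass

assert isinstance(RedCompact().engine, I4Engine)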
I think this is a nice solution. It could be made more general with a decorator meant specifically for __init_subclass__ that would store the parameters in a named class attribute and perform this check automatically.
(I wrote the code for such a decorator, but handling all the corner cases for named and unnamed parameters, even using the inspect module, can make things ugly.)

"__new__" create an infinite recursive loop

I learned about __new__ from the "new and init" post on the Spyhce blog, which gives this example:
class A(object):
    def __new__(cls):
        return super(A, cls).__new__(cls)  # I think here is an infinite recursion
the code could be rewritten as
class A(object):
    def __new__(A):
        return object.__new__(A)
As I understand the algorithm:
1. define A, inheriting from object
2. override the method __new__ via def __new__(A); the parameter A is not bound until it is called
3. object.__new__(A) recursively calls A
This looks to me like definite infinite recursion.
How does the infinite loop stop?
This doesn't cause an infinite recursion because you aren't calling the __new__ method of the A class. You are calling the __new__() method of its superclass, which is still untouched and behaves as the default.
So it's simple now: the method belongs to the superclass, not your A class. __new__ requires a class reference to be passed as the first argument, and it will create the instance for you. That's what the super(A, cls).__new__(cls) line does.
As is clearly said in the comments on the other answer, you are mistaking the variable A, local to the staticmethod __new__, for the class name A, which exists in a more global scope. By convention, this parameter is named something like cls for __new__, and your choice of typing A is confusing you. I think this is an example of massive overthinking of one of the first-learned concepts in Python.
The A in the inner call to object.__new__ is not the same variable as the A you chose for the class name. Scope is relevant whether you are programming magic methods or simple scripts.
You erroneously changed this between examples, leading to your confusion. cls (as you wrote correctly in the first version) is essentially a placeholder for the class, which is sent once, and only here, to a different method. Without a call back, as there would be if object.__new__(x) called x.__new__, how could there be recursion?
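As a quick sanity check, a sketch with a print added shows that __new__ runs exactly once per instantiation:
class A(object):
    def __new__(cls):
        print("A.__new__ called")             # printed once per A() call
        return super(A, cls).__new__(cls)     # object.__new__ does not call back into A.__new__

a = A()   # prints "A.__new__ called" exactly once; no recursion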

Best way to register all subclasses

I am currently developing a piece of software where I have class instances that are generated from dictionaries. These dictionaries are structured as follows:
layer_dict = {
    "layer_type": "Conv2D",
    "name": "conv1",
    "kernel_size": 3,
    ...
}
Then, the following code is run:
def create_layer(layer_dict):
    LayerType = getattr(layers, layer_dict['layer_type'])
    del layer_dict['layer_type']
    return LayerType(**layer_dict)
Now, I want to support the creation of new layer types (by subclassing the BaseLayer class). I've thought of a few ways to do this and thought I'd ask which way is best and why, as I don't have much experience developing software (I'm finishing an MSc in comp bio).
Method 1: Metaclasses
The first method I thought of was to have a metaclass that registers every subclass of BaseLayer in a dict and do a simple lookup of this dict instead of using getattr.
class MetaLayer(type):
    layers = {}

    def __init__(cls, name, bases, dct):
        if name in MetaLayer.layers:
            raise ValueError('Cannot have more than one layer with the same name')
        MetaLayer.layers[name] = cls
Benefit: The metaclass can make sure that no two classes have the same name. The user doesn't need to think about anything but subclassing when creating new layers.
Downside: Metaclasses are difficult to understand and often frowned upon
Method 2: Traversing the __subclasses__ tree
The second method I thought of was to use the __subclasses__ method of BaseLayer to get a list of all subclasses, then create a dict with Layer.__name__ as keys and Layer as values. See example code below:
def get_subclasses(cls):
    """Returns all classes that inherit from `cls`"""
    subclasses = {
        sub.__name__: sub for sub in cls.__subclasses__()
    }
    subsubclasses = (
        get_subclasses(sub) for sub in subclasses.values()
    )
    subsubclasses = {
        name: sub for subs in subsubclasses for name, sub in subs.items()
    }
    return {**subclasses, **subsubclasses}
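With this helper, the factory lookup from the question could become (a sketch, assuming the BaseLayer hierarchy described above):
def create_layer(layer_dict):
    layer_dict = dict(layer_dict)     # work on a copy instead of mutating the caller's dict
    LayerType = get_subclasses(BaseLayer)[layer_dict.pop('layer_type')]
    return LayerType(**layer_dict)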
Benefit: Easy to explain how this works.
Downside: We might end up with two layers having the same name.
Method 3: Using a class decorator
The final method is my favourite as it doesn't hide any implementation details in a metaclass, and still manages to prevent multiple classes with the same name.
Here the layers module has a global variable named layers and a decorator named register_layer, which simply adds the decorated classes to the layers dict. See code below.
layers = {}

def register_layer(cls):
    if cls.__name__ in layers:
        raise ValueError('Cannot have two layers with the same name')
    layers[cls.__name__] = cls
    return cls
Benefit: No metaclasses and no way of having two layers with the same name.
Downside: Requires a global variable, which is often frowned upon.
So, my question is, which method is preferable? And more importantly, why?
Actually, that is the kind of thing metaclasses are designed for. As you can see from the options you stated above, it is the simplest and most straightforward design.
They are sometimes "frowned upon" because of a few things: (1) people don't understand them and don't care to understand; (2) people misuse them when they are actually not needed; (3) they are hard to combine, so if any of your classes is to be used with a mixin that has a different metaclass (say abc.ABC), you also have to produce a combining metaclass.
Method 4: __init_subclass__
Now, that said, from Python 3.6 there is a new feature that can cover your use case without the need for metaclasses: the class __init_subclass__ method:
it is called as a classmethod on the base class when subclasses of it are created.
All you need is to write a proper __init_subclass__ method on your BaseLayer class, and you get all the benefits you'd have from the implementation in the metaclass and none of the downsides.
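A minimal sketch of what that __init_subclass__ could look like (assuming the BaseLayer name from the question; the registry dict and error message mirror method 3):
class BaseLayer:
    layers = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.__name__ in BaseLayer.layers:
            raise ValueError('Cannot have two layers with the same name')
        BaseLayer.layers[cls.__name__] = cls

class Conv2D(BaseLayer):
    pass

print(BaseLayer.layers)   # {'Conv2D': <class '__main__.Conv2D'>}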
Like you, I like the class decorator approach as it is more readable.
You can avoid using a global variable by making the class decorator itself a class, and making layers a class variable instead. You can also avoid possible name collisions by joining the target class's name with its module name:
class register_layer:
    layers = {}

    def __new__(cls, target):
        cls.layers['.'.join((target.__module__, target.__name__))] = target
        return target
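Usage would then look like this (a sketch; Conv2D stands in for a real layer class):
@register_layer
class Conv2D:   # would subclass BaseLayer in the real code
    pass

print(register_layer.layers)
# {'__main__.Conv2D': <class '__main__.Conv2D'>}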

Python - GUI checkbox cannot assign the function with arguments to variable

I'm having trouble getting my head around assigning a function to a variable when the function uses arguments. The arguments appear to be required but no matter what arguments I enter it doesn't work.
The scenario is that I'm creating my first GUI which has been designed in QT Designer. I need the checkbox to be ticked before the accept button allows the user to continue.
Currently this is coded to let me know whether ticking the checkbox returns anything (which it does); however, I don't know how to pass that result on to the next function, accept_btn. I thought the easiest way would be to create a variable, but it requires positional arguments and that's where I'm stuck.
My code:
class MainWindow(QtWidgets.QMainWindow, Deleter_Main.Ui_MainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.setupUi(self)
        self.ConfirmBox.stateChanged.connect(self.confirm_box)
        self.Acceptbtn.clicked.connect(self.accept_btn)

    def confirm_box(self, state):
        if self.ConfirmBox.isChecked():
            print("checked")
        else:
            print("not checked")

    checked2 = confirm_box(self, state)

    def accept_btn(self):
        if checked2 == True:
            print("clicked")
        else:
            print("not clicked")

app = QApplication(sys.argv)
form = MainWindow()
form.show()
app.exec_()
The code gets stuck on 'checked2' with the error:
NameError: name 'self' is not defined
I thought there might be other solutions for running this all within one function but I can't seem to find a way whilst the below is required.
self.ConfirmBox.stateChanged.connect(self.confirm_box)
I would especially appreciate it if anyone could help me understand exactly why I need the 'self' argument in the function and the variable.
Thanks in advance,
If you just need to enable a button when the checkbox is checked, it can be easily done within the signal connection:
self.ConfirmBox.toggled.connect(self.Acceptbtn.setEnabled)
QWidget.setEnabled requires a bool argument, which is the argument type passed on by the toggled signal, so the connection is very simple in this case.
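For reference, a minimal self-contained sketch of that pattern (assuming PyQt5, with plain widgets standing in for the Designer-generated Ui_MainWindow):
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QVBoxLayout, QCheckBox, QPushButton

app = QApplication(sys.argv)
window = QWidget()
layout = QVBoxLayout(window)

confirm_box = QCheckBox("I confirm")
accept_btn = QPushButton("Accept")
accept_btn.setEnabled(False)   # disabled until the box is ticked

# toggled emits a bool, which feeds straight into setEnabled
confirm_box.toggled.connect(accept_btn.setEnabled)

layout.addWidget(confirm_box)
layout.addWidget(accept_btn)
window.show()
sys.exit(app.exec_())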
Apart from this, there are some mistakes in your understanding of classes in Python: it seems like you are thinking in a "procedural" way, which doesn't work well with general PyQt implementations and common Python usage. Code in the class body only really belongs there if you need some processing to be done when the class is created, for example to define some class attributes or manipulate the way some methods behave; and even then, the results will be class attributes, inherited by every new instance.
The line checked2 = confirm_box(self, state) will obviously give you an error, since you are defining checked2 as a class attribute. This means that its value is processed and assigned while the class is being created: at this point, an instance of the class does not exist yet; Python just executes the code that is not part of the methods until it reaches the end of the class definition (its primary indentation). When it reaches the checked2 line, it tries to call the confirm_box method, but the names "self" and "state" do not exist yet, as they have not been defined as class attributes, hence the NameError exception.
Conceptually, what you have done is something similar to this:
class SomeObject(object):
    print(something)
This wouldn't make any sense, since there is no "something" defined anywhere.
self is a Python convention used for instance methods: it is the name commonly used to refer to the instance of a class; you could actually use any valid Python identifier instead.
The first argument of any instance method is always the reference to the class instance; the only exceptions are the classmethod and staticmethod decorators, but that's another story. When you call a method of an instantiated class, the instance object is automatically bound to the first argument of the called method: self is the instance itself.
For example, you could create a class like this:
class SomeObject(object):
    def __init__(Me):
        Me.someValue = 0

    def setSomeValue(Myself, value):
        Myself.someValue = value

    def multiplySomeValue(I, multi):
        I.setSomeValue(I.someValue * multi)
        return I.someValue
But that would be a bit confusing...
