Python: why do I need super().__init__() call in metaclasses? - metaprogramming

I have one question: why do I need to call super().__init__() in a metaclass? Since a metaclass is a factory of classes, I thought we don't need to call the initializer when making objects of class Shop. Or does super().__init__() initialize the class itself? (My IDE says that I should call it, but without super().__init__() nothing changes - my class works without errors.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict


class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)


class Shop():
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight


class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is the call in question
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):  # here I rename the attr_name of each descriptor object
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()

You don't "need" to - and if your code use no other custom metaclasses, not calling the metaclass'__init__.super() will work just the same.
But if one needs to combine your metaclass with another, through inheritance, without the super() call, it won't work "out of the box": the super() call is the way to ensure all methods in the inheritance chain are called.
And if at first it looks like that a metaclass is extremely rare, and combining metaclasses would likely never take place: a few libraries or frameworks have their own metaclasses, including Python's "abc"s (abstract base classes), PyQT, ORM frameworks, and so on. If any metaclass under your control is well behaved with proper super() calls on the __new__, __init__ and __call__ methods, (if you override those), what you need to do to combine both superclasses and have a working metaclass can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, for example, if you want to use the mechanisms in your metaclass in a class that also uses the ABCMeta functionality in Python, you can just do it. The __init__ method in your Meta will call the other metaclass's __init__ as well. Without the super() call it would not run, some subtle thing would not be initialized in your classes, and that could be a very hard-to-find bug.
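As a concrete sketch (assuming the Meta and Descriptor classes from the question; AbstractMeta and AbstractShop are just illustrative names), combining Meta with abc.ABCMeta this way gives a class that gets both the descriptor renaming and the abstract-method enforcement:

import abc

# Both Meta.__init__ and ABCMeta's machinery run, because Meta calls super().__init__()
AbstractMeta = type("AbstractMeta", (Meta, abc.ABCMeta), {})

class AbstractShop(metaclass=AbstractMeta):
    weight = Descriptor()

    @abc.abstractmethod
    def buy(self):
        ...

print(AbstractShop.weight.attr_name)  # 'weight' - renamed by Meta.__init__
# AbstractShop() raises TypeError, because 'buy' is abstract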
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, on Python 3.6 or newer: since that version, the dictionary used as the "locals()" while executing a class body preserves insertion order by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically with super() - you have to check the code and decide which of the two __prepare__ implementations should be used, or create a new mapping type that satisfies both metaclasses.

Related

How __init__ works for inheritance

I can't have two __init__ methods in one class because of function overloading. However, why is it possible that when initializing a subclass, I'm able to define a new __init__ method and use the super().__init__ method or the parent class's __init__ method within the subclass's __init__ method? I'm just a little confused by the concept of two __init__ methods functioning at the same time.
class Employee:
    emps = 0

    def __init__(self, name, age, pay):
        self.name = name
        self.age = age
        self.pay = pay


class Developer(Employee):
    def __init__(self, name, age, pay, level):
        Employee.__init__(self, name, age, pay)
        self.level = level
I can't have two __init__ methods in one class because of function overloading.
Partially true. You can't have two __init__ methods in the same class, because the language lacks function overloading. (Libraries can restore a limited form of function overloading; see functools.singledispatchmethod for an example.)
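For instance, a rough sketch of that limited overloading with functools.singledispatchmethod (Python 3.8+), dispatching on the type of the first non-self argument; the Formatter class here is purely illustrative:

from functools import singledispatchmethod

class Formatter:
    @singledispatchmethod
    def format(self, value):             # fallback for unregistered types
        return str(value)

    @format.register
    def _(self, value: int):             # picked when value is an int
        return f"int: {value}"

    @format.register
    def _(self, value: list):            # picked when value is a list
        return ", ".join(self.format(v) for v in value)

f = Formatter()
print(f.format(3))         # int: 3
print(f.format([1, "a"]))  # int: 1, a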
I'm just a little confused by the concept of two __init__ methods functioning at the same time.
But you aren't trying to overload __init__. You are overriding __init__, providing a different definition for Developer than the definition it inherits from Employee. (In fact, Employee is overriding __init__ as well, using its own definition in place of the one it inherits from object.) Each class has only one definition.
In your definition of Developer.__init__, you are simply making an explicit call to the inherited method to do the initialization common to all instances of Employee, before doing the Developer-specific initialization on the same object.
Using super, you use a form of dynamic lookup that lets the method resolution order (MRO) of the instance's class decide which "next" version of __init__ gets called. For single inheritance, the benefit is little more than avoiding a hard-coded reference to Employee. But with multiple inheritance, super is crucial to ensuring that all inherited methods (both the ones you know about and the ones you may not) get called, and more importantly, are called in the right order.
A full discussion of how to properly use super is beyond the scope of this question, I think, but I'll show your two classes rewritten to make the best use of super, and refer you to Python's super() considered super! for more information.
# Main rules:
# 1. *All* classes use super().__init__(), even if you are only inheriting
#    from object, because you don't know who will use you as a base class.
# 2. __init__ should use keyword arguments, and be prepared to accept any
#    keyword arguments.
# 3. All keyword arguments that don't get assigned to your own parameters
#    are passed on to an inherited __init__() to process.

class Employee:
    emps = 0

    def __init__(self, *, name, age, pay, **kwargs):
        super().__init__(**kwargs)
        self.name = name
        self.age = age
        self.pay = pay


class Developer(Employee):
    def __init__(self, *, level, **kwargs):
        super().__init__(**kwargs)
        self.level = level


d1 = Developer(name="Alice", age=30, pay=85000, level=1)
To whet your appetite for the linked article, consider
class A:
    def __init__(self, *, x, **kwargs):
        super().__init__(**kwargs)
        self.x = x


class B:
    def __init__(self, *, y, **kwargs):
        super().__init__(**kwargs)
        self.y = y


class C1(A, B):
    pass


class C2(B, A):
    pass


c1 = C1(x=1, y=2)
c2 = C2(x=4, y=3)

assert c1.x == 1 and c1.y == 2
assert c2.x == 4 and c2.y == 3
The assertions all pass, and both A.__init__ and B.__init__ are called as intended when c1 and c2 are created.
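A quick way to see why both run, using the classes above:

print(C1.__mro__)
# (<class '__main__.C1'>, <class '__main__.A'>, <class '__main__.B'>, <class 'object'>)
# C1(x=1, y=2) starts at A.__init__, which consumes x and forwards y (inside **kwargs)
# to B.__init__, which in turn forwards the now-empty kwargs to object.__init__.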
The super() function is used to give access to methods and properties of a parent or sibling class.
Check out: https://www.geeksforgeeks.org/python-super/

Change the class inside a class with different arguments

I'm a structured programming guy. So my attempts with object oriented programming are always "work in progress..."
My intent is to have a class which will adapt itself according to an external input. I saw in another post (which I was unable to find again) that I can change the class of an object, so I made this MWE, which works:
class Base:
    def __init__(self, name):
        self.name = name

    def set_text(self, text):
        self.text = text


class Terminator(Base):
    terminator = '!'

    def __init__(self):
        super().__init__('terminator')

    def get(self):
        return self.text + self.terminator


class Prefix(Base):
    def __init__(self):
        super().__init__('prefix')

    def get(self):
        return str(len(self.text)) + self.text


class_list = {
    'terminator': Terminator,
    'prefix': Prefix
}


class Selector():
    def __init__(self, option):
        self.__class__ = class_list[option]


def main():
    selection = input("Choose 'terminator' or 'prefix': ")
    obj = Selector(selection)
    obj.set_text('something')
    print(obj.get())


if __name__ == '__main__':
    main()
Terminator is a class that produces text terminated with a special character (!); Prefix produces the same text prefixed with its length.
With Selector, I can use o = Selector('prefix') to get o as a Prefix instance.
The question
My question is if I can add extra arguments to Selector and pass them to the respective class. For example:
o = Selector('prefix', number_of_digits = 2) # '05hello' instead of '5hello'
or
o = Selector('terminator', terminator = '$') # use '$' instead of '!'
For now, I couldn't figure out how to accomplish this task. I tried to use *args and **kwargs, but unsuccessfully.
Additional information
The code I'm working on is intended for undergraduate students and I want to make it simple for teaching purposes, so Selector should be used to hide other classes and their details from the students (to hide Terminator and Prefix, for example).
I expect to have about 15 distinct classes to hide behind Selector.
Also, I'm ready to hear I'm completely wrong about this approach if there are alternatives.
Try calling the appropriate class's __init__() manually, and set the variables like you otherwise would:
class Terminator(Base):
    # make terminator an instance variable instead of a class variable,
    # and set it as an overridable default arg for the constructor
    def __init__(self, terminator='!'):
        super().__init__('terminator')
        self.terminator = terminator

    def get(self):
        return self.text + self.terminator


class Selector():
    def __init__(self, option, *args, **kwargs):
        self.__class__ = class_list[option]
        self.__class__.__init__(self, *args, **kwargs)

...

o = Selector('terminator', terminator='$')
o.set_text("Hello World")
print(o.get())
# Hello World$
I should leave a disclaimer: what you're trying to do is essentially a version of the Factory method pattern, which is usually easier to maintain if you bundle it into a method instead of messing with class types and reflection:
def Selector(option: str, *args, **kwargs) -> Base:
    return class_list[option](*args, **kwargs)
    # this will do .__new__() and .__init__() normally,
    # and is indistinguishable from normal class creation
Using a method to do this instead of overriding the class metadata also has the advantage of being easy to fit into a type system (see the type hinting in the above snippet), which is difficult to do with .__init__(). This is a common design pattern in Java, for example, which is very strongly and statically typed, requires a factory method to have a signature with the superclass of anything it could possibly return, and makes it impossible for an object to change its own type at runtime.
The disadvantage of your current approach, dynamically changing .__class__, is that the .__new__() and .__init__() methods that were called on the resulting object will not match each other (it would be using Selector.__new__() but Terminator.__init__(), for example), which may cause weird and hard-to-diagnose problems in the future. It's a fun experiment, but be aware of the risks before using this in something you'll have to maintain for a long time.
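To make that mismatch concrete, here is a rough sketch (the Tracked class is hypothetical, and it reuses the Base, class_list and Selector definitions from above):

class Tracked(Base):
    def __new__(cls, *args, **kwargs):
        obj = super().__new__(cls)
        obj.created_via_new = True        # set up in __new__
        return obj

    def __init__(self):
        super().__init__('tracked')

t = Tracked()
print(hasattr(t, 'created_via_new'))      # True: normal creation ran Tracked.__new__

class_list['tracked'] = Tracked
s = Selector('tracked')                   # Selector.__new__ ran, Tracked.__new__ did not
print(hasattr(s, 'created_via_new'))      # False: the __class__ swap skipped it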

python property referring to property/attribute of member attribute?

I'm wondering if I have:
class A(object):
    def __init__(self):
        self.attribute = 1
        self._member = 2

    def _get_member(self):
        return self._member

    def _set_member(self, member):
        self._member = member

    member = property(_get_member, _set_member)


class B(object):
    def __init__(self):
        self._member = A()

    def _get_a_member(self):
        return self._member.member

    def _set_a_member(self, member):
        self._member.member = member

    member = property(_get_a_member, _set_a_member)
Can I somehow avoid writing getters/setters for A.member and simply refer to the attribute or property of the A object?
Where the getters/setters do logic, they are of course needed, but if I simply want to expose the members/attributes of a member attribute, then writing getters/setters seems like overhead.
I think even being able to write the getters/setters inline would help?
I find the question a bit unclear, but I'll try to explain some context.
Where the getters/setters do logic, they are of course needed, but if I simply want to expose the members/attributes of a member attribute
If there is no logic in the getters/setters, then there is no need to define the attribute as a property; the attribute can be used directly (in any context).
So
class A(object):
    def __init__(self):
        self.attribute = 1
        self.member = 2


class B(object):
    def __init__(self):
        self.member = A()


B().member.member  # returns 2
B().member.member = 10
In some languages it's considered good practice to abstract instance attributes behind getter/setter methods. That's not necessarily the case in Python.
Python properties are useful when you need more control over the attribute, for example:
when there is logic (validation, etc.)
to define a read-only attribute, i.e. providing a getter without a setter (see the sketch below)
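For example, a minimal read-only property sketch (the Circle class is just an illustration):

class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):       # getter only, no setter defined
        return self._radius

c = Circle(2)
print(c.radius)   # 2
c.radius = 3      # raises AttributeError: the attribute is read-only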
Update (after the comment)
Properties are not necessarily a tool to "hide" some internal implementation. Hiding in Python is a bit different than in, say, Java, due to the very dynamic nature of the language. It's always possible to introspect and even change objects on the fly; you can add new attributes (even methods) to objects at runtime:
b = B()
b.foo = 4  # define a new attribute at runtime
b.foo      # returns 4
So Python developers rely more on conventions to signal their intended abstractions.
About the polymorphic members, I think it's most natural for Python classes to just share an interface; that's what's meant by duck typing. So as long as your next implementation of A supports the same interface (provides the same methods for callers), changing its implementation should not be an issue.
So this is what I came up with - use a helper to generate the properties, with the assumption that the object has a _member attribute:
def generate_cls_a_property(name):
    """Small helper method for generating a 'dumb' property for the A object"""
    def getter(obj):
        return getattr(obj._member, name)

    def setter(obj, new_value):
        setattr(obj._member, name, new_value)

    return property(getter, setter)
This allows me to add properties like so:
class B(object):
    def __init__(self):
        self._member = A()

    member = generate_cls_a_property('member')  # generates a dumb/pass-through property
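A quick check of the generated pass-through property, assuming the A class from the question:

b = B()
b.member = 42            # goes through the generated setter into b._member
print(b.member)          # 42, read back through A's property
print(b._member.member)  # 42 as well - the same underlying value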
I'll accept my own, unless someone tops it within a week.. :)

Can python metaclasses inherit?

classes can inherit..
class Base:
    def __init__(self, name):
        self.name = name


class Derived1(Base):
    def __init__(self, name):
        super().__init__(name)


class Derived2(Base):
    def __init__(self, name):
        super().__init__(name)
Can a similar thing be done for metaclasses as well?
I have a requirement where some of my classes will have to be abstract base classes and also use my own metaclasses (say, singleton types).
Is it possible to do
class Singleton(type):
    '''
    implementation goes here..
    '''


class AbstractSingleton(Singleton, ABCMeta):
    '''
    What code should go here??
    '''
If it's possible, how should the AbstractSingleton class be implemented?
Yes, it is possible.
But first things first:
You should not be using metaclasses for creating singletons in Python.
Singletons are a simple concept, and just a custom __new__ method is enough - no need for a metaclass for that.
This simple 4-line normal class can be used as a mixin and will turn any derived class into a "singleton" class - after the first instance is created, no further instances are created, and the first instance is always returned:
class SingletonBase:
    def __new__(cls, *args, **kw):
        if "instance" not in cls.__dict__:
            # object.__new__ takes no extra arguments, so don't forward them
            cls.instance = super().__new__(cls)
        return cls.instance
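A quick demonstration of the mixin (the Config class is just an illustrative name):

class Config(SingletonBase):
    pass

a = Config()
b = Config()
assert a is b   # the second call returned the first instance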
Now, if you had a real use case for another metaclass and needed to combine it with ABCMeta or some other metaclass, all you'd have to do is create a third metaclass that inherits from both - if both of them use super() in a well-behaved way, it will just work.
import abc


class SingletonMeta(type):
    def __call__(cls, *args, **kw):
        # You know - you _really_ should not be using metaclasses for singletons.
        if "instance" not in cls.__dict__:
            cls.instance = super().__call__(*args, **kw)
        return cls.instance


class SingletonAbstractMeta(SingletonMeta, abc.ABCMeta):
    pass


class SingletonAbstractBase(metaclass=SingletonAbstractMeta):
    ...
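A short usage sketch, assuming the classes above (Database and SQLiteDatabase are hypothetical names):

class Database(SingletonAbstractBase):
    @abc.abstractmethod
    def query(self, sql):
        ...

class SQLiteDatabase(Database):
    def query(self, sql):
        return f"running {sql}"

# Database() raises TypeError - abstract methods are still enforced -
# while the concrete subclass behaves as a singleton:
assert SQLiteDatabase() is SQLiteDatabase()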
By sheer coincidence, earlier this week I used exactly this use case as an example of what can be achieved with a "meta meta class" in Python. By giving a special "meta meta class" to the metaclass one wants to combine with another (I even use ABCMeta in the example), the derived combined metaclass can be created just by using the "+" operator, like in:
class SingletonMeta(type, metaclass=MM):
    ...


class AbstractSingletonBase(metaclass=SingletonMeta + abc.ABCMeta):
    # code here.
    ...
Check the answer here.

Can I derive from classmethod in Python?

I have a special state machine implemented in Python which uses class methods as the state representation.
class EntityBlock(Block):
    def __init__(self, name):
        self._name = name

    @classmethod
    def stateKeyword1(cls, parserState: ParserState):
        pass

    @classmethod
    def stateWhitespace1(cls, parserState: ParserState):
        token = parserState.Token
        if isinstance(token, StringToken):
            if (token <= "generate"):
                parserState.NewToken = GenerateKeyword(token)
                parserState.NewBlock = cls(....)
            else:
                raise TokenParserException("....", token)
        raise TokenParserException("....", token)

    @classmethod
    def stateDelimiter(cls, parserState: ParserState):
        pass
Visit GitHub for the full source code of pyVHDLParser.
When I debug my parser FSM, I get the state names printed as:
State: <bound method Package.stateParse of <class 'pyVHDLParser.DocumentModel.Sequential.Package.Package'>>
I would like to get better reports, so I would like to override the default __repr__ behavior of each bound method object.
Yes, I could write a metaclass or apply a second decorator, but I was asking myself:
Is it possible to derive from classmethod and have only one decorator called e.g. state?
According to PyCharm's builtins.py (a collection of dummy code for Python's builtins), classmethod is a class-based decorator.
Yes, you can write your own class that derives from classmethod if you want. It's a bit complicated though. You'll need to implement the descriptor protocol (overriding classmethod's implementation of __get__) so that it returns an instance of another custom class that behaves like a bound method object. Unfortunately, you can't inherit from Python's builtin bound method type (I'm not sure why not).
Probably the best approach then is to wrap one of the normal method objects in an instance of a custom class. I'm not sure how much of the method API you need to replicate though, so that might get a bit complicated. (Do you need your states to be comparable to one another? Do they need to be hashable? Picklable?)
Anyway, here's a bare bones implementation that does the minimum amount necessary to get a working method (plus the new repr):
class MethodWrapper:
    def __init__(self, name, method):
        self.name = name if name is not None else repr(method)
        self.method = method

    def __call__(self, *args, **kwargs):
        return self.method(*args, **kwargs)

    def __repr__(self):
        return self.name


class State(classmethod):
    def __init__(self, func):
        self.name = None
        super().__init__(func)

    def __set_name__(self, owner, name):
        self.name = "{}.{}".format(owner.__name__, name)

    def __get__(self, instance, owner):
        method = super().__get__(instance, owner)
        return MethodWrapper(self.name, method)
And a quick demo of it in action:
>>> class Foo:
#State
def foo(cls):
print(cls)
>>> Foo.foo
Foo.foo
>>> Foo.foo()
<class '__main__.Foo'>
>>> f = Foo()
>>> f.foo()
<class '__main__.Foo'>
Note that the __set_name__ method used by the State descriptor is only called by Python 3.6 and later. Without that feature, it would be much more difficult for the descriptor to learn its own name (you might need to make a decorator factory that takes the name as an argument, as sketched below).
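A rough sketch of such a decorator factory, building on the State class above (the state helper and the Bar class are hypothetical):

def state(name):
    """Pre-3.6 workaround: pass the qualified name in explicitly."""
    def decorator(func):
        descriptor = State(func)
        descriptor.name = name   # what __set_name__ would fill in on 3.6+
        return descriptor
    return decorator

class Bar:
    @state("Bar.bar")
    def bar(cls):
        print(cls)

print(repr(Bar.bar))   # Bar.bar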
