MetaClass in Python - python-3.x

I am trying to create a metaclass for my class.
I have tried to print information about my class in the metaclass.
Now I have created two objects of my class,
but the second object gets created without going through my metaclass.
Does the metaclass get called only once per class?
Any help will be appreciated.
Thanks
class Singleton(type):
    def __new__(cls, name, bases, attr):
        print(f"name {name}")
        print(f"bases {bases}")
        print(f"attr {attr}")
        print("Space Please")
        return super(Singleton, cls).__new__(cls, name, bases, attr)

class Multiply(metaclass=Singleton):
    pass

objA = Multiply()
objB = Multiply()
print(objA)
print(objB)

Yes - the metaclass's __new__ and __init__ methods are called only when the class is created. After that, in your example, the class will be bound to the Multiply name. In many aspects, it is just an object like any other in Python. When you do objA = Multiply() you are not creating a new instance of type(Multiply), which is the metaclass - you are creating a new instance of Multiply itself: Multiply.__new__ and Multiply.__init__ are called.
Now, there is this: the mechanism in Python that makes __new__ and __init__ get called when creating an instance is the code in the metaclass's __call__ method. That is, just as creating any class with a __call__ method and then using an instance of it with the calling syntax obj() will invoke type(obj).__call__(obj), when you do Multiply() what is called (in this case) is Singleton.__call__(Multiply).
Since Singleton does not implement __call__, its superclass's method - type.__call__ - is used instead, and it is in there that Multiply.__new__ and __init__ are called.
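To see that dispatch in action, here is a minimal sketch (reusing the names from your example) where the metaclass overrides __call__, so you can watch it run on every instantiation:

class Singleton(type):
    def __call__(cls, *args, **kwargs):
        print(f"Singleton.__call__ invoked for {cls.__name__}")
        return super().__call__(*args, **kwargs)

class Multiply(metaclass=Singleton):
    pass

print(type(Multiply))   # <class '__main__.Singleton'> - the metaclass
obj = Multiply()        # prints "Singleton.__call__ invoked for Multiply"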
That said, there is nothing in your original code that would make your classes behave as "singletons". And more importantly, you don't need a metaclass to have a singleton in Python. I don't know who invented this thing, but it keeps circulating around.
First, if you really need a singleton, all you need to do is write a plain class, nothing special, create your single instance, and document that the instance should be used. Just as people use None - no one keeps getting a reference to NoneType and calling it to get a None reference:
class _Multiply:
    ...

# document that the code should use this instance:
Multiply = _Multiply()
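A fleshed-out sketch of the same idea (the times method is made up purely for illustration):

class _Multiply:
    def times(self, a, b):   # illustrative method, not part of the original example
        return a * b

# document that the code should use this single instance:
Multiply = _Multiply()

print(Multiply.times(3, 4))   # 12 - every user of the module shares this one object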
Second: alternatively, if your code has a need, whatsoever, to instantiate the class that should be a singleton at the place where it is used, you can use the class's __new__ method itself to control instantiation; no need for a metaclass:
class Multiply:
    _instance = None

    def __new__(cls):
        if not cls._instance:
            cls._instance = super().__new__(cls)
            # insert any code that would go in `__init__` here:
            ...
        ...
        return cls._instance
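A quick usage check for the sketch above, assuming the class is defined exactly as shown:

a = Multiply()
b = Multiply()
print(a is b)   # True - __new__ keeps returning the single stored instance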
Third, just for demonstration purposes (please don't use this): the metaclass mechanism for singletons can be built in the __call__ method:
class Singleton(type):
    registry = {}

    def __new__(mcls, name, bases, attr):
        print(f"name {name}")
        print(f"bases {bases}")
        print(f"attr {attr}")
        print("Class created")
        print("Space Please")
        return super(Singleton, mcls).__new__(mcls, name, bases, attr)

    def __call__(cls, *args, **kw):
        registry = type(cls).registry
        if cls not in registry:
            print(f"{cls.__name__} being instantiated for the first time")
            registry[cls] = super().__call__(*args, **kw)
        else:
            print(f"Attempting to create a new instance of {cls.__name__}. Returning single instance instead")
        return registry[cls]

class Multiply(metaclass=Singleton):
    pass
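And a short demonstration of what instantiating that class does, assuming the definitions above:

a = Multiply()   # prints "Multiply being instantiated for the first time"
b = Multiply()   # prints "Attempting to create a new instance of Multiply. Returning single instance instead"
print(a is b)    # True - the metaclass __call__ always hands back the registry entry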

Related

Python: why do I need super().__init__() call in metaclasses?

I have one question: why do I need to call super().__init__() in metaclasses? Because a metaclass is a factory of classes, I think we don't need to call initialization for making objects of class Shop. Or are we initializing the class by using super().__init__()? (My IDE says that I should call it, but without super().__init__() nothing happens; my class works without mistakes.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict

class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)

class Shop():
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is that func. call
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):  # here I rename the attr_name of each descriptor object
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to - and if your code use no other custom metaclasses, not calling the metaclass'__init__.super() will work just the same.
But if one needs to combine your metaclass with another, through inheritance, without the super() call, it won't work "out of the box": the super() call is the way to ensure all methods in the inheritance chain are called.
And if at first it looks like that a metaclass is extremely rare, and combining metaclasses would likely never take place: a few libraries or frameworks have their own metaclasses, including Python's "abc"s (abstract base classes), PyQT, ORM frameworks, and so on. If any metaclass under your control is well behaved with proper super() calls on the __new__, __init__ and __call__ methods, (if you override those), what you need to do to combine both superclasses and have a working metaclass can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, for example, if you want to use the mechanisms in your metaclass in a class using the ABCMeta functionalities in Python, you just do it. The __init__ method in your Meta will call the other metaclass's __init__ as well. Otherwise the latter would not run, some subtle, unexpected thing would not be initialized in your classes, and that could be a very hard bug to find.
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, on Python 3.6 or newer: since that version, the dictionaries used as the "locals()" while executing class bodies are ordered by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically by using super() - you have to check the code and decide which of the two __prepare__s should be used, or create a new mapping type with features that satisfy both metaclasses.
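As a concrete illustration of that one-liner, here is a sketch (the Meta, Base, Concrete and run names are made up for the example) combining a toy metaclass with Python's abc.ABCMeta:

import abc

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)   # keeps the cooperative chain intact
        print(f"Meta.__init__ ran for {name}")

# combine the two metaclasses in one line, as described above:
CompatibleMeta = type("CompatibleMeta", (Meta, abc.ABCMeta), {})

class Base(metaclass=CompatibleMeta):
    @abc.abstractmethod
    def run(self): ...

class Concrete(Base):
    def run(self):
        return "ok"

# Base() raises TypeError because run is abstract, Concrete() works,
# and "Meta.__init__ ran for ..." was printed for both class definitions.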

how to access class' attribute instead of objects

suppose you have
class c:
    pass

print(c.__call__)
output: <method-wrapper '__call__' of type object at 0x0000023378F28DC8>
my problem is I cannot get the same output if __call__ is defined
like so:
class c:
    __call__ = lambda self: None

print(c.__call__)
output: <function c.<lambda> at 0x000002337A069B70>
and type.__getattribute__(c, '__call__') doesn't work either.
To conclude, I want the first output in both examples.
Is it possible (I guess through some metaprogramming)?
This is the same issue you could have with a class variable and an instance variable with the same name:
class Test:
    var = 1  # class variable

    def __init__(self):
        self.var = 2  # instance variable with the same name

t = Test()
print(t.var)     # prints 2, the instance variable, not the class variable
print(Test.var)  # prints 1, the class variable
In your first example, the __call__ method is defined in the metaclass, type. You're accessing it through an instance of type, the class c. If you define a class variable in c, it's essentially an instance variable from the metaclass's perspective, so you can't see the version defined in the metaclass any more.
As in my class-variable code above, the best way to get the __call__ method from the metaclass is to name it directly: type.__call__. If you think you might have some other metaclass, you could call type on the class to get the metaclass without naming it: type(c).__call__.
Note that the type.__call__ method gets run in different situations than a __call__ method defined in a normal class. The interpreter runs type.__call__ when you call the class, e.g. c(), while c.__call__ gets run when you call an instance:
obj = c()  # this is type.__call__
obj()      # this is where c.__call__ runs
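For completeness, a small sketch of how to reach both objects when __call__ is defined on the class (the exact reprs and addresses will differ per run):

class c:
    __call__ = lambda self: None

print(c.__call__)                    # the lambda stored on c itself
print(type(c).__call__)              # type's own __call__, unbound (a slot wrapper)
print(type(c).__call__.__get__(c))   # bound to c: a method-wrapper like your first output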

The metaclass's "__init_subclass__" method doesn't work in the class constructed by this metaclass

My question was inspired by this question.
The problem there is the 3-level class model - only the terminating classes (3rd level) should be stored in the registry, but the 2nd-level classes interfere and also get stored, because they are subclasses of the 1st level.
I wanted to get rid of the 1st-level class by using a metaclass. This way only 2 class levels are left - base classes for each group of settings and their children, the various setting classes inherited from the corresponding base class. The metaclass serves as a class factory - it should create base classes with the needed methods and shouldn't be displayed in the inheritance tree.
But my idea doesn't work, because it seems that the __init_subclass__ method (the link to the method) isn't copied from the metaclass to the constructed classes. In contrast, the __init__ method works as I expected.
Code snippet № 1. The basic framework of the model:
class Meta_Parent(type):
    pass

class Parent_One(metaclass=Meta_Parent):
    pass

class Child_A(Parent_One):
    pass

class Child_B(Parent_One):
    pass

class Child_C(Parent_One):
    pass

print(Parent_One.__subclasses__())
Output:
[<class '__main__.Child_A'>, <class '__main__.Child_B'>, <class '__main__.Child_C'>]
I have wanted to add functionality to the subclassing process of the above model, so I have redefined the type's builtin __init_subclass__ like this:
Code snippet № 2.
class Meta_Parent(type):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        print(cls)
From my point of view, every new class constructed by the Meta_Parent metaclass (for example, Parent_One) should now have an __init_subclass__ method and thus should print the subclass name whenever a class is inherited from it, but it prints nothing. That is, my __init_subclass__ method isn't called when inheritance happens.
It works if Meta_Parent metaclass is directly inherited though:
Code snippet № 3.
class Meta_Parent(type):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        print(cls)

class Child_A(Meta_Parent):
    pass

class Child_B(Meta_Parent):
    pass

class Child_C(Meta_Parent):
    pass
Output:
<class '__main__.Child_A'>
<class '__main__.Child_B'>
<class '__main__.Child_C'>
Nothing strange here, the __init_subclass__ was created exactly for this purpose.
For a moment I thought that dunder methods belong to the metaclass only and are not passed to the newly constructed classes, but then I tried the __init__ method and it works as I expected in the beginning - it looks like the reference to __init__ is propagated to every class created by the metaclass.
Code snippet № 4.
class Meta_Parent(type):
    def __init__(cls, name, base, dct):
        super().__init__(name, base, dct)
        print(cls)
Output:
<class '__main__.Parent_One'>
<class '__main__.Child_A'>
<class '__main__.Child_B'>
<class '__main__.Child_C'>
The questions:
Why __init__ works, but __init_subclass__ doesn't?
Is it possible to implement my idea by using metaclass?
1. Why __init__ works, but __init_subclass__ doesn't?
I found the answer by debugging CPython with GDB.
The creation of a new class (type) starts in the type_call() function. It does two main things: a new type object creation and this object initialization.
obj = type->tp_new(type, args, kwds); is the object creation. It calls the type's tp_new slot with the passed arguments. By default tp_new stores a reference to the basic type object's tp_new slot, but if any ancestor class implements the __new__ method, the reference is changed to the slot_tp_new dispatcher function. Then type->tp_new(type, args, kwds); calls the slot_tp_new function which, in its turn, invokes the search for the __new__ method in the mro chain. The same happens with tp_init.
The subclass initialization happens at the end of new type creation - init_subclass(type, kwds). It searches the __init_subclass__ method in the mro chain of the just created new object by using the super object. In my case the object's mro chain has two items:
print(Parent_One.__mro__)
### Output
(<class '__main__.Parent_One'>, <class 'object'>)
int res = type->tp_init(obj, args, kwds); is the object initialization. It also searches for the __init__ method in the mro chain, but uses the metaclass mro, not the mro of the just created object. In my case the metaclass mro has three items:
print(Meta_Parent.__mro__)
###Output
(<class '__main__.Meta_Parent'>, <class 'type'>, <class 'object'>)
Simplified execution diagram: (image omitted)
So, the answer is: the __init_subclass__ and __init__ methods are searched for in different places:
__init_subclass__ is searched for first in Parent_One's __dict__, then in object's __dict__.
__init__ is searched for in this order: Meta_Parent's __dict__, type's __dict__, object's __dict__.
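A small sketch of that difference, using the names from the question: __init_subclass__ has to live on the class produced by the metaclass, while __init__ lives on the metaclass itself:

class Meta_Parent(type):
    def __init__(cls, name, bases, dct):
        super().__init__(name, bases, dct)
        print("metaclass __init__:", name)        # found through Meta_Parent's mro

class Parent_One(metaclass=Meta_Parent):
    def __init_subclass__(cls, **kwargs):         # found through Parent_One's mro
        super().__init_subclass__(**kwargs)
        print("__init_subclass__:", cls.__name__)

class Child_A(Parent_One):
    pass

# Output:
# metaclass __init__: Parent_One
# __init_subclass__: Child_A
# metaclass __init__: Child_A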
2. Is it possible to implement my idea by using metaclass?
I came up with the following solution. It has a drawback: the metaclass __init__ method is called for each subclass, children included, which means all subclasses get registry and __init_subclass__ attributes, which is needless. But it works as I requested in the question.
#!/usr/bin/python3

class Meta_Parent(type):
    def __init__(cls, name, base, dct, **kwargs):
        super().__init__(name, base, dct)

        # Add the registry attribute to each new child class.
        # It is not needed in the terminal children though.
        cls.registry = {}

        @classmethod
        def __init_subclass__(cls, setting=None, **kwargs):
            super().__init_subclass__(**kwargs)
            cls.registry[setting] = cls

        # Assign the nested classmethod to the "__init_subclass__" attribute
        # of each child class.
        # It isn't needed in the terminal children either.
        # Maybe there is a way to avoid adding these needless attributes
        # (registry, __init_subclass__) there. I haven't thought about it yet.
        cls.__init_subclass__ = __init_subclass__

# Create two base classes.
# All child subclasses will be inherited from them.
class Parent_One(metaclass=Meta_Parent):
    pass

class Parent_Two(metaclass=Meta_Parent):
    pass

### Parent_One's children
class Child_A(Parent_One, setting='Child_A'):
    pass

class Child_B(Parent_One, setting='Child_B'):
    pass

class Child_C(Parent_One, setting='Child_C'):
    pass

### Parent_Two's children
class Child_E(Parent_Two, setting='Child_E'):
    pass

class Child_D(Parent_Two, setting='Child_D'):
    pass

# Print results.
print("Parent_One.registry: ", Parent_One.registry)
print("#" * 100, "\n")
print("Parent_Two.registry: ", Parent_Two.registry)
Output
Parent_One.registry: {'Child_A': <class '__main__.Child_A'>, 'Child_B': <class '__main__.Child_B'>, 'Child_C': <class '__main__.Child_C'>}
####################################################################################################
Parent_Two.registry: {'Child_E': <class '__main__.Child_E'>, 'Child_D': <class '__main__.Child_D'>}
The solution I came up with and use/like is:
class Meta_Parent(type):
    def _init_subclass_override(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Do whatever... I raise an exception if something is wrong
        #
        # e.g.
        # if the sub-class's name does not start with "Child_"
        #     raise NameError
        #
        # cls is the actual class, Child_A in this case

class Parent_One(metaclass=Meta_Parent):
    @classmethod
    def __init_subclass__(cls, **kwargs):
        Meta_Parent._init_subclass_override(cls, **kwargs)

### Parent_One's children
class Child_A(Parent_One):
    pass
I like this because it DRYs the sub-class creation code/checks. At the same time, if you see Parent_One, you know that there is something happening whenever a sub-class is created.
I did it while mucking around to mimic my own Interface functionality (instead of using ABC), and the override method checks for existence of certain methods in the sub-classes.
One can argue whether the override method really belongs in the metaclass, or somewhere else.
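To make the comments in the snippet above concrete, here is one possible version of the override with the name check written out (the naming rule itself is just this example's convention):

class Meta_Parent(type):
    def _init_subclass_override(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if not cls.__name__.startswith("Child_"):
            raise NameError(f"{cls.__name__}: sub-class names must start with 'Child_'")

class Parent_One(metaclass=Meta_Parent):
    @classmethod
    def __init_subclass__(cls, **kwargs):
        Meta_Parent._init_subclass_override(cls, **kwargs)

class Child_A(Parent_One):
    pass                      # accepted

# class Oops(Parent_One):     # would raise NameError at class-definition time
#     pass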

Change class parent dynamically from attributes python 3

I want the new class to dynamically inherit from different parents depending on an attribute given when creating an instance. So far I've tried something like this:
class Meta(type):
    chooser = None

    def __call__(cls, *args, **kwars):
        if kwargs['thingy'] == 'option':
            Meta.choose = option
        return super().__call__(*args, **kwargs)

    def __new__(cls, name, parents, attrs):
        if Meta.choose == option:
            bases = (parent1)
        return super().__new__(cls, name, parents, attrs)
It doesn't work. Is there a way that, depending on one of the parameters of the instance, I can dynamically choose a parent for the class?
First, let's fix a trivial mistake in the code, and then dig into the "real problem": the bases parameter needs to be a tuple. When you do bases=(option) the right-hand side is not a tuple - it is merely a parenthesized expression that will be resolved and passed on as the non-tuple option.
Change that to bases=(option,) whenever you need to create a tuple for the bases.
The second mistake is more conceptual, and is probably why you didn't get it to work across various attempts: the __call__ method of the metaclass is not something one usually fiddles with. To keep a long story short, the __call__ method of a class's class is what is called to coordinate the calling of the __new__ and __init__ methods of instances of that class - that is done by Python automatically, and it is the __call__ from type that has this mechanism. When you transpose that to your metaclass you might realise that the __call__ method of your metaclass is not used when the __new__ and __init__ methods of your metaclass itself are about to be called (when a class is defined). In other words - the __call__ that is used is the one on the "meta-meta" class (which is, again, type).
The __call__ method you wrote will instead be used by the instances of your custom classes when they are created (and this is what you intended), and will have no effect on class creation as it won't invoke the metaclass' __new__ - just the class __new__ itself. (and this is not what you intended).
So what you need is, from inside __call__, not to call super().__call__ with the same arguments you received: that would pass cls on to type's __call__, and the bases for cls were baked in when the metaclass __new__ ran - which happens when the class body itself is declared.
You would have to, inside this __call__, dynamically create a new class (or use one from a pre-filled table), and then pass that dynamically created class to type.__call__.
But - at the end of the day, one can see that all this can be done with a factory function, so there is no need to create this super-complicated metaclass mechanism for it - and other Python tools such as linters and static analysers (as embedded in an IDE you or your colleagues may be using) might work better with that.
Solution using factory function:
def factory(cls, *args, options=None, **kwargs):
    if options == 'thingy':
        cls = type(cls.__name__, (option1,), dict(cls.__dict__))
    elif options == 'other':
        ...
    return cls(*args, **kwargs)
If you don't want to create a new class on every call, but want to share a couple of pre-existing classes with common bases, just create a cache-dictionary, and use the dict's setdefault method:
class_cache = {}

def factory(cls, *args, options=None, **kwargs):
    if options == 'thingy':
        cls = class_cache.setdefault((cls.__name__, options),
            type(cls.__name__, (option1,), dict(cls.__dict__)))
    elif options == 'other':
        ...
    return cls(*args, **kwargs)
(The setdefault method will store the second argument on the dict if the key (name, options) does not exist yet).
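A usage sketch under the same assumptions as the snippets above (option1 stands for whatever base you want to graft in, MyClass for the class being instantiated; both names are placeholders):

class option1:
    def extra(self):
        return "behaviour grafted in from option1"

class MyClass:
    def __init__(self, x):
        self.x = x

obj = factory(MyClass, 10, options='thingy')
print(type(obj).__bases__)   # (<class '__main__.option1'>,)
print(obj.extra())           # method inherited from the dynamically chosen base
print(obj.x)                 # 10 - MyClass's own __init__ was carried over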
Using a metaclass:
Updated
After breakfast :-) I came up with this:
Make your metaclass __new__ inject a __new__ function into the created class itself that will either dynamically create a new class with the desired bases, or use a cached one for the same options. But unlike the other example, use the metaclass to record the original parameters of the class creation, so they can be reused to create the derived class:
parameter_buffer = {}
derived_classes = {}

class Meta(type):
    def __new__(metacls, name, bases, namespace):
        cls = super().__new__(metacls, name, bases, namespace)
        parameter_buffer[cls] = (name, bases, namespace)
        original_new = cls.__new__

        def __new__(cls, *args, option=None, **kwargs):
            if option is None:
                return original_new(cls, *args, **kwargs)
            name, bases, namespace = parameter_buffer[cls]
            if option == 'thingy':
                bases = (option1,)
            elif option == 'thingy2':
                ...
            if not (cls, bases) in derived_classes:
                derived_classes[cls, bases] = type(name, bases, namespace)
            return derived_classes[cls, bases](*args, **kwargs)

        cls.__new__ = __new__
        return cls
To keep the example short, this simply overwrites any explicit __new__ method on the class that uses this metaclass. Also, the derived classes created this way are not themselves bearers of the same capability, since they are created by calling type and the metaclass is discarded in the process. Both things could be taken care of by writing more careful code, but it would become too complicated for an example here.

How to use method parameter as parameter in another method?

class Class1(object):
    def __init__(self, parameter1):
        ...  # action with parameter

    def method1(self, parameter1):
        ...  # method actions
So what I want is to be able to make a Class1 object without having loaded parameter1 yet, and then, once that has happened, use method1 to set parameter1 and run its actions, as __init__ will use the results of method1. This is a Python tutorial practice exam, by the way, so it has to be done this way.
EDIT:
>>> object1 = Class1()
>>> object1.method1(parameter1)
In order to allow a later initialization, you want to move all your actual initialization stuff into the method and make the parameter to the __init__ optional. Then, if the parameter is specified, you can call the method or not.
class SomeClass(object):
    def __init__(self, param=None):
        # do some general initialization, like initializing instance members
        self.foo = 'bar'

        # if the parameter is specified, call the init method
        if param is not None:
            self.init(param)

    def init(self, param):
        # do initialization stuff
        ...
Then, both of the following ways to create the object are equivalent:
x = SomeClass('param value')

y = SomeClass()
y.init('param value')
If the idea is to be able to assign a value for the attribute at the method level and not in the initialization of the Class, I would suggest the following implementation:
class Class:
    def __init__(self, parameter=None):
        self.parameter = parameter

    def method(self, parameter):
        self.parameter = parameter
You can check that the attribute is certainly assigned through the method:
>>> c = Class()
>>> c.method('whatever')
>>> print(c.parameter)
whatever
BTW, in Python 3 you don't need to explicitly inherit from object anymore, since all classes already inherit from object.
