Python Cookbook recipe 9.14 Discussion - python-3.x

When I read the Python Cookbook recipe 9.14 discussion, it says that I have to convert the OrderedDict to a dict instance when making the final class object. Do I have to do that, or is using the OrderedDict OK?
I tried this (passing the OrderedDict straight to the type constructor), and it doesn't raise any exception:
from collections import OrderedDict

class OrderedMeta(type):
    def __new__(cls, clsname, bases, attr_dict):
        order = []
        for name, value in attr_dict.items():
            if isinstance(value, Typed):   # Typed is the descriptor defined in the recipe
                value._name = name
                order.append(name)
        attr_dict['_order'] = order
        return type.__new__(cls, clsname, bases, attr_dict)

    @classmethod
    def __prepare__(cls, clsname, bases):
        return OrderedDict()
Code from the book:
class OrderedMeta(type):
    def __new__(cls, clsname, bases, attr_dict):
        d = dict(attr_dict)
        order = []
        for name, value in attr_dict.items():
            if isinstance(value, Typed):
                value._name = name
                order.append(name)
        d['_order'] = order
        return type.__new__(cls, clsname, bases, d)

    @classmethod
    def __prepare__(cls, clsname, bases):
        return OrderedDict()

No, it is not needed.
type.__new__ can take an OrderedDict just fine (OrderedDict is a dict subclass), and possibly even mappings with fewer methods than the full mapping protocol - __iter__, __len__ and __getitem__ should suffice.
This has likely been valid since Python 3.0 - the author of the recipe simply took the "safe road", since he was not sure what might ensue from a non-native dict being passed to type.__new__, and calling dict() on the OrderedDict is a single plain line of code anyway.
But it would not have made much sense to allow a custom mapping in __prepare__ and then refuse that same mapping in type.__new__. In fact, you don't even need a __new__ method on your metaclass if all you need is a custom mapping - just the __prepare__ method.
That said, this particular recipe is now obsolete anyway: since Python 3.6, the dictionaries used in class construction are ordered by default (and as of Python 3.7, all Python dictionaries are ordered by default).
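A minimal sketch of that last point (Typed here is just a stand-in for the recipe's descriptor, not its real implementation): on Python 3.6+ the class body is executed in a plain, order-preserving dict, so the ordering survives without any __prepare__ or OrderedDict at all:

class Typed:
    # minimal stand-in for the Typed descriptor used in the recipe
    _name = None

class OrderedMeta(type):
    def __new__(cls, clsname, bases, attr_dict):
        # no __prepare__ here: on 3.6+ attr_dict already preserves the
        # order in which the class body was executed
        order = []
        for name, value in attr_dict.items():
            if isinstance(value, Typed):
                value._name = name
                order.append(name)
        attr_dict['_order'] = order
        return type.__new__(cls, clsname, bases, attr_dict)

class Point(metaclass=OrderedMeta):
    x = Typed()
    y = Typed()

print(Point._order)   # ['x', 'y']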

You have to do that, for reasons that David Beazley did not elaborate on beyond saying that
"it may cause problems if you don't"
in his 2013 tutorial Python 3 Metaprogramming, at around the 1 hour 30 minute mark.
Since that tutorial, Python 3.6 has introduced dictionaries that retain insertion order, so you can probably safely skip the OrderedDict for this recipe.

Related

Control the hash of methods in Python?

Is there a nice way to control the hash of a method of a Python class?
Let's say I have the following example:
class A:
    def hello(self, arg):
        print(arg)

    def __hash__(self):
        return 12345

a = A()
b = A()
hash(a.hello) == hash(b.hello)   # >>> False
Now I'm vaguely aware of why that is. Internally, methods are functions carrying a reference to their class, with some magic to them, but basically they are just function objects that (probably) inherit from object. So the __hash__ method of class A is only relevant to its own instances.
However, while this makes sense at first, I realized that in Python 3.7 the example above evaluates to True, while in 3.8 it is False.
Does anyone (1) know how to achieve this behavior in 3.8+ (thus controlling the hash of a method), and (2) know why, and what changed between the two versions (I am starting to doubt my sanity, to be honest)?
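One way to get behaviour like (1), sketched here with made-up helper names (BoundWithHash, hash_by_instance) rather than anything built in, is to expose the method through a descriptor that returns a wrapper whose __hash__ delegates to the instance:

class BoundWithHash:
    """Callable wrapper that forwards calls but takes its hash from the instance."""
    def __init__(self, func, instance):
        self.func = func
        self.instance = instance

    def __call__(self, *args, **kwargs):
        return self.func(self.instance, *args, **kwargs)

    def __hash__(self):
        # the method's hash follows the owning instance's hash
        return hash(self.instance)


class hash_by_instance:
    """Descriptor that returns BoundWithHash instead of a normal bound method."""
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        if instance is None:
            return self.func
        return BoundWithHash(self.func, instance)


class A:
    def __hash__(self):
        return 12345

    @hash_by_instance
    def hello(self, arg):
        print(arg)


a, b = A(), A()
print(hash(a.hello) == hash(b.hello))   # True on any recent Python
a.hello("hi")                           # still callable -> prints "hi"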

Self keyword usage in Python? A detailed explanation, as if you're explaining it to a kid, would be appreciated

I know that the self keyword in Python is a reference to the instance of that particular class, but how can one create an object within the same class? Can someone please explain this concept to me in detail?
I think, first of all, that you should be aware of the differences between class, object and instance.
The class is basically the code definition of an entity.
An object or instance usually means the same exact thing: a running, unique, specific entity loaded in memory.
I personally make a distinction between the two, but this is probably just my opinion:
I use object when I am reading the code functionally and want to say something like "this is the kind of object returned by this method".
I use instance when referring to the actual object running in memory.
===
Now in a python class, any member can be defined as an object, for example:
class MyClass(object):
    def __init__(self):
        self.some_member = MyOtherClass()
Please note that the class itself inherits from object, which is the base class (the mother of all Python classes).
In [1]: object??
Init signature: object()
Docstring:
The base class of the class hierarchy.
When called, it accepts no arguments and returns a new featureless
instance that has no instance attributes and cannot be given any.
Type: type
Subclasses: type, weakref, weakcallableproxy, weakproxy, int, bytearray, bytes, list, NoneType, NotImplementedType, ...
===
If you meant instead, "How can I add an instance of the same class as a member of the class?", the following could be a way to do it:
class MyClass(object):
    def __init__(self):
        self.another_instance = None

    def set_instance(self, instance):
        if not isinstance(instance, MyClass):
            raise TypeError("Wrong instance type!")
        self.another_instance = instance

a = MyClass()
b = MyClass()
a.set_instance(b)
Here I will also share with you the wrong approach:
class MyClass(object):
    def __init__(self):
        self.another_instance = MyClass()

a = MyClass()
This code results in a RecursionError: maximum recursion depth exceeded. That's because, if you think about it, each new instance tries to create another instance, which tries to create another instance, which tries... and so on.
====
This could be another way to do it, but I wouldn't suggest it except for a very specific and particular case (maybe something related to chained objects, but there is surely a better way to manage that):
In [1]: class MyClass(object):
   ...:     def __init__(self, has_link=False):
   ...:         if has_link:
   ...:             self.another_instance = MyClass(has_link=False)
   ...:

In [2]: a = MyClass(has_link=True)

In [3]: a.another_instance
Out[3]: <__main__.MyClass at 0x110fe00d0>

Python: why do I need super().__init__() call in metaclasses?

I have one question: why do I need to call super().__init__() in metaclasses? Since a metaclass is a factory of classes, I thought we don't need to call this initialization when making objects of class Shop. Or, by calling super().__init__(), are we initializing the class itself? (My IDE says that I should call it, but without super().__init__() nothing bad happens; my class works without mistakes.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict

class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)

class Shop():
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is the call in question
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):  # here I rename the descriptor object's attribute name
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to - and if your code uses no other custom metaclasses, not calling super().__init__() in the metaclass will work just the same.
But if one needs to combine your metaclass with another through inheritance, it won't work "out of the box" without the super() call: the super() call is what ensures all methods in the inheritance chain get called.
And if at first it seems that metaclasses are extremely rare and that combining metaclasses would never happen in practice: quite a few libraries and frameworks ship their own metaclasses, including Python's "abc" (abstract base classes), PyQt, ORM frameworks, and so on. If every metaclass under your control is well behaved, with proper super() calls in its __new__, __init__ and __call__ methods (if you override those), then combining it with another metaclass into a working one can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, for example, if you want to use the mechanisms of your metaclass in a class that also uses ABCMeta's functionality, you just do it: the __init__ method in your Meta will call the other metaclass's __init__. Without the super() call it would not run, some subtle thing would not be initialized in your classes, and that could be a very hard-to-find bug.
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, on any Python newer than 3.6: since that version, the dictionaries used as the "locals()" while executing class bodies are ordered by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically with super() - you have to check the code and decide which of the two __prepare__ methods should be used, or create a new mapping type that satisfies both metaclasses.
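As a concrete sketch of that one-liner, here is the Meta from the question combined with abc.ABCMeta (the created_by_meta marker and the Base/Shop classes below are purely illustrative, not part of the question's code):

from abc import ABCMeta, abstractmethod

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)   # keeps the inheritance chain intact
        cls.created_by_meta = True                 # illustrative marker only

# one line to combine the two metaclasses; both well-behaved __init__s will run
CompatibleMeta = type("CompatibleMeta", (Meta, ABCMeta), {})

class Base(metaclass=CompatibleMeta):
    @abstractmethod
    def pay(self):
        ...

class Shop(Base):
    def pay(self):
        return 42

print(Shop.created_by_meta)   # True  -> Meta.__init__ ran
print(Shop().pay())           # 42
try:
    Base()
except TypeError as exc:      # the ABC machinery still works
    print(exc)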

Create instances from list of classes

How do I create instances of classes from a list of classes? I've looked at other SO answers but didn't understand them.
I have a list of classes:
list_of_classes = [Class1, Class2]
Now I want to create instances of those classes, where the variable name storing the instance is the name of the class. I have tried:
for cls in list_of_classes:
    str(cls) = cls()
but I get the error "SyntaxError: can't assign to function call". Which is of course obvious, but I don't know what else to do.
I really want to be able to access the class by name later on. Let's say we store all the instances in a dict and that one of the classes is called ClassA; then I would like to be able to access the instance by dict['ClassA'] later on. Is that possible? Is there a better way?
You say that you want "the variable name storing the class [to be] the name of the class", but that's a very bad idea. Variable names are not data. The names are for programmers to use, so there's seldom a good reason to generate them using code.
Instead, you should probably populate a list of instances, or if you are sure that you want to index by class name, use a dictionary mapping names to instances.
I suggest something like:
list_of_instances = [cls() for cls in list_of_classes]
Or this:
class_name_to_instance_mapping = {cls.__name__: cls() for cls in list_of_classes}
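A quick usage sketch of that mapping (ClassA and ClassB are throwaway classes standing in for the ones in the question):

class ClassA:
    pass

class ClassB:
    pass

list_of_classes = [ClassA, ClassB]
class_name_to_instance_mapping = {cls.__name__: cls() for cls in list_of_classes}

print(class_name_to_instance_mapping['ClassA'])         # <__main__.ClassA object at 0x...>
print(type(class_name_to_instance_mapping['ClassB']))   # <class '__main__.ClassB'>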
One of the rare cases where it can sometimes make sense to automatically generate variables is when you're writing code to create or manipulate class objects themselves (e.g. producing methods automatically). This is somewhat easier and less fraught than creating global variables, since at least the programmatically produced names will be contained within the class namespace rather than polluting the global namespace.
The collections.namedtuple factory from the standard library, for example, creates tuple subclasses on demand, with special descriptors as attributes that allow the tuple's values to be accessed by name. Here's a very crude example of how you could do something vaguely similar yourself, using getattr and setattr to manipulate attributes on the fly:
def my_named_tuple(attribute_names):
    class Tup:
        def __init__(self, *args):
            if len(args) != len(attribute_names):
                raise ValueError("Wrong number of arguments")
            for name, value in zip(attribute_names, args):
                setattr(self, name, value)  # this programmatically sets attributes by name!

        def __iter__(self):
            for name in attribute_names:
                yield getattr(self, name)  # you can look up attributes by name too

        def __getitem__(self, index):
            name = attribute_names[index]
            if isinstance(index, slice):
                return tuple(getattr(self, n) for n in name)
            return getattr(self, name)

    return Tup
It works like this:
>>> T = my_named_tuple(['foo', 'bar'])
>>> i = T(1, 2)
>>> i.foo
1
>>> i.bar
2
If I understood your question correctly, I think you can do something like this using globals():
class A:
    pass

class B:
    pass

class C:
    pass

def create_new_instances(classes):
    for cls in classes:
        name = '{}__'.format(cls.__name__)
        obj = cls()
        obj.__class__.__name__ = name
        globals()[name] = obj

if __name__ == '__main__':
    classes = [A, B, C]
    create_new_instances(classes)
    classes_new = [globals()[k] for k in globals() if k.endswith('__') and not k.startswith('__')]
    for cls in classes_new:
        print(cls.__class__.__name__, repr(cls))
Output (you'll get a similar output):
A__ <__main__.A object at 0x7f7fac6905c0>
C__ <__main__.C object at 0x7f7fac6906a0>
B__ <__main__.B object at 0x7f7fac690630>

Initialising list superclass - self parameter needed sometimes but not others?

This is from the book Head First Python (page 208, chapter 6). Initially I saw this example in the book of initialising the subclass:
class AthleteList(list):
    def __init__(self, a_times=[]):
        list.__init__([])
        self.extend(a_times)
When I came to writing my own version I thought I could skip the extend step:
class AthleteList(list):
    def __init__(self, a_times=[]):
        list.__init__(a_times)
When it comes to printing the list:
test = AthleteList([1,2,3])
print(test)
The output is [], so there is something wrong with the initialisation. When searching around, in every case I found, the superclass was initialised by explicitly passing self:
class AthleteList(list):
    def __init__(self, a_times=[]):
        list.__init__(self, a_times)
Which makes more sense: the list superclass needs the object itself passed as an argument so that it can initialise its list values. Except why wasn't self needed in the very first example (which does actually work)? Even if I am initialising it with an empty list, surely I still need to pass the self object so that self's list is made empty. In fact I don't even need to initialise it to the empty list first - it seems to do that by default, and I can just extend later:
class AthleteList(list):
    def __init__(self, a_times=[]):
        self.extend(a_times)
This is probably the safest way to subclass list: use UserList as the base class and things work as expected:
from collections import UserList

class AthleteList(UserList):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

athlete_list = AthleteList((1, 2, 3))
print(athlete_list)  # -> [1, 2, 3]
That admittedly does not answer all your questions, but it may be a starting point.
Here is a more in-depth answer about that: https://stackoverflow.com/a/25464724/4954037
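And, as a small sketch of the original list-subclass question: list.__init__(a_times) operates on the a_times list itself, not on the new instance being constructed, which is why that version prints []. Passing self (or, equivalently, using super()) is what populates the new instance; the keyword default here is changed to None only to avoid the mutable-default pitfall:

class AthleteList(list):
    def __init__(self, a_times=None):
        # pass self explicitly so the *new instance* gets populated
        super().__init__(a_times or [])

test = AthleteList([1, 2, 3])
print(test)            # [1, 2, 3]
print(AthleteList())   # []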

Resources