Initialising list superclass - self parameter needed sometimes but not others? - python-3.x

This is from the book Head First Python (p. 208, chapter 6). Initially I saw an example in the book of the initialisation of the subclass like so:
class AthleteList(list):
    def __init__(self, a_times=[]):
        list.__init__([])
        self.extend(a_times)
When I came to write my own version, I thought I could skip the extend step:
class AthleteList(list):
    def __init__(self, a_times=[]):
        list.__init__(a_times)
When it comes to printing the list:
test = AthleteList([1, 2, 3])
print(test)
The output is [], so there is something wrong with the initialisation. When searching around, in every example I found, the superclass was initialised by explicitly passing self:
class AthleteList(list):
    def __init__(self, a_times=[]):
        list.__init__(self, a_times)
That makes more sense: the list superclass needs the object itself passed as an argument so that it can initialise its list values. But why wasn't self needed in the very first example (which does actually work)? Even if I am initialising it with an empty list, I surely still need to pass the self object so that self's list is made empty. It turns out I don't even need to initialise it to the empty list first (that seems to happen by default), and I can just extend later:
class AthleteList(list):
    def __init__(self, a_times=[]):
        self.extend(a_times)

This is probably the safest way to subclass list; use UserList as the base class and things work as expected:
from collections import UserList

class AthleteList(UserList):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

athlete_list = AthleteList((1, 2, 3))
print(athlete_list)  # -> [1, 2, 3]
That admittedly does not answer all your questions, but it may be a starting point.
Here is a more in-depth answer about subclassing list: https://stackoverflow.com/a/25464724/4954037
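As for why the self-less version prints []: looked up on the class rather than on an instance, __init__ is a plain function, so its first positional argument fills the self slot. list.__init__(a_times) therefore runs the initialiser on a_times itself, while the new AthleteList instance (already created empty by list.__new__) is never touched. A minimal sketch contrasting the two calls (the class names Broken and Fixed are just for illustration):

class Broken(list):
    def __init__(self, a_times=[]):
        # a_times fills the "self" slot of this unbound call, so the
        # new Broken instance is never populated and stays empty.
        list.__init__(a_times)

class Fixed(list):
    def __init__(self, a_times=[]):
        # Passing self explicitly initialises the right object.
        list.__init__(self, a_times)

print(Broken([1, 2, 3]))  # []
print(Fixed([1, 2, 3]))   # [1, 2, 3]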

Related

How __init__ works for inheritance

I can't have 2 __init__ methods in one class because of function overloading. However, why is it possible that when initialising a subclass, I'm able to define a new __init__ method and use super().__init__ or the parent class's __init__ method within the subclass's __init__ method? I'm just a little confused by the concept of 2 __init__ methods functioning at the same time.
class Employee:
    emps = 0

    def __init__(self, name, age, pay):
        self.name = name
        self.age = age
        self.pay = pay

class Developer(Employee):
    def __init__(self, name, age, pay, level):
        Employee.__init__(self, name, age, pay)
        self.level = level
I can't have 2 __init__ methods in one class because of function overloading.
Partially true. You can't have 2 __init__ methods in the same class because the language lacks function overloading. (Libraries can partially restore a limited form of function overloading; see functools.singledispatchmethod for an example.)
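For example, here is a minimal sketch of what singledispatchmethod recovers (the Formatter class is made up for illustration); dispatch happens on the type of the first argument after self:

from functools import singledispatchmethod

class Formatter:
    @singledispatchmethod
    def format(self, value):      # fallback for unregistered types
        return str(value)

    @format.register
    def _(self, value: int):      # chosen when value is an int
        return f"int: {value}"

    @format.register
    def _(self, value: list):     # chosen when value is a list
        return ", ".join(map(str, value))

f = Formatter()
print(f.format(3))        # int: 3
print(f.format([1, 2]))   # 1, 2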
I'm just a little confused by the concept of 2 __init__ methods functioning at the same time.
But you aren't trying to overload __init__. You are overriding __init__, providing a different definition for Developer than the one it inherits from Employee. (In fact, Employee is overriding __init__ as well, using its own definition in place of the one it inherits from object.) Each class has only one definition.
In your definition of Developer.__init__, you are simply making an explicit call to the inherited method to do the initialization common to all instances of Employee, before doing the Developer-specific initialization on the same object.
Using super, you use a form of dynamic lookup that lets the method resolution order (MRO) for instances of Developer decide which "next" version of __init__ Developer should call. For single inheritance, the benefit is little more than avoiding a hard-coded reference to Employee. But for multiple inheritance, super is crucial to ensuring that all inherited methods (both the ones you know about and the ones you may not) get called, and more importantly, are called in the right order.
A full discussion of how to properly use super is beyond the scope of this question, I think, but I'll show your two classes rewritten to make the best use of super, and refer you to Python's super() considered super! for more information.
# Main rules:
# 1. *All* classes use super().__init__, even if you are only inheriting
#    from object, because you don't know who will use you as a base class.
# 2. __init__ should use keyword arguments, and be prepared to accept any
#    keyword arguments.
# 3. All keyword arguments that don't get assigned to your own parameters
#    are passed on to an inherited __init__() to process.

class Employee:
    emps = 0

    def __init__(self, *, name, age, pay, **kwargs):
        super().__init__(**kwargs)
        self.name = name
        self.age = age
        self.pay = pay

class Developer(Employee):
    def __init__(self, *, level, **kwargs):
        super().__init__(**kwargs)
        self.level = level

d1 = Developer(name="Alice", age=30, pay=85000, level=1)
To whet your appetite for the linked article, consider
class A:
    def __init__(self, *, x, **kwargs):
        super().__init__(**kwargs)
        self.x = x

class B:
    def __init__(self, *, y, **kwargs):
        super().__init__(**kwargs)
        self.y = y

class C1(A, B):
    pass

class C2(B, A):
    pass

c1 = C1(x=1, y=2)
c2 = C2(x=4, y=3)

assert c1.x == 1 and c1.y == 2
assert c2.x == 4 and c2.y == 3
The assertions all pass, and both A.__init__ and B.__init__ are called as intended when c1 and c2 are created.
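The "right order" here is the method resolution order, which you can inspect directly; a quick check using the classes just defined:

print([cls.__name__ for cls in C1.__mro__])  # ['C1', 'A', 'B', 'object']
print([cls.__name__ for cls in C2.__mro__])  # ['C2', 'B', 'A', 'object']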
The super() function gives access to methods and properties of a parent or sibling class.
Check out: https://www.geeksforgeeks.org/python-super/

Python pro way to make an abstract class allowing each child class to define its own attributes, Python3

I have to model several cases, and each case is realised by a class. I want to make sure that each class has 2 methods, get_input() and run(). So in my opinion, I can write a CaseBase class where these 2 methods are decorated with @abstractmethod. Then any child class has to implement these 2 methods, and this is exactly my goal.
However, due to the nature of my work, each case covers a distinct subject, and it is not easy to define a fixed group of attributes. The attributes should be defined in the __init__ method of a class. That means I don't know exactly which attributes to write in the CaseBase class. All I know is that all child cases must have some common attributes, like self._common_1 and self._common_2.
Therefore, my idea is to also decorate the __init__ method of the CaseBase class with @abstractmethod. See my code below.
from abc import ABC, abstractmethod
from typing import Dict, List

class CaseBase(ABC):
    @abstractmethod
    def __init__(self):
        self._common_1: Dict[str, float] = {}
        self._common_2: List[float] = []
        ...

    @abstractmethod
    def get_input(self, input_data: dict):
        ...

    @abstractmethod
    def run(self):
        ...

class CaseA(CaseBase):
    def __init__(self):
        self._common_1: Dict[str, float] = {}
        self._common_2: List[float] = []
        self._a1: int = 0
        self._a2: str = ''

    def get_input(self, input_data: dict):
        self._common_1 = input_data['common_1']
        self._common_2 = input_data['common_2']
        self._a1 = input_data['a1']
        self._a2 = input_data['a2']

    def run(self):
        print(self._common_1)
        print(self._common_2)
        print(self._a1)
        print(self._a2)

def main():
    case_a = CaseA()
    case_a.get_input(input_data={'common_1': {'c1': 1.1}, 'common_2': [1.1, 2.2], 'a1': 2, 'a2': 'good'})
    case_a.run()

if __name__ == '__main__':
    main()
My question: is my way good Python style?
I followed many Python tutorials about how to make an abstract class and child classes. They all give examples where a fixed group of attributes is defined in the __init__ method of the base class. I have also seen approaches that use a super().__init__ call in the child class to change the attributes defined in the base class or to add new attributes. But I am not sure whether that is better (more "pro") than my way.
Thanks.
You mostly used the abc module (Python 3.10 here) correctly, but it doesn't make sense to decorate the constructor with @abstractmethod; it's unnecessary. Each class, derived or not, can and will have its own constructor. You can call super().__init__(args) within the child class to call the constructor of its immediate parent if you don't want to duplicate its code but want to do further initialisation in the child class constructor.
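A minimal sketch of that suggestion, keeping the question's names: CaseBase gets a concrete __init__ holding the common attributes, and each child calls super().__init__() before adding its own (the exact attribute split is illustrative):

from abc import ABC, abstractmethod
from typing import Dict, List

class CaseBase(ABC):
    def __init__(self):  # concrete: every case shares this setup
        self._common_1: Dict[str, float] = {}
        self._common_2: List[float] = []

    @abstractmethod
    def get_input(self, input_data: dict): ...

    @abstractmethod
    def run(self): ...

class CaseA(CaseBase):
    def __init__(self):
        super().__init__()   # reuse the common initialisation
        self._a1: int = 0    # then add the case-specific attributes
        self._a2: str = ''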

Python: why do I need super().__init__() call in metaclasses?

I have got one question: why do I need to call super().__init__() in metaclasses? Because a metaclass is a factory of classes, I think we don't need to call initialisation when making objects of class Shop. Or are we initialising the class by using super().__init__()? (My IDE says that I should call it, but without super().__init__() nothing happens; my class works without mistakes.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict

class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)

class Shop():
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is that func. call
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):
                # Here I rename the attribute name of the descriptor object.
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to - and if your code use no other custom metaclasses, not calling the metaclass'__init__.super() will work just the same.
But if one needs to combine your metaclass with another through inheritance, it won't work "out of the box" without the super() call: the super() call is the way to ensure all methods in the inheritance chain are called.
And if at first it looks like metaclasses are extremely rare and combining them would likely never take place: quite a few libraries and frameworks have their own metaclasses, including Python's "abc"s (abstract base classes), PyQt, ORM frameworks, and so on. If any metaclass under your control is well behaved, with proper super() calls in its __new__, __init__ and __call__ methods (if you override those), combining it with another metaclass into a working one can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, for example, if you want to use the mechanisms in your metaclass in a class that uses the ABCMeta functionalities, you just do it: the __init__ method in your Meta will call the other metaclass's __init__. Otherwise it would not run, some subtle unexpected thing would not be initialised in your classes, and that could be a very hard to find bug.
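As a concrete illustration (assuming the Meta and Descriptor classes from the question, combined with Python's ABCMeta), the one-liner builds a metaclass whose MRO chains both behaviours:

from abc import ABCMeta, abstractmethod

# Combine the question's Meta with ABCMeta in one line, as described.
CompatibleMeta = type("CompatibleMeta", (Meta, ABCMeta), {})

class AbstractShop(metaclass=CompatibleMeta):
    weight = Descriptor()  # still renamed by Meta.__init__ via the super() chain

    @abstractmethod
    def buy(self): ...

# AbstractShop() now raises TypeError (abstract method not implemented),
# while Meta's descriptor handling also ran when the class was created.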
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, on Python 3.6 or newer: since that version, the dictionaries used as the "locals()" while executing class bodies are ordered by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically with super() - you have to check the code and decide which of the two __prepare__s should be used, or create a new mapping type with features that serve both metaclasses.

Can I derive from classmethod in Python?

I have a special state machine implemented in Python, which uses class methods to represent states.
class EntityBlock(Block):
    def __init__(self, name):
        self._name = name

    @classmethod
    def stateKeyword1(cls, parserState: ParserState):
        pass

    @classmethod
    def stateWhitespace1(cls, parserState: ParserState):
        token = parserState.Token
        if isinstance(token, StringToken):
            if (token <= "generate"):
                parserState.NewToken = GenerateKeyword(token)
                parserState.NewBlock = cls(....)
            else:
                raise TokenParserException("....", token)
        raise TokenParserException("....", token)

    @classmethod
    def stateDelimiter(cls, parserState: ParserState):
        pass
Visit GitHub for the full source code of pyVHDLParser.
When I debug my parser FSM, I get the state names printed as:
State: <bound method Package.stateParse of <class 'pyVHDLParser.DocumentModel.Sequential.Package.Package'>>
I would like to get better reports, so I would like to overwrite the default behavior of __repr__ of each bound method object.
Yes, I could write a metaclass or apply a second decorator, but I asked myself:
Is it possible to derive from classmethod and have only one decorator, called e.g. state?
According to PyCharm's builtins.py (a collection of dummy code for Python's builtins), classmethod is a class-based decorator.
Yes, you can write your own class that derives from classmethod if you want. It's a bit complicated though. You'll need to implement the descriptor protocol (overriding classmethod's implementation of __get__) so that it returns an instance of another custom class that behaves like a bound method object. Unfortunately, you can't inherit from Python's builtin bound method type (I'm not sure why not).
Probably the best approach then is to wrap one of the normal method objects in an instance of a custom class. I'm not sure how much of the method API you need to replicate though, so that might get a bit complicated. (Do you need your states to be comparable to one another? Do they need to be hashable? Picklable?)
Anyway, here's a bare bones implementation that does the minimum amount necessary to get a working method (plus the new repr):
class MethodWrapper:
    def __init__(self, name, method):
        self.name = name if name is not None else repr(method)
        self.method = method

    def __call__(self, *args, **kwargs):
        return self.method(*args, **kwargs)

    def __repr__(self):
        return self.name

class State(classmethod):
    def __init__(self, func):
        self.name = None
        super().__init__(func)

    def __set_name__(self, owner, name):
        self.name = "{}.{}".format(owner.__name__, name)

    def __get__(self, owner, instance):
        method = super().__get__(owner, instance)
        return MethodWrapper(self.name, method)
And a quick demo of it in action:
>>> class Foo:
...     @State
...     def foo(cls):
...         print(cls)
...
>>> Foo.foo
Foo.foo
>>> Foo.foo()
<class '__main__.Foo'>
>>> f = Foo()
>>> f.foo()
<class '__main__.Foo'>
Note that the __set_name__ method used by the State descriptor is only called by Python 3.6 and newer. Without that feature, it would be much more difficult for the descriptor to learn its own name (you might need to make a decorator factory that takes the name as an argument).
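On Pythons without __set_name__, that decorator factory could look like this (a sketch; the qualified name is simply passed in by hand):

def state(name):
    # Hypothetical fallback for pre-3.6: the name is given explicitly
    # instead of being picked up via __set_name__.
    def decorator(func):
        descriptor = State(func)
        descriptor.name = name
        return descriptor
    return decorator

class Bar:
    @state("Bar.bar")
    def bar(cls):
        print(cls)

print(repr(Bar.bar))  # Bar.bar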

Python 3 object construction: which is the most Pythonic / the accepted way?

Having a background in Java, which is very verbose and strict, I find the ability to mutate Python objects so as to give them fields other than those presented to the constructor really "ugly".
Trying to accustom myself to a Pythonic way of thinking, I'm wondering how I should allow my objects to be constructed.
My instinct is to have to pass the fields at construction time, such as:
def __init__(self, foo, bar, baz=None):
    self.foo = foo
    self.bar = bar
    self.baz = baz
But that can become overly verbose and confusing with many fields to pass. To overcome this I assume the best method is to pass one dictionary to the constructor, from which the fields are extracted:
def __init__(self, field_map):
    self.foo = field_map["foo"]
    self.bar = field_map["bar"]
    self.baz = field_map["baz"] if "baz" in field_map else None
The other mechanism I can think of is to have the fields added elsewhere, such as:
class Blah(object):
    def __init__(self):
        pass

...

blah = Blah()
blah.foo = var1
But that feels way too loose to me.
(I suppose the issue in my head is how I deal with interfaces in Python...)
So, to reiterate the question: How I should construct my objects in Python? Is there an accepted convention?
The first approach you describe is very common. Some use the shorter
class Foo:
    def __init__(self, foo, bar):
        self.foo, self.bar = foo, bar
Your second approach isn't common, but a similar version is this:
class Thing:
    def __init__(self, **kwargs):
        self.something = kwargs['something']
        # ...
which allows to create objects like
t = Thing(something=1)
This can be further modified to
class Thing:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
allowing
t = Thing(a=1, b=2, c=3)
print(t.a, t.b, t.c)  # prints 1 2 3
As Debilski points out in the comments, the last method is a bit unsafe; you can restrict it to a list of accepted parameters like this:
class Thing:
    keywords = 'foo', 'bar', 'snafu', 'fnord'

    def __init__(self, **kwargs):
        for kw in self.keywords:
            setattr(self, kw, kwargs[kw])
There are many variations; there is no common standard that I am aware of.
I’ve not seen many of your field_maps in real life. I think that would only make sense if you were to use the field_map at some other place in your code as well.
Concerning your third example: Even though you don’t need to assign to them (other than None), it is common practice to explicitly declare attributes in the __init__ method, so you’ll easily see what properties your object has.
So the following is better than simply having an empty __init__ method (you’ll also get a higher pylint score for that):
class Blah(object):
    def __init__(self):
        self.foo = None
        self.bar = None

blah = Blah()
blah.foo = var1
The problem with this approach is that your object might be in a not-well-defined state after initialisation, because you have not yet defined all of your object's properties. This depends on your object's logic (logic in code and in meaning) and how your object works. If that is the case, however, I'd advise you not to do it this way. If your object relies on foo and bar being meaningfully defined, you should really set them inside your __init__ method.
If, however, the properties foo and bar are not mandatory, you’re free to define them afterwards.
If readability of the argument lists is an issue for you: use keyword arguments.
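For example, a small sketch (the Order class is hypothetical): making the parameters keyword-only keeps long call sites self-documenting:

class Order:
    def __init__(self, *, foo, bar, baz=None):  # "*" forces keyword use
        self.foo = foo
        self.bar = bar
        self.baz = baz

order = Order(foo=1, bar=2)  # each argument is labelled at the call site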
