Python overriding default attribute assignment - python-3.x

For a specific framework I work with, I need to define object attributes as special classes. For example, instead of writing this:

    class A:
        def __init__(self):
            self.some_int = 2

I would need to write:

    class A:
        def __init__(self):
            self.some_int = SpecialIntWrapper(name="some_int", value=2)

I would like to somehow override operators/methods so that writing the first form (self.some_int = 2) calls SpecialIntWrapper behind the scenes, with the attribute name and value.
Is this possible?

Basically there are two ways. The first is via a @property decorator (preferable unless you want to affect arbitrary names):

    class MyClass:
        def __init__(self):
            self.some_int = 2

        # if you know the name of the attribute, define it as a property - a getter
        @property
        def some_int(self):
            return self._some_int

        # and a setter
        @some_int.setter
        def some_int(self, value):
            self._some_int = SpecialIntWrapper("some_int", value)
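A quick check (assuming a minimal SpecialIntWrapper stub that just stores its arguments; the real framework class presumably does more) shows that the assignment inside __init__ is routed through the setter:

    class SpecialIntWrapper:
        # hypothetical stub, only for demonstration
        def __init__(self, name, value):
            self.name = name
            self.value = value

    obj = MyClass()
    print(type(obj.some_int))  # <class '__main__.SpecialIntWrapper'>
    print(obj.some_int.value)  # 2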
The second is overloading the __setattr__ magic method:

    class MyClass:
        def __init__(self):
            self.some_int = 2

        def __setattr__(self, name, value):
            # in general, if you don't know the attribute names beforehand,
            # you can filter them here however you like
            if name == "some_int":
                super().__setattr__(name, SpecialIntWrapper(name=name, value=value))
            else:
                # to use setattr in the default way, just call it via super() (Python 3)
                super().__setattr__(name, value)

Either way, some_int will be initialized to a SpecialIntWrapper instance:

    >>> print(MyClass().some_int)
    <__main__.SpecialIntWrapper object at 0x03721810>
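If the goal is to wrap every integer assignment rather than one hard-coded name, the filter inside __setattr__ can be based on the value's type instead. A minimal sketch, assuming the same SpecialIntWrapper interface as above:

    class MyClass:
        def __init__(self):
            self.some_int = 2      # wrapped
            self.label = "hello"   # stored as-is

        def __setattr__(self, name, value):
            if isinstance(value, int):
                super().__setattr__(name, SpecialIntWrapper(name=name, value=value))
            else:
                super().__setattr__(name, value)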

Something like this:

    class SpecialIntWrapper:
        def __init__(self, name, value):
            pass

    class MyClass:
        def __init__(self):
            self.some_int = 3

        def __setattr__(self, key, value):
            if key == 'some_int':
                self.__dict__[key] = SpecialIntWrapper(key, value)
            else:
                self.__dict__[key] = value

    print(MyClass().some_int)
    # >>> <__main__.SpecialIntWrapper object at 0x1076f1748>
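Writing through self.__dict__ bypasses __setattr__ and so avoids infinite recursion, just like the super().__setattr__ call in the previous answer. If you want the printed output to be more readable than the default object repr, a hypothetical __repr__ on the wrapper could look like this (purely illustrative; the real wrapper's behavior is up to the framework):

    class SpecialIntWrapper:
        def __init__(self, name, value):
            self.name = name
            self.value = value

        def __repr__(self):
            return f"SpecialIntWrapper(name={self.name!r}, value={self.value!r})"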

Related

OOP - Python - printing instance variable too when I call static method alone

Here in this code I am just calling my static method, but it prints my instance variable too. Could you please explain the reason for that, and how to avoid it being printed?
Like below:

    I am a static Method
    None

    class Player:
        def __init__(self, name=None):
            self.name = name  # creating instance variables

        @staticmethod
        def demo():
            print("I am a static Method")

    p1 = Player()
    print(p1.demo())
As the Python docs say about print():

    Print objects to the text stream file, separated by sep and followed
    by end. sep, end, file, and flush, if present, must be given as
    keyword arguments.

Your demo() prints the message itself and returns None, and print(p1.demo()) then prints that return value, which is where the extra None comes from. So you can return your message from the method instead and just print it:

    class Player:
        def __init__(self, name=None):
            self.name = name  # creating instance variables

        @staticmethod
        def demo():
            return "I am a static Method"

    p1 = Player()
    print(p1.demo())
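Alternatively, keep the original demo() that prints its own message and simply don't wrap the call in print(); since it is a static method, it can also be called on the class itself (a small sketch; the __init__ from the question is omitted for brevity):

    class Player:
        @staticmethod
        def demo():
            print("I am a static Method")

    Player.demo()    # prints the message, no stray None
    Player().demo()  # works via an instance too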

Creating a child class from a parent method in python

I am trying to make a class that has a bunch of children that all have their own respective methods but share common methods through the parent. The problem is I need to create an instance of the child class in the parent method, but I am not sure how to go about it.
My code so far looks like this:

    def filterAttribute(self, attribute, value):
        newlist = []
        for thing in self._things:
            if thing._attributes[attribute] == value:
                newlist.append(thing)
        return self.__init__(newlist)

The class constructor takes in a list as its sole argument. Does anyone know if there is a standard way of doing this? My code is returning a NoneType object.
Here are a few examples of classes I have made.
This is the parent class:

    class _DataGroup(object):
        def __init__(self, things=None):
            self._things = things

        def __iter__(self):
            for x in self._things:
                yield x

        def __getitem__(self, key):
            return self._things[key]

        def __len__(self):
            return len(self._things)

        def extend(self, datagroup):
            if(isinstance(datagroup, self.__class__)):
                self._things.extend(datagroup._things)
                self._things = list(set(self._things))

        def filterAttribute(self, attribute, value):
            newlist = []
            for thing in self._things:
                if thing._attributes[attribute] == value:
                    newlist.append(thing)
            #return self.__init__(newlist)
            return self.__init__(newlist)
This is one of the child classes:

    class _AuthorGroup(_DataGroup):
        def __init__(self, things=None):
            self._things = things

        def getIDs(self):
            return [x.id for x in self._things]

        def getNames(self):
            return [x.name for x in self._things]

        def getWDs(self):
            return [x.wd for x in self._things]

        def getUrns(self):
            return [x.urn for x in self._things]

        def filterNames(self, names, incl_none=False):
            newlist = []
            for thing in self._things:
                if((thing is not None or (thing is None and incl_none)) and thing.name in names):
                    newlist.append(thing)
            return _AuthorGroup(newlist)

The functionality I am looking for is that I can use the parent class's methods with the child classes and have them create instances of the child classes instead of the overall DataGroup parent class.
So, if I correctly understand what you are trying to accomplish:
You want a base class 'DataGroup' which has a set of defined attributes and methods;
You want one or more child classes with the ability to inherit both methods and attributes from the base class, as well as the ability to override base class methods if necessary; and
You want to invoke the child class without also having to manually invoke the base class.
If this in fact is your problem, this is how I would proceed:
Note: I have modified several functions, since I think you have several other issues with your code. For example, in the base class self._things is set up as a list, but in the functions __getitem__ and filterAttribute you are assuming self._things is a dictionary structure. I have modified the functions so they all assume a dict structure for self._things.
    class _DataGroup:
        def __init__(self, things=None):
            if things is None:
                self._things = dict()  # sets up a default empty dict
            else:
                self._things = things

        def __iter__(self):
            for x in self._things.keys():
                yield x

        def __len__(self):
            return len(self._things)

        def extend(self, datagroup):
            for k, v in datagroup._things.items():
                nv = self._things.pop(k, [])
                nv.append(v)
                self._things[k] = nv

    # This class utilizes the methods and attributes of DataGroup
    # and adds new methods, unique to the child class
    class AttributeGroup(_DataGroup):
        def __init__(self, things=None):
            super().__init__(things)

        def getIDs(self):
            return [x for x in self._things]

        def getNames(self):
            return [x.name for x in self._things]

        def getWDs(self):
            return [x.wd for x in self._things]

        def getUrns(self):
            return [x.urn for x in self._things]

    # This class overrides a DataGroup method and adds a new attribute
    class NewChild(_DataGroup):
        def __init__(self, newAttrib, things=None):
            self._newattrib = newAttrib
            super().__init__(things)

        def __len__(self):
            return max(len(self._newattrib), len(self._things))
These examples are simplified, since I am not absolutely sure of what you really want.
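As for the NoneType result in the original filterAttribute: __init__ always returns None, so return self.__init__(newlist) can never produce a new object. One common pattern (a sketch, assuming every subclass keeps a constructor that accepts the filtered list) is to build the new instance with type(self), so the parent method automatically yields an instance of whichever child class it was called on:

    class _DataGroup:
        def __init__(self, things=None):
            self._things = things if things is not None else []

        def filterAttribute(self, attribute, value):
            newlist = [t for t in self._things if t._attributes[attribute] == value]
            # type(self) is the actual (possibly child) class of this instance
            return type(self)(newlist)

    class _AuthorGroup(_DataGroup):
        def getNames(self):
            return [x.name for x in self._things]

    # _AuthorGroup(...).filterAttribute(...) now returns an _AuthorGroup, not a _DataGroup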

Multiple inheritance, super() and their correct use with arguments in Python

I'm trying to understand multiple inheritance in Python. I think I "kinda" got it, but I'm missing a few pieces. I know that if I have two classes I can do something like this:

    class A():
        def __init__(self, name):
            self.name = name

    class B(A):
        def __init__(self, name):
            A.__init__(self, name)
            self.mm = False
            self.name = name

    b = B("Peter")

My problem is when I have more classes and each class has its own init arguments. At first glance, it makes no sense to have something like this:

    class A():
        def __init__(self, name, arg_a1, arg_a2):
            self.name = name

    class B(A):
        def __init__(self, name, arg_b1, arg_b2, arg_a1, arg_a2...):
            A.__init__(self, name, arg_a1, arg_a2...)
            self.mm = False
            self.name = name

    class C(B):
        def __init__(self, name, arg_c1, arg_c2, arg_b1, arg_b2, arg_a1, arg_a2.........):
            B.__init__(self, name, arg_b1, arg_b2, arg_a1, arg_a2...)
            self.name = name

So I started looking for how to do it in an efficient way and not just hardcode it. That's when I came across multiple inheritance, and that's when my doubts started to arise.
If I have 3 classes:
    class A():
        def __init__(self, name):
            self.name = name

    class B(A):
        def __init__(self, name, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.mm = False
            self.name = name

    class C(B):
        def __init__(self, a, j, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.a = a
            self.j = j

    c = C("p", 1, 5, name="p")
Why does this give an error, while adding name as an init argument does not?
In this other example, if I add another argument to A's __init__ function, I get TypeError: __init__() got multiple values for argument 'name'.
    class A():
        def __init__(self, name, lastname):
            self.name = name
            self.lastname = lastname

    class B(A):
        def __init__(self, name, *args, **kwargs):
            super().__init__(name, *args, **kwargs)
            self.mm = False
            self.name = name

    class C(B):
        def __init__(self, a, j, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.a = a
            self.j = j

    c = C("p", 1, 5, name="p")
So, after all this, several questions come to mind:
Why is this TypeError generated?
How can I make inheritance "smart"?
Do I always need to use *args and **kwargs with multiple inheritance?
And all this brings me to the libraries I use daily. Probably some of them use these concepts (I don't know, I'm assuming). What happens when the user passes a kwarg that is not present in any class? How does Python "know" that name goes to class A and not class B, or vice versa?
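A common pattern for this kind of cooperative __init__ chain is to make the per-class parameters keyword-only: each class consumes the keywords it recognizes and forwards the rest up the MRO with super(). A minimal sketch (reusing the names from the example above; not an authoritative answer to the question):

    class A:
        def __init__(self, *, name, **kwargs):
            super().__init__(**kwargs)  # keep the chain cooperative
            self.name = name

    class B(A):
        def __init__(self, *, mm=False, **kwargs):
            super().__init__(**kwargs)
            self.mm = mm

    class C(B):
        def __init__(self, *, a, j, **kwargs):
            super().__init__(**kwargs)
            self.a = a
            self.j = j

    c = C(a="p", j=1, name="Peter")  # each keyword reaches exactly one class
    # An unknown keyword, e.g. C(a="p", j=1, name="Peter", typo=3), falls through
    # to object.__init__ and raises a TypeError there.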

Decorators unexpectedly change constructor behavior in Python

Below, I show a simplified example of more complicated code; nonetheless, it fully represents the issue I have encountered.
Part 1: this works fine, no issues:
    class Animal():
        def __init__(self, animal_name="no name given"):
            self.set_name(animal_name)

        def get_name(self):
            return self._animal_name

        def set_name(self, animal_name):
            self._animal_name = animal_name

    class Dog(Animal):
        def __init__(self, dog_breed="no breed", dog_name="no name given"):
            self._dog_breed = dog_breed
            super().__init__(dog_name)

        def get_breed(self):
            print(self._dog_breed)

    x = Dog('Greyhound', 'Rich')
Part 2: after introducing getter & setter decorators, the code stops working:
    class Animal():
        def __init__(self, animal_name="no name given"):
            # THE LINE BELOW SEEMS TO CAUSE AN ISSUE
            self.name(animal_name)

        @property
        def name(self):
            return self._animal_name

        @name.setter
        def name(self, animal_name):
            self._animal_name = animal_name

    class Dog(Animal):
        def __init__(self, dog_breed="no breed", dog_name="no name given"):
            self._dog_breed = dog_breed
            super().__init__(dog_name)

        def get_breed(self):
            print(self._dog_breed)

    x = Dog('Greyhound', 'Rich')
Output: AttributeError: 'Dog' object has no attribute '_animal_name'
When I keep the decorators in Part 2 but change the constructor in the Animal class to:
    class Animal():
        def __init__(self, animal_name="no name given"):
            self._animal_name = animal_name
It works.
I am just curious why it doesn't work in the Part 2 example above.
Short answer:
The line

    self.name(animal_name)

can be split into two parts:

    tmp = self.name
    tmp(animal_name)

First, self.name calls the getter, and the result is treated as a function. The getter executes return self._animal_name, and since the setter has never been called, the corresponding AttributeError occurs.
Long answer:
Let's take the following class:

    class Animal:
        def __init__(self, animal_name):
            self.name(animal_name)

        @property
        def name(self):
            return self._animal_name

        @name.setter
        def name(self, animal_name):
            self._animal_name = animal_name
To understand what the line

    self.name(animal_name)

actually does, you first need to understand decorators.
The code

    @dec
    def func(a, b, ...):
        [...]

is equivalent to

    def func_impl(a, b, ...):
        [...]
    func = dec(func_impl)

(except that you cannot call func_impl directly). See, for example, PEP 318 for more information.
This means that you can write the Animal class from above without using decorators:

    class Animal:
        def __init__(self, animal_name):
            self.name(animal_name)

        def get_name(self):
            return self._animal_name
        name = property(get_name)

        def set_name(self, animal_name):
            self._animal_name = animal_name
        name = name.setter(set_name)
In order to understand this code, you need to understand the built-in property, which is a class. See the Python docs for detailed information.
The line name = property(get_name) creates an object of type property. When retrieving the value of the property, get_name is called.
The line name = name.setter(set_name) first calls name.setter(set_name), which creates a copy of the property, and then overwrites name with this copy. When assigning a value to the copy, set_name is called.
All in all, name is an object of type property that uses get_name as getter and set_name as setter.
How does this help?
You need to understand this: name is not a function. It is a property. It is not callable.
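A quick way to see this (using the Animal class defined just above) is to look at the attribute on the class itself rather than on an instance:

    >>> type(Animal.name)
    <class 'property'>
    >>> Animal.name("x")
    Traceback (most recent call last):
      ...
    TypeError: 'property' object is not callable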
The problematic line

    self.name(animal_name)

is therefore roughly equivalent to

    self.get_name()(animal_name)

which explains the error message: the constructor calls the getter, which tries to execute return self._animal_name. But since the setter has not been called yet, self._animal_name has not been set.
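The fix that follows from this explanation is to assign to the property instead of calling it, so that the setter runs. A minimal sketch of the corrected constructor:

    class Animal:
        def __init__(self, animal_name="no name given"):
            self.name = animal_name  # assignment goes through the @name.setter

        @property
        def name(self):
            return self._animal_name

        @name.setter
        def name(self, animal_name):
            self._animal_name = animal_name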

dynamic class inheritance using super

I'm trying to dynamically create a class using type() and assign an __init__ constructor which calls super().__init__(...); however, when super() gets called I receive the following error:
TypeError: super(type, obj): obj must be an instance or subtype of type
Here is my code:
    class Item():
        def __init__(self, name, description, cost, **kwargs):
            self.name = name
            self.description = description
            self.cost = cost
            self.kwargs = kwargs

    class ItemBase(Item):
        def __init__(self, name, description, cost):
            super().__init__(name, description, cost)

    def __constructor__(self, n, d, c):
        super().__init__(name=n, description=d, cost=c)

    item = type('Item1', (ItemBase,), {'__init__': __constructor__})
    item_instance = item('MyName', 'MyDescription', 'MyCost')
Why does super() inside the __constructor__ method not understand the object parameter, and how do I fix it?
Solution 1: Using cls = type('ClassName', ...)
Note that sadmicrowave's solution creates an infinite loop if the dynamically created class gets inherited, as self.__class__ will correspond to the child class.
An alternative that does not have this issue is to assign __init__ after creating the class, so that the class can be referenced explicitly through a closure. Example:

    # Base class
    class A():
        def __init__(self):
            print('A')

    # Dynamically created class
    B = type('B', (A,), {})

    def __init__(self):
        print('B')
        super(B, self).__init__()

    B.__init__ = __init__

    # Child class
    class C(B):
        def __init__(self):
            print('C')
            super().__init__()

    C()  # prints C, B, A
Solution 2: Using MyClass.__name__ = 'ClassName'
An alternative way to dynamically create a class is to define a class inside a function and then reassign the __name__ and __qualname__ attributes:

    class A:
        def __init__(self):
            print(A.__name__)

    def make_class(name, base):
        class Child(base):
            def __init__(self):
                print(Child.__name__)
                super().__init__()

        Child.__name__ = name
        Child.__qualname__ = name
        return Child

    B = make_class('B', A)

    class C(B):
        def __init__(self):
            print(C.__name__)
            super().__init__()

    C()  # prints C, B, A
Here is how I solved the issue. I use type() to dynamically create the class, passing in variable references as follows:

    def __constructor__(self, n, d, c, h):
        # initialize super of class type
        super(self.__class__, self).__init__(name=n, description=d, cost=c, hp=h)

    # create the object class dynamically, utilizing __constructor__ for the __init__ method
    item = type(item_name, (eval("{}.{}".format(name, row[1].value)),), {'__init__': __constructor__})

    # add the new object to the global _objects object to be used throughout the world
    self._objects[item_name] = item(row[0].value, row[2].value, row[3].value, row[4].value)
There may be a better way to accomplish this, but I needed a fix and this is what I came up with... use it if you can.
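As noted in Solution 1 above, the super(self.__class__, self) call in this workaround breaks as soon as the dynamically created class is itself subclassed: self.__class__ is then the subclass, so the call keeps re-entering the same __init__. A small sketch of the failure mode (simplified names, not the original Item hierarchy):

    class Base:
        def __init__(self):
            pass

    def __constructor__(self):
        # with a subclass instance, self.__class__ is the subclass itself,
        # so this re-enters __constructor__ until a RecursionError is raised
        super(self.__class__, self).__init__()

    Dyn = type('Dyn', (Base,), {'__init__': __constructor__})

    class Child(Dyn):
        pass

    Dyn()      # fine
    # Child()  # would raise RecursionError: maximum recursion depth exceeded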
