dynamic inheritance with type and super - python-3.x

I'm looking for a way to dynamically inherit a parent class with its attributes and methods, by using type for class creation and super for inheritance, like so:
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def some_method(self, q):
        return (self.a + self.b)**q

def B_init(self, **kwargs):
    super().__init__(**kwargs)

def another_method(self):
    return 1

def class_B_factory(parent_class):
    return type(
        'B',
        (parent_class, some_other_parent_class),
        {'__init__': B_init,
         'another_method': another_method}
    )
And then be able to call...
model = class_B_factory(A)(a = 1, b = 5)
print(model.some_method(2))  # outputs (1 + 5)**2 = 36
I'm not sure how to proceed. I don't think I'll need a custom metaclass since I'm pretty sure you can't call the parent class' __init__ method while also creating self in the process. I also tried overriding the default __init__ method outside the scope of class_B_factory like so:
def class_B_factory(parent_class):
    return type(
        'B',
        (parent_class, some_other_parent_class),
        {'another_method': another_method}
    )

B = class_B_factory(A)

def B_init(self, **kwargs):
    super(B, self).__init__(**kwargs)

B.__init__ = B_init
model = B(a = 1, b = 5)
because I figured type doesn't need __init__ right away, as it is only needed during instantiation. But then I get a TypeError: __init__() got an unexpected keyword argument error, so it seems it didn't work, and it's not clean anyway.
EDIT: I tried defining the methods outside the factory via the following, but I am still unsuccessful. Not sure how to fix it. Maybe Python has trouble instantiating?
from functools import partial
from inspect import signature

class A:
    ...

def B_init(self, produced_class=None, **kwargs):
    super(produced_class, self).__init__(**kwargs)

def another_method(self, q, parent_class=None):
    if parent_class is not None:
        return 3 * parent_class.some_method(self, q)  # I expect any parent_class passed to have a method called some_method
    return 1

def class_B_factory(parent_class, additional_methods):
    methods = {}
    for name, method in additional_methods.items():
        if "parent_class" in signature(method).parameters:
            method = partial(method, parent_class=parent_class)  # freeze the parent_class argument, which is a cool feature
        methods[name] = method
    newcls = type(
        'B',
        (parent_class,),
        methods  # would not contain B_init
    )
    newcls.__init__ = partial(B_init, produced_class=newcls)  # freeze the produced class that I am trying to fabricate into B_init here
    return newcls

model = class_B_factory(parent_class=A, additional_methods={"another_method": another_method})
print(signature(model.__init__).parameters)  # displays OrderedDict([('self', <Parameter "self">), ...]) so it contains self!
some_instance_of_model = model(a=1, b=5)  # throws TypeError: B_init() missing 1 required positional argument: 'self'

The parameterless form of super() relies on it being physically placed inside a class body - the Python machinery will then, under the hood, create a __class__ cell variable referring to that "physical" class (roughly equivalent to a non-local variable), and pass it as the first argument to the super() call.
For methods not written inside class statements, one has to pass the arguments to super explicitly: the child class and the instance (self).
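For illustration, a minimal sketch (Parent, Child and child_init are made-up names, not from the question) of the explicit two-argument form:

# Hedged sketch: explicit super(child_class, instance) from a function
# defined outside any class body, where no __class__ cell exists.
class Parent:
    def __init__(self, a, b):
        self.a = a
        self.b = b

def child_init(self, **kwargs):
    # the zero-argument super() would fail here, so name the child class explicitly
    super(Child, self).__init__(**kwargs)

Child = type('Child', (Parent,), {'__init__': child_init})
print(Child(a=1, b=5).a)  # 1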
The easiest way to do that in your code is to define the methods inside your factory function, so they can share a non-local variable containing the newly created class for the super call:
def class_B_factory(parent_class):
    def B_init(self, **kwargs):
        nonlocal newcls  # <- a bit redundant, but shows how it is used here
        super(newcls, self).__init__(**kwargs)

    def another_method(self):
        return 1

    newcls = type(
        'B',
        (parent_class, some_other_parent_class),
        {'__init__': B_init,
         'another_method': another_method}
    )
    return newcls
If you have to define the methods outside of the factory function (which is likely), you have to pass the parent class into them in some form. The most straightforward would be to add a named-parameter (say __class__ or "parent_class"), and use functools.partial inside the factory to pass the parent_class to all methods in a lazy way:
from functools import partial
from inspect import signature
class A:
    ...

# the "parent_class" argument name is given special treatment in the factory function:
def B_init(self, *, parent_class=None, **kwargs):
    super(parent_class, self).__init__(**kwargs)

def another_method(self):
    return 1

def class_B_factory(parent_class, additional_methods):
    methods = {}
    for name, method in additional_methods.items():
        if "parent_class" in signature(method).parameters:
            method = partial(method, parent_class=parent_class)
        # we populate another dict instead of replacing the passed-in one,
        # so that we create a copy and don't modify the dict at the calling place.
        methods[name] = method
    newcls = type(
        'B',
        (parent_class, some_other_parent_class),
        methods
    )
    return newcls
new_cls = class_B_factory(A, {"__init__": B_init, "another_method": another_method})
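For reference, the questioner's target call works with the in-factory version of class_B_factory above (the one taking a single parent_class argument), assuming its bases are reduced to just (parent_class,); a quick, hedged check:

# Quick check against the question's expected output, assuming class A from
# the question and the in-factory version with (parent_class,) as only base.
model = class_B_factory(A)(a=1, b=5)
print(model.some_method(2))    # (1 + 5)**2 == 36
print(model.another_method())  # 1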

how to dynamically access class Instance Attribute python [duplicate]

How do I call a function, using a string with the function's name? For example:
import foo
func_name = "bar"
call(foo, func_name) # calls foo.bar()
Given a module foo with method bar:
import foo
bar = getattr(foo, 'bar')
result = bar()
getattr can similarly be used on class instance bound methods, module-level methods, class methods... the list goes on.
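For instance, a small hedged sketch (Greeter is a made-up class) of the same lookup on an instance's bound method:

# getattr works the same way on bound methods: the instance is already baked in.
class Greeter:
    def hello(self, name):
        return 'Hello, {}!'.format(name)

g = Greeter()
method = getattr(g, 'hello')
print(method('world'))  # Hello, world!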
Using locals(), which returns a dictionary with the current local symbol table:
locals()["myfunction"]()
Using globals(), which returns a dictionary with the global symbol table:
globals()["myfunction"]()
Based on Patrick's solution, to get the module dynamically as well, import it using:
module = __import__('foo')
func = getattr(module, 'bar')
func()
Just a simple contribution. If the class that we need to instantiate is in the same file, we can use something like this:
# Get class from globals and create an instance
m = globals()['our_class']()
# Get the function (from the instance) that we need to call
func = getattr(m, 'function_name')
# Call it
func()
For example:
class A:
    def __init__(self):
        pass

    def sampleFunc(self, arg):
        print('you called sampleFunc({})'.format(arg))

m = globals()['A']()
func = getattr(m, 'sampleFunc')
func('sample arg')

# Sample, all on one line
getattr(globals()['A'](), 'sampleFunc')('sample arg')
And, if not a class:
def sampleFunc(arg):
    print('you called sampleFunc({})'.format(arg))
globals()['sampleFunc']('sample arg')
Given a string, with a complete python path to a function, this is how I went about getting the result of said function:
import importlib
function_string = 'mypackage.mymodule.myfunc'
mod_name, func_name = function_string.rsplit('.',1)
mod = importlib.import_module(mod_name)
func = getattr(mod, func_name)
result = func()
The best answer according to the Python programming FAQ would be:
functions = {'myfoo': foo.bar}
mystring = 'myfoo'
if mystring in functions:
    functions[mystring]()
The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct.
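A small sketch of that idea (the command names and functions are made up), with lookup strings that deliberately differ from the function names:

# Hedged sketch: a dispatch table emulating a case/switch construct.
def start_engine():
    return 'engine started'

def stop_engine():
    return 'engine stopped'

commands = {'on': start_engine, 'off': stop_engine}  # keys != function names

choice = 'on'
if choice in commands:
    print(commands[choice]())  # engine started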
The answer (I hope) no one ever wanted
Eval like behavior
getattr(locals().get("foo") or globals().get("foo"), "bar")()
Why not add auto-importing
getattr(
locals().get("foo") or
globals().get("foo") or
__import__("foo"),
"bar")()
In case we have extra dictionaries we want to check
getattr(next((x for x in (f("foo") for f in
[locals().get, globals().get,
self.__dict__.get, __import__])
if x)),
"bar")()
We need to go deeper
getattr(next((x for x in (f("foo") for f in
([locals().get, globals().get, self.__dict__.get] +
[d.get for d in (list(dd.values()) for dd in
[locals(),globals(),self.__dict__]
if isinstance(dd,dict))
if isinstance(d,dict)] +
[__import__]))
if x)),
"bar")()
For what it's worth, if you needed to pass the function (or class) name and app name as a string, then you could do this:
import sys

myFnName = "MyFn"
myAppName = "MyApp"
app = sys.modules[myAppName]
fn = getattr(app, myFnName)
Try this. While this still uses eval, it only uses it to summon the function from the current context. Then, you have the real function to use as you wish.
The main benefit for me from this is that you will get any eval-related errors at the point of summoning the function. Then you will get only the function-related errors when you call.
def say_hello(name):
    print('Hello {}!'.format(name))
# get the function by name
method_name = 'say_hello'
method = eval(method_name)
# call it like a regular function later
args = ['friend']
kwargs = {}
method(*args, **kwargs)
As the question How to dynamically call methods within a class using method-name assignment to a variable [duplicate] was marked as a duplicate of this one, I am posting a related answer here:
The scenario is that a method in a class wants to call another method on the same class dynamically. I have added some details to the original example, which give it a wider scenario and more clarity:
class MyClass:
    def __init__(self, i):
        self.i = i

    def get(self):
        func = getattr(MyClass, 'function{}'.format(self.i))
        func(self, 12)   # This one will work
        # self.func(12)  # But this does NOT work.

    def function1(self, p1):
        print('function1: {}'.format(p1))
        # do other stuff

    def function2(self, p1):
        print('function2: {}'.format(p1))
        # do other stuff

if __name__ == "__main__":
    class1 = MyClass(1)
    class1.get()
    class2 = MyClass(2)
    class2.get()
Output (Python 3.7.x)
function1: 12
function2: 12
None of what was suggested helped me. I did discover this, though:
<object>.__getattribute__(<string name>)(<params>)
I am using Python 2.6.6.
Hope this helps.
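As a concrete sketch of that pattern (Example is a made-up class, not from the thread):

# Hedged sketch of <object>.__getattribute__(<string name>)(<params>).
class Example:
    def shout(self, text):
        return text.upper()

obj = Example()
print(obj.__getattribute__('shout')('hi'))  # HI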
Although getattr() is the elegant (and about 7x faster) method, you can get the return value from a function (local, class method, module-level) with eval just as elegantly, e.g. x = eval('foo.bar')(). And when you implement some error handling, it can be used fairly safely (the same principle applies to getattr). Example with a module import and a class:
# import module, call module function, pass parameters and print returned value with eval():
import random
bar = 'random.randint'
randint = eval(bar)(0, 100)
print(randint)  # will print a random int from 0 to 100 (inclusive)

# also a class method returning (or not) value(s) can be used with eval:
class Say:
    def say(something='nothing'):
        return something

bar = 'Say.say'
print(eval(bar)('nice to meet you too'))  # will print 'nice to meet you too'
When the module or class does not exist (a typo or anything else), NameError is raised. When the function does not exist, AttributeError is raised. This can be used to handle errors:
# try/except block can be used to catch both errors
try:
    eval('Say.talk')()  # raises AttributeError because the function does not exist
    eval('Says.say')()  # raises NameError because the class does not exist
    # or the same with getattr:
    getattr(Say, 'talk')()  # raises AttributeError
    getattr(Says, 'say')()  # raises NameError
except AttributeError:
    # do something or just...
    print('Function does not exist')
except NameError:
    # do something or just...
    print('Module does not exist')
In Python 3, you can use the __getattribute__ method. See the following example with a list method name string:
func_name = 'reverse'
l = [1, 2, 3, 4]
print(l)
>> [1, 2, 3, 4]
l.__getattribute__(func_name)()
print(l)
>> [4, 3, 2, 1]
Nobody mentioned operator.attrgetter yet:
>>> from operator import attrgetter
>>> l = [1, 2, 3]
>>> attrgetter('reverse')(l)()
>>> l
[3, 2, 1]
>>>
getattr calls a method by name on an object.
But this object should be the parent of the calling class.
The parent class can be obtained with super(self.__class__, self)
class Base:
    def call_base(func):
        """This does not work"""
        def new_func(self, *args, **kwargs):
            name = func.__name__
            getattr(super(self.__class__, self), name)(*args, **kwargs)
        return new_func

    def f(self, *args):
        print(f"BASE method invoked.")

    def g(self, *args):
        print(f"BASE method invoked.")

class Inherit(Base):
    @Base.call_base
    def f(self, *args):
        """function body will be ignored by the decorator."""
        pass

    @Base.call_base
    def g(self, *args):
        """function body will be ignored by the decorator."""
        pass
Inherit().f() # The goal is to print "BASE method invoked."
I was facing a similar problem before, which was to convert a string to a function. But I couldn't use eval() or ast.literal_eval(), because I didn't want to execute the code immediately.
E.g. I have a string "foo.bar", and I want to assign it to x as a function name instead of a string, which means I can call the function by x() ON DEMAND.
Here's my code:
str_to_convert = "foo.bar"
exec(f"x = {str_to_convert}")
x()
As for your question, you only need to add your module name foo and . before {} as follows:
str_to_convert = "bar"
exec(f"x = foo.{str_to_convert}")
x()
WARNING!!! Both eval() and exec() are dangerous methods; you should confirm that whatever you execute is safe.
If you mean getting a reference to a function inside a module:
import foo
method = foo.bar
executed = method(parameter)
This is not the most Pythonic way, but it is possible for specific cases.
This is a simple answer; it will allow you to clear the screen, for example. There are two examples below, with eval and exec, that will print 0 at the top after clearing (if you're using Windows, change clear to cls; Linux and Mac users leave it as is) or just execute it, respectively.
import os
eval("os.system(\"clear\")")
exec("os.system(\"clear\")")

How to define methods and member variables in class defined with custom metaclass in python

I am defining a singleton class, and using that class as a metaclass to create new classes.
from threading import Lock

class Singleton(type):
    _lock: Lock = Lock()
    _instance = {}

    def __call__(cls, *args, **kwargs):
        with cls._lock:
            if cls not in cls._instance:
                _instance = super().__call__(*args, **kwargs)
                cls._instance[cls] = cls
        return cls._instance.get(cls)
and the new class is defined like below
class SomeClass(metaclass=Singleton):
    def __init__(self, some_list = []):
        self.some_list = some_list

    def add_to_list(self, a):
        self.some_list.append(a)

some_class = SomeClass()
I am not able to access the some_list attribute of the some_class object. It throws an AttributeError.
some_class.some_list

a_list = [1, 2, 4, 5]
for l in a_list:
    some_class.add_to_list(l)
Also, I am not able to call the add_to_list function. It throws an error about the missing parameter "a" in the arguments.
Can someone help me understand what I am missing about the metaclass concept?
Your error is here:
cls._instance[cls] = cls
It should be:
cls._instance[cls] = _instance
You are storing the class itself in your class registry, not its single instance.
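Put together, a minimal corrected sketch of the metaclass (assuming Lock comes from threading, as in the question):

from threading import Lock

class Singleton(type):
    _lock: Lock = Lock()
    _instance = {}

    def __call__(cls, *args, **kwargs):
        with cls._lock:
            if cls not in cls._instance:
                instance = super().__call__(*args, **kwargs)
                cls._instance[cls] = instance  # store the instance, not the class
        return cls._instance.get(cls)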
Before we proceed, I will point out another problem in your code:
def __init__(self, some_list = []):
Don't ever put a mutable object (an empty list) as a default parameter for a function or method: the default is created only once, and the same object is re-used every time the function is called. In this case, this would be mitigated by the method being in a singleton class, so this __init__ should run only once, but it is wrong enough. The correct pattern is:
def __init__(self, some_list = None):
    if some_list is None:
        some_list = []
This ensures a new, different, list is created each time the method is executed.
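The pitfall is easy to demonstrate with a plain function (a generic illustration, not code from the question):

def append_to(item, bucket=[]):  # the default [] is created once, at def time
    bucket.append(item)
    return bucket

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2]  <- the same list object is reused across calls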
And, another thing: I don't know why this metaclass recipe for creating a singleton got so popular, but it is definitely overkill - I talk about it in some other answers, including Create singleton class in python by taking advantage of meta class, Dill doesn't seem to respect metaclass and Accessing the parameters of a constructor from a metaclass.

How to pass self to function instance when it gets assigned in a decorator?

I am trying to assign dictionary keys to object functions, but for some reason it won't work inside of decorators. When I try to call a.run(), self doesn't seem to be passed into the dictionary func. I also don't have access to f.self in the decorator, so I know something must be wrong in there. I have written a simple example of my code. I want it to be something similar to app.route in Flask, in that it initializes the mapping between endpoints and functions.
ERROR:
Traceback (most recent call last):
File "main.py", line 27, in <module>
a.run()
File "main.py", line 14, in run
self.rmap[k](data)
TypeError: one_way() missing 1 required positional argument: 'data'
CODE:
class A (object):
    def __init__(self):
        self.rmap = {}

    def route(self, r):
        def decorator(f):
            self.rmap[r] = f
            return f
        return decorator

    def run(self):
        data = [1, 2, 3]
        for k in self.rmap.keys():
            self.rmap[k](data)

a = A()

class B (object):
    def __init__(self):
        pass

    @a.route('/one/way')
    def one_way(self, data):
        print('A WAY:{}'.format(self))

b = B()
a.run()
At the time it's being decorated, one_way() is a plain function, not a method - it only becomes a method when looked up on a B instance. IOW, you need to explicitly provide a B instance when calling it from A().run() (the fact that you have a global b instance in your code is irrelevant - the function object stored in a.rmap knows absolutely nothing about it, nor even about the B class, FWIW).
To make a long story short, your current design cannot work as is. If you only ever intend to decorate methods (well, functions) from one single class and call them on one single instance of this class, you could pass an instance of this class to a.run(), i.e.:
class A():
    # ...
    def run(self, obj):
        data = [1, 2, 3]
        for k in self.rmap.keys():
            self.rmap[k](obj, data)

b = B()
a.run(b)
but this would be of very limited use.
Or you could just use the decorator to "mark" functions to be used for routing (together with the effective route), add some register() method to A, and explicitly pass B or whatever other instance to this method, i.e.:
def route(r):
    def decorator(f):
        f._A_route = r
        return f
    return decorator

class A (object):
    def __init__(self):
        self.rmap = {}

    def register(self, *objects):
        for obj in objects:
            self._register(obj)

    def _register(self, obj):
        for name in dir(obj):
            if name.startswith("_"):
                continue
            attr = getattr(obj, name)
            if callable(attr) and hasattr(attr, "_A_route"):
                self.rmap[attr._A_route] = attr

    def run(self):
        data = [1, 2, 3]
        for k in self.rmap.keys():
            self.rmap[k](data)

class B (object):
    def __init__(self):
        pass

    @route('/one/way')
    def one_way(self, data):
        print('A WAY:{}'.format(self))

if __name__ == "__main__":
    a = A()
    b = B()
    a.register(b)
    a.run()
Now there might be better solutions for your concrete use case, but it's impossible to tell without knowing about the whole context etc.
When calling self.rmap[k](data) you are not passing in the self parameter. This has to be an instance of class B in order to work.
Normally you'd just pass on the parameters with which the decorated function was called, but you seem to want to use your decorated function differently. In your case what would work is:
def run(self):
    data = [1, 2, 3]
    b = B()
    for k in self.rmap.keys():
        self.rmap[k](b, data)
You could of course also instantiate the B instance somewhere else if you want to reuse it between calls.

Python DRY class initialization [duplicate]

When I define a class, I often want to set a collection of attributes for that class upon object creation. Until now, I have done so by passing the attributes as arguments to the __init__ method. However, I have been unhappy with the repetitive nature of such code:
class Repository(OrderedDict, UserOwnedObject, Describable):
    def __init__(self, user, name, gitOriginURI=None, gitCommitHash=None, temporary=False, sourceDir=None):
        self.name = name
        self.gitOriginURI = gitOriginURI
        self.gitCommitHash = gitCommitHash
        self.temporary = temporary
        self.sourceDir = sourceDir
        ...
In this example, I have to type name three times, gitOriginURI three times, gitCommitHash three times, temporary three times, and sourceDir three times. Just to set these attributes. This is extremely boring code to write.
I've considered changing classes like this to be along the lines of:
class Foo():
    def __init__(self):
        self.a = None
        self.b = None
        self.c = None
And initializing their objects like:
f = Foo()
f.a = whatever
f.b = something_else
f.c = cheese
But from a documentation standpoint, this seems worse, because the user of the class then needs to know which attributes need to be set, rather than simply looking at the autogenerated help() string for the class's initializer.
Are there any better ways to do this?
One thing that I think might be an interesting solution would be a store_args_to_self() method which would store every argument passed to __init__ as an attribute on self. Does such a method exist?
One thing that makes me pessimistic about this quest for a better way, is that looking at the source code for the date object in cPython's source, for example, I see this same repetitive code:
def __new__(cls, year, month=None, day=None):
    ...
    self._year = year
    self._month = month
    self._day = day
https://github.com/python/cpython/blob/master/Lib/datetime.py#L705
And urwid, though slightly obfuscated by the use of setters, also has such "take an argument and set it as an attribute to self" hot-potato code:
def __init__(self, caption=u"", edit_text=u"", multiline=False,
             align=LEFT, wrap=SPACE, allow_tab=False,
             edit_pos=None, layout=None, mask=None):
    ...
    self.__super.__init__("", align, wrap, layout)
    self.multiline = multiline
    self.allow_tab = allow_tab
    self._edit_pos = 0
    self.set_caption(caption)
    self.set_edit_text(edit_text)
    if edit_pos is None:
        edit_pos = len(edit_text)
    self.set_edit_pos(edit_pos)
    self.set_mask(mask)
https://github.com/urwid/urwid/blob/master/urwid/widget.py#L1158
You could use the dataclasses project to have it take care of generating the __init__ method for you; it'll also take care of a representation, hashing and equality testing (and optionally, rich comparisons and immutability):
from dataclasses import dataclass
from typing import Optional

@dataclass
class Repository(OrderedDict, UserOwnedObject, Describable):
    name: str
    gitOriginURI: Optional[str] = None
    gitCommitHash: Optional[str] = None
    temporary: bool = False
    sourceDir: Optional[str] = None
dataclasses were defined in PEP 557 - Data Classes, which has been accepted for inclusion in Python 3.7. The library will work on Python 3.6 and up (as it relies on the new variable annotation syntax introduced in 3.6).
The project was inspired by the attrs project, which offers some more flexibility and options still, as well as compatibility with Python 2.7 and Python 3.4 and up.
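A quick, hedged sketch of what the generated constructor and repr give you, using a simplified standalone class (the question's mixin bases are dropped here):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Repo:
    name: str
    gitOriginURI: Optional[str] = None
    temporary: bool = False

r = Repo(name='demo', temporary=True)
print(r)               # Repo(name='demo', gitOriginURI=None, temporary=True)
print(r.gitOriginURI)  # None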
Well, you could do this:
class Foo:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

foo = Foo(a=1, b='two', c='iii')
print(foo.a, foo.b, foo.c)
output
1 two iii
But if you do, it's probably a Good Idea to check that the keys in kwargs are sane before dumping them into your instance's __dict__. ;)
Here's a slightly fancier example that does a little bit of checking of the passed-in args.
class Foo:
    ''' Some stuff about a, b, & c '''
    attrs = ['a', 'b', 'c']

    def __init__(self, **kwargs):
        valid = {key: kwargs.get(key) for key in self.attrs}
        self.__dict__.update(valid)

    def __repr__(self):
        args = ', '.join(['{}={}'.format(key, getattr(self, key)) for key in self.attrs])
        return 'Foo({})'.format(args)

foo = Foo(a=1, c='iii', d='four')
print(foo)
output
Foo(a=1, b=None, c=iii)
For Python 2.7, my solution is to inherit from namedtuple and use the namedtuple itself as the only argument to __init__. To avoid overriding __new__ every time, we can use a decorator. The advantage is that we have an explicit __init__ signature without *args, **kwargs and, therefore, nice IDE suggestions.
from collections import namedtuple

def nt_child(c):
    def __new__(cls, p): return super(c, cls).__new__(cls, *p)
    c.__new__ = staticmethod(__new__)
    return c

ClassA_P = namedtuple('ClassA_P', 'a, b, foo, bar')

@nt_child
class ClassA(ClassA_P):
    def __init__(self, p):
        super(ClassA, self).__init__(*p)
        self.something_more = sum(p)

a = ClassA(ClassA_P(1, 2, 3, 4))  # a = ClassA(ClassA_P( <== suggestion: a, b, foo, bar
print a.something_more            # print a. <== suggestion: a, b, foo, bar, something_more
I'll just leave another recipe here. attrs is useful, but it has downsides, the main one being the lack of IDE suggestions for the class __init__.
It is also fun to have initialization chains, where we use an instance of the parent class as the first argument to __init__ instead of providing all of its attributes one by one.
So I propose a simple decorator. It analyses the __init__ signature and automatically adds class attributes based on it (so the approach is the opposite of the one attrs takes). This gives us nice IDE suggestions for __init__ (but no suggestions on the attributes themselves).
Usage:
@data_class
class A:
    def __init__(self, foo, bar): pass

@data_class
class B(A):
    # noinspection PyMissingConstructor
    def __init__(self, a, red, fox):
        self.red_plus_fox = red + fox
        # do not call parent constructor, decorator will do it for you

a = A(1, 2)
print a.__attrs__  # {'foo': 1, 'bar': 2}
b = B(a, 3, 4)
print b.__attrs__  # {'fox': 4, 'foo': 1, 'bar': 2, 'red': 3, 'red_plus_fox': 7}
Source:
from collections import OrderedDict

def make_call_dict(f, is_class_method, *args, **kwargs):
    vnames = f.__code__.co_varnames[int(is_class_method):f.__code__.co_argcount]
    defs = f.__defaults__ or []
    d = OrderedDict(zip(vnames, [None] * len(vnames)))
    d.update({vn: d for vn, d in zip(vnames[-len(defs):], defs)})
    d.update(kwargs)
    d.update({vn: v for vn, v in zip(vnames, args)})
    return d

def data_class(cls):
    inherited = hasattr(cls, '_fields')
    if not inherited: setattr(cls, '_fields', None)
    __init__old__ = cls.__init__

    def __init__(self, *args, **kwargs):
        d = make_call_dict(__init__old__, True, *args, **kwargs)
        if inherited:
            # tricky call of parent __init__
            O = cls.__bases__[0]  # put parent data class first in the inheritance list
            o = d.values()[0]     # first arg in my __init__ is the parent class object
            d = OrderedDict(d.items()[1:])
            isg = o._fields[O]    # parent __init__ signature, [0] shows whether it expects a data object as first arg
            O.__init__(self, *(([o] if isg[0] else []) + [getattr(o, f) for f in isg[1:]]))
        else:
            self._fields = {}
        self.__dict__.update(d)
        self._fields.update({cls: [inherited] + d.keys()})
        __init__old__(self, *args, **kwargs)

    cls.__attrs__ = property(lambda self: {k: v for k, v in self.__dict__.items()
                                           if not k.startswith('_')})
    cls.__init__ = __init__
    return cls

class instance from nowhere [duplicate]

If I have a class ...
class MyClass:
    def method(arg):
        print(arg)
... which I use to create an object ...
my_object = MyClass()
... on which I call method("foo") like so ...
>>> my_object.method("foo")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method() takes exactly 1 positional argument (2 given)
... why does Python tell me I gave it two arguments, when I only gave one?
In Python, this:
my_object.method("foo")
... is syntactic sugar, which the interpreter translates behind the scenes into:
MyClass.method(my_object, "foo")
... which, as you can see, does indeed have two arguments - it's just that the first one is implicit, from the point of view of the caller.
This is because most methods do some work with the object they're called on, so there needs to be some way for that object to be referred to inside the method. By convention, this first argument is called self inside the method definition:
class MyNewClass:
    def method(self, arg):
        print(self)
        print(arg)
If you call method("foo") on an instance of MyNewClass, it works as expected:
>>> my_new_object = MyNewClass()
>>> my_new_object.method("foo")
<__main__.MyNewClass object at 0x29045d0>
foo
Occasionally (but not often), you really don't care about the object that your method is bound to, and in that circumstance, you can decorate the method with the builtin staticmethod() function to say so:
class MyOtherClass:
    @staticmethod
    def method(arg):
        print(arg)
... in which case you don't need to add a self argument to the method definition, and it still works:
>>> my_other_object = MyOtherClass()
>>> my_other_object.method("foo")
foo
In simple words
In Python you should add self as the first parameter to all defined methods in classes:
class MyClass:
    def method(self, arg):
        print(arg)
Then you can use your method according to your intuition:
>>> my_object = MyClass()
>>> my_object.method("foo")
foo
For a better understanding, you can also read the answers to this question: What is the purpose of self?
Something else to consider when this type of error is encountered:
I was running into this error message and found this post helpful. It turned out that, in my case, I had overridden an __init__() where there was object inheritance.
The inherited example is rather long, so I'll skip to a more simple example that doesn't use inheritance:
class MyBadInitClass:
    def ___init__(self, name):
        self.name = name

    def name_foo(self, arg):
        print(self)
        print(arg)
        print("My name is", self.name)

class MyNewClass:
    def new_foo(self, arg):
        print(self)
        print(arg)

my_new_object = MyNewClass()
my_new_object.new_foo("NewFoo")
my_bad_init_object = MyBadInitClass(name="Test Name")
my_bad_init_object.name_foo("name foo")
Result is:
<__main__.MyNewClass object at 0x033C48D0>
NewFoo
Traceback (most recent call last):
File "C:/Users/Orange/PycharmProjects/Chapter9/bad_init_example.py", line 41, in <module>
my_bad_init_object = MyBadInitClass(name="Test Name")
TypeError: object() takes no parameters
PyCharm didn't catch this typo. Nor did Notepad++ (other editors/IDEs might).
Granted, this is a "takes no parameters" TypeError; it isn't much different from "got two" when expecting one, in terms of object initialization in Python.
Addressing the topic: an overriding initializer will be used if syntactically correct, but if not, it will be ignored and the built-in one used instead. The object won't expect/handle this and the error is thrown.
In the case of the typo above, the fix is simple: just edit the custom __init__ statement:
def __init__(self, name):
    self.name = name
As a newcomer to Python, I had this issue when I was using Python's ** feature in the wrong way. Trying to call this definition from somewhere:
def create_properties_frame(self, parent, **kwargs):
Using a call without the double star was causing the problem:
self.create_properties_frame(frame, kw_gsp)
TypeError: create_properties_frame() takes 2 positional arguments but 3 were given
The solution is to add ** to the argument:
self.create_properties_frame(frame, **kw_gsp)
As mentioned in other answers - when you use an instance method you need to pass self as the first argument - this is the source of the error.
In addition to that, it is important to understand that only instance methods take self as the first argument in order to refer to the instance.
In case the method is a class method, you don't pass self, but a cls argument instead (or class_).
Please see an example below.
class City:
    country = "USA"  # This is a class level attribute which will be shared across all instances (and not created PER instance)

    def __init__(self, name, location, population):
        self.name = name
        self.location = location
        self.population = population

    # This is an instance method which takes self as the first argument to refer to the instance
    def print_population(self, some_nice_sentence_prefix):
        print(some_nice_sentence_prefix + " In " + self.name + " lives " + self.population + " people!")

    # This is a class method which is marked with the @classmethod decorator
    # All class methods must take a class argument as first param. The convention is to name it "cls", but class_ is also ok
    @classmethod
    def change_country(cls, new_country):
        cls.country = new_country
Some tests just to make things more clear:
# Populate objects
city1 = City("New York", "East", "18,804,000")
city2 = City("Los Angeles", "West", "10,118,800")
#1) Use the instance method: No need to pass "self" - it is passed as the city1 instance
city1.print_population("Did You Know?") # Prints: Did You Know? In New York lives 18,804,000 people!
#2.A) Use the class method on the object
city2.change_country("Canada")
#2.B) Will be reflected in all objects
print("city1.country=",city1.country) # Prints Canada
print("city2.country=",city2.country) # Prints Canada
It occurs when you don't pass the number of parameters that __init__() or any other method is looking for.
For example:
class Dog:
    def __init__(self):
        print("IN INIT METHOD")

    def __unicode__(self):
        print("IN UNICODE METHOD")

    def __str__(self):
        print("IN STR METHOD")

obj = Dog("JIMMY", 1, 2, 3, "WOOF")
When you run the above programme, it gives you an error like this:
TypeError: __init__() takes 1 positional argument but 6 were given
How do we get rid of this?
Just pass the parameters that the __init__() method is looking for:
class Dog:
    def __init__(self, dogname, dob_d, dob_m, dob_y, dogSpeakText):
        self.name_of_dog = dogname
        self.date_of_birth = dob_d
        self.month_of_birth = dob_m
        self.year_of_birth = dob_y
        self.sound_it_make = dogSpeakText

    def __unicode__(self):
        print("IN UNICODE METHOD")

    def __str__(self):
        print("IN STR METHOD")

obj = Dog("JIMMY", 1, 2, 3, "WOOF")
print(id(obj))
If you want to call the method without creating an object, you can change the method to a static method:
class MyClass:
    @staticmethod
    def method(arg):
        print(arg)

MyClass.method("i am a static method")
I get this error when I'm sleep-deprived, and create a class using def instead of class:
def MyClass():
    def __init__(self, x):
        self.x = x

a = MyClass(3)
-> TypeError: MyClass() takes 0 positional arguments but 1 was given
You should actually create a class:
class accum:
    def __init__(self):
        self.acc = 0

    def accumulator(self, var2add, end):
        if not end:
            self.acc += var2add
        return self.acc
In my case, I forgot to add the ().
I was calling the method like this:
obj = className.myMethod
But it should be like this:
obj = className.myMethod()
