Monkeypatching a Python class

I would like to understand how Python classes and objects work. In Perl it is pretty simple: each sub defined in a package can be called as a static, class, or object method (CLASS::func, CLASS->func, or $obj->func). At first glance, a Python class looks like a Perl class with a bless-ed HASH (the __dict__ attribute of the Python class). But in Python I'm a little bit confused. So, to understand it better, I tried to monkey-patch an empty class, adding three attributes which behave exactly like a static, class, and object method, but I could not get it to work.
First, I created a normal class to get a baseline result:
def say(msg, x):
    print('*', msg, 'x =', x, 'type(x) =', type(x))

class A():
    @staticmethod
    def stt_m(x):
        say('stt_m', x)

    @classmethod
    def cls_m(x):
        say('cls_m', x)

    def obj_m(x):
        say('obj_m', x)
Then I created a function (called test) which tries to call each method with one parameter; if that fails (since the first parameter may be the class or the object itself), it tries again with no arguments, printing an 'X' in front of the output line, and then prints the detected types:
def test(obj):
    # Detect if obj is a class or an instantiated object
    what = 'Class' if type(obj) == type else 'Object'
    print()
    # Try to call static, class and object method getting attributes
    for a in ('stt_m', 'cls_m', 'obj_m'):
        meth = getattr(obj, a)
        try:
            meth(111)
        except:
            print('X', end='')
            meth()
        print(' ', what, a, meth)
Calling test with the default A class and its object:
test(A)
test(A())
The result is:
* stt_m x = 111 type(x) = <class 'int'>
Class stt_m <function A.stt_m at 0x7fb37e63c8c8>
X* cls_m x = <class '__main__.A'> type(x) = <class 'type'>
Class cls_m <bound method A.cls_m of <class '__main__.A'>>
* obj_m x = 111 type(x) = <class 'int'>
Class obj_m <function A.obj_m at 0x7fb37e63c9d8>
* stt_m x = 111 type(x) = <class 'int'>
Object stt_m <function A.stt_m at 0x7fb37e63c8c8>
X* cls_m x = <class '__main__.A'> type(x) = <class 'type'>
Object cls_m <bound method A.cls_m of <class '__main__.A'>>
X* obj_m x = <__main__.A object at 0x7fb37e871748> type(x) = <class '__main__.A'>
Object obj_m <bound method A.obj_m of <__main__.A object at 0x7fb37e871748>>
So, calling the staticmethod with either the class or the object prefix behaves like a normal (namespace) function (accepting 1 argument). Calling the classmethod either way, the first argument passed is the class object. Calling the object method from the class behaves like a normal function, and if called from an object, the first argument is the object itself. This latter looks a bit strange, but I can live with it.
Now, let's try to monkey-patch an empty class:
import types

class B():
    pass

B.stt_m = lambda x: say('stt_m', x)
B.cls_m = types.MethodType(lambda x: say('cls_m', x), B)
B.obj_m = types.MethodType(lambda x: say('obj_m', x), B())
test(B)
test(B())
Result is:
* stt_m x = 111 type(x) = <class 'int'>
Class stt_m <function <lambda> at 0x7fbf05ec7840>
X* cls_m x = <class '__main__.B'> type(x) = <class 'type'>
Class cls_m <bound method <lambda> of <class '__main__.B'>>
X* obj_m x = <__main__.B object at 0x7fbf0d7dd978> type(x) = <class '__main__.B'>
Class obj_m <bound method <lambda> of <__main__.B object at 0x7fbf0d7dd978>>
X* stt_m x = <__main__.B object at 0x7fbf06375e80> type(x) = <class '__main__.B'>
Object stt_m <bound method <lambda> of <__main__.B object at 0x7fbf06375e80>>
X* cls_m x = <class '__main__.B'> type(x) = <class 'type'>
Object cls_m <bound method <lambda> of <class '__main__.B'>>
X* obj_m x = <__main__.B object at 0x7fbf0d7dd978> type(x) = <class '__main__.B'>
Object obj_m <bound method <lambda> of <__main__.B object at 0x7fbf0d7dd978>>
According to this pattern, stt_m behaves like the object method of the normal class, while cls_m and obj_m behave like the class method of the normal class.
Can I monkey-patch a static method this way?

You can monkey-patch methods onto a class, but it’s done like this:
B.stt_m = staticmethod(lambda x: say('stt_m', x))
B.cls_m = classmethod(lambda x: say('cls_m', x))
B.obj_m = lambda x: say('obj_m', x)
Your version for B.cls_m is OK, but your B.stt_m creates a plain function (which gets bound like an ordinary method when looked up on an instance), and your B.obj_m binds the lambda to one freshly created B() instance, so every later call receives that same throwaway instance as its first argument, no matter which object you actually call it on.
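Putting it together, here is a minimal self-contained sketch (reusing the say() helper from the question, not part of the original answer) showing how the corrected assignments behave:

def say(msg, x):
    print('*', msg, 'x =', x, 'type(x) =', type(x))

class B():
    pass

B.stt_m = staticmethod(lambda x: say('stt_m', x))
B.cls_m = classmethod(lambda x: say('cls_m', x))
B.obj_m = lambda x: say('obj_m', x)

b = B()
B.stt_m(111); b.stt_m(111)   # both print x = 111, like A.stt_m
B.cls_m();    b.cls_m()      # both print x = <class B>, like A.cls_m
B.obj_m(111)                 # looked up on the class: plain function, x = 111
b.obj_m()                    # looked up on the instance: bound method, x = b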
There’s usually no need to use types.MethodType in Python:
types.MethodType(function, object_)
is equivalent to
function.__get__(object_)
which is a bit better, although also very rare.
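For illustration, a tiny sketch of that equivalence (the names here are made up for the example):

import types

class C:
    pass

f = lambda self: ('bound to', self)
c = C()

m1 = f.__get__(c)               # binding via the descriptor protocol
m2 = types.MethodType(f, c)     # the same binding, spelled out explicitly

print(m1() == m2() == ('bound to', c))   # True: both pass c as the first argument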
Also (irrelevant but too neat not to mention), in newer versions of Python (3.8+), your
print('*', msg, 'x =', x, 'type(x) =', type(x))
can just be written as
print(f"* {msg} {x = } {type(x) = }")

Related

How to convert a type result into a string in python?

I have a code line
emp_id=1
tp = type(emp_id)
print(tp)
print(type(tp))
strg = str(tp)
print(strg)
print(type(strg))
The result is as below
<class 'int'>
<class 'type'>
<class 'int'>
<class 'str'>
What I need is to store that type in a string. How do I do it?
The function type(x) returns the class of which the object x is an instance. All classes in Python have the attribute __name__, which holds the actual name (as a string) of that class.
x = 1
tp = type(x).__name__
print(tp)
This will print: int
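For comparison, a small sketch of the two string forms you can get from a type object:

emp_id = 1
tp = type(emp_id)

print(str(tp))        # "<class 'int'>"  -- the full repr, stored as a string
print(tp.__name__)    # "int"            -- just the class name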

When the Python __call__ method gets extra first argument?

The following sample
import types
import pprint

class A:
    def __call__(self, *args):
        pprint.pprint('[A.__call__] self=%r, args=%r'
                      % (self, list(args)))

class B:
    pass

if __name__ == '__main__':
    a = A()
    print(callable(a))
    a(1, 2)

    b = B()
    b.meth = types.MethodType(a, b)
    b.meth(3, 4)
prints
True
'[A.__call__] self=<__main__.A object at 0xb7233c2c>, args=[1, 2]'
('[A.__call__] self=<__main__.A object at 0xb7233c2c>, args=[<__main__.B '
'object at 0xb71687cc>, 3, 4]')
The number of arguments reaching the __call__ method changes in the
b.meth(3, 4) example. Please explain the first one (the __main__.B
object...) and when Python provides it.
Using Python 3.5.3 on Debian 9.9 Stretch
The important concept here is the bound method: a function that has self bound to it as its first argument.
I'll demonstrate in a couple of examples. The following code will be identical for all examples:
import types

# Class with function
class A:
    def func(*args):
        print('A.func(%s)' % (', '.join([str(arg) for arg in args])))

# Callable function-style class
class A_callable:
    def __call__(*args):
        print('A_callable.__call__(%s)' % (', '.join([str(arg) for arg in args])))

# Empty class
class B():
    pass

# Function without class
def func(*args):
    print('func(%s)' % (', '.join([str(arg) for arg in args])))
Now let's consider a couple of examples:
>>> func(42)
func(42)
This one is obvious. It just calls the function func with argument 42.
The next ones are more interesting:
>>> A().func(42)
A.func(<__main__.A object at 0x7f1ed9ed2908>, 42)
>>> A_callable()(42)
A_callable.__call__(<__main__.A_callable object at 0x7f1ed9ed28d0>, 42)
You can see that the instance self is automatically given to the function as the first argument. It is important to note that the self argument is not added simply because the function is stored somewhere on an object; it is added because looking the function up through the instance produces a bound method, which carries the instance with it.
To demonstrate:
>>> tmp = A().func
>>> tmp
<bound method A.func of <__main__.A object at 0x7f1ed9ed2978>>
>>> tmp(42)
A.func(<__main__.A object at 0x7f1ed9ed2978>, 42)
>>> tmp = A_callable().__call__
>>> tmp
<bound method A_callable.__call__ of <__main__.A_callable object at 0x7f1ed9ed2908>>
>>> tmp(42)
A_callable.__call__(<__main__.A_callable object at 0x7f1ed9ed2908>, 42)
The self argument does not get added because you write a. in front of the call; it is part of the bound method object itself, so storing it in a variable keeps that binding.
You can also manually bind a class object to a function, like this:
>>> tmp = types.MethodType(func, B)
>>> tmp
<bound method func of <class '__main__.B'>>
>>> tmp(42)
func(<class '__main__.B'>, 42)
On the other hand, just assigning a plain function to an instance does not bind self to it. As mentioned above, the argument is not added dynamically at call time; it is fixed when the bound method is created:
>>> b = B()
>>> b.func = func
>>> b.func
<function func at 0x7f1edb58fe18>
>>> b.func(42)
func(42) # does NOT contain the `self` argument
That is why we need to explicitly bind self to the function if we want to add it to an object:
>>> b = B()
>>> b.func = types.MethodType(func, b)
>>> b.func
<bound method func of <__main__.B object at 0x7f1ed9ed2908>>
>>> b.func(42)
func(<__main__.B object at 0x7f1ed9ed2908>, 42)
The only thing left is to understand how binding works. If a method func has an object a bound to it and gets called with *args, it prepends a to *args and then passes them on to the underlying function. That it goes at the beginning is important here.
Now we know everything needed to understand your code:
>>> a = A_callable()
>>> b = B()
>>> b.func = types.MethodType(a, b)
>>> b.func
<bound method ? of <__main__.B object at 0x7f1ed97e9fd0>>
>>> b.func(42)
A_callable.__call__(<__main__.A_callable object at 0x7f1ed97fb2b0>, <__main__.B object at 0x7f1ed97e9fd0>, 42)
First of all, we can replace b.func with a plain tmp because, as previously discussed, storing a function on an object does not change its type or behaviour; only binding self does.
Then, let's step through the code piece by piece:
>>> a = A_callable()
>>> b = B()
So far so good. We have an empty object b and a callable object a.
>>> tmp = types.MethodType(a,b)
This line is the crux. If you understand this, you will understand everything.
tmp is now the function a with b bound to it. That means, if we call tmp(42), it adds b to the beginning of its arguments. a will therefore receive b, 42. Then, because a is callable, it forwards its arguments to a.__call__.
That means, we are at the point where tmp(42) is equal to a.__call__(b, 42).
Because __call__ is defined on A_callable, calling a(...) invokes A_callable.__call__ with a itself as the first argument. Therefore, before the arguments reach A_callable.__call__, a gets added to the beginning of the argument list, meaning the arguments are now a, b, 42.
Now we are at the point where tmp(42) equals A_callable.__call__(a, b, 42). This is exactly what you see:
>>> tmp = types.MethodType(a, b)
>>> tmp(42)
A_callable.__call__(<__main__.A_callable object at 0x7f1ed97fb2b0>, <__main__.B object at 0x7f1ed97e9fd0>, 42)
>>> A_callable.__call__(a, b, 42)
A_callable.__call__(<__main__.A_callable object at 0x7f1ed97fb2b0>, <__main__.B object at 0x7f1ed97e9fd0>, 42)
Now if you split your arguments into self, *args, you basically just take away the first argument and store it in self. Your first argument is a, so self will be a, and your other *args will be b, 42. Again, this is exactly what you see.
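To summarise the chain in runnable form, here is a minimal sketch (the __call__ here returns the argument tuple instead of printing, purely so the equality can be checked):

import types

class A_callable:
    def __call__(*args):
        return args          # return the full argument tuple for inspection

class B:
    pass

a = A_callable()
b = B()
tmp = types.MethodType(a, b)     # bind b as the first argument of the callable a

# tmp(42) prepends b; a.__call__ then prepends a (its own instance):
assert tmp(42) == (a, b, 42)
assert tmp(42) == A_callable.__call__(a, b, 42)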

Decorators or assertions in setters to check property type?

In a Python project, my class has several properties that I need to be of a specific type. Users of the class must have the ability to set these properties.
What is the best way to do this? Two solutions come to my mind:
1. Have test routines in each setter function.
2. Use decorators for attributes
My current solution is 1 but I am not happy with it due to the code duplication. It looks like this:
class MyClass(object):

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, val):
        if not isinstance(val, int):
            raise Exception("Value must be of type int")
        self._x = val

    @property
    def y(self):
        return self._y

    @y.setter
    def y(self, val):
        if not isinstance(val, (tuple, set, list)):
            raise Exception("Value must be of type tuple or set or list")
        self._y = val
From what I know of decorators, it should be possible to have a decorator before def x(self) handle this job. Alas I fail miserably at this, as all examples I found (like this or this) are not targeted at what I want.
The first question is thus: is it better to use a decorator to check property types? If yes, the next question is: what is wrong with the decorator below (I want to be able to write @accepts(int))?
def accepts(types):
    """Decorator to check types of property."""
    def outer_wrapper(func):
        def check_accepts(prop):
            getter = prop.fget
            if not isinstance(self[0], types):
                msg = "Wrong type."
                raise ValueError(msg)
            return self
        return check_accepts
    return outer_wrapper
Appetizer
Callables
This is likely beyond your needs, since it sounds like you're dealing with end-user input, but I figured it may be helpful for others.
Callables include functions defined with def, built-in functions/methods such as open(), lambda expressions, callable classes, and many more. Obviously, if you only want to allow certain type(s) of callables, you can still use isinstance() with types.FunctionType, types.BuiltinFunctionType, types.LambdaType, etc. But if this is not the case, the best solution I am aware of is demonstrated by the MyDecoratedClass.z property, using isinstance() with collections.abc.Callable. It's not perfect, and will return false positives in extraordinary cases (for example, if a class defines a __call__ function that doesn't actually make the class callable). The callable(obj) built-in is the only foolproof check to my knowledge. The MyClass.z property demonstrates the use of this function, but you'd have to write another decorator (or modify the existing one in MyDecoratedClass) in order to support check functions other than isinstance().
Iterables (and Sequences and Sets)
The y property in the code you provided is supposed to be restricted to tuples, sets, and lists, so the following may be of some use to you.
Instead of checking whether arguments are of individual types, you might want to consider using Iterable, Sequence, and Set from the collections.abc module. Please use caution, though, as these types are far less restrictive than simply passing (tuple, set, list) as you have. abc.Iterable (as well as the others) works near-perfectly with isinstance(), although it too can return false positives (e.g. a class defines an __iter__ function but doesn't actually return an iterator -- who hurt you?). The only foolproof way of determining whether an argument is iterable is to call the iter(obj) built-in and let it raise a TypeError if the argument is not iterable, which could work in your case (a small sketch of that check follows the comparison below). I don't know of any built-in alternatives to abc.Sequence and abc.Set, but almost every sequence/set object is also iterable as of Python 3, if that helps. The MyClass.y2 property implements iter() as a demonstration; however, the decorator function in MyDecoratedClass does not (currently) support check functions other than isinstance(), so MyDecoratedClass.y2 uses abc.Iterable instead.
For completeness' sake, here is a quick comparison of their differences:
>>> from collections.abc import Iterable, Sequence, Set
>>> def test(x):
... print((isinstance(x, Iterable),
... isinstance(x, Sequence),
... isinstance(x, Set)))
...
>>> test(123) # int
(False, False, False)
>>> test("1, 2, 3") # str
(True, True, False)
>>> test([1, 2, 3]) # list
(True, True, False)
>>> test(range(3)) # range
(True, True, False)
>>> test((1, 2, 3)) # tuple
(True, True, False)
>>> test({1, 2, 3}) # set
(True, False, True)
>>> import numpy as np
>>> test(np.arange(3)) # numpy.ndarray
(True, False, False)
>>> test(zip([1, 2, 3],[4, 5, 6])) # zip
(True, False, False)
>>> test({1: 4, 2: 5, 3: 6}) # dict
(True, False, False)
>>> test({1: 4, 2: 5, 3: 6}.keys()) # dict_keys
(True, False, True)
>>> test({1: 4, 2: 5, 3: 6}.values()) # dict_values
(True, False, False)
>>> test({1: 4, 2: 5, 3: 6}.items()) # dict_items
(True, False, True)
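And here is the small sketch of the iter()-based check promised above (the helper name is made up for illustration):

def require_iterable(val):
    """Return val unchanged if it is iterable, otherwise raise TypeError."""
    try:
        iter(val)          # raises TypeError if val is not iterable
    except TypeError:
        raise TypeError("expected an iterable, got '%s'" % type(val).__name__) from None
    return val

require_iterable([1, 2, 3])    # OK
require_iterable("abc")        # OK -- strings are iterable too
# require_iterable(123)        # would raise TypeError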
Other Restrictions
Virtually all other argument type restrictions that I can think of must use hasattr(), which I'm not going to get into here.
Main Course
This is the part that actually answers your question. assert is definitely the simplest solution, but it has its limits.
class MyClass:

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, val):
        assert isinstance(val, int)  # raises AssertionError if val is not of type 'int'
        self._x = val

    @property
    def y(self):
        return self._y

    @y.setter
    def y(self, val):
        assert isinstance(val, (list, set, tuple))  # raises AssertionError if val is not of type 'list', 'set', or 'tuple'
        self._y = val

    @property
    def y2(self):
        return self._y2

    @y2.setter
    def y2(self, val):
        iter(val)  # raises TypeError if val is not iterable
        self._y2 = val

    @property
    def z(self):
        return self._z

    @z.setter
    def z(self, val):
        assert callable(val)  # raises AssertionError if val is not callable
        self._z = val

    def multi_arg_example_fn(self, a, b, c, d, e, f, g):
        assert isinstance(a, int)
        assert isinstance(b, int)
        # let's say 'c' is unrestricted
        assert isinstance(d, int)
        assert isinstance(e, int)
        assert isinstance(f, int)
        assert isinstance(g, int)
        self._a = a
        self._b = b
        self._c = c
        self._d = d
        self._e = e
        self._f = f
        self._g = g
        return a + b * d - e // f + g
Pretty clean overall, aside from the multi-argument function I threw in at the end, which demonstrates that asserts can get tedious. However, I'd argue the biggest drawback here is the lack of exception messages/variables. If the end user sees an AssertionError, it carries no message and is therefore mostly useless. If you write intermediate code that catches these errors, that code will have no variables/data with which to explain to the user what went wrong. Enter the decorator function...
from collections.abc import Callable, Iterable

class MyDecoratedClass:

    def isinstance_decorator(*classinfo_args, **classinfo_kwargs):
        '''
        Usage:

        Always remember that each classinfo can be a type OR a tuple of types.

        If the decorated function takes, for example, two positional arguments...
        * You only need to provide positional arguments up to the last positional argument that you want to restrict the type of. Take a look:
          1. Restrict the type of only the first argument with '@isinstance_decorator(<classinfo_of_arg_1>)'
             * Notice that a second positional argument is not required
             * Although if you'd like to be explicit for clarity (in exchange for a small amount of efficiency), use '@isinstance_decorator(<classinfo_of_arg_1>, object)'
             * Every object in Python must be of type 'object', so restricting the argument to type 'object' is equivalent to no restriction whatsoever
          2. Restrict the types of both arguments with '@isinstance_decorator(<classinfo_of_arg_1>, <classinfo_of_arg_2>)'
          3. Restrict the type of only the second argument with '@isinstance_decorator(object, <classinfo_of_arg_2>)'
             * Every object in Python must be of type 'object', so restricting the argument to type 'object' is equivalent to no restriction whatsoever

        Keyword arguments are simpler: @isinstance_decorator(<a_keyword> = <classinfo_of_the_kwarg>, <another_keyword> = <classinfo_of_the_other_kwarg>, ...etc)
        * Remember that you only need to include the kwargs that you actually want to restrict the type of (no using 'object' as a keyword argument!)
        * Using kwargs is probably more efficient than using example 3 above; I would avoid having to use 'object' as a positional argument as much as possible

        Programming-Related Errors:

        Raises IndexError if given more positional arguments than decorated function
        Raises KeyError if given keyword argument that decorated function isn't expecting
        Raises TypeError if given argument that is not of type 'type'
        * Raised by 'isinstance()' when fed improper 2nd argument, like 'isinstance(foo, 123)'
        * Virtually all UN-instantiated objects are of type 'type'

          Examples:
            example_instance = ExampleClass(*args)
            # Neither 'example_instance' nor 'ExampleClass(*args)' is of type 'type', but 'ExampleClass' itself is
            example_int = 100
            # Neither 'example_int' nor '100' is of type 'type', but 'int' itself is
            def example_fn(): pass
            # 'example_fn' is not of type 'type'.
            print(type(example_fn).__name__)   # function
            print(type(isinstance).__name__)   # builtin_function_or_method
            # As you can see, there are also several types of callable objects
            # If needed, you can retrieve most function/method/etc. types from the built-in 'types' module

        Functional/Intended Errors:

        Raises TypeError if a decorated function argument is not an instance of the type(s) specified by the corresponding decorator argument
        '''
        def isinstance_decorator_wrapper(old_fn):
            def new_fn(self, *args, **kwargs):
                for i in range(len(classinfo_args)):
                    classinfo = classinfo_args[i]
                    arg = args[i]
                    if not isinstance(arg, classinfo):
                        raise TypeError("%s() argument %s takes argument of type%s' but argument of type '%s' was given" %
                                        (old_fn.__name__, i,
                                         "s '" + "', '".join([x.__name__ for x in classinfo]) if isinstance(classinfo, tuple) else " '" + classinfo.__name__,
                                         type(arg).__name__))
                for k, classinfo in classinfo_kwargs.items():
                    kwarg = kwargs[k]
                    if not isinstance(kwarg, classinfo):
                        raise TypeError("%s() keyword argument '%s' takes argument of type%s' but argument of type '%s' was given" %
                                        (old_fn.__name__, k,
                                         "s '" + "', '".join([x.__name__ for x in classinfo]) if isinstance(classinfo, tuple) else " '" + classinfo.__name__,
                                         type(kwarg).__name__))
                return old_fn(self, *args, **kwargs)
            return new_fn
        return isinstance_decorator_wrapper

    @property
    def x(self):
        return self._x

    @x.setter
    @isinstance_decorator(int)
    def x(self, val):
        self._x = val

    @property
    def y(self):
        return self._y

    @y.setter
    @isinstance_decorator((list, set, tuple))
    def y(self, val):
        self._y = val

    @property
    def y2(self):
        return self._y2

    @y2.setter
    @isinstance_decorator(Iterable)
    def y2(self, val):
        self._y2 = val

    @property
    def z(self):
        return self._z

    @z.setter
    @isinstance_decorator(Callable)
    def z(self, val):
        self._z = val

    @isinstance_decorator(int, int, e = int, f = int, g = int, d = (int, float, str))
    def multi_arg_example_fn(self, a, b, c, d, e, f, g):
        # Identical to the assertions in MyClass.multi_arg_example_fn
        self._a = a
        self._b = b
        self._c = c
        self._d = d
        return a + b * e - f // g
Clearly, multi_arg_example_fn is one place where this decorator really shines. The clutter created by the assertions has been reduced to a single line. Let's take a look at some example error messages:
>>> test = MyClass()
>>> dtest = MyDecoratedClass()
>>> test.x = 10
>>> dtest.x = 10
>>> print(test.x == dtest.x)
True
>>> test.x = 'Hello'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 7, in x
AssertionError
>>> dtest.x = 'Hello'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 100, in new_fn
TypeError: x() argument 0 takes argument of type 'int' but argument of type 'str' was given
>>> test.y = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 15, in y
AssertionError
>>> test.y2 = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 23, in y2
TypeError: 'int' object is not iterable
>>> dtest.y = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 100, in new_fn
TypeError: y() argument 0 takes argument of types 'list', 'set', 'tuple' but argument of type 'int' was given
>>> dtest.y2 = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 100, in new_fn
TypeError: y2() argument 0 takes argument of type 'Iterable' but argument of type 'int' was given
>>> test.z = open
>>> dtest.z = open
>>> test.z = None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 31, in z
AssertionError
>>> dtest.z = None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 100, in new_fn
TypeError: z() argument 0 takes argument of type 'Callable' but argument of type 'NoneType' was given
Far superior in my opinion. Everything looks good except...
>>> test.multi_arg_example_fn(9,4,[1,2],'hi', g=2,e=1,f=4)
11
>>> dtest.multi_arg_example_fn(9,4,[1,2],'hi', g=2,e=1,f=4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 102, in new_fn
KeyError: 'd'
>>> print('I forgot that you have to merge args and kwargs in order for the decorator to work properly with both but I dont have time to fix it right now. Absolutely safe for properties for the time being though!')
I forgot that you have to merge args and kwargs in order for the decorator to work properly with both but I dont have time to fix it right now. Absolutely safe for properties for the time being though!
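One way to close that gap (a sketch of my own, not part of the original answer) is to bind *args and **kwargs to the decorated function's signature with inspect.signature() first, and then check each restricted parameter by name, however it was passed:

import inspect

def isinstance_checked(**restrictions):
    """Keyword-only variant: restrict parameters by name, however they are passed."""
    def wrapper(old_fn):
        sig = inspect.signature(old_fn)
        def new_fn(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)   # maps every argument to its parameter name
            for name, classinfo in restrictions.items():
                if name in bound.arguments and not isinstance(bound.arguments[name], classinfo):
                    raise TypeError("%s() argument '%s' must be of type %s, not '%s'"
                                    % (old_fn.__name__, name,
                                       getattr(classinfo, '__name__', classinfo),
                                       type(bound.arguments[name]).__name__))
            return old_fn(*args, **kwargs)
        return new_fn
    return wrapper

@isinstance_checked(a=int, b=int, d=(int, float, str))
def example(a, b, c, d):
    return a + b

print(example(1, 2, None, 'hi'))   # 3
# example(1, 'x', None, 'hi')      # would raise TypeError for argument 'b'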
Edit Notice: My previous answer was completely incorrect. I was suggesting the use of type hints, forgetting that they aren't actually enforced in any way; they are strictly a development/IDE tool. They are still insanely helpful though; I recommend looking into using them.
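For what it's worth, a tiny illustration of such a hint on a setter (purely informational; nothing enforces it at runtime):

class Hinted:
    @property
    def x(self) -> int:
        return self._x

    @x.setter
    def x(self, val: int) -> None:   # a hint only: assigning a str would still succeed
        self._x = val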

python error where __str__ returns a NoneType

class Pixel:
    """Representing a 'pixel', aka one character on the screen;
    is mostly gonna be used in Map, using a tuple location and a
    character that can be changed"""

    def __init__(self, char='#', location=(0, 0)):
        assert type(char) == str
        assert type(location[0]) == int and type(location[1]) == int
        self.location = location
        self.x = self.location[0]
        self.y = self.location[1]
        self.char = char

    def __str__(self):
        return(self.char)

class Map:
    """Representing a map by having different characters
    on different lines and being able to manipulate the
    characters, thus playing a game"""

    def __init__(self, file=None):
        self.pixels = {}
        if not file:
            self.rows = 3
            self.colls = 3
            for r in range(self.rows):
                for c in range(self.colls):
                    self.pixels[(r, c)] = Pixel('#', (r, c))

    def __str__(self):
        print(self.pixels)
        for c in range(self.colls):
            print('')
            for r in range(self.rows):
                print(self.pixels[(r, c)], end='')
a = Map()
print(a)
I am trying to make a class that defines a grid where each place in the grid has a character, but when I run the code I get an error telling me that __str__ returns a NoneType. I know I am not yet handling file input when initializing Map, but that isn't the problem here. Here is the output I got:
{(0, 1): <__main__.Pixel object at 0x7f31612a3080>,
(1, 2): <__main__.Pixel object at 0x7f31612a3470>,
(0, 0): <__main__.Pixel object at 0x7f31612a3048>,
(2, 0): <__main__.Pixel object at 0x7f31612a34a8>,
(1, 0): <__main__.Pixel object at 0x7f31612a32b0>,
(2, 2): <__main__.Pixel object at 0x7f31612a3390>,
(0, 2): <__main__.Pixel object at 0x7f31612a30b8>,
(2, 1): <__main__.Pixel object at 0x7f31612a3358>,
(1, 1): <__main__.Pixel object at 0x7f31612a32e8>}
###
###
###Traceback (most recent call last):
File "main.py", line 45, in <module>
print(a)
TypeError: __str__ returned non-string (type NoneType)
exited with non-zero status
I am also confused about why the print in Map.__str__ showed me the __main__.Pixel objects instead of using their __str__ method, but that is probably just my lack of knowledge.
What am I missing?
You should use __repr__. Also, in Map.__str__ you are not returning anything. For example:
In [10]: class Test:
....: def __str__(self):
....: return "str"
....: def __repr__(self):
....: return "repr"
....:
In [11]: t=Test()
In [12]: t
Out[12]: repr
In [13]: print(t)
str
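For example, a sketch (not part of the original answer) of how the Map.__str__ from the question could build and return a string instead of printing; str() on each Pixel still goes through Pixel.__str__:

    def __str__(self):
        lines = []
        for c in range(self.colls):
            # str() on a Pixel calls Pixel.__str__ and yields its character
            lines.append(''.join(str(self.pixels[(r, c)]) for r in range(self.rows)))
        return '\n'.join(lines)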
I forgot to return anything: I had __str__ print everything I needed, but since it didn't return anything, my print(a) got a NoneType error.

Why does type(mock.MagicMock()) == mock.MagicMock return False?

In Python 3.4:
>>> from unittest import mock
>>> type(mock.MagicMock()) == mock.MagicMock
False # Huh, why is that?
>>> isinstance(mock.MagicMock(), mock.MagicMock)
True
When I simplify this to classes A and B, type(B()) == B returns True:
>>> class A: pass
>>> class B: pass
>>> class C(A, B): pass
>>> type(B()) == B
True # Of course I would say.
Why does type(mock.MagicMock()) == mock.MagicMock return False? I know about the difference between isinstance() and type() in Python: type() doesn't 'understand' subclassing, whereas isinstance() does. But I don't see how that difference is involved here.
source of mock.MagicMock.
More experiments suggest the answer.
>>> from unittest.mock import MagicMock as mm
>>> mm1 = mm()
>>> mm2 = mm()
>>> type(mm1)
<class 'unittest.mock.MagicMock'>
>>> type(mm2)
<class 'unittest.mock.MagicMock'>
>>> type(mm1) == type(mm2)
False
>>> id(type(mm1))
53511896
>>> id(type(mm2))
53510984
>>> type(mm1) is mm1.__class__
True
>>> mm
<class 'unittest.mock.MagicMock'>
>>> id(mm)
53502776
Conclusion: each instance of MagicMock has a 'class' that looks like MagicMock, but is not. What is the __new__ that creates such instances? MagicMock subclasses Mock, which subclasses NonCallableMock, which has this __new__ method:
def __new__(cls, *args, **kw):
    # every instance has its own class
    # so we can create magic methods on the
    # class without stomping on other mocks
    new = type(cls.__name__, (cls,), {'__doc__': cls.__doc__})
    instance = object.__new__(new)
    return instance
The new = ... statement creates a subclass of the cls argument with the same name and docstring. The next line creates a single instance of this subclass. So every mock instance gets its own one-off class, and type(mm()) is never mm itself.
>>> mm.__bases__
(<class 'unittest.mock.MagicMixin'>, <class 'unittest.mock.Mock'>)
>>> type(mm1).__bases__
(<class 'unittest.mock.MagicMock'>,)
>>> type(mm1).__bases__[0] is mm
True
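To see the same mechanism in isolation, here is a small sketch (not mock's code) that gives every instance its own throwaway subclass:

class PerInstanceClass:
    def __new__(cls, *args, **kw):
        sub = type(cls.__name__, (cls,), {})   # a fresh subclass for every instance
        return object.__new__(sub)

a, b = PerInstanceClass(), PerInstanceClass()
print(type(a) == type(b))               # False -- two distinct one-off subclasses
print(type(a) == PerInstanceClass)      # False -- the same effect as with MagicMock
print(isinstance(a, PerInstanceClass))  # True  -- isinstance still sees the base class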
