Python enum.Enum: create variables to which I can assign enum.Enum members

Creating enumerations in Python 3.4+ is pretty easy:
from enum import Enum

class MyEnum(Enum):
    A = 10
    B = 20

This gives me a new type, MyEnum.
With it I can assign a variable:

x = MyEnum.A
So far so good.
However, things start to get complicated if I want to use enum.Enum members as arguments to functions or class methods, and I want to ensure that class attributes only hold enum.Enum members, not other values.
How can I do this? My idea is something like the following, which I consider more a workaround than a solution:
class EnContainer:
    def __init__(self, val: type(MyEnum.A) = MyEnum.A):
        assert isinstance(val, type(MyEnum.A))
        self._value = val
Do you have any suggestions, or do you see any problems with my approach? I have to consider about 10 different enumerations and would like to arrive at a consistent approach for initialization, setters, and getters.

Instead of type(MyEnum.A), just use MyEnum:
def __init__(self, val: MyEnum = MyEnum.A):
    assert isinstance(val, MyEnum)
Never use assert for error checking; asserts are for program validation. In other words: who is calling EnContainer? If only your own code calls it with already-validated data, then assert is fine; but if code outside your control calls it, then you should use proper error checking:
def __init__(self, val: MyEnum = MyEnum.A):
    if not isinstance(val, MyEnum):
        raise ValueError(
            "EnContainer called with %s.%r (should be a 'MyEnum')"
            % (type(val), val)
        )
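Since the question asks for a consistent approach across roughly ten enumerations, here is a minimal sketch of one way to reuse that check for initialization, setters, and getters. The enum_property helper is my illustration, not part of the answer above:

from enum import Enum

class MyEnum(Enum):
    A = 10
    B = 20

def enum_property(name, enum_cls):
    # Build a property whose setter only accepts members of enum_cls.
    def getter(self):
        return getattr(self, name)
    def setter(self, val):
        if not isinstance(val, enum_cls):
            raise ValueError(
                "%s expects a %s member, got %r" % (name, enum_cls.__name__, val)
            )
        setattr(self, name, val)
    return property(getter, setter)

class EnContainer:
    value = enum_property("_value", MyEnum)

    def __init__(self, val: MyEnum = MyEnum.A):
        self.value = val  # goes through the validating setter

c = EnContainer()
c.value = MyEnum.B  # accepted
# c.value = 20      # would raise ValueError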

Related

Creation of Python unit test

function_one.py

class FunctionOne(Base):
    def __init__(self, amount, tax):
        super().__init__(amount, tax)

function_two.py

class FunctionTwo:
    def __init__(self, a, b, c):
        self.__a = a
        self.__b = b
        self.__c = c

    def _get_info(self):
        x = FunctionOne(0, 1)
        return x
test_function_two.py

import unittest

class TestPostProcessingStrategyFactory(unittest.TestCase):
    def test__get_info(self):
        a = "a"
        b = "b"
        c = "c"
        amount = 0
        tax = 1
        function_two = FunctionTwo(a, b, c)
        assert function_two._get_info() == FunctionOne(0, 1)

I am trying to create a unit test for the function_two.py source code. I get an assertion error that the object at ******** != the object at *********.
So the two objects' addresses are different. How can I make this test pass by correcting the assert statement?

assert function_two._get_info() == FunctionOne(0, 1)
You need to understand that equality comparisons depend on the __eq__ method of a class. From the code you provided, it appears that simply initializing two FunctionOne objects with the same arguments does not produce two objects that compare as equal. What implementation of __eq__ underlies that class, only you can know.
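For illustration, here is a minimal sketch of what an attribute-based __eq__ on FunctionOne might look like; this assumes it simply stores amount and tax, since the Base class is not shown in the question:

class FunctionOne:
    def __init__(self, amount, tax):
        self.amount = amount
        self.tax = tax

    def __eq__(self, other):
        # Compare by value, not by identity (the default behavior).
        if not isinstance(other, FunctionOne):
            return NotImplemented
        return (self.amount, self.tax) == (other.amount, other.tax)

assert FunctionOne(0, 1) == FunctionOne(0, 1)  # now passes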
However, I would argue the approach is faulty to begin with because unit tests, as the name implies, are supposed to isolate your units (i.e. functions typically) as much as possible, which is not what you are doing here.
When you are testing a function f that calls another of your functions g, strictly speaking, the correct approach is mocking g during the test. You need to ensure that you are testing f and only f. This extends to instances of other classes that you wrote, since their methods are also just functions that you wrote.
Have a look at the following example code.py:
class Foo:
    def __init__(self, x, y):
        ...

class Bar:
    def __init__(self, a, b):
        self.__a = a
        self.__b = b

    def get_foo(self):
        foo = Foo(self.__a, self.__b)
        return foo
Say we want to test Bar.get_foo. That method uses our Foo class inside it, instantiating it and returning that instance. We want to ensure that this is what the method does. We don't want to concern ourselves with anything that relates to the implementation of Foo because that is for another test case.
What we need to do is mock that class entirely. Then we substitute some unique object to be returned by calling our mocked Foo and check that we get that object from calling get_foo.
In addition, we want to check that get_foo called the (mocked) Foo constructor with the arguments we expected, i.e. with its __a and __b attributes.
Here is an example test.py:
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code

class BarTestCase(TestCase):
    @patch.object(code, "Foo")
    def test_get_foo(self, mock_foo_cls: MagicMock) -> None:
        # Create some random but unique object that should be returned
        # when the mocked class is called;
        # this object should be the output of `get_foo`:
        mock_foo_cls.return_value = expected_output = object()
        # We remember the arguments used to initialize `bar` for later:
        a, b = "spam", "eggs"
        bar = code.Bar(a=a, b=b)
        # Run the method under test:
        output = bar.get_foo()
        # Check that we get that EXACT object returned:
        self.assertIs(expected_output, output)
        # Ensure that our mocked class was instantiated as expected:
        mock_foo_cls.assert_called_once_with(a, b)
That way we ensure proper isolation from our Foo class during the Bar.get_foo test.
Side note: If we wanted to be super pedantic, we should even isolate our test method from the initialization of Bar, but in this simple example that would be overkill. If your __init__ method does many things aside from just setting some instance attributes, you should definitely mock that during your test as well.
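As a hedged sketch of that side note (my example, not part of the original answer), Bar.__init__ itself can be patched away so a test no longer depends on it; note the name-mangled attributes that then have to be set by hand:

from unittest.mock import patch
from . import code

def test_get_foo_isolated_from_init():
    with patch.object(code.Bar, "__init__", return_value=None) as mock_init:
        bar = code.Bar("spam", "eggs")  # the real __init__ never runs
        mock_init.assert_called_once_with("spam", "eggs")
        # Since __init__ was mocked, set the (name-mangled) attributes by hand:
        bar._Bar__a, bar._Bar__b = "spam", "eggs"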
Hope this helps.
References:
The Mock class
The patch decorator
TestCase.assertIs
Mock.assert_called_once_with

Class body visibility in Python

Even after reading the answer of @ncoghlan in
Python nonlocal statement in a class definition
(which, by the way, I didn't fully understand),
I'm not able to understand this behavior.
# The script that fails, and I can't explain why
class DW:
    def r(): return 3
    def rr(): return 2 + r()
    y = r()   # works: r is already in the class namespace
    x = rr()  # NameError: the class body is not an enclosing scope for rr

# a solution that I don't like: I don't want higher-order functions
class W:
    def r(): return 3
    def rr(r): return 2 + r()
    y = r()
    x = rr(r)
Class bodies act more or less like scripts. This feature is still somewhat strange to me, and I have some newbie questions about it. Thank you in advance for your help.
Can I define functions inside the body of a class, use them, and delete them before the end of the class definition? (In that case, naturally, the deleted functions will not exist for instances of the class.)
How can I give functions and attributes inside a class definition better visibility to one another, avoiding the use of arguments to access them?
You can use the keyword self so that one method can call another on the same instance:

class W:
    def r(self): return 3
    def rr(self): return 2 + self.r()

w = W()
y = w.r()   # 3
x = w.rr()  # 5
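Regarding the first question (this example is mine, not part of the answer above): yes, a function defined in the class body can be called while the body runs and deleted before the class statement ends:

class C:
    def _helper():   # plain function, usable only while the body runs
        return 3

    y = _helper()    # a direct call inside the class body works
    x = 2 + _helper()

    del _helper      # remove it before the class object is created

c = C()
print(c.y, c.x)      # 3 5
# c._helper  ->  AttributeError: _helper no longer exists on the class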

Is there any problem in calling a static method from type name in python?

Should I be aware of any problem that could arise from doing this?
Example:
class A(object):
    def __init__(self, a):
        self.a = a

    @staticmethod
    def add1(a):
        return a + 1

x = A(1)
y = type(x).add1(2)
My use case would be calling a static method that processes data that was generated by an object that we cannot use anymore.
A simple test for identity gives us:

x = A(1)
print(type(x) is A)            # True
print(type(x).add1 is A.add1)  # True
So based on that, there should not be any problem, but I am not 100% sure. I would probably go with accessing the x.__class__ attribute though, which is in my opinion more intuitive.
EDIT: From Python documentation regarding the type function:
With one argument, return the type of an object. The return value is a type object and generally the same object as returned by object.__class__.
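To illustrate (my example, using the class A from the question), all of these spellings reach the same static method:

x = A(1)

print(A.add1(2))            # 3 - via the class name
print(type(x).add1(2))      # 3 - via type()
print(x.__class__.add1(2))  # 3 - via the __class__ attribute
print(x.add1(2))            # 3 - even via the instance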

Multiprocessing on a Dictionary of Class Instances

In the generic example below, I use Foobar_Collection to manage a dictionary of Foo instances. Additionally, Foobar_Collection carries a method which sequentially calls myMethod(), shared by all instances of Foo. It works fine so far. However, I wonder whether I could take advantage of multiprocessing, so that run_myMethodForAllfoobars() could divide the work among several chunks of instances. The instance methods are "independent" of each other (I think this case is called embarrassingly parallel). Any help would be great!
class Foobar_Collection(dict):
    def __init__(self, *arg, **kw):
        super(Foobar_Collection, self).__init__(*arg, **kw)

    def foobar(self, *arg, **kw):
        foo = Foo(*arg, **kw)
        self[foo.name] = foo
        return foo

    def run_myMethodForAllfoobars(self):
        for name in self:
            self[name].myMethod(10)
        return None

class Foo(object):
    def __init__(self, name):
        self.name = name
        self.result = 0

    # just some toy example method
    def myMethod(self, x):
        self.result += x
        return None

Foobar = Foobar_Collection()
Foobar.foobar('A')
Foobar.foobar('B')
Foobar.foobar('C')
Foobar.run_myMethodForAllfoobars()
You can use multiprocessing for this situation, but it's not great, because the method that you're trying to parallelize is useful for its side effects rather than its return value. This means you'll need to serialize the Foo object in both directions (sending it to the child process, then sending the modified version back). If your real objects are more complex than the Foo objects in your example, the overhead of copying all of each object's data may make this slower than just doing everything in one process.
import multiprocessing

def worker(foo):
    foo.myMethod(10)
    return foo

class Foobar_Collection(dict):
    # ...
    def run_myMethodForAllfoobars(self):
        with multiprocessing.Pool() as pool:
            results = pool.map(worker, self.values())
        self.update((foo.name, foo) for foo in results)
A better design might let you only serialize the information you need to do the calculation. In your example, the only thing you need from the Foo object is its result (which you'll add 10 to), which you could extract and process without passing around the rest of the object:
def worker(num):
    return num + 10

class Foobar_Collection(dict):
    # ...
    def run_myMethodForAllfoobars(self):
        with multiprocessing.Pool() as pool:
            results = pool.map(worker, (foo.result for foo in self.values()))
        for foo, new_result in zip(self.values(), results):
            foo.result = new_result
Now obviously this doesn't actually run myMethod on the foo objects any more (though it's equivalent to doing so). If you can't decouple the method from the object like this, it may be hard to get good performance.
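One practical note (mine, not from the answer above): on platforms that spawn rather than fork worker processes, such as Windows, the pool must be created under an import guard, and worker must live at module level so it can be pickled:

import multiprocessing

def worker(num):
    return num + 10

if __name__ == "__main__":
    # The guard keeps child processes from re-running the module's
    # top-level code when they are spawned.
    with multiprocessing.Pool() as pool:
        print(pool.map(worker, [0, 1, 2]))  # [10, 11, 12]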

PyCharm: 'Function Doesn't Return Anything'

I just started working with PyCharm Community Edition 2016.3.2 today. Every time I assign a value from my function at_square, it warns me that 'Function at_square doesn't return anything,' but it definitely does in every instance unless an error is raised during execution, and every use of the function is behaving as expected. I want to know why PyCharm thinks it doesn't and if there's anything I can do to correct it. (I know there is an option to suppress the warning for that particular function, but it does so by inserting a commented line in my code above the function, and I find it just as annoying to have to remember to take that out at the end of the project.)
This is the function in question:
def at_square(self, square):
    """ Return the value at the given square """
    if type(square) == str:
        file, rank = Board.tup_from_an(square)
    elif type(square) == tuple:
        file, rank = square
    else:
        raise ValueError("Expected tuple or AN str, got " + str(type(square)))
    if not 0 <= file <= 7:
        raise ValueError("File out of range: " + str(file))
    if not 0 <= rank <= 7:
        raise ValueError("Rank out of range: " + str(rank))
    return self.board[file][rank]
If it matters, this is more precisely a method of an object. I stuck with the term 'function' because that is the language PyCharm is using.
My only thought is that my use of error raising might be confusing PyCharm, but that seems too simple. (Please feel free to critique my error raising, as I'm not sure this is the idiomatic way to do it.)
Update: Humorously, if I remove the return line altogether, the warning goes away and returns immediately when I put it back. It also goes away if I replace self.board[file][rank] with a constant value like 8. Changing file or rank to constant values does not remove the warning, so I gather that PyCharm is somehow confused about the nature of self.board, which is a list of 8 other lists.
Update: Per the suggestion of @StephenRauch, I created a minimal example that reflects everything relevant to data assignment done by at_square:
class Obj:
    def __init__(self):
        self.nested_list = [[0], [1]]

    @staticmethod
    def tup_method(data):
        return tuple(data)

    def method(self, data):
        x, y = Obj.tup_method(data)
        return self.nested_list[x][y]

    def other_method(self, data):
        value = self.method(data)
        print(value)

x = Obj()
x.other_method([1, 0])  # [1, 2] would raise IndexError; [1, 0] prints 1
PyCharm doesn't give any warnings for this. In at_square, I've tried commenting out every single line down to the two following:
def at_square(self, square):
    file, rank = Board.tup_from_an(square)
    return self.board[file][rank]
PyCharm gives the same warning. If I leave only the return line, then and only then does the warning disappear. PyCharm appears to be confused by the simultaneous assignment of file and rank via tup_from_an. Here is the code for that method:
@staticmethod
def tup_from_an(an):
    """ Convert a square in algebraic notation into a coordinate tuple """
    if an[0] in Board.a_file_dict:
        file = Board.a_file_dict[an[0]]
    else:
        raise ValueError("Invalid an syntax (file out of range a-h): " + str(an))
    if not an[1].isnumeric():
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    elif int(an[1]) - 1 in Board.n_file_dict:
        rank = int(an[1]) - 1
    else:
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    return file, rank
Update: In its constructor, the class Board (the class containing all these methods) saves a reference to the instance in a static variable, instance. self.at_square(square) gives the warning, while Board.instance.at_square(square) does not. I'm still going to use the former where appropriate, but that could shed some light on what PyCharm is thinking.
PyCharm assumes a missing return value if the return value statically evaluates to None. This can happen when values are initialised with None and their types are changed later on.
class Foo:
    def __init__(self):
        self.qux = [None]  # infers type of Foo().qux as List[None]

    def bar(self):
        return self.qux[0]  # infers return type as None
At this point, Foo.bar is statically inferred as (self: Foo) -> None. Dynamically changing the type of qux via side-effects does not update this:
foo = Foo()
foo.qux = [2]  # *dynamic* type of foo.bar() is now ``(self: Foo) -> int``
foo_bar = foo.bar()  # function 'bar' still has the same *static* type
The problem is that you are overwriting a statically inferred class attribute by means of a dynamically assigned instance attribute. That is simply not feasible for static analysis to catch in general.
You can fix this with an explicit type hint.
import typing

class Foo:
    def __init__(self):
        self.qux = [None]  # type: typing.List[int]

    def bar(self):
        return self.qux[0]  # infers return type as int
Since Python 3.6, you can also use inline variable annotations. These are especially useful in combination with explicit return types.
import typing

class Foo:
    def __init__(self):
        # initial type hint to enable inference
        self.qux: typing.List[int] = [None]

    # explicit return type hint to override inference
    def bar(self) -> int:
        return self.qux[0]  # infers return type as int
Note that it is still a good idea to rely on inference where it works! Annotating only self.qux makes it easier to change the type later on. Annotating bar is mostly useful for documentation and to override incorrect inference.
If you need to support older Python versions, you can also use stub files. Say your class is in foomodule.py; create a file called foomodule.pyi next to it. Inside, just add the annotated fields and function signatures; you can (and should) leave out the bodies.
import typing

class Foo:
    # type hint for fields
    qux: typing.List[int]

    # explicit return type hint to override inference
    def bar(self) -> int: ...
Type hinting as of Python 3.6
The style in the example below is now recommended:
from typing import List

class Board:
    def __init__(self):
        self.board: List[List[int]] = []
Quick Documentation
PyCharm's 'Quick Documentation' shows whether you got the typing right. Place the cursor in the middle of the object of interest and hit Ctrl+Q. I suspect the types from tup_from_an(an) are not going to be as desired. You could try to type hint all the args and internal objects, but it may be better value to type hint just the function return types. Type hinting means I don't need to trawl external documentation, so I focus the effort on objects that will be used by external users and try not to do too much internal stuff. Here's both argument and return type hinting:
from typing import Tuple

@staticmethod
def tup_from_an(an: str) -> Tuple[int, int]:
    ...
Clear Cache
PyCharm can lock onto outdated definitions. It doesn't hurt to go Help > Find Action... > Clear Caches.
Nobody's perfect
Python is constantly improving (type hinting was updated in 3.7), and PyCharm is constantly improving too. The price of the fast pace of development of these relatively immature, advanced features is that checking or submitting to their issue tracker may be the next port of call.
