mypy with numbers.Number - python-3.x

I was hoping to use mypy's static duck typing facility to write a function that can process a sequence of number-like objects, where "number-like" is defined by numbers.Number:
from numbers import Number
from typing import Sequence
def something_numerical(xs: Sequence[Number]) -> Number:
...
print(something_numerical([1., 2., 3.]))
However, when I call this code with a list of floats or ints, I get a mypy error:
$ mypy foo/foo.py
foo/foo.py:9: error: List item 0 has incompatible type "float"; expected "Number"
foo/foo.py:9: error: List item 1 has incompatible type "float"; expected "Number"
foo/foo.py:9: error: List item 2 has incompatible type "float"; expected "Number"
I realize that the float type is not a subclass of numbers.Number. However, the numbers module provides a set of abstract base classes that are intended to be used to check whether an object has the requisite methods to do numerical operations. How might I rewrite this code so that (1) it can still process ints, floats, fractions.Fraction, and so on, and (2) it passes type checking by mypy?

The answer by @SUTerliakov is informative and accurate; however, while working on this problem I came across another way of solving it that I will post here for others who encounter this. I simply defined my own "Number" protocol from scratch:
from typing import Protocol

class Number(Protocol):
    # Parameters are left unannotated (implicitly Any); annotating them as
    # 'Number' would require e.g. float's operators to accept any Number,
    # which they do not, so builtins would no longer match the protocol.
    def __abs__(self) -> 'Number':
        ...
    def __mul__(self, other) -> 'Number':
        ...
    def __add__(self, other) -> 'Number':
        ...
    def __lt__(self, other) -> bool:
        ...
    def __gt__(self, other) -> bool:
        ...
    def __truediv__(self, other) -> 'Number':
        ...
    def __rtruediv__(self, other) -> 'Number':
        ...
Here the choice to include __mul__, __abs__, __truediv__ and so on was based on the operations I needed to perform inside my function, rather than on any abstract notion of what a number ought or ought not to be.
It seems that the "number" concept in Python is complicated and hard to capture in a way that fits all cases, so it really might make sense to define a per-project protocol like this.
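For illustration, here is a minimal sketch (not part of the original answer) of how the question's function could be annotated with this protocol; since float and fractions.Fraction provide all of the listed dunder methods, mypy should accept them structurally:
from fractions import Fraction
from typing import Sequence

def something_numerical(xs: Sequence[Number]) -> Number:
    # Use only the operations declared on the protocol
    total: Number = xs[0]
    for x in xs[1:]:
        total = total + x
    return total / len(xs)

print(something_numerical([1.0, 2.0, 3.0]))
print(something_numerical([Fraction(1, 2), Fraction(3, 2)]))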

This was stated in @bzu's answer, but I'd like to add some explanation to it.
First thing to note: at runtime, issubclass(int, Number) and issubclass(float, Number) both evaluate to True, because the builtin numeric types are registered with the numbers ABCs. Mypy, however, does not honor this registration. Its handling of builtin numbers is surprising type-checking behavior, but it was standardized in PEP 484:
Rather than requiring that users write import numbers and then use numbers.Float etc., this PEP proposes a straightforward shortcut that is almost as effective: when an argument is annotated as having type float, an argument of type int is acceptable; similar, for an argument annotated as having type complex, arguments of type float or int are acceptable. This does not handle classes implementing the corresponding ABCs or the fractions.Fraction class, but we believe those use cases are exceedingly rare.
So to support builtin numbers you can use just int, float or complex. To handle other ABCs you should use the appropriate numbers member. I don't know why float was not made compatible with numbers.Number.
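To make the shortcut concrete, here is a small sketch (my example, not from the original answer): mypy accepts an int where a float is annotated, but the shortcut does not extend to fractions.Fraction:
from fractions import Fraction

def g(x: float) -> float:
    return x

g(1)               # accepted: int is allowed where float is expected (PEP 484 shortcut)
g(1.5)             # accepted
g(Fraction(1, 2))  # rejected by mypy: the shortcut does not cover Fraction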
For almost all cases you can use a type alias (TypeAlias was backported in the typing_extensions module for Python < 3.10):
from fractions import Fraction
from numbers import Number
from typing import TypeAlias

AnyNumber: TypeAlias = Number | float

def f(x: AnyNumber) -> bool:
    return x == 0

f(1)
f(1.0)
f(Fraction(1, 3))
This typechecks. One incompatible class I'm aware of is decimal.Decimal: it does not match AnyNumber. That would be expected if Number were simply treated like float (Decimal does not interoperate with float; Decimal(1) / 0.5 fails at runtime), but that is not what is happening here, as we'll see later.
If your function uses AnyNumber and int together, everything dies:
def f(x: AnyNumber) -> float:
    return x / 2 + 1  # E: Unsupported operand types for / ("Number" and "int")
Although you can, for example, do Fraction(1, 2) / 2 at runtime, Number does not guarantee int or float compatibility. You can use numbers.Real or numbers.Complex instead - they are compatible with float:
AnyReal: TypeAlias = Real | float
This allows x / 2 + 1 and remains incompatible with decimal.Decimal, but now it is intended behavior.
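A minimal sketch of the Real-based alias in context (my example; it assumes Python ≥ 3.10 so that | can be evaluated at runtime, and the name halve_plus_one is mine):
from fractions import Fraction
from numbers import Real
from typing import TypeAlias

AnyReal: TypeAlias = Real | float

def halve_plus_one(x: AnyReal) -> AnyReal:
    return x / 2 + 1  # accepted: the Real stubs allow arithmetic with builtins

print(halve_plus_one(3))
print(halve_plus_one(1.5))
print(halve_plus_one(Fraction(1, 3)))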
You can use the mypy Playground to investigate the topic further. Also, having a look at numbers in typeshed may help.

It seems using Union will be the way to go:
from numbers import Number
from typing import Sequence, Union
from fractions import Fraction

def something_numerical(xs: Sequence[Union[Number, float]]) -> Union[Number, float]:
    return sum(xs)

if __name__ == '__main__':
    print(something_numerical([1.2, 2, Fraction(1, 2)]))

Related

Is Python 3.x dictionary lint type check (typing) set at instantiation

I have a configuration dictionary which I load with values in a hierarchy of dunder init calls, each of which instantiates part of the configuration. Trying to add typing to this, I ran into strange behavior (or maybe I'm doing something wrong). The behavior I'm getting is consistent with the dictionary only reporting the value types that were inserted at its first declaration; updating, extending and adding keys does not seem to change the types the dictionary declares as possible when accessing it. Here is a simple piece of code I wrote to illustrate the problem:
import re

def foo(a: int = 1, b: str = "b"):
    d = {"a": a}
    d.update({"b": b})
    print(re.findall(d["b"], "baba"))

foo()
The code of course works and outputs ['b', 'b'] (no pun intended), but PyCharm gives two warnings:
on the update line: Unexpected type(s): (Dict[str, str]); Possible types: (Mapping[str, int]), (Iterable[Tuple[str, int]])
on the findall line: Expected type 'Union[bytes, str, __Regex]', got 'int' instead
My questions are: is my analysis of the cause correct (the dict fixing its typing at instantiation), and is there a Pythonic way to fix these warnings?
updating extending and adding keys does not seem to change the types a dictionary declares
This is by design. If you have some variable declared to be of type Dict[str, int], you presumably want mypy to complain very loudly if you accidentally try running code like var['foo'] = 'bar'.
In this case, since you assigned d to a dict of string to ints, mypy assumes you meant for that type to be just Dict[str, int].
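For instance, a quick sketch (my example) of the kind of mistake this strictness is meant to catch:
from typing import Dict

var: Dict[str, int] = {"a": 1}
var["foo"] = "bar"  # flagged by the type checker: the declared value type is int, not str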
If you want your code to type-check, you have several options:
Explicitly declare what types you expect your dictionary's values to be and use asserts or casts to confirm that the types of certain keys are what you expect them to be:
import re
from typing import Dict, Union, cast

def foo(a: int = 1, b: str = "b") -> None:
    d: Dict[str, Union[int, str]] = {"a": a}
    d.update({"b": b})

    # If you want to check your assumption at runtime
    b_regex = d["b"]
    assert isinstance(b_regex, str)
    print(re.findall(b_regex, "baba"))

    # If you don't want/don't need to check the type
    print(re.findall(cast(str, d["b"]), "baba"))
Give up on typing your dict statically and make the value type the dynamic Any:
import re
from typing import Any, Dict

def foo(a: int = 1, b: str = "b") -> None:
    d: Dict[str, Any] = {"a": a}
    d.update({"b": b})
    print(re.findall(d["b"], "baba"))
Use the TypedDict type to indicate that the dict will only contain certain string keys, where each key has a corresponding value of some specific type.
Note that TypedDict started out as a mypy-only extension, but it has since been standardized in PEP 589 and is part of the typing module as of Python 3.8 (with a backport in typing_extensions), so PyCharm understands it as well.
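A minimal sketch of how that could look for this dict (the name ConfigDict is mine):
import re
from typing import TypedDict  # Python 3.8+; use typing_extensions on older versions

class ConfigDict(TypedDict):
    a: int
    b: str

def foo(a: int = 1, b: str = "b") -> None:
    d: ConfigDict = {"a": a, "b": b}
    # d["b"] is now known to be a str, so no assert or cast is needed
    print(re.findall(d["b"], "baba"))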

Python 3 types, custom variadic generic type with arbitrary number of contained types, how?

The class typing.Tuple can be used with an arbitrary number of type arguments, like Tuple[int, str, MyClass] or Tuple[str, float]. How do I implement my own class that can be used like that? I understand how to inherit from typing.Generic. The following code demonstrates this.
from typing import TypeVar, Generic

T = TypeVar("T")

class Thing(Generic[T]):
    def __init__(self, value: T):
        self.value = value

def f(thing: Thing[int]):
    print(thing.value)

if __name__ == '__main__':
    t = Thing("WTF")
    f(t)
The above code would work, but the type checker (in my case PyCharm) would catch the fact that t should be of type Thing[int] and not Thing[str]. That's all fine, but how do I make the class Thing support an arbitrary number of type arguments, like Tuple does?
In your example, t is of type Thing, not Thing[str]. So this object accepts anything for T.
You can parametrize it like this: t2 = Thing[str]("WTF")
Now, for your question, I think you want to use typing.Union, like this: t3 = Thing[Union[str, int, float]]("WTF")
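For context, a small sketch (reusing Thing and f from the question) of how those calls behave:
from typing import Union

t2 = Thing[str]("WTF")                     # explicitly parametrized with str
t3 = Thing[Union[str, int, float]]("WTF")  # one type argument that is a union

f(t2)  # still flagged: f expects Thing[int], not Thing[str]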
By the way, you can check the type of generics by using get_generic_type() from typing_inspect
>>> get_generic_type(t)
__main__.Thing
>>> get_generic_type(t2)
__main__.Thing[str]
>>> get_generic_type(t3)
__main__.Thing[typing.Union[str, int, float]]

Mypy reports an incompatible supertype error with overridden method

Below is a simplified example of a problem I've encountered with mypy.
The A.transform method takes an iterable of objects, transforms each one (the transformation is defined in the subclass B, and potentially other subclasses) and returns an iterable of transformed objects.
from typing import Iterable, TypeVar

T = TypeVar('T')

class A:
    def transform(self, x: Iterable[T]) -> Iterable[T]:
        raise NotImplementedError()

class B(A):
    def transform(self, x: Iterable[str]) -> Iterable[str]:
        return [x.upper() for x in x]
However mypy says:
error: Argument 1 of "transform" incompatible with supertype "A"
error: Return type of "transform" incompatible with supertype "A"
If I remove [T] from A.transform(), then the error goes away. But that seems like the wrong solution.
After reading about covariance and contravariance, I thought that setting
T = TypeVar('T', covariant=True) might be a solution, but this produces the same error.
How can I fix this? I have considered binning the design altogether and replacing the A class with a higher order function.
Making T covariant or contravariant isn't really going to help you in this case. Suppose that the code you had in your question was allowed by mypy, and suppose a user wrote the following snippet of code:
def uses_a_or_subclass(foo: A) -> None:
    # This is perfectly typesafe from A's point of view
    # (though B's implementation will crash at runtime)
    print(foo.transform([3]))

# Uh-oh! B.transform expects strings, so we just broke typesafety!
uses_a_or_subclass(B())
The golden rule to remember is that when you need to override or redefine a function (when subclassing, like you're doing, for example), functions are contravariant in their parameters and covariant in their return type. This means that when you're redefining a function, it's legal to make the parameters broader (a superclass of the original parameter type), but not a subtype.
One possible fix is to make your entire class generic with respect to T. Then, instead of subclassing A (which is now equivalent to subclassing A[Any] and is probably not what you want if you'd like to stay perfectly typesafe), you'd subclass A[str].
Now, your code is perfectly typesafe, and your redefined function respects function variance:
from typing import Iterable, TypeVar, Generic

T = TypeVar('T')

class A(Generic[T]):
    def transform(self, x: Iterable[T]) -> Iterable[T]:
        raise NotImplementedError()

class B(A[str]):
    def transform(self, x: Iterable[str]) -> Iterable[str]:
        return [x.upper() for x in x]
Now, our uses_a_or_subclass function from up above should be rewritten to either be generic, or to accept specifically classes that subtype A[str]. Either way works, depending on what you're trying to do.
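For illustration, here is one possible sketch of those two options, reusing A and B from the code above (the extra items parameter in the generic version is mine, since the caller now has to supply values of the right type):
from typing import Iterable, TypeVar

U = TypeVar('U')

# Option 1: keep the caller generic
def uses_any_a(foo: A[U], items: Iterable[U]) -> None:
    print(foo.transform(items))

# Option 2: accept only things that subtype A[str]
def uses_a_of_str(foo: A[str]) -> None:
    print(foo.transform(["hello", "world"]))

uses_any_a(B(), ["a", "b"])  # fine: U is bound to str
uses_a_of_str(B())           # fine: B subclasses A[str]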

PyCharm: 'Function Doesn't Return Anything'

I just started working with PyCharm Community Edition 2016.3.2 today. Every time I assign a value from my function at_square, it warns me that 'Function at_square doesn't return anything,' but it definitely does in every instance unless an error is raised during execution, and every use of the function is behaving as expected. I want to know why PyCharm thinks it doesn't and if there's anything I can do to correct it. (I know there is an option to suppress the warning for that particular function, but it does so by inserting a commented line in my code above the function, and I find it just as annoying to have to remember to take that out at the end of the project.)
This is the function in question:
def at_square(self, square):
    """ Return the value at the given square """
    if type(square) == str:
        file, rank = Board.tup_from_an(square)
    elif type(square) == tuple:
        file, rank = square
    else:
        raise ValueError("Expected tuple or AN str, got " + str(type(square)))
    if not 0 <= file <= 7:
        raise ValueError("File out of range: " + str(file))
    if not 0 <= rank <= 7:
        raise ValueError("Rank out of range: " + str(rank))
    return self.board[file][rank]
If it matters, this is more precisely a method of an object. I stuck with the term 'function' because that is the language PyCharm is using.
My only thought is that my use of error raising might be confusing PyCharm, but that seems too simple. (Please feel free to critique my error raising, as I'm not sure this is the idiomatic way to do it.)
Update: Humorously, if I remove the return line altogether, the warning goes away and returns immediately when I put it back. It also goes away if I replace self.board[file][rank] with a constant value like 8. Changing file or rank to constant values does not remove the warning, so I gather that PyCharm is somehow confused about the nature of self.board, which is a list of 8 other lists.
Update: Per the suggestion of @StephenRauch, I created a minimal example that reflects everything relevant to data assignment done by at_square:
class Obj:
    def __init__(self):
        self.nested_list = [[0], [1]]

    @staticmethod
    def tup_method(data):
        return tuple(data)

    def method(self, data):
        x, y = Obj.tup_method(data)
        return self.nested_list[x][y]

    def other_method(self, data):
        value = self.method(data)
        print(value)

x = Obj()
x.other_method([1, 2])
PyCharm doesn't give any warnings for this. In at_square, I've tried commenting out every single line down to the two following:
def at_square(self, square):
    file, rank = Board.tup_from_an(square)
    return self.board[file][rank]
PyCharm gives the same warning. If I leave only the return line, then and only then does the warning disappear. PyCharm appears to be confused by the simultaneous assignment of file and rank via tup_from_an. Here is the code for that method:
@staticmethod
def tup_from_an(an):
    """ Convert a square in algebraic notation into a coordinate tuple """
    if an[0] in Board.a_file_dict:
        file = Board.a_file_dict[an[0]]
    else:
        raise ValueError("Invalid an syntax (file out of range a-h): " + str(an))
    if not an[1].isnumeric():
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    elif int(an[1]) - 1 in Board.n_file_dict:
        rank = int(an[1]) - 1
    else:
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    return file, rank
Update: In its constructor, the class Board (which is the parent class for all these methods) saves a reference to the instance in a static variable instance. self.at_square(square) gives the warning, while Board.instance.at_square(square) does not. I'm still going to use the former where appropriate, but that could shed some light on what PyCharm is thinking.
PyCharm assumes a missing return value if the return value statically evaluates to None. This can happen when values are initialised with None and their type is changed later on.
class Foo:
    def __init__(self):
        self.qux = [None]  # infers type for Foo().qux as List[None]

    def bar(self):
        return self.qux[0]  # infers return type as None
At this point, Foo.bar is statically inferred as (self: Foo) -> None. Dynamically changing the type of qux via side-effects does not update this:
foo = Foo()
foo.qux = [2] # *dynamic* type of foo.bar() is now ``(self: Foo) -> int``
foo_bar = foo.bar() # Function 'bar' still has same *static* type
The problem is that you are overwriting a statically inferred class attribute by means of a dynamically assigned instance attribute. That is simply not feasible for static analysis to catch in general.
You can fix this with an explicit type hint.
import typing

class Foo:
    def __init__(self):
        self.qux = [None]  # type: typing.List[int]

    def bar(self):
        return self.qux[0]  # infers return type as int
Since Python 3.6 (PEP 526), you can also use inline variable annotations. These are especially useful together with explicit return type hints.
import typing

class Foo:
    def __init__(self):
        # initial type hint to enable inference
        self.qux: typing.List[int] = [None]

    # explicit return type hint to override inference
    def bar(self) -> int:
        return self.qux[0]  # infers return type as int
Note that it is still a good idea to rely on inference where it works! Annotating only self.qux makes it easier to change the type later on. Annotating bar is mostly useful for documentation and to override incorrect inference.
If you need to support older Python versions, you can also use stub files. Say your class is in foomodule.py; create a file called foomodule.pyi. Inside, just add the annotated fields and function signatures; you can (and should) leave out the bodies.
import typing

class Foo:
    # type hint for fields
    qux: typing.List[int]

    # explicit return type hint to override inference
    def bar(self) -> int:
        ...
Type hinting as of Python 3.6
The style in the example below is now recommended:
from typing import List

class Board:
    def __init__(self):
        self.board: List[List[int]] = []
Quick Documentation
PyCharm's 'Quick Documentation' shows whether you got the typing right. Place the cursor in the middle of the object of interest and hit Ctrl+Q. I suspect the types inferred from tup_from_an(an) are not going to be as desired. You could try to type hint all the args and internal objects, but it may be better value to type hint just the function return types. Type hinting means I don't need to trawl external documentation, so I focus the effort on objects that will be used by external users and try not to do too much internal stuff. Here's both arg and return type hinting:
@staticmethod
def tup_from_an(an: str) -> Tuple[int, int]:
    ...
Clear Cache
PyCharm can lock onto outdated definitions. It doesn't hurt to go Help > Find Action... > Clear Caches.
No bodies perfect
Python is constantly improving (type hinting was updated again in 3.7), and PyCharm is also constantly improving. The price of the fast pace of development on these relatively immature, advanced features is that checking or submitting to their issue tracker may be the next call.

What type hint to specify when a function returns either an object of a certain type or None?

Since I discovered type hints in Python, I started using them, because I think they are useful for the reader of the code. I'm also hoping that these hints will eventually be enforced by static type checkers, because that would make code a lot more reliable, and that's very important for a serious project.
Apart from that, I have a lot of functions where the return type can be either BSTNode or None. It can be None, for example, in a search function: if no BSTNode is found, None is simply returned. Is there a way of specifying that the return type could also be None?
Currently I'm using something like the following:
def search(self, x) -> BSTNode:
    pass
I know I can use strings as type hints, but I'm not sure it's the correct or best way of doing it.
def search(self, x) -> "BSTNode | None":
    pass
In Python prior to 3.10, the pipe doesn't have any special meaning in string-typed type hints. In Python ≤ 3.9, I'd thus stick to the ways PEP 484 documents for union types:
Either
from typing import Union

def search(self, x) -> Union[BSTNode, None]:
    pass
or, more concisely but equivalently:
from typing import Optional

def search(self, x) -> Optional[BSTNode]:
    pass
In Python ≥ 3.10, you can indeed use | to build union types, even without the quotation marks. See Jean Monet's answer, PEP 604 -- Allow writing union types as X | Y, What's New In Python 3.10, or the Python 3.10 documentation of the new types.UnionType for details.
Thanks to PEP 604 it is possible to use the pipe | notation, equivalent to Union:
def func() -> int | None:
    ...

my_var: int | None = None

# same as

from typing import Optional, Union

def func() -> Optional[int]:
    ...

def func() -> Union[int, None]:
    ...

my_var: Optional[int] = None
my_var: Union[int, None] = None
The feature is introduced in Python 3.10, but you can already use the syntax inside annotations on Python 3.7+ by importing annotations at the beginning of the file (the annotations are then no longer evaluated at runtime):
from __future__ import annotations
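A minimal sketch of that approach (my example, assuming the BSTNode class from the question lives in the same module):
from __future__ import annotations  # must appear before other imports

class BSTNode:
    ...

def search(x: int) -> BSTNode | None:
    # The annotation is stored as a string and never evaluated at runtime,
    # so the | syntax is accepted on Python < 3.10; type checkers still read it.
    return None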
