How to make an animation in PyQt5?

I'm trying to make an animation, but this line raises an error at runtime:

self.__animation = QPropertyAnimation(self, "geometry")

Does anyone know how to fix it? The traceback is:
self.__animation = QPropertyAnimation(self, "geometry")
TypeError: arguments did not match any overloaded call:
QPropertyAnimation(parent: QObject = None): too many arguments
QPropertyAnimation(QObject, Union[QByteArray, bytes, bytearray], parent: QObject = None): argument 2 has unexpected type 'str'

You must pass the property name as bytes (or a QByteArray), not str:

self.__animation = QPropertyAnimation(self, b'geometry')

Related

How to type annotate a multi-level decorator

I'm trying to annotate an injector decorator that injects a value from a global dictionary as a keyword argument into the decorated function when the function is called.
Can anyone experienced with annotating decorators with parameters help me out?
Tried annotating but got stuck on the errors below:
import functools
import inspect
from typing import Any, Callable, TypeVar, ParamSpec

Type = TypeVar('Type')
Param = ParamSpec('Param')

_INSTANCES = {}


def make_injectable(instance_name: str, instance: object) -> None:
    _INSTANCES[instance_name] = instance


def inject(*instances: str) -> Callable[Param, Type]:
    def get_function_with_instances(fn: Callable[Param, Type]) -> Callable[Param, Type]:
        # This attribute is to easily access which arguments of fn are injectable
        fn._injectable_args = instances

        def handler(*args: Param.args, **kwargs: Param.kwargs) -> Type:
            new_kwargs: dict[str, Any] = dict(kwargs).copy()
            for instance in instances:
                if instance in new_kwargs:
                    continue
                if instance not in _INSTANCES:
                    raise ValueError(f"Instance {instance} was not initialized yet")
                new_kwargs[instance] = _INSTANCES[instance]
            return fn(*args, **new_kwargs)

        if inspect.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def wrapper(*args: Param.args, **kwargs: Param.kwargs) -> Callable[Param, Type]:
                return await handler(*args, **kwargs)
        else:
            @functools.wraps(fn)
            def wrapper(*args: Param.args, **kwargs: Param.kwargs) -> Callable[Param, Type]:
                return handler(*args, **kwargs)
        return wrapper
    return get_function_with_instances
If I run mypy with these annotations, I get the errors below, which I cannot circumvent without creating new ones:
mypy injector.py --strict --warn-unreachable --allow-subclassing-any --ignore-missing-imports --show-error-codes --install-types --non-interactive
injector.py:33: error: "Callable[Param, Type]" has no attribute "_injectable_args" [attr-defined]
injector.py:48: error: Returning Any from function declared to return "Callable[Param, Type]" [no-any-return]
injector.py:48: error: Incompatible types in "await" (actual type "Type", expected type "Awaitable[Any]") [misc]
injector.py:53: error: Incompatible return value type (got "Type", expected "Callable[Param, Type]") [return-value]
injector.py:55: error: Incompatible return value type (got "Callable[Param, Coroutine[Any, Any, Callable[Param, Type]]]", expected "Callable[Param, Type]") [return-value]
injector.py:57: error: Incompatible return value type (got "Callable[[Callable[Param, Type]], Callable[Param, Type]]", expected "Callable[Param, Type]") [return-value]
Thank you for your time.
The first [attr-defined] error is unavoidable IMO and should be simply explicitly ignored. There is no point in reinventing the wheel and defining your own special callable protocol with that special attribute.
The second and third errors with the codes [no-any-return]/misc I'll come back to later.
That fourth error with the [return-value] code comes up because the return annotation of your wrapper should be T, not Callable[P, T]. It is supposed to return whatever the decorated function returns after all.
The fifth error (also [return-value]) tells you that wrapper may be a coroutine that can be awaited to yield T, but you declared get_function_with_instances to return a callable that returns T (not a coroutine to await T from).
The very last [return-value] error comes up because inject returns a decorator that takes an argument of type Callable[P, T] and returns an object of that same type again. So the return annotation for inject should indeed be Callable[[Callable[P, T]], Callable[P, T]], just as mypy says.
Now for the [no-any-return]/misc errors. This gets a bit confusing since your intention was to cover both the case of fn being a coroutine function and the case of it being a regular function.
You annotate handler to return T just like fn. But what that T is, is not further narrowed. The type guard given by iscoroutinefunction applies to fn and does not automatically extend to handler. From the point of view of the static type checker, handler returns some object. And that cannot be safely assumed to be awaitable. Therefore you cannot safely use it with await (the [misc] error).
Since the type checker doesn't even allow the await expression in that line, it obviously cannot verify that the returned value actually matches the annotation of wrapper (which should be T just like with the error mentioned earlier, but in this case it doesn't matter either way).
I am not 100% sure about the root cause of those two errors, though.
If I were you, I would make my life a whole lot easier by not even inspecting the decorated function in the first place. The behavior of your decorator does not change. The only difference is that one call needs to be followed by an await to get the value. You can let the decorator be agnostic as to whether fn returns an awaitable or not and leave it up to the caller to handle it.
Here is my suggestion:
from collections.abc import Callable
from functools import wraps
from typing import TypeVar, ParamSpec

T = TypeVar('T')
P = ParamSpec('P')

_INSTANCES = {}


def make_injectable(instance_name: str, instance: object) -> None:
    _INSTANCES[instance_name] = instance


def inject(*instances: str) -> Callable[[Callable[P, T]], Callable[P, T]]:
    def get_function_with_instances(fn: Callable[P, T]) -> Callable[P, T]:
        # This attribute is to easily access which arguments of fn are injectable
        fn._injectable_args = instances  # type: ignore[attr-defined]

        @wraps(fn)
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
            for instance in instances:
                if instance in kwargs:
                    continue
                if instance not in _INSTANCES:
                    raise ValueError(f"Instance {instance} was not initialized yet")
                kwargs[instance] = _INSTANCES[instance]
            return fn(*args, **kwargs)

        return wrapper
    return get_function_with_instances
Here is a quick test to show that the types are all correctly inferred:
make_injectable("foo", object())


@inject("foo")
def f(**kwargs: object) -> int:
    print(kwargs)
    return 1


@inject("foo")
async def g(**kwargs: object) -> int:
    print(kwargs)
    return 2


async def main() -> tuple[int, int]:
    x = f()
    y = await g()
    return x, y


if __name__ == '__main__':
    from asyncio import run
    print(run(main()))
Sample output:
{'foo': <object object at 0x7fe39fea0b20>}
{'foo': <object object at 0x7fe39fea0b20>}
(1, 2)
There are no complaints from mypy --strict. The way main is written, we can see that the return types are all inferred correctly, but if we wanted to check explicitly, we could add reveal_type(f) and reveal_type(g) at the end of the script. Then mypy would tell us:
Revealed type is "def (**kwargs: builtins.object) -> builtins.int"
Revealed type is "def (**kwargs: builtins.object) -> typing.Coroutine[Any, Any, builtins.int]"
This is what we expected.

Python coding errors are not caught by the linter or during execution; they are thrown only when the result is unpacked or its methods are invoked

I am relatively new to Python, using Python 3.7 in the example below. The linter does not catch any of the coding errors, nor is any exception thrown when wrong return types are returned. What is the best and most formal way of handling such issues?
from typing import Tuple
from abc import ABC, abstractmethod


class MyAbc(ABC):
    @abstractmethod
    def get_hello(self) -> Tuple[bool, str, str]:
        # Need the return to be a Tuple of 3 values: bool, str, str
        pass


class ImplementedClass(MyAbc):
    def get_hello(self):
        return True, "Hello"
        # But in the implementation I am returning only 2 values: bool, str
        # This coding error is not caught here


ic: MyAbc = ImplementedClass()
print(ic.get_hello())  # Error escaped
resp1, resp2, resp3 = ic.get_hello()
# The issue is caught only here
# Pylint: Possible unbalanced tuple unpacking with sequence defined at line 15:
#   left side has 3 label(s), right side has 2 value(s)
print(resp1, resp2, resp3)
print(ImplementedClass().get_hello())


def three_returns() -> Tuple[str, str, str]:
    return "one", "two"


print(three_returns())  # Error escaped


def something(data: str) -> str:
    print(type(data), data)
    return 1  # Supposed to return str, returning int, not caught


value: str = something(2)  # Expected str but int returned
print(value.upper())  # Pylint: Instance of 'int' has no 'upper' member
As noted in the code comments, when an incorrect object is returned, neither pylint nor Python itself throws an error. The error only surfaces when the value is explicitly unpacked or one of its methods is invoked, like str.upper(). This means every path must be tested thoroughly before one can be sure the code works.
Is this just how it is and we have to live with it, or are there better ways to handle it, like the compile-time errors we get in C++ or Java?
What you seem to be searching for is a type checker like mypy, pyright or pyre.
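As a minimal illustration of the runtime/static split, take the three_returns example from the question: Python happily runs it, while a static checker rejects it without executing anything (the mypy message shown in the comment is approximate):

```python
from typing import Tuple

def three_returns() -> Tuple[str, str, str]:
    return "one", "two"  # annotation promises three strings, only two returned

# Python ignores annotations at runtime, so the call succeeds:
result = three_returns()
print(len(result))  # prints 2, despite the three-element annotation

# A static checker catches it without running the code, roughly:
#   $ mypy example.py
#   error: Incompatible return value type (got "Tuple[str, str]",
#          expected "Tuple[str, str, str]")  [return-value]
```

Running mypy (or pyright/pyre) in CI or as an editor plugin is the closest Python gets to the compile-time checks of C++ and Java.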

Typing and pint

I'm using pint to work with and convert units. I wanted to create classes that restrict quantities to only the "[time]" or "[length]" dimension, so as a first approach I did the following:
from pint import Quantity, DimensionalityError


class Time(Quantity):
    def __new__(cls, v: str | Quantity) -> Quantity:
        obj = Quantity(v)
        if not obj.check("[time]"):
            raise DimensionalityError(v, "[time]")
        return obj


class Length(Quantity):
    def __new__(cls, v: str | Quantity) -> Quantity:
        obj = Quantity(v)
        if not obj.check("[length]"):
            raise DimensionalityError(v, "[length]")
        return obj
At runtime it works as expected, i.e: I can do the following:
one_hour = Time("1h")         # Works ok; one_hour contains <Quantity(1, 'hour')>
bad = Time("1meter")          # As expected, raises pint.errors.DimensionalityError:
                              #   Cannot convert from '1meter' to '[time]'
one_meter = Length("1meter")  # Ok
bad_again = Length("1h")      # Ok, raises DimensionalityError
However, from a typing perspective, something is wrong:
def myfunc(t: Time) -> str:
    return f"The duration is {t}"


print(myfunc(Time("1h")))    # Ok
print(myfunc(Length("1m")))  # Type error?
The second call to myfunc() should be a type error, since I'm passing a Length instead of a Time. However, mypy is happy with the code. So I have some questions:
1. Why doesn't mypy catch the error?
2. How do I do it properly?
For question 1, I guess that something fishy is happening in pint's Quantity implementation. I tried:
foo = Quantity("3 pounds")
reveal_type(foo)
and the revealed type is Any instead of Quantity which is very suspicious.
So I tried removing the base class Quantity from my Time and Length classes (i.e: they derive now from object instead of Quantity), and in this case, mypy correctly manages the typing errors.
But it fails again as soon as I try something like Length("60km")/Time("1h"). mypy complains that Length does not implement the method required for that division (although the code works fine at runtime because, after all, the __new__() methods of Length and Time return a Quantity object, which does implement the arithmetic operations).
So, again, is there any workaround for making the idea work both at run-time and for mypy?
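One possible workaround (a sketch, not an official pint recipe; a stand-in Quantity class is used here so the snippet is self-contained, and the unit check is simplified) is to keep a single runtime type and distinguish the dimensions only statically with typing.NewType. Factory functions perform the runtime dimension check, while mypy treats Time and Length as distinct, incompatible types:

```python
from typing import NewType

class Quantity:
    """Stand-in for pint.Quantity; with pint installed you would import it instead."""
    def __init__(self, magnitude: float, unit: str) -> None:
        self.magnitude = magnitude
        self.unit = unit

# Distinct static types, identical runtime type:
Time = NewType("Time", Quantity)
Length = NewType("Length", Quantity)

def as_time(q: Quantity) -> Time:
    # Simplified stand-in for q.check("[time]")
    if q.unit not in ("s", "min", "h"):
        raise ValueError(f"cannot treat {q.unit!r} as [time]")
    return Time(q)  # at runtime this just returns q unchanged

def myfunc(t: Time) -> str:
    return f"The duration is {t.magnitude} {t.unit}"

print(myfunc(as_time(Quantity(1, "h"))))  # ok at runtime and for mypy
# myfunc(Quantity(1, "m")) would be flagged by mypy:
#   Argument 1 to "myfunc" has incompatible type "Quantity"; expected "Time"
```

Because a NewType erases to its base class, arithmetic defined on Quantity still type-checks on Time and Length values, which sidesteps the "missing method" complaint from the subclassing approach.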

Error "NameError: name 'self' is not defined" even though I declare "self"

I'm coding the AdaBoost from scratch in Python. Could you please elaborate on why the line self.functions[0] = f_0 causes an error?
import numpy as np


class AdaBoost_regressor():
    def __init__(self, n_estimators, functions):
        # n_estimators is the number of weak regressors
        self.n_estimators = n_estimators
        # We will store the sequence of functions in object "functions"
        self.functions = np.array([None] * n_estimators, dtype='f')

    # We set f_0 = 0
    def f_0(x):
        return 0

    self.functions[0] = f_0
The result is NameError: name 'self' is not defined.
Indeed, when self appears outside a method it is read as an ordinary variable. If you declare:

self = 0
self.functions[0] = f_0

the NameError is gone, but then "self" is treated as a plain variable, and shadowing it like that is not recommended. It is exactly the same as writing:

my = 0
my.functions[0] = f_0

If you remove "my = 0", it throws a NameError again.
I think the reason for your error is that you cannot use self in the class body outside of its methods: self only gets a value when an instance of the class is passed as the first parameter of a method call.
Notice that until you instantiate your class, the expression self has no meaning.
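A minimal sketch of the fix: move both the definition of f_0 and the assignment into __init__, where self exists as the method's first parameter. Note also that dtype='f' creates a float array that cannot store function objects, so dtype=object is used here instead (the unused functions parameter is dropped for brevity):

```python
import numpy as np


class AdaBoost_regressor():
    def __init__(self, n_estimators):
        self.n_estimators = n_estimators
        # dtype=object so the array can hold callables
        # (dtype='f' makes a float32 array that cannot store functions)
        self.functions = np.array([None] * n_estimators, dtype=object)

        # Inside __init__, self is defined: it is the method's first parameter
        def f_0(x):
            return 0

        self.functions[0] = f_0


model = AdaBoost_regressor(3)
print(model.functions[0](42))  # prints 0
```

The remaining slots stay None until the boosting loop fills them in.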

PyCharm: 'Function Doesn't Return Anything'

I just started working with PyCharm Community Edition 2016.3.2 today. Every time I assign a value from my function at_square, it warns me that 'Function at_square doesn't return anything,' but it definitely does in every instance unless an error is raised during execution, and every use of the function is behaving as expected. I want to know why PyCharm thinks it doesn't and if there's anything I can do to correct it. (I know there is an option to suppress the warning for that particular function, but it does so by inserting a commented line in my code above the function, and I find it just as annoying to have to remember to take that out at the end of the project.)
This is the function in question:
def at_square(self, square):
    """ Return the value at the given square """
    if type(square) == str:
        file, rank = Board.tup_from_an(square)
    elif type(square) == tuple:
        file, rank = square
    else:
        raise ValueError("Expected tuple or AN str, got " + str(type(square)))
    if not 0 <= file <= 7:
        raise ValueError("File out of range: " + str(file))
    if not 0 <= rank <= 7:
        raise ValueError("Rank out of range: " + str(rank))
    return self.board[file][rank]
If it matters, this is more precisely a method of an object. I stuck with the term 'function' because that is the language PyCharm is using.
My only thought is that my use of error raising might be confusing PyCharm, but that seems too simple. (Please feel free to critique my error raising, as I'm not sure this is the idiomatic way to do it.)
Update: Humorously, if I remove the return line altogether, the warning goes away and returns immediately when I put it back. It also goes away if I replace self.board[file][rank] with a constant value like 8. Changing file or rank to constant values does not remove the warning, so I gather that PyCharm is somehow confused about the nature of self.board, which is a list of 8 other lists.
Update: Per the suggestion of @StephenRauch, I created a minimal example that reflects everything relevant to data assignment done by at_square:
class Obj:
    def __init__(self):
        self.nested_list = [[0], [1]]

    @staticmethod
    def tup_method(data):
        return tuple(data)

    def method(self, data):
        x, y = Obj.tup_method(data)
        return self.nested_list[x][y]

    def other_method(self, data):
        value = self.method(data)
        print(value)


x = Obj()
x.other_method([1, 2])
PyCharm doesn't give any warnings for this. In at_square, I've tried commenting out every single line down to the two following:
def at_square(self, square):
    file, rank = Board.tup_from_an(square)
    return self.board[file][rank]
PyCharm gives the same warning. If I leave only the return line, then and only then does the warning disappear. PyCharm appears to be confused by the simultaneous assignment of file and rank via tup_from_an. Here is the code for that method:
@staticmethod
def tup_from_an(an):
    """ Convert a square in algebraic notation into a coordinate tuple """
    if an[0] in Board.a_file_dict:
        file = Board.a_file_dict[an[0]]
    else:
        raise ValueError("Invalid an syntax (file out of range a-h): " + str(an))
    if not an[1].isnumeric():
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    elif int(an[1]) - 1 in Board.n_file_dict:
        rank = int(an[1]) - 1
    else:
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    return file, rank
Update: In its constructor, the class Board (which is the parent class for all these methods) saves a reference to the instance in a static variable instance. self.at_square(square) gives the warning, while Board.instance.at_square(square) does not. I'm still going to use the former where appropriate, but that could shed some light on what PyCharm is thinking.
PyCharm assumes a missing return value if the return value statically evaluates to None. This can happen if initialising values using None, and changing their type later on.
class Foo:
    def __init__(self):
        self.qux = [None]  # infers type for Foo().qux as List[None]

    def bar(self):
        return self.qux[0]  # infers return type as None
At this point, Foo.bar is statically inferred as (self: Foo) -> None. Dynamically changing the type of qux via side-effects does not update this:
foo = Foo()
foo.qux = [2]  # *dynamic* type of foo.bar() is now ``(self: Foo) -> int``
foo_bar = foo.bar()  # Function 'bar' still has same *static* type
The problem is that you are overwriting a statically inferred class attribute by means of a dynamically assigned instance attribute. That is simply not feasible for static analysis to catch in general.
You can fix this with an explicit type hint.
import typing


class Foo:
    def __init__(self):
        self.qux = [None]  # type: typing.List[int]

    def bar(self):
        return self.qux[0]  # infers return type as int
Since Python 3.6, you can also use inline type hints. These are especially useful for return types.
import typing


class Foo:
    def __init__(self):
        # initial type hint to enable inference
        self.qux: typing.List[int] = [None]

    # explicit return type hint to override inference
    def bar(self) -> int:
        return self.qux[0]  # infers return type as int
Note that it is still a good idea to rely on inference where it works! Annotating only self.qux makes it easier to change the type later on. Annotating bar is mostly useful for documentation and to override incorrect inference.
If you need to support versions before 3.6, you can also use stub files. Say your class is in foomodule.py; create a file called foomodule.pyi. Inside, just add the annotated fields and function signatures; you can (and should) leave out the bodies.
import typing


class Foo:
    # type hint for fields
    qux: typing.List[int]

    # explicit return type hint to override inference
    def bar(self) -> int:
        ...
Type hinting as of Python 3.6
The style in the example below is now recommended:
from typing import List


class Board:
    def __init__(self):
        self.board: List[List[int]] = []
Quick Documentation
PyCharm's 'Quick Documentation' shows whether you got the typing right. Place the cursor in the middle of the object of interest and hit Ctrl+Q. I suspect the types from tup_from_an(an) are not going to be as desired. You could try to type-hint all the args and internal objects, but it may be better value to type-hint just the function return types. Type hinting means I don't need to trawl external documentation, so I focus effort on objects that will be used by external users and try not to do too much internal stuff. Here's both arg and return type hinting:
@staticmethod
def tup_from_an(an: str) -> Tuple[int, int]:
    ...
Clear Cache
PyCharm can lock onto outdated definitions. It doesn't hurt to go to Help > Find Action... > Clear Caches.
Nobody's perfect
Python is constantly improving (type hinting was updated again in 3.7), and PyCharm is constantly improving too. The price of the fast pace of development of these relatively immature, advanced features is that checking or submitting to their issue tracker may be the next call.