Creating custom slots and signals, old style - PyQt4

I'm trying to create my own custom signal, but for whatever reason it never reaches the slot function. Any ideas where I went wrong?
def __init__(self):
    ...
    self.connect(self, QtCore.SIGNAL('dagEvent(type, *args)'), self.OnDagEvent)

def someFunc(self, ...):
    ...
    self.emit(QtCore.SIGNAL('dagEvent()'), type, *args)

def OnDagEvent(self, eventType, eventData):
    print 'Test'

The problem is with the way you are creating the signature for your custom signal. There are a few ways you are allowed to do it, but the way you have it now isn't one of them.
When you define your connection, the way you are doing it should be causing an error to be raised:
# not a valid type of signature
QtCore.SIGNAL( 'dagEvent(type, *args)' )
And even if that were allowed to be created, your later emit does not reference the same signature anyway:
# if it were even allowed, would have to be: dagEvent(type, *args)
self.emit(QtCore.SIGNAL('dagEvent()'), type, *args)
Old-style Signal and Slot Support
The easiest way to create a custom signal in PyQt is to simply use the signal name on its own:
self.connect(self, QtCore.SIGNAL('dagEvent'), self.OnDagEvent)
...
# "type" is a builtin. I renamed it to type_
self.emit(QtCore.SIGNAL('dagEvent'), type_, *args)
This approach does not care what args you decide to pass it. You can pass anything you want.
If you want to specifically control the signature, you can define builtin types:
self.connect(self, QtCore.SIGNAL('dagEvent(int,str,int)'), self.OnDagEvent)
...
self.emit(QtCore.SIGNAL('dagEvent(int,str,int)'), i1, s2, i3)
If you don't use the matching signature when you emit, the slot will not be called; and passing the wrong types when emitting will raise an error.
Now if you want to somewhat define a signature, but not limit it to any basic type, and allow any python object, you can do this:
self.connect(self, QtCore.SIGNAL('dagEvent(PyQt_PyObject)'), self.OnDagEvent)
...
self.emit(QtCore.SIGNAL('dagEvent(PyQt_PyObject)'), foo)
This will allow any single python object to be passed, but specifically says it expects 1 argument.
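Putting it together, here is a minimal self-contained sketch of the short-name (short-circuit) approach; the class, method, and argument names below are invented for illustration:
from PyQt4 import QtCore

class DagNotifier(QtCore.QObject):
    def __init__(self):
        super(DagNotifier, self).__init__()
        # Short-name signal: no signature, so any arguments may be emitted.
        self.connect(self, QtCore.SIGNAL('dagEvent'), self.OnDagEvent)

    def someFunc(self, event_type, *args):
        # Must use exactly the same string as in connect().
        self.emit(QtCore.SIGNAL('dagEvent'), event_type, *args)

    def OnDagEvent(self, event_type, *args):
        print('dagEvent: %s %r' % (event_type, args))

notifier = DagNotifier()
notifier.someFunc('nodeAdded', 'node1', 42)  # prints: dagEvent: nodeAdded ('node1', 42)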

Related

How to set log level with structlog when not using event parameter?

The idiomatic way (I think) to create a logger in structlog that only prints from a certain log level upward is to use the following:
wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
This works fine, but it breaks with the following pattern:
l = logger.bind(event="get_tar", key=value)
l.info(status="download_start")
buf = f.read()
l.info(status="download_finish")
By default, when using the logfmt format, structlog will print the "message" as the event key, so I'd just like to set it directly.
Anyway, this breaks because under the hood make_filtering_bound_logger calls this:
def make_method(level: int) -> Callable[..., Any]:
    if level < min_level:
        return _nop

    name = _LEVEL_TO_NAME[level]

    def meth(self: Any, event: str, **kw: Any) -> Any:
        return self._proxy_to_logger(name, event, **kw)

    meth.__name__ = name

    return meth
which requires an event argument to exist. Is there a workaround?
event is the only (reasonable :)) key that cannot be bound – it’s always the log message. That’s not a matter of make_… but of all structlog internals.
You can get something similar-ish by renaming a key-value pair using the EventRenamer processor.
See also https://github.com/hynek/structlog/issues/35
It’s good you brought this up, I’m currently rewriting the docs and am looking for common recipes.
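For example, a minimal sketch of that recipe (assuming structlog >= 22.1, where EventRenamer was added; the _event key name is arbitrary):
import structlog

structlog.configure(
    processors=[
        # Rename the log message from "event" to "msg", then move the
        # bound "_event" key into "event".
        structlog.processors.EventRenamer(to="msg", replace_by="_event"),
        structlog.processors.KeyValueRenderer(key_order=["msg", "event"]),
    ],
)

log = structlog.get_logger().bind(_event="get_tar", key="value")
log.info("download_start")
# msg='download_start' event='get_tar' key='value'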

How to deal with type checking for optionally None variable, Python3?

In the example below, the __init__ method of MyClass defines the attribute self._user, which optionally has the type UserInput and is initialized as None. The actual user input should be provided by the method set_user; for some practical reason, it cannot be provided to __init__. After the user input is given, the other methods method_1 and method_2 can be called.
Question to professional Python programmers: do I really need to add assert ... not None in every method that uses self._user? Otherwise, VS Code Pylance type checking will complain that self._user might be None.
However, I tried the same code in PyCharm with its built-in type checking. This issue is not raised there.
And as professional Python programmers, do you prefer the Pylance type checking in VS Code, or the built-in type checking in PyCharm?
Thanks in advance.
class UserInput:
    name: str
    age: int

class MyClass:
    def __init__(self):
        self._user: UserInput | None = None

    def set_user(self, user: UserInput):  # This method should be called before any other methods.
        self._user = user

    def method_1(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its age.
        return self._user.age  # Will get a warning without the assert above.

    def method_2(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its name.
        return self._user.name
I think it's safest and cleanest if you keep the asserts in. After all, it is up to the user of your class in which order they call the instance methods. Therefore, you cannot guarantee that self._user is not None.
I think it's bad practice to use assert in production code. When things go wrong, you get lots of AssertionErrors, but you don't have any context about why the assertion failed.
Instead I would catch the issue early, and not handle it later. If set_user() should be called earlier, I'd be tempted to put the user in the __init__ method, but the same principle applies.
from dataclasses import dataclass

@dataclass
class UserInput:
    name: str
    age: int

class NoUserException(TypeError):
    pass

class MyClass:
    # Or this could be in the __init__ method
    def set_user(self, user: UserInput | None):
        if not user:
            raise NoUserException()
        self._user = user

    def method_1(self):
        # do something using self._user, for example return its age.
        return self._user.age

    def method_2(self):
        # do something using self._user, for example return its name.
        return self._user.name
You already stated that set_user will be called first, so when that happens you'll get a NoUserException if the user is None.
But I'd be tempted to not even do that. If I were writing this, I'd have no NoneType checking in MyClass, and instead not call set_user if the user was None.
m = MyClass()
user = ...
if user:
    m.set_user(user)
    ...  # anything else with `m`
else:
    ...  # here is where you would get an error

Simplifying Init Method Python

Is there a better way of doing this?
def __init__(self, **kwargs):
    self.ServiceNo = kwargs["ServiceNo"]
    self.Operator = kwargs["Operator"]
    self.NextBus = kwargs["NextBus"]
    self.NextBus2 = kwargs["NextBus2"]
    self.NextBus3 = kwargs["NextBus3"]
The attributes (ServiceNo,Operator,...) always exist
That depends on what you mean by "simpler".
For example, is what you wrote simpler than what I would write, namely
def __init__(self, ServiceNo, Operator, NextBus, NextBus2, NextBus3):
    self.ServiceNo = ServiceNo
    self.Operator = Operator
    self.NextBus = NextBus
    self.NextBus2 = NextBus2
    self.NextBus3 = NextBus3
True, I've repeated each attribute name an additional time, but I've made it much clearer which arguments are legal for __init__. The caller is not free to add any additional keyword argument they like, only to see it silently ignored.
Of course, there's a lot of boilerplate here; that's something a dataclass can address:
from dataclasses import dataclass

@dataclass
class Foo:
    ServiceNo: int
    Operator: str
    NextBus: Bus
    NextBus2: Bus
    NextBus3: Bus
(Adjust the types as necessary.)
Now each attribute is mentioned once, and you get the __init__ method shown above for free.
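For illustration, usage then looks like this (the Next-Bus fields are typed as plain strings here, since Bus isn't defined in the question):
from dataclasses import dataclass

@dataclass
class Foo:
    ServiceNo: int
    Operator: str
    NextBus: str
    NextBus2: str
    NextBus3: str

foo = Foo(ServiceNo=15, Operator="SBS Transit",
          NextBus="07:10", NextBus2="07:25", NextBus3="07:40")
print(foo)  # Foo(ServiceNo=15, Operator='SBS Transit', NextBus='07:10', ...)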
Better how? You don’t really describe what problem you’re trying to solve.
If it’s error handling, you can use the dictionary’s .get() method for the case where a key doesn’t exist (see the sketch below).
If you just want a more succinct way of initializing variables, you could remove the ** and have the dictionary as a variable itself, then use it elsewhere in your code, but that depends on what your other methods are doing.
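A minimal sketch of the .get() suggestion (the default values here are invented):
def __init__(self, **kwargs):
    # .get() returns a default (None unless one is given) instead of
    # raising KeyError for a missing key.
    self.ServiceNo = kwargs.get("ServiceNo")
    self.Operator = kwargs.get("Operator", "Unknown")
    self.NextBus = kwargs.get("NextBus")
    self.NextBus2 = kwargs.get("NextBus2")
    self.NextBus3 = kwargs.get("NextBus3")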
A hacky solution, available since the attribute and argument names match exactly, is to directly copy from the kwargs dict to the instance's dict, then check that you got all the keys you expected, e.g.:
def __init__(self, **kwargs):
    vars(self).update(kwargs)
    if vars(self).keys() != {"ServiceNo", "Operator", "NextBus", "NextBus2", "NextBus3"}:
        raise TypeError(f"{type(self).__name__} missing required arguments")
I don't recommend this; chepner's options are all superior to this sort of hackery, and they're more reliable (for example, this solution fails if you use __slots__ to prevent autovivification of attributes, as the instance won't have a backing dict you can pull with vars).

MyPy type annotation for a bound method?

I'm writing some tests for a library using pytest. I want to try a number of test cases for each function exposed by the library, so I've found it convenient to group the tests for each method in a class. All of the functions I want to test have the same signature and return similar results, so I'd like to use a helper method defined in a superclass to do some assertions on the results. A simplified version would run like so:
class MyTestCase:
    function_under_test: Optional[Callable[[str], Any]] = None

    def assert_something(self, input_str: str, expected_result: Any) -> None:
        if self.function_under_test is None:
            raise AssertionError(
                "To use this helper method, you must set the function_under_test "
                "class variable within your test class to the function to be called.")
        result = self.function_under_test.__func__(input_str)
        assert result == expected_result
        # various other assertions on result...

class FunctionATest(MyTestCase):
    function_under_test = mymodule.myfunction

    def test_whatever(self):
        self.assert_something("foo bar baz")
In assert_something, it's necessary to go through __func__ since assigning a function to a class attribute makes it a bound method of that class -- otherwise self would be passed through as the first argument to the external library function, where it doesn't make any sense.
This code works as intended. However, it yields the MyPy error:
"Callable[[str], Any]" has no attribute "__func__"
Based on my annotation, it's correct that this isn't a safe operation: an arbitrary Callable may not have a __func__ attribute. However, I can't find any type annotation that would indicate that the function_under_test variable refers to a method and thus will always have __func__. Am I overlooking one, or is there another way to tweak my annotations or accesses to get this working with type-checking?
Certainly, there are plenty of other ways I could get around this, some of which might even be cleaner (use an Any type, skip type checking, use a private method to return the function under test rather than making it a class variable, make the helper method a function, etc.). I'm more interested in whether there's an annotation or other mypy trick that would get this code working.
Callable only guarantees that your object has a __call__ method.
Your problem is the call self.function_under_test.__func__(input_str); you should just call the function as self.function_under_test(input_str).
Below is your example without mypy complaints (v0.910):
from typing import Any

class MyTestCase:
    def myfunction_wrap(self, *args, **kwargs):
        # Subclasses override this to call their function under test.
        raise NotImplementedError

    def assert_something(self, input_str: str, expected_result: Any) -> None:
        result = self.myfunction_wrap(input_str)
        assert result == expected_result
        # various other assertions on result...

def myfunction(a: str) -> None:
    ...

class FunctionATest(MyTestCase):
    def myfunction_wrap(self, *args, **kwargs):
        return myfunction(*args, **kwargs)

    def test_whatever(self):
        self.assert_something("foo bar baz", None)
Edit 1: I missed the point of the question; moved the function call inside a wrapper method.
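A further workaround, not part of the answer above, is to wrap the class-variable assignment in staticmethod(), which stops Python from binding the function to the class; self.function_under_test then resolves to the plain function, so no __func__ access is needed. Whether your mypy version accepts the assignment cleanly is something to verify yourself; treat this as a sketch (myfunction here stands in for mymodule.myfunction):
from typing import Any, Callable, Optional

def myfunction(a: str) -> str:  # stand-in for mymodule.myfunction
    return a.upper()

class MyTestCase:
    function_under_test: Optional[Callable[[str], Any]] = None

    def assert_something(self, input_str: str, expected_result: Any) -> None:
        assert self.function_under_test is not None
        # No __func__ needed: staticmethod() below prevented binding.
        assert self.function_under_test(input_str) == expected_result

class FunctionATest(MyTestCase):
    # staticmethod() keeps the function from becoming a bound method,
    # so the instance attribute is the plain function.
    function_under_test = staticmethod(myfunction)

    def test_whatever(self):
        self.assert_something("foo", "FOO")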

Mypy Optional dict error with an expected type that's not optional

I have an __init__ function that's able to build a correct object via three different paths. Since some of the arguments can be reused, they have defaults in the top-level __init__.
I'm unsure how to tell mypy that the arguments are fine being optional in the top-level __init__ function while being required for each of the valid paths.
common_record.py:138: error: Argument 1 to "_init_from_common_record" of "CommonRecord" has incompatible type "Optional[Dict[Any, Any]]"; expected "Dict[Any, Any]"
common_record.py:142: error: Argument 1 to "_init_from_raw_data" of "CommonRecord" has incompatible type "Optional[Dict[Any, Any]]"; expected "Dict[Any, Any]"
Makefile:74: recipe for target 'type-check' failed
make: *** [type-check] Error 1
class CommonRecord:
    """A Common Record type. This is a json serializable object that contains
    sections for required and known fields that are common among data sources.
    """

    def __init__(
            self,
            record: Dict = None,
            raw_record: Dict = None,
            *,
            system: System = None,
            domain: Domain = None) -> None:
        """Initialization for the Common Record Class

        Three valid creation cases:
        * A single dictionary that's already of the Common Record type
        * A normalized record and a raw record that will be turned into a
          Common Record.
        * A System object, a Domain object, and a raw record dictionary.
        """
        if not raw_record:
            self._init_from_common_record(record)
        elif (system and domain and raw_record):
            self._init_from_objects(system, domain, raw_record)
        else:
            self._init_from_raw_data(record, raw_record)
With the signatures of the init functions being
def _init_from_raw_data(self, record: Dict, raw_record: Dict) -> None:

def _init_from_objects(
        self,
        system: System,
        domain: Domain,
        raw_record: Dict) -> None:

def _init_from_common_record(self, common_record: Dict) -> None:
There are three different approaches you can take.
First, you could modify your conditionals to explicitly check if record is None and do something like the following.
if not raw_record and record is not None:
    self._init_from_common_record(record)
elif (system and domain and raw_record):
    self._init_from_objects(system, domain, raw_record)
elif record is not None:
    self._init_from_raw_data(record, raw_record)
else:
    ...  # Raise an exception here or something
Second, you could add in asserts that check to make sure record is not None.
if not raw_record:
    assert record is not None
    self._init_from_common_record(record)
elif (system and domain and raw_record):
    self._init_from_objects(system, domain, raw_record)
else:
    assert record is not None
    self._init_from_raw_data(record, raw_record)
The third solution is to cast record to the correct type and skip the checks altogether. I don't recommend this approach though -- verifying your object is being used correctly is going to be the more robust thing to do.
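For reference, the cast would look something like this (using typing.cast; it silences the checker but performs no runtime validation):
from typing import Dict, cast

# inside __init__, before dispatching:
record = cast(Dict, record)  # mypy now treats record as Dict
self._init_from_common_record(record)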
One additional but somewhat unrelated improvement you could also make is to refine the signature of your constructor using overloads -- basically, create one overload for each way of constructing your CommonRecord. This would help verify that you're always constructing your object correctly and "teach" mypy how to verify at type-check time some of the runtime checks we're doing above.
But you'd still need to do one of the three methods suggested above if you want your actual implementation to typecheck properly.
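A sketch of what those overloads could look like, following the question's signatures (System and Domain are stubbed out as placeholders):
from typing import Dict, Optional, overload

class System: ...   # placeholder for the question's System type
class Domain: ...   # placeholder for the question's Domain type

class CommonRecord:
    @overload
    def __init__(self, record: Dict) -> None: ...
    @overload
    def __init__(self, record: Dict, raw_record: Dict) -> None: ...
    @overload
    def __init__(self, *, system: System, domain: Domain, raw_record: Dict) -> None: ...

    def __init__(
            self,
            record: Optional[Dict] = None,
            raw_record: Optional[Dict] = None,
            *,
            system: Optional[System] = None,
            domain: Optional[Domain] = None) -> None:
        ...  # dispatch to the private _init_from_* methods as before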
One additional thing you could do is sidestep the problem entirely by converting two of your private initialization methods into static methods that will construct and return a new instance of your CommonRecord.
This would let you potentially simplify the constructor and help you make your types more precise. The main downside, of course, is that instantiating a new CommonRecord now becomes slightly more clunky.
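One possible shape for that, reusing the placeholder System and Domain types from the sketch above (the from_* names and bodies are invented for illustration):
from typing import Dict

class CommonRecord:
    def __init__(self, record: Dict) -> None:
        # The single remaining construction path: a common-record dict.
        self.record = record

    @staticmethod
    def from_raw_data(record: Dict, raw_record: Dict) -> "CommonRecord":
        # Hypothetical: normalize the raw data into common-record form.
        return CommonRecord({**record, "raw": raw_record})

    @staticmethod
    def from_objects(system: System, domain: Domain, raw_record: Dict) -> "CommonRecord":
        # Hypothetical: derive the record dict from the objects.
        return CommonRecord({"system": system, "domain": domain, "raw": raw_record})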
