assert wrapper function - python-3.x

So I came across this interesting problem while reviewing code:
class Foo:
    def __init__(self, foo_name):
        self.foo_doo = getattr(foo_name, 'foo_lists', None)

    def assert_foo(self, varname):
        assert hasattr(self, 'foo_%s' % varname)

    def foobar(self):
        assert_foo('doo')
Is wrapping assert in a customized version of your own a faster/better solution than writing assert hasattr(...) every time you need to make sure the attribute is present and not None?

The last line will raise NameError unless changed to
self.assert_foo('doo')
That aside, I do not think assert should be used in the above code with or without the wrapper. The corrected line only checks that self has .foo_doo set, but not that it is not None.
if self.foo_doo is not None:
does both.
If one wants an abbreviated look-first attribute check, one could write
def has_foo(self, name):
    return hasattr(self, 'foo_' + name)

def foobar(self):
    if self.has_foo('doo'):
        ...
If you also want a non-None check, change the has_foo return to:
return getattr(self, 'foo_'+name, None) is not None
Beyond this, assert in production code should only be used to check internal logic and not runtime conditions affected by users of the code. Users can delete or disable assertions, so code should not depend on assert for its proper operation once released.
In the code above, __init__ sets self.foo_doo to something, but the caller can subsequently delete the attribute. So both the existence and value of the attribute are user-determined run time conditions and not appropriate subjects for assertions.
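As a minimal sketch of that point (file name made up): running the interpreter with -O strips assert statements entirely, so an assert-based guard simply vanishes.

# foo_check.py -- hypothetical file name for this sketch
class Foo:
    def __init__(self, source):
        self.foo_doo = getattr(source, 'foo_lists', None)

    def foobar(self):
        assert self.foo_doo is not None  # compiled out under `python -O`
        return len(self.foo_doo)

Foo(object()).foobar()
# python foo_check.py    -> AssertionError at the assert
# python -O foo_check.py -> the guard is gone; len(None) raises TypeError later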
The TestCase.assertXxx methods of unittest are only used for testing, and when they fail, they do more than just wrap a simple assert.
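For comparison, a minimal sketch of what those wrapped assertions add over a bare assert (the test name here is made up):

import unittest

class FooDooTest(unittest.TestCase):
    def test_foo_doo_is_set(self):
        foo_doo = None
        # Fails with "AssertionError: unexpectedly None" plus the test name
        # and a traceback, instead of a bare AssertionError.
        self.assertIsNotNone(foo_doo)

if __name__ == '__main__':
    unittest.main()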

Related

How to deal with type checking for optionally None variable, Python3?

In the example below, the __init__ method of MyClass defines the attribute self._user, which optionally has the type UserInput and is initialized as None. The actual user input should be provided by the method set_user; for practical reasons, the user input cannot be passed to __init__. After the user input has been provided, the other methods method_1 and method_2 can be called.
Question to professional Python programmers: do I really need to add assert ... not None in every method that uses self._user? Otherwise, VS Code Pylance type checking will complain that self._user might be None.
However, I tried the same code in PyCharm with its built-in type checking. This issue is not raised there.
And as professional Python programmers, do you prefer the Pylance type checking in VS Code, or the built-in type checking in PyCharm?
Thanks in advance.
class UserInput:
    name: str
    age: int


class MyClass:
    def __init__(self):
        self._user: UserInput | None = None

    def set_user(self, user: UserInput):  # This method should be called before calling any other methods.
        self._user = user

    def method_1(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its age.
        return self._user.age  # Will get a warning without the assert above.

    def method_2(self):
        assert self._user is not None  # do I actually need it?
        # do something using self._user, for example return its name.
        return self._user.name
I think it's safest and cleanest if you keep the asserts in. After all, it is up to the user of your class in which order he calls the instance methods. Therefore, you cannot guarantee that self._user is not None.
I think it's bad practice to use assert in production code. When things go wrong, you get a bare AssertionError with no context about why that assertion was made.
Instead, I would catch the issue early rather than handle it later. If set_user() must be called first, I'd be tempted to pass the user to the __init__ method instead, but the same principle applies.
from dataclasses import dataclass


@dataclass
class UserInput:
    name: str
    age: int


class NoUserException(TypeError):
    pass


class MyClass:
    # Or this could be done in the __init__ method
    def set_user(self, user: UserInput | None):
        if not user:
            raise NoUserException()
        self._user = user

    def method_1(self):
        # do something using self._user, for example return its age.
        return self._user.age

    def method_2(self):
        # do something using self._user, for example return its name.
        return self._user.name
You already stated that set_user will be called first, so when that happens you'll get a NoUserException if the user is None.
But I'd be tempted to not even do that. If I were writing this, I'd have no NoneType checking in MyClass, and instead not call set_user if the user was None.
m = MyClass()
user = ...
if user:
    m.set_user(user)
    ...  # anything else with `m`
else:
    ...  # here is where you would get an error
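As a further sketch (reusing the UserInput and NoUserException names from above): centralising the None check in a property narrows the Optional in one place, so Pylance sees a plain UserInput at every call site and the per-method asserts are no longer needed.

class MyClass:
    def __init__(self) -> None:
        self._user: UserInput | None = None

    @property
    def user(self) -> UserInput:
        # The single place where the Optional is narrowed.
        if self._user is None:
            raise NoUserException("set_user() must be called first")
        return self._user

    def set_user(self, user: UserInput) -> None:
        self._user = user

    def method_1(self) -> int:
        return self.user.age   # no assert needed; the type is UserInput here

    def method_2(self) -> str:
        return self.user.name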

Using singledispatch with custom class (CPython 3.8.2)

Let's say I want to set up functions for each class in a module named 'MacroMethods'. So I've set up singledispatch after seeing it in 'Fluent Python', like this:
import MacroMethods
from functools import singledispatch


@singledispatch
def addMethod(self, obj):
    print(f'Wrong Object {str(obj)} supplied.')
    return obj

...

@addMethod.register(MacroMethods.Wait)
def _(self, obj):
    print('adding object wait')
    obj.delay = self.waitSpin.value
    obj.onFail = None
    obj.onSuccess = None
    return obj
The desired behavior is that when an instance of class 'MacroMethods.Wait' is given as the argument, singledispatch runs the function registered for that class.
Instead, it runs the default function rather than the registered one.
>>> Wrong Object <MacroMethods.Wait object at 0x0936D1A8> supplied.
However, type() clearly shows the instance is of class 'MacroMethods.Wait', and the registry's dict_keys also contains it.
>>> dict_keys([<class 'object'>, ..., <class 'MacroMethods.Wait'>])
I suspect that all the custom classes I made count as the 'object' type and therefore don't run the desired functions. Is there any way to solve this problem? The entire code is here.
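As a diagnostic aside, a minimal self-contained sketch: singledispatch exposes .dispatch() and .registry, which report exactly which implementation a given class would select (Wait here is just a stand-in for MacroMethods.Wait).

from functools import singledispatch

class Wait:  # stand-in for MacroMethods.Wait in this sketch
    pass

@singledispatch
def addMethod(obj):
    return 'default'

@addMethod.register(Wait)
def _(obj):
    return 'wait'

print(addMethod.dispatch(Wait))   # the implementation chosen for Wait
print(list(addMethod.registry))   # [<class 'object'>, <class '__main__.Wait'>]
print(addMethod(Wait()))          # 'wait'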
Update
I've managed to mimic singledispatch's behavior as follows:
from functools import wraps

def state_deco(func_main):
    """
    Decorator that mimics singledispatch for ease of interaction expansions.
    """
    # assuming no args are needed for interaction functions.
    func_main.dispatch_list = {}  # collect decorated functions

    @wraps(func_main)
    def wrapper(target):
        # dispatch target to destination interaction function.
        nonlocal func_main
        try:
            # find and run callable for target
            return func_main.dispatch_list[type(target)]()
        except KeyError:
            # If no matching case is found, the main decorated function runs instead.
            return func_main()

    def register(target):
        # A decorator that registers the decorated function with the main decorated function.
        def decorate(func_sub):
            nonlocal func_main
            func_main.dispatch_list[target] = func_sub

            def register_wrapper(*args, **kwargs):
                return func_sub(*args, **kwargs)

            return register_wrapper
        return decorate

    wrapper.register = register
    return wrapper
Used like:
@state_deco
def general():
    return "A's reaction to undefined others."

@general.register(StateA)
def _():
    return "A's reaction of another A"

@general.register(StateB)
def _():
    return "A's reaction of B"
But it's still not singledispatch, so I feel it might be inappropriate to post this as an answer.
I wanted to do something similar and had the same trouble. It looks like we have bumped into a Python bug; I found a write-up that describes this situation.
Here is the link to the Python Bug Tracker.
Python 3.7 breaks on singledispatch_function.register(pseudo_type), which Python 3.6 accepted
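For completeness, a minimal sketch (class names are stand-ins): when the dispatched callable is a method, functools.singledispatchmethod (available since Python 3.8) dispatches on the first argument after self, which plain singledispatch does not do.

from functools import singledispatchmethod

class Wait:  # stand-in for MacroMethods.Wait
    pass

class MethodAdder:  # hypothetical holder class
    @singledispatchmethod
    def addMethod(self, obj):
        print(f'Wrong Object {obj!r} supplied.')
        return obj

    @addMethod.register
    def _(self, obj: Wait):
        print('adding object wait')
        return obj

MethodAdder().addMethod(Wait())  # runs the Wait-specific implementation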

MyPy type annotation for a bound method?

I'm writing some tests for a library using pytest. I want to try a number of test cases for each function exposed by the library, so I've found it convenient to group the tests for each method in a class. All of the functions I want to test have the same signature and return similar results, so I'd like to use a helper method defined in a superclass to do some assertions on the results. A simplified version would run like so:
from typing import Any, Callable, Optional

import mymodule


class MyTestCase:
    function_under_test: Optional[Callable[[str], Any]] = None

    def assert_something(self, input_str: str, expected_result: Any) -> None:
        if self.function_under_test is None:
            raise AssertionError(
                "To use this helper method, you must set the function_under_test "
                "class variable within your test class to the function to be called.")
        result = self.function_under_test.__func__(input_str)
        assert result == expected_result
        # various other assertions on result...


class FunctionATest(MyTestCase):
    function_under_test = mymodule.myfunction

    def test_whatever(self):
        self.assert_something("foo bar baz")
In assert_something, it's necessary to go through __func__ when calling the function, since assigning a function to a class attribute makes it a bound method of that class; otherwise self would be passed through as the first argument to the external library function, where it doesn't make any sense.
This code works as intended. However, it yields the MyPy error:
"Callable[[str], Any]" has no attribute "__func__"
Based on my annotation, it's correct that this isn't a safe operation: an arbitrary Callable may not have a __func__ attribute. However, I can't find any type annotation that would indicate that the function_under_test variable refers to a method and thus will always have __func__. Am I overlooking one, or is there another way to tweak my annotations or accesses to get this working with type-checking?
Certainly, there are plenty of other ways I could get around this, some of which might even be cleaner (use an Any type, skip type checking, use a private method to return the function under test rather than making it a class variable, make the helper method a function, etc.). I'm more interested in whether there's an annotation or other mypy trick that would get this code working.
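As an aside, a minimal sketch (with made-up names) of the binding behaviour in question: a plain function stored as a class attribute comes back as a bound method when looked up on an instance, and __func__ recovers the original.

def standalone(text):
    return text.upper()

class Holder:
    func = standalone  # plain function stored on the class

h = Holder()
print(Holder.func)                    # <function standalone ...>
print(h.func)                         # <bound method standalone of <...Holder object ...>>
print(h.func.__func__ is standalone)  # True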
Callable only makes sure that your object has the __call__ method.
Your problem is the call self.function_under_test.__func__(input_str); you should just call your function as self.function_under_test(input_str).
See below your example without mypy complaints (v0.910)
from typing import Any, Callable, Optional


class MyTestCase:
    function_under_test: Optional[Callable] = None

    def myfunction_wrap(self, *args, **kwargs):
        raise NotImplementedError

    def assert_something(self, input_str: str, expected_result: Any) -> None:
        if self.function_under_test is None:
            raise AssertionError(
                "To use this helper method, you must set the function_under_test "
                "class variable within your test class to the function to be called.")
        result = self.myfunction_wrap(input_str)
        assert result == expected_result
        # various other assertions on result...


def myfunction(a: str) -> None:
    ...


class FunctionATest(MyTestCase):
    def myfunction_wrap(self, *args, **kwargs):
        return myfunction(*args, **kwargs)

    def test_whatever(self):
        self.assert_something("foo bar baz")
Edit1: missed the point of the question, moved the function inside a wrapper function.

Get a user's keyboard input that was requested by another function

I am using a python package for database managing. The provided class has a method delete() that deletes a record from the database. Before deleting, it asks a user to verify the operation from a console, e.g. Proceed? [yes, No]:
My function needs to perform other actions depending on whether a user chose to delete a record. Can I get user's input requested by the function from the package?
Toy example:
def ModuleFunc():
    while True:
        a = input('Proceed? [yes, No]:')
        if a in ['yes', 'No']:
            # Perform some actions behind the hood
            return
This function will wait for one of the two responses and return None once it gets either. After calling this function, can I determine the user's response (without modifying this function)? I think modifying the package's source code is not a good idea in general.
Why not just patch the class at runtime? Say you had a file ./lib/db.py defining a class DB like this:
class DB:
    def __init__(self):
        pass

    def confirm(self, msg):
        a = input(msg + ' [Y, N]:')
        if a == 'Y':
            return True
        return False

    def delete(self):
        if self.confirm('Delete?'):
            print('Deleted!')
        return
Then in main.py you could do:
from lib.db import DB

def newDelete(self):
    if self.confirm('Delete?'):
        print('Do some more stuff!')
        print('Deleted!')
    return

DB.delete = newDelete

test = DB()
test.delete()
See it working here
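A related sketch, assuming the package ultimately calls the built-in input: wrapping input records the response without touching the package at all.

import builtins

last_response = None
_original_input = builtins.input

def recording_input(prompt=''):
    # Delegate to the real input() but remember what the user typed.
    global last_response
    last_response = _original_input(prompt)
    return last_response

builtins.input = recording_input  # patch; restore _original_input when done

ModuleFunc()                      # the toy function from the question prompts as usual
print(last_response)              # 'yes' or 'No', whatever the user entered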
I would save key events somewhere (a file or in memory) with something like a keylogger; then you would be able to reuse the last one.
However, if you can modify the module package 📦 and redistribute it, that would be easier.
Change the bare return to return a, so that the function returns the user's response.

Show docstrings on every function call

Let's say I have code like this:
import unittest


class NewTestCase(unittest.TestCase, CommonMethods):
    def setUp(self):
        self.shortDescription()

    def test_01_sample_test(self):
        """Testing something"""
        self.create_account(self.arg['account'])
        assert ...

    ...


class CommonMethods:
    def create_account(self, account):
        """Creating account"""
        ...


if __name__ == '__main__':
    unittest.main(verbosity=2, warnings='ignore')
I want to show the docstrings of all methods defined / created by me ('Testing something' and 'Creating account'), but the execution shows 'Testing something' only. Any tip?
Maybe there is an option for that in the unittest module, but I doubt it; otherwise, how would that module distinguish between your methods and functions and all sorts of library functions?
What you could do is to use another function to modify the existing functions to print their Docstring and/or other useful information whenever they are called. You could make this a decorator, or just call the function manually before running the tests.
This one should 'verbosify' all the methods of a given class (only slightly tested!), and you could make similar ones for individual functions or entire modules.
def verbosify(clazz):
    for name in dir(clazz):
        attr = getattr(clazz, name)
        if not name.startswith("__") and callable(attr):
            # Bind name/attr as keyword defaults so each wrapper keeps its own
            # function; a plain closure would always see the last loop iteration.
            def attr_verbose(*args, _name=name, _attr=attr, **kwargs):
                print("Calling", _name, args, kwargs)
                print(_attr.__doc__)
                return _attr(*args, **kwargs)
            setattr(clazz, name, attr_verbose)
Just call verbosify(CommonMethods) in your main block.
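And a per-function variant as a sketch, since a decorator was also suggested above (functools.wraps keeps the original name and docstring intact):

import functools

def verbose(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("Calling", func.__name__)
        print(func.__doc__)
        return func(*args, **kwargs)
    return wrapper

class CommonMethods:
    @verbose
    def create_account(self, account):
        """Creating account"""
        ...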
