How can I use pytest.raises with multiple exceptions? - python-3.x

I'm testing code where one of two exceptions can be raised: MachineError or NotImplementedError. I would like to use pytest.raises to make sure that at least one of them is raised when I run my test code, but it only seems to accept one exception type as an argument.
This is the signature for pytest.raises:
raises(expected_exception, *args, **kwargs)
I tried using the or keyword inside a context manager:
with pytest.raises(MachineError) or pytest.raises(NotImplementedError):
    verb = Verb("donner<IND><FUT><REL><SG><1>")
    verb.conjugate()
but I assume this only checks whether the first pytest.raises is None and sets the second one as the context manager if it is.
Passing multiple exceptions as positional arguments doesn't work, because pytest.raises takes its second argument to be a callable. Every subsequent positional argument is passed as an argument to that callable.
From the documentation:
>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>
>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>
Passing the exceptions as a list doesn't work either:
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
with pytest.raises([MachineError, NotImplementedError]):
File "/usr/local/lib/python3.4/dist-packages/_pytest/python.py", line 1290, in raises
raise TypeError(msg % type(expected_exception))
TypeError: exceptions must be old-style classes or derived from BaseException, not <class 'list'>
Is there a workaround for this? It doesn't have to use a context manager.

Pass the exceptions as a tuple to raises:
with pytest.raises((MachineError, NotImplementedError)):
    verb = ...
In the source for pytest, pytest.raises may:
catch expected_exception; or
pass expected_exception to a RaisesContext instance, which then uses issubclass to check whether the exception was one you wanted.
In Python 3, except statements can take a tuple of exceptions. The issubclass function can also take a tuple. Therefore, using a tuple should be acceptable in either situation.
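For completeness, a minimal test sketch using the tuple form (the test name is made up, and MachineError and Verb are assumed to come from the question's code base):

import pytest

def test_conjugate_raises_expected_error():
    # Either exception type in the tuple satisfies the assertion.
    with pytest.raises((MachineError, NotImplementedError)) as excinfo:
        verb = Verb("donner<IND><FUT><REL><SG><1>")
        verb.conjugate()
    # excinfo.type is the exception class that was actually raised.
    assert excinfo.type in (MachineError, NotImplementedError)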

Related

How to get parameter names in python format function?

func = "Hello {name}. how are you doing {time}!".format
For example, let's assume func is defined as above.
We don't have the definition of func at hand, but we have an instance of it.
How can I get all the arguments to this function?
Apparently inspect.getargspec(func) does not work here!
If I just call it with no arguments, it raises an error for one missing parameter at a time, but I don't know how to get them all directly:
func()
-------
KeyError Traceback (most recent call last)
<ipython-input-228-8d7b4527e81d> in <module>
----> 1 func()
KeyError: 'name'
What exactly are you trying to do? What is your expected output? As far as I know, there is no way to insert name and time after you define func, because "string".format(...) can't refer to variables that are not defined. As far as I know, the call happens first, independent of the formatting style (f-string, %, .format(), whatever you use).
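For what it's worth, one way to recover the placeholder names is sketched below (this approach is not from the thread; it assumes func is a bound str.format method, as in the question):

import string

func = "Hello {name}. how are you doing {time}!".format

# A bound str.format method keeps the original string in func.__self__;
# string.Formatter().parse() then yields each replacement field in it.
fields = [field for _, field, _, _ in string.Formatter().parse(func.__self__) if field]
print(fields)  # ['name', 'time']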

Decorators in python: Difference in the args interpretation in class and function based decorators

I am trying to create a generic function lifecycle handler in Python.
How it works, in brief:
Logs the signature and return values.
Handles exceptions, with or without retry.
Provides cleanup in case of an exception.
The issue I encountered while doing the cleanup is as follows:
Function-based decorator:
def handler(exception=Exception, cleanup=None):
    def func_wrapper(func):
        def wrapper(*args, **kwargs):
            response = None
            try:
                print(args)
                response = func(*args, **kwargs)
            except exception as ex:
                print(f'Exception occurred:{str(ex)}\nCleaning up resources')
                # Passing the object for cleanup; this fails for class-based decorators
                # because self is not passed as an argument
                cleanup(args[0])
            return response
        return wrapper
    return func_wrapper
The data that is supposed to be cleaned up is stored in the class instance and is cleaned based on the method provided.
For example:
Store some information using a third-party API.
In case of an exception, the cleanup operation passed in would invoke a delete API.
class Test:
    def __init__(self, data):
        self.main_data = data

    @handler(exception=Exception, cleanup=lambda x: print(f"Cleaning data:{x.main_data}"))
    def test_data(self):
        print(f'Data is :{self.main_data}')
        i = 1 / 0
Output:
Exception occurred:division by zero
Cleaning up resources
Cleaning:John Doe
I was more inclined towards a class-based decorator.
class LifeCycleHandler:
    def __init__(self, *, func=None, exception=Exception, cleanup=None):
        self.__exception = exception
        self.__cleanup = cleanup
        self.__func = func

    def __call__(self, *args, **kwargs):
        response = None
        try:
            print(args)
            response = self.__func(*args, **kwargs)
        except self.__exception as ex:
            print(f'Exception occurred:{str(ex)}\n cleaning up resources')
            # Passing the object for cleanup
            self.__cleanup(args[0])
        return response


def lifecycle_handler(exception=Exception, cleanup=None):
    def wrapper(func):
        response = LifeCycleHandler(func=func, exception=exception, cleanup=cleanup)
        return response
    return wrapper
With a class-based decorator of similar functionality, I faced the following error:
()
Exception occurred:test_data() missing 1 required positional argument: 'self'
cleaning up resources
Traceback (most recent call last):
File "test.py", line 27, in __call__
response=self.__func(*args,**kwargs)
TypeError: test_data() missing 1 required positional argument: 'self'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 54, in <module>
test.test_data()
File "test.py", line 31, in __call__
self.__cleanup(args[0])
IndexError: tuple index out of range
Can someone guide me regarding the argument interpretation for callable classes?
If my comment is correct, you could add __get__ to LifeCycleHandler.
def __get__(self, obj, type=None):
    return functools.partial(self.__call__, obj)
This will make test_data a non-data descriptor. I assume you already know about descriptors; if not, they are definitely worth reading up on.
Back to your question: judging from the traceback, you assumed that Python would pass the caller instance (the instance of Test) as the second argument to __call__. That's not true. However, it is true in __get__.
Your core logic (the try/except block) needs:
an instance of Test, because you need access to main_data;
an instance of LifeCycleHandler, because you need access to self.__func;
*args, which you can't accept in __get__, but you can have them in __call__.
For example, say you have the test code below:
t = Test(123)
t.test_data()
t.test_data will invoke __get__. In its arguments, self is the instance of LifeCycleHandler and obj is t (the instance of Test). __get__ returns a callable (__call__) whose first argument is partially bound to obj.
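Putting the suggested __get__ together with the classes from the question, a runnable sketch might look like this (the "John Doe" data and the cleanup lambda are only illustrative):

import functools

class LifeCycleHandler:
    def __init__(self, *, func=None, exception=Exception, cleanup=None):
        self.__exception = exception
        self.__cleanup = cleanup
        self.__func = func

    def __get__(self, obj, objtype=None):
        # Bind the owning instance so that __call__ receives it as args[0].
        return functools.partial(self.__call__, obj)

    def __call__(self, *args, **kwargs):
        response = None
        try:
            response = self.__func(*args, **kwargs)
        except self.__exception as ex:
            print(f'Exception occurred:{ex}\nCleaning up resources')
            self.__cleanup(args[0])
        return response


def lifecycle_handler(exception=Exception, cleanup=None):
    def wrapper(func):
        return LifeCycleHandler(func=func, exception=exception, cleanup=cleanup)
    return wrapper


class Test:
    def __init__(self, data):
        self.main_data = data

    @lifecycle_handler(cleanup=lambda x: print(f"Cleaning data:{x.main_data}"))
    def test_data(self):
        print(f'Data is :{self.main_data}')
        return 1 / 0


t = Test("John Doe")
t.test_data()
# Data is :John Doe
# Exception occurred:division by zero
# Cleaning up resources
# Cleaning data:John Doe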

Get method's name using __getattribute__ without type error

I'm trying to print a method's name using __getattribute__,
but I get a TypeError every time I call the method, and the method is not executed. Is there any way to get rid of the TypeError and have the method executed?
class Person(object):
    def __init__(self):
        super()

    def test(self):
        print(1)

    def __getattribute__(self, attr):
        print(attr)
p = Person()
p.test()
The above code gives the error
test
Traceback (most recent call last):
File "test1.py", line 15, in <module>
p.test()
TypeError: 'NoneType' object is not callable
Is there any way to print only the method's name without raising the error?
I tried to catch the TypeError inside the __getattribute__ method, but it doesn't work.
Another question: why does it say 'NoneType' object is not callable here?
Thank you!
P.S. I know I can catch the error when I call the method; what I mean is, is there any way to deal with this error inside __getattribute__? My goal is to print the method's name every time a method is called.
Answering your second question first: why is it saying 'NoneType' object is not callable?
When you call p.test(), Python tries to look up the test attribute of the p instance. It calls the __getattribute__ method that you've overridden, which prints 'test' and then returns. Because you're not returning anything, it implicitly returns None. p.test is therefore None, and calling it gives the error you get.
So how do we fix this? Once you've printed the attribute name, you need to return the attribute you're after. You can't just call getattr(self, attr) or you'll end up in an infinite loop, so you have to call the method that would have been called if you hadn't overridden it.
def __getattribute__(self, attr):
    print(attr)
    return super().__getattribute__(attr)  # calls the original method
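Putting it together, a small runnable sketch of the fixed class (note that __getattribute__ fires on every attribute access, not only on method calls):

class Person:
    def test(self):
        print(1)

    def __getattribute__(self, attr):
        print(attr)                            # log the attribute name being looked up
        return super().__getattribute__(attr)  # defer to the normal lookup

p = Person()
p.test()
# Output:
# test
# 1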

what are the use cases of type() function in python? specifically when i passed 3 arguments to it

I was studying Python's type() function.
Its first use is to return the type of any Python object, as follows:
a = 5
type(a)
But in the documentation there is another way of calling the type() function, by passing it three arguments, as follows:
X = type('X', (object,), dict(a=1))
This second call returns an object of the type class, i.e. a new class.
What are some use cases of this three-argument form?
Please elaborate.
Here is the documentation link, which I have followed, but I could not find anything about its use cases.
I have explored Programiz's link as well, but was unable to find anything relevant there either.
We use the three-argument form to create a new class.
The first argument is the name of the class being created; the second argument is the bases tuple (containing all the base classes); and in the third argument we provide all the declarations made in the class body.
Now with an example,
>>> class X:
...     a = 1
>>> Y = type('X', (object,), dict(a=10))
Here, Y is a class named 'X'; it is equivalent to the class X defined above, but it is a separate class and does not inherit from it.
The second argument means that the class we are creating inherits from object.
The third argument declares the attributes that were defined in the body of class X. If you don't put anything inside dict(), there will be no new attributes on the class bound to Y.
>>> Y = type('X', (object,), dict())
Now if you try,
>>> Y.a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'X' has no attribute 'a'
Go through this to understand the predefined attributes in Python.
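As one illustrative use case (not from the answer above), the three-argument form lets you build classes dynamically at runtime, for example one class per entry in a small configuration dict (the names here are hypothetical):

kinds = {"Dog": {"sound": "woof"}, "Cat": {"sound": "meow"}}

classes = {
    name: type(name, (object,), attrs)  # type(name, bases, namespace)
    for name, attrs in kinds.items()
}

print(classes["Dog"].sound)      # woof
print(classes["Cat"].__name__)   # Cat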

hasattr telling lies? (AttributeError: 'method' object has no attribute '__annotations__')

The following code
class Foo:
    def bar(self) -> None:
        pass

foo = Foo()

if hasattr(foo.bar, '__annotations__'):
    foo.bar.__annotations__ = 'hi'
crashes with
AttributeError: 'method' object has no attribute '__annotations__'
How can this happen?
The attribute error here is raised because you can't set any attribute on a method object:
>>> foo.bar.baz = 42
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'method' object has no attribute 'baz'
The exception here is perhaps confusing because method objects wrap a function object and proxy attribute read access to that underlying function object. So when attributes on the function exist, then hasattr() on the method will return True:
>>> hasattr(foo.bar, 'baz')
False
>>> foo.bar.__func__.baz = 42
>>> hasattr(foo.bar, 'baz')
True
>>> foo.bar.baz
42
However, you still can't set those attributes via the method, regardless:
>>> hasattr(foo.bar, 'baz')
True
>>> foo.bar.baz = 42
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'method' object has no attribute 'baz'
So, just because the attribute can be read doesn't mean you can set it. hasattr() is speaking the truth, you just interpreted it to mean something different.
Now, if you tried to set the __annotations__ attribute directly on the underlying function object you'd get another error message:
>>> foo.bar.__func__.__annotations__ = 'hi'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __annotations__ must be set to a dict object
You would want to use a dictionary object here:
>>> foo.bar.__func__.__annotations__ = {'return': 'hi'}
>>> foo.bar.__annotations__
{'return': 'hi'}
However, because __annotations__ is a mutable dictionary, it is just easier to directly manipulate the keys and values to that object, which is perfectly feasible to do via the method wrapper:
>>> foo.bar.__annotations__['return'] = 'int'
>>> foo.bar.__annotations__
{'return': 'int'}
Now, if you were hoping to set per-instance annotations, you can't get away with setting attributes on method objects, because method objects are ephemeral: they are created just for the call, then usually discarded right after.
You would have to use custom method descriptor objects via a metaclass and re-create the __annotations__ attribute for those each time, or you could instead pre-bind methods with a new function object that would be given their own attributes. You then have to pay a larger memory price:
import functools
foo.bar = lambda *args, **kwargs: Foo.bar(foo, *args, **kwargs)
functools.update_wrapper(foo.bar, Foo.bar) # copy everything over to the new wrapper
foo.bar.__annotations__['return'] = 'hi'
Either way, you completely kill important speed optimisations made in Python 3.7.
And tools that operate on the most important use case for __annotations__, type hints, do not actually execute code; they read code statically and would completely miss these runtime alterations.
You're getting an error because __annotations__ is a dictionary. If you want to change values, you'll have to do it like this:
if hasattr(foo.bar, '__annotations__'):
    foo.bar.__annotations__['return'] = 'hi'
This will make the return annotation of your foo.bar be 'hi' instead of None. The only thing I'm not sure about is how __annotations__ is protected so that you can't change it from a dict to a string, but I suppose it's some internal check in the source.
UPDATE
For more control over the signature you can use the inspect module, get the Signature object of your class (or method), and edit it from there. For example:
import inspect
sig = inspect.signature(foo.bar)
sig.return_annotation  # None (before modifying)
sig = sig.replace(return_annotation="anything you want")  # replace() returns a new Signature
More on that here
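For context (not from the answer above): Signature objects are immutable, so replace() hands back a new Signature rather than editing sig in place, which is why the reassignment above matters. A small sketch, reusing Foo from the question:

import inspect

class Foo:
    def bar(self) -> None:
        pass

foo = Foo()

sig = inspect.signature(foo.bar)
new_sig = sig.replace(return_annotation='hi')  # a brand-new Signature object
print(sig.return_annotation)                   # None -- the original is untouched
print(new_sig.return_annotation)               # hi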
