I am trying to write an @assert_logged_in decorator.
On paper, it's easy:
def assert_logged_in(meth: Callable):
    logged_in = False
    def wrapper(self, *args, **kwargs):
        if not logged_in:
            try:
                test_login()
            except Exception as err:  # broad except only for this example
                raise NotLoggedIn from err
            logged_in = True  # <= that is what I do not understand ‽‽‽
        return meth(self, *args, **kwargs)
    return wrapper

# somewhere else
@assert_logged_in
def do_something(self):
    pass
As long as I do not try to update logged_in, the code runs, but it tests logged-in-ness on every call.
If I do try to update logged_in as in my example, I get: UnboundLocalError: local variable 'logged_in' referenced before assignment. It fails even if I try to use global. Note that I do not want to keep the state in self (which works), because this decorator will be used across multiple classes.
I am very confused because this example:
def memoize(f):
    memo = {}
    def memoized_func(n):
        if n not in memo:
            memo[n] = f(n)
        return memo[n]
    return memoized_func
does work as intended and keeps state.
There is obviously something I do not understand. What could it be, and how could I get my decorator working?
From the comments, there are two options:
Cleanest option:
Keep the logged_in assignment outside the wrapper (as you already do), and declare it as nonlocal inside the wrapper, so that the assignment rebinds the enclosing variable instead of creating a new local one (see the sketch below).
Other possibility:
From a wrapper you cannot rebind an outer variable without nonlocal, but if that variable is a mutable object (a dict, a list, ...) the wrapper can update its contents without rebinding the name.
For instance, have a dict instead of a boolean:
logged_in = {"logged_in": False}
instead of
logged_in = False
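A minimal sketch of the nonlocal version, reusing the test_login and NotLoggedIn names from the question (they are not defined here):

import functools
from typing import Callable

def assert_logged_in(meth: Callable):
    logged_in = False  # closure state, shared by every call to the wrapped method

    @functools.wraps(meth)
    def wrapper(self, *args, **kwargs):
        nonlocal logged_in  # rebind the enclosing variable instead of creating a local
        if not logged_in:
            try:
                test_login()
            except Exception as err:  # broad except only for this example
                raise NotLoggedIn from err
            logged_in = True  # subsequent calls skip the check
        return meth(self, *args, **kwargs)
    return wrapper

Note that, just as in the memoize example, the flag lives in the closure: it is kept per decorated method and shared across all instances and classes, which is what the question asks for.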
I have a class that takes in **kwargs. I want to be able to use them within a method of the class.
I've looked at other solutions that say one must use return self.kwargs, but I'm not sure where I must do it (outside the method, or inside the method but at the end).
class Foo():
    def __init__(self, name: str, **kwargs):
        self.name = name
        self.kwargs = kwargs
        # print(kwargs/self.kwargs) returns the dictionary as I expected

    ## There's an __enter__ and __exit__ that does nothing, just returns self

    def show():
        if kwargs:
            print(name, f"{key} and {value}" for key, value in self.kwargs.items())  # Using kwargs.items() didn't work either
        else:
            print(name)
        return self.kwargs
Then I did:
with Foo('some_name', foo='bar', foo_bar='baz') as f:
    f.show()
The error I'm getting is:
File "/...", line 73, in <module>
    f.show()
File "/...", line 96, in show
    if kwargs:
UnboundLocalError: local variable 'kwargs' referenced before assignment
What am I doing wrong here?
Thanks.
A few things are wrong. First, the method doesn't have the class instance self passed into it. Second, the if statement should be if self.kwargs, which accesses the instance's kwargs attribute.
def show(self):
    if self.kwargs:
        # name must also come from the instance, and the generator expression
        # needs to be unpacked for print to show the items
        print(self.name, *(f"{key} and {value}" for key, value in self.kwargs.items()))
    else:
        print(self.name)
    return self.kwargs
Looks like two errors:
Your method definition needs to include self, and thereafter you can access instance variables (your kwargs) via self:
def show(self):
    if self.kwargs:
        ...
When you instantiate a new object, FooInstance = Foo('John', ...), you create new variables for that instance. All methods of the Foo class which need instance variables must take an argument for the instance, which is named self by convention. You can then access instance variables through self in those methods.
The print statement looks wrong too; you'd probably want something like this instead (the generator expression has to be unpacked, otherwise print just shows a generator object):
print(*(f"{self.name} {key} and {value}" for key, value in self.kwargs.items()))
Lastly, I don't believe you need the with ... as context manager here. Something like this would be more appropriate.
myobj = Foo(...)
myobj.show()
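Pulling the corrections from both answers together, a minimal runnable sketch (dropping the context manager, as suggested above) could look like this:

class Foo:
    def __init__(self, name: str, **kwargs):
        self.name = name
        self.kwargs = kwargs

    def show(self):
        if self.kwargs:
            # join the pairs so print receives a single string, not a generator
            print(self.name, ", ".join(f"{key} and {value}" for key, value in self.kwargs.items()))
        else:
            print(self.name)
        return self.kwargs

f = Foo('some_name', foo='bar', foo_bar='baz')
f.show()  # prints: some_name foo and bar, foo_bar and baz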
I tried to do some argument modification on a decorator which decorates a function.
The original code looks like
@original_decorator(arg=some_object)
def calculate(a, b):
    # complex business logic
    raise Exception()
where original_decorator is responsible for exception handling.
What I want to achieve is to do some temporary modification on some_object and restore its property after the function returns.
And I've tried the following:
def replace_arg(arg, add_some_property):
    def decorator_wrapper(decorator_func):
        def decorator_inner(*decorator_args, **decorator_kwargs):
            def actual_wrapper(actual_func):
                def actual_inner(*actual_args, **actual_kwargs):
                    original = arg['func']
                    arg['func'] = add_some_property
                    decorator_kwargs['arg'] = arg
                    result = actual_func(*actual_args, **actual_kwargs)
                    arg['func'] = original
                    return result
                return actual_inner
            return actual_wrapper
        return decorator_inner
    return decorator_wrapper
I also tried placing the modification logic in decorator_inner, but neither version worked.
My Questions:
Is it possible to modify a decorator's argument?
If true, then how can I achieve it?
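For what it's worth, here is a hedged, self-contained sketch of the underlying idea: decorator arguments are evaluated once at decoration time, so nothing can change what original_decorator has already received, but because some_object is passed by reference, a separate decorator can mutate it around each call and restore it afterwards (swap_key and the demo values are hypothetical stand-ins):

import functools

def swap_key(mapping, key, temporary_value):
    """Temporarily replace mapping[key] for the duration of each call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            original = mapping[key]
            mapping[key] = temporary_value
            try:
                return func(*args, **kwargs)
            finally:
                mapping[key] = original  # restore even if the call raises
        return wrapper
    return decorator

some_object = {'func': 'original'}  # stand-in for the real object

@swap_key(some_object, 'func', 'patched')
def calculate(a, b):
    print(some_object['func'])  # -> patched while the call is running
    return a + b

calculate(1, 2)
print(some_object['func'])  # -> original again afterwards

Whether such a decorator should sit above or below @original_decorator depends on whether original_decorator reads the object at call time (inside its own wrapper) or only at decoration time; in the latter case no wrapper can influence what it has already captured.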
I have an issue with mocking multiple calls of a class method. My goal is to raise an Exception on the second call only. What happens is that the mock just returns the listed side effect instead of calling it, so the first call gives back a mock object or the function itself rather than the real result.
First example:
with mock.patch.object(FunctionClass, "function_to_mock") as mocked_function:
    mocked_function.side_effect = [mock.DEFAULT, Exception()]
Second example:
original_function = FunctionClass.function_to_mock  # to store a reference to the original function
with mock.patch.object(FunctionClass, "function_to_mock") as mocked_function:
    mocked_function.side_effect = [original_function, Exception()]
So I ended up writing:
def _is_exception(obj):
    return (
        isinstance(obj, BaseException) or
        isinstance(obj, type) and issubclass(obj, BaseException)
    )

class MultipleSideEffectsMock:
    def __init__(self, side_effects_collection):
        self.side_effects_collection = side_effects_collection

    def side_effects(self, *args, **kwargs):
        side_effect = self.side_effects_collection.pop()
        if callable(side_effect):
            return side_effect(*args, **kwargs)
        elif _is_exception(side_effect):
            raise side_effect
        return side_effect
which can be used as:
original_function = FunctionClass.function_to_mock  # to store a reference to the original function
with mock.patch.object(FunctionClass, "function_to_mock") as mocked_function:
    mocked_function.side_effect = MultipleSideEffectsMock(
        side_effects_collection=[Exception(), original_function]
    ).side_effects
I'm pretty sure this could be achieved with a cleaner solution or with pure mock usage, but I didn't have time to investigate it more deeply.
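For reference, a hedged sketch of what a pure-mock version might look like, reusing the FunctionClass.function_to_mock names from the question. With autospec=True the patched attribute behaves like a real function, so the instance is passed through to side_effect as the first argument, and a counting side_effect can delegate to the real method on the first call and raise on the second:

from unittest import mock

original_function = FunctionClass.function_to_mock  # reference to the real function
call_count = {"value": 0}

def delegate_then_raise(self, *args, **kwargs):
    # first call: run the real implementation; second call: raise
    call_count["value"] += 1
    if call_count["value"] >= 2:
        raise Exception("second call fails")
    return original_function(self, *args, **kwargs)

with mock.patch.object(FunctionClass, "function_to_mock", autospec=True) as mocked_function:
    mocked_function.side_effect = delegate_then_raise
    instance = FunctionClass()
    instance.function_to_mock()  # delegates to the real method (assumes no required args)
    instance.function_to_mock()  # raises Exception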
I am trying to add a custom field to my logging using a LogRecordFactory. I am repeatedly instantiating a class, and every time I do that I want to set custom_attribute in its __init__ so that the remainder of the code within the class will have this attribute. But I cannot get this to work. I found the following, which works, but it's static.
import logging

old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.custom_attribute = "whatever"
    return record

logging.basicConfig(format="%(custom_attribute)s - %(message)s")
logging.setLogRecordFactory(record_factory)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
This will output correctly:
whatever - test
However, my use case is that the custom_attribute will vary. Every time I call a specific function, I want to change it. So it seems like record_factory needs another parameter passed to it so that it can return the correct record with the new value. But I can't figure it out. I have tried adding a parameter to the function, but when I make the call I get:
TypeError: __init__() missing 7 required positional arguments: 'name', 'level', 'pathname', 'lineno', 'msg', 'args', and 'exc_info'
I think this has something to do with the *args and **kwargs, but I don't really know. Also, why are there no parentheses after record_factory when it's called by logging.setLogRecordFactory? I have never seen a function used like this.
You can try to use a closure:
import logging

old_factory = logging.getLogRecordFactory()

def record_factory_factory(context_id):
    def record_factory(*args, **kwargs):
        record = old_factory(*args, **kwargs)
        record.custom_attribute = context_id
        return record
    return record_factory

logging.basicConfig(format="%(custom_attribute)s - %(message)s")

logging.setLogRecordFactory(record_factory_factory("whatever"))
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")

logging.setLogRecordFactory(record_factory_factory("whatever2"))
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
result:
$ python3 log_test.py
whatever - test
whatever2 - test
I stumbled upon this question while I was trying to do something similar. This is how I solved it, assuming that you want to add something called xyz to every log line (further explanation below):
import logging
import threading

thread_local = threading.local()

def add_xyz_to_logrecords(xyz):
    factory = logging.getLogRecordFactory()
    if isinstance(factory, XYZLogFactory):
        factory.set_xyz(xyz)
    else:
        logging.setLogRecordFactory(XYZLogFactory(factory, xyz))

class XYZLogFactory():
    def __init__(self, original_factory, xyz):
        self.original_factory = original_factory
        thread_local.xyz = xyz

    def __call__(self, *args, **kwargs):
        record = self.original_factory(*args, **kwargs)
        try:
            record.xyz = thread_local.xyz
        except AttributeError:
            pass
        return record

    def set_xyz(self, xyz):
        thread_local.xyz = xyz
Here I've created a callable class XYZLogFactory, that remembers what the current value of xyz is, and also remembers what the original LogRecordFactory was. When called as a function, it creates a record using the original LogRecordFactory, and adds an xyz attribute with the current value.
The thread_local is to make it thread-safe, but for an easier version, you could just use an attribute on the XYZLogFactory:
class XYZLogFactory():
    def __init__(self, original_factory, xyz):
        self.original_factory = original_factory
        self.xyz = xyz

    def __call__(self, *args, **kwargs):
        record = self.original_factory(*args, **kwargs)
        record.xyz = self.xyz
        return record

    def set_xyz(self, xyz):
        self.xyz = xyz
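A hedged usage sketch of the simpler version, reusing the add_xyz_to_logrecords helper from the first snippet (the format string assumes the factory is installed before anything is logged):

import logging

logging.basicConfig(format="%(xyz)s - %(message)s", level=logging.DEBUG)

add_xyz_to_logrecords("request-1")  # wraps the current factory
logging.debug("first")              # -> request-1 - first

add_xyz_to_logrecords("request-2")  # same factory instance, value updated in place
logging.debug("second")             # -> request-2 - second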
In my very first attempt (not shown here), I did not store the original factory, but captured it implicitly in the new LogRecordFactory using a closure. However, after a while that led to a RecursionError, because it kept calling the previous factory, which called the previous factory, and so on.
Regarding your last question: there are no parentheses because the function is not being called there. Instead, it is passed to logging.setLogRecordFactory, which stores it and calls it later, whenever a log record needs to be created. If you want more information, look up 'functions as first-class citizens'.
Easy example:
x = str  # assign to x the function that gives the string representation of an object
x(1) # outputs the string representation of 1, same as if you'd called str(1)
> '1'
So I came across this interesting problem while reviewing code:
class Foo:
    def __init__(self, foo_name):
        self.foo_doo = getattr(foo_name, 'foo_lists', None)

    def assert_foo(self, varname):
        assert hasattr(self, 'foo_%s' % varname)

    def foobar(self):
        assert_foo('doo')
I wonder whether wrapping assert in a customized version of your own is a faster/better solution than using assert hasattr(...) every time you need to make sure the attribute is present and not None.
The last line will raise NameError unless changed to
self.assert_foo('doo')
That aside, I do not think assert should be used in the above code with or without the wrapper. The corrected line only checks that self has .foo_doo set, but not that it is not None.
if self.foo_doo is not None:
does both.
If one wants an abbreviated look-first attribute check, one could write
def has_foo(self, name):
    return hasattr(self, 'foo_' + name)

def foobar(self):
    if self.has_foo('doo'):
        ...
If you also want a non-None check, change the has_foo return to:
    return getattr(self, 'foo_' + name, None) is not None
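Put together, a minimal sketch of that suggestion (foo_name and foo_lists are stand-ins taken from the question):

class Foo:
    def __init__(self, foo_name):
        self.foo_doo = getattr(foo_name, 'foo_lists', None)

    def has_foo(self, name):
        # existence and non-None check in one place
        return getattr(self, 'foo_' + name, None) is not None

    def foobar(self):
        if self.has_foo('doo'):
            print("foo_doo is set and not None")
        else:
            print("foo_doo is missing or None")

Foo(object()).foobar()  # -> foo_doo is missing or None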
Beyond this, assert in production code should only be used to check internal logic, not runtime conditions affected by users of the code. Users can delete or disable assertions (they are skipped when Python runs with -O), so code should not depend on assert for its proper operation once released.
In the code above, __init__ sets self.foo_doo to something, but the caller can subsequently delete the attribute. So both the existence and the value of the attribute are user-determined run-time conditions and not appropriate subjects for assertions.
The TestCase.assertXxx methods of unittest are only used for testing, and when they fail, they do more than just wrap a simple assert.