Unit Tests for Retry Decorator Method with Python Nosetest - python-3.x

I have a retry function in my code:
import time
from functools import wraps

def retry(retry_times=4, wait_time=1):
    """
    Decorator to retry any method a certain number of times.
    :param retry_times: number of times to retry a function
    :param wait_time: delay in seconds between retries
    :return: the expected return value from the function on success, else raises an exception
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for _ in range(retry_times):
                try:
                    if func(*args, **kwargs):
                        return
                except Exception as e:
                    raise e
                time.sleep(wait_time)  # time.sleep() takes a positional argument
        return wrapper
    return decorator
And I am using it on some function like this:
@retry(retry_times=RETRY_TIMES, wait_time=RETRY_WAIT_TIME)
def get_something(self, some_id):
    ...
    return something  # or raise an exception (just assume)
Where RETRY_TIMES and RETRY_WAIT_TIME are some constants.
My function get_something() either returns a value or raises an exception.
My question is: how can I write a unit test with nose to test my retry function?

Finally got the answer:
class RetryTest(BaseTestCase):
    def test_retry(self):
        random_value = {'some_key': 5}

        class TestRetry:
            def __init__(self):
                self.call_count = 0

            @retry(retry_times=3, wait_time=1)
            def some_function(self):
                try:
                    self.call_count += 1
                    if random_value.get('non_existing_key'):
                        return True
                except KeyError:
                    return False

        retry_class_object = TestRetry()
        retry_class_object.some_function()
        self.assertEqual(retry_class_object.call_count, 3)
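An alternative sketch, not from the original post: it counts calls with a Mock and patches time.sleep so the test does not actually wait. mymodule is a hypothetical module holding the retry decorator above; nose picks the test up by its name.

from unittest.mock import Mock, patch

from mymodule import retry  # hypothetical module containing the retry decorator

def test_retry_exhausts_all_attempts():
    # a Mock stands in for the decorated function; it always returns a falsy
    # value, so the decorator keeps retrying until retry_times is exhausted
    target = Mock(return_value=False)
    decorated = retry(retry_times=3, wait_time=1)(target)

    with patch('time.sleep') as fake_sleep:  # avoid real delays in the test
        decorated()

    assert target.call_count == 3
    assert fake_sleep.call_count == 3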

Related

Pytest - patched function returns mock instead of value

I have the following very simple test case, which for some unknown reason returns a mock (for do2()) instead of JSON:
def return_json():
    return json.dumps({'a': 5})

def test_700():
    with patch('app.main.Pack') as mocked_pack:
        mocked_pack.do2 = return_json()  #####
        with app.test_client() as c:
            resp = c.get('/popular/')
            assert resp.status_code == 200
I tried to use return_value and similar things, but I always get the traceback: the JSON object must be str, bytes or bytearray, not MagicMock. I think the problem is how I patch it, but I can't figure it out.
Could anyone point out how I can return a real value for do2(), not only a mock?
My setup is like this:
# main.py
from persen import Pack

def popular_view():
    my_pack = Pack(name='peter', age='25')
    response = my_pack.do2()
    try:
        json.loads(response)
    except Exception as e:
        print('E1 = ', e)

# persen.py
class Pack:
    def __init__(self, name, age):
        ...

    def do1(self):
        ...

    def do2(self):
        return '600a'
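Not part of the original post, but for reference, a minimal sketch of how such a patch is usually wired so do2() returns a real value: popular_view() calls Pack(...) and then .do2() on the instance, so the value has to be configured on the mock instance (mocked_pack.return_value), not on the class mock itself.

from unittest.mock import patch

def test_700():
    with patch('app.main.Pack') as mocked_pack:
        # Pack(name=..., age=...) inside the view returns mocked_pack.return_value,
        # so set do2's return value on that instance mock
        mocked_pack.return_value.do2.return_value = return_json()
        with app.test_client() as c:
            resp = c.get('/popular/')
            assert resp.status_code == 200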

How to raise an error with decorator for unit testing?

I would like to do something in my unit tests before they fail, and use a decorator to do so.
Here is my code:
import requests
import unittest
import test

class ExceptionHandler(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, *args, **kwargs):
        try:
            self.f(*args, **kwargs)
        except Exception as err:
            print('do smth')
            raise err

class Testing(unittest.TestCase):
    @ExceptionHandler
    def test_connection_200(self):
        r = requests.get("http://www.google.com")
        self.assertEqual(r.status_code, 400)

if __name__ == '__main__':
    unittest.main(verbosity=2)
But it throws:
TypeError: test_connection_200() missing 1 required positional argument: 'self'
How can I do something when my test fails, and then keep the normal failing behavior of unittest?
Edit:
I would like to do something before my test fails, like writing a log, and then continue the normal failing process. If possible, with a decorator.
Edit bis, the solution thanks to @Thymen:
import requests
import unittest
import test

class ExceptionHandler(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, *args, **kwargs):
        try:
            self.f(*args, **kwargs)
        except Exception as err:
            print('do smth')
            raise err

class Testing(unittest.TestCase):
    def test_connection_200(self):
        @ExceptionHandler
        def test_connection_bis(self):
            r = requests.get("https://www.google.com")
            print(r.status_code)
            self.assertEqual(r.status_code, 400)
        test_connection_bis(self)

if __name__ == '__main__':
    unittest.main(verbosity=2)
My comment may not have been clear, so here is the solution in code:
class Testing(unittest.TestCase):
    def test_connection_200(self):
        @ExceptionHandler
        def test_connection():
            r = requests.get("http://www.google.com")
            self.assertEqual(r.status_code, 400)
        with self.assertRaises(AssertionError):
            test_connection()
The reason this works is that there is no dependency between the call to the test (test_connection_200) and the actual functionality you are trying to test (the ExceptionHandler).
Edit
The line
with self.assertRaises(AssertionError):
    test_connection()
checks whether test_connection() raises an AssertionError. If it does not raise this error, it will fail the test.
If you want the test to fail (because of the AssertionError), you can remove the with statement and only call test_connection(). This will make the test fail directly.
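For completeness, a minimal sketch (not from the original answer) of a function-based decorator that can be applied to the test method directly. Unlike the class-based ExceptionHandler, a plain function is a descriptor, so self is still bound when unittest calls the decorated method:

import functools

def exception_handler(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception:
            print('do smth')  # e.g. write a log entry here
            raise  # then let the normal unittest failure behavior continue
    return wrapper

class Testing(unittest.TestCase):
    @exception_handler
    def test_connection_200(self):
        r = requests.get("http://www.google.com")
        self.assertEqual(r.status_code, 400)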

Inheritance in iterable implementation of python's multiprocessing.Queue

I found the default implementation of Python's multiprocessing.Queue lacking, in that it is not iterable like other collections. So I set about creating a 'subclass' of it, adding that feature. As you can see from the code below, it's not a proper subclass: multiprocessing.Queue isn't a class itself but a factory function, and the real underlying class is multiprocessing.queues.Queue. I don't have the understanding, nor the effort to expend, necessary to mimic the factory function just so I can inherit from the class properly, so I simply had the new class create its own instance from the factory and treat it as the superclass. Here is the code:
from multiprocessing import Queue, Value, Lock
import queue

class QueueClosed(Exception):
    pass

class IterableQueue:
    def __init__(self, maxsize=0):
        self.closed = Value('b', False)
        self.close_lock = Lock()
        self.queue = Queue(maxsize)

    def close(self):
        with self.close_lock:
            self.closed.value = True
            self.queue.close()

    def put(self, elem, block=True, timeout=None):
        with self.close_lock:
            if self.closed.value:
                raise QueueClosed()
            else:
                self.queue.put(elem, block, timeout)

    def put_nowait(self, elem):
        self.put(elem, False)

    def get(self, block=True):
        if not block:
            return self.queue.get_nowait()
        elif self.closed.value:
            try:
                return self.queue.get_nowait()
            except queue.Empty:
                return None
        else:
            val = None
            while not self.closed.value:
                try:
                    val = self.queue.get_nowait()
                    break
                except queue.Empty:
                    pass
            return val

    def get_nowait(self):
        return self.queue.get_nowait()

    def join_thread(self):
        return self.queue.join_thread()

    def __iter__(self):
        return self

    def __next__(self):
        val = self.get()
        if val is None:
            raise StopIteration()
        else:
            return val

    def __enter__(self):
        return self

    def __exit__(self, *args):
        self.close()
This allows me to instantiate an IterableQueue object just like a normal multiprocessing.Queue, put elements into it as usual, and then, inside child consumers, simply loop over it like so:
from iterable_queue import IterableQueue
from multiprocessing import Process, cpu_count
import os

def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

def consumer(queue):
    print(f"[{os.getpid()}] Consuming")
    for i in queue:
        print(f"[{os.getpid()}] < {i}")
        n = fib(i)
        print(f"[{os.getpid()}] {i} > {n}")
    print(f"[{os.getpid()}] Closing")

def producer():
    print("Enqueueing")
    with IterableQueue() as queue:
        procs = [Process(target=consumer, args=(queue,)) for _ in range(cpu_count())]
        [p.start() for p in procs]
        [queue.put(i) for i in range(36)]
    print("Finished")

if __name__ == "__main__":
    producer()
and it works almost seamlessly; the consumers exit the loop once the queue has been closed, but only after exhausting all remaining elements. However, I was unsatisfied with the lack of inherited methods. In an attempt to mimic actual inheritance behavior, I tried adding the following __getattr__ hook to the class:
def __getattr__(self, name):
    if name in self.__dict__:
        return self.__dict__[name]
    else:
        return getattr(self.queue, name)
However, this fails when instances of the IterableQueue class are manipulated inside child multiprocessing.Process processes, as the class's __dict__ is not preserved within them. I attempted to remedy this in a hacky manner by replacing the class's default __dict__ with a multiprocessing.Manager().dict(), like so:
def __init__(self, maxsize=0):
    self.closed = Value('b', False)
    self.close_lock = Lock()
    self.queue = Queue(maxsize)
    self.__dict__ = Manager().dict(self.__dict__)
However, on doing so I received an error stating RuntimeError: Synchronized objects should only be shared between processes through inheritance. So my question is: how should I go about inheriting from the Queue class properly, such that the subclass has inherited access to all of its properties? In addition, while the queue is empty but not closed, the consumers all sit in a busy loop instead of a true IO block, taking up valuable CPU resources. If you have any suggestions on concurrency and race-condition issues I might run into with this code, or on how I might solve the busy-loop issue, I'd be willing to take suggestions on those as well.
Based on code provided by MisterMiyagi, I created this general-purpose IterableQueue class, which can accept arbitrary input, blocks properly, and does not hang on queue close:
from multiprocessing.queues import Queue
from multiprocessing import get_context

class QueueClosed(Exception):
    pass

class IterableQueue(Queue):
    def __init__(self, maxsize=0, *, ctx=None):
        super().__init__(
            maxsize=maxsize,
            ctx=ctx if ctx is not None else get_context()
        )

    def close(self):
        super().put((None, False))
        super().close()

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return self.get()
        except QueueClosed:
            raise StopIteration

    def get(self, *args, **kwargs):
        result, is_open = super().get(*args, **kwargs)
        if not is_open:
            super().put((None, False))
            raise QueueClosed
        return result

    def put(self, val, *args, **kwargs):
        super().put((val, True), *args, **kwargs)

    def __enter__(self):
        return self

    def __exit__(self, *args):
        self.close()
The multiprocessing.Queue wrapper only serves to apply the default context:
def Queue(self, maxsize=0):
    '''Returns a queue object'''
    from .queues import Queue
    return Queue(maxsize, ctx=self.get_context())
When inheriting, you can replicate this in the __init__ method. This allows you to inherit the entire Queue behaviour. You only need to add the iterator methods:
import time

from multiprocessing.queues import Queue
from multiprocessing import get_context

class IterableQueue(Queue):
    """
    ``multiprocessing.Queue`` that can be iterated to ``get`` values

    :param sentinel: signal that no more items will be received
    """
    def __init__(self, maxsize=0, *, ctx=None, sentinel=None):
        self.sentinel = sentinel
        super().__init__(
            maxsize=maxsize,
            ctx=ctx if ctx is not None else get_context()
        )

    def close(self):
        self.put(self.sentinel)
        # wait until the buffer is flushed...
        while self._buffer:
            time.sleep(0.01)
        # ...before shutting down the sender
        super().close()

    def __iter__(self):
        return self

    def __next__(self):
        result = self.get()
        if result == self.sentinel:
            # re-queue the sentinel for other listeners
            self.put(result)
            raise StopIteration
        return result
Note that the sentinel indicating end-of-queue is compared by equality, because identity is not preserved across processes. The object() sentinel often used with queue.Queue does not work properly here.
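A short usage sketch, assuming the sentinel-based IterableQueue above and the fork start method used in the question's example:

from multiprocessing import Process, cpu_count
import os

def consumer(queue):
    # iteration stops once the sentinel (None by default) comes back out
    for item in queue:
        print(f"[{os.getpid()}] got {item}")

if __name__ == "__main__":
    queue = IterableQueue()
    procs = [Process(target=consumer, args=(queue,)) for _ in range(cpu_count())]
    [p.start() for p in procs]
    [queue.put(i) for i in range(36)]
    queue.close()  # enqueues the sentinel; each consumer re-queues it and stops
    [p.join() for p in procs]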

Pytests with context manager

I am trying to understand how to test a context manager with pytest.
I created a class and need to count how many times the static method do_some_stuff was called:
class Iterator():
    def __init__(self):
        pass

    @staticmethod
    def do_some_stuff():
        pass

    def __enter__(self):
        return [i for i in range(10)]

    def __exit__(self, *args):
        return True

iterator = Iterator()

def f(iterator):
    with iterator as i:
        for _ in i:
            iterator.do_some_stuff()
I have created a py.test file and need to check that the function was called 10 times, but my solution isn't working:
@pytest.fixture
def iterator():
    return MagicMock(spec=Iterator)

def test_f(iterator):
    f(iterator)
    assert iterator.do_some_stuff.call_count == 10
Thanks in advance
The reason your code doesn't work is that MagicMock(spec=Iterator) replaces the __enter__ method of your Iterator class with a MagicMock object; see the MagicMock documentation. This means that in your test, the value of i in function f is a MagicMock object instead of list(range(10)), so the code inside the for loop is never executed.
To make it work, you will probably only want to mock the do_some_stuff method:
@pytest.fixture
def iterator():
    it = Iterator()
    it.do_some_stuff = Mock()
    return it

def test_f(iterator):
    f(iterator)
    assert iterator.do_some_stuff.call_count == 10
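An alternative sketch, not from the original answer, that keeps the fully mocked Iterator but configures its __enter__ return value so the loop body still runs:

from unittest.mock import MagicMock

@pytest.fixture
def iterator():
    it = MagicMock(spec=Iterator)
    # make the context manager yield a real list so the for loop executes
    it.__enter__.return_value = list(range(10))
    return it

def test_f(iterator):
    f(iterator)
    assert iterator.do_some_stuff.call_count == 10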

How to make a decorator that handles failover with motor and tornado?

I am trying to write a decorator that takes a function which interacts with MongoDB and, if an exception occurs, retries the interaction.
I have the following code:
def handle_failover(f):
    def wrapper(*args):
        for i in range(40):
            try:
                yield f(*args)
                break
            except pymongo.errors.AutoReconnect:
                loop = IOLoop.instance()
                yield gen.Task(loop.add_timeout, time.time() + 0.25)
    return wrapper

class CreateHandler(DatabaseHandler):
    @handle_failover
    def create_counter(self, collection):
        object_id = yield self.db[collection].insert({'n': 0})
        return object_id

    @gen.coroutine
    def post(self, collection):
        object_id = yield self.create_counter(collection)
        self.finish({'id': object_id})
But this doesn't work. It gives an error that create_counter yields a generator. I've tried decorating all the functions with @gen.coroutine and it didn't help.
How can I make handle_failover decorator work?
edit:
No decorators for now. This should create a counter reliably and return the object_id to the user. If an exception is raised, a 500 page gets displayed.
class CreateHandler(DatabaseHandler):
    @gen.coroutine
    def create_counter(self, collection, data):
        for i in range(FAILOVER_TRIES):
            try:
                yield self.db[collection].insert(data)
                break
            except pymongo.errors.AutoReconnect:
                loop = IOLoop.instance()
                yield gen.Task(loop.add_timeout, time.time() + FAILOVER_SLEEP)
            except pymongo.errors.DuplicateKeyError:
                break
        else:
            raise Exception("Can't create new counter.")

    @gen.coroutine
    def post(self, collection):
        object_id = bson.objectid.ObjectId()
        data = {
            '_id': object_id,
            'n': 0
        }
        yield self.create_counter(collection, data)
        self.set_status(201)
        self.set_header('Location', '/%s/%s' % (collection, str(object_id)))
        self.finish({})
Although I still don't know how to make the increment of the counter idempotent, because the trick with DuplicateKeyError is not applicable here:
class CounterHandler(CounterIDHandler):
    def increment(self, collection, object_id, n):
        result = yield self.db[collection].update({'_id': object_id}, {'$inc': {'n': int(n)}})
        return result

    @gen.coroutine
    def post(self, collection, counter_id, n):
        object_id = self.get_object_id(counter_id)
        if not n or not int(n):
            n = 1
        result = yield self.increment(collection, object_id, n)
        self.finish({'resp': result['updatedExisting']})
You most likely don't want to do this. Better to show an error to your user than to retry an operation.
Blindly retrying any insert that raises AutoReconnect is a bad idea, because you don't know if MongoDB executed the insert before you lost connectivity or not. In this case you don't know whether you'll end up with one or two records with {'n': 0}. Thus you should ensure that any operation you retry this way is idempotent. See my "save the monkey" article for detailed information.
If you definitely want to make a wrapper like this, you need to make sure that f and wrapper are both coroutines. Additionally, if f throws an error 40 times you must re-raise the final error. If f succeeds you must return its return value:
def handle_failover(f):
    @gen.coroutine
    def wrapper(*args):
        retries = 40
        i = 0
        while True:
            try:
                ret = yield gen.coroutine(f)(*args)
                raise gen.Return(ret)
            except pymongo.errors.AutoReconnect:
                if i < retries:
                    i += 1
                    loop = IOLoop.instance()
                    yield gen.Task(loop.add_timeout, time.time() + 0.25)
                else:
                    raise
    return wrapper
But only do this for idempotent operations!
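For completeness, a hypothetical usage sketch assuming the decorator above and the handler layout from the question; handle_failover applies gen.coroutine to the wrapped function itself, so a plain generator method can be decorated and then yielded from post():

class CreateHandler(DatabaseHandler):
    @handle_failover
    def create_counter(self, collection, data):
        # wrapped by gen.coroutine inside handle_failover, so a plain
        # generator function is enough here
        result = yield self.db[collection].insert(data)
        raise gen.Return(result)

    @gen.coroutine
    def post(self, collection):
        object_id = bson.objectid.ObjectId()
        yield self.create_counter(collection, {'_id': object_id, 'n': 0})
        self.set_status(201)
        self.finish({'id': str(object_id)})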
