StopIteration from "yield from" - python-3.x

Newbie to generators here. My understanding is that when the generator function (total_average) breaks out of its loop, it implicitly raises StopIteration, which the yield from in wrap_average absorbs. wrap_average should then return None to the caller, and the program should never see a StopIteration.
def total_average():
    total = 0.0
    count = 0
    avg = None
    print("starting average generator")
    while True:
        num = yield avg
        if num is None:
            break
        total += num
        count += 1
        avg = total/count
def wrap_average(average_generator):
    """This is just a pipe to the generator"""
    print("starting wrap_average")
    avg = yield from average_generator

# Note: total_average() is the generator object; total_average is the generator function
w = wrap_average(total_average())
# Execute everything until hitting the first yield; None is returned
print("starting the generator: ", next(w))
print(w.send(3))
print(w.send(4))
# Finish the coroutine
# ? Not sure why w.send(None) is giving me StopIteration?
w.send(None)
However, Python 3.8 raises a StopIteration error. I'm not sure why?

The yield from in wrap_average does catch the StopIteration from total_average, but that's not the only StopIteration here, because total_average isn't the only generator.
wrap_average is also a generator. When it ends, it too raises a StopIteration. That's the StopIteration you get from the final send.
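To see both layers, here is a minimal sketch with the same shape as the code above (prints removed): the caller catches the outer generator's StopIteration explicitly; driving it with a for loop would swallow it automatically.

```python
def total_average():
    total, count, avg = 0.0, 0, None
    while True:
        num = yield avg
        if num is None:
            break
        total += num
        count += 1
        avg = total / count

def wrap_average(average_generator):
    avg = yield from average_generator  # absorbs total_average's StopIteration

w = wrap_average(total_average())
next(w)              # prime; runs up to the first yield
r1 = w.send(3)       # 3.0
r2 = w.send(4)       # 3.5
finished = False
try:
    w.send(None)     # total_average breaks, then wrap_average itself ends
except StopIteration:
    finished = True  # this StopIteration comes from wrap_average, not total_average
print(r1, r2, finished)
```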


Why does having a return statement in my try block make the 'else' statement unreachable?

I'm learning exception handling in Python 3 and can't seem to understand why the else clause of my try block is unreachable when the try block contains a return.
def simple_division() -> float:
    """
    :return: the quotient of a / b
    """
    a, b = _git_int()
    if a == 0:
        return 0
    else:
        try:
            return a / b
        except ZeroDivisionError:
            print("Uh, you can't divide by zero, bud.")
        else:
            print("This should execute if no exceptions are raised, right? but it doesn't")
        finally:
            print("This should execute either way.")
I spent some time debugging in order to figure out that the return statement in the try block was at fault... and I now understand that doing something like this circumvents the problem:
def simple_division() -> float:
    """
    :return: the quotient of a / b
    """
    a, b = _git_int()
    if a == 0:
        return 0
    else:
        try:
            answer = a / b
        except ZeroDivisionError:
            print("Uh, you can't divide by zero, bud.")
        else:
            print("This DOES work")
            return answer
        finally:
            print("This should execute either way.")
But I haven't yet found any documentation explaining why you can't have a return statement in a try block. Could someone explain this to me? What would be the best practice here?
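For what it's worth, the documented rule is that a try statement's else clause runs only when the try block completes without raising and without leaving via return, break, or continue; a return hands control to the finally clause and then exits the function, so else is skipped. A minimal sketch (the log list is just for illustration):

```python
log = []

def divide(a, b):
    try:
        return a / b          # leaves via return: control jumps past else to finally
    except ZeroDivisionError:
        log.append("except")
    else:
        log.append("else")    # runs only if try finishes normally, with no return
    finally:
        log.append("finally") # always runs, even on the return path

divide(4, 2)
print(log)  # ['finally'] -- else never ran, finally did
divide(1, 0)
print(log)  # ['finally', 'except', 'finally']
```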

Multiprocessing prime number check exhibits weird behavior

I have the following parallelized code that checks if a number is a prime number.
import math
from multiprocessing import Pool, Manager
import time
from itertools import product

SERIAL_CHECK_CUTOFF = 21
CHECK_EVERY = 1000
FLAG_CLEAR = b'0'
FLAG_SET = b'1'
print("CHECK_EVERY", CHECK_EVERY)

def create_range(from_i, to_i, nbr_processes):
    if from_i == to_i:
        return from_i
    else:
        nbr_ranges = []
        count = from_i
        while count < to_i + 1:
            nbr_ranges.append(count)
            count += 1
        k, m = divmod(len(nbr_ranges), nbr_processes)
        subranges = list(nbr_ranges[i*k+min(i, m):(i+1)*k+min(i+1, m)] for i in range(nbr_processes))
        subranges = [arr[::len(arr) - 1] if len(arr) > 1 else arr for arr in subranges]
        return subranges

def check_prime_in_range(n_from_i_to_i, value):
    (n, _range) = n_from_i_to_i
    if len(_range) > 1:
        (from_i, to_i) = _range
    else:
        return True
    if n % 2 == 0:
        return False
    # Check the flag every 1000 iterations.
    # At every check, see if value.value has been set to FLAG_SET.
    # If so, exit the search.
    # If a process finds a factor during the search, it sets the flag and exits.
    # NOTE: the check_every flag is suboptimal
    check_every = CHECK_EVERY
    for i in range(from_i, math.floor(to_i), 2):
        check_every -= 1
        if not check_every:
            if value.value == FLAG_SET:
                return False
            check_every = CHECK_EVERY
        if n % i == 0:
            value.value = FLAG_SET
            return False
    return True

def check_prime(n, nbr_processes, value):
    # Serial check to quickly find small factors; if none are found,
    # a parallel search is started.
    from_i = 3
    to_i = SERIAL_CHECK_CUTOFF
    value.value = FLAG_CLEAR
    if not check_prime_in_range((n, (from_i, to_i)), value):
        print("Found small non-prime factor")
        return False
    # Continue to check for larger factors in parallel.
    from_i = to_i
    to_i = int(math.sqrt(n)) + 1
    ranges_to_check = create_range(from_i, to_i, nbr_processes)
    ranges_to_check = zip(len(ranges_to_check) * [n], ranges_to_check)
    with Pool() as pool:
        args = ((arg, value) for arg in product(list(ranges_to_check)))
        # print(list(args))  # comment out this line and the code breaks
        results = pool.map(check_prime_in_range, args)
    if False in results:
        return False
    return True

if __name__ == "__main__":
    start = time.time()
    nbr_processes = 4
    manager = Manager()
    value = manager.Value(b'c', FLAG_CLEAR)  # 1-byte character
    n = 98_823_199_699_242_79
    isprime = check_prime(n, nbr_processes, value)
    end = time.time()
    if isprime:
        print(f"{n} is a prime number")
    else:
        print(f"{n} is not a prime number")
    print(f"Duration: {end - start}s")
check_prime() builds ranges of candidate factors and checks each range for a factor of n. Each range is sent to a worker process. A multiprocessing.Manager value serves as a flag: when a process finds a factor, it sets the flag, and the other processes check it periodically and stop early once it is set.
Because Pool.map only accepts a function of one argument, I used itertools.product to create an argument generator that pairs each range with the manager object.
If I run the code as it is, I get the following error.
Traceback (most recent call last):
  File "/home/briansia/projects/python/multiprocess/prime_manager.py", line 118, in <module>
    isprime = check_prime(n, nbr_processes, value)
  File "/home/briansia/projects/python/multiprocess/prime_manager.py", line 103, in check_prime
    results = pool.map(check_prime_in_range, list(args))
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 367, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 774, in get
    raise self._value
TypeError: check_prime_in_range() missing 1 required positional argument: 'value'
If I print the args generator above the map function, then the code runs correctly. How exactly did the print statement modify the generator such that it works with the map function?
To illustrate the problem, change the with Pool() block in your program to this:
with Pool() as pool:
    args = ((arg, value) for arg in product(list(ranges_to_check)))
    x = list(args)
    # print(x)
    results = pool.map(check_prime_in_range, x)
This will always crash whether you comment out the print statement or not.
The print statement in your code is not the issue: it is list(args), which causes the generator expression to run. Generator expressions run once and once only. After you've done list(args), the generator still exists and it's still named args, but it is now exhausted. When you run your program with the line containing list(args), you have already used up the generator; therefore you effectively pass an empty iterator to Pool.map. Your program doesn't actually work. It doesn't produce a traceback because it doesn't really do anything.
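The exhaustion behavior is easy to reproduce in isolation; this tiny sketch is independent of the prime checker:

```python
gen = (x * x for x in range(3))

first = list(gen)    # consumes the generator expression
second = list(gen)   # the object still exists, but it is now exhausted

print(first)   # [0, 1, 4]
print(second)  # []
```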
The problem with your code is this function:
def check_prime_in_range(n_from_i_to_i, value)
It takes two arguments. The first time you call it, you indeed pass two arguments:
if not check_prime_in_range((n, (from_i, to_i)), value):
But when you call it indirectly through Pool.map, it is called with only one argument. As you stated yourself, Pool.map only passes a single argument to its function. It's true that your generator has made two objects into a tuple, but that tuple is only one object and therefore it's the only argument that gets passed to check_prime_in_range. There is no second argument, as the traceback indicates.
I don't know how you want to fix the problem, but it might be a good idea to define check_prime_in_range as a function of one argument, and call it consistently. You can unpack the single argument inside the function, for example:
def check_prime_in_range(x):
    n_from_i_to_i, value = x
    (n, _range) = n_from_i_to_i
    # etc.
Your first call would now be:
if not check_prime_in_range(((n, (from_i, to_i)), value)):
That's rather clumsy so I would consider defining a little class to hold all the variables in a single object. But that's a style issue.
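Another option, as a sketch, is Pool.starmap, which unpacks each argument tuple into separate parameters, so the original two-argument signature can stay. The simplified trial-division body and the argument values below are made up for illustration:

```python
from multiprocessing import Pool

def check_prime_in_range(n_from_i_to_i, value):
    # Same two-parameter shape as the original function; the body is a
    # simplified stand-in that just trial-divides over the given range.
    n, (from_i, to_i) = n_from_i_to_i
    return all(n % i != 0 for i in range(from_i, to_i))

def main():
    n = 97
    # Each element is a (n_from_i_to_i, value) tuple; starmap unpacks it
    # into the two parameters. value is unused in this sketch, hence None.
    arg_pairs = [((n, (2, 6)), None), ((n, (6, 10)), None)]
    with Pool(2) as pool:
        return pool.starmap(check_prime_in_range, arg_pairs)

if __name__ == "__main__":
    print(main())  # [True, True] because 97 has no factor in either range
```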

Using a condition with `__next__` on a Python class

Is it possible to implement a condition in a class iterator with the __next__ method? In a case like the one below, if the condition is not met the iterator returns None, but I'd like to omit these and only receive the actual values. For example:
class Foo2:
    def __init__(self, _min=1, _max=5):
        self._max = _max
        self._min = _min
        self.it = -1

    def __iter__(self):
        return self

    def __next__(self):
        self.it += 1
        if self.it == self._max:
            raise StopIteration
        if self.it > self._min:
            return self.it

fu = Foo2(_min=2, _max=5)
for x in fu:
    # if x: -> can this be controlled by the __next__ method in the class?
    print(x)
# prints
None
None
None
3
4
I'd like to only print the actual values 3 and 4, but instead of testing for None in the loop, it would be nicer to have the class emit only those. Is it possible?
I don't know if this is the most correct approach, as there may be problems or drawbacks I'm overlooking (I'd actually appreciate it if someone could point them out), but for this particular case, if the goal is just to skip values based on some condition, calling next recursively seems to do the trick:
def __next__(self):
    self.it += 1
    if self.it == self._max:
        raise StopIteration
    elif self.it > self._min:
        return self.it
    else:
        return self.__next__()
Works in a for loop and by calling next directly:
fu = Foo2(_min=2, _max=5)
for x in fu:
    print(x)
# prints
# 3
# 4

fu = Foo2(_min=2, _max=5)
print(next(fu))  # 3
print(next(fu))  # 4
print(next(fu))  # raises StopIteration
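One drawback of the recursive call is that skipping a very long run of values can hit Python's recursion limit, since each skipped value adds a stack frame. A loop-based sketch of the same __next__ avoids that:

```python
class Foo2:
    def __init__(self, _min=1, _max=5):
        self._max = _max
        self._min = _min
        self.it = -1

    def __iter__(self):
        return self

    def __next__(self):
        # Advance in a loop, skipping values that fail the condition,
        # instead of recursing once per skipped value.
        while True:
            self.it += 1
            if self.it == self._max:
                raise StopIteration
            if self.it > self._min:
                return self.it

fu = Foo2(_min=2, _max=5)
print(list(fu))  # [3, 4]
```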

is there a way to track the number of times a function is called in n seconds?

Perhaps an odd question, but here it goes: I am trying to write a function which
takes no arguments
returns True if the function has been called 3 times or fewer in the last 1 second
returns False otherwise
def myfunction():
    myfunction.counter += 1

myfunction.counter = 0
The code above keeps track of how many times the function is called, but how do I modify it to satisfy the requirements above?
I know I can use Python's time module, but how do I use it to solve this problem?
First keep track of when the function was called with a decorator:
import time

def counted(f):
    def wrapped(*args, **kwargs):
        wrapped.calls.append(int(round(time.time() * 1000)))  # append the ms it was called
        return f(*args, **kwargs)
    wrapped.calls = []
    return wrapped
This decorator can be used like so:
@counted
def foo():
    print(2)
    time.sleep(.3)
Then have a function to group the timestamps within a certain range:
def group_by(lst, seconds):
    """
    Groups a list of ms times into the {seconds}
    range it was called. Most recent grouping will
    be in the first element of the list.
    """
    ms = 1000 * seconds
    result = []
    if lst:
        start = lst[-1]
        count = 1
        for ele in reversed(lst[:-1]):
            if ele > start - ms:
                count += 1
            else:
                result.append(count)
                count = 1
                start = ele
        result.append(count)
    return result
Finally test it:
for _ in range(5):
    foo()

data = foo.calls
number_of_calls_last_second = group_by(data, 1)
print(f"foo called {number_of_calls_last_second[0]} times within the last second")
print(number_of_calls_last_second[0] <= 3)  # Here is your True/False output
Output:
2
2
2
2
2
foo called 4 times within the last second
False
I would use a decorator like this:
import time

def call_counter(calls_number, max_time):
    def _decorator(function):
        def helper():
            helper.calls.append(time.time())
            function()
            if len(helper.calls) > calls_number:
                helper.calls = helper.calls[calls_number:]
                return time.time() - helper.calls[0] > max_time
            return True
        helper.calls = []
        return helper
    return _decorator

@call_counter(3, 1)
def my_function():
    pass

for _ in range(3):
    print(my_function())  # Prints True three times

print(my_function())  # Prints False: the function has been called four times in less than one second

time.sleep(1)
print(my_function())  # Prints True
I used parameters in the decorator so that you can reuse it with different values. If you have any question, ask me in the comments.
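As another sketch, collections.deque with maxlen keeps only the timestamps that can still matter, so the check stays O(1) in memory. The name myfunction and the 3-calls/1-second thresholds just mirror the question, and this counts the current call toward the limit, which is one possible reading of the requirement:

```python
import time
from collections import deque

_calls = deque(maxlen=3)  # remember at most the last 3 call times

def myfunction():
    """Return True if called 3 times or fewer in the last second."""
    now = time.monotonic()  # monotonic clock: immune to system clock changes
    ok = len(_calls) < 3 or now - _calls[0] > 1.0
    _calls.append(now)      # the deque drops the oldest timestamp automatically
    return ok

results = [myfunction() for _ in range(3)]
fourth = myfunction()
time.sleep(1.1)
fifth = myfunction()
print(results, fourth, fifth)  # [True, True, True] False True
```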

Issue with iterating through Linked List in Python

I am trying to implement a Linked List from scratch in Python 3.7 and after many attempts, I can't seem to get the print_values() method to print all node values in the order that is expected (sequential). At this point I am not sure if the problem is with the append() method or the print_values() method.
class Node:
    def __init__(self, node_value):
        self.node_value = node_value
        self.nextNode = None

class SinglyLinkedList:
    # methods that should be available: get_size, insert_at, append, remove,
    # update_node_value
    def __init__(self):
        self.head_node = None
        self.tail_node = None
        self.size = 0

    def get_list_size(self):
        """This returns the value of the size variable, which gets incremented
        every time a new node is added.
        This implementation is better because it has a running time of O(1),
        as opposed to iterating through the whole list, which has a running
        time of O(n)."""
        return self.size

    def append(self, value):
        new_node = Node(value)
        if self.head_node is None:
            self.head_node = new_node
            self.size += 1
        else:
            while self.head_node.nextNode is not None:
                self.head_node = self.head_node.nextNode
            self.head_node.nextNode = new_node
            self.size += 1

    def print_values(self):
        current_node = self.head_node
        list_values = []
        while current_node.nextNode is not None:
            list_values.append(current_node.node_value)
            if current_node.nextNode.nextNode is None:
                list_values.append(current_node.nextNode.node_value)
            current_node = current_node.nextNode
        if list_values is not None:
            print("Linked list: " + str(list_values))
        else:
            print("Linked List is currently empty.")

# Helper code below.
new_ll = SinglyLinkedList()
new_ll.append("alpha")
print(new_ll.get_list_size())
new_ll.append("beta")
print(new_ll.get_list_size())
new_ll.append("gamma")
print(new_ll.get_list_size())
new_ll.append("delta")
print(new_ll.get_list_size())
new_ll.append("epsilon")
print(new_ll.get_list_size())
new_ll.append("zeta")
print(new_ll.get_list_size())
new_ll.print_values()
And all I am getting in the output is this:
1
2
3
4
5
6
Linked list: ['epsilon', 'zeta']
Typically a singly linked list only keeps track of the head (not the tail as well), so self.tail_node = None is not normally used.
When working with a linked list or a tree, recursion will often make your life a lot easier than loops. Loops work fine if you just want to walk the list, but if you want to change it, I would recommend a recursive solution.
That being said, the issue isn't with your print; it is with your append.
You can never move the head node. You must always traverse with a temporary pointer, so this line caused the issue:
self.head_node = self.head_node.nextNode
Fix:
def append(self, value):
    new_node = Node(value)
    if self.head_node is None:
        self.head_node = new_node
        self.size += 1
    else:
        temp_head = self.head_node
        while temp_head.nextNode is not None:
            temp_head = temp_head.nextNode
        temp_head.nextNode = new_node
        self.size += 1
Recursive Solution:
def append(self, value):
    new_node = Node(value)
    self.size += 1
    self.head_node = self.__recursive_append(self.head_node, new_node)

def __recursive_append(self, node, new_node):
    if not node:
        node = new_node
    elif not node.nextNode:
        node.nextNode = new_node
    else:
        node.nextNode = self.__recursive_append(node.nextNode, new_node)
    return node
That being said, I didn't realize this until after I redid your print, so here is a cleaner print method using a Python generator that may help you.
Generators are a Python feature that makes something like turning a linked list into a list of values really easy to do:
def print_values(self, reverse=False):
    values = [val for val in self.__list_generator()]
    if values:
        print("Linked list: " + str(values))
    else:
        print("Linked List is currently empty.")

def __list_generator(self):
    '''
    A generator remembers its state.
    When `yield` is hit it returns like a `return` statement would,
    but the next time the generator is advanced it resumes at the
    yield statement instead of starting at the beginning of the method.
    '''
    cur_node = self.head_node
    while cur_node != None:
        yield cur_node.node_value  # return the current node value
        # Next time this generator is advanced,
        # go to the next node
        cur_node = cur_node.nextNode
Disclaimer:
The generator is good, but I only did it that way to match how you did it (i.e. building a list from the linked list). If a list isn't important and you just want to output each element of the linked list, I would do it this way:
def print_values(self, reverse=False):
    cur_node = self.head_node
    if cur_node:
        print('Linked list: ', end='')
        while cur_node:
            print("'{}' ".format(cur_node.node_value), end='')
            cur_node = cur_node.nextNode
    else:
        print("Linked List is currently empty.")
I agree with Error - Syntactical Remorse's answer that the problem is in append, in the body of the while loop. Here is the example in pseudocode:
append 0:
    head = alpha
append 1:
    // skip the while loop
    head = alpha
    next = beta
append 2:
    // one time through the while loop because next = beta
    head = beta
    // beta just got assigned to head, and beta doesn't have a next yet...
    next = gamma
append 3:
    // head is beta, next is gamma... the linked list can only store 2 nodes
    head = gamma  // head gets next from itself
    // then has no next
    next = delta
...etc.
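Since the original class already defines self.tail_node, here is a sketch of how a tail pointer makes append O(1) with no traversal at all; the class is trimmed to just what the example needs:

```python
class Node:
    def __init__(self, node_value):
        self.node_value = node_value
        self.nextNode = None

class SinglyLinkedList:
    def __init__(self):
        self.head_node = None
        self.tail_node = None
        self.size = 0

    def append(self, value):
        # O(1): link the new node after the current tail, no loop needed
        new_node = Node(value)
        if self.head_node is None:
            self.head_node = new_node
        else:
            self.tail_node.nextNode = new_node
        self.tail_node = new_node
        self.size += 1

    def values(self):
        # Walk the list with a temporary pointer; head is never moved
        out = []
        cur = self.head_node
        while cur is not None:
            out.append(cur.node_value)
            cur = cur.nextNode
        return out

ll = SinglyLinkedList()
for v in ["alpha", "beta", "gamma"]:
    ll.append(v)
print(ll.values())  # ['alpha', 'beta', 'gamma']
```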
