a = iter([1])
next(a)
next(a)        # raises StopIteration

a = iter([1])
next(a)
next(a, None)  # no StopIteration is raised
However, the definition of next is:
def next(iterator, default=None):
How does Python distinguish between the default and a user-given value?
The builtin function next doesn't use None as its default. In fact, as you can see in the C source code (for CPython, the official Python interpreter), its argument handling is very low level. The pure Python equivalent would be using *args in the function definition and manually checking that you got either one or two arguments. Here's what that might look like:
def next(*args):
    assert 1 <= len(args) <= 2
    try:
        return args[0].__next__()
    except StopIteration:
        if len(args) > 1:
            return args[1]
        else:
            raise
A better way to replicate similar behavior in Python is to use a default value that is not possible for outside code to accidentally supply. Here's one good way to do it:
_sentinel = object()  # a unique object that no user should ever pass to us

def next(iterator, default=_sentinel):
    try:
        return iterator.__next__()
    except StopIteration:
        if default is not _sentinel:
            return default
        else:
            raise
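To see the sentinel version in action, here is a short check (my own illustration; the function is renamed next_ to avoid shadowing the builtin):

```python
_sentinel = object()

def next_(iterator, default=_sentinel):
    try:
        return iterator.__next__()
    except StopIteration:
        if default is not _sentinel:
            return default
        else:
            raise

a = iter([1])
print(next_(a))        # 1
print(next_(a, None))  # None: an explicit default, even None, is honored

b = iter([])
try:
    next_(b)           # no default given, so StopIteration propagates
except StopIteration:
    print("StopIteration raised")
```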
I have a tuple of values as follows:
commands = ("time", "weather", "note")
I get an input from the user and I check if the input matches any value in the tuple as follows:
if user_input.startswith(commands):
    # Do stuff based on the command
What I would like to do is exactly the above, but with the matched item returned. I tried many methods, but nothing really worked. Thank you in advance.
Edit:
At some point I thought I could use the walrus operator, but as you can see, it doesn't work:

if user_input.startswith(returned_command := commands):
    command = returned_command
    # command only gets the whole commands tuple, not the matched item
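A quick check (my own illustration) confirms why: the walrus expression assigns the whole commands tuple before startswith even runs, so the matched item is never captured:

```python
commands = ("time", "weather", "note")
user_input = "time now"

if user_input.startswith(returned_command := commands):
    command = returned_command

print(command)  # ('time', 'weather', 'note'): the whole tuple, not "time"
```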
This function takes a predicate and an iterable of candidates, and returns the first candidate for which the predicate returns a truthy value. Otherwise, it raises an error:
def first_matching(matches, candidates):
    try:
        return next(filter(matches, candidates))
    except StopIteration:
        raise ValueError("No matching candidate")
result = first_matching(user_input.startswith, commands)
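For example, with the commands tuple from the question, this might behave as follows (my own illustration):

```python
def first_matching(matches, candidates):
    try:
        return next(filter(matches, candidates))
    except StopIteration:
        raise ValueError("No matching candidate")

commands = ("time", "weather", "note")

print(first_matching("time now".startswith, commands))  # time

try:
    first_matching("quit".startswith, commands)
except ValueError as e:
    print(e)  # No matching candidate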
Try this. You can store your functions inside a dictionary and call them.
def print_time():
    print("Time")

def exit_now():
    exit()

def print_something(*args):
    for item in args:
        print(item)

command_dict = {
    "time": print_time,
    "something": print_something,
    "exit": exit_now,
}

while True:
    user_input = input("Input command: ")
    command, *args = user_input.split()
    command_dict[command](*args)

Output:
Input command: time
Time
Input command: something 1 2 3
1
2
3
Input command: exit
You can use any().
user_input = input()
if any(user_input.startswith(s) for s in commands):
    # The input is valid.
If you want to ask for user inputs until their reply is valid, shove that in a while loop.
match = None
while True:
    user_input = input()
    if any(user_input.startswith(s) for s in commands):  # Make sure there is a match at all.
        for c in commands:
            if user_input.startswith(c):
                match = c  # Find which command matched in particular.
                break
        break  # Exit the while loop: the input is valid.
    else:
        print(f"Your selection should be one of {commands}.")

# ...
# Do something with the now valid input and matched element.
# ...
With Python 3.8 or later this should work:
if any(user_input.startswith(returned_command := c) for c in commands):
    print(returned_command)
I developed some code for a problem on leetcode.com. There was a class and a function, and I added another function, matchingBrackets. Yet, when I run the code I get a NameError on this function; indeed, it seems it is not defined.
class Solution:
    def matchingBrackets(self, s: str) -> bool:
        lefts = ['(', '{', '[']
        rights = [')', ']', '}']
        if s[0] in lefts:
            function(s[1:], type)
        elif s[0] in rights:
            if s[0] == bracket:
                return True
            else:
                return False
        else:
            print("different from brackets")
            s = s[1:]

    def isValid(self, s: str) -> bool:
        return matchingBrackets(s[1:], bracket)
When running the code in the LeetCode console, it returns:
NameError: name 'matchingBrackets' is not defined
Line 19 in isValid (Solution.py)
Line 30 in __helper__ (Solution.py)
Line 44 in _driver (Solution.py)
Line 55 in <module> (Solution.py)
I think there are a couple of issues.
When you refer to methods defined in the same class as def method(self, arg1, arg2):, you need to call them as self.method(arg1, arg2). That is, your isValid method needs to return self.matchingBrackets(s[1:], bracket) instead.
Also, you define matchingBrackets as a method taking only one argument apart from self, but then you pass it two arguments, s[1:] and bracket. It is also not clear what the variable bracket is referring to.
Also, I don't really understand what function(s[1:], type) is referring to. Did you define it outside the snippet of code that you posted?
Finally, I am not sure that the logic of the function does what the Leetcode question is asking.
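For completeness, here is a minimal sketch of the usual stack-based approach to this LeetCode problem (a common alternative, not a fix of the posted recursion):

```python
class Solution:
    def isValid(self, s: str) -> bool:
        pairs = {')': '(', ']': '[', '}': '{'}
        stack = []
        for ch in s:
            if ch in pairs.values():   # opening bracket: remember it
                stack.append(ch)
            elif ch in pairs:          # closing bracket: must match the last opener
                if not stack or stack.pop() != pairs[ch]:
                    return False
        return not stack               # valid only if every opener was closed

print(Solution().isValid("()[]{}"))  # True
print(Solution().isValid("(]"))      # False
```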
I have a situation in which I need to hook certain functions so that I can inspect their return values and track them. This is useful for tracking, for example, running averages of values returned by methods/functions. However, these methods/functions can also be generators.
If I'm not wrong, Python detects generators at parse time, and when the function is called at runtime it always returns a generator. Thus I can't simply do something like:
import types

def decorator(func):
    average = None  # assume average can be accessed by other means
    def wrap(*args, **kwargs):
        nonlocal average
        ret_value = func(*args, **kwargs)
        # even if this is False, wrap is still a generator
        if isinstance(ret_value, types.GeneratorType):
            for value in ret_value:
                # update average
                yield value
        else:
            # update average
            return ret_value  # ret_value can't ever be fetched
    return wrap
And yielding in this decorator is necessary, since I need to track the values as the caller iterates the decorated generator (i.e. in "real time"). Meaning, I can't simply replace the for and yield with values = list(ret_value) and return values: if func is a generator, it needs to remain a generator once decorated. But if func is a plain function/method, wrap is still a generator even when the else branch is executed, so ret_value can never be fetched.
A toy example of using such a generator would be:
@decorator
def some_gen(some_list):
    for _ in range(10):
        if some_list[0] % 2 == 0:
            yield 1
        else:
            yield 0

def caller():
    some_list = [0]
    for i in some_gen(some_list):
        print(i)
        some_list[0] += 1  # changes what some_gen yields
For the toy example, there may be simpler solutions, but it's just to prove a point.
Maybe I'm missing something obvious, but I did some research and didn't find anything. The closest thing I found was this. However, that still doesn't let the decorator inspect every value returned by the wrapped generator (just the first). Does this have a solution, or are two types of decorators (one for functions and one for generators) necessary?
One solution I realized is:
def as_generator(gen, avg_update):
    for i in gen:
        avg_update(i)
        yield i

import types

def decorator(func):
    average = None  # assume average can be accessed by other means
    def wrap(*args, **kwargs):
        def avg_update(ret_value):
            nonlocal average
            # update average
            pass
        ret_value = func(*args, **kwargs)
        if isinstance(ret_value, types.GeneratorType):
            return as_generator(ret_value, avg_update)
        else:
            avg_update(ret_value)
            return ret_value
    return wrap
I don't know if this is the only one, or if there exists one without making a separate function for the generator case.
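Another sketch (my own suggestion, not from the answer above): since generator-ness is known at definition time, inspect.isgeneratorfunction lets you pick the right wrapper once, at decoration time, instead of checking the return value on every call. Here the running-average state is stood in for by a simple list of seen values:

```python
import inspect

def decorator(func):
    seen = []  # stand-in for the running-average state

    if inspect.isgeneratorfunction(func):
        def wrap(*args, **kwargs):
            for value in func(*args, **kwargs):
                seen.append(value)  # track each value as the caller iterates
                yield value
    else:
        def wrap(*args, **kwargs):
            ret_value = func(*args, **kwargs)
            seen.append(ret_value)
            return ret_value

    wrap.seen = seen  # expose the tracked values for inspection
    return wrap

@decorator
def double(x):
    return x * 2

@decorator
def count_up(n):
    for i in range(n):
        yield i

print(double(3))          # 6
print(list(count_up(3)))  # [0, 1, 2]
print(double.seen)        # [6]
print(count_up.seen)      # [0, 1, 2]
```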
How do you override the result of unpacking syntax *obj and **obj?
For example, can you somehow create an object thing which behaves like this:
>>> [*thing]
['a', 'b', 'c']
>>> [x for x in thing]
['d', 'e', 'f']
>>> {**thing}
{'hello world': 'I am a potato!!'}
Note: the iteration via __iter__ ("for x in thing") returns different elements from the *splat unpack.
I had a look at operator.mul and operator.pow, but those functions only concern usages with two operands, like a*b and a**b, and seem unrelated to splat operations.
* iterates over an object and uses its elements as arguments. ** iterates over an object's keys and uses __getitem__ (equivalent to bracket notation) to fetch key-value pairs. To customize *, simply make your object iterable, and to customize **, make your object a mapping:
import collections.abc

class MyIterable(object):
    def __iter__(self):
        return iter([1, 2, 3])

class MyMapping(collections.abc.Mapping):
    def __iter__(self):
        return iter('123')
    def __getitem__(self, item):
        return int(item)
    def __len__(self):
        return 3
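To check how these behave under unpacking, here is a runnable version (repeating the classes; note the import is collections.abc, since the bare collections.Mapping alias was removed in Python 3.10):

```python
import collections.abc

class MyIterable:
    def __iter__(self):
        return iter([1, 2, 3])

class MyMapping(collections.abc.Mapping):
    def __iter__(self):
        return iter('123')
    def __getitem__(self, item):
        return int(item)
    def __len__(self):
        return 3

print([*MyIterable()])  # [1, 2, 3]
print({**MyMapping()})  # {'1': 1, '2': 2, '3': 3}
```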
If you want * and ** to do something besides what's described above, you can't. I don't have a documentation reference for that statement (since it's easier to find documentation for "you can do this" than "you can't do this"), but I have a source quote. The bytecode interpreter loop in PyEval_EvalFrameEx calls ext_do_call to implement function calls with * or ** arguments. ext_do_call contains the following code:
if (!PyDict_Check(kwdict)) {
    PyObject *d;
    d = PyDict_New();
    if (d == NULL)
        goto ext_call_fail;
    if (PyDict_Update(d, kwdict) != 0) {
which, if the ** argument is not a dict, creates a dict and performs an ordinary update to initialize it from the keyword arguments (except that PyDict_Update won't accept a list of key-value pairs). Thus, you can't customize ** separately from implementing the mapping protocol.
Similarly, for * arguments, ext_do_call performs
if (!PyTuple_Check(stararg)) {
    PyObject *t = NULL;
    t = PySequence_Tuple(stararg);
which is equivalent to tuple(args). Thus, you can't customize * separately from ordinary iteration.
It'd be horribly confusing if f(*thing) and f(*iter(thing)) did different things. In any case, * and ** are part of the function call syntax, not separate operators, so customizing them (if possible) would be the callable's job, not the argument's. I suppose there could be use cases for allowing the callable to customize them, perhaps to pass dict subclasses like defaultdict through...
I did succeed in making an object that behaves how I described in my question, but I really had to cheat. So just posting this here for fun, really -
class Thing:
    def __init__(self):
        self.mode = 'abc'

    def __iter__(self):
        if self.mode == 'abc':
            yield 'a'
            yield 'b'
            yield 'c'
            self.mode = 'def'
        else:
            yield 'd'
            yield 'e'
            yield 'f'
            self.mode = 'abc'

    def __getitem__(self, item):
        return 'I am a potato!!'

    def keys(self):
        return ['hello world']
The iterator protocol is satisfied by a generator object returned from __iter__ (note that a Thing() instance itself is not an iterator, though it is iterable). The mapping protocol is satisfied by the presence of keys() and __getitem__. Yet, in case it wasn't already obvious, you can't call *thing twice in a row and have it unpack a,b,c twice in a row - so it's not really overriding splat like it pretends to be doing.
I was recently coding a few Python 3.x programs, and I wonder what the best way of handling simple errors in function args is. In this example I'm checking whether an inserted value can be converted to int. I've come up with two ways of doing this:
def test_err_try(x, y, z):
    try:
        int(x)
        int(y)
        int(z)
    except (ValueError, TypeError):  # try handler; note `except A or B` would only catch A
        return 'Bad args.'
    ##code##
or
def test_err_if(x, y, z):
    if type(x) != int or type(y) != int or type(z) != int:  # if handler
        raise ValueError('Bad args.')
    else:
        ##code##
I know that there is a difference in what the handlers are returning - in the first case it's just string 'Bad args.' and in the second it is ValueError exception.
What is the best (or rather simplest and shortest) way? First, second or neither and there is a better one?
The answer depends on your use case. If you are building a function which will be exposed to an end user, then the try except block offers more functionality because it will allow any variable which can be converted to an int. In this case I would suggest raising an error rather than having the function return a string:
try:
    x = int(x)
    y = int(y)
    z = int(z)
except ValueError:
    raise TypeError("Input arguments should be convertible to int")
If the function is meant for internal use within your program, then it is best to use the assert statement, because assertions can be disabled (e.g. by running Python with the -O flag) once you are finished debugging your program.
assert type(x) is int
assert type(y) is int
assert type(z) is int
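To illustrate the difference (my own sketch): the asserts fail loudly during development, and running the script with python -O strips them out entirely:

```python
def scaled_sum(x, y, z, factor=2):
    # internal-use guards: stripped when running under `python -O`
    assert type(x) is int
    assert type(y) is int
    assert type(z) is int
    return (x + y + z) * factor

print(scaled_sum(1, 2, 3))  # 12

try:
    scaled_sum(1, "2", 3)
except AssertionError:
    print("caught a bad internal call")
```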