pytest test needs parametrization at collection phase and at setup time

I have some tests I'd like to parametrize with arguments that need to be resolved at collection time and others that need to be resolved at setup time. I can't simply call
metafunc.parametrize in a pytest_generate_tests hook, since some arguments are fixtures that need indirect=True (so the value arrives as request.param), while the other arguments need indirect=False.
Any ideas how to do this?
Here's an example of what my tests look like and what I want to do:
def pytest_generate_tests(metafunc):
    if metafunc.function.__name__ == 'test_example':
        argnames = []
        argvalues = []
        parameters = getattr(metafunc.function, 'paramlist', ())
        for p in parameters:
            if type(p) == list:
                argnames = tuple(['myfixture'] + p)
            else:
                argvalues.append(tuple(['std'] + p['argvalues']))
                argvalues.append(tuple(['pro'] + p['argvalues']))
        # I want to do the following, but it won't work since some of the
        # args need indirect set to true and some need indirect set to false.
        metafunc.parametrize(argnames, argvalues, indirect=True)
    elif 'myfixture' in metafunc.fixturenames:
        # we have existing tests which use the fixture, but only with 'std'
        metafunc.parametrize("myfixture", ["std"], indirect=True)
    else:
        # we have existing tests which use older-style, non-fixture parametrization
        for p in getattr(metafunc.function, 'paramlist', ()):
            metafunc.addcall(funcargs=p)
def params(decolist):
    def wrapper(function):
        function.paramlist = decolist
        return function
    return wrapper
@pytest.fixture
def myfixture(request):
    if request.param == 'std':
        myfix = SomeObject()
    elif request.param == 'pro':
        myfix = SomeOtherObject()
    def fin():
        myfix.close()
    request.addfinalizer(fin)
    return myfix
@params([
    ['color', 'type'],
    { 'argvalues': ['blue', 'cat'] },
    { 'argvalues': ['pink', 'dog'] }
])
def test_example(myfixture, color, type):
    # this is the new test we want to add
    ...

def test_something(myfixture):
    # existing test which only uses the std fixture
    ...

@params([
    {'arg1': 1, 'arg2': 2},
    {'arg1': 3, 'arg2': 5}
])
def test_old_style(arg1, arg2):
    # existing tests which don't use fixtures
    ...
Thanks for reading through this! I know it's rather long.

By design, all parametrization happens at collection time.
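That said, here is a hedged sketch of how I'd handle the mixed case: metafunc.parametrize accepts a list of argument names for indirect, so only those names are routed through request.param at setup time while the rest are filled in directly at collection time. The argvalues below are just the expansion of the @params data from the question:

def pytest_generate_tests(metafunc):
    if metafunc.function.__name__ == 'test_example':
        argnames = ('myfixture', 'color', 'type')
        argvalues = [
            ('std', 'blue', 'cat'),
            ('pro', 'blue', 'cat'),
            ('std', 'pink', 'dog'),
            ('pro', 'pink', 'dog'),
        ]
        # Only 'myfixture' is parametrized indirectly (resolved at setup time
        # via request.param); 'color' and 'type' are plain collection-time args.
        metafunc.parametrize(argnames, argvalues, indirect=['myfixture'])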

switch case substitute for inside a python 3.9 class [duplicate]

I want to write a function in Python that returns different fixed values based on the value of an input index.
In other languages I would use a switch or case statement, but Python does not appear to have a switch statement. What are the recommended Python solutions in this scenario?
Python 3.10 (2021) introduced the match-case statement which provides a first-class implementation of a "switch" for Python. For example:
def f(x):
    match x:
        case 'a':
            return 1
        case 'b':
            return 2
        case _:
            return 0  # 0 is the default case if x is not found
The match-case statement is considerably more powerful than this simple example.
The original answer below was written in 2008, before match-case was available:
You could use a dictionary:
def f(x):
    return {
        'a': 1,
        'b': 2,
    }[x]
If you'd like defaults, you could use the dictionary get(key[, default]) function:
def f(x):
    return {
        'a': 1,
        'b': 2,
    }.get(x, 9)  # 9 is returned as the default if x is not found
I've always liked doing it this way
result = {
    'a': lambda x: x * 5,
    'b': lambda x: x + 7,
    'c': lambda x: x - 2
}[value](x)
In addition to the dictionary methods (which I really like, BTW), you can also use if-elif-else to obtain the switch/case/default functionality:
if x == 'a':
    # Do the thing
    pass
elif x == 'b':
    # Do the other thing
    pass
if x in 'bc':
    # Fall-through by not using elif, but now the default case includes case 'a'!
    pass
elif x in 'xyz':
    # Do yet another thing
    pass
else:
    # Do the default
    pass
This of course is not identical to switch/case - you cannot have fall-through as easily as leaving off the break statement, but you can have a more complicated test. Its formatting is nicer than a series of nested ifs, even though functionally that's what it is closer to.
Python >= 3.10
Wow, Python 3.10+ now has a match/case syntax which is like switch/case and more!
PEP 634 -- Structural Pattern Matching
Selected features of match/case
1 - Match values:
Matching values is similar to a simple switch/case in another language:
match something:
    case 1 | 2 | 3:
        # Match 1-3.
        ...
    case _:
        # Anything else.
        # If this default case is omitted and nothing matches,
        # the match statement simply does nothing.
        ...
2 - Match structural patterns:
match something:
    case str() | bytes():
        # Match a string-like object.
        ...
    case [str(), int()]:
        # Match a `str` and an `int` sequence
        # (`list` or a `tuple` but not a `set` or an iterator).
        ...
    case [_, _]:
        # Match a sequence of 2 variables.
        # To prevent a common mistake, sequence patterns don't match strings.
        ...
    case {"bandwidth": 100, "latency": 300}:
        # Match this dict. Extra keys are ignored.
        ...
3 - Capture variables
Parse an object; saving it as variables:
match something:
    case [name, count]:
        # Match a sequence of any two objects and parse them into the two variables.
        ...
    case [x, y, *rest]:
        # Match a sequence of two or more objects,
        # binding object #3 and on into the rest variable.
        ...
    case bytes() | str() as text:
        # Match any string-like object and save it to the text variable.
        ...
Capture variables can be useful when parsing data (such as JSON or HTML) that may come in one of a number of different patterns.
Capture variables are a feature, but they also mean that you need to use dotted constants (e.g. COLOR.RED) in patterns; otherwise a bare constant name is treated as a capture variable and gets overwritten.
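A small illustration of that pitfall, using a made-up Color enum: the dotted constant is compared by value, while a bare name silently becomes a capture pattern that matches anything:

from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

def describe(color):
    match color:
        case Color.RED:        # dotted constant: compared against Color.RED
            return "red"
        case Color.GREEN:
            return "green"
        case other:            # bare name: capture pattern, matches anything
            return f"unknown: {other}"

print(describe(Color.RED))   # red
print(describe("purple"))    # unknown: purple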
More sample usage:
match something:
    case 0 | 1 | 2:
        # Matches 0, 1 or 2 (value).
        print("Small number")
    case [] | [_]:
        # Matches an empty or single-value sequence (structure).
        # Matches lists and tuples but not sets.
        print("A short sequence")
    case str() | bytes():
        # Something of `str` or `bytes` type (data type).
        print("Something string-like")
    case _:
        # Anything not matched by the above.
        print("Something else")
Python <= 3.9
My favorite Python recipe for switch/case was:
choices = {'a': 1, 'b': 2}
result = choices.get(key, 'default')
Short and simple for simple scenarios.
Compare to 11+ lines of C code:
// C Language version of a simple 'switch/case'.
switch( key )
{
    case 'a' :
        result = 1;
        break;
    case 'b' :
        result = 2;
        break;
    default :
        result = -1;
}
You can even assign multiple variables by using tuples:
choices = {'a': (1, 2, 3), 'b': (4, 5, 6)}
(result1, result2, result3) = choices.get(key, ('default1', 'default2', 'default3'))
class switch(object):
    value = None
    def __new__(class_, value):
        class_.value = value
        return True

def case(*args):
    return any((arg == switch.value for arg in args))
Usage:
while switch(n):
    if case(0):
        print("You typed zero.")
        break
    if case(1, 4, 9):
        print("n is a perfect square.")
        break
    if case(2):
        print("n is an even number.")
    if case(2, 3, 5, 7):
        print("n is a prime number.")
        break
    if case(6, 8):
        print("n is an even number.")
        break
    print("Only single-digit numbers are allowed.")
    break
Tests:
n = 2
#Result:
#n is an even number.
#n is a prime number.
n = 11
#Result:
#Only single-digit numbers are allowed.
My favorite one is a really nice recipe. It's the closest one I've seen to actual switch case statements, especially in features.
class switch(object):
    def __init__(self, value):
        self.value = value
        self.fall = False

    def __iter__(self):
        """Return the match method once, then stop"""
        yield self.match
        # letting the generator end here is enough to stop iteration

    def match(self, *args):
        """Indicate whether or not to enter a case suite"""
        if self.fall or not args:
            return True
        elif self.value in args:  # changed for v1.5, see below
            self.fall = True
            return True
        else:
            return False
Here's an example:
# The following example is pretty much the exact use-case of a dictionary,
# but is included for its simplicity. Note that you can include statements
# in each suite.
v = 'ten'
for case in switch(v):
    if case('one'):
        print(1)
        break
    if case('two'):
        print(2)
        break
    if case('ten'):
        print(10)
        break
    if case('eleven'):
        print(11)
        break
    if case():  # default, could also just omit condition or 'if True'
        print("something else!")
        # No need to break here, it'll stop anyway

# break is used here to look as much like the real thing as possible, but
# elif is generally just as good and more concise.

# Empty suites are considered syntax errors, so intentional fall-throughs
# should contain 'pass'
c = 'z'
for case in switch(c):
    if case('a'): pass  # only necessary if the rest of the suite is empty
    if case('b'): pass
    # ...
    if case('y'): pass
    if case('z'):
        print("c is lowercase!")
        break
    if case('A'): pass
    # ...
    if case('Z'):
        print("c is uppercase!")
        break
    if case():  # default
        print("I dunno what c was!")

# As suggested by Pierre Quentel, you can even expand upon the
# functionality of the classic 'case' statement by matching multiple
# cases in a single shot. This greatly benefits operations such as the
# uppercase/lowercase example above:
import string
c = 'A'
for case in switch(c):
    if case(*string.ascii_lowercase):  # note the * for unpacking as arguments
        print("c is lowercase!")
        break
    if case(*string.ascii_uppercase):
        print("c is uppercase!")
        break
    if case('!', '?', '.'):  # normal argument passing style also applies
        print("c is a sentence terminator!")
        break
    if case():  # default
        print("I dunno what c was!")
Some of the comments indicated that a context manager solution using with foo as case rather than for case in foo might be cleaner, and for large switch statements the linear rather than quadratic behavior might be a nice touch. Part of the value in this answer with a for loop is the ability to have breaks and fallthrough, and if we're willing to play with our choice of keywords a little bit we can get that in a context manager too:
class Switch:
    def __init__(self, value):
        self.value = value
        self._entered = False
        self._broken = False
        self._prev = None

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        return False  # Allows a traceback to occur

    def __call__(self, *values):
        if self._broken:
            return False
        if not self._entered:
            if values and self.value not in values:
                return False
            self._entered, self._prev = True, values
            return True
        if self._prev is None:
            self._prev = values
            return True
        if self._prev != values:
            self._broken = True
            return False
        if self._prev == values:
            self._prev = None
            return False

    @property
    def default(self):
        return self()
Here's an example:
# Prints 'bar' then 'baz'.
with Switch(2) as case:
    while case(0):
        print('foo')
    while case(1, 2, 3):
        print('bar')
    while case(4, 5):
        print('baz')
        break
    while case.default:
        print('default')
        break
class Switch:
    def __init__(self, value):
        self.value = value

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        return False  # Allows a traceback to occur

    def __call__(self, *values):
        return self.value in values
from datetime import datetime

with Switch(datetime.today().weekday()) as case:
    if case(0):
        # Basic usage of switch
        print("I hate mondays so much.")
        # Note there is no break needed here
    elif case(1, 2):
        # This switch also supports multiple conditions (in one line)
        print("When is the weekend going to be here?")
    elif case(3, 4):
        print("The weekend is near.")
    else:
        # Default would occur here
        print("Let's go have fun!")  # Didn't use case for example purposes
There's a pattern that I learned from Twisted Python code.
class SMTP:
    def lookupMethod(self, command):
        return getattr(self, 'do_' + command.upper(), None)

    def do_HELO(self, rest):
        return 'Howdy ' + rest

    def do_QUIT(self, rest):
        return 'Bye'

SMTP().lookupMethod('HELO')('foo.bar.com')  # => 'Howdy foo.bar.com'
SMTP().lookupMethod('QUIT')('')  # => 'Bye'
You can use it any time you need to dispatch on a token and execute an extended piece of code. In a state machine you would have state_ methods, and dispatch on self.state. This switch can be cleanly extended by inheriting from the base class and defining your own do_ methods. Often you won't even have do_ methods in the base class.
Edit: how exactly is that used
In case of SMTP you will receive HELO from the wire. The relevant code (from twisted/mail/smtp.py, modified for our case) looks like this
class SMTP:
    # ...
    def do_UNKNOWN(self, rest):
        raise NotImplementedError('received unknown command')

    def state_COMMAND(self, line):
        line = line.strip()
        parts = line.split(None, 1)
        if parts:
            method = self.lookupMethod(parts[0]) or self.do_UNKNOWN
            if len(parts) == 2:
                return method(parts[1])
            else:
                return method('')
        else:
            raise SyntaxError('bad syntax')

SMTP().state_COMMAND(' HELO foo.bar.com ')  # => 'Howdy foo.bar.com'
You'll receive ' HELO foo.bar.com ' (or you might get 'QUIT' or 'RCPT TO: foo'). This is tokenized into parts as ['HELO', 'foo.bar.com']. The actual method lookup name is taken from parts[0].
(The original method is also called state_COMMAND, because it uses the same pattern to implement a state machine, i.e. getattr(self, 'state_' + self.mode))
I'm just going to drop my two cents in here. The reason there isn't a case/switch statement in Python is because Python follows the principle of "there's only one right way to do something". So obviously you could come up with various ways of recreating switch/case functionality, but the Pythonic way of accomplishing this is the if/elif construct. I.e.,
if something:
    return "first thing"
elif somethingelse:
    return "second thing"
elif yetanotherthing:
    return "third thing"
else:
    return "default thing"
I just felt the Zen of Python (PEP 20) deserved a nod here. One of the beautiful things about Python is its simplicity and elegance. That is largely derived from principles laid out there, including "There should be one, and preferably only one, obvious way to do it."
Let's say you don't want to just return a value, but want to use methods that change something on an object. Using the approach stated here would be:
result = {
    'a': obj.increment(x),
    'b': obj.decrement(x)
}.get(value, obj.default(x))
Here Python evaluates all methods in the dictionary.
So even if your value is 'a', the object will get incremented and decremented by x.
Solution:
func, args = {
    'a': (obj.increment, (x,)),
    'b': (obj.decrement, (x,)),
}.get(value, (obj.default, (x,)))
result = func(*args)
So you get a tuple containing a function and its arguments. This way, only the function reference and the argument tuple are returned, not evaluated; result then evaluates the returned function call.
Solution to run functions:
result = {
    'case1': foo1,
    'case2': foo2,
    'case3': foo3,
}.get(option)(parameters_optional)
where foo1(), foo2() and foo3() are functions
Example 1 (with parameters):
option = number['type']
result = {
'number': value_of_int, # result = value_of_int(number['value'])
'text': value_of_text, # result = value_of_text(number['value'])
'binary': value_of_bin, # result = value_of_bin(number['value'])
}.get(option)(value['value'])
Example 2 (no parameters):
option = number['type']
result = {
    'number': func_for_number,  # result = func_for_number()
    'text': func_for_text,      # result = func_for_text()
    'binary': func_for_bin,     # result = func_for_bin()
}.get(option)()
Example 3 (only values):
option = number['type']
result = {
    'number': lambda: 10,        # result = 10
    'text': lambda: 'ten',       # result = 'ten'
    'binary': lambda: 0b101111,  # result = 47
}.get(option)()
If you have a complicated case block you can consider using a function dictionary lookup table...
If you haven't done this before it's a good idea to step into your debugger and view exactly how the dictionary looks up each function.
NOTE: Do not use "()" inside the case/dictionary lookup or it will call each of your functions as the dictionary / case block is created. Remember this because you only want to call each function once using a hash style lookup.
def first_case():
    print("first")

def second_case():
    print("second")

def third_case():
    print("third")

mycase = {
    'first': first_case,    # do not use ()
    'second': second_case,  # do not use ()
    'third': third_case,    # do not use ()
}
myfunc = mycase['first']
myfunc()
If you're looking for an extra statement like "switch", I built a Python module that extends Python. It's called ESPY, for "Enhanced Structure for Python", and it's available for both Python 2.x and Python 3.x.
For example, in this case, a switch statement could be performed by the following code:
macro switch(arg1):
    while True:
        cont=False
        val=%arg1%
        socket case(arg2):
            if val==%arg2% or cont:
                cont=True
                socket
        socket else:
            socket
        break
That can be used like this:
a=3
switch(a):
    case(0):
        print("Zero")
    case(1):
        print("Smaller than 2")
        break
    else:
        print("greater than 1")
So ESPY translates it into Python as:
a=3
while True:
    cont=False
    if a==0 or cont:
        cont=True
        print("Zero")
    if a==1 or cont:
        cont=True
        print("Smaller than 2")
        break
    print("greater than 1")
    break
Most of the answers here are pretty old, and especially the accepted ones, so it seems worth updating.
First, the official Python FAQ covers this, and recommends the elif chain for simple cases and the dict for larger or more complex cases. It also suggests a set of visit_ methods (a style used by many server frameworks) for some cases:
def dispatch(self, value):
    method_name = 'visit_' + str(value)
    method = getattr(self, method_name)
    method()
The FAQ also mentions PEP 275, which was written to get an official once-and-for-all decision on adding C-style switch statements. But that PEP was actually deferred to Python 3, and it was only officially rejected as a separate proposal, PEP 3103. The answer was, of course, no—but the two PEPs have links to additional information if you're interested in the reasons or the history.
One thing that came up multiple times (and can be seen in PEP 275, even though it was cut out as an actual recommendation) is that if you're really bothered by having 8 lines of code to handle 4 cases, vs. the 6 lines you'd have in C or Bash, you can always write this:
if x == 1: print('first')
elif x == 2: print('second')
elif x == 3: print('third')
else: print('did not place')
This isn't exactly encouraged by PEP 8, but it's readable and not too unidiomatic.
Over the more than a decade since PEP 3103 was rejected, the issue of C-style case statements, or even the slightly more powerful version in Go, has been considered dead; whenever anyone brings it up on python-ideas or -dev, they're referred to the old decision.
However, the idea of full ML-style pattern matching arises every few years, especially since languages like Swift and Rust have adopted it. The problem is that it's hard to get much use out of pattern matching without algebraic data types. While Guido has been sympathetic to the idea, nobody's come up with a proposal that fits into Python very well. (You can read my 2014 strawman for an example.) This could change with dataclass in 3.7 and some sporadic proposals for a more powerful enum to handle sum types, or with various proposals for different kinds of statement-local bindings (like PEP 3150, or the set of proposals currently being discussed on -ideas). But so far, it hasn't.
There are also occasionally proposals for Perl 6-style matching, which is basically a mishmash of everything from elif to regex to single-dispatch type-switching.
Expanding on the "dict as switch" idea. If you want to use a default value for your switch:
def f(x):
    try:
        return {
            'a': 1,
            'b': 2,
        }[x]
    except KeyError:
        return 'default'
I found that a common switch structure:
switch ...parameter...
case p1: v1; break;
case p2: v2; break;
default: v3;
can be expressed in Python as follows:
(lambda x: v1 if p1(x) else v2 if p2(x) else v3)
or formatted in a clearer way:
(lambda x:
    v1 if p1(x) else
    v2 if p2(x) else
    v3)
Instead of being a statement, the Python version is an expression, which evaluates to a value.
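For instance, a small usage sketch with concrete predicates (the names here are invented for the example):

classify = (lambda x:
            'negative' if x < 0 else
            'zero' if x == 0 else
            'positive')

print(classify(-3))  # negative
print(classify(0))   # zero
print(classify(8))   # positive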
The solutions I use:
A combination of 2 of the solutions posted here, which is relatively easy to read and supports defaults.
result = {
    'a': lambda x: x * 5,
    'b': lambda x: x + 7,
    'c': lambda x: x - 2
}.get(whatToUse, lambda x: x - 22)(value)
where
.get('c', lambda x: x - 22)(23)
looks up "lambda x: x - 2" in the dict and uses it with x=23
.get('xxx', lambda x: x - 22)(44)
doesn't find it in the dict and uses the default "lambda x: x - 22" with x=44.
You can use a dispatched dict:
#!/usr/bin/env python

def case1():
    print("This is case 1")

def case2():
    print("This is case 2")

def case3():
    print("This is case 3")

token_dict = {
    "case1": case1,
    "case2": case2,
    "case3": case3,
}

def main():
    cases = ("case1", "case3", "case2", "case1")
    for case in cases:
        token_dict[case]()

if __name__ == '__main__':
    main()
Output:
This is case 1
This is case 3
This is case 2
This is case 1
I didn't find the simple answer I was looking for anywhere on Google, but I figured it out anyway and decided to post it; maybe it will save someone else a few scratches on the head. The key is simply "in" and tuples. Here is switch-statement behavior with fall-through, including RANDOM fall-through.
l = ['Dog', 'Cat', 'Bird', 'Bigfoot',
     'Dragonfly', 'Snake', 'Bat', 'Loch Ness Monster']

for x in l:
    if x in ('Dog', 'Cat'):
        x += " has four legs"
    elif x in ('Bat', 'Bird', 'Dragonfly'):
        x += " has wings."
    elif x in ('Snake',):
        x += " has a forked tongue."
    else:
        x += " is a big mystery by default."
    print(x)

print()

for x in range(10):
    if x in (0, 1):
        x = "Values 0 and 1 caught here."
    elif x in (2,):
        x = "Value 2 caught here."
    elif x in (3, 7, 8):
        x = "Values 3, 7, 8 caught here."
    elif x in (4, 6):
        x = "Values 4 and 6 caught here"
    else:
        x = "Values 5 and 9 caught in default."
    print(x)
Provides:
Dog has four legs
Cat has four legs
Bird has wings.
Bigfoot is a big mystery by default.
Dragonfly has wings.
Snake has a forked tongue.
Bat has wings.
Loch Ness Monster is a big mystery by default.
Values 0 and 1 caught here.
Values 0 and 1 caught here.
Value 2 caught here.
Values 3, 7, 8 caught here.
Values 4 and 6 caught here
Values 5 and 9 caught in default.
Values 4 and 6 caught here
Values 3, 7, 8 caught here.
Values 3, 7, 8 caught here.
Values 5 and 9 caught in default.
# simple case alternative
some_value = 5.0

# this while loop block simulates a case block
# case
while True:
    # case 1
    if some_value > 5:
        print('Greater than five')
        break
    # case 2
    if some_value == 5:
        print('Equal to five')
        break
    # else case 3
    print('Must be less than 5')
    break
I was quite confused after reading the accepted answer, but this cleared it all up:
def numbers_to_strings(argument):
    switcher = {
        0: "zero",
        1: "one",
        2: "two",
    }
    return switcher.get(argument, "nothing")
This code is analogous to:
function(argument){
    switch(argument) {
        case 0:
            return "zero";
        case 1:
            return "one";
        case 2:
            return "two";
        default:
            return "nothing";
    }
}
Check the Source for more about dictionary mapping to functions.
def f(x):
    dictionary = {'a': 1, 'b': 2, 'c': 3}
    # Returns the value for the key x; returns 'Not Found' if x isn't a key in the dictionary
    return dictionary.get(x, 'Not Found')
I liked Mark Bies's answer.
Since the x variable must be used twice, I modified the lambda functions to be parameterless.
With the original form I have to call result[value](value):
In [2]: result = {
   ...:     'a': lambda x: 'A',
   ...:     'b': lambda x: 'B',
   ...:     'c': lambda x: 'C'
   ...: }
   ...: result['a']('a')
Out[2]: 'A'

In [3]: result = {
   ...:     'a': lambda: 'A',
   ...:     'b': lambda: 'B',
   ...:     'c': lambda: 'C',
   ...:     None: lambda: 'Nothing else matters'
   ...: }
   ...: result['a']()
Out[3]: 'A'
Edit: I noticed that I can use the None type as a dictionary key, so this would emulate switch with a "case else" default.
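A quick sketch of how that default lookup could be wired up (the helper name lookup is just for illustration):

result = {
    'a': lambda: 'A',
    'b': lambda: 'B',
    'c': lambda: 'C',
    None: lambda: 'Nothing else matters',
}

def lookup(key):
    # fall back to the None entry when the key is missing, emulating "case else"
    return result.get(key, result[None])()

print(lookup('a'))  # A
print(lookup('z'))  # Nothing else matters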
def f(x):
    return 1 if x == 'a' else \
           2 if x in 'bcd' else \
           0  # default
Short and easy to read, has a default value and supports expressions in both conditions and return values.
However, it is less efficient than the solution with a dictionary. For example, Python has to scan through all the conditions before returning the default value.
Simple, not tested; each condition is evaluated independently: there is no fall-through, but all cases are evaluated (although the expression to switch on is only evaluated once), unless there is a break statement. For example,
for case in [expression]:
    if case == 1:
        print(end='Was 1. ')
    if case == 2:
        print(end='Was 2. ')
        break
    if case in (1, 2):
        print(end='Was 1 or 2. ')
    print(end='Was something. ')
This prints "Was 1. Was 1 or 2. Was something. " if expression evaluates to 1, "Was 2. " if it evaluates to 2, or "Was something. " if it evaluates to anything else.
There have been a lot of answers so far that have said, "we don't have a switch in Python, do it this way". However, I would like to point out that the switch statement itself is an easily-abused construct that can and should be avoided in most cases because they promote lazy programming. Case in point:
def ToUpper(lcChar):
    if (lcChar == 'a' or lcChar == 'A'):
        return 'A'
    elif (lcChar == 'b' or lcChar == 'B'):
        return 'B'
    ...
    elif (lcChar == 'z' or lcChar == 'Z'):
        return 'Z'
    else:
        return None  # or something
Now, you could do this with a switch-statement (if Python offered one) but you'd be wasting your time because there are methods that do this just fine. Or maybe, you have something less obvious:
def ConvertToReason(code):
    if (code == 200):
        return 'Okay'
    elif (code == 400):
        return 'Bad Request'
    elif (code == 404):
        return 'Not Found'
    else:
        return None
However, this sort of operation can and should be handled with a dictionary because it will be faster, less complex, less prone to error and more compact.
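For example, the same lookup expressed as a dictionary (one possible sketch, not the only way to write it):

HTTP_REASONS = {200: 'Okay', 400: 'Bad Request', 404: 'Not Found'}

def convert_to_reason(code):
    # returns None for unknown codes, matching the elif chain above
    return HTTP_REASONS.get(code)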
And the vast majority of "use cases" for switch statements will fall into one of these two cases; there's just very little reason to use one if you've thought about your problem thoroughly.
So, rather than asking "how do I switch in Python?", perhaps we should ask, "why do I want to switch in Python?" because that's often the more interesting question and will often expose flaws in the design of whatever you're building.
Now, that isn't to say that switches should never be used either. State machines, lexers, parsers and automata all use them to some degree and, in general, when you start from a symmetrical input and go to an asymmetrical output they can be useful; you just need to make sure that you don't use the switch as a hammer because you see a bunch of nails in your code.
A solution I tend to use which also makes use of dictionaries is:
def decision_time(key, *args, **kwargs):
    def action1():
        """This function is a closure - and has access to all the arguments"""
        pass
    def action2():
        """This function is a closure - and has access to all the arguments"""
        pass
    def action3():
        """This function is a closure - and has access to all the arguments"""
        pass
    def default():
        """Fallback closure used when the key is not matched"""
        pass
    return {1: action1, 2: action2, 3: action3}.get(key, default)()
This has the advantage that it doesn't try to evaluate the functions every time, and you just have to ensure that the outer function gets all the information that the inner functions need.
Defining:
def switch1(value, options):
    if value in options:
        options[value]()
allows you to use a fairly straightforward syntax, with the cases bundled into a map:
def sample1(x):
    local = 'betty'
    switch1(x, {
        'a': lambda: print("hello"),
        'b': lambda: (
            print("goodbye," + local),
            print("!")),
    })
I kept trying to redefine switch in a way that would let me get rid of the "lambda:", but gave up. Tweaking the definition:
def switch(value, *maps):
    options = {}
    for m in maps:
        options.update(m)
    if value in options:
        options[value]()
    elif None in options:
        options[None]()
Allowed me to map multiple cases to the same code, and to supply a default option:
def sample(x):
    switch(x, {
        _: lambda: print("other")
        for _ in 'cdef'
    }, {
        'a': lambda: print("hello"),
        'b': lambda: (
            print("goodbye,"),
            print("!")),
        None: lambda: print("I dunno")
    })
Each replicated case has to be in its own dictionary; switch() consolidates the dictionaries before looking up the value. It's still uglier than I'd like, but it has the basic efficiency of using a hashed lookup on the expression, rather than a loop through all the keys.
Expanding on Greg Hewgill's answer - We can encapsulate the dictionary-solution using a decorator:
def case(callable):
    """switch-case decorator"""
    class case_class(object):
        def __init__(self, *args, **kwargs):
            self.args = args
            self.kwargs = kwargs

        def do_call(self):
            return callable(*self.args, **self.kwargs)
    return case_class
def switch(key, cases, default=None):
    """switch-statement"""
    ret = None
    try:
        ret = cases[key].do_call()
    except KeyError:
        if default:
            ret = default.do_call()
    finally:
        return ret
This can then be used with the @case decorator:
@case
def case_1(arg1):
    print('case_1: ', arg1)

@case
def case_2(arg1, arg2):
    print('case_2')
    return arg1, arg2

@case
def default_case(arg1, arg2, arg3):
    print('default_case: ', arg1, arg2, arg3)

ret = switch(somearg, {
    1: case_1('somestring'),
    2: case_2(13, 42)
}, default_case(123, 'astring', 3.14))

print(ret)
The good news is that this has already been done in the NeoPySwitch module. Simply install it using pip:
pip install NeoPySwitch

How search an unordered list for a key using reduce?

I have a basic reduce function and I want to reduce a list in order to check if an item is in the list. I have defined the function below where f is a comparison function, id_ is the item I am searching for, and a is the list. For example, reduce(f, 2, [1, 6, 2, 7]) would return True since 2 is in the list.
def reduce(f, id_, a):
    if len(a) == 0:
        return id_
    elif len(a) == 1:
        return a[0]
    else:
        # can call these in parallel
        res = f(reduce(f, id_, a[:len(a)//2]),
                reduce(f, id_, a[len(a)//2:]))
        return res
I tried passing it a comparison function:
def isequal(x, element):
    if x == True:        # if element has already been found in list -> True
        return True
    if x == element:     # if key is equal to element -> True
        return True
    else:                # o.w. -> False
        return False
I realize this does not work because x is not the key I am searching for. I get how reduce works with summing and products, but I am failing to see how this function would even know what the key is to check if the next element matches.
I apologize, I am a bit new to this. Thanks in advance for any insight, I greatly appreciate it!
Based on your example, the problem you seem to be trying to solve is determining whether a value is or is not in a list. In that case reduce is probably not the best way to go about that. To check if a particular value is in a list or not, Python has a much simpler way of doing that:
my_list = [1, 6, 2, 7]
print(2 in my_list)   # True
print(55 in my_list)  # False
Edit: Given OP's comment that they were required to use reduce to solve the problem, the code below will work, but I'm not proud of it. ;^) To see how reduce is intended to be used, here is a good source of information.
Example:
from functools import reduce

def test_match(match_params, candidate):
    pattern, found_match = match_params
    if not found_match and pattern == candidate:
        match_params = (pattern, True)
    return match_params

num_list = [1, 2, 3, 4, 5]

_, found_match = reduce(test_match, num_list, (2, False))
print(found_match)

_, found_match = reduce(test_match, num_list, (55, False))
print(found_match)
Output:
True
False

Is there a way to extract function parameters from a dictionary?

I am trying to write some more functional code in Python, and I want to know if it is possible to turn a dictionary's (key, value) pairs into function parameters.
I am currently doing this in a more imperative way, where I filter and then manually extract each key depending on the result of the filter. My current code:
def a(i: int, config: dict):
    function_array = [function1, function2, function3]
    selected = function_array[i]
    if (i == "0"):
        result = selected(x=config['x'])
    elif (i == "1"):
        result = selected(y=config['y'])
    elif (i == "2"):
        result = selected(z=config['z'])
    return result
The current result is correct, but when I have many cases, I need to hardcode each parameter for the specified function. So, that is why I want to know if it is possible to pass the config object as I want (with an x when i is 0, for example) and then just do something like this:
def a(i: int, config: dict):
    function_array = [function1, function2, function3]
    result = function_array[i](config)
    return result
The syntax for passing items from a dictionary as function parameters is simply selected(**config)
So for your example, it would look something like this:
def function1(x=0):
    return x + 1

def function2(y=42):
    return y * 2

def function3(z=100):
    return z

def a(i, config):
    function_array = [function1, function2, function3]
    selected = function_array[i]
    return selected(**config)

config = {'x': 10}
a(0, config)  # calls function1(x=10)

config = {'y': 20}
a(1, config)  # calls function2(y=20)

config = {}
a(2, config)  # calls function3()
Every python function can be instructed to take a dictionary of keywords. See e.g. https://www.pythoncheatsheet.org/blog/python-easy-args-kwargs . (Official source at https://docs.python.org/3/reference/compound_stmts.html#function-definitions, but it's harder to read.)
You could do:
def a(i: int, keyword: str, **kwargs: dict):
    if keyword in kwargs:
        result = kwargs[keyword](i)
and you would run it with something like:
a(5, "func3", func1=print, func2=sum, func3=all)
Or, you could just pass a dictionary itself into the function:
def a(i: int, keyword: str, config: dict):
    if keyword in config:
        result = config[keyword](i)
This would be run with something like:
a(5, "func3", {"func1": print, "func2": sum, "func3": all})
The only difference is that the ** in the function declaration tells python to automatically make a dictionary out of explicit keywords. In the second example, you make the dictionary by yourself.
There's an important thing happening behind the scenes here. Functions are being passed around just like anything else. In python, functions are still objects. You can pass a function just as easily as you can pass an int. So if you wanted to have a list of lists, where each inner list is a function with some arguments, you easily could:
things_to_do = [[sum, 5, 7, 9], [any, 1, 0], [all, 1, 0]]

for thing_list in things_to_do:
    function = thing_list[0]
    args = thing_list[1:]
    print(function(args))
And you'll get the following results:
21
True
False
(Note also that all of those functions take an iterable, such as a list. If you want to pass each argument separately, you would use *args instead of args.)
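A tiny illustration of that difference (the add3 function is made up for the example):

def add3(a, b, c):
    return a + b + c

args = [5, 7, 9]
print(sum(args))    # sum takes a single iterable: 21
print(add3(*args))  # add3 takes separate arguments, so unpack with *: 21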
You can do it with defined functions, too. If you have
def foo1(arg1):
    pass

def foo2(arg1, arg2):
    pass
you can just as easily have
things_to_do = [[sum, 5, 7, 9], [foo1, 'a'], [foo2, 0, None]]

Most "pythonic" way of populating a nested indexed list from a flat list

I have a situation where I am generating a number of template nested lists with n organised elements where each number in the template corresponds to the index from a flat list of n values:
S =[[[2,4],[0,3]], [[1,5],[6,7]],[[10,9],[8,11],[13,12]]]
For each of these templates, the values inside them correspond to the index value from a flat list like so:
A = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n"]
to get;
B = [[["c","e"],["a","d"]], [["b","f"],["g","h"]],[["k","j"],["i","l"],["n","m"]]]
How can I populate the structure S with the values from list A to get B, considering that:
- the values of list A can change in value but not in number
- the template can have any depth of nested structure, but will only use each index from A once, as in the example shown above.
I did this with the very ugly append-based unflatten function below, which works if the depth of the template is not more than 3 levels. Is there a better way of accomplishing this using generators and yield, so it works for any arbitrary depth of template?
Another solution I thought of but couldn't implement is to express the template as a string with generated variables and then assign the variables new values using eval().
def unflatten(item, template):
    # works up to 3 levels of nested lists
    tree = []
    for el in template:
        if isinstance(el, collections.Iterable) and not isinstance(el, str):
            tree.append([])
            for j, el2 in enumerate(el):
                if isinstance(el2, collections.Iterable) and not isinstance(el2, str):
                    tree[-1].append([])
                    for k, el3 in enumerate(el2):
                        if isinstance(el3, collections.Iterable) and not isinstance(el3, str):
                            tree[-1][-1].append([])
                        else:
                            tree[-1][-1].append(item[el3])
                else:
                    tree[-1].append(item[el2])
        else:
            tree.append(item[el])
    return tree
I need a better solution that works recursively and scales to hundreds of organised elements.
UPDATE 1
The timing function I am using is this one:
import time
from functools import wraps

def timethis(func):
    '''
    Decorator that reports the execution time.
    '''
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        end = time.time()
        print(func.__name__, end - start)
        return result
    return wrapper
and I am wrapping the function suggested by @DocDrivin inside another function so I can call it with a one-liner. Below it is my ugly append-based function.
@timethis
def unflatten(A, S):
    for i in range(100000):
        # making sure that you don't modify S
        rebuilt_list = copy.deepcopy(S)
        # create the mapping dict
        adict = {key: val for key, val in enumerate(A)}
        # the recursive worker function
        def worker(alist):
            for idx, entry in enumerate(alist):
                if isinstance(entry, list):
                    worker(entry)
                else:
                    # might be a good idea to catch key errors here
                    alist[idx] = adict[entry]
        # build list
        worker(rebuilt_list)
    return rebuilt_list
@timethis
def unflatten2(A, S):
    for i in range(100000):
        # up to level 3
        temp_tree = []
        for i, el in enumerate(S):
            if isinstance(el, collections.Iterable) and not isinstance(el, str):
                temp_tree.append([])
                for j, el2 in enumerate(el):
                    if isinstance(el2, collections.Iterable) and not isinstance(el2, str):
                        temp_tree[-1].append([])
                        for k, el3 in enumerate(el2):
                            if isinstance(el3, collections.Iterable) and not isinstance(el3, str):
                                temp_tree[-1][-1].append([])
                            else:
                                temp_tree[-1][-1].append(A[el3])
                    else:
                        temp_tree[-1].append(A[el2])
            else:
                temp_tree.append(A[el])
    return temp_tree
The recursive method has much better syntax; however, it is considerably slower than the append method.
You can do this by using recursion:
import copy

S = [[[2,4],[0,3]], [[1,5],[6,7]], [[10,9],[8,11],[13,12]]]
A = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n"]

# making sure that you don't modify S
B = copy.deepcopy(S)

# create the mapping dict
adict = {key: val for key, val in enumerate(A)}

# the recursive worker function
def worker(alist):
    for idx, entry in enumerate(alist):
        if isinstance(entry, list):
            worker(entry)
        else:
            # might be a good idea to catch key errors here
            alist[idx] = adict[entry]

worker(B)
print(B)
This yields the following output for B:
[[['c', 'e'], ['a', 'd']], [['b', 'f'], ['g', 'h']], [['k', 'j'], ['i', 'l'], ['n', 'm']]]
I did not check if the list entry can actually be mapped with the dict, so you might want to add a check (marked the spot in the code).
Small edit: just saw that your desired output (probably) has a typo. Index 3 maps to "d", not to "c". You might want to edit that.
Big edit: To prove that my proposal is not as catastrophic as it seems at a first glance, I decided to include some code to test its runtime. Check this out:
import timeit

setup1 = '''
import copy
S = [[[2,4],[0,3]], [[1,5],[6,7]], [[10,9],[8,11],[13,12]]]
A = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n"]
adict = {key: val for key, val in enumerate(A)}

# the recursive worker function
def worker(olist):
    alist = copy.deepcopy(olist)
    for idx, entry in enumerate(alist):
        if isinstance(entry, list):
            alist[idx] = worker(entry)  # keep the converted sublist
        else:
            alist[idx] = adict[entry]
    return alist
'''

code1 = '''
worker(S)
'''

setup2 = '''
import collections
S = [[[2,4],[0,3]], [[1,5],[6,7]], [[10,9],[8,11],[13,12]]]
A = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n"]

def unflatten2(A, S):
    # up to level 3
    temp_tree = []
    for i, el in enumerate(S):
        if isinstance(el, collections.Iterable) and not isinstance(el, str):
            temp_tree.append([])
            for j, el2 in enumerate(el):
                if isinstance(el2, collections.Iterable) and not isinstance(el2, str):
                    temp_tree[-1].append([])
                    for k, el3 in enumerate(el2):
                        if isinstance(el3, collections.Iterable) and not isinstance(el3, str):
                            temp_tree[-1][-1].append([])
                        else:
                            temp_tree[-1][-1].append(A[el3])
                else:
                    temp_tree[-1].append(A[el2])
        else:
            temp_tree.append(A[el])
    return temp_tree
'''

code2 = '''
unflatten2(A, S)
'''

print(f'Recursive func: {[i/10000 for i in timeit.repeat(setup=setup1, stmt=code1, repeat=3, number=10000)]}')
print(f'Original func: {[i/10000 for i in timeit.repeat(setup=setup2, stmt=code2, repeat=3, number=10000)]}')
I am using the timeit module to do my tests. When running this snippet, you will get an output similar to this:
Recursive func: [8.74395573977381e-05, 7.868373290111777e-05, 7.9051584698027e-05]
Original func: [3.548609419958666e-05, 3.537480780214537e-05, 3.501355930056888e-05]
These are the average times of 10000 iterations, and I decided to run it 3 times to show the fluctuation. As you can see, my function in this particular case is 2.22 to 2.50 times slower than the original, but still acceptable. The slowdown is probably due to using deepcopy.
Your test has some flaws, e.g. you redefine the mapping dict at every iteration. You wouldn't do that normally, instead you would give it as a param to the function after defining it once.
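A minimal sketch of what I mean, reusing the worker function from above: the mapping dict is built once, and only the deep copy stays inside the timed loop (the iteration count and names are taken from the question's benchmark):

import copy

def unflatten_recursive(A, S, iterations=100000):
    adict = {key: val for key, val in enumerate(A)}  # built once, not per iteration

    def worker(alist):
        for idx, entry in enumerate(alist):
            if isinstance(entry, list):
                worker(entry)
            else:
                alist[idx] = adict[entry]

    for _ in range(iterations):
        rebuilt_list = copy.deepcopy(S)
        worker(rebuilt_list)
    return rebuilt_list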
You can use generators with recursion
A = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n"]
S = [[[2,4],[0,3]], [[1,5],[6,7]],[[10,9],[8,11],[13,12]]]
A = {k: v for k, v in enumerate(A)}
def worker(alist):
for e in alist:
if isinstance(e, list):
yield list(worker(e))
else:
yield A[e]
def do(alist):
return list(worker(alist))
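A quick usage check with the S and A defined above:

B = do(S)
print(B)
# [[['c', 'e'], ['a', 'd']], [['b', 'f'], ['g', 'h']], [['k', 'j'], ['i', 'l'], ['n', 'm']]]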
This is also a recursive approach, just avoiding individual item assignment and letting list do the work by reading the values "hot off the CPU" from your generator. If you want, you can time it the same way, with setup1 and setup2 copied from @DocDrivin's answer (but I recommend you don't exaggerate with the iteration counts; do it locally if you want to play around).
Here are example time numbers:
My result: [0.11194685893133283, 0.11086182110011578, 0.11299032904207706]
result1: [1.0810202199500054, 1.046933784848079, 0.9381260159425437]
result2: [0.23467918601818383, 0.236218704842031, 0.22498539905063808]

Loop over Set of Functions for Variable Pipelining and Separate Usage

I have the following 10 functions:
def function1(data1, data2):
    ...
    return value

def function2(data1, data2):
    ...
    return value

...

def function10(data1, data2):
    ...
    return value
I want to use these functions separately when needed but also
in a pipeline for calculating properties and appending to a list.
Like this:
collecting_list = []
for idx in range(10):
    collecting_list.append(function1(data1[idx], data2[idx]))
    collecting_list.append(function2(data1[idx], data2[idx]))
    collecting_list.append(function3(data1[idx], data2[idx]))
    collecting_list.append(function4(data1[idx], data2[idx]))
    collecting_list.append(function5(data1[idx], data2[idx]))
    collecting_list.append(function6(data1[idx], data2[idx]))
    collecting_list.append(function7(data1[idx], data2[idx]))
    collecting_list.append(function8(data1[idx], data2[idx]))
    collecting_list.append(function9(data1[idx], data2[idx]))
    collecting_list.append(function10(data1[idx], data2[idx]))
Obviously I would need some property to loop over function names, but I never came across this problem before and was just wondering if I can call those functions in a loop without hard coding this and just adjusting the function-number (e.g. function1(), function2(), ... function10()).
Hints and ideas appreciated!
use lambda and exec.
you could have a string array of the function names, and lambda functions that return the data like something below. With lambda functions, you can reuse the same name dataX over and over again and with proper implementation get the right data needed. See below for a very basic, abstract example:
import random

def getData1():
    return random.randint(1, 10)

def getData2():
    return random.randint(11, 20)

def function1(data1):
    print("f1, {}".format(data1))

def function2(data1, data2):
    print("f2, {} and {}".format(data1, data2))

data1 = lambda: getData1()  # these can be any function that serves as the
data2 = lambda: getData2()  # source for your data. using lambda allows for
                            # anonymization and reuse

functionList = ["function1({})".format(data1()), "function2({},{})".format(data1(), data2())]

for f in functionList:
    exec(f)

function1(data1())
You might ask why not just use getData1() in the function list instead of data1, and the answer has to do with parameters. If the getDataX functions required parameters, you wouldn't want to compute the functionList every time a parameter name changed. This is one of the benefits of using lambda and exec.
Um, sure?
import sys
import types

module_name = sys.modules[__name__]

def function1(data1, data2):
    return ("func1", data1 + data2)

def function2(data1, data2):
    return ("func2", data1 + data2)

def function3(data1, data2):
    return ("func3", data1 + data2)

def function4(data1, data2):
    return ("func4", data1 + data2)

def function5(data1, data2):
    return ("func5", data1 + data2)

def get_functions():
    func_list = list()
    for k in sorted(module_name.__dict__.keys()):
        if k.startswith('function'):
            if isinstance(module_name.__dict__[k], types.FunctionType):
                func_list.append(module_name.__dict__[k])
    return func_list

def get_functions_2():
    func_list = list()
    for itr in range(1, 100):
        try:
            func_list.append(getattr(module_name, "function%s" % itr))
        except:
            break
    return func_list

def run_pipeline(function_list):
    collecting_list = list()
    for idx, func in enumerate(function_list):
        collecting_list.append(func(idx, idx))
    return collecting_list

if __name__ == "__main__":
    funcs = get_functions()
    results = run_pipeline(funcs)
    print(results)
Outputs:
[('func1', 0), ('func2', 2), ('func3', 4), ('func4', 6), ('func5', 8)]
Note: I probably wouldn't do it this way if I was trying to construct dynamic computational pipelines, but you can use this method. You could in theory create a file per pipeline and name them in order to use this method though?
Edit: Added get_functions_2 per request
