Python - PyDash example of reduce with accumulator - python-3.x

I'm looking for a code example using Python's PyDash module that shows reduce_ with the accumulator argument, and how to output both the previous (accumulated) and current values from it.

pydash.reduce_ is not very different from the built-in functools.reduce.
A good example for using the accumulator (or the initial parameter in functools' case) is to use it as a "neutral element":
def factorial(n):
    return pydash.reduce_(range(1, n + 1), lambda total, x: total * x, accumulator=1)
In this case, 1 is used as the initial value and doesn't affect the result (since 1*x == x), but more importantly: it is used as the return value when range(1, n + 1) is empty.
And indeed, factorial(0) == factorial(1) == 1 is the required result.
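To address the "previous and current" part of the question: the iteratee itself receives the accumulated value so far ("previous") and the current element, so you can print both from inside it. A minimal sketch, assuming pydash's reduce_ passes (accumulator, value) to a two-argument callback as in the example above (traced_sum is just an illustrative name):

import pydash

def traced_sum(items):
    def step(previous, current):
        # previous is the accumulator built up so far, current is the next element
        print(f"previous={previous}, current={current}")
        return previous + current
    return pydash.reduce_(items, step, accumulator=0)

traced_sum([1, 2, 3])
# previous=0, current=1
# previous=1, current=2
# previous=3, current=3
# returns 6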

Related

Does Python implement short-circuiting in built-in functions such as min()?

Does Python 3 implement short-circuiting in built-in functions whenever possible, just like it does for boolean statements?
A specific example, take the below code snippet:
min((20,11), key = lambda x : x % 10) # 20
Does Python evaluate beforehand that the minimum value possible of the function passed as the key argument is 0, and therefore stops right after evaluating the first integer in the iterable passed (20) as 20 % 10 is equal to 0?
Or does it have to evaluate all the elements in the iterable before returning the answer?
I guess short-circuiting isn't even always possible especially for more complex functions, but what about for well-known, built-in functions or operators like %?
I couldn't find the answer in the official docs.
Thanks,
Python has to evaluate every value in the iterable, because the key function is applied element by element. If your tuple contains something that is not a number, an exception is raised when Python tries to perform the % operation on it; Python cannot guess what is inside your list. You can test this by defining a function instead of a lambda and setting a breakpoint inside it:
def my_mod(x):
    import ipdb; ipdb.set_trace()
    return x % 20
Then call the function:
min((20, 11), key=my_mod)
You can also make a quick error test case with:
min((20, 11, "s"), key=my_mod)
It raises an exception, but only after it has evaluated all the previous elements in the tuple.
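A lighter-weight way to see the same behaviour, without a debugger, is to record every element the key function is called on (counting_key and calls here are just illustrative names):

calls = []

def counting_key(x):
    calls.append(x)          # record each element the key is applied to
    return x % 10

print(min((20, 11), key=counting_key))   # 20
print(calls)                             # [20, 11]: the key ran for both elements, no short-circuit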

Finding item at index i of an iterable (syntactic sugar)?

I know that map, range, filter, etc. in Python 3 return iterables and only calculate values when required. Suppose that there is a map M. I want to print the i^th element of M.
One way would be to iterate till i^th value, and print it:
for _ in range(i):
    next(M)
print(next(M))
The above takes O(i) time to find the i^th value.
Another way is to convert to a list, and print the i^th value:
print(list(M)[i])
This, however, takes O(n) time and O(n) space (where n is the size of the iterable from which the map M is created), although it does suit the so-called "Pythonic way of writing one-liners."
I was wondering if there is a syntactic sugar to minimise writing in the first way? (i.e., if there is a way which takes O(i) time, no extra space, and is more suited to the "Pythonic way of writing".)
You can use islice:
from itertools import islice
i = 3
print(next(islice(M, i, i + 1)))
For an M that yields 0, 1, 2, ..., this outputs 3.
It doesn't much matter what you use as the stop argument (None works too), as long as it is beyond index i and you only call next once.
Thanks to @DeepSpace for the reference to the official docs, I found the following:
from more_itertools import nth
print(nth(M, i))
It prints the element at i^th index of the iterable.
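For reference, more_itertools.nth is essentially the "nth" recipe from the itertools documentation, roughly:

from itertools import islice

def nth(iterable, n, default=None):
    "Returns the nth item or a default value"
    return next(islice(iterable, n, n + 1), default)

So it is the same islice idea as above, wrapped up with a default value for the case where the iterable has fewer than n + 1 elements.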

What is the efficient way to right-rotate a list circularly in Python without built-in functions?

def circularArrayRotation(a, k, queries):
    temp = a + a
    indexToCountFrom = len(a) - k
    for val in queries:
        print(temp[indexToCountFrom + val])
I have this code to perform the rotation.
The function takes the list as a, the number of times it needs to be rotated as k, and finally queries, a list of indices whose values are needed after all the rotations.
My code works for all cases except some of the bigger ones.
Where am I going wrong?
link: https://www.hackerrank.com/challenges/circular-array-rotation/problem
You'll probably run into a timeout when you concatenate large lists with temp = a + a.
Instead, don't create a new list, but use the modulo operator in your loop:
print(a[(indexToCountFrom+val) % len(a)])
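Put together, a sketch of the whole function with that change (also reducing k modulo len(a) in case k exceeds the list length, which is an assumption about the input rather than something stated in the question):

def circularArrayRotation(a, k, queries):
    # After k right rotations, the element now at index i originally sat at
    # index (len(a) - k + i) % len(a), so no doubled copy of the list is needed.
    n = len(a)
    indexToCountFrom = n - k % n
    for val in queries:
        print(a[(indexToCountFrom + val) % n])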

Why is my merge sort algorithm not working?

I am implementing the merge sort algorithm in Python. I previously implemented the same algorithm in C, where it works fine, but when I implement it in Python, it outputs an unsorted array.
I've already rechecked the algorithm and code, but to my knowledge the code seems to be correct.
I think the issue is related to the scope of variables in Python, but I don't have any clue for how to solve it.
from random import shuffle

# Function to merge the arrays
def merge(a,beg,mid,end):
    i = beg
    j = mid+1
    temp = []
    while(i<=mid and j<=end):
        if(a[i]<a[j]):
            temp.append(a[i])
            i += 1
        else:
            temp.append(a[j])
            j += 1
    if(i>mid):
        while(j<=end):
            temp.append(a[j])
            j += 1
    elif(j>end):
        while(i<=mid):
            temp.append(a[i])
            i += 1
    return temp

# Function to divide the arrays recursively
def merge_sort(a,beg,end):
    if(beg<end):
        mid = int((beg+end)/2)
        merge_sort(a,beg,mid)
        merge_sort(a,mid+1,end)
        a = merge(a,beg,mid,end)
    return a

a = [i for i in range(10)]
shuffle(a)
n = len(a)
a = merge_sort(a, 0, n-1)
print(a)
To make it work, you need to change merge_sort slightly:
def merge_sort(a,beg,end):
    if(beg<end):
        mid = int((beg+end)/2)
        merge_sort(a,beg,mid)
        merge_sort(a,mid+1,end)
        a[beg:end+1] = merge(a,beg,mid,end) # < this line changed
    return a
Why:
temp is constructed to be no longer than end-beg+1 elements, but a is the full original array; if you replaced all of a with it, things would break quickly. Therefore we take a "slice" of a and replace only the values in that slice.
Why your version doesn't work:
Your a was luckily not getting replaced, because of Python's inner workings; that is a bit tricky to explain, but I'll try.
Every variable in Python is a reference. a is a reference to a list of variables a[i], which are in turn references to values in memory.
When you pass a to a function, it creates a new local variable a that points to the same list. That means that when you reassign it with a = ..., you only change where the local a points. You can only pass changes outside either via "slices" or via the return statement.
Why "slices" work:
Slices are tricky. As I said, a points to an array of other variables (basically the a[i]), which in turn are references to data in memory. When you assign to a slice, Python goes through the slice element by element and changes where those individual variables point; and since a inside and outside the function still point to the same elements, the changes go through.
Hope it makes sense.
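A tiny illustration of the rebinding-versus-slice-assignment point (the function names are just for demonstration):

def rebind(lst):
    lst = [0, 0, 0]       # rebinds the local name only; the caller's list is untouched

def assign_slice(lst):
    lst[:] = [0, 0, 0]    # writes into the existing list object the caller also sees

a = [1, 2, 3]
rebind(a)
print(a)                  # [1, 2, 3]
assign_slice(a)
print(a)                  # [0, 0, 0]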
You don't use the results of the recursive merges, so you essentially report the result of the merge of the two unsorted halves.

How to create an array of functions which partly depend on outside parameters? (Python)

I am interested in creating a list / array of functions "G" consisting of many small functions "g". This essentially should correspond to a series of functions 'evolving' in time.
Each "g" takes-in two variables and returns the product of these variables with an outside global variable indexed at the same time-step.
Assume obs_mat (T x 1) is a pre-defined global array, and t corresponds to the time-steps
G = []
for t in range(T):
    # tried declaring obs here too.
    def g(current_state, observation_noise):
        obs = obs_mat[t]
        return current_state * observation_noise * obs
    G.append(g)
Unfortunately, when I test the resulting functions, they do not seem to pick up the difference in the time-varying constant obs, i.e. G[0](100, 100) gives the same result as G[5](100, 100). I tried playing around with the scope of obs but without much luck. Would anyone be able to help guide me in the right direction?
This is a common "gotcha" to referencing variables from an outer scope when in an inner function. The outer variable is looked up when the inner function is run, not when the inner function is defined (so all versions of the function see the variable's last value). For each function to see a different value, you either need to make sure they're looking in separate namespaces, or you need to bind the value to a default parameter of the inner function.
Here's an approach that uses an extra namespace:
def make_func(x):
    def func(a, b):
        return a*b*x
    return func

list_of_funcs = [make_func(i) for i in range(10)]
Each inner function func has access to the x parameter in the enclosing make_func function. Since they're all created by separate calls to make_func, they each see separate namespaces with different x values.
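For example, calling two of the generated functions shows that they really did capture different x values:

print(list_of_funcs[2](3, 4))   # 3*4*2 = 24
print(list_of_funcs[5](3, 4))   # 3*4*5 = 60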
Here's the other approach that uses a default argument (with functions created by a lambda expression):
list_of_funcs = [lambda a, b, x=i: a*b*x for i in range(10)]
In this version, the i variable from the list comprehension is bound to the default value of the x parameter in the lambda expression. This binding means that the functions won't care about the value of i changing later on. The downside to this solution is that any code that accidentally calls one of the functions with three arguments instead of two may work without raising an exception (perhaps with odd results).
The problem you are running into is one of scoping. Function bodies aren't evaluated until the function is actually called, so the functions you have there will use whatever the current value of the variable is within their scope at the time of evaluation (which means they'll all see the same t if you call them after the for loop has ended).
In order to see the value that you would like, you'd need to immediately call the function and save the result.
I'm not really sure why you're using an array of functions. Perhaps what you're trying to do is map a partial function across the time series, something like the following?
from functools import partial

def g(current_state, observation_noise, t):
    obs = obs_mat[t]
    return current_state * observation_noise * obs

g_maker = partial(g, current, observation)
results = list(map(g_maker, range(T)))
What's happening here is that partial creates a partially-applied function, which is merely waiting for its final argument before it can be evaluated. That final argument is dynamic (the first two are fixed in this example), so mapping the partially-applied function over a range of values gets you an answer for each value.
Honestly, this is a guess, because it's hard to see what else you are trying to do with this data or what you're trying to achieve with the array of functions (and there are certainly other ways to do this).
The issue (assuming that your G.append call is mis-indented) is simply that the name t is rebound on each pass through the loop over range(T). Since every function g you create looks up the same name t when it is called, they all wind up using the same value, T - 1. The fix is to de-reference the name (the simplest way to do this is to pass t into your function as the default value of an argument in g's argument list):
G = []
for t in range(T):
    def g(current_state, observation_noise, t_kw=t):
        obs = obs_mat[t_kw]
        return current_state * observation_noise * obs
    G.append(g)
This works because it creates another name, t_kw, whose default value points at the value that t references during that iteration of the loop. (You could still call the parameter t rather than t_kw and it would still work: the default inside g is bound to the value that the loop's t is bound to at definition time, and that value never changes, even though the loop's t is bound to another value on the next iteration, while the default still points at the "original" value.)
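To confirm that the fixed version behaves as the question expects, here is a quick self-contained check using a toy obs_mat (a stand-in, since the real array isn't shown):

T = 6
obs_mat = list(range(T))    # toy data: obs_mat[t] == t

G = []
for t in range(T):
    def g(current_state, observation_noise, t_kw=t):
        return current_state * observation_noise * obs_mat[t_kw]
    G.append(g)

print(G[0](100, 100))   # 0       -> uses obs_mat[0]
print(G[5](100, 100))   # 50000   -> uses obs_mat[5]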
