make change in python (maximum recursion depth exceeded in comparison) - python-3.x

So I have a recursive solution to the make change problem that works sometimes. It is:
def change(n, c):
    if (n == 0):
        return 1
    if (n < 0):
        return 0
    if (c + 1 <= 0 and n >= 1):
        return 0
    return change(n, c - 1) + change(n - coins[c - 1], c)
where coins is my array of coin denominations, for example [1,5,10,25]. n is the amount to make change for, for example 1000, and c is the length of the coins array minus 1. This solution works in some situations. But I need it to run in under two seconds, and when I use:
coins: [1,5,10,25]
n: 1000
I get a:
RecursionError: maximum recursion depth exceeded in comparison
So my question is: what would be the best way to optimize this? Using some sort of flow control? I don't want to do something like:
# Set recursion limit
sys.setrecursionlimit(10000000000)
UPDATE:
I now have something like
def coinss(n, c):
    if n == 0:
        return 1
    if n < 0:
        return 0
    nCombos = 0
    for c in range(c, -1, -1):
        nCombos += coinss(n - coins[c - 1], c)
    return nCombos
but it takes forever. It would be ideal to have this run in under a second.

As suggested in the other answers, you could use DP (dynamic programming) for a more efficient solution.
Also, your conditional check
if (c + 1 <= 0 and n >= 1)
should be
if (c <= 1):
since n will always be >= 1, and c <= 1 prevents any further calculation when the number of coins is less than or equal to 1.
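For illustration, here is a memoized top-down DP sketch of the same count (my own code and naming, not the poster's). Recursing on how many coins of the largest remaining denomination to use keeps the recursion depth bounded by the number of denominations, so it cannot hit the recursion limit:

from functools import lru_cache

coins = [1, 5, 10, 25]

@lru_cache(maxsize=None)
def count_change(n, c):
    # Number of ways to make amount n using only the first c denominations.
    if n == 0:
        return 1
    if c == 0:
        return 0
    coin = coins[c - 1]
    # Try every possible count k of this denomination, then recurse on the rest.
    return sum(count_change(n - k * coin, c - 1) for k in range(n // coin + 1))

print(count_change(1000, len(coins)))  # 142511, in a fraction of a second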

While using recursion you will always run into this. If you set the recursion limit higher, you may be able to use your algorithm on a bigger number, but you will always be limited. The recursion limit is there to keep you from getting a stack overflow.
The best way to solve for bigger change amounts would be to swap to an iterative approach. There are algorithms out there; see Wikipedia:
https://en.wikipedia.org/wiki/Change-making_problem
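For example, here is a bottom-up (iterative) sketch of the classic coin-change count; the function name count_change_iter is mine, and the default denominations are assumed to be the [1,5,10,25] from the question:

def count_change_iter(amount, denominations=(1, 5, 10, 25)):
    # ways[a] = number of ways to make amount a using the denominations seen so far
    ways = [0] * (amount + 1)
    ways[0] = 1
    for coin in denominations:
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]
    return ways[amount]

print(count_change_iter(1000))  # 142511, runs in milliseconds and uses no recursion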

Note that you have a bug here:
if (c + 1 <= 0 and n >= 1):
is like
if (c <= -1 and n >= 1):
So c can be 0 and still reach the next step, where you use c - 1 as an index. That works because Python doesn't mind negative indexes, but it is still wrong (coins[-1] yields 25), so your solution sometimes counts one combination too many.
I've rewritten your algorithm with recursive and stack approaches:
Recursive (fixed; no need to pass c at the call site thanks to an internal recursive helper, but it still overflows the stack for large n):
coins = [1, 5, 10, 25]

def change(n):
    def change_recurse(n, c):
        if n == 0:
            return 1
        if n < 0:
            return 0
        if c <= 0:
            return 0
        return change_recurse(n, c - 1) + change_recurse(n - coins[c - 1], c)
    return change_recurse(n, len(coins))
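As a quick sanity check on a smaller amount (242 is the classic number of ways to make change for 100 from these four denominations):

print(change(100))  # 242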
Iterative/stack approach (not dynamic programming): it doesn't recurse, it just uses a "stack" to store the computations to perform:
def change2(n):
    def change_iter(stack):
        result = 0
        # continue while the stack isn't empty
        while stack:
            # process one computation
            n, c = stack.pop()
            if n == 0:
                # one solution found, increase counter
                result += 1
            if n > 0 and c > 0:
                # not found, request 2 more computations
                stack.append((n, c - 1))
                stack.append((n - coins[c - 1], c))
        return result
    return change_iter([(n, len(coins))])
Both methods return the same values for low values of n.
for i in range(1, 200):
    a, b = change(i), change2(i)
    if a != b:
        print("error", i, a, b)
The code above runs without printing any errors.
Now print(change2(1000)) takes a few seconds but prints 142511 without blowing the stack.

Related

Faster way to simulate the crunch command behaviour on Python3.8

I'm trying to simulate what the crunch command does on Linux, with the difference that I yield the words instead of writing them to a file, and I came up with something like this:
def wordlist(chars, min, max = None):
    if max is None:  # Means that the user wants only a single length
        max = min
    length = len(chars)
    for n in range(min, max + 1):
        indexes = [0] * n
        for _ in range(length ** n):  # The number of chars to the power of the places to fill gives the number of words in the wordlist
            for m in range(1, len(indexes) + 1):  # This is the carry system, as if indexes were a number instead of a list
                if indexes[-m] == length:
                    indexes[-m] = 0
                    indexes[-m - 1] += 1
            yield ''.join(chars[i] for i in indexes)
            indexes[-1] += 1
It's a bit crude and not very readable, and probably not very performant either. Without using any module such as itertools, does anyone have a better idea?
EDIT:
After a bit of struggling I have improved the math behind it, coming up with something like this:
def wordlist(chars, min, max = None):
    if max is None:
        max = min
    if min <= 0 or max <= 0:
        return
    base = len(chars)
    for n in range(min, max + 1):
        for m in range(base ** n):
            yield ''.join(chars[m // base ** (n - v - 1) % base] for v in range(n))
Anyway, I measured the time taken by each of the two functions and, while this new one is much more readable and prettier, the first one is still faster. I'm still waiting for better ideas from you.
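For what it's worth, here is a possible alternative sketch (my own code and helper names) that builds the words with a recursive generator instead of index arithmetic; whether it beats the versions above is something you'd have to measure:

def wordlist_rec(chars, min_len, max_len=None):
    # For each target length, extend a prefix one character at a time and
    # yield it once it reaches the full length.
    if max_len is None:
        max_len = min_len

    def extend(prefix, remaining):
        if remaining == 0:
            yield prefix
            return
        for ch in chars:
            yield from extend(prefix + ch, remaining - 1)

    for n in range(min_len, max_len + 1):
        yield from extend('', n)

print(list(wordlist_rec('ab', 2)))  # ['aa', 'ab', 'ba', 'bb']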

Stone Game (leetcode 877) - how is the sum getting calculated

Summary of Stone Game - There is an even number of piles of stones, and each of the 2 players picks up a pile alternately. The sum of the stones is always odd, hence there cannot be a tie. We need to check if the player who starts first wins the game.
I have a question about the following code, which is working fine.
This code checks whether Player A (who goes first) wins the game.
The code below calculates (sum of stones picked up by A - sum of stones picked up by B).
The question is: how is the code (under if (parity == 0)) calculating the sum?
I understand dynamic programming/recursion is involved; however, since the sum is not being passed in the recursive call, how is the sum calculated?
def stoneGame(self, piles: List[int]) -> bool:
    N = len(piles)

    @lru_cache(maxsize=None)  # requires: from functools import lru_cache
    def dp(i, j):
        if i > j:
            return 0
        parity = (j - i + 1) % 2
        if parity == 0:
            return max(piles[i] + dp(i + 1, j), piles[j] + dp(i, j - 1))
        else:
            return min(-piles[i] + dp(i + 1, j), -piles[j] + dp(i, j - 1))

    return dp(0, N - 1) > 0
Let's look at the term piles[i] + dp(i+1, j) for parity 0.
The next calculation will be piles[i] - piles[i+1] + dp(i+2, j) or piles[i] - piles[j] + dp(i+1, j-1).
So you can observe how entries of the piles array are either added or subtracted depending on the conditions.
At the base case (i > j), the accumulated calculation will be the following:
piles[x1] - piles[x2] + piles[x3] - piles[x4] + ...
where x1, x2, x3, x4 are different indexes of the array.
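To see those signs work out, here is a minimal standalone sketch of the same dp (a plain function rather than a method, with the memoizing decorator applied), run on the example piles [5,3,4,5]:

from functools import lru_cache

def stone_game(piles):
    # Standalone version of the dp above: positive terms are the first
    # player's picks, negative terms are the second player's picks.
    N = len(piles)

    @lru_cache(maxsize=None)
    def dp(i, j):
        if i > j:
            return 0
        parity = (j - i + 1) % 2
        if parity == 0:
            return max(piles[i] + dp(i + 1, j), piles[j] + dp(i, j - 1))
        else:
            return min(-piles[i] + dp(i + 1, j), -piles[j] + dp(i, j - 1))

    return dp(0, N - 1) > 0

print(stone_game([5, 3, 4, 5]))  # True: dp(0, 3) evaluates to +1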

How can I solve finding consecutive factors problem in an optimal way?

I need to develop a function which finds consecutive factors of the given number; the function will then return the smaller of these consecutive numbers.
I tried to solve a Codility question. (I submitted my solution)
I need to develop the solution function.
def solution(N):
    # write your code in Python 3.6
    pass
An example:
If N is 6, the function will return 2 (because 6 = 2 * 3)
If N is 20, the function will return 4 (because 20 = 4 * 5)
If N is 29, the function will return 0
I developed the solution function (by checking all the numbers from 1 up to N, brute force search) and it works.
However, when the argument of the solution function is too big, the execution takes too much time. The Codility Python engine runs the function for a while and then throws a TIMEOUT ERROR.
What may be an optimal solution for this problem?
Thank you
I developed the function but it is not optimized.
def solution(N):
    for i in range(1, N + 1):
        if i * (i + 1) == N:
            return i
    return 0
When N is too big like 12,567,543, the function execution takes too much time.
After my comment, I thought a little bit about the question.
If you have an integer, N, and two consecutive factors, m and m+1, then it MUST be true that m < sqrt(N) and m + 1 > sqrt(N)
Therefore, all you have to do is check whether the floor of the square root times the ceiling of the square root is equal to your original number.
import math

def solution(N):
    n1 = math.floor(math.sqrt(N))
    n2 = n1 + 1  # or n2 = math.ceil(math.sqrt(N))
    if n1 * n2 == N:
        return n1
    return 0
This has a run time of O(1).
import math

def mysol(n):
    s = math.sqrt(n)
    if math.floor(s) * math.ceil(s) == n:
        return math.floor(s)
    else:
        return 0
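If floating-point rounding of math.sqrt worries you for very large N, here is a variant sketch using math.isqrt (available from Python 3.8 onward, which may or may not match the Codility runtime) that stays in exact integer arithmetic:

import math

def solution(N):
    # m is the integer floor of sqrt(N); consecutive factors, if they exist,
    # must be m and m + 1.
    m = math.isqrt(N)
    return m if m * (m + 1) == N else 0

print(solution(6), solution(20), solution(29))  # 2 4 0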

Project Euler #23 Optimization [Python 3.6]

I'm having trouble getting my code to run quickly for Project Euler Problem 23. The problem is pasted below:
A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
And my code:
import math
import bisect

numbers = list(range(1, 20162))
tot = set()
numberabundance = []
abundant = []
for n in numbers:
    m = 2
    divisorsum = 1
    while m <= math.sqrt(n):
        if n % m == 0:
            divisorsum += m + (n / m)
        m += 1
    if math.sqrt(n) % 1 == 0:
        divisorsum -= math.sqrt(n)
    if divisorsum > n:
        numberabundance.append(1)
    else:
        numberabundance.append(0)
temp = 1
# print(numberabundance)
for each in numberabundance:
    if each == 1:
        abundant.append(temp)
    temp += 1
abundant_set = set(abundant)
print(abundant_set)
for i in range(12, 20162):
    for k in abundant:
        if i - k in abundant_set:
            tot.add(i)
            break
        elif i - k < i / 2:
            break
print(sum(numbers.difference(tot)))
I know the issue lies in the for loop at the bottom but I'm not quite sure how to fix it. I've tried modeling it after some of the other answers I've seen here but none of them seem to work. Any suggestions? Thanks.
Your upper bound is incorrect - the question states all integers greater than 28123 can be written ..., not 20162
After changing the bound, generation of abundant is correct, although you could do this generation in a single pass by directly adding to a set abundant, instead of creating the bitmask array numberabundance.
The final loop is also incorrect - as per the question, you must
Find the sum of all the positive integers
whereas your code
for i in range(12, 20162):
will skip numbers below 12 and also doesn't include the correct upper bound.
I'm a bit puzzled about your choice of
elif i - k < i / 2:
Since the abundants are already sorted, I would just check if the inner loop had passed the midpoint of the outer loop:
if k > i / 2:
Also, since we just need the sum of these numbers, I would just keep a running total, instead of having to do a final sum on a collection.
So here's the result after making the above changes:
import math

numbers = list(range(1, 28123))
abundant = set()
for n in numbers:
    m = 2
    divisorsum = 1
    while m <= math.sqrt(n):
        if n % m == 0:
            divisorsum += m + (n / m)
        m += 1
    if math.sqrt(n) % 1 == 0:
        divisorsum -= math.sqrt(n)
    if divisorsum > n:
        abundant.add(n)
# print(sorted(abundant))
abundant_sorted = sorted(abundant)  # iterate in increasing order so the early break below is valid
nonabundantsum = 0
for i in numbers:
    issumoftwoabundants = False
    for k in abundant_sorted:
        if k > i / 2:
            break
        if i - k in abundant:
            issumoftwoabundants = True
            break
    if not issumoftwoabundants:
        nonabundantsum += i
print(nonabundantsum)
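One more way to structure it (my own sketch, not part of the answer above): compute the sum of proper divisors for every number with a single sieve pass instead of trial-dividing each n, then mark every number that is a sum of two abundant numbers.

LIMIT = 28123

# Sum of proper divisors for every number up to LIMIT, sieve style:
# each d is added to all of its multiples above d.
divsum = [0] * (LIMIT + 1)
for d in range(1, LIMIT // 2 + 1):
    for multiple in range(2 * d, LIMIT + 1, d):
        divsum[multiple] += d

abundant = [n for n in range(12, LIMIT + 1) if divsum[n] > n]

# Mark every number that is a sum of two abundant numbers.
expressible = [False] * (LIMIT + 1)
for idx, a in enumerate(abundant):
    for b in abundant[idx:]:
        s = a + b
        if s > LIMIT:
            break
        expressible[s] = True

print(sum(n for n in range(1, LIMIT + 1) if not expressible[n]))  # 4179871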

Python 3 integer division. How to make math operators consistent with C

I need to port quite a few formulas from C to Python and vice versa. What is the best way to make sure that nothing breaks in the process?
I am primarily worried about automatic int/int = float conversions.
You could use the // operator. It performs an integer division, but it's not quite what you'd expect from C:
A quote from here:
The // operator performs a quirky kind of integer division. When the result is positive, you can think of it as truncating (not rounding) to 0 decimal places, but be careful with that.
When integer-dividing negative numbers, the // operator rounds "up" to the nearest integer. Mathematically speaking, it's rounding "down" since −6 is less than −5, but it could trip you up if you were expecting it to truncate to −5.
For example, -11 // 2 in Python returns -6, whereas -11 / 2 in C returns -5.
I'd suggest writing and thoroughly unit-testing a custom integer division function that "emulates" C behaviour.
The page I linked above also has a link to PEP 238 which has some interesting background information about division and the changes from Python 2 to 3. There are some suggestions about what to use for integer division, like divmod(x, y)[0] and int(x/y) for positive numbers, perhaps you'll find more useful things there.
In C:
-11/2 = -5
In Python:
-11/2 = -5.5
And also in Python:
-11//2 = -6
To achieve C-like behaviour, write int(-11/2) in Python. This will evaluate to -5.
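For example (and note that int(a / b) goes through a float, so it can lose precision for very large integers, which is why the pure-integer helpers in the next answer may be preferable):

import math

a, b = -11, 2
print(a // b)             # -6: Python's floor division
print(int(a / b))         # -5: C-style truncation, but via a float
print(math.trunc(a / b))  # -5: same idea, same float caveat

# For huge integers the float route is no longer exact, e.g. int((10**40 + 1) / 3)
# does not equal the true truncated quotient, while integer-only code stays exact.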
Some ways to compute integer division with C semantics are as follows:
def div_c0(a, b):
    if (a >= 0) != (b >= 0) and a % b:
        return a // b + 1
    else:
        return a // b

def div_c1(a, b):
    q, r = a // b, a % b
    if (a >= 0) != (b >= 0) and r:
        return q + 1
    else:
        return q

def div_c2(a, b):
    q, r = divmod(a, b)
    if (a >= 0) != (b >= 0) and r:
        return q + 1
    else:
        return q

def mod_c(a, b):
    return (a % b if b >= 0 else a % -b) if a >= 0 else (-(-a % b) if b >= 0 else a % b)

def div_c3(a, b):
    r = mod_c(a, b)
    return (a - r) // b
With timings:
import itertools

n = 100
l = [x for x in range(-n, n + 1)]
ll = [(a, b) for a, b in itertools.product(l, repeat=2) if b]
funcs = div_c0, div_c1, div_c2, div_c3
for func in funcs:
    correct = all(func(a, b) == funcs[0](a, b) for a, b in ll)
    print(f"{func.__name__} correct:{correct} ", end="")
    %timeit [func(a, b) for a, b in ll]
# div_c0 correct:True 100 loops, best of 5: 10.3 ms per loop
# div_c1 correct:True 100 loops, best of 5: 11.5 ms per loop
# div_c2 correct:True 100 loops, best of 5: 13.2 ms per loop
# div_c3 correct:True 100 loops, best of 5: 15.4 ms per loop
Indicating the first approach to be the fastest.
For implementing C's % using Python, see here.
In the opposite direction:
Since Python 3's divmod (and //) integer division requires the remainder to have the same sign as the divisor when the remainder is non-zero, it is inconsistent with many other languages (quote from 1.4. Integer Arithmetic).
To make your "C-like" result match Python's, compare the remainder with the divisor (for example, check whether the xor of their sign bits is 1, or whether their product is negative); if the signs differ and the remainder is non-zero, add the divisor to the remainder and subtract 1 from the quotient.
// Python divmod requires a remainder with the same sign as the divisor for
// a non-zero remainder
// Assuming isPyCompatible is a flag to distinguish C/Python mode
isPyCompatible *= (int)remainder;
if (isPyCompatible)
{
    int32_t xorRes = remainder ^ divisor;
    int32_t andRes = xorRes & ((int32_t)((uint32_t)1<<31));
    if (andRes)
    {
        remainder += divisor;
        quotient -= 1;
    }
}
(Credit to Gawarkiewicz M. for pointing this out.)
You will need to know what the formula does, and understand both the C implementation and how to implement it in Python. But unless you are doing integer maths it should be quite similar, and if you are doing integer maths, the question is why. :)
Integer maths is done either for some specific purpose, often related to computers, or because it's faster than floats when doing massive computations, as Fractint does for fractals, and in that case Python is usually not the right choice. ;)
