How many ways to get the change? - python-3.x

While practicing the following dynamic programming question on HackerRank, I got a 'timeout' error. The code ran successfully on some test cases but timed out on others. I'm wondering how I can further improve the code.
The question is
Given an amount and the denominations of coins available, determine how many ways change can be made for amount. There is a limitless supply of each coin type.
Example:
n = 3
c = [8, 3, 1, 2]
There are 3 ways to make change for n=3 : {1, 1, 1}, {1, 2}, and {3}.
My current code is
import math
import os
import random
import re
import sys
from functools import lru_cache

#
# Complete the 'getWays' function below.
#
# The function is expected to return a LONG_INTEGER.
# The function accepts following parameters:
#  1. INTEGER n
#  2. LONG_INTEGER_ARRAY c
#

def getWays(n, c):
    # Write your code here
    # c = sorted(c)
    # lru_cache
    def get_ways_recursive(n, cur_idx):
        cur_denom = c[cur_idx]
        n_ways = 0
        if n == 0:
            return 1
        if cur_idx == 0:
            return 1 if n % cur_denom == 0 else 0
        for k in range(n // cur_denom + 1):
            n_ways += get_ways_recursive(n - k * cur_denom,
                                         cur_idx - 1)
        return n_ways

    return get_ways_recursive(n, len(c) - 1)

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')

    first_multiple_input = input().rstrip().split()
    n = int(first_multiple_input[0])
    m = int(first_multiple_input[1])

    c = list(map(int, input().rstrip().split()))

    # Print the number of ways of making change for 'n' units using coins having the values given by 'c'
    ways = getWays(n, c)

    fptr.write(str(ways) + '\n')
    fptr.close()
It timed out on the following test example
166 23 # 23 is the number of coins below.
5 37 8 39 33 17 22 32 13 7 10 35 40 2 43 49 46 19 41 1 12 11 28
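The recursion repeatedly recomputes the same (amount, index) subproblems, which is what causes the timeout on larger inputs. A minimal sketch of a memoized variant (my own illustration, not part of the original post), caching on the pair (amount, cur_idx) with functools.lru_cache:

from functools import lru_cache

def get_ways_memoized(n, c):
    c = tuple(c)  # the coin list is only closed over; a tuple keeps it immutable

    @lru_cache(maxsize=None)
    def ways(amount, cur_idx):
        if amount == 0:
            return 1
        if amount < 0 or cur_idx < 0:
            return 0
        # either skip the current denomination, or use one more coin of it
        return ways(amount, cur_idx - 1) + ways(amount - c[cur_idx], cur_idx)

    return ways(n, len(c) - 1)

print(get_ways_memoized(3, [8, 3, 1, 2]))  # 3, matching the example above

Each (amount, index) pair is now computed once, so the work is roughly O(n * len(c)) rather than exponential.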

Related

How to select a good sample size of nodes from a graph

I have a network that has a node attribute labeled as 0 or 1. I want to find how the distance between nodes with the same attribute differs from the distance between nodes with different attributes. As it is computationally difficult to find the distance between all combinations of nodes, I want to select a sample of nodes. How should I select a sample size of nodes? I am working in Python with networkx.
You've not given many details, so I'll invent some data and make assumptions in the hope it's useful.
Start by importing packages and sampling a dataset:
import random
import networkx as nx

# human social networks tend to be "scale-free"
G = nx.generators.scale_free_graph(1000)

# set labels to either 0 or 1
for i, attr in G.nodes.data():
    attr['label'] = 1 if random.random() < 0.2 else 0
Next, calculate the shortest paths between random pairs of nodes:
results = []

# I had to use 100,000 pairs to get the CI small enough below
for _ in range(100000):
    a, b = random.sample(list(G.nodes), 2)
    try:
        n = nx.algorithms.shortest_path_length(G, a, b)
    except nx.NetworkXNoPath:
        # no path between nodes found
        n = -1
    results.append((a, b, n))
Finally, here is some code to summarise the results and print them out:
from collections import Counter
from scipy import stats

# counters for pairs where both labels are 0, both are 1, or the labels differ
c_0 = Counter()
c_1 = Counter()
c_d = Counter()

# accumulate distances into the above counters
node_data = {i: a['label'] for i, a in G.nodes.data()}
cc = {(0, 0): c_0, (0, 1): c_d, (1, 0): c_d, (1, 1): c_1}
for a, b, n in results:
    cc[node_data[a], node_data[b]][n] += 1

# code to display the results nicely
def show(c, title):
    s = sum(c.values())
    print(f'{title}, n={s}')
    for k, n in sorted(c.items()):
        # calculate some sort of CI over the Monte Carlo error
        lo, hi = stats.beta.ppf([0.025, 0.975], 1 + n, 1 + s - n)
        print(f'{k:5}: {n:5} = {n/s:6.2%} [{lo:6.2%}, {hi:6.2%}]')

show(c_0, 'both 0')
show(c_1, 'both 1')
show(c_d, 'different')
The above prints out:
both 0, n=63930
-1: 60806 = 95.11% [94.94%, 95.28%]
1: 107 = 0.17% [ 0.14%, 0.20%]
2: 753 = 1.18% [ 1.10%, 1.26%]
3: 1137 = 1.78% [ 1.68%, 1.88%]
4: 584 = 0.91% [ 0.84%, 0.99%]
5: 334 = 0.52% [ 0.47%, 0.58%]
6: 154 = 0.24% [ 0.21%, 0.28%]
7: 50 = 0.08% [ 0.06%, 0.10%]
8: 3 = 0.00% [ 0.00%, 0.01%]
9: 2 = 0.00% [ 0.00%, 0.01%]
both 1, n=3978
-1: 3837 = 96.46% [95.83%, 96.99%]
1: 6 = 0.15% [ 0.07%, 0.33%]
2: 34 = 0.85% [ 0.61%, 1.19%]
3: 34 = 0.85% [ 0.61%, 1.19%]
4: 31 = 0.78% [ 0.55%, 1.10%]
5: 30 = 0.75% [ 0.53%, 1.07%]
6: 6 = 0.15% [ 0.07%, 0.33%]
To save space I've cut off the section where the labels differ. The proportions in the square brackets are the 95% CI of the Monte Carlo error. Using more iterations above allows you to reduce this error, while obviously taking more CPU time.
This is more or less an extension of my discussion with Sam Mason; I only want to give you some timing numbers, because, as discussed, retrieving all distances may be feasible and may even be faster. Based on the code in Sam Mason's answer, I tested both variants, and for 1000 nodes retrieving all distances is much faster than sampling 100,000 pairs. The main advantage is that all "retrieved distances" are used.
import random
import networkx as nx
import time

# human social networks tend to be "scale-free"
G = nx.generators.scale_free_graph(1000)

# set labels to either 0 or 1
for i, attr in G.nodes.data():
    attr['label'] = 1 if random.random() < 0.2 else 0

def timing(f):
    def wrap(*args, **kwargs):
        time1 = time.time()
        ret = f(*args, **kwargs)
        time2 = time.time()
        print('{:s} function took {:.3f} ms'.format(f.__name__, (time2 - time1) * 1000.0))
        return ret
    return wrap

@timing
def get_sample_distance():
    results = []
    # I had to use 100,000 pairs to get the CI small enough below
    for _ in range(100000):
        a, b = random.sample(list(G.nodes), 2)
        try:
            n = nx.algorithms.shortest_path_length(G, a, b)
        except nx.NetworkXNoPath:
            # no path between nodes found
            n = -1
        results.append((a, b, n))

@timing
def get_all_distances():
    all_distances = nx.shortest_path_length(G)

get_sample_distance()
# get_sample_distance function took 2338.038 ms
get_all_distances()
# get_all_distances function took 304.247 ms
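For completeness, here is a sketch (my own addition, not part of the timing test above) of actually consuming the all-pairs result and feeding the distances into label-pair counters like those in the earlier answer. Note that nx.shortest_path_length(G), called with only the graph, returns an iterator of (source, {target: distance}) pairs, so the distances are only computed as the loop consumes it:

from collections import Counter

node_label = {i: a['label'] for i, a in G.nodes.data()}
c_0, c_1, c_d = Counter(), Counter(), Counter()
cc = {(0, 0): c_0, (0, 1): c_d, (1, 0): c_d, (1, 1): c_1}

# the all-pairs work happens lazily, one source node at a time
for source, lengths in nx.shortest_path_length(G):
    for target, dist in lengths.items():
        if source == target:
            continue
        cc[node_label[source], node_label[target]][dist] += 1

# unreachable pairs simply never appear in `lengths`, so there is
# no -1 bucket here, unlike in the sampling approach above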

Dijkstra's algorithm in graph (Python)

I need some help with graphs and Dijkstra's algorithm in Python 3. I tested this code (see below) on one site, and it tells me that the code runs for too long. Can anybody tell me how to fix that, or post an example of code for this algorithm? I don't know how to speed this code up; I have read many sites but haven't found good examples...
P.S. I have now edited the code in a few places and tried to optimize it, but it is still too slow :(
from collections import deque

class node:
    def __init__(self, name, neighbors, distance, visited):
        self.neighbors = neighbors
        self.distance = distance
        self.visited = visited
        self.name = name

    def addNeighbor(self, neighbor_name, dist):  # adding a new neighbor and the length to it
        if neighbor_name not in self.neighbors:
            self.neighbors.append(neighbor_name)
            self.distance.append(dist)

class graph:
    def __init__(self):
        self.graphStructure = {}  # dictionary with information in the format: node_name, [neighbors], [length to every neighbor], visited_status

    def addNode(self, index):  # adding a new node to the graph structure
        if self.graphStructure.get(index) is None:
            self.graphStructure[index] = node(index, [], [], False)

    def addConnection(self, node0_name, node1_name, length):  # adding a connection between 2 nodes
        n0 = self.graphStructure.get(node0_name)
        n0.addNeighbor(node1_name, length)
        n1 = self.graphStructure.get(node1_name)
        n1.addNeighbor(node0_name, length)

    def returnGraph(self):  # printing graph nodes and connections
        print('')
        for i in range(len(self.graphStructure)):
            nodeInfo = self.graphStructure.get(i + 1)
            print('name =', nodeInfo.name, ' neighbors =', nodeInfo.neighbors, ' length to neighbors =', nodeInfo.distance)

    def bfs(self, index):  # bfs-style search (also used for Dijkstra's algorithm)
        distanceToNodes = [float('inf')] * len(self.graphStructure)
        distanceToNodes[index - 1] = 0
        currentNode = self.graphStructure.get(index)
        queue = deque()
        for i in range(len(currentNode.neighbors)):
            n = currentNode.neighbors[i]
            distanceToNodes[n - 1] = currentNode.distance[i]
            queue.append(n)
        while len(queue) > 0:  # processing the queue and visiting all nodes
            u = queue.popleft()
            node_u = self.graphStructure.get(u)
            node_u.visited = True
            for v in range(len(node_u.neighbors)):
                node_v = self.graphStructure.get(node_u.neighbors[v])
                distanceToNodes[node_u.neighbors[v] - 1] = min(distanceToNodes[node_u.neighbors[v] - 1], distanceToNodes[u - 1] + node_u.distance[v])  # update the minimal length to the node
                if not node_v.visited:
                    queue.append(node_u.neighbors[v])
        return distanceToNodes

def readInputToGraph(graph):  # reading input data and writing it to the graph structure
    node0, node1, length = map(int, input().split())
    graph.addNode(node0)
    graph.addNode(node1)
    graph.addConnection(node0, node1, length)

def main():
    newGraph = graph()
    countOfNodes, countOfPairs = map(int, input().split())
    if countOfPairs == 0:
        print('0')
        exit()
    for _ in range(countOfPairs):  # reading input data for countOfPairs rows
        readInputToGraph(newGraph)
    # newGraph.returnGraph()  # printing information
    print(sum(newGraph.bfs(1)))  # starting bfs from the start position

main()
The input graph structure may look like this:
15 17
3 7 2
7 5 1
7 11 5
11 5 1
11 1 2
1 12 1
1 13 3
12 10 1
12 4 3
12 15 1
12 13 4
1 2 1
2 8 2
8 14 1
14 6 3
6 9 1
13 9 2
I'm only learning Python, so I think I could be doing something wrong :(
The correctness of Dijkstra's algorithm relies on retrieving the node with the shortest distance from the source in each iteration. Using your code as an example, the operation u = queue.popleft() MUST return the node that has the shortest distance from the source out of all nodes that are currently in the queue.
Looking at the documentation for collections.deque, I don't think the implementation guarantees that popleft() always returns the node with the lowest key. It simply returns the left-most item in what is effectively a doubly linked list.
The run time of Dijkstra's algorithm (once you implement it correctly) depends almost entirely on the underlying data structure used to implement queue. I would suggest that you first revisit the correctness of your implementation, and once you can confirm that it is actually correct, start experimenting with different data structures for queue.
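As a sketch of what a priority-queue-based version can look like (my own illustration, not the poster's code), Python's heapq module is the usual choice; here the graph is assumed to be a plain dict mapping each node to a list of (neighbor, length) pairs:

import heapq

def dijkstra(adjacency, source):
    """adjacency: dict mapping node -> list of (neighbor, length) pairs."""
    dist = {node: float('inf') for node in adjacency}
    dist[source] = 0
    heap = [(0, source)]  # entries are (distance from source, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry; a shorter path to u was already found
        for v, length in adjacency[u]:
            nd = d + length
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# small usage example built from the first few edges of the sample input above
edges = [(3, 7, 2), (7, 5, 1), (7, 11, 5), (11, 5, 1), (11, 1, 2)]
adjacency = {}
for a, b, w in edges:
    adjacency.setdefault(a, []).append((b, w))
    adjacency.setdefault(b, []).append((a, w))
print(dijkstra(adjacency, 1))  # shortest distances from node 1

Because the heap always yields the closest unsettled node first, this runs in roughly O((V + E) log V) time instead of repeatedly rescanning nodes.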

Recursive memoization solution to solve "count change"

I am trying to solve the "Counting Change" problem with memoization.
Consider the following problem: How many different ways can we make change of $1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a function to compute the number of ways to change any given amount of money using any set of currency denominations?
And here is the intuitive solution with recursion.
The number of ways to change an amount a using n kinds of coins equals
the number of ways to change a using all but the first kind of coin, plus
the number of ways to change the smaller amount a - d using all n kinds of coins, where d is the denomination of the first kind of coin.
#+BEGIN_SRC python :results output
# cache = {} # add cache
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]  # d for denomination
    return count_change(a, kinds[1:]) + count_change(a - d, kinds)

print(count_change(100))
#+END_SRC
#+RESULTS:
: 292
I tried to take advantage of memoization:
Signature: count_change(a, kinds=(50, 25, 10, 5, 1))
Source:
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]
    cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
    return cache[a]
It works properly for small numbers like
In [17]: count_change(120)
Out[17]: 494
but fails on big numbers:
In [18]: count_change(11000)
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-18-52ba30c71509> in <module>
----> 1 count_change(11000)
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
... last 1 frames repeated, from the frame below ...
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
RecursionError: maximum recursion depth exceeded in comparison
What's the problem with the memoization solution?
In the memoized version, the count_change function has to take into account the highest index of the coin you can use when you make the recursive call, so that you can reuse the already calculated values...
def count_change(n, k, kinds):
    if n < 0:
        return 0
    if (n, k) in cache:
        return cache[n, k]
    if k == 0:
        v = 1
    else:
        v = count_change(n - kinds[k], k, kinds) + count_change(n, k - 1, kinds)
    cache[n, k] = v
    return v
You can try:
cache = {}
count_change(120, 4, [1, 5, 10, 25, 50])
gives 494, while:
cache = {}
count_change(11000, 4, [1, 5, 10, 25, 50])
outputs: 9930221951
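For reference, a bottom-up version sidesteps recursion-depth limits entirely. This is my own sketch, not part of the answer above, but it computes the same quantity:

def count_change_iterative(amount, kinds=(50, 25, 10, 5, 1)):
    """Bottom-up DP: ways[x] = number of ways to make x from the coins processed so far."""
    ways = [0] * (amount + 1)
    ways[0] = 1  # one way to make 0: use no coins
    for coin in kinds:
        for x in range(coin, amount + 1):
            ways[x] += ways[x - coin]
    return ways[amount]

print(count_change_iterative(100))    # 292
print(count_change_iterative(11000))  # 9930221951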

Time and memory limit exceeded - Python3 - Number Theory

I am trying to find the sum of the multiples of 3 or 5 of all the numbers up to N.
This is a practice question on HackerEarth. I was able to pass all the test cases except one, where I get a time and memory exceeded error. I looked up the documentation and learnt that int can handle large numbers and that the bignum type was removed.
I am still learning python and would appreciate any constructive feedback.
Could you please point me in the right direction so I can optimise the code myself?
test_cases = int(input())
for i in range(test_cases):
    user_input = int(input())
    sum = 0
    for j in range(0, user_input):
        if j % 3 == 0:
            sum = sum + j
        elif j % 5 == 0:
            sum = sum + j
    print(sum)
In such problems, try to use some math to find a direct solution rather than brute-forcing it.
You can calculate the number of multiples of k up to n, and from that the sum of those multiples.
For example, with k=3 and n=13, you have 13 // 3 = 4 multiples.
The sum of these 4 multiples of 3 is 3*1 + 3*2 + 3*3 + 3*4 = 3 * (1+2+3+4).
Then, use the relation: 1+2+...+n = n*(n+1)/2.
To sum the multiples of 3 and 5, you can sum the multiples of 3, add the sum of the multiples of 5, and subtract the ones you counted twice: the multiples of 15.
So, you could do it like this:
def sum_of_multiples_of(k, n):
"""
Returns the sum of the multiples of k under n
"""
# number of multiples of k between 1 and n
m = n // k
return k * m * (m+1) // 2
def sum_under(n):
return (sum_of_multiples_of(3, n)
+ sum_of_multiples_of(5, n)
- sum_of_multiples_of(15, n))
# 3+5+6+9+10 = 33
print(sum_under(10))
# 33
# 3+5+6+9+10+12+15+18 = 78
print(sum_under(19))
# 78
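To tie this back to the original program, here is a sketch (my own addition) of plugging sum_under into the input loop. One detail to watch: the brute-force loop above uses range(0, user_input), which excludes N itself, so passing user_input - 1 reproduces that behaviour exactly:

test_cases = int(input())
for _ in range(test_cases):
    user_input = int(input())
    # use sum_under(user_input) instead if N itself should be included
    print(sum_under(user_input - 1))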

How can I count the recursive calls of a function in Python?

I was playing with the recursive Ackermann function. For certain values my prompt would not show every calculated output, because Python would exceed its recursion limit so fast that it would freeze the prompt before the "easy" parts could catch up with it.
So I thought I could add a recursion counter and a quick pause after each full execution of the function. I was getting the anticipated outputs until it reached the values (1,0). After that I got a TypeError: can only concatenate tuple (not "int") to tuple.
My code is as follows:
import time
import sys

sys.setrecursionlimit(3000)

def ackermann(i, j, rec):
    output = None
    if i == 0:
        output = j + 1
    elif j == 0:
        output = ackermann(i - 1, 1, rec)
        rec = rec + 1
    else:
        output = ackermann(i - 1, ackermann(i, j - 1, rec), rec)
        rec = rec + 1
    return output, rec

rec = 0
for i in range(5):
    for j in range(5):
        print("(", i, ",", j, ")= ", ackermann(i, j, rec))
        time.sleep(2)
Notice that if I remove all instances of rec (my recursion counter), the program runs fine. (You can see all outputs for values i,j = 3.)
Can someone point out how to correct my code, or propose a different method of finding how many times the Ackermann function has called itself?
Also, I've noticed that setting a limit of 5000 would crash my Python kernel very fast. Is there an upper limit?
I use the latest Anaconda.
EDIT
I tried to implement the same function using a list as a parameter with the following data [i,j,output,#recursion]
import time
import sys

sys.setrecursionlimit(3000)

def ackermann(*rec):
    rec = list(rec)
    print(rec)  # see the data as they initialize the function
    if rec[0][0] == 0:
        rec[0][1] = rec[0][1] + 1
        rec[0][2] = rec[0][1] + 1
    elif rec[0][1] == 0:
        rec[0][0] = rec[0][0] - 1
        rec[0][1] = 1
        rec = ackermann()
        rec[0][3] = rec[0][3] + 1
    else:
        rec[0][0] = rec[0][0] - 1
        rec[0][1] = ackermann()
        rec = ackermann()
        rec[0][3] = rec[0][3] + 1
    return rec

for i in range(5):
    for j in range(5):
        rec = [i, j, 0, 0]
        print(ackermann(rec))
        time.sleep(1)
But this time I get an IndexError: list index out of range, because for some unknown reason my list gets emptied.
OUTPUT:
[[0, 0, 0, 0]]
[[0, 1, 2, 0]]
[[0, 1, 0, 0]]
[[0, 2, 3, 0]]
[[0, 2, 0, 0]]
[[0, 3, 4, 0]]
[[0, 3, 0, 0]]
[[0, 4, 5, 0]]
[[0, 4, 0, 0]]
[[0, 5, 6, 0]]
[[1, 0, 0, 0]]
[]
The problem with the original implementation is that
return output, rec
will happily create a tuple when output and rec are both numbers, which is true whenever i=0. But once you get to i=1, j=0, the function calls ackermann on (0,1,rec), which returns a tuple, to which it then cannot add the integer rec, hence the error message. I believe I have kept that idea almost unchanged, except that rather than trying to pass and return rec, I made it global (ugly, I know). I also reformatted the output so I could read it better. Thus:
import time
import sys

sys.setrecursionlimit(3000)

def ackermann(i, j):
    global rec
    output = None
    if i == 0:
        output = j + 1
    elif j == 0:
        output = ackermann(i - 1, 1)
        rec = rec + 1
    else:
        output = ackermann(i - 1, ackermann(i, j - 1))
        rec = rec + 1
    return output

for i in range(5):
    for j in range(5):
        rec = 0
        print
        print("ack(" + str(i) + "," + str(j) + ") = " + str(ackermann(i, j)))
        print("rec = " + str(rec))
        print
        time.sleep(1)
and the output, before erroring out, is,
ack(0,0) = 1
rec = 0
ack(0,1) = 2
rec = 0
ack(0,2) = 3
rec = 0
ack(0,3) = 4
rec = 0
ack(0,4) = 5
rec = 0
ack(1,0) = 2
rec = 1
ack(1,1) = 3
rec = 2
ack(1,2) = 4
rec = 3
ack(1,3) = 5
rec = 4
ack(1,4) = 6
rec = 5
ack(2,0) = 3
rec = 3
ack(2,1) = 5
rec = 8
ack(2,2) = 7
rec = 15
ack(2,3) = 9
rec = 24
ack(2,4) = 11
rec = 35
ack(3,0) = 5
rec = 9
ack(3,1) = 13
rec = 58
ack(3,2) = 29
rec = 283
ack(3,3) = 61
rec = 1244
ack(3,4) = 125
rec = 5213
ack(4,0) = 13
rec = 59
It seems to me there are only one or two other values (it will choke on 4,2 I believe, no matter what, so you would need to get 5, 0 first) you could hope to get out this way, no matter how much you tinker.
I am a little troubled that rec appears to exceed the recursion limit, but I think Python must be interpreting along the way somehow, so that it gets deeper than one might think, or that I don't fully understand sys.recursionlimit (I looked at rec a few times, and at the very least I followed your lead on calculating it; also, as a sanity check I switched the order of incrementing it and the function call and got the same results).
EDIT: I added another parameter to track how deeply any particular call ever recurses. That turns out to be typically less than (and at most one more than) "rec." rec represents (actually 1 less than) how many times the function is called to make the particular calculation, but not all of these need be on the Python interpreter stack simultaneously.
Revised code:
import time
import sys

sys.setrecursionlimit(3000)

def ackermann(i, j, d):
    global rec
    global maxDepth
    if d > maxDepth:
        maxDepth = d
    output = None
    if i == 0:
        output = j + 1
    elif j == 0:
        rec = rec + 1
        output = ackermann(i - 1, 1, d + 1)
    else:
        rec = rec + 1
        output = ackermann(i - 1, ackermann(i, j - 1, d + 1), d + 1)
    return output

for i in range(5):
    for j in range(5):
        rec = 0
        maxDepth = 0
        print
        print("ack(" + str(i) + "," + str(j) + ") = " + str(ackermann(i, j, 1)))
        print("rec = " + str(rec))
        print("maxDepth = " + str(maxDepth))
        print
        time.sleep(1)
revised output (before it gives up)
ack(0,0) = 1
rec = 0
maxDepth = 1
ack(0,1) = 2
rec = 0
maxDepth = 1
ack(0,2) = 3
rec = 0
maxDepth = 1
ack(0,3) = 4
rec = 0
maxDepth = 1
ack(0,4) = 5
rec = 0
maxDepth = 1
ack(1,0) = 2
rec = 1
maxDepth = 2
ack(1,1) = 3
rec = 2
maxDepth = 3
ack(1,2) = 4
rec = 3
maxDepth = 4
ack(1,3) = 5
rec = 4
maxDepth = 5
ack(1,4) = 6
rec = 5
maxDepth = 6
ack(2,0) = 3
rec = 3
maxDepth = 4
ack(2,1) = 5
rec = 8
maxDepth = 6
ack(2,2) = 7
rec = 15
maxDepth = 8
ack(2,3) = 9
rec = 24
maxDepth = 10
ack(2,4) = 11
rec = 35
maxDepth = 12
ack(3,0) = 5
rec = 9
maxDepth = 7
ack(3,1) = 13
rec = 58
maxDepth = 15
ack(3,2) = 29
rec = 283
maxDepth = 31
ack(3,3) = 61
rec = 1244
maxDepth = 63
ack(3,4) = 125
rec = 5213
maxDepth = 127
ack(4,0) = 13
rec = 59
maxDepth = 16
In your edited version of the code, you used *args in your def for ackermann and made it explicitly a list, and you get eleven output lists containing a four-element list each, until on the twelfth recursion you get an empty list. So, did the first eleven lists contain the expected elements according to the Ackermann constraints? Also, on the twelfth recursion, you say the list was "emptied." I wonder, for analytical purposes, if it might make sense to say instead that it wasn't filled in the first place. That is, not that something emptied it, but that something didn't fill it as expected on the twelfth time through.
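As an alternative way to count calls without threading a counter through the arguments or using a global, a counting decorator works well. This is my own sketch, not from the thread; note that it counts every invocation of the function, which is not quite the same bookkeeping as the rec values above (those are incremented once per branch rather than once per call):

import sys
from functools import wraps

sys.setrecursionlimit(3000)

def count_calls(f):
    """Wrap f so that every call (including recursive ones) increments wrapper.calls."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return f(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@count_calls
def ackermann(i, j):
    if i == 0:
        return j + 1
    if j == 0:
        return ackermann(i - 1, 1)
    return ackermann(i - 1, ackermann(i, j - 1))

# stop at i=3: ackermann(4, 1) and beyond would blow past any practical recursion limit
for i in range(4):
    for j in range(5):
        ackermann.calls = 0
        print(f"ack({i},{j}) = {ackermann(i, j)}, calls = {ackermann.calls}")

Because the recursive calls go through the module-level name ackermann, which now refers to the wrapper, every level of recursion is counted automatically.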
