def rec_coin_dynam(target,coins,known_results):
    '''
    INPUT: This function takes in a target amount and a list of possible coins to use.
    It also takes a third parameter, known_results, indicating previously calculated results.
    The known_results parameter should be started with [0] * (target+1)
    OUTPUT: Minimum number of coins needed to make the target.
    '''
    # Default output to target
    min_coins = target
    # Base Case
    if target in coins:
        known_results[target] = 1
        return 1
    # Return a known result if it happens to be greater than 0
    elif known_results[target] > 0:
        return known_results[target]
    else:
        # for every coin value that is <= target
        for i in [c for c in coins if c <= target]:
            # Recursive call, note how we include the known results!
            num_coins = 1 + rec_coin_dynam(target-i,coins,known_results)
            # Reset minimum if we have a new minimum
            if num_coins < min_coins:
                min_coins = num_coins
                # Reset the known result
                known_results[target] = min_coins
        return min_coins
This runs perfectly fine, but I have a few questions about it.
We give it the following input to run:
target = 74
coins = [1,5,10,25]
known_results = [0]*(target+1)
rec_coin_dynam(target,coins,known_results)
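(For this input the call returns 8, since 74 = 25 + 25 + 10 + 10 + 1 + 1 + 1 + 1 is the fewest coins possible.)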
Why are we initializing known_results with zeros, of length target+1? Why can't we just write
known_results = []
Notice that the code contains lines such as:
known_results[target] = 1
return known_results[target]
known_results[target] = min_coins
Now, let me demonstrate the difference between [] and [0]*something in the python interactive shell:
>>> a = []
>>> b = [0]*10
>>> a
[]
>>> b
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>>
>>> a[3] = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: list assignment index out of range
>>>
>>> b[3] = 1
>>>
>>> a
[]
>>> b
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
The exception IndexError: list assignment index out of range was raised because we tried to access cell 3 of list a, but a has size 0; there is no cell 3. We could put a value in a using a.append(1), but then the 1 would be at position 0, not at position 3.
There was no exception when we accessed cell 3 of list b, because b has size 10, so any index between 0 and 9 is valid.
Conclusion: if you know in advance the size that your array will have, and this size never changes during the execution of the algorithm, then you might as well begin with an array of the appropriate size, rather than with an empty array.
What is the size of known_results? The algorithm needs results for values ranging from 0 to target. How many results is that? Exactly target+1. For instance, if target = 2, then the algorithm will deal with results for 0, 1 and 2; that's 3 different results. Thus known_results must have size target+1. Note that in python, just like in almost every other programming language, a list of size n has n elements, indexed 0 to n-1. In general, in an integer interval [a, b], there are b-a+1 integers. For instance, there are three integers in interval [8, 10] (those are 8, 9 and 10).
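As an aside (my addition, not part of the original answer): if you would rather not pre-size anything, a dict-based memo avoids the problem entirely, because you can test membership instead of indexing into a fixed-size list. A minimal sketch of that variant of the function above:
def rec_coin_dict(target, coins, known_results=None):
    """Same algorithm, but memoized with a dict, so no pre-sizing is needed."""
    if known_results is None:
        known_results = {}
    if target in coins:
        return 1
    if target in known_results:
        return known_results[target]
    min_coins = target  # worst-case default, mirroring the original code
    for c in (c for c in coins if c <= target):
        min_coins = min(min_coins, 1 + rec_coin_dict(target - c, coins, known_results))
    known_results[target] = min_coins
    return min_coins

print(rec_coin_dict(74, [1, 5, 10, 25]))  # 8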
Related
I have one list a containing 100 lists and one list x containing 4 lists (all of equal length). I want to test the lists in a against those in x. My goal is to find out how often the lists in a "touch" those in x. Stated differently, all the lists are points on a line, and the lines in a should not touch (or cross) those in x.
EDIT
In the code, I am testing each line in a (e.g. a1, a2 ... a100) first against x1, then against x2, x3 and x4. A condition and a counter check whether the a's touch the x's. Note: I am not interested in counting how many items in a1, for example, touch x1. Once a1 and x1 touch, I count that and can move on to a2, and so on.
However, the counter does not update properly. It seems that it does not test a against all x. Any suggestions on how to solve this? Here is my code.
EDIT
I have updated the code so that the problem is easier to replicate.
import numpy as np

x = [[10, 11, 12], [14, 15, 16]]
a = [[11, 10, 12], [15, 17, 20], [11, 14, 16]]

def touch(e, f):
    e = np.array(e)
    f = np.array(f)
    lastitems = []
    counter = 0
    for lst in f:
        if np.all(e < lst):  # This is the condition
            lastitems.append(lst[-1])  # This allows checking the end values
        else:
            counter += 1
    c = counter
    return c

touch = touch(x, a)
print(touch)
The result I get is:
2
But I expect this:
1
2
I'm unsure exactly what result you expect; your example and description are still not clear. Anyway, this is what I guess you want. If you want more details, you can uncomment some lines, i.e. those starting with #:
i = 0
for j in x:
    print("")
    #print(j)
    counter = 0
    for k in a:
        inters = set(j).intersection(k)
        #print(k)
        #print(inters)
        if inters:
            counter += 1
            #print("yes", counter)
        #else:
            #print("nope", counter)
    print(i, counter)
    i += 1
which prints
0 2
1 2
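As a side note (my condensation of the same logic), enumerate can replace the manual index i, and sum can replace the counter:
for i, j in enumerate(x):
    # count the lists in a that share at least one value with j
    counter = sum(1 for k in a if set(j).intersection(k))
    print(i, counter)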
I am trying to solve the "Counting Change" problem with memoization.
Consider the following problem: How many different ways can we make change of $1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a function to compute the number of ways to change any given amount of money using any set of currency denominations?
And the intuitive solution with recursion:
The number of ways to change an amount a using n kinds of coins equals
the number of ways to change a using all but the first kind of coin, plus
the number of ways to change the smaller amount a - d using all n kinds of coins, where d is the denomination of the first kind of coin.
#+BEGIN_SRC python :results output
# cache = {} # add cache
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]  # d for denomination
    return count_change(a, kinds[1:]) + count_change(a - d, kinds)

print(count_change(100))
#+END_SRC
#+RESULTS:
: 292
I try to take advantage of memoization:
Signature: count_change(a, kinds=(50, 25, 10, 5, 1))
Source:
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]
    cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
    return cache[a]
It works properly for small numbers like
In [17]: count_change(120)
Out[17]: 494
but fails on big numbers:
In [18]: count_change(11000)
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-18-52ba30c71509> in <module>
----> 1 count_change(11000)
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
... last 1 frames repeated, from the frame below ...
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
RecursionError: maximum recursion depth exceeded in comparison
What's the problem with the memoization solution?
In the memoized version, the count_change function has to take into account the highest index of coin you can use when you make the recursive call, so that you can use the already calculated values. (Your version has two problems: the cache is written but never read back, so it prevents no work, and it is keyed only on the amount a, so results for different kinds tuples would overwrite each other. The uncached recursion still runs about a levels deep and exceeds the recursion limit.)
def count_change(n, k, kinds):
    if n < 0:
        return 0
    if (n, k) in cache:
        return cache[n, k]
    if k == 0:
        v = 1
    else:
        v = count_change(n - kinds[k], k, kinds) + count_change(n, k - 1, kinds)
    cache[n, k] = v
    return v
You can try:
cache = {}
count_change(120, 4, [1, 5, 10, 25, 50])
gives 494
while:
cache = {}
count_change(11000, 4, [1, 5, 10, 25, 50])
outputs: 9930221951
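As an alternative sketch (my addition, not part of the answer above), Python's functools.lru_cache can do the same memoization automatically, keying the cache on both arguments, since kinds is a hashable tuple. The recursion limit still has to be raised, because the penny chain recurses once per unit of the amount:
import sys
from functools import lru_cache

# Chains like count_change(a - 1, (1,)) recurse once per unit of a,
# so the default limit (~1000) is too small for large amounts.
sys.setrecursionlimit(20000)

@lru_cache(maxsize=None)
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or not kinds:
        return 0
    # The cache key is (a, kinds), so different coin sets never collide.
    return count_change(a, kinds[1:]) + count_change(a - kinds[0], kinds)

print(count_change(120))    # 494
print(count_change(11000))  # 9930221951, matching the answer above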
I need to convert words to numbers for an RSA cipher, so I found code that can convert text to decimal, but when I run it in the terminal with Python 3 I get:
Traceback (most recent call last):
File "test.py", line 49, in <module>
numberOutput = int(bit_list_to_string(string_to_bits(inputString)),2) #1976620216402300889624482718775150
File "test.py", line 31, in string_to_bits
map(chr_to_bit, s)
File "test.py", line 30, in <listcomp>
return [b for group in
File "test.py", line 29, in chr_to_bit
return pad_bits(convert_to_bits(ord(c)), ASCII_BITS)
File "test.py", line 14, in pad_bits
assert len(bits) <= pad
AssertionError
When I use "python convert_text_to_decimal.py" in the terminal, it works correctly.
Code:
BITS = ('0', '1')
ASCII_BITS = 8

def bit_list_to_string(b):
    """converts list of {0, 1}* to string"""
    return ''.join([BITS[e] for e in b])

def seq_to_bits(seq):
    return [0 if b == '0' else 1 for b in seq]

def pad_bits(bits, pad):
    """pads seq with leading 0s up to length pad"""
    assert len(bits) <= pad
    return [0] * (pad - len(bits)) + bits

def convert_to_bits(n):
    """converts an integer `n` to bit array"""
    result = []
    if n == 0:
        return [0]
    while n > 0:
        result = [(n % 2)] + result
        n = n / 2
    return result

def string_to_bits(s):
    def chr_to_bit(c):
        return pad_bits(convert_to_bits(ord(c)), ASCII_BITS)
    return [b for group in
            map(chr_to_bit, s)
            for b in group]

def bits_to_char(b):
    assert len(b) == ASCII_BITS
    value = 0
    for e in b:
        value = (value * 2) + e
    return chr(value)

def list_to_string(p):
    return ''.join(p)

def bits_to_string(b):
    return ''.join([bits_to_char(b[i:i + ASCII_BITS])
                    for i in range(0, len(b), ASCII_BITS)])

inputString = "attack at dawn"
numberOutput = int(bit_list_to_string(string_to_bits(inputString)), 2)  # 1976620216402300889624482718775150
bitSeq = seq_to_bits(bin(numberOutput)[2:])  # [2:] is needed to get rid of 0b in front
paddedString = pad_bits(bitSeq, len(bitSeq) + (8 - (len(bitSeq) % 8)))  # Need to pad because conversion from dec to bin throws away MSBs
outputString = bits_to_string(paddedString)  # attack at dawn
When I run it with just python, that is version 2.7. Please help me fix this code for Python 3.
In convert_to_bits, change the line
n = n / 2
to
n = n // 2
This solves the immediate error you get (and another one that follows from it). The rest of the routine may or may not work for your purposes; I did not check any further.
You get the assertion error because the function convert_to_bits should, theoretically speaking, return a proper list of single-bit values for a valid integer in its range. It calculates this list by repeatedly dividing the integer by 2 until 0 remains.
However.
One of the more significant changes from Python 2.7 to 3.x was the behavior of the division operator. Previously, dividing one integer by another always returned an integer; in Python 3 it was decided to have / return a float instead, with integer (floor) division moved to the separate // operator.
That means the simple bit calculation loop
while n > 0:
    result = [(n % 2)] + result
    n = n / 2
no longer produces a steady list of 0s and 1s, ending once the source integer runs out of bits; instead you get a list of more than a thousand floating point numbers. At a glance it may be unclear what that list represents, but as it ends with
… 1.03125, 0.0625, 0.125, 0.25, 0.5, 1]
you can see it's the divide-by-two loop that keeps on dividing until its input finally runs out of floating point accuracy and stops dividing further.
The resulting array is not only way, way larger than the next routines expect; its data is also of the wrong type. The values in this list are used as indices into the BITS tuple at the top of your code. With floating point division, you get an error when trying to use such a value as an index, even if it is a round 0.0 or 1.0. The integer division, again, fixes this.
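For reference, this is how the corrected loop behaves under Python 3 (a minimal sketch of the fixed function, not the full program):
def convert_to_bits(n):
    """Converts a non-negative integer n to a bit list, most significant bit first."""
    result = []
    if n == 0:
        return [0]
    while n > 0:
        result = [n % 2] + result
        n = n // 2  # floor division keeps n an integer in Python 3
    return result

print(convert_to_bits(97))  # [1, 1, 0, 0, 0, 0, 1] -- ord('a') == 97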
I'm new to Python 3. I'm trying to write code that takes a matrix as its argument and computes and prints the QR factorization using the modified Gram-Schmidt algorithm. I'm trying to use nested for loops for the code and not use NumPy at all. I have attached my code below; any help would be greatly appreciated. Thank you in advance.
def twoNorm(vector):
    '''
    twoNorm takes a vector as its argument. It then computes the sum of
    the squares of each element of the vector. It then returns the square
    root of this sum.
    '''
    # This variable will keep track of the validity of our input.
    inputStatus = True
    # This for loop will check each element of the vector to see if it's a number.
    for i in range(len(vector01)):
        if ((type(vector01[i]) != int) and (type(vector01[i]) != float) and (type(vector01[i]) != complex)):
            inputStatus = False
            print("Invalid Input")
    # If the input is valid the function continues to compute the 2-norm
    if inputStatus == True:
        result = 0
        # This for loop will compute the sum of the squares of the elements of the vector.
        for i in range(len(vector01)):
            result = result + (vector01[i]**2)
        result = result**(1/2)
        return result

def QR(matrix):
    r[i][i] = twoNorm(vector01)
    return [vector01 * (1/(twoNorm(vector01))) for i in matrix]
    for j in range(len(matrix)):
        r[i][j] = q[i] * vector02[i]
        vector02 = vector02[i] - (r[i][j] * q[i])

matrix = [[1, 2], [0, 1], [1, 0]]
vector01 = [1, 0, 1]
vector02 = [2, 1, 0]
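Since the QR function above is incomplete, here is a minimal pure-Python sketch of modified Gram-Schmidt as a reference point (my own illustration, not the poster's code; the helper names dot, two_norm and mgs_qr are mine, and input validation is omitted for brevity):
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def two_norm(v):
    """Euclidean norm of a vector."""
    return dot(v, v) ** 0.5

def mgs_qr(A):
    """Modified Gram-Schmidt QR of an m x n matrix A (list of rows).
    Returns (Q, R): Q is m x n with orthonormal columns, R is n x n upper triangular."""
    m, n = len(A), len(A[0])
    V = [[A[i][j] for i in range(m)] for j in range(n)]  # columns of A, orthogonalized in place
    Q = [None] * n                                       # orthonormal columns
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        R[i][i] = two_norm(V[i])
        Q[i] = [x / R[i][i] for x in V[i]]
        for j in range(i + 1, n):  # remove the q_i component from every later column
            R[i][j] = dot(Q[i], V[j])
            V[j] = [vj - R[i][j] * qi for vj, qi in zip(V[j], Q[i])]
    Q_rows = [[Q[j][i] for j in range(n)] for i in range(m)]  # back to row-major
    return Q_rows, R

Q, R = mgs_qr([[1, 2], [0, 1], [1, 0]])
print(Q)  # columns [1,0,1]/sqrt(2) and [1,1,-1]/sqrt(3)
print(R)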
A certain string-processing language offers a primitive operation
which splits a string into two pieces. Since this operation involves
copying the original string, it takes n units of time for a string of
length n, regardless of the location of the cut. Suppose, now, that
you want to break a string into many pieces.
The order in which the breaks are made can affect the total running
time. For example, suppose we wish to break a 20-character string (for
example "abcdefghijklmnopqrst") after characters at indices 3, 8, and
10 to obtain four substrings: "abcd", "efghi", "jk" and "lmnopqrst". If
the breaks are made in left-right order, then the first break costs 20
units of time, the second break costs 16 units of time and the third
break costs 11 units of time, for a total of 47 steps. If the breaks
are made in right-left order, the first break costs 20 units of time,
the second break costs 11 units of time, and the third break costs 9
units of time, for a total of only 40 steps. However, the optimal
solution is 38 (and the order of the cuts is 10, 3, 8).
The input is the length of the string and an ascending-sorted array with the cut indexes. I need to design a dynamic programming table to find the minimal cost to break the string and the order in which the cuts should be performed.
I can't figure out how the table structure should look (certain cells should be the answers to certain sub-problems and should be computable from other entries, etc.). Instead, I've written a recursive function to find the minimum cost to break the string, where b0, b1, ..., bK are the indexes of the cuts that have to be made to the (sub)string between i and j:
totalCost(i, j, {b0, b1, ..., bK}) = j - i + 1 + min {
                                 totalCost(b0 + 1, j, {b1, b2, ..., bK}),
    totalCost(i, b1, {b0})     + totalCost(b1 + 1, j, {b2, b3, ..., bK}),
    totalCost(i, b2, {b0, b1}) + totalCost(b2 + 1, j, {b3, b4, ..., bK}),
    ...
    totalCost(i, bK, {b0, b1, ..., b(K-1)})
} if K + 1 (the number of cuts) > 1,
j - i + 1 otherwise.
Please help me figure out the structure of the table, thanks!
For example, we have a string of length n = 20 and we need to break it at positions cuts = [3, 8, 10]. First of all, let's add two fake cuts to our array: -1 and n - 1 (to avoid edge cases); now we have cuts = [-1, 3, 8, 10, 19]. Let's fill a table M, where M[i, j] is the minimum number of time units needed to make all breaks between the i-th and j-th cuts. We can fill it by the rule: M[i, j] = (cuts[j] - cuts[i]) + min(M[i, k] + M[k, j]) where i < k < j. The minimum time to make all cuts will be in the cell M[0, len(cuts) - 1]. Full code in Python:
# input
n = 20
cuts = [3, 8, 10]
# add fake cuts
cuts = [-1] + cuts + [n - 1]
cuts_num = len(cuts)
# init table with zeros
table = []
for i in range(cuts_num):
    table += [[0] * cuts_num]

# fill table
for diff in range(2, cuts_num):
    for start in range(0, cuts_num - diff):
        end = start + diff
        table[start][end] = 1e9
        for mid in range(start + 1, end):
            table[start][end] = min(table[start][end],
                                    table[start][mid] + table[mid][end])
        table[start][end] += cuts[end] - cuts[start]
# print result: 38
print(table[0][cuts_num - 1])
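The question also asks for the order in which the cuts should be made; a sketch of one way to recover it (my addition, not part of the original answer) is to record the best mid in a parallel table and read the order off recursively:
n = 20
cuts = [-1, 3, 8, 10, 19]  # with the fake cuts already added
m = len(cuts)
table = [[0] * m for _ in range(m)]
best = [[None] * m for _ in range(m)]  # best[i][j]: first cut to make between cuts i and j

for diff in range(2, m):
    for start in range(m - diff):
        end = start + diff
        table[start][end] = float("inf")
        for mid in range(start + 1, end):
            cost = table[start][mid] + table[mid][end]
            if cost < table[start][end]:
                table[start][end] = cost
                best[start][end] = mid
        table[start][end] += cuts[end] - cuts[start]

def cut_order(start, end):
    """Return the cut positions between start and end, in the order they should be made."""
    mid = best[start][end]
    if mid is None:
        return []
    return [cuts[mid]] + cut_order(start, mid) + cut_order(mid, end)

print(table[0][m - 1])      # 38
print(cut_order(0, m - 1))  # [10, 3, 8]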
In case you find it easier to follow when everything is 1-based (same as the DPV Dasgupta Algorithms book, problem 6.9, and the UdaCity Graduate Algorithms course from GaTech), the following Python code does the same thing as the previous Python code by Jemshit and Aleksei. It follows the chain-multiplication (binary tree) pattern taught in the video lecture.
import numpy as np
# n is the string length; P is of size m, where P[i] is the split position that
# splits the string into [1, i] and [i+1, n] (1-based)
def spliting_cost(P, n):
    P = [0,] + P + [n,]  # make sure the position list contains both ends of the string
    m = len(P)
    P = [0,] + P  # both C and P are 1-based indexed for easy reading
    C = np.full((m + 1, m + 1), np.inf)
    for i in range(1, m + 1):
        C[i, i:i+2] = 0  # any segment spanning <= 2 positions needs no split, so zero cost
    for s in range(2, m):  # s is the split segment length
        for i in range(1, m - s + 1):
            j = i + s
            for k in range(i, j + 1):
                C[i, j] = min(C[i, j], P[j] - P[i] + C[i, k] + C[k, j])
    return C[1, m]
spliting_cost([3, 5, 10, 14, 16, 19], 20)
The output answer is 55, the same as with split points [2, 4, 9, 13, 15, 18] in the previous (0-based) algorithm.