Coin Change Optimization - dynamic-programming

I'm trying to solve this problem:
Suppose I have a set of n coins {a_1, a_2, ..., a_n}. A coin with value
1 will always appear. What is the minimum number of coins I
need to reach M?
The constraints are:
1 ≤ n ≤ 25
1 ≤ M ≤ 10^6
1 ≤ a_i ≤ 100
Ok, I know that it's the Change-making problem.
I have tried to solve this problem using Breadth-First Search, Dynamic Programming and Greedy (which is incorrect, since it doesn't always give the best solution). However, I get Time Limit Exceeded (the limit is 3 seconds).
So I wonder if there's an optimization for this problem.
The description and the constraints caught my attention, but I don't know how to use them in my favour:
A coin with value 1 will always appear.
1 ≤ a_i ≤ 100
I saw on Wikipedia that this problem can also be solved by "Dynamic programming with the probabilistic convolution tree", but I could not understand anything.
Can you help me?
This problem can be found here: http://goo.gl/nzQJem

Let a_n be the largest coin. Use these two clues:
the result is >= ceil(M/a_n),
the result configuration has a lot of a_n's.
It is best to start with the maximum number of a_n's and then check whether a better result exists with fewer a_n's, for as long as a better result is still possible.
Something like: let R({a_1, ..., a_n}, M) be the function that returns the result for a given problem. Then R can be implemented:
num_a_n = floor(M / a_n)
best_r = num_a_n + R({a_1, ..., a_(n-1)}, M - a_n*num_a_n)
while num_a_n > 0:
    num_a_n = num_a_n - 1
    # Check whether it is possible at all to get a better result
    if num_a_n + ceil((M - a_n*num_a_n) / a_(n-1)) >= best_r:
        return best_r
    next_r = num_a_n + R({a_1, ..., a_(n-1)}, M - a_n*num_a_n)
    if next_r < best_r:
        best_r = next_r
return best_r
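
Here is a runnable Python version of that pruned recursion (a sketch; the function names are mine, not the answerer's, and it assumes a coin of value 1 is present, as the statement guarantees):

import math

def min_coins(coins, M):
    # coins contains a 1, so every amount is reachable
    coins = sorted(coins)

    def R(k, m):
        # minimum number of coins from coins[0..k] summing exactly to m
        if m == 0:
            return 0
        if k == 0:
            return m  # only the 1-coin is left
        a = coins[k]
        num = m // a  # start with the maximum number of the largest coin
        best = num + R(k - 1, m - a * num)
        while num > 0:
            num -= 1
            # prune: even filling the rest with coins[k-1] cannot beat best
            if num + math.ceil((m - a * num) / coins[k - 1]) >= best:
                return best
            best = min(best, num + R(k - 1, m - a * num))
        return best

    return R(len(coins) - 1, M)

print(min_coins([1, 2, 5], 11))  # 3 (5 + 5 + 1)

The pruning bound is exactly the answer's ceil test: once even the optimistic estimate using the next-largest coin cannot beat the current best, the loop stops.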

Related

Divide a value in Excel by a set of preset values to find out how many of each are needed

I am curious if there is a way to make my life easier. In Excel I am producing a total value, say 750, and I need to find out how many orders of pipe I need from the values 50, 100, 200, 250, 500. Is there any way to have Excel take a value and then return how many of each of these numbers I would need, so for the 750 case one 500 and one 250?
Currently the solution is just worked out in my head
Assuming you want to try to fit pipes in decreasing order of size, and that you have access to the required functions, you can use REDUCE as demonstrated here to step through the sizes and successively divide by each one, although the formula is a little laboured:
=LET(pipes,{500;250;200;100;50},reqd,750,DROP(REDUCE(0,pipes,
LAMBDA(a,c,VSTACK(a,QUOTIENT(reqd-IF(ROWS(a)>1,SUM(DROP(a,1)*TAKE(pipes,ROWS(a)-1)),0),c)))),1))
As pointed out by @Jos Woolley, this may not give you the answer you want if the total is something like 749. It will fit as many values in as possible and give the result 500+200, total 700 (remainder 49). You could fix it perhaps by rounding up to the next multiple of 50.
For the example of 823, you would have:
=LET(pipes,{500;250;200;100;50},reqd,CEILING(823,MIN(pipes)),DROP(REDUCE(0,pipes,
LAMBDA(a,c,VSTACK(a,QUOTIENT(reqd-IF(ROWS(a)>1,SUM(DROP(a,1)*TAKE(pipes,ROWS(a)-1)),0),c)))),1))
which gives 500+250+100=850.
Well, I've got a bit obsessed with this now, and I am determined to get a lambda working to find the optimal answer! I have been looking at the brute-force solution for finding the minimum number of coins required to make up a given total, in the reference mentioned previously, and have managed to translate it into a lambda using REDUCE:
Mincoins1 = LAMBDA(coins, m, v,
    IF(
        v <= 0,
        0,
        REDUCE(
            999,
            coins,
            LAMBDA(a, c,
                IF(v >= c, LET(mc, mincoins1.mincoins1(coins, m, v - c) + 1, IF(mc < a, mc, a)), a)
            )
        )
    )
)
This does give the correct answer, 2, for the case when you want to make up a value of 400 from the list of pipes given. The next step will be to modify the code to return the list of pipes which give that total (200,200).
https://www.enjoyalgorithms.com/blog/minimum-coin-change
Here is the lambda modified to return a string containing the chosen pipes:
Mincoins2 = LAMBDA(coins, m, v,
    IF(
        v <= 0,
        "",
        REDUCE(
            REPT("x", 999),
            coins,
            LAMBDA(a, c,
                IF(v >= c, LET(mc, c & "|" & mincoins2.mincoins2(coins, m, v - c), IF(LEN(mc) < LEN(a), mc, a)), a)
            )
        )
    )
);
It does work, BUT (and this is a big but) it hits a limit as soon as the value to be produced exceeds 1000, and you get a #VALUE! error. Disappointing, but interesting, I think, as a proof of concept.
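
As a point of comparison outside Excel, the same recursion is easy to memoize in Python (a sketch; the function name is mine). With lru_cache each sub-value is solved only once, which is the usual DP, instead of the exponential re-evaluation the plain recursive lambda performs:

from functools import lru_cache

def min_pipes(pipes, value):
    # top-down counterpart of the Mincoins1 recursion, memoized
    @lru_cache(maxsize=None)
    def best(v):
        if v == 0:
            return 0
        options = [1 + best(v - p) for p in pipes if p <= v]
        return min(options) if options else float("inf")  # inf = unreachable
    return best(value)

print(min_pipes((500, 250, 200, 100, 50), 400))  # 2 (200 + 200)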
Not sure I understand the question, but let's try.
If you have 1450 to divide, have a formula that divides 1450 by your highest length (750) and then rounds it down.
So the formula would be something along the lines of: =ROUNDDOWN(1450 / 750; 0)
You will then get the answer that you need 1 of the length 750.
Then keep track of how much length you have remaining, with a formula like:
=1450 - 750 * [the answer from the previous formula = 1]. This leaves 700.
Then start over with the same thing, but divide 700 by 500 (the second largest size).
Your question is extremely difficult: one might think of this easy solution, starting with value_begin:
amount_of_500 = value_begin DIV 500; // integer division
temp = value_begin - 500 * amount_of_500;
amount_of_250 = temp DIV 250; // again integer division
temp = temp - 250 * amount_of_250;
amount_of_200 = temp DIV 200; // again integer division
temp = temp - 200 * amount_of_200;
...
However, this will not work because of the value 200, which is far too close to 250: just start with value_begin equal to 400 (algorithm solution: 250 + 100 + 50, while the best solution is 200 + 200).
Are you sure you need both 200 and 250 as possible numbers to divide by? If yes, you might have a serious problem getting this implemented.
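A short Python sketch (the helper name is mine) makes the failure concrete: largest-first greedy spends three pipes on 400, while the optimum, which the Mincoins lambdas above find, needs only two:

def greedy_fit(sizes, target):
    # largest-first greedy, as described above; not always optimal
    chosen = []
    for s in sorted(sizes, reverse=True):
        while target >= s:
            chosen.append(s)
            target -= s
    return chosen, target  # leftover that could not be fitted

print(greedy_fit([500, 250, 200, 100, 50], 400))  # ([250, 100, 50], 0): 3 pipes
# the optimal answer is [200, 200]: 2 pipes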

Maximum sum of XOR of a selected element with all array elements, with an optimized approach

Problem: Choose an element from the array that maximizes the sum after XORing it with all elements in the array.
Input for problem statement:
N=3
A=[15,11,8]
Output:
11
Approach: choose 15, since
(15^15)+(15^11)+(15^8) = 0+4+7 = 11
My code for the brute-force approach:
def compute(N, A):
    ans = 0
    for i in A:
        xor_sum = 0
        for j in A:
            xor_sum += (i ^ j)
        if xor_sum > ans:
            ans = xor_sum
    return ans
The above approach gives the correct answer, but I want to optimize it to run in O(n) time. Please help me to get this.
If you have integers with a fixed (constant) number of c bits, then it should be possible, because O(c) = O(1). For simplicity I assume unsigned integers and n to be odd. If n is even, then we sometimes have to check both paths in the tree (see the solution below). You can adapt the algorithm to cover even n and negative numbers.
find the max in the array of length n: O(n)
if max == 0, return 0 (the array contains only 0s)
find the position p of the most significant bit of max: O(c) = O(1)
    p = -1
    while (max != 0)
        p++
        max /= 2
    so 1 << p gives a mask for the highest set bit
build a tree where the leaves are the numbers and every level stands for a bit position. If there is an edge to the left from the root, then some number has bit p set; if there is an edge to the right, some number has bit p not set. For the next level there is an edge to the left if some number has bit p - 1 set, and an edge to the right if some number has bit p - 1 not set, and so on. This can be done in O(cn) = O(n)
go through the array and count how many times the bit at position i (for i from 0 to p) is set => sum array: O(cn) = O(n)
assign the root of the tree to node x
now for each i from p down to 0 do the following:
    if x has only one edge => x becomes its only child node
    else if sum[i] > n / 2 => x becomes its right child node
    else x becomes its left child node
in this step we choose the best path through the tree, the one that yields the most ones when XORing: O(cn) = O(n)
XOR all the elements in the array with the value of x and sum them up to get the result: O(n). In fact you could have built the result already in the previous step, by adding (n - sum[i]) * (1 << i) when going left and sum[i] * (1 << i) when going right
All the sequential steps are O(n) and therefore overall the algorithm is also O(n).
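
Since the chosen x must itself come from the array, the tree is not strictly necessary: the same sum array from the steps above (called cnt below) already lets you score every candidate in O(c), for O(nc) = O(n) overall with fixed-width integers. A sketch under the answer's assumptions (non-negative integers; the function name is mine):

def max_xor_sum(A):
    # cnt[i] = how many elements have bit i set; for a candidate x,
    # bit i contributes (n - cnt[i]) << i if x has bit i set
    # (the elements with a 0 there differ), else cnt[i] << i
    n = len(A)
    c = max(max(A).bit_length(), 1)  # width of the inputs
    cnt = [sum((a >> i) & 1 for a in A) for i in range(c)]
    best = 0
    for x in A:  # the chosen element must come from the array
        s = sum((((n - cnt[i]) if (x >> i) & 1 else cnt[i]) << i) for i in range(c))
        best = max(best, s)
    return best

print(max_xor_sum([15, 11, 8]))  # 11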

Why is KL divergence giving nan? Is it a mathematical error, or is my input data incorrect?

In the following code, s returns nan. As each value in Q < 1, it returns a negative value when I take its log. Does it mean that I cannot calculate the KL divergence with these values of P and Q, or can I fix it?
import numpy as np

P = np.array([1.125, 3.314, 2.7414])
Q = np.array([0.42369288, 0.89152044, 0.60905852])
for i in range(len(P)):
    if P[i] != 0 and Q[i] != 0:
        s = P[i] * np.log(P[i] / Q[i])
        print("s: ", s)
First off, P and Q should describe probability mass functions, meaning that each element should be in the interval [0, 1] and each should sum to 1, which is not the case for your examples.
The second np.log is wrong. Is there a reason you put it there or was it a typo? It should be P[i]*np.log(P[i]/Q[i]). You also want to perform the sum over all these terms for i.
Finally, there is the technical issue of what to do if P[i] = 0. In that case np.log(0) would cause problems. The actual contribution of the term should be 0 in that case (because lim_{x->0} x*log(x) = 0). You can guarantee this, e.g. by handling this case specially with an if clause.
The case of Q[i] = 0 would cause similar issues; however, the KL divergence doesn't exist if Q[i] = 0 while P[i] != 0, anyway.
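Putting those three points together, a hedged sketch (the normalization step and the function name are mine, not from the question):

import numpy as np

def kl_divergence(p, q):
    # normalize so both sum to 1: KL is defined for probability distributions
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    mask = p > 0  # terms with p == 0 contribute 0 (lim x->0 of x*log x = 0)
    if np.any(q[mask] == 0):
        return np.inf  # q == 0 where p > 0: the divergence is infinite
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

P = np.array([1.125, 3.314, 2.7414])
Q = np.array([0.42369288, 0.89152044, 0.60905852])
print(kl_divergence(P, Q))  # a finite, non-negative value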

What is the approach to solving SPOJ KPMATRIX?

The problem link is here. The problem is basically to count all sub-matrices of a given N-by-M matrix whose sum of elements is between A and B inclusive. N, M <= 250; -10^9 <= A <= B <= 10^9.
People have solved it using DP and BIT. I am not clear how.
First, I tried to solve a simpler version, the 1-D case of the above problem: given an array A of length N, count all subarrays where the sum of the elements lies between A and B. But I still couldn't think of anything better than O(n^2). Here is what I did:
I thought of keeping a prefix-sum array of the original array, say prefix[N], where prefix[i] = A[1] + A[2] + ... + A[i], with prefix[1] = A[1]. Then for each i from 2 to N, the problem is to count all j <= i such that the sum Z = A[j] + A[j+1] + ... + A[i] = prefix[i] - prefix[j-1] lies between A and B. But it's still O(n^2), as for each i, j ranges over i places.
Can anybody help me, step by step, to advance this approach to solve the main problem?
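
No full answer here, but to make the 1-D counting concrete: keeping the earlier prefix sums in sorted order answers each i by binary search (a sketch; the helper name is mine). Replacing the sorted list with a BIT over coordinate-compressed prefix values gives the standard O(n log n) version, and running that over every pair of rows, with the columns between them collapsed into a 1-D array, is the usual way the N-by-M problem is solved:

from bisect import bisect_left, bisect_right, insort

def count_subarrays(arr, lo_bound, hi_bound):
    # count subarrays whose sum lies in [lo_bound, hi_bound]:
    # sum(arr[j..i]) = prefix[i] - prefix[j-1], so for each i count
    # earlier prefixes inside [prefix[i] - hi_bound, prefix[i] - lo_bound]
    seen = [0]  # prefix[0] = 0, kept sorted
    prefix = 0
    total = 0
    for x in arr:
        prefix += x
        total += bisect_right(seen, prefix - lo_bound) - bisect_left(seen, prefix - hi_bound)
        insort(seen, prefix)  # O(n) list insert; a BIT makes this O(log n)
    return total

print(count_subarrays([1, -2, 3], 0, 2))  # 3: [1], [1,-2,3], [-2,3]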

Analyzing the time complexity of Coin changing

We're doing the classic problem of determining the number of ways that we can make change that amounts to Z given a set of coins.
For example, Amount=5 and Coins={1, 2, 3}. One way we can make 5 is {2, 3}.
The naive recursive solution has a factorial time complexity:
f(n) = n * f(n-1) = n!
My professor argued that it actually has a time complexity of O(2^n), because we only choose to use a coin or not. That intuitively makes sense. However, how come my recurrence doesn't work out to be O(2^n)?
EDIT:
My recurrence is as follows:
              f(5, {1, 2, 3})
             /              \       .....
    f(4, {2, 3})     f(3, {1, 3})   .....
Notice how the branching factor decreases by 1 at every step.
Formally.
T(n) = n*T(n-1) = n!
The recurrence doesn't work out to what you expect because it doesn't reflect the number of operations performed by the algorithm.
If the algorithm decides for each coin whether to output it or not, then you can model its time complexity with the recurrence T(n) = 2*T(n-1) + O(1) with T(1)=O(1); the intuition is that for each coin you have two options---output the coin or not; this obviously solves to T(n)=O(2^n).
I too was trying to analyze the time complexity of the brute force, which performs a depth-first search:
def countCombinations(coins, n, amount, k=0):
    if amount == 0:
        return 1
    res = 0
    for i in range(k, n):
        if coins[i] <= amount:
            remaining_amount = amount - coins[i]  # considering this coin, try for the remaining sum
            # in the next round this coin may be included again
            res += countCombinations(coins, n, remaining_amount, i)
    return res
but we can see that a coin used in one round can be used again in the next round, so at each stage we have up to n choices; this is like permutation with repetition, n^r, for n items arranged into r positions.
ex: [1, 1, 1, 1]; sum = 4
This will generate a recursion tree where, on the first path, we literally have solutions at each diverged subpath until we reach sum = 0. So the time complexity is O(n^sum), i.e. for each stage on the path towards sum we have n different subpaths.
Note, however, that there is another algorithm which uses the take/not-take approach, where a node in the recursion tree has at most 2 branches. Hence the time complexity for this algorithm is O(2^(n*m)).
ex: say coins = [1, 1], sum = 2. There are 11 nodes/points to visit in the recursion tree for 6 paths (leaves), so the complexity is at most 2^(2*2) => 2^4 => 16 (visiting 11 nodes against a maximum of 16 possibilities is correct, but a little loose as an upper bound).
def get_count(coins, n, sum):
    if n == 0:  # no coins left to try for a combination that matches the sum
        return 0
    if sum == 0:  # no more sum left to match; we have exactly matched our target
        return 1  # (return success)
    # exclude the last coin from the sum calculation: leave it and try the rest
    excluded = get_count(coins, n-1, sum)
    included = 0
    if coins[n-1] <= sum:
        # include the last coin in the sum calculation, so reduce the sum by its value;
        # note that n stays constant, i.e. the coin supply is unlimited
        # (we can choose the same coin again and again)
        included = get_count(coins, n, sum - coins[n-1])
    return included + excluded
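
For contrast with the exponential recursions above (this DP is not part of the thread's analysis, just the standard counting solution), the bottom-up table computes the number of ways in O(n * amount):

def count_ways(coins, total):
    # ways[s] = number of coin combinations summing to s;
    # iterating coins in the outer loop counts combinations, not permutations
    ways = [1] + [0] * total
    for c in coins:
        for s in range(c, total + 1):
            ways[s] += ways[s - c]
    return ways[total]

print(count_ways([1, 2, 3], 5))  # 5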
