I have a homework assignment in which I have to write a program that outputs the change to be given by a vending machine using the lowest number of coins. E.g. £3.67 can be dispensed as 1x£2 + 1x£1 + 1x50p + 1x10p + 1x5p + 1x2p.
However, I'm not getting the right answers and suspect that this might be due to a rounding problem.
change=float(input("Input change"))
twocount=0
onecount=0
halfcount=0
pttwocount=0
ptonecount=0
while change!=0:
    if change-2>=0:
        change=change-2
        twocount+=1
    else:
        if change-1>=0:
            change=change-1
            onecount+=1
        else:
            if change-0.5>=0:
                change=change-0.5
                halfcount+=1
            else:
                if change-0.2>=0:
                    change=change-0.2
                    pttwocount+=1
                else:
                    if change-0.1>=0:
                        change=change-0.1
                        ptonecount+=1
                    else:
                        break
print(twocount,onecount,halfcount,pttwocount,ptonecount)
RESULTS:
Input: 2.3
Output: 10010
i.e. 2.2
Input: 3.4
Output: 11011
i.e. 3.3
Some actually work:
Input: 3.2
Output: 11010
i.e. 3.2
Input: 1.1
Output: 01001
i.e. 1.1
Floating point accuracy
Your approach is correct, but as you guessed, the rounding errors are causing trouble. This can be debugged by simply printing the change variable and information about which branch your code took on each iteration of the loop:
initial value: 3.4
taking a 2... new value: 1.4
taking a 1... new value: 0.3999999999999999 <-- uh oh
taking a 0.2... new value: 0.1999999999999999
taking a 0.1... new value: 0.0999999999999999
1 1 0 1 1
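For reference, here is a minimal sketch of the kind of instrumentation that produces a trace like the one above (it flattens the nested ifs into a loop over the coin values, which is an assumption on my part, and the exact trailing digits may vary by platform):

change = 3.4
print("initial value:", change)
for coin in (2, 1, 0.5, 0.2, 0.1):
    while change - coin >= 0:
        change -= coin
        print(f"taking a {coin}... new value: {change}")
print("left over:", change)  # roughly 0.0999..., never exactly 0, so the coins only reach 3.3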
If you wish to keep floats for input and output, multiply by 100 and cast to an integer with int(round(change * 100)) on the way in, and divide by 100 on the way out of your function, allowing you to operate on integers in between.
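A sketch of that conversion, assuming the amount is entered in pounds:

change = float(input("Input change"))
pence = int(round(change * 100))  # e.g. 3.67 -> 367; all further arithmetic is exact
# ... perform all coin calculations on the integer `pence` ...
print("as pounds:", pence / 100)  # divide by 100 only when displaying the result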
Additionally, without the 5p, 2p and 1p values you'll be restricted in the precision you can handle, so don't forget to add those. Multiplying all of the values in your code by 100 gives:
initial value: 340
taking a 200... new value: 140
taking a 100... new value: 40
taking a 20... new value: 20
taking a 20... new value: 0
1 1 0 2 0
Avoid deeply nested conditionals
Beyond the decimal issue, the nested conditionals make your logic very difficult to reason about. This is a common code smell; the more you can eliminate branching, the better. If you find yourself going beyond about 3 levels deep, stop and think about how to simplify.
Additionally, with a lot of branching and hand-typed code, it's very likely that a subtle bug or typo will go unnoticed or that a denomination will be left out.
Use data structures
Consider using dictionaries and lists in place of blocks like:
twocount=0
onecount=0
halfcount=0
pttwocount=0
ptonecount=0
which can be elegantly and extensibly represented as:
denominations = [200, 100, 50, 20, 10, 5, 2, 1]
used = {x: 0 for x in denominations}
In terms of efficiency, you can use math to handle the amount for each denomination in one fell swoop. Divide the remaining amount by each available denomination in descending order to determine how many of each coin will be chosen, and subtract accordingly. We can now write a single simple loop over the denominations and eliminate branching completely:
for val in denominations:
    used[val] += amount // val
    amount -= val * used[val]
and print or inspect the final value of used, e.g. for 278 pence:
278 => {200: 1, 100: 0, 50: 1, 20: 1, 10: 0, 5: 1, 2: 1, 1: 1}
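Putting the pieces together, a complete sketch might look like this (reading the amount directly in pence is an assumption, to sidestep the float conversion discussed above):

amount = int(input("Input change in pence: "))  # e.g. 278

denominations = [200, 100, 50, 20, 10, 5, 2, 1]
used = {x: 0 for x in denominations}

for val in denominations:
    used[val] += amount // val   # how many of this coin fit into what's left
    amount -= val * used[val]    # remove their value from the remaining amount

print(used)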
The end result of this is that we've reduced 27 lines down to 5 while improving efficiency, maintainability and dynamism.
By the way, if the denominations were a different currency, it's not guaranteed that this greedy approach will work. For example, if our available denominations are 25, 20 and 1 cents and we want to make change for 63 cents, the optimal solution is 6 coins (3x 20 and 3x 1). But the greedy algorithm produces 15 (2x 25 and 13x 1). Once you're comfortable with the greedy approach, research and try solving the problem using a non-greedy approach.
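If you want to explore that, here is a minimal dynamic-programming sketch for the 25/20/1 counterexample above (my own illustration, not part of the original answer), where best[a] is the fewest coins needed to make amount a:

def min_coins(amount, denominations):
    # best[a] = fewest coins that make amount a, or None if impossible
    best = [0] + [None] * amount
    for a in range(1, amount + 1):
        candidates = [best[a - d] for d in denominations if d <= a and best[a - d] is not None]
        best[a] = min(candidates) + 1 if candidates else None
    return best[amount]

print(min_coins(63, [25, 20, 1]))  # 6, versus the 15 coins chosen greedily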
Related
I made a program to find primes below a given number.
number = int(input("Enter number: "))
prime_numbers = [2]  # First prime is needed.
for number_to_be_checked in range(3, number + 1):
    square_root = number_to_be_checked ** 0.5
    for checker in prime_numbers:  # Checker will become
        # every prime number below the 'number_to_be_checked'
        # variable because we are adding all the prime numbers
        # in the 'prime_numbers' list.
        if checker > square_root:
            prime_numbers.append(number_to_be_checked)
            break
        elif number_to_be_checked % checker == 0:
            break
print(prime_numbers)
This program checks every number below the number given as the input. But primes greater than 3 are only of the form 6k ± 1. Therefore, instead of checking all the numbers, I defined a generator that generates all the numbers of the form 6k ± 1 below the number given as the input. (I also added 3 to the prime_numbers list while initializing it, as 2 and 3 cannot be of the form 6k ± 1.)
def potential_primes(number: int) -> int:
    """Generate the numbers potential to be prime"""
    # Prime numbers are always of the form 6k ± 1.
    number_for_function = number // 6
    for k in range(1, number_for_function + 1):
        yield 6*k - 1
        yield 6*k + 1
Obviously, the program should have been much faster because I am checking far fewer numbers. But, counterintuitively, the program is slower than before. What could be the reason behind this?
In every six consecutive numbers, three are even and one more is an odd multiple of 3. The other two are coprime to 6, so they are potentially prime:
6k+0  6k+1  6k+2  6k+3  6k+4  6k+5
even        even        even
3x                3x
For the three evens your primality check uses only one division (by 2) and for the 4th number, two divisions. In all, five divisions that you seek to avoid.
But each call to a generator has its cost too. If you just replace the call to range with a call to your generator, but leave the other code as is (*), you are not realizing the full savings potential.
Why? Because (*) in that case, while you indeed test only 1/3 of the numbers now, you still test each of them for divisibility by 2 and 3. Needlessly. And apparently the cost of using the generator outweighs those savings.
The point of this technique, known as wheel factorization, is to not test the 6-coprime numbers (in this case) against the primes which, by construction, are already known not to be their divisors.
Thus, you should start with e.g. prime_numbers = [5, 7] and use that in your divisibility-testing loop, rather than the full list of primes starting with 2 and 3, which you do not need.
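A minimal sketch of that idea (my own code, not the answerer's): generate only the 6k ± 1 candidates and trial-divide them only by previously found primes of at least 5, since by construction the candidates are never divisible by 2 or 3:

def primes_up_to(number):
    if number < 5:
        return [p for p in (2, 3) if p <= number]
    primes = [2, 3]
    test_primes = []                 # primes >= 5, the only divisors worth trying
    candidates = []
    k = 1
    while 6 * k - 1 <= number:
        candidates.append(6 * k - 1)
        if 6 * k + 1 <= number:
            candidates.append(6 * k + 1)
        k += 1
    for cand in candidates:
        limit = cand ** 0.5
        is_prime = True
        for p in test_primes:
            if p > limit:
                break
            if cand % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(cand)
            test_primes.append(cand)
    return primes

print(primes_up_to(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]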
Using a nested for loop along with a square root will be heavy on computation; instead, look at a prime sieve algorithm such as the Sieve of Eratosthenes, which is much faster but does take some memory.
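For example, a minimal Sieve of Eratosthenes sketch (my own illustration):

def sieve(n):
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for multiple in range(i * i, n + 1, i):  # cross off every multiple of i
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]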
One way to use the 6n±1 idea is to alternate step sizes in the main loop by stepping 2 then 4. My Python is not good, so this is pseudocode:
function listPrimes(n)
    // Deal with low numbers.
    if (n < 2) return []
    if (n = 2) return [2]
    if (n = 3) return [2, 3]
    // Main loop
    primeList ← [2, 3]
    limit ← 1 + sqrt(n)  // Calculate square root once.
    index ← 5            // We have checked 2 and 3 already.
    step ← 2             // Starting step value: 5 + 2 = 7.
    while (index <= limit) {
        if (isPrime(index)) {
            primeList.add(index)
        }
        index ← index + step
        step ← 6 - step  // Alternate steps of 2 and 4
    }
    return primeList
end function
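A rough Python rendering of the alternating 2/4 step idea (the trial-division is_prime helper, and looping the candidates all the way up to n rather than to the sqrt-based limit, are assumptions on my part):

def is_prime(m):
    """Trial division, testing only 2, 3 and divisors of the form 6k ± 1."""
    if m < 2:
        return False
    if m in (2, 3):
        return True
    if m % 2 == 0 or m % 3 == 0:
        return False
    d, step = 5, 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += step
        step = 6 - step          # alternate steps of 2 and 4
    return True

def list_primes(n):
    if n < 2:
        return []
    primes = [p for p in (2, 3) if p <= n]
    index, step = 5, 2
    while index <= n:
        if is_prime(index):
            primes.append(index)
        index += step
        step = 6 - step          # candidates: 5, 7, 11, 13, 17, 19, ...
    return primes

print(list_primes(30))           # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]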
I have a problem that I have not been able to figure out yet. In this problem I want to assemble chocolates in an ordered way. To do this I have a function called chocolate() and seven inputs, which are:
final weight (27)
number of smaller chocolates (4)
weight of smaller chocolates (2)
number of medium chocolates (4)
weight of medium chocolates (5)
number of large chocolates (3)
weight of large chocolates (8)
The problem wants me to use as many large chocolates as possible, then medium ones, then small ones, and finally return (number of small chocolates in the final assembly, number of medium chocolates in the final assembly, number of large chocolates in the final assembly). It actually looks easy, but I am not sure how to write a function with multiple inputs that returns multiple outputs. If you can help me with how to start, I would be very glad, and I hope to continue from there and write a proper program.
Here is an example:
input: 27 4 2 4 5 3 8
output: 3 1 2
I imagine that your goal is to pack the chocolates, starting with the big ones first and trying to reach the target weight if possible.
The function below returns the packed chocolate counts and also the packed weight, as it may not always be possible to match the target weight exactly.
As you may know, this is a known problem called 'bin packing', with better solutions than the 'first-fit decreasing' approach below (which assumes only one bin).
def chocolates(final_weight, num_small_choc, w_small_choc, num_medium_choc, w_medium_choc, num_big_choc, w_big_choc):
    packed_weight = 0
    # Try to pack chocolates if they are not too big
    def pack_choc(packed_weight, num_choc, w_choc):
        while num_choc > 0:
            if packed_weight + w_choc <= final_weight:
                packed_weight += w_choc
                num_choc -= 1
            else:
                break
        return packed_weight, num_choc
    packed_weight, remaining_big_choc = pack_choc(packed_weight, num_big_choc, w_big_choc)
    packed_weight, remaining_medium_choc = pack_choc(packed_weight, num_medium_choc, w_medium_choc)
    packed_weight, remaining_small_choc = pack_choc(packed_weight, num_small_choc, w_small_choc)
    return num_small_choc - remaining_small_choc, num_medium_choc - remaining_medium_choc, num_big_choc - remaining_big_choc, packed_weight
>>> chocolates(27, 4, 2, 4, 5, 3, 8)
(1, 0, 3, 26)
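If an exact match like the 3 1 2 in your example is required, a small brute-force search works for inputs of this size. This is my own sketch, not part of the answer above; the tie-breaking rule (prefer heavier packings, then more large chocolates, then more medium ones) is an assumption:

from itertools import product

def chocolates_exact(final_weight, num_small, w_small, num_medium, w_medium, num_big, w_big):
    best = None
    # Try every feasible combination of counts and keep the best one.
    for n_big, n_med, n_small in product(range(num_big + 1), range(num_medium + 1), range(num_small + 1)):
        weight = n_big * w_big + n_med * w_medium + n_small * w_small
        if weight > final_weight:
            continue
        key = (weight, n_big, n_med)  # heavier packings first, then more big, then more medium
        if best is None or key > best[0]:
            best = (key, (n_small, n_med, n_big))
    return best[1]

print(chocolates_exact(27, 4, 2, 4, 5, 3, 8))  # (3, 1, 2)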
I'm trying to understand what scipy.stats.nbinom.rvs is returning. Here is a sample of code:
from scipy.stats import nbinom

for i in range(10):
    x = nbinom.rvs(n=20, p=0.5, size=1)
    print(str(i) + ": " + str(x[0]))
I thought this was basically saying: how many trials did it take to get 20 successes when flipping a coin (p=0.5)? But a sample of my output shows some returns well below 20, and since it's impossible to get 20 successes in 8 flips, I clearly don't understand the return value. Help please.
Sample output:
0: 19
1: 25
2: 14
3: 24
4: 30
5: 8
6: 28
7: 21
8: 14
9: 30
I've looked at the docs online, but just seeing "random variates" isn't very helpful.
From the docstring of scipy.stats.nbinom:
The probability mass function of the number of failures for `nbinom` is:
.. math::
f(k) = \binom{k+n-1}{n-1} p^n (1-p)^k
for :math:`k \ge 0`.
`nbinom` takes :math:`n` and :math:`p` as shape parameters where n is the
number of successes, whereas p is the probability of a single success.
So the values that you see are the number of "failures" that occur before achieving n "successes".
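If what you actually want is the total number of flips needed to reach 20 heads, you can simply add n to each draw. A small sketch using the same parameters as your code (my own illustration):

from scipy.stats import nbinom

failures = nbinom.rvs(n=20, p=0.5, size=5)  # tails seen before the 20th head
total_flips = failures + 20                 # successes + failures = total trials
print(failures, total_flips)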
There is a note on the wikipedia page for the negative binomial distribution that is worth repeating here:
Different texts adopt slightly different definitions for the negative binomial distribution. They can be distinguished by whether the support starts at k = 0 or at k = r, whether p denotes the probability of a success or of a failure, and whether r represents success or failure, so it is crucial to identify the specific parametrization used in any given text.
So I am working on a problem which needs me to get the factors of a certain number. As usual, I am using the modulo operator % to see whether the number divided by a candidate leaves a remainder of zero. But whenever I try to do this I keep getting a ZeroDivisionError. I tried writing the loop as for potenial in range(number + 1): so that Python does not start counting from zero but from one, but this does not seem to work. Below is the rest of my code; any help will be appreciated.
def Factors(number):
    factors = []
    for potenial in range(number + 1):
        if number % potenial == 0:
            factors.append(potenial)
    return factors
In your for loop you are iterating from 0 (range() assumes the starting value to be 0 if only one argument is given) up to number. There is a ZeroDivisionError because you are calculating number modulo 0 (number % 0) on the first iteration of the loop: when calculating the modulo, Python tries to divide number by 0, causing the ZeroDivisionError. Here is the corrected code (with the indentation fixed):
def get_factors(number):
    factors = []
    for potential in range(1, number + 1):
        if number % potential == 0:
            factors.append(potential)
    return factors
However, there are better ways of calculating factors. For example, you can iterate only up to sqrt(n), where n is the number, and then calculate "factor pairs": e.g. if 3 is a factor of 15, then 15/3, which is 5, is also a factor of 15.
I encourage you to try to implement a more efficient algorithm along those lines.
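For reference, a minimal sketch of the sqrt(n) factor-pair idea (my own code, not something you need to copy):

def get_factors_fast(number):
    factors = set()
    divisor = 1
    while divisor * divisor <= number:
        if number % divisor == 0:
            factors.add(divisor)             # e.g. 3 for 15...
            factors.add(number // divisor)   # ...and its pair 15 // 3 == 5
        divisor += 1
    return sorted(factors)

print(get_factors_fast(15))  # [1, 3, 5, 15]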
Stylistic note: According to PEP 8, function names should be lowercase with words separated by underscores. Uppercase names generally indicate class definitions.
Remember back in primary school when you learned to carry numbers?
Example:
  123
+ 127
-----
  250
You carry the 1 from 3+7 over to the next column, and change the first column to 0?
Anyway, what I am getting at is that I want to make a program that calculates how many carries the addition of the two numbers produces.
The way I am doing it is converting both numbers to strings, splitting them into individual digits, and turning those back into integers. After that, I am going to run through the columns one at a time, and when a column's sum is 2 digits long, I will take 10 off it and carry to the next column, calculating as I go.
The problem is, I barely know how to do that, and it also sounds pretty slow.
Here is my code so far.
numberOne = input('Number: ')
numberTwo = input('Number: ')
listOne = [int(i) for i in str(numberOne)]
listTwo = [int(i) for i in str(numberTwo)]
And then... I am at a loss for what to do. Could anyone please help?
EDIT:
Some clarification.
This should work with floats as well.
This only counts the number of times a carry happens, not the amount carried: 9+9+9 will be 1, and 9+9 will also be 1.
The numbers are not the same length.
>>> import itertools
>>> def countCarries(n1, n2):
...     n1, n2 = str(n1), str(n2)  # turn the numbers into strings
...     carry, answer = 0, 0       # no carry pending, and no carries counted yet
...     for one, two in itertools.zip_longest(n1[::-1], n2[::-1], fillvalue='0'):  # corresponding digits, least significant first
...         carry = (int(one) + int(two) + carry) // 10  # 1 if this column overflows, else 0
...         answer += carry                              # count the carry if one occurred
...     return answer
...
>>> countCarries(127, 123)
1
>>> countCarries(127, 173)
2
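Since your edit notes that the numbers may not be the same length, it is the fillvalue='0' in zip_longest that pads the shorter number with zeros. For example (my own example):

>>> countCarries(999, 1)
3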