function with multiple outputs in python 3

I have a problem that I could not figure out yet. In this problem I want to assemble chocolates in an ordered way. To do this I have a function called chocolate() with seven inputs, which are:
final weight (27)
number of smaller chocolates (4)
weight of smaller chocolates (2)
number of medium chocolates (4)
weight of medium chocolates (5)
number of large chocolates (3)
weight of large chocolates (8)
The problem wants me to use as many large chocolates as possible, then medium, then small ones, and finally return (number of small chocolates in the final assembly, number of medium chocolates in the final assembly, number of large chocolates in the final assembly). It actually looks easy, but I am not sure how to write a function that takes multiple inputs and returns multiple outputs. If you can help me with how to start, I would be very glad and will continue writing a proper solution.
Here is an example:
input: 27 4 2 4 5 3 8
output: 3 1 2

I assume your goal is to pack the chocolates, starting with the biggest ones and trying to reach the target weight exactly if possible.
The function below returns the number of chocolates of each size that were packed, and also the packed weight, since it may not always be possible to match the target weight exactly.
As you may know, this is a known problem called 'bin packing' with better solutions than the 'first-fit decreasing' approach below (which assumes only one bin).
def chocolates(final_weight, num_small_choc, w_small_choc, num_medium_choc, w_medium_choc, num_big_choc, w_big_choc):
    packed_weight = 0

    # Try to pack chocolates if they are not too big
    def pack_choc(packed_weight, num_choc, w_choc):
        while num_choc > 0:
            if packed_weight + w_choc <= final_weight:
                packed_weight += w_choc
                num_choc -= 1
            else:
                break
        return packed_weight, num_choc

    packed_weight, remaining_big_choc = pack_choc(packed_weight, num_big_choc, w_big_choc)
    packed_weight, remaining_medium_choc = pack_choc(packed_weight, num_medium_choc, w_medium_choc)
    packed_weight, remaining_small_choc = pack_choc(packed_weight, num_small_choc, w_small_choc)

    return num_small_choc - remaining_small_choc, num_medium_choc - remaining_medium_choc, num_big_choc - remaining_big_choc, packed_weight
>>> chocolates(27, 4, 2, 4, 5, 3, 8)
(1, 0, 3, 26)
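To address the multiple-outputs part of the question directly: a Python function that returns several comma-separated values really returns a single tuple, which the caller can unpack into separate names. Using the function above:

small_used, medium_used, big_used, packed_weight = chocolates(27, 4, 2, 4, 5, 3, 8)
print(small_used, medium_used, big_used)  # 1 0 3
print(packed_weight)                      # 26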

Related

Program does not run faster as expected when checking much less numbers for finding primes

I made a program to find primes below a given number.
number = int(input("Enter number: "))
prime_numbers = [2]  # First prime is needed.
for number_to_be_checked in range(3, number + 1):
    square_root = number_to_be_checked ** 0.5
    for checker in prime_numbers:
        # 'checker' will become every prime number below
        # 'number_to_be_checked', because we add all the
        # prime numbers to the 'prime_numbers' list.
        if checker > square_root:
            prime_numbers.append(number_to_be_checked)
            break
        elif number_to_be_checked % checker == 0:
            break
print(prime_numbers)
This program checks every number below the number given as the input. But primes greater than 3 are only of the form 6k ± 1. Therefore, instead of checking all the numbers, I defined a generator that generates all the numbers of the form 6k ± 1 below the number given as the input. (I also added 3 to the prime_numbers list when initializing it, since 2 and 3 cannot be of the form 6k ± 1.)
def potential_primes(number: int) -> int:
    """Generate the numbers potential to be prime"""
    # Prime numbers are always of the form 6k ± 1.
    number_for_function = number // 6
    for k in range(1, number_for_function + 1):
        yield 6*k - 1
        yield 6*k + 1
Obviously, the program should have been much faster because I am checking far fewer numbers. But counterintuitively, the program is slower than before. What could be the reason behind this?
In every six consecutive numbers, three are even and one more is an odd multiple of 3. The other two are coprime to 6, and so are potentially prime:
6k+0   6k+1   6k+2   6k+3   6k+4   6k+5
even          even          even
3x                   3x
For each of the three even numbers, your primality check uses only one division (by 2), and for the fourth non-coprime number (the odd multiple of 3) it uses two divisions. In all, five divisions per block of six that you are trying to avoid.
But each call to a generator has its cost too. If you just replace the call to range with a call to your generator but leave the other code as is, you are not realizing the full savings potential.
Why? Because in that case, while you indeed test only 1/3 of the numbers now, you still divide each of them by 2 and 3, needlessly. And apparently the overhead of using the generator outweighs those savings.
The point of this technique, known as wheel factorization, is to avoid testing the 6-coprime numbers (in this case) against the primes which, by construction, are already known not to be their divisors.
Thus, you should start with e.g. prime_numbers = [5, 7] and use that in your divisibility-testing loop, rather than the full list of primes starting with 2 and 3, which you do not need.
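A rough sketch of that idea (my own illustration, not code from the thread): keep the list of trial divisors starting at 5, so the 6k ± 1 candidates are never divided by 2 or 3.

def primes_up_to(number):
    # Trial division over 6k ± 1 candidates only; the divisor list starts
    # at 5 because numbers of that form are never divisible by 2 or 3.
    primes_ge_5 = []
    candidate, step = 5, 2
    while candidate <= number:
        square_root = candidate ** 0.5
        is_prime = True
        for checker in primes_ge_5:
            if checker > square_root:
                break
            if candidate % checker == 0:
                is_prime = False
                break
        if is_prime:
            primes_ge_5.append(candidate)
        candidate += step
        step = 6 - step  # alternate +2 / +4 to stay on the 6k ± 1 pattern
    return [p for p in (2, 3) if p <= number] + primes_ge_5

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]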
Using a nested for loop along with a square-root check will be heavy on computation; instead, look at a prime sieve algorithm (e.g. the Sieve of Eratosthenes), which is much faster but does take some memory.
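For reference, a minimal Sieve of Eratosthenes sketch (an illustration, not code from the thread):

def sieve_primes(n):
    # Mark multiples of each prime as composite; everything left is prime.
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve_primes(20))  # [2, 3, 5, 7, 11, 13, 17, 19]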
One way to use the 6n±1 idea is to alternate step sizes in the main loop by stepping 2 then 4. My Python is not good, so this is pseudocode:
function listPrimes(n)
    // Deal with low numbers.
    if (n < 2) return []
    if (n = 2) return [2]
    if (n = 3) return [2, 3]

    // Main loop
    primeList ← [2, 3]
    index ← 5              // We have checked 2 and 3 already.
    step ← 2               // Starting step value: 5 + 2 = 7.
    while (index <= n) {
        if (isPrime(index)) {
            primeList.add(index)
        }
        index ← index + step
        step ← 6 - step    // Alternate steps of 2 and 4.
    }
    return primeList
end function
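For anyone who prefers runnable code, here is one possible Python rendering of that pseudocode (my translation, with a simple trial-division is_prime that also steps over 6k ± 1 divisors):

def is_prime(n):
    # Trial division; the divisors themselves follow the 6k ± 1 pattern.
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    divisor, step = 5, 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += step
        step = 6 - step  # test divisors 5, 7, 11, 13, 17, 19, ...
    return True

def list_primes(n):
    # Deal with low numbers.
    if n < 2:
        return []
    if n == 2:
        return [2]
    # Main loop: start at 5 and alternate steps of 2 and 4.
    primes = [2, 3]
    index, step = 5, 2
    while index <= n:
        if is_prime(index):
            primes.append(index)
        index += step
        step = 6 - step
    return primes

print(list_primes(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]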

Average size of intersection of random samples drawn from the same population

Let's say we have an urn with N balls, and we draw several random samples of random sizes from this urn (we put the balls back in the urn after each sampling, but each individual sample is drawn without replacement).
I need to compute the average size of each sample after removing the elements that appear in at least 2 samples.
For example, if N = 2 and we have one sample of 1 element and one sample of 2 elements, the average size after removing the intersection will be 0 for the first sample and 1 for the second one.
If N = 3, and the first sample has 1 element and the second sample has 2 elements, I think the element in the first sample has a 2/3 chance of being in the other sample, so the expected size of the first sample would be 1/3, and the expected size of the second sample 2 - 2/3 = 4/3.
I'm struggling to find a formula to compute this for any situation; I guess it can be done with numbers of combinations and the sample sizes.
I have small values for N (less than 100), for the sample sizes (less than 10) and for the number of samples (2, 3 or 4).
I could easily approximate this with a Monte Carlo simulation (see the code below), but it would be faster to apply the correct formula directly.
Maybe this would be more understandable with some Python code which computes an approximation of what I want:
from random import sample, randint

def simulations(population_size, samples_sizes, iterations_count=10000):
    population = range(population_size)
    samples_count = len(samples_sizes)
    average_intersection_size = [0 for _ in range(samples_count)]
    for iteration in range(1, iterations_count + 1):  # start from 1
        # generate random samples
        samples = []
        for sample_index in range(samples_count):
            samples.append(sample(population, samples_sizes[sample_index]))
        # count items overlapping
        for sample_index in range(samples_count):
            # retrieve intersection size with the union of other samples
            union_of_others = set()
            for other_sample_index in range(samples_count):
                if other_sample_index == sample_index:
                    # we skip current sample
                    continue
                union_of_others |= set(samples[other_sample_index])
            n = len(set(samples[sample_index]) & union_of_others)
            # incremental mean...
            delta = n - average_intersection_size[sample_index]
            average_intersection_size[sample_index] += delta / iteration
    # output results
    print(f'population size is {population_size}')
    for sample_index in range(samples_count):
        print(
            f'sample {sample_index + 1}: original_size={samples_sizes[sample_index]}, new_size={samples_sizes[sample_index] - average_intersection_size[sample_index]}')

population_size = 12
samples_count = randint(2, 4)
samples_sizes = [randint(1, population_size) for _ in range(samples_count)]
simulations(population_size, samples_sizes)
"""Example outputs
population size is 12
sample 1: original_size=10, new_size=1.859099999999989
sample 2: original_size=4, new_size=0.18249999999999833
sample 3: original_size=7, new_size=0.51390000000002
sample 4: original_size=4, new_size=0.1847000000000114
population size is 12
sample 1: original_size=1, new_size=0.4761000000000001
sample 2: original_size=5, new_size=3.821899999999997
sample 3: original_size=2, new_size=1.0762999999999967
population size is 12
sample 1: original_size=4, new_size=0.0
sample 2: original_size=4, new_size=0.0
sample 3: original_size=6, new_size=0.0
sample 4: original_size=12, new_size=2.6712000000000167
population size is 12
sample 1: original_size=8, new_size=7.335500000000002
sample 2: original_size=1, new_size=0.33550000000000246
population size is 12
sample 1: original_size=9, new_size=0.7495999999999565
sample 2: original_size=11, new_size=2.7495999999999565
"""
My goal is to avoid simulation and apply the exact formula directly instead.
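For what it's worth, the 2/3 reasoning in the question generalizes: since the samples are drawn independently, a given element of one sample appears in another sample of size s_j with probability s_j / N, so the expected surviving size of sample i is s_i multiplied by the product of (1 - s_j / N) over the other samples. A small sketch of that formula (my derivation from the reasoning above, to be checked against the simulation, not an established answer from the thread):

def expected_unique_sizes(population_size, samples_sizes):
    # Expected size of each sample after removing elements that also
    # appear in at least one other (independent) sample.
    expected = []
    for i, size in enumerate(samples_sizes):
        keep_probability = 1.0
        for j, other_size in enumerate(samples_sizes):
            if j != i:
                # probability that a given element avoids sample j
                keep_probability *= 1 - other_size / population_size
        expected.append(size * keep_probability)
    return expected

print(expected_unique_sizes(3, [1, 2]))   # [0.333..., 1.333...] as in the question
print(expected_unique_sizes(12, [8, 1]))  # [7.333..., 0.333...], close to the simulated 7.3355 and 0.3355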

problem with rounding in calculating minimum amount of coins in change (python)

I have a homework assignment in which I have to write a program that outputs the change to be given by a vending machine using the lowest number of coins. E.g. £3.67 can be dispensed as 1x£2 + 1x£1 + 1x50p + 1x10p + 1x5p + 1x2p.
However, I'm not getting the right answers and suspect that this might be due to a rounding problem.
change=float(input("Input change"))
twocount=0
onecount=0
halfcount=0
pttwocount=0
ptonecount=0
while change!=0:
    if change-2>=0:
        change=change-2
        twocount+=1
    else:
        if change-1>=0:
            change=change-1
            onecount+=1
        else:
            if change-0.5>=0:
                change=change-0.5
                halfcount+=1
            else:
                if change-0.2>=0:
                    change=change-0.2
                    pttwocount+=1
                else:
                    if change-0.1>=0:
                        change=change-0.1
                        ptonecount+=1
                    else:
                        break
print(twocount,onecount,halfcount,pttwocount,ptonecount)
RESULTS:
Input: 2.3
Output: 10010
i.e. 2.2
Input: 3.4
Output: 11011
i.e. 3.3
Some actually work:
Input: 3.2
Output: 11010
i.e. 3.2
Input: 1.1
Output: 01001
i.e. 1.1
Floating point accuracy
Your approach is correct, but as you guessed, the rounding errors are causing trouble. This can be debugged by simply printing the change variable and information about which branch your code took on each iteration of the loop:
initial value: 3.4
taking a 2... new value: 1.4
taking a 1... new value: 0.3999999999999999 <-- uh oh
taking a 0.2... new value: 0.1999999999999999
taking a 0.1... new value: 0.0999999999999999
1 1 0 1 1
If you wish to keep floats for input and output, multiply by 100 on the way in (casting to an integer with int(round(change * 100))) and divide by 100 on the way out of your function, allowing you to operate on integers.
Additionally, without the 5p, 2p and 1p values you'll be restricted in the precision you can handle, so don't forget to add those. Multiplying all of the values in your code by 100 gives:
initial value: 340
taking a 200... new value: 140
taking a 100... new value: 40
taking a 20... new value: 20
taking a 20... new value: 0
1 1 0 2 0
Avoid deeply nested conditionals
Beyond the decimal issue, the nested conditionals make your logic very difficult to reason about. This is a common code smell; the more you can eliminate branching, the better. If you find yourself going beyond about 3 levels deep, stop and think about how to simplify.
Additionally, with a lot of branching and hand-typed code, it's very likely that a subtle bug or typo will go unnoticed or that a denomination will be left out.
Use data structures
Consider using dictionaries and lists in place of blocks like:
twocount=0
onecount=0
halfcount=0
pttwocount=0
ptonecount=0
which can be elegantly and extensibly represented as:
denominations = [200, 100, 50, 20, 10, 5, 2, 1]
used = {x: 0 for x in denominations}
In terms of efficiency, you can use math to handle amounts for each denomination in one fell swoop. Divide the remaining amount by each available denomination in descending order to determine how many of each coin will be chosen and subtract accordingly. For each denomination, we can now write a simple loop and eliminate branching completely:
for val in denominations:
    used[val] += amount // val
    amount -= val * used[val]
and print or display the final value of used, which for 278 pence looks like:
278 => {200: 1, 100: 0, 50: 1, 20: 1, 10: 0, 5: 1, 2: 1, 1: 1}
The end result of this is that we've reduced 27 lines down to 5 while improving efficiency, maintainability and dynamism.
By the way, if the denominations were a different currency, it's not guaranteed that this greedy approach will work. For example, if our available denominations are 25, 20 and 1 cents and we want to make change for 63 cents, the optimal solution is 6 coins (3x 20 and 3x 1). But the greedy algorithm produces 15 (2x 25 and 13x 1). Once you're comfortable with the greedy approach, research and try solving the problem using a non-greedy approach.
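If you want to explore the non-greedy route later, a small dynamic-programming sketch (my illustration, not part of the original answer) computes the minimum coin count for any denomination set:

def min_coins(amount, denominations):
    # best[a] = fewest coins summing to a (unlimited supply of each coin)
    INF = float('inf')
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return None if best[amount] == INF else best[amount]

print(min_coins(63, [25, 20, 1]))  # 6, whereas the greedy approach uses 15 coins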

Keep Getting ZeroDivisionError Whenever Using Modulo

So I am working on a problem which needs me to get the factors of a certain number. As always, I am using the modulo operator % to see whether a number is divisible by another number, i.e. whether the remainder is equal to zero. But whenever I try to do this I keep getting an error saying ZeroDivisionError. I tried writing the loop line as for potenial in range(number + 1): so that Python does not start counting from zero and instead starts counting from one, but this does not seem to work. Below is the rest of my code; any help will be appreciated.
def Factors(number):
    factors = []
    for potenial in range(number + 1):
        if number % potenial == 0:
            factors.append(potenial)
        return factors
In your for loop you are iterating from 0 (range() assumes the starting number to be 0 if only one argument is given) up to number. There is a ZeroDivisionError because you try to calculate number % 0 at the start of the for loop: when computing the modulo, Python tries to divide number by 0, causing the ZeroDivisionError. Here is the corrected code (with the indentation fixed as well):
def get_factors(number):
    factors = []
    for potential in range(1, number + 1):
        if number % potential == 0:
            factors.append(potential)
    return factors
However, there are better ways of calculating factors. For example, you can iterate only up to sqrt(n), where n is the number, and then compute "factor pairs": e.g. if 3 is a factor of 15, then 15 / 3, which is 5, is also a factor of 15.
I encourage you to try to implement a more efficient algorithm.
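A rough sketch of that factor-pair idea (my own illustration, not part of the original answer):

def get_factors_fast(number):
    # Check divisors only up to sqrt(number); each divisor found
    # also contributes its paired factor number // divisor.
    factors = set()
    divisor = 1
    while divisor * divisor <= number:
        if number % divisor == 0:
            factors.add(divisor)
            factors.add(number // divisor)  # e.g. 3 divides 15, so 15 // 3 == 5 is a factor too
        divisor += 1
    return sorted(factors)

print(get_factors_fast(15))  # [1, 3, 5, 15]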
Stylistic note: According to PEP 8, function names should be lowercase with words separated by underscores. Uppercase names generally indicate class definitions.

Python 3.3.2 - Calculating the Carrying of Numbers

Remember back in primary school where you learn to carry numbers?
Example:
  123
+ 127
-----
  250
You carry the 1 from 3+7 over to the next column, and change the first column to 0?
Anyway, what I am getting at is that I want to make a program that calculates how many carries the two numbers produce when added.
The way I am doing it is to convert both numbers to strings, split them into individual digits, and turn those back into integers. After that, I am going to run through adding one column at a time, and when a column's sum is 2 digits long, I will take 10 off it and move to the next column, calculating as I go.
The problem is, I barely know how to do that, and it also sounds pretty slow.
Here is my code so far.
numberOne = input('Number: ')
numberTwo = input('Number: ')
listOne = [int(i) for i in str(numberOne)]
listTwo = [int(i) for i in str(numberTwo)]
And then... I am at a loss for what to do. Could anyone please help?
EDIT:
Some clarification.
This should work with floats as well.
This only counts the number of times a carry happens, not how much is carried: 9+9+9 will be 1, and 9+9 will also be 1.
The numbers are not the same length.
>>> import itertools
>>> def countCarries(n1, n2):
...     n1, n2 = str(n1), str(n2)  # turn the numbers into strings
...     carry, answer = 0, 0       # no carry yet, and no carries counted yet
...     # consider the corresponding digits in reverse order,
...     # padding the shorter number with '0'
...     for one, two in itertools.zip_longest(n1[::-1], n2[::-1], fillvalue='0'):
...         carry = (int(one) + int(two) + carry) // 10  # carry out of this column (0 or 1)
...         answer += carry                              # count this column if it carried
...     return answer
...
>>> countCarries(127, 123)
1
>>> countCarries(127, 173)
2
