I am trying to solve the fractional knapsack problem.
I have to find the items with the maximum calories per weight, and fill my bag up to a defined weight limit with maximum calories.
Though the algorithm is correct, I can't get the right result because of Python's division behavior.
When I try to find items with max calories per weight (python3)
print((calories_list[i] / weight_list[i]) * 10)
# calories_list[i] is 500 and weight_list[i] is 30 (both integers)
166.66666666666669
On the other hand, I opened a terminal and typed python3:
>>> 500/30
16.666666666666668
# when multiplied by 10, it should be 166.66666666666668, not
# 166.66666666666669
As you can see, it gives different results.
Most of all, the important thing is that the real answer is
500/30 = 16.6666666667
I've been stuck here for two days, please help me.
Thank you.
As explained in the Python FAQ:
The float type in CPython uses a C double for storage. A float object’s value is stored in binary floating-point with a fixed precision (typically 53 bits) and Python uses C operations, which in turn rely on the hardware implementation in the processor, to perform floating-point operations. This means that as far as floating-point operations are concerned, Python behaves like many popular languages including C and Java.
You could use the decimal module as an alternative:
>>> from decimal import Decimal
>>> Decimal(500)/Decimal(30)
Decimal('16.66666666666666666666666667')
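To see why the last digit changes, note that 500/30 is already rounded to the nearest double, and multiplying that by 10 rounds a second time; it is not a pure digit shift. A short demonstration, reusing the decimal suggestion above:

```python
from decimal import Decimal, getcontext

q = 500 / 30
print(q)       # 16.666666666666668 -- nearest double to 500/30
print(q * 10)  # 166.66666666666669 -- the multiply rounds a second time

# With decimal you choose the precision explicitly:
getcontext().prec = 12
print(Decimal(500) / Decimal(30))  # 16.6666666667
```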
I created a simple function to compute e to the power pi, explained here:
import math

def e_to_power_pi(number):
    return (1 + (1 / number)) ** (number * math.pi)
From the look of it, it's clearly a simple piece of code. But look at the difference in output between these two calls:
Example one:
e_to_power_pi(1000000000000000)
output:
32.71613881872869
Example Two:
e_to_power_pi(10000000000000000)
output:
1.0
Upon tearing down the code, I learnt that the 1.0 comes from the
1 + (1/number) portion of the code above.
When I tore it down further, I learnt that 1/10000000000000000 outputs the correct answer, 1e-16, as it should.
But when I add 1 to that result it returns 1.0 instead of 1.0000000000000001.
I presumed that some default rounding in Python must be changing the value.
I decided to use round(<float>, 64) (where <float> is any computation taking place in the code above) to try to get 64 digits after the decimal point. But I was still stuck with the same result, 1.0, when the addition was performed.
Can someone guide me or point me to the direction where I can learn or further read about it?
You are using the double-precision binary floating-point format, with 53 bits of significand precision, which is not quite enough to represent your fraction:
10000000000000001/10000000000000000 = 1.0000000000000001
See IEEE 754 double-precision binary floating-point format: binary64
Mathematica can operate in precisions higher than the architecturally imposed machine precision.
See Wolfram Language: MachinePrecision
A check in Mathematica shows you would need a significand precision higher than 53 bits to obtain a result other than 1: N numericises the fractional result to the requested precision. Machine precision is the default; higher-precision calculations are done in software.
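The cutoff you hit is machine epsilon: doubles near 1.0 are spaced about 2**-52 apart, so adding anything below half that spacing, such as 1e-16, rounds back to exactly 1.0. If you need the tiny term kept exactly, Python's standard fractions module can do the addition without rounding; a minimal sketch:

```python
import sys
from fractions import Fraction

print(sys.float_info.epsilon)  # 2.220446049250313e-16, spacing of doubles near 1.0
print(1 + 1e-16 == 1.0)        # True: 1e-16 is below half the spacing

# Exact rational arithmetic keeps the tiny term:
x = 1 + Fraction(1, 10**16)
print(x)         # 10000000000000001/10000000000000000
print(x == 1)    # False
print(float(x))  # 1.0 -- converting back to float loses it again
```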
Consider the following terminating decimal numbers.
3.1^2 = 9.61
3.1^4 = 92.3521
3.1^8 = 8528.91037441
The following shows how Mathematica treats these expressions
In[1]:= 3.1^2
Out[1]= 9.61
In[2]:= 3.1^4
Out[2]= 92.352
So far so good, but
In[3]:= 3.1^8
Out[3]= 8528.91
doesn't provide enough precision.
So let's try N[], NumberForm[], and DecimalForm[] with a precision of 12
In[4]:= N[3.1^8,12]
Out[4]= 8528.91
In[5]:= NumberForm[3.1^8,12]
Out[5]= 8528.91037441
In[6]:= DecimalForm[3.1^8,12]
Out[6]= 8528.91037441
In this case DecimalForm[] and NumberForm[] work as expected, but N[] only provided the default precision of 6, even though I asked for 12. So DecimalForm[] or NumberForm[] seems to be the way to go if you want exact results when the inputs are terminating decimals.
Next consider rational numbers with infinite repeating decimals like 1/3.
In[7]:= N[1/3,20]
Out[7]= 0.33333333333333333333
In[8]:= NumberForm[1/3, 20]
Out[8]=
1/3
In[9]:= DecimalForm[1/3, 20]
Out[9]=
1/3
Unlike the previous case, N[] seems to be the proper way to go here, whereas NumberForm[] and DecimalForm[] ignore the requested precision and return the exact rational 1/3.
Finally consider irrational numbers like Sqrt[2] and Pi.
In[10]:= N[Sqrt[2],20]
Out[10]= 1.4142135623730950488
In[11]:= NumberForm[Sqrt[2], 20]
Out[11]=
sqrt(2)
In[12]:= DecimalForm[Sqrt[2], 20]
Out[12]=
sqrt(2)
In[13]:= N[π^12,30]
Out[13]= 924269.181523374186222579170358
In[14]:= NumberForm[Pi^12,30]
Out[14]=
π^12
In[15]:= DecimalForm[Pi^12,30]
Out[15]=
π^12
In these cases N[] works, but NumberForm[] and DecimalForm[] do not. However, note that N[] switches to scientific notation at π^13, even with a larger precision. Is there a way to avoid this switch?
In[16]:= N[π^13,40]
Out[16]= 2.903677270613283404988596199487803130470*10^6
So there doesn't seem to be a consistent way to request decimal numbers with a given precision while avoiding scientific notation. Sometimes N[] works, other times DecimalForm[] or NumberForm[] works, and at other times nothing seems to work.
Have I missed something or are there bugs in the system?
It isn't a bug: the system is purposely designed to behave this way. Precision is limited by the precision of your machine, your configuration of Mathematica, and the algorithm and performance constraints of the calculation.
The documentation for N[expr, n] states it attempts to give a result with n‐digit precision. When it cannot give the requested precision it gets as close as it can. DecimalForm and NumberForm work the same way.
https://reference.wolfram.com/language/ref/N.html explains the various cases behind this:
Unless numbers in expr are exact, or of sufficiently high precision, N[expr,n] may not be able to give results with n‐digit precision.
N[expr,n] may internally do computations to more than n digits of precision.
$MaxExtraPrecision specifies the maximum number of extra digits of precision that will ever be used internally.
The precision n is given in decimal digits; it need not be an integer.
n must lie between $MinPrecision and $MaxPrecision. $MaxPrecision can be set to Infinity.
n can be smaller than $MachinePrecision.
N[expr] gives a machine‐precision number, so long as its magnitude is between $MinMachineNumber and $MaxMachineNumber.
N[expr] is equivalent to N[expr,MachinePrecision].
N[0] gives the number 0. with machine precision.
N converts all nonzero numbers to Real or Complex form.
N converts each successive argument of any function it encounters to numerical form, unless the head of the function has an attribute such as NHoldAll.
You can define numerical values of functions using N[f[args]]:=value and N[f[args],n]:=value.
N[expr,{p,a}] attempts to generate a result with precision at most p and accuracy at most a.
N[expr,{Infinity,a}] attempts to generate a result with accuracy a.
N[expr,{Infinity,1}] attempts to find a numerical approximation to the integer part of expr.
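For comparison, the requested-precision idea also exists in Python's standard library: the decimal module carries a context precision that every operation respects. A rough analogue of N[1/3, 20] (purely illustrative, not a Wolfram API):

```python
from decimal import Decimal, getcontext

getcontext().prec = 20          # request 20 significant digits
print(Decimal(1) / Decimal(3))  # 0.33333333333333333333
```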
So I was trying to get e^(pi*I) = -1, but Python 3 gives me a different, weird result:
print(cmath.exp(cmath.pi * cmath.sqrt(-1)))
Result:
(-1+1.2246467991473532e-16j)
This should in theory return -1, no?
(Partial answer to the revised question.)
In theory, the result should be -1, but in practice the theory is slightly wrong.
The cmath module uses floating-point variables to do its calculations: one float value for the real part of a complex number and another float value for the imaginary part. Therefore the module inherits the limitations of floating-point math. For more on those limitations, see the canonical question Is floating point math broken?.
In brief, floating-point values are usually mere approximations of real values. The value of cmath.pi is not actually pi; it is just the best approximation that fits in the floating-point format used by most computers. So you are not really calculating e^(pi*I), just an approximation of it. The returned value has the exact, correct real part, -1, which is somewhat surprising to me. The imaginary part "should be" zero, but the actual result agrees with zero to better than 15 decimal places, which is the usual precision of double-precision floating point.
If you require exact answers, you should not be working with floating point values. Perhaps you should try an algebraic solution, such as the sympy module.
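If exactness isn't required, the practical alternative is to treat the tiny imaginary part as the rounding noise it is; a sketch using only the standard library:

```python
import cmath

z = cmath.exp(cmath.pi * 1j)  # e**(i*pi) in double precision
print(z)                      # (-1+1.2246467991473532e-16j)

# The imaginary residue is roughly pi * 2**-53: one rounding error in pi.
print(cmath.isclose(z, -1))   # True within the default relative tolerance
```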
(The following was my original answer, which applied to the previous version of the question, where the result was an error message.)
The error message shows that you did not type what you thought you typed. Instead of cmath.exp on the outside of the expression, you typed math.exp. The math version of the exponential function expects a float value. You gave it a complex value (cmath.pi * cmath.sqrt(-1)) so Python thought you wanted to convert that complex value to float.
When I type the expression you give at the top of your question, with the cmath properly typed, I get the result
(-1+1.2246467991473532e-16j)
which is very close to the desired value of -1.
Found the answer.
First of all, Python 3 cannot represent irrational numbers exactly, so e^(pi*I) will not return exactly -1, as per this answer.
Secondly, Python 3 returns any complex number as a Cartesian pair (real + imaginary parts).
The fix was to extract the real part of the number:
print(cmath.exp(cmath.pi * cmath.sqrt(-1)).real)
In Python 3 (I am using 3.6), math.floor was changed to return integral (int) values.
That created the following problem for me. Suppose that we input a large float
math.floor(4.444444444444445e+85)
The output in this case is
44444444444444447395279681404626730521364975775215375673863470153230912354225773084672
In Python2.7 the output used to be 4.444444444444445e+85.
Question 1: Is the output in 3.6 reproducible? In other words, what is it? Computing it several times on different computers gave me the same result, so I guess it is a value depending only on the input 4.444444444444445e+85. My guess is that it is the floor of the binary representation of that float. The factorization of the output is
2^232 × 3 × 17 × 31 × 131 × 1217 × 1933 × 13217
where the factor 2^232 is comparable to the 10^70 in the scientific notation, but I am not completely sure.
Question 2: I think I know how to take the float 4.444444444444445e+85, extract its significand and exponent, and produce myself the integral value 4444444444444445*10**70, or the float 4.444444444444445e+85, which in my opinion seems a more honest value for the floor of float(4.444444444444445e+85). Is there a neat way to recover this (allow me to call it) honest floor?
OK, I retract calling the floor of the decimal representation "honest". Since the computer stores the numbers in binary, it is fair to call the output computed from the binary representation honest (assuming my guess for Question 1 is correct).
Displaying the output in hex should be helpful:
>>> import math
>>> math.floor(4.444444444444445e+85)
44444444444444447395279681404626730521364975775215375673863470153230912354225773084672
>>> hex(_)
'0x16e0c6d18f4bfb0000000000000000000000000000000000000000000000000000000000'
Note all the trailing zeroes! On almost all platforms, Python floats are represented by the hardware with a significand containing 53 bits, and a power-of-2 exponent. And, indeed,
>>> (0x16e0c6d18f4bfb).bit_length() # the non-zero part does have 53 bits
53
>>> 0x16e0c6d18f4bfb * 2**232 # and 232 zero bits follow it
44444444444444447395279681404626730521364975775215375673863470153230912354225773084672
So the integer you got back is, mathematically, exactly equal to the float you started with. Another way to see that:
>>> (4.444444444444445e85).hex()
'0x1.6e0c6d18f4bfbp+284'
If you want to work with decimal representations instead, see the docs for the decimal module.
Edit: as discussed in comments, perhaps what you really want here is simply
float(math.floor(x))
That will reproduce the same result Python 2 gave for
math.floor(x)
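Both floors from the discussion can be sketched in Python, reading "honest floor" as the floor of the float's shortest decimal repr (my interpretation of Question 2):

```python
import math
from decimal import Decimal

x = 4.444444444444445e+85

# Exact integer value of the binary float: significand * 2**exponent.
exact = math.floor(x)
print(exact == 0x16e0c6d18f4bfb * 2**232)  # True

# "Honest" floor: take the shortest decimal repr literally, then floor it.
honest = int(Decimal(repr(x)))
print(honest == 4444444444444445 * 10**70)  # True

# Rounding the exact big integer back to float recovers x, as in Python 2.
print(float(exact) == x)  # True
```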
I am trying to generate random numbers in Excel that are normally distributed with a mean of 20 and standard deviation of 2, for a sample size of 225, but I am getting numbers with decimals (like 17.5642, 16.337).
If I round them off, the distribution is no longer normal. I used the Excel formula =NORMINV(RAND(),20,2) to generate the numbers. Please suggest how to get whole numbers that are still (approximately) normally distributed.
As @circular-ruin has observed, what you are asking for, strictly speaking, doesn't make sense.
But perhaps you can run the Central Limit Theorem backwards. The CLT is often used to approximate discrete distributions by normal distributions; you can use it in reverse to approximate a normal distribution by a discrete one.
If X is binomial with parameters n and p, then it is a standard result that the mean of X is np and the variance of X is np(1-p). Elementary algebra yields that such an X has mean 20 and variance 4 (hence standard deviation 2) if and only if n = 25 and p = 0.8. Thus, if you simulate a Bin(25, 0.8) random variable, you will get integer values which will be approximately N(20, 4). This seems a little more principled than simulating N(20, 4) directly and then rounding. It still isn't normal, but you really need to drop that requirement if you want your values to be integers.
To simulate a bin(25,0.8) random variable in Excel, just use the formula
=BINOM.INV(25,0.8,RAND())
With just 225 observations, the results would probably pass a chi-squared goodness-of-fit test for N(20, 4) (though the right tail would be under-represented).
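Outside Excel, the same trick is easy to try; a small Python sketch (the helper name binomial_sample is mine) drawing 225 integer values with mean 20 and standard deviation 2:

```python
import random

n, p = 25, 0.8  # chosen so that n*p = 20 and n*p*(1-p) = 4, i.e. sd = 2

def binomial_sample(n, p, rng=random.random):
    # One Binomial(n, p) draw: count successes in n Bernoulli(p) trials.
    return sum(rng() < p for _ in range(n))

samples = [binomial_sample(n, p) for _ in range(225)]
mean = sum(samples) / len(samples)
print(min(samples), max(samples))  # integers between 0 and 25
print(round(mean, 1))              # typically close to 20
```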