Different booleans output at the same value=0 when starting from different values in a while loop [duplicate] - python-3.x

Duplicates:
How is floating point stored? When does it matter?
Is floating point math broken?
Why does the following occur in the Python Interpreter?
>>> 0.1+0.1+0.1-0.3
5.551115123125783e-17
>>> 0.1+0.1
0.2
>>> 0.2+0.1
0.30000000000000004
>>> 0.3-0.3
0.0
>>> 0.2+0.1
0.30000000000000004
>>>
Why doesn't 0.2 + 0.1 = 0.3?

That's because .1 cannot be represented exactly in a binary floating point representation. If you try
>>> .1
0.1
Python responds with 0.1 because it only displays a limited number of digits, but the stored value already contains a small round-off error. The same happens with .3, but when you issue
>>> .2 + .1
0.30000000000000004
then the round-off errors in .2 and .1 accumulate. Also note:
>>> .2 + .1 == .3
False
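To see the round-off error directly, you can ask Python for more digits than it normally prints (this check is an addition to the answer, using only built-in formatting):
>>> format(.1, '.20f')
'0.10000000000000000555'
>>> format(.3, '.20f')
'0.29999999999999998890'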

Not all floating point numbers are exactly representable on a finite machine. Neither 0.1 nor 0.2 is exactly representable in binary floating point, and neither is 0.3.
A number is exactly representable if it is of the form a/b, where a and b are integers and b is a power of 2. The data type also needs a significand large enough to store the numerator.
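A quick way to explore this in Python (an added illustration, using the standard fractions module) is to ask for the exact rational value a float actually stores:
>>> from fractions import Fraction
>>> Fraction(0.5)    # denominator is a power of 2, so 0.5 is exact
Fraction(1, 2)
>>> Fraction(0.1)    # 0.1 is silently replaced by the nearest representable value
Fraction(3602879701896397, 36028797018963968)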
I recommend Rob Kennedy's useful webpage as a nice tool to explore representability.

Related

Python returns the wrong result when I multiply a float with an int

I have a multiplication in Python 3.7.3.
When I run 0.58 * 100 it returns 57.99999999999999.
Then I found that Java gives the same result, but C can return the right number. I don't know what is happening with them. Sorry if this looks basic.
It's actually not the wrong answer, just an unexpected one.
If we think a bit about the problem: there are infinitely many numbers between 0 and 1, so you cannot represent all of them with a finite number of bytes, as infinitely many numbers are more than any finite number of bit patterns can distinguish. Some numbers simply can't be represented (in fact, most of the numbers between 0 and 1 cannot be).
Following the floating point standard (IEEE 754), 0.58 is really 0.57999999999999996003197111349436454474925994873046875, which is the closest number to 0.58 that can be represented with a 64-bit floating point value.
Check it with Python:
>>> from decimal import Decimal
>>> Decimal(0.58)
Decimal('0.57999999999999996003197111349436454474925994873046875')
If you want 58.0 you can quantize it to two decimals with the Decimal class.
>>> Decimal(100 * 0.58).quantize(Decimal('.01'))
Decimal('58.00')
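As for C "returning the right number": C stores the same 64-bit double, so most likely the program printed it with printf's %f, whose default precision of six decimal places rounds the error away. You can mimic that in Python (this comparison is an addition, not part of the original answer):
>>> print(f"{0.58 * 100:.6f}")    # same rounding as C's default %f
58.000000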

How can 0.2 + 0.1 be equal to 0.3 in Excel?

I understand perfectly why 0.1 + 0.2 is not equal to 0.3 due to floating point. In most programming languages, 0.1 + 0.2 == 0.3 is False.
But in Excel if(0.1 + 0.2 == 0.3; 1; 0) gives 1
The reason this happens in Excel is that Excel only keeps track of 15 digits of precision. Floating point math for 0.2 + 0.1 results in 0.30000000000000004, and that trailing 4 is the 17th digit. Excel simply truncates everything after the 15th digit and is left with 0.300000000000000, which equals 0.3.
See here for more info: https://en.wikipedia.org/wiki/Numeric_precision_in_Microsoft_Excel
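A rough way to imitate Excel's comparison in Python (a sketch of the idea, not how Excel works internally) is to round to 15 significant digits before comparing:
>>> float(f"{0.1 + 0.2:.15g}") == 0.3
True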

Normalized values, when summed are more than 1

I have two files:
File 1:
TOPIC:topic_0 1294
aa 234
bb 123
TOPIC:topic_1 2348
aa 833
cc 239
bb 233
File 2:
0.1 0.2 0.3 0.4
This is just the format of my files. Basically, the second column (omitting the first "TOPIC" line), summed for each topic, adds up to 1, because the values are normalized. Similarly, the values in file 2 are normalized and so also sum to 1.
I perform multiplication of the values from file 1 and 2. The resulting output file looks like:
aa 231
bb 379
cc 773
The second column of the output file, when summed, should give 1. But a few files have values a little over 1, like 1.1 or 1.00038. How can I get exactly 1 for the output file? Is there some rounding I should do, or something else?
PS: The formats are just examples; the actual values and words are different. This is just for understanding purposes. Please help me sort this out.
Python stores floating point numbers in base-2 (binary).
https://docs.python.org/2/tutorial/floatingpoint.html
This means that some decimals could be terminating in base-10, but are repeating in base-2, hence the floating-point error when you add them up.
This gets into some math, but imagine trying to express the value 2/6 in base-10. When you eliminate the common factors from the numerator and denominator, it's 1/3.
That's 0.333333333... repeating forever. I'll explain why in a moment, but for now, understand that if you only store the first 16 digits of the decimal, for example, then when you multiply the number by 3 you won't get 1; you'll get 0.9999999999999999, which is a little off.
This rounding error occurs whenever there's a repeating decimal.
Here's why your numbers terminate in base-10 but repeat in base-2.
Base-10 prime factors out to 2 * 5. Therefore, for any ratio to terminate in base-10, its denominator must prime factor to a combination of 2's and 5's, and nothing else.
Now let's get back to Python. Every float is stored as binary. This means that for a ratio's "decimal" expansion to terminate, the denominator must prime factor to only 2's and nothing else.
Your numbers repeat in base-2.
1/10 has (2*5) in the denominator.
2/10 reduces to 1/5 which still has five in the denominator.
3/10... well you get the idea.
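If the goal is simply to make the column sum to exactly 1, a common workaround (a suggestion, assuming ordinary Python floats) is to sum with math.fsum, which tracks the round-off error that plain sum() accumulates:
>>> sum([0.1] * 10)
0.9999999999999999
>>> import math
>>> math.fsum([0.1] * 10)
1.0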

How much precision does an xxx-bit binary fixed point representation have?

I am trying to measure how much accuracy I get when converting to a binary fixed point representation.
First I tried 0.9375, and I got the binary 0.1111.
Second I tried 0.9377, and I also got the binary 0.1111.
There is nothing different between them.
How can I solve this problem?
Is there any other way to do the conversion?
To clarify, here is one more example:
If I convert 3.575 to binary, I get 11.1001.
But if I convert that back to decimal, I get 3.5625, which is quite different from the original value.
From a similar question we have:
Base 2: two's complement, 4 integer bits, 4 fractional bits
-2^3 2^2 2^1 2^0 . 2^-1 2^-2 2^-3 2^-4
-8 4 2 1 . 0.5 0.25 0.125 0.0625
With only 4 fractional bits, the represented number only has an accuracy of 0.0625.
3.575 could be 11.1001 = 2 + 1 + 0.5 + 0.0625 => 3.5625, too low
or 11.1010 = 2 + 1 + 0.5 + 0.125 => 3.625, too high
This should indicate that 4 bits is just not enough to represent "3.575" exactly.
To figure out how many bits you would need, multiply by powers of 2 until you get an integer. For "3.575" it is rather a lot (50 fractional bits). (Strictly speaking, the decimal value 3.575 = 143/40 has a factor of 5 in its denominator, so it never terminates in binary at all; the 50 bits describe the 64-bit double that actually stores "3.575".)
3.575 * 2^2 = 14.3 (not integer)
3.575 * 2^20 = 3748659.2
3.575 * 2^30 = 3838627020.8
3.575 * 2^40 = 3930754069299.2 (not integer)
3.575 * 2^50 = 4025092166962381.0 (INTEGER) we need 50 bits!
3.575 => 11.10010011001100110011001100110011001100110011001101
Multiplying by a power of two shifts the word to the left (<<). When there are no fractional bits left, the number is fully represented, and the number of shifts is the number of fractional bits required.
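The same doubling procedure is easy to automate in Python (a small sketch of the method above, assuming the value is an ordinary 64-bit float):
>>> x, bits = 3.575, 0
>>> while x != int(x):
...     x *= 2    # exact: doubling a float only changes its exponent
...     bits += 1
...
>>> bits
50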
For fixed point you are better off thinking about the level of precision your application requires.

Limit Decimal Places in Generated Numbers in RStudio

I am a beginner in programming in general and in R specifically.
I would like to generate a set of random numbers from a normal distribution, but limit these numbers to only 2 decimal places.
I have been using x1 <- runif() to generate my numbers.
Can I add something to it so that I only get results rounded to 2 decimal places?
You can limit the decimal places using the round() function. (Note that runif() draws from a uniform distribution; since you mention a normal distribution, you may want rnorm() instead, wrapped in the same round() call.)
If I understand your question correctly, this should do the trick:
x1 <- round(runif(5, min = 0, max = 1), digits = 2)
x1
The results, which will be different each time, are:
[1] 0.55 0.55 0.75 0.85 0.13
