Excel: number entered vs number displayed vs number stored in memory

How does Excel determine what number to display? Specifically, the number of decimal places.
For example:
50.98, when stored as a single-precision float, is 50.979999542236328125.
50.979999 is also stored as the exact same single-precision float
(binary rep. 01000010010010111110101110000101, taken from here: https://www.h-schmidt.net/FloatConverter/IEEE754.html).
When I type 50.98 and 50.979999 into two cells, change the format to Number, and extend out the decimal places using the formatting button,
it represents them exactly as 50.98 and 50.979999, as I originally typed.
How is that working? Is Excel storing the exact text I typed and not (directly) storing the float data type at all?
If it stores the value as a double, how does it preserve the exact precision I originally typed in that case?
I can't find documentation outlining how this works.
Note it's not causing me any problems; I just need an explanation for the differences between how Excel displays these values and how it calculates with them.

it represents them exactly as 50.98 and 50.979999, as I originally typed.
Excel is padding with zeros after 15 significant decimal digits.
The internal number is encoded with high enough binary precision that, when output is limited to 15 significant decimal digits, the originally typed decimal values appear to be exactly that.
=2/3 is an informative example showing this limit and exposing the binary internals by carefully extracting one bit at a time.
As displayed in one cell, the decimal output rounds to 15 significant digits, padded with zeros after that:
0.66666666666666700000000
The formulas below do a binary conversion of =2/3 and form 0.10101010101010101010101010101010101010101010101010101₂ (53 bits), exactly what is expected if Excel uses binary64.
OP's observations are consistent with Excel using binary64 and rounding the output, as decimal text, to 15 significant digits.
Cell A3: =FLOOR(B2*A$1,1), cell B3: =B2*A$1 - A3; with the base 2 in A$1 and =2/3 in B2, filling these down extracts one bit per row.
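For comparison, here is a minimal Python sketch of the same bit-peeling technique (multiply by 2, take the floor, keep the remainder); it only illustrates the method, and assumes Python's float is the same binary64 Excel uses internally.

# Peel off the binary digits of the double nearest to 2/3, one per step,
# mirroring A3 = FLOOR(B2*A$1,1) and B3 = B2*A$1 - A3 with A$1 = 2.
x = 2 / 3
bits = []
for _ in range(53):          # a binary64 significand carries 53 bits
    x *= 2
    bit = int(x)             # FLOOR(..., 1) for non-negative values
    bits.append(str(bit))
    x -= bit                 # the remainder fed into the next row
print("0." + "".join(bits))  # 0.1010101...0101 (53 bits), as quoted above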

Hypothesis: When displaying a number, Excel first converts a number to a decimal numeral with at most 15 significant digits even if more are requested. If additional digits are requested, they are filled in as zeros. (In addition, Excel may apply other alterations depending on context.)
In Microsoft Excel 2008 for Mac, I entered =1+22*POWER(2,-52) in A1 and =1+23*POWER(2,-52) in A2. Using IEEE-754 binary64, these should generate the numbers 1.000000000000004884981308350688777863979339599609375 and 1.0000000000000051070259132757200859487056732177734375. Entering =A1-1 and =A2-1 in B1 and B2 and setting these to Number format with 30 decimal places shows “0.000000000000004884981308350690” and “0.000000000000005107025913275720”, which is consistent with IEEE-754 binary64. So we have some assurance the numbers above were indeed generated and stored in Excel.
Setting A1 and A2 to Number format with 20 decimal places shows “1.00000000000000000000” and “1.00000000000001000000”.
Clearly, if Excel were displaying the actual numbers with 20 decimal places, it would show “1.000000000000004885” and “1.000000000000005107”. It does not. The display we see is consistent with converting the numbers using 15 decimal digits (significant digits, not just those after the decimal point) and then padding with zeros.
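For what it's worth, the two constants can also be reproduced outside Excel. This is a minimal Python sketch (assuming Python's float is the same binary64 format); the fixed-point formatting prints the full decimal expansion of each stored double, matching the figures quoted above.

# 1 + 22*2**-52 and 1 + 23*2**-52 are both exactly representable in binary64.
x1 = 1 + 22 * 2.0 ** -52
x2 = 1 + 23 * 2.0 ** -52
print(f"{x1:.51f}")  # 1.000000000000004884981308350688777863979339599609375
print(f"{x2:.52f}")  # 1.0000000000000051070259132757200859487056732177734375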
Converting 50.98 to the IEEE-754 binary64 format yields 50.97999999999999687361196265555918216705322265625. Displaying this with 15 decimal digits yields 50.9800000000000.
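A similar sketch (same assumption about Python's float) shows the exact stored value for 50.98 and how it collapses back to the typed value once only 15 significant digits are kept.

# The double nearest to 50.98, printed in full, then rounded to
# 15 significant digits as Excel's display rule would round it.
x = 50.98
print(f"{x:.47f}")   # 50.97999999999999687361196265555918216705322265625
print(f"{x:.15g}")   # 50.98 -- any further requested decimals show as zeros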

Related

Excel 2007, inconsistent logical OR response

Regarding Excel 2007 (though it may pertain to other versions):
I want to apply Excel Data Validation to manually inputted data. In this particular case, the input is of the form NN.nnnnh, where the digit "h" is a "half-digit". That is, it can either be 0 or 5.
The spreadsheet converts land-surveying measurements, manually entered as feet, inches, and 16ths of an inch, into decimal feet.
The function of the half-digit is to allow optional higher precision, down to 1/32nd of an inch.
For example:
43.0913 is the raw entry for 43 feet, nine inches, and 13/16ths of an inch.
Now, by adding the half-digit in the fifth decimal place, a precision of 1/32" can be expressed.
For example:
27.08135 is the manual entry for 27 feet, 08 inches, and (13.5/16=) 27/32nds of an inch.
The raw input NN.nnnnh is decomposed and converted into feet as a decimal number, using Excel's TRUNC function. This manner of conversion is analogous to the more familiar conversion of angles entered as D˚M'S" into DD.dddddd.
I want to ensure that the 5th decimal place, manually entered, is ONLY zero or 5.
I can separately apply logical tests to determine whether the fifth-decimal entry is zero, or 5.
But when I combine those separate logical tests using the =IF(OR( structure, I get inconsistent results IFF the manually entered data has a whole-number part (i.e., in the NN.nnnnh format, any length of one foot or greater, entered as >= 1.00000). Unless I undertake the surveying of table-top architectural scale models, this is a serious limitation!
I have attached an example spreadsheet to illustrate the formulae used and the results. If anybody can shed some light on this, it would be appreciated.
Use MROUND to test if the number is the same:
=A1=MROUND(A1,0.00005)
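The same validation can be sketched outside Excel. Here is a hypothetical Python helper (the function name is made up for illustration) that checks the typed text with decimal arithmetic, so binary round-off cannot disturb the fifth decimal place:

from decimal import Decimal

def half_digit_ok(entry):
    # Shift the typed value five decimal places and check that the digit
    # landing in the units position is 0 or 5 (the optional "half-digit").
    shifted = Decimal(entry) * 10 ** 5
    return shifted % 10 in (Decimal(0), Decimal(5))

print(half_digit_ok("43.09130"))   # True  -- 13/16ths, no half-digit
print(half_digit_ok("27.08135"))   # True  -- 27/32nds via the half-digit
print(half_digit_ok("27.08137"))   # False -- fifth decimal is neither 0 nor 5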

Auto Generate Number Microsoft Excel

How can I auto-generate numbers from 0,000000000000000000000000000001 up to 0,999999999999999999999999999999 in Excel, with the cell format set to Number?
I've tried dragging with the mouse, but I guess that is terribly impractical.
You're out of luck.
Excel uses a 64-bit double-precision IEEE 754 floating-point type for numbers (along with some clever rounding tricks). That gives you 53 bits of precision, which loosely translates to 15 significant decimal figures of accuracy.
You will not be able to discriminate between numbers with such a small interval between them if the total range is between 0 and 1.
(There's also the small matter of there not being enough space in a workbook to represent all those numbers.)
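A small Python sketch (ordinary binary64 floats, the same type Excel uses) makes the point concrete: a step of 1e-30 simply vanishes once it is attached to a value of ordinary size.

# binary64 cannot hold 30 significant decimal digits, so a 1e-30 step is
# lost as soon as it is added to a number near 1.
step = 0.000000000000000000000000000001      # 1e-30
print(0.5 + step == 0.5)                         # True: the step disappears
print(0.999999999999999999999999999999 == 1.0)   # True: it rounds to exactly 1.0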

Why does excel AVERAGE change when changing the number format of cells?

I've got an Excel sheet which is exhibiting strange behaviour. I have 2 values, followed by an average of those 2 values - simple enough, right?
However, if I change the number format of the top cell from 2 decimal places to 30, I get a different result.
Can anyone explain this? When a cell is formatted to 2 decimal places, does that mean all formulae using this cell are rounding the value to 2 decimal places also?
Check your Excel options (Alt+F,T) for the Advanced ► When calculating this workbook ► Set precision as displayed option. When this is checked, calculations are automatically rounded off to the displayed number of decimals rather than the internal 15-digit floating-point precision. It also permanently truncates the raw value to the displayed precision, so I am unclear on how you are bouncing between the two average values.
The actual average of 1.6786427146 and 1.73 is 1.7043213573, which is 1.70 when only two decimals are displayed. It would only be through Precision as displayed that 1.6786427146 would actually be converted to 1.68, making the average 1.71.
Turn the option off and the underlying raw value will be stored with the full 15-digit floating-point precision. The same goes for all internal formula calculations.
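As a rough illustration (plain Python floats standing in for Excel's internal doubles, with the 2-decimal displays taken from the explanation above):

a, b = 1.6786427146, 1.73
# Precision as displayed OFF: the full stored values are averaged.
print((a + b) / 2)         # about 1.7043213573 -> shown as 1.70 with 2 decimals
# Precision as displayed ON with a 2-decimal format: the stored values are
# first cut to 1.68 and 1.73, so the average itself moves.
print((1.68 + 1.73) / 2)   # about 1.705        -> shown as 1.71 with 2 decimals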

Certain fractions being calculated in excel 2013

I'm creating a simple spreadsheet to calculate some betting odds and keep track of my wins/losses.
When I put fractional odds in one column, Excel converts some of them to whole numbers (i.e. the ones that are 1/1, 2/1, etc.), whereas it does not do this for odds like 4/11, 7/2, etc.
Is there a way of turning this off?
Please note that some of the top-heavy fractions (11/2, 11/10, etc.) get turned into mixed numbers such as 5 1/2, and I do not want this to occur either.
I've tried the Custom formatting of the cells but all of the denominators will inevitably be different, so having something like ??/28 won't work for me
EDIT:
This was solved using the custom format ??/?? and simply removing the # that was at the front of the custom cell format dialog box.
You simply need to change the cell format; you want to use ???/???. This will make Excel represent any decimal number with the closest fraction approximation it can find, using the specified numerator and denominator significant digits (the number of ? characters in the format string).
If the cell input is directly a fraction, it will be reduced where possible, but the fraction format is always kept.
Examples:
= .10 will be converted to 1/10
= 0.1231 will be converted to 81/658 (supposing the ???/??? format is used).
= 10/100 will be converted to 1/10
= 11/12 will remain as 11/12 as no reduction is possible.
= 1/1 will remain as 1/1
etc.
The behavior you are describing is because you are using one of Excel's default fraction formats, which are all similar to # ???/??? (note the leading #). With the leading #, the whole-number part is split out, so exact integers such as 1/1 or 2/1 are shown as plain whole numbers and top-heavy fractions become mixed numbers like 5 1/2.
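The "closest fraction with a bounded denominator" behaviour can be imitated with Python's fractions module; this is only an analogue of what a ???/??? format asks for, not Excel's own routine.

from fractions import Fraction

# Closest fraction whose denominator fits in three digits, roughly what a
# ???/??? custom format requests.
for x in (0.10, 0.1231, 10 / 100, 11 / 12):
    print(x, "->", Fraction(x).limit_denominator(999))
# 0.1    -> 1/10
# 0.1231 -> 81/658
# 0.1    -> 1/10
# 0.9166666666666666 -> 11/12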
You could use Text format for the cells with the odds, and then use the VALUE function in any calculations you need to do with them.

Stata: Variable type (8-digit numbers)

In Stata I read two 8-digit numbers into a variable and export it to Excel:
input a
88888888
99999999
end
export excel a.xlsx, replace
Then, if I open the Excel file, the numbers are shown as 8.89e+07 and 1.00e+08. How can I restore the original numbers? Do I have to do this in Excel? Is there any way to prevent Stata from converting those numbers to "scientific" format?
The effect of your input command is to read those numbers into variables of float type. But there aren't enough bits in a float to hold 99999999 exactly. This is well documented.
See e.g. the help for data types:
"floats have about 7 digits of accuracy; the magnitude of the number does not matter. Thus, 1234567 can be stored perfectly as a float, as can 1234567e+20. The number 123456789, however, would be rounded to 123456792. In general, this rounding does not matter.
If you are storing identification numbers, the rounding could matter. If the
identification numbers are integers and take 9 digits or less, store them as longs;
otherwise, store them as doubles. doubles have 16 digits of accuracy."
So you degraded your data by using an inappropriate data type. That is the issue, not export excel.
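The rounding is easy to reproduce. A short Python sketch (the struct module standing in for Stata's float storage) round-trips both values through IEEE-754 single precision:

import struct

def as_float32(x):
    # Pack to a 4-byte IEEE-754 single and unpack it again.
    return struct.unpack("f", struct.pack("f", x))[0]

print(as_float32(88888888))   # 88888888.0  -- still exact
print(as_float32(99999999))   # 100000000.0 -- rounded; the trailing digits are lost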
