Float to String: What is an Exponent part?

I've written a small function in C which does almost the same job as the standard function `fcvt'. As you may know, this function takes a float/double and produces a string representing the number in ANSI characters. Everything works ;-)
For example, for the number 1.33334 my function gives me the string "133334" and sets a special integer variable `decimal_part'; in this example it will be 1, which means only one digit comes before the decimal point and everything else is the fraction.
Now I'm curious about what the standard C function `printf' does. It can take %a or %e as the format specifier. Let me quote the documentation for %e (link omitted):
"double" argument is output in scientific notation
[-]m.nnnnnne+xx
... The exponent always contains two digits.
It said: "The exponent always contains two digits". But what is an Exponent? This is the main question. And also, how to get this 'exponent' from my function above or from `fcvt'.

The notation might be better explained if we expand the e:
[-]m.nnnnnn * (10^xx)
So you have one digit of m (from 0 to 9, but it will only ever be 0 if the entire value is 0), and several digits of n. I guess it might be best to show with examples:
1 = 1.0000 * 10^0 = 1e0
10 = 1.0000 * 10^1 = 1e1
10000 = 1.0000 * 10^4 = 1e4
0.1 = 1.0000 * 10^-1 = 1e-1
1,419 = 1.419 * 10^3 = 1.419e3
0.00000123 = 1.23 * 10^-6 = 1.23e-6
You can look up scientific notation on Google; it is useful for expressing very large or very small numbers, for example 1232100000000000000 would be 1.2321e18.
In C, I think you can actually extract the exponent from the top 12 bits (the first being the sign bit, which you will have to ignore). See: IEEE 754-1985 Floating Point
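For what it's worth, here is a minimal sketch of that idea (in Python rather than C, purely for illustration): it pulls the 11 exponent bits out of a double's bit pattern. Note that this is the power-of-two exponent stored inside the double, not the power-of-ten exponent that %e prints.
import struct

def biased_exponent(x):
    # Reinterpret the 64-bit double as an integer, drop the sign bit,
    # and keep the next 11 bits (the biased exponent field).
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return (bits >> 52) & 0x7FF

print(biased_exponent(1.33334) - 1023)   # 0, because 1.33334 = 1.xxx... * 2^0
print(biased_exponent(300.0) - 1023)     # 8, because 300 = 1.171875 * 2^8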

The exponent is the power that 10 is raised to; the result is then multiplied by the significand (the m.nnnnnn part).
Scientific notation is explained on Wikipedia: http://en.wikipedia.org/wiki/Scientific_notation
m.nnnnnne+xx is logically equal to m.nnnnnn * 10 ^ +xx

In scientific notation, the exponent is the power XX that ten is raised to, so 1234.5678 can be represented as 1.2345678E03, where the normalized form is multiplied by 10^3 to get the "real" value.

400 = 4 * 10 ^ 2
2 is the exponent.

If you write a number in scientific notation then the exponent is part of that notation.
You can see a full description here: http://en.wikipedia.org/wiki/Scientific_notation, but basically it's just another way to write a number, typically used for very large or very small numbers.
Say you have the number 300, that is equal to 3 * 100, or 3 * 10^2 in scientific notation.
If you use %e it will be printed as 3.0e+02
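To connect this back to the original question: if your function (or fcvt/ecvt) gives you the digit string together with the position of the decimal point, then for a nonzero value the scientific-notation exponent is simply that position minus one. A minimal sketch (in Python, purely for illustration; the helper name sci_exponent is made up, and it assumes your decimal_part plays the same role as fcvt's decpt):
def sci_exponent(digits, decpt):
    # Exponent of the d.ddddd e<exp> form for a nonzero value whose
    # digit string starts with a nonzero digit.
    return decpt - 1

print(sci_exponent("133334", 1))   # 0   -> 1.33334e0
print(sci_exponent("4", 3))        # 2   -> 4e2
print(sci_exponent("123", -5))     # -6  -> 1.23e-6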


Why do VBA and Excel disagree on whether two cells are equal? [duplicate]

I am trying to compare two cells in a table:
The column "MR" is calculated using the formula =ABS([#Value]-A1) to determine the moving range of the column "Value". The values in the "Value" column are not rounded. The highlighted cells in the "MR" column (B3 and B4) are equal. I can enter the formula =B3=B4 into a cell and Excel says that B3 is equal to B4.
But when I compare them in VBA, VBA says that B4 is greater than B3. I can select cell B3 and enter the following into the Immediate Window ? selection.value = selection.offset(1).value. That statement evaluates to false.
I tried removing the absolute value from the formula thinking that might have had something to do with it, but VBA still says they aren't equal.
I tried adding another row where Value=1.78 so MR=0.18. Interestingly, the MR in the new row (B5) is equal to B3, but is not equal to B4.
I then tried increasing the decimal of A4 to match the other values, and now VBA says they are equal. But when I added the absolute value back into the formula, VBA again says they are not equal. I removed the absolute value again and now VBA is saying they are not equal.
Why is VBA telling me the cells are not equal when Excel says they are? How can I reliably handle this situation through VBA going forward?
The problem is that the IEEE 754 Standard for Floating-Point Arithmetic is imprecise by design. Virtually every programming language suffers because of this.
IEEE 754 is an extremely complex topic and when you study it for months and you believe you understand fully, you are simply fooling yourself!
Accurate floating point value comparisons are difficult and error prone. Think long and hard before attempting to compare floating point numbers!
The Excel program gets around the issue by cheating on the application side. VBA on the other hand follows the IEEE 754 spec for Double Precision (binary64) faithfully.
A Double value is represented in memory using 64 bits. These 64 bits are split into three distinct fields that are used in binary scientific notation:
The SIGN bit (1 bit to represent the sign of the value: pos/neg)
The EXPONENT (11 bits, biased in value by +1023)
The MANTISSA (53 bits, 52 bits stored + 1 bit implied)
The mantissa in this system leverages the fact that every normalized binary number begins with a leading 1, so that 1 is not stored in the bit pattern. It is implied, increasing the mantissa precision to 53 bits for normal values.
The math works like this: Stored Value = SIGN VALUE * 2^UNBIASED EXPONENT * MANTISSA
Note that a stored value of 1 for the sign bit denotes a negative SIGN VALUE (-1) while a 0 denotes a positive SIGN VALUE (+1). The formula is SIGN VALUE = (-1) ^ (sign bit).
The problem always boils down to the same thing.
The vast majority of real numbers cannot be expressed precisely within this system, which introduces small rounding errors that propagate like weeds.
It may help to think of this system as a grid of regularly spaced points. The system can represent ONLY the point-values and NONE of the real numbers between the points. All values assigned to a float will be rounded to one of the point-values (usually the closest point, but there are modes that enforce rounding upwards to the next highest point, or rounding downwards). Conducting any calculation on a floating-point value virtually guarantees the resulting value will require rounding.
To accent the obvious, there are an infinite number of real numbers between adjacent representable point-values on this grid; and all of them are rounded to the discrete grid-points.
To make matters worse, the gap size doubles at every power of two as the grid expands away from true zero (in both directions). For example, the gap between grid points for values in the range 2 to 4 is twice as large as for values in the range 1 to 2. For values of large enough magnitude the grid gap becomes massive, but close to true zero it is minuscule.
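To see that gap-doubling concretely, here is a small illustrative check (Python is used only because math.ulp, available in Python 3.9+, reports the distance from a value to the next representable double):
import math

for x in [1.0, 2.0, 4.0, 8.0, 1e16]:
    print(x, math.ulp(x))   # the gap doubles at each power of two; by 1e16 it has grown to 2.0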
With your example numbers...
1.24 is represented with the following binary:
Sign bit = 0
Exponent = 01111111111
Mantissa = 0011110101110000101000111101011100001010001111010111
The Hex pattern over the full 64 bits is precisely: 3FF3D70A3D70A3D7.
The precision is derived exclusively from the 53-bit mantissa and the exact decimal value from the binary is:
0.2399999999999999911182158029987476766109466552734375
In this instance a leading integer of 1 is implied by the hidden bit associated with the mantissa and so the complete decimal value is:
1.2399999999999999911182158029987476766109466552734375
Now notice that this is not precisely 1.24 and that is the entire problem.
Let's examine 1.42:
Sign bit = 0
Exponent = 01111111111
Mantissa = 0110101110000101000111101011100001010001111010111000
The Hex pattern over the full 64 bits is precisely: 3FF6B851EB851EB8.
With the implied 1 the complete decimal value is stored as:
1.4199999999999999289457264239899814128875732421875000
And again, not precisely 1.42.
Now, let's examine 1.6:
Sign bit = 0
Exponent = 01111111111
Mantissa = 1001100110011001100110011001100110011001100110011010
The Hex pattern over the full 64 bits is precisely: 3FF999999999999A.
Notice the repeating binary fraction in this case that is truncated and rounded when the mantissa bits run out? Obviously 1.6, when represented in binary base 2, can never be precisely accurate, in the same way that 1/3 can never be accurately represented in decimal base 10 (0.33333333333333333333333... ≠ 1/3).
With the implied 1 the complete decimal value is stored as:
1.6000000000000000888178419700125232338905334472656250
Not exactly 1.6 but closer than the others!
Now let's subtract the full stored double precision representations:
1.60 - 1.42 = 0.18000000000000015987
1.42 - 1.24 = 0.17999999999999993782
So as you can see, they are not equal at all.
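These stored values are easy to reproduce from any environment that exposes IEEE 754 doubles. A quick illustrative check (Python here, only because it can print the exact decimal value of a double; the doubles themselves are the same ones VBA uses):
from decimal import Decimal

for x in (1.24, 1.42, 1.6):
    print(Decimal(x))            # the exact decimal value of the nearest double

print(Decimal(1.6 - 1.42))       # 0.18000000000000015987...
print(Decimal(1.42 - 1.24))      # 0.17999999999999993782...
print(1.6 - 1.42 == 1.42 - 1.24) # False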
The usual way to work around this is threshold testing, basically an inspection to see if two values are close enough... and that depends on you and your requirements. Be forewarned, effective threshold testing is way harder than it appears at first glance.
Here is a function to help you get started comparing two Double Precision numbers. It handles many situations well but not all because no function can.
Function Roughly(a#, b#, Optional within# = 0.00001) As Boolean
    Dim d#, x#, y#, z#
    Const TINY# = 1.17549435E-38 'SINGLE_MIN
    If a = b Then Roughly = True: Exit Function
    x = Abs(a): y = Abs(b): d = Abs(a - b)
    If a <> 0# Then
        If b <> 0# Then
            z = x + y
            If z > TINY Then
                Roughly = d / z < within
                Exit Function
            End If
        End If
    End If
    Roughly = d < within * TINY
End Function
The idea here is to have the function return True if the two Doubles are Roughly the same Within a certain margin:
MsgBox Roughly(3.14159, 3.141591) '<---displays True
The Within margin defaults to 0.00001, but you can pass whatever margin you need.
And while we know that:
MsgBox 1.60 - 1.42 = 1.42 - 1.24 '<---displays False
Consider the utility of this:
MsgBox Roughly(1.60 - 1.42, 1.42 - 1.24) '<---displays True
#chris neilsen linked to an interesting Microsoft page about Excel and IEEE 754.
And please read David Goldberg's seminal What Every Computer Scientist Should Know About Floating-Point Arithmetic. It changed the way I understood floating point numbers.

Understanding the maths

I am trying to understand the maths in this code that converts binary to decimal. I was wondering if anyone could break it down so that I can see the working of a conversion. Sorry if this is too newb, but I've been searching for an explanation for hours and can't find one that explains it sufficiently.
I know the conversion is decimal*2 + int(digit), but I still can't break it down to understand exactly how it's converting to decimal.
binary = input('enter a number: ')
decimal = 0
for digit in binary:
    decimal = decimal*2 + int(digit)
print(decimal)
Here's an example with the small binary number "10" (which is 2 in decimal):
binary = "10"
decimal = 0
for digit in binary:
    decimal = decimal*2 + int(digit)
The for loop will first take the 1 from the binary string, since it is in the first place.
digit = '1' for the 1st iteration.
It will overwrite the value of decimal, which is initially 0.
decimal = 0*2 + 1 = 1
For the 2nd iteration, digit = '0'.
It will again calculate the value of decimal like below:
decimal = 1*2 + 0 = 2
So your decimal number is 2.
You can refer to this for binary to decimal conversion.
The for loop and syntax are hiding a larger pattern. First, consider the same base-10 numbers we use in everyday life. One way of representing the number 237 is 200 + 30 + 7. Breaking it down further, we get 2*10^2 + 3*10^1 + 7*10^0 (note that ** is the exponent operator in Python, but ^ is used nearly everywhere else in the world).
There's this pattern of exponents and coefficients with respect to the base 10. The exponents are 2, 1, and 0 for our example, and we can represent fractions with negative exponents. The coefficients 2, 3, and 7 are the same as from the number 237 that we started with.
It winds up being the case that you can do this uniquely for any base. I.e., every real number has a unique representation in base 10, base 2, and any other base you want to work in. In base 2, the exact same pattern emerges, but all the 10s are replaced with 2s. E.g., in binary consider 101. This is the same as 1*2^2 + 0*2^1 + 1*2^0, or just 5 in base-10.
What the algorithm you have does is make that a little more efficient. It's pretty wasteful to compute 2^20, 2^19, 2^18, and so on when you're basically doing the same operations in each of those cases. With our same binary example of 101, they've re-written it as (1 *2+0)*2+1. Notice that if you distribute the second 2 into the parenthesis, you get the same representation we started with.
What if we had a larger binary number, say 11001? Well, the same trick still works. (((1 *2+1 )*2+0)*2+0)*2+1.
With that last example, what is your algorithm doing? It's first computing (1 *2+1 ). On the next loop, it takes that number and multiplies it by 2 and adds the next digit to get ((1 *2+1 )*2+0), and so on. After just two more iterations your entire decimal number has been computed.
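If it helps to see the two forms side by side, here is a small sketch comparing the loop from the question with the explicit digit-times-power sum (the helper names horner and positional are made up):
def horner(binary):
    decimal = 0
    for digit in binary:
        decimal = decimal * 2 + int(digit)
    return decimal

def positional(binary):
    # The "wasteful" version: compute every power of 2 separately.
    n = len(binary)
    return sum(int(d) * 2 ** (n - 1 - i) for i, d in enumerate(binary))

for s in ("101", "11001", "11100"):
    print(s, horner(s), positional(s), int(s, 2))   # all three agree: 5, 25, 28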
Effectively, what this is doing is taking each binary digit and multiplying it by 2^n, where n is the place of that digit, and then summing them up. The confusion comes from this being done almost in reverse; let's step through an example:
binary = "11100"
So first it takes the digit '1' and adds it onto 0 * 2 = 0, so the running total is '1'.
Next it takes the second digit '1' and adds it to 1 * 2 = 2, giving '1' + '1'*2.
Same again, giving '1' + '1'*2 + '1'*2^2.
Then the two zeros add nothing, but double the result twice, so finally we have '0' + '0'*2 + '1'*2^2 + '1'*2^3 + '1'*2^4 = 28.
(I've left quotes around the digits to show where they end up.)
As you can see, the end result in this format is a pretty simple binary to decimal conversion.
I hope this helped you understand a bit :)
I will try to explain the logic :
Consider the binary number 11001010. When looping over it in Python, the leading digit 1 comes in first, and so on.
To convert it to decimal, that leading 1 must end up multiplied by 2^7, continuing down to the last digit multiplied by 2^0.
And then we sum them.
Here, each digit is added as it arrives, and the running total is then multiplied by 2 on every following iteration until the loop ends. For example, 1*(2^7) is achieved because decimal = 0(decimal) + 1(digit) is then doubled seven more times. When the next digit (1) comes in on the second iteration, it is added as decimal = 1(decimal)*2 + 1(digit). During the third iteration of the loop, decimal = 3(decimal)*2 + 0(digit), and
3*2 = (2+1)*2 = (first_digit) 1*2*2 + (second_digit) 1*2.
It continues like this for all the digits.
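A quick way to watch this happen is to print the running total after each iteration of the loop from the question, using the 11001010 example above:
binary = "11001010"
decimal = 0
for digit in binary:
    decimal = decimal * 2 + int(digit)
    print(digit, decimal)   # 1 1, 1 3, 0 6, 0 12, 1 25, 0 50, 1 101, 0 202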

Round NaN in Haskell

To my great surprise, I found that rounding a NaN value in Haskell returns a gigantic negative number:
round (0/0)
-269653970229347386159395778618353710042696546841345985910145121736599013708251444699062715983611304031680170819807090036488184653221624933739271145959211186566651840137298227914453329401869141179179624428127508653257226023513694322210869665811240855745025766026879447359920868907719574457253034494436336205824
The same thing happens with floor and ceiling.
What is happening here? Is this behavior intended? Of course, I understand that anyone who doesn't want this behavior can always write another function that checks isNaN - but are there existing alternative standard library functions that handle NaN more sanely (for some definition of "more sanely")?
TL;DR: NaNs have an arbitrary representation whose magnitude lies between 2 ^ 1024 and 2 ^ 1025 (bounds not included), and -1.5 * 2 ^ 1024 (which is one possible NaN) happens to be the one you hit.
Why any reasoning is off
What is happening here?
You're entering the region of undefined behaviour. Or at least that is what you would call it in some other languages. The report defines round as follows:
6.4.6 Coercions and Component Extraction
The ceiling, floor, truncate, and round functions each take a real fractional argument and return an integral result. … round x returns the nearest integer to x, the even integer if x is equidistant between two integers.
In our case x does not represent a number to begin with. According to 6.4.6, y = round x should fulfil that any other z from round's codomain has an equal or greater distance:
y = round x ⇒ ∀z : dist(z,x) >= dist(y,x)
However, the distance (aka the subtraction) of numbers is defined only for, well, numbers. If we used
dist n d = fromIntegral n - d
we get in trouble soon: any operation that includes NaN will return NaN again, and comparisons on NaN fail, so our property above does not hold for any z if x was a NaN to begin with. If we check for NaN, we can return any value, but then our property holds for all pairs:
dist n d = if isNaN d then constant else fromIntegral n - d
So we're completely arbitrary in what round x shall return if x was not a number.
Why do we get that large number regardless?
"OK", I hear you say, "that's all fine and dandy, but why do I get that number?" That's a good question.
Is this behavior intended?
Somewhat. It isn't really intended, but to be expected. First of all, we have to know how Double works.
IEEE 754 double precision floating point numbers
A Double in Haskell is usually an IEEE 754 compliant double precision floating point number, that is, a number that has 64 bits and is represented as
x = s * m * (b ^ e)
where s is a single bit, m is the mantissa (52 bits) and e is the exponent (11 bits, floatRange). b is the base, and it's usually 2 (you can check with floatRadix). Since the value of m is normalized, every well-formed Double has a unique representation.
IEEE 754 NaN
Except NaN. NaN is represented with the exponent emax+1 together with a non-zero mantissa. So if the bitfield
SEEEEEEEEEEEMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
represents a Double, what's a valid way to represent NaN?
?111111111111000000000000000000000000000000000000000000000000000
^
That is, a single M bit is set to 1; the others are not needed for this. The sign is arbitrary. Why only a single bit? Because it's sufficient.
Interpret NaN as Double
Now, when we ignore the fact that this is a malformed Double, a NaN, and really, really, really want to interpret it as a number, what number would we get?
m = 1.5
e = 1024
x = 1.5 * 2 ^ 1024
= 3 * 2 ^ 1024 / 2
= 3 * 2 ^ 1023
And lo and behold, that's exactly the number you get for round (0/0):
ghci> round $ 0 / 0
-269653970229347386159395778618353710042696546841345985910145121736599013708251444699062715983611304031680170819807090036488184653221624933739271145959211186566651840137298227914453329401869141179179624428127508653257226023513694322210869665811240855745025766026879447359920868907719574457253034494436336205824
ghci> negate $ 3 * 2 ^ 1023
-269653970229347386159395778618353710042696546841345985910145121736599013708251444699062715983611304031680170819807090036488184653221624933739271145959211186566651840137298227914453329401869141179179624428127508653257226023513694322210869665811240855745025766026879447359920868907719574457253034494436336205824
Which brings our small adventure to a halt. We have a NaN, whose exponent field yields a factor of 2 ^ 1024, and some non-zero mantissa, which together yield a result with absolute value between 2 ^ 1024 and 2 ^ 1025.
Note that this isn't the only way NaN can get represented:
In IEEE 754, NaNs are often represented as floating-point numbers with the exponent emax + 1 and nonzero significands. Implementations are free to put system-dependent information into the significand. Thus there is not a unique NaN, but rather a whole family of NaNs.
For more information, see the classic paper on floating point numbers by Goldberg.
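If you want to poke at the bit fields yourself, that can be done from any language that lets you reinterpret the 64 bits; here is a small illustrative sketch in Python (note that the sign of the NaN produced by 0/0 is platform-dependent, which is why GHC's result above is negative):
import struct

def fields(x):
    # Sign bit, biased exponent and mantissa bits of an IEEE 754 double.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

sign, biased_exp, mantissa = fields(float("nan"))
print(sign, biased_exp, mantissa)   # exponent field is 2047 (emax + 1, biased), mantissa is nonzero

# Pretend it is an ordinary number: (-1)^s * (1.mantissa) * 2^(e - 1023), in exact integers
value = (-1) ** sign * (2 ** 52 + mantissa) * 2 ** (biased_exp - 1023 - 52)
print(value)                        # +/- 3 * 2^1023, the very number round (0/0) produced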
This has long been observed as a problem. Here're a few tickets filed against GHC on this very topic:
https://ghc.haskell.org/trac/ghc/ticket/3070
https://ghc.haskell.org/trac/ghc/ticket/11553
https://ghc.haskell.org/trac/ghc/ticket/3676
Unfortunately, this is a thorny issue with lots of ramifications. My personal belief is that this is a genuine bug and it should be fixed properly by throwing an error. But you can read the comments on these tickets to get an understanding of the tricky issues preventing GHC from implementing a proper solution. Essentially, it comes down to speed vs. correctness, and this is one point where (i) the Haskell report is woefully underspecified, and (ii) GHC compromises the latter for the former.

Create list of strings from list of doubles, non Scientific notation

listOfLongDeci = [showFFloat Nothing (1/a) | a<-[2..1000], length (show (1/a)) > 7]
listOfLongDeci2 = [show (1/a) | a<-[2..1000], length (show (1/a)) > 7]
listOfLongDeci3 = [(1/a) | a<-[2..1000], length (show (1/a)) > 7]
The 1st gives a list of ShowS; how can I make a String from ShowS?
The 2nd gives a list in scientific notation.
The 3rd only gives a list of Doubles.
How can I use any of these to create a list of strings in non-scientific notation? (Euler 26)
As requested:
the 1st gives a list of ShowS, how can I make a String from ShowS?
Since ShowS is a type synonym for String -> String, you obtain a String by applying the function to a String. Since the showXFloat functions produce a function that prepends some String to the final String argument (basically a difference list; many show-related functions produce such - shows, showChar, showString, to name a few - for reasons of efficiency), the natural choice for the final argument is the empty String, so
listOfLongDeci = [showFFloat Nothing (1/a) "" | a<-[2..1000], length (show (1/a)) > 7]
produces a list of Strings, correctly rounded approximations to the decimal representation of the numbers 1/a in non scientific notation.
how can I use any of these to create a list of strings with non scientific notation? (euler 26)
The first part has been answered, but these representations won't help you solve Problem 26 of Project Euler,
Find the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.
A Double has 53 bits of precision (52 explicit bits for the significand plus one hidden bit for normalized numbers, no hidden bit, thus 52 or fewer bits of precision for subnormal numbers), and the number 1/d cannot be exactly represented as a Double unless d is a power of 2. The 53 bits of precision give you roughly
Prelude> 53 * log 2 / log 10
15.954589770191001
significant decimal digits of precision, so from the first nonzero digit on, you have 15 or 16 digits that you can expect to be correct for the exact [terminating or recurring] decimal expansion of the fraction 1/d, beyond that, the expansions differ.
For example, 1/71 has a recurring cycle 01408450704225352112676056338028169 of length 35 (by far not the longest in the range to be considered). The closest representable Double to 1/71 is
0.01408450704225352144438598855913369334302842617034912109375 = 8119165525400331 / (2^59)
of which the first 17 significant digits are correct (and 0.014084507042253521 is also what showFFloat Nothing (1/71) "" gives you).
To find the longest recurring cycle in the decimal expansion of 1/d, you can use an exact (or sufficiently accurate finite) string representation of the Rational number 1 % d, or, better, use pure integer arithmetic (compute the decimal expansion using long division) without involving a Rational.
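For the pure-integer long-division approach mentioned above, a minimal sketch (written in Python for brevity rather than Haskell; the function name cycle_length is made up): the cycle closes as soon as a remainder repeats, so no floating point or Rational values are needed.
def cycle_length(d):
    # Long division of 1 by d, tracking remainders; the recurring cycle
    # starts when a remainder is seen a second time.
    seen = {}
    remainder = 1 % d
    position = 0
    while remainder != 0 and remainder not in seen:
        seen[remainder] = position
        remainder = remainder * 10 % d
        position += 1
    return 0 if remainder == 0 else position - seen[remainder]

print(cycle_length(71))                        # 35, matching the cycle shown above
print(max(range(2, 1000), key=cycle_length))   # the d asked for in Problem 26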

Conversion of numeric to string in MATLAB

Suppose I want to convert the number 0.011124325465476454 to a string in MATLAB.
If I hit
mat2str(0.011124325465476454,100)
I get 0.011124325465476453 which differs in the last digit.
If I hit num2str(0.011124325465476454,'%5.25f')
I get 0.0111243254654764530000000
which is padded with undesirable zeros and differs in the last digit (3 should be 4).
I need a way to convert numerics with random number of decimals to their EXACT string matches (no zeros padded, no final digit modification).
Is there such as way?
EDIT: Since I didn't have in mind the info about precision that Amro and nrz provided, I am adding some additional info about the problem. The numbers I actually need to convert come from a C++ program that outputs them to a txt file, and they are all of the C++ double type. [NOTE: The part that inputs the numbers from the txt file to MATLAB is not coded by me and I'm actually not allowed to modify it to keep the numbers as strings without converting them to numerics. I only have access to this code's "output", which is the numerics I'd like to convert]. So far I haven't gotten numbers with more than 17 decimals (NOTE: consequently the example provided above, with 18 decimals, is not very indicative).
Now, if the number has 15 digits eg 0.280783055069002
then num2str(0.280783055069002,'%5.17f') or mat2str(0.280783055069002,17) returns
0.28078305506900197
which is not the exact number (see last digits).
But if I hit mat2str(0.280783055069002,15) I get
0.280783055069002 which is correct!!!
Probably there are a million ways to "code around" the problem (e.g. create a routine that does the conversion), but isn't there some way, using MATLAB's standard built-ins, to get the desired result when I input a number with an arbitrary number of decimals (but no more than 17)?
My HPF toolbox also allows you to work with an arbitrary precision of numbers in MATLAB.
In MATLAB, try this:
>> format long g
>> x = 0.280783054
x =
0.280783054
As you can see, MATLAB writes it out with the digits you have posed. But how does MATLAB really "feel" about that number? What does it store internally? See what sprintf says:
>> sprintf('%.60f',x)
ans =
0.280783053999999976380053112734458409249782562255859375000000
And this is what HPF sees, when it tries to extract that number from the double:
>> hpf(x,60)
ans =
0.280783053999999976380053112734458409249782562255859375000000
The fact is, almost all decimal numbers are NOT representable exactly in floating point arithmetic as a double. (0.5 or 0.375 are exceptions to that rule, for obvious reasons.)
However, when stored in a decimal form with 18 digits, we see that HPF did not need to store the number as a binary approximation to the decimal form.
x = hpf('0.280783054',[18 0])
x =
0.280783054
>> x.mantissa
ans =
2 8 0 7 8 3 0 5 4 0 0 0 0 0 0 0 0 0
What niels does not appreciate is that decimal numbers are not stored in decimal form as a double. For example what does 0.1 look like internally?
>> sprintf('%.60f',0.1)
ans =
0.100000000000000005551115123125782702118158340454101562500000
As you see, MATLAB does not store it as 0.1. In fact, MATLAB stores 0.1 as a binary number, here in effect...
1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + 1/8192 + 1/65536 + ...
or if you prefer
2^-4 + 2^-5 + 2^-8 + 2^-9 + 2^-12 + 2^-13 + 2^-16 + ...
To represent 0.1 exactly, this would take infinitely many such terms since 0.1 is a repeating number in binary. MATLAB stops at 52 bits. Just like 2/3 = 0.6666666666... as a decimal, 0.1 is stored only as an approximation as a double.
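The same double can be inspected exactly outside MATLAB as well; for instance, a quick illustrative check in Python shows both the exact rational value and the exact decimal expansion of the double nearest to 0.1:
from fractions import Fraction
from decimal import Decimal

print(Fraction(0.1))   # 3602879701896397/36028797018963968  (the denominator is 2^55)
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625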
This is why your problem really is completely about precision and the binary form that a double comprises.
As a final edit after chat...
The point is that MATLAB uses a double to represent a number. So it will take in a number with up to 15 decimal digits and be able to spew them out with the proper format setting.
>> format long g
>> eps
ans =
2.22044604925031e-16
So for example...
>> x = 1.23456789012345
x =
1.23456789012345
And we see that MATLAB has gotten it right. But now add one more digit to the end.
>> x = 1.234567890123456
x =
1.23456789012346
In its full glory, look at x, as MATLAB sees it:
>> sprintf('%.60f',x)
ans =
1.234567890123456024298320699017494916915893554687500000000000
So always beware the last digit of any floating point number. MATLAB will try to round things intelligently, but 15 digits is just on the edge of where you are safe.
Is it necessary to use a tool like HPF or MP to solve such a problem? No, as long as you recognize the limitations of a double. However tools that offer arbitrary precision give you the ability to be more flexible when you need it. For example, HPF offers the use and control of guard digits down in that basement area. If you need them, they are there to save the digits you need from corruption.
You can use the Multiple Precision Toolkit from the MATLAB File Exchange for arbitrary precision numbers. Floating point numbers do not usually have an exact base-10 representation.
That's because your number is beyond the precision of the double numeric type (which gives you between 15 and 17 significant decimal digits). In your case, it is rounded to the nearest representable number as soon as the literal is evaluated.
If you need more precision than what the double-precision floating-points provides, store the numbers in strings, or use arbitrary-precision libraries. For example use the Symbolic Toolbox:
sym('0.0111243254654764549999999')
You cannot get an EXACT string since the number is stored in the double type, or even the long double type.
The number stored will be subtly more or less than the number you give.
A computer only knows the binary digits 0 and 1, and a number expressible exactly in one radix may not be expressible exactly in another. For example, the number 1/3 in radix 10 yields 0.33333333... (the ellipsis indicates that more digit 3s would follow), and it will be truncated to something like 0.333333; radix 3 yields 0.10000000, exactly the amount, no more and no less; radix 2 yields 0.01010101..., which will likely be truncated to 0.01010101 in the computer. That is 85/256, less than 1/3 because of the truncation, and the next time you fetch the number it won't be the one you wanted.
So from the beginning you should store the number as a string instead of a float type, otherwise it will lose precision.
Considering the precision problem, MATLAB provides symbolic computation with arbitrary precision.
