floor() behaves strangely in VC++ - visual-c++

I was doing some coding and was suddenly puzzled by some strange behavior of floor(). The line that caused the error is shown below:
printf("%f",floor(310.96*100));
and the output was 31095.000000.
Why is this happening?

This is a typical floating-point issue. The constant 310.96 is not exactly representable in binary floating point. The stored double is slightly below 310.96 (the closest 32-bit float is 310.9599914550781).
You can verify this yourself with an online IEEE 754 converter. Multiplying that by 100 and truncating with floor() yields your 31095.000000.
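A minimal C sketch of what is going on, assuming IEEE 754 doubles (%.17g prints enough significant digits to expose the stored value):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 310.96;
    printf("%.17g\n", x);            /* 310.95999999999998 -- slightly low */
    printf("%.17g\n", x * 100);      /* 31095.999999999996 */
    printf("%f\n", floor(x * 100));  /* 31095.000000 */
    return 0;
}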

Floating-point numbers are not 100% exact: 310.96*100 might result in 31095.99999999..., hence your result; see also this question.

Related

Why does it throw a "Floating Point Exception" if I divide a floating-point number by zero?

As titled.
As far as I know about floating-point numbers, if we divide a floating-point number by zero, the result can actually be "∞", namely infinity, and it can be represented in floating-point format as I showed below. So why does the Linux system raise an exception rather than just doing what I expected? (The exception is raised by the underlying system.)
Dividing by 0 does not necessarily result in infinity. There's a good Numberphile video that goes into this.
More importantly here, the IEEE 754 floating-point standard (which is what most languages/CPUs use) dictates that dividing a nonzero number by 0 yields ±infinity, while 0/0 yields NaN; many programming languages turn these cases into an error.
This is not Linux-specific. Linux itself does not raise anything called an exception in the language sense (it delivers signals, such as SIGFPE), so this must be a higher-level language thing.
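A small C sketch of the distinction (the integer case is undefined behavior in C; on Linux/x86 it typically dies with SIGFPE, which the shell reports as "Floating point exception"):

#include <stdio.h>

int main(void) {
    double zero = 0.0;

    /* IEEE 754: floating-point division by zero does not trap by
       default -- it quietly produces infinity (or NaN for 0.0/0.0). */
    printf("1.0/0.0 = %f\n", 1.0 / zero);   /* inf */
    printf("0.0/0.0 = %f\n", zero / zero);  /* nan */

    /* Integer division by zero has no representable result, so the
       CPU traps; on Linux the kernel delivers SIGFPE to the process. */
    int z = 0;
    printf("%d\n", 1 / z);  /* typically killed by SIGFPE here */
    return 0;
}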

Paraview "possible mismatch of datasize with declaration" error

Paraview (v4.1.0 64-bit, OSX 10.9.2) is giving me the following error:
Generic Warning: In /Users/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/IO/Legacy/vtkDataReader.cxx, line 1388
Error reading ascii data. Possible mismatch of datasize with declaration.
I'm not sure why. I've double-checked that fields are all of the expected lengths, and none of the values are NaN, inf, or otherwise extremely large. The issue starts with the output from timestep 16 (0-15 produces no error). Graphically, steps 0-15 produce plots of my data as expected; step 16 shows the "Y/Yc" series having an unexpectedly large point (0.5625, 2.86616e+36).
Is fine:
http://www.filedropper.com/ring0000015
Produces error:
http://www.filedropper.com/ring0000016
I have been facing the same problem for the last 6 months and have been struggling to find a solution. I was given the following reasons to explain the error (http://www.cfd-online.com/Forums/paraview/139451-error-while-reading-vtk-files-paraview.html#post503315):
1. It could be a problem with the character used for the line ending (http://en.wikipedia.org/wiki/Newline). In a nutshell:
   a) On Windows, line endings are CR+LF.
   b) On Linux, line endings are LF only.
   c) On Mac, some older versions used CR only; nowadays it should use LF as well.
   (CR = "Carriage Return" byte, LF = "Line Feed" byte.)
2. There might be one or more values of type NaN or Inf or some other special numeric representation for non-real numbers. They might be readable on Linux but not on Mac, perhaps because of the next possibility.
3. Location-based numeric formatting, aka locale, might be triggering situations where values are stored with commas or with a strange scientific notation, for example if the value "1.0002" is stored as "1,0002" or even "1.0002ES+000" (see the sketch after this list).
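Regarding #3, a minimal C sketch of how a locale can change the decimal separator printf emits (the locale name de_DE.UTF-8 is an example and may not be installed on every system):

#include <locale.h>
#include <stdio.h>

int main(void) {
    double v = 1.0002;
    printf("%g\n", v);  /* 1.0002 in the default "C" locale */

    /* Switching LC_NUMERIC changes the decimal separator, producing
       output like "1,0002" that a VTK reader cannot parse. */
    if (setlocale(LC_NUMERIC, "de_DE.UTF-8") != NULL)
        printf("%g\n", v);  /* 1,0002 */
    return 0;
}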
I have viewed other forums, and they have generally pointed to #2 and #3 and their possible solutions, which have in general worked. However, none of the above seemed to solve my problem.
I noticed that some of the stored solution values in the ASCII files were as small as 10.e-34. I had a feeling that underflow conditions might be triggering the problem, so I put a check in my code for underflow conditions and rounded such values off to 0. This fixed the issue; the solution is now displayed at all times without error messages.
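A sketch of such a check, applied just before values are written to the .vtk file (the 1e-30 threshold and the helper name are illustrative, not from the original post):

#include <math.h>

/* Round near-underflow values to zero before writing them out, so the
   ASCII .vtk file never contains extreme exponents the reader chokes on. */
static double clamp_tiny(double v) {
    return (fabs(v) < 1e-30) ? 0.0 : v;
}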
This may not fix the Inf/NaN problems, but if the numbers in the vtk file are too large or too small (e.g. 1e-50, 1e45), this may cause the same error.
One solution in this case is to change the datatype specification. When I had this problem, I had specified the datatype as "float", which is a 32-bit floating-point representation (the same as "float32"). Changing it to "float64" uses a 64-bit double-precision representation, which is consistent with my C++ code that generated the vtk file using doubles. This eliminated the problem.
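A sketch of the corresponding writer code for a legacy ASCII .vtk file; the array name and function are illustrative, and the "float64" spelling follows this answer (the classic legacy-format spelling is "double"):

#include <stdio.h>

/* Declare the scalar array in 64-bit precision and write values with
   enough digits that doubles round-trip exactly through the ASCII file. */
void write_scalars(FILE *f, const double *v, int n) {
    fprintf(f, "SCALARS pressure float64 1\n");
    fprintf(f, "LOOKUP_TABLE default\n");
    for (int i = 0; i < n; i++)
        fprintf(f, "%.17g\n", v[i]);
}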
If you are using Fortran, this problem can also occur when you write to a file but never close it in your code.
For example:
character(len=3) :: numb
integer :: i
do i = 1, 10
   write(numb,'(i3)') i
   open(unit=1, file='test'//numb//'.vtk')
   write(1,*) .......
   ! bug: no close(1) here, so the file may be left unflushed/incomplete
enddo

Function giving slightly different answer than expected

I'm doing some monad stuff in Haskell and I wrote a function that calculates the probability of winning a gambling game given the game's decision tree. It works like a charm, except for the fact that it sometimes returns SLIGHTLY different answers than expected. For example, I'm uploading my code to DOMjudge and it returns an error, saying that the correct answer should be 1 % 6 instead of 6004799503160661 % 36028797018963968, which is what my function is returning. If you actually do the division they're both nearly the same, but I don't understand why my answer is still slightly different. I've been messing around with different types (using Real instead of Int for example), but so far no luck. I'm kind of new to this stuff and I can't seem to figure this out. Can anyone point me in the right direction?
-code deleted-
You're losing precision due to the division in probabilityOfWinning. You have the right idea for avoiding it (using type Rational = Ratio Integer), but you're applying it too late in the game. By calling toRational after the division, the precision is already gone before the conversion to Rational happens.
Try something like this:
import Data.Ratio
probabilityOfWinning tree = countWins tree % countGames tree
Then remove the Real type restrictions from countWins and countGames so that they return whole integers instead of floating-point numbers. Together these ensure you always use exact rational arithmetic instead of floating point.

msvc division by zero

I have two console apps (MSVC 2008). When they divide by zero, they behave differently. My questions are below.
a) In one app, the result of division by zero shows as 1.#INF000000000000 in the debugger. printf "%4.1f" prints it as "1.$".
b) In another app, the result of division by zero shows as 9.2559631349317831e+061 in the debugger. printf "%4.1f" prints it as "-1.$".
Why does neither app raise an exception or signal on division by zero? Isn't an exception/signal the default behaviour?
What are the define names for the two constants above?
Generally, if I check for denominator == 0 before dividing, which defined value should I use for the dummy result? Is DBL_MIN OK? I found that a NaN value is not.
Can I tell stdio how to format one specific double value as whatever string I choose? I realize it's too much to ask, but it would be nice to tell stdio to print, say, "n/a" for DBL_MIN values in my app, as an example.
How should I approach division by zero and the printing of its results, for best portability? By printing, I mean "print the number as 'n/a' if it is the result of a division by zero".
What is not clear to me is how to represent the result of a division by zero in a single double, in a portable way.
Why two different results? Is it due to compiler options?
The compiler is C++, but used very much like C. Thanks.
In floating-point division by zero, the result should be infinity (represented with a special bit pattern).
My guess is that the second application does not actually divide by zero, but rather by a really small number. You can check this by inspecting the underlying representation, either in a debugger or via trace output (you can access it by placing the value in a union of the floating-point type and an integer of the same size). Note that simply printing the number might not reveal this, as the printing algorithm sometimes prints really small numbers as zero.
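A minimal C sketch of the union trick described above, combined with isinf/isnan checks to print "n/a" (these are C99 macros from <math.h>; MSVC 2008 predates them and offers _finite/_isnan in <float.h> instead; all names below are illustrative):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* View the raw bit pattern of a double through a union. */
union bits { double d; uint64_t u; };

static void show(double v) {
    union bits b;
    b.d = v;
    if (isinf(v) || isnan(v))
        printf(" n/a (bits: 0x%016llx)\n", (unsigned long long)b.u);
    else
        printf("%4.1f (bits: 0x%016llx)\n", v, (unsigned long long)b.u);
}

int main(void) {
    double zero = 0.0;
    show(1.0 / zero);   /* +inf: 0x7ff0000000000000 */
    show(-1.0 / zero);  /* -inf: 0xfff0000000000000 */
    show(1e-300);       /* tiny but finite: %4.1f displays it as 0.0 */
    return 0;
}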

Excel Floating Point Arithmetic - What Type Does It Actually Use

In a previous question someone indicated that Excel uses a 64-bit (8-byte) double-precision floating-point type.
Is that correct? Is there any material on this at all?
I am trying to tie off numbers and this is killing me!
According to this article, yes: it uses 64-bit double-precision floating-point numbers. The article also describes the rounding errors etc. associated with this format.
Have a look at Floating-point arithmetic, under the section Precision:
"A floating-point number is stored in binary in three parts within a 65-bit range: the sign, the exponent, and the mantissa."
Here is another article: Understanding Floating Point Precision, aka "Why does Excel Give Me Seemingly Wrong Answers?". Have a look at the section Structure of a Floating Point Number.
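A small C sketch of the kind of artifact this 64-bit format produces; Excel performs the same IEEE 754 double arithmetic internally (the second line is the classic =0.5-0.4-0.1 example):

#include <stdio.h>

int main(void) {
    /* Printing with 17 significant digits exposes the rounding error
       that a 15-digit display (like Excel's) normally hides. */
    printf("%.17g\n", 0.1 + 0.2);          /* 0.30000000000000004 */
    printf("%.17g\n", (0.5 - 0.4) - 0.1);  /* -2.7755575615628914e-17 */
    return 0;
}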
