After multiplying two doubles, the result carries extra precision, which then causes a rounding (to 2 decimals) issue: I'm supposed to get 37.34, but it gives 37.33 instead (viewing in debug mode).
additional precision http://s8.postimg.org/9nn2bbiab/precision.jpg
Any idea why, and how to solve it?
EDIT
I actually did try MidpointRounding. Try this on any calculator: it should give you exactly 37.335.
But C# gave me 37.334999999999, which later results in the wrong answer after rounding to 2 decimals.
still rounded wrongly http://s28.postimg.org/psi2dz59n/precision2.jpg
The problem, I believe, was not in the rounding but in the multiplication.
Until I understand why this happens, I have a workaround that might help others hitting the same problem. It looks dirty, but:
double price = 39.3;
double m = 0.95;
double result = price * m;                                      // binary doubles give 37.334999999999...
result = double.Parse(result.ToString());                       // ToString() here comes back as "37.335", so parsing drops the excess digits
result = Math.Round(result, 2, MidpointRounding.AwayFromZero);  // 37.34
This way I get 37.335 after the multiplication (and the string round-trip), and 37.34 after rounding to 2 decimals.
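A cleaner alternative is to do money math in decimal, which stores base-10 digits exactly; this is only a minimal sketch of that idea, not the original poster's code:

using System;

class DecimalRoundingSketch
{
    static void Main()
    {
        // 39.3m and 0.95m are exact in decimal, so the product is exactly 37.335
        decimal price = 39.3m;
        decimal m = 0.95m;
        decimal result = Math.Round(price * m, 2, MidpointRounding.AwayFromZero);
        Console.WriteLine(result); // 37.34
    }
}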
Related
This is my first question.
So, the problem is Python rounding. I have seen it before, but I don't really know how to get around it.
For example: I have the number 10.34 and I need just the fractional part, so 0.34.
I had some ideas about how to do that. One of them:
n = float(input())
print(n - int(n))
In the case of 10.34, this code gives me "0.33999999999999986" instead of 0.34.
I have some ideas about how to do it with strings or other tools, but the task assumes I should use only basic tools.
Use round:
res = n - int(n)
print(round(res, 10))
n = float(input())
n = n - int(n)
n = round(n, 2)
https://www.w3schools.com/python/ref_func_round.asp
The round() function returns a floating point number that is a rounded version of the specified number, with the specified number of decimals.
round(number, digits)
For your reference.
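For comparison, the same idea sketched in C# (illustrative only; Math.Truncate plays the role of int() here):

double n = 10.34;
double frac = n - Math.Truncate(n);     // Math.Truncate drops the integer part, like int(n)
Console.WriteLine(frac.ToString("R"));  // shows the raw binary value, e.g. 0.33999999999999986
Console.WriteLine(Math.Round(frac, 2)); // 0.34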
I have a calculation:
(22,582 / 10,000)^(1/15) - 1
In C# I am using it like this:
double i = Math.Pow(2.2582,1/15) - 1;
Response.Write(i);
But every time it returns 0 in i. I know (1/15) is causing some disturbance in the calculation, so to work around it I used (.067) in place of (1/15), which gives me 0.0560927980835855, but I am still far away from my actual result. Can somebody please tell me the right approach?
The first calculation should be:
Math.Pow(22.582d / 10.000d, 1.0d / 15.0d) - 1.0d
You use the "d" in literals to tell the compiler that the number should be a double. If you don't use it the compiler thinks that 1/15 is two integers divided resulting in 0.
So the last calculation should be:
double i = Math.Pow(2.2582d, 1.0d/15.0d) - 1.0d;
Response.Write(i);
This means that:
1/15 = 0
and
1.0d/15.0d = 0.06666667
Here 1 and 15 are treated as integers, so the division produces the integer result 1/15 = 0, not the double result.
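A minimal sketch of the difference (my own illustration, assuming the 22,582 / 10,000 in the question means 2.2582):

Console.WriteLine(1 / 15);        // 0 – integer division
Console.WriteLine(1.0d / 15.0d);  // ≈ 0.0667 – floating-point division
double i = Math.Pow(22.582d / 10.000d, 1.0d / 15.0d) - 1.0d;
Console.WriteLine(i);             // ≈ 0.0558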
Try using 1f/15f instead of 1/15
Using Sybase ASE 12.5, I have the following situation.
Two values stored in float columns, when multiplied, give a value.
Converting that value to a varchar (or retrieving it with Java) gives the underlying precise value that the floats approximate.
My issue is that the value as represented by the floats is correct, but the precise value is causing issues (due to strict rounding rules).
For example
declare #a float,#b float
select #a = 4.047000, #b = 1033000.000000
select #a*#b as correct , str(#a*#b,40,20) as wrong
gives:
correct: 4180551.000000,
wrong: 4180550.9999999995343387
Similarly when
#a = 4.047000, #b = 1
...you get
correct: 4.047000,
wrong: 4.0469999999999997
(The same thing happens using convert(varchar(30), #a*#b) and cast(#a*#b as varchar(30)).)
I appreciate that it would be easy to just round the first example in Java, but for various business reasons that cannot be done, and in any case it wouldn't work for the second.
I also cannot change the float table column datatype.
Is there any way to get the float representation of the multiplication product, either as a string or as the actual 'correct' value above?
Thanks
Chris
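What is happening here, sketched in C# purely for illustration (not an answer from the thread): the 'correct' display is just the precise product rounded back to roughly the number of significant digits the default display uses, which you can mimic by limiting the significant digits when formatting.

double a = 4.047, b = 1033000.0;
double product = a * b;
Console.WriteLine(product.ToString("R"));  // full round-trip precision, e.g. 4180550.9999999995
Console.WriteLine(product.ToString("G7")); // limited to 7 significant digits: 4180551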
In an iOS program I am trying to divide two float values, but the result is incorrect.
float a = 179.891891;
float b = 8.994595;
NSLog(#"Result %f",a/b);
On dividing the two (a/b), the output I get is 20.0000 instead of 19.9999989993991. I have tried using double instead of float, but it's still the same issue. The value of b keeps varying, as I obtain it from other calculations. I need the result to be precise, because it feeds further calculations, and 20.0000 instead of 19.9999989993991 makes a big difference in the final output.
Any help on this would be really great :).
I don't know about Objective-C, but in C you should cast one operand: (float)a / b. Otherwise, when both operands are integers, it is integer division.
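A minimal sketch (in C#, to match the other examples here, and purely illustrative) of how many significant digits float and double actually carry for this division:

float fa = 179.891891f, fb = 8.994595f;
double da = 179.891891, db = 8.994595;
Console.WriteLine((fa / fb).ToString("R")); // single precision: roughly 7 significant digits
Console.WriteLine((da / db).ToString("R")); // double precision: about 15-16 digits, close to 19.9999989993991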
There are two ways to normalize a Vector3 object: by calling Vector3.Normalize(), or by normalizing from scratch:
class Tester
{
    static Vector3 NormalizeVector(Vector3 v)
    {
        float l = v.Length();
        return new Vector3(v.X / l, v.Y / l, v.Z / l);
    }

    public static void Main(string[] args)
    {
        Vector3 v = new Vector3(0.0f, 0.0f, 7.0f);

        Vector3 v2 = NormalizeVector(v);
        Debug.WriteLine(v2.ToString());

        v.Normalize();
        Debug.WriteLine(v.ToString());
    }
}
The code above produces this:
X: 0
Y: 0
Z: 1
X: 0
Y: 0
Z: 0.9999999
Why?
(Bonus points: Why Me?)
Look at how they implemented it (e.g. in asm).
Maybe they wanted it to be faster and produced something like:
float l = 1.0f / v.Length();
return new Vector3(v.X * l, v.Y * l, v.Z * l);
to trade two of the divisions for three multiplications (because they assumed multiplications are faster than divisions, which is often not true on modern FPUs). This introduces one more level of operation, hence the lower precision.
This would be the often cited "premature optimization".
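A sketch of the two variants side by side (illustrative only; the actual library implementation is not shown in this thread):

static Vector3 NormalizeByDivision(Vector3 v)
{
    float l = v.Length();
    // one rounding step per component: X/l, Y/l, Z/l
    return new Vector3(v.X / l, v.Y / l, v.Z / l);
}

static Vector3 NormalizeByReciprocal(Vector3 v)
{
    // the reciprocal is rounded once here, and each multiply rounds again
    float inv = 1.0f / v.Length();
    return new Vector3(v.X * inv, v.Y * inv, v.Z * inv);
}

For (0, 0, 7) the division path computes 7/7, which is exactly 1, while the reciprocal path goes through 1/7, which has no exact binary representation, so the result can land an ulp or so away from 1 (e.g. 0.9999999).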
Don't worry about this. There's always some error involved when using floats. If you're curious, try changing to double and see if this still happens.
You should expect this when using floats; the basic reason is that the computer works in binary, and binary fractions don't map exactly to decimal ones.
For an intuitive example of issues between different bases, consider the fraction 1/3. It cannot be represented exactly in decimal (it's 0.333333...) but can be in ternary (as 0.1).
Generally these issues are a lot less obvious with doubles, at the expense of computing cost (twice the number of bits to manipulate). However, given that float-level precision was enough to get man to the moon, you really shouldn't obsess :-)
These issues are sort of computer theory 101 (as opposed to programming 101, which you're obviously well beyond), and if you're heading towards DirectX code, where similar things come up regularly, it might be a good idea to pick up a basic computer theory book and read through it.
There is an interesting discussion here about string formatting of floats.
Just for reference:
Your number requires 24 bits to be represented, which means that you are using up the whole mantissa of a float (23 bits + 1 implied bit).
Single.ToString() is ultimately implemented by a native function, so I cannot tell for sure what is going on, but my guess is that it uses the last digit to round the whole mantissa.
The reason behind this could be that you often get numbers that cannot be represented exactly in binary, so you would get a long mantissa; for instance, 0.01 is represented internally as 0.00999... as you can see by writing:
float f = 0.01f;
Console.WriteLine ("{0:G}", f);
Console.WriteLine ("{0:G}", (double) f);
By rounding at the seventh digit, you get back "0.01", which is what you would have expected.
Given the above, numbers with only 7 digits will not show this problem, as you already saw.
Just to be clear: the rounding takes place only when you convert your number to a string; your calculations, if any, will use all the available bits.
Floats have a precision of 7 digits externally (9 internally), so if you go above that then rounding (with potential quirks) is automatic.
If you drop the float down to 7 digits (for instance, 1 to the left, 6 to the right) then it will work out and the string conversion will as well.
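As a side note (my own sketch, not from the original answer), you can also ask ToString for more digits explicitly instead of casting to double; "G9" requests up to nine significant digits for a float:

float f = 0.01f;
Console.WriteLine(f.ToString());     // default formatting: "0.01"
Console.WriteLine(f.ToString("G9")); // more of the stored value, e.g. "0.00999999978"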
As for the bonus points:
Why you? Because this code was 'eager to blow on you'.
(Vulcan... blow... OK.
Lamest.
Pun.
Ever.)
If your code is broken by minute floating point rounding errors, then I'm afraid you need to fix it, as they're just a fact of life.