I have a function that has the type Int -> Int -> Int -> Int. When I use div a b as the value for a variable in the function, it seems that the value gets rounded down to 0 whenever the result of div a b would be 1/2 or anything else fractional.
Is this correct? Does Haskell cut off values like Java does when a double is forced into an integer?
div 1 2 doesn't return 0.5, which is then converted to the integer 0. It returns 0 in the first place. div performs integer division and as such always returns an integer (or another Integral type, depending on which type you used it with). There are no doubles involved.
When you do convert a double to an integer, the rounding behaviour depends on which function you use. For example, floor rounds the number down, whereas round rounds to the nearest integer. There are no implicit conversions in Haskell, so any conversion will happen through a function.
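For comparison, the Java side of the question behaves the same way for integer division, and a double becomes an int only through an explicit cast or rounding function there too. A minimal sketch (class name is just for illustration):

public class Rounding {
    public static void main(String[] args) {
        System.out.println(1 / 2);           // 0: integer division, no doubles involved
        System.out.println(1.0 / 2);         // 0.5: double division
        System.out.println((int) 0.9);       // 0: an explicit cast truncates
        System.out.println(Math.round(0.9)); // 1: rounds to nearest
    }
}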
Does Haskell cut off values like in java
No, it does not.
When doing integer division, Java rounds towards zero, whereas Haskell rounds downwards; so in Haskell
> (-9) `div` 10
-1
whereas in Java -9 / 10 is zero:
public class IntDiv {
    public static void main(String[] args) {
        double a = (-9) / 10;
        System.out.printf("%.2f\n", a); // would print 0.00
    }
}
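The two languages can be made to agree: Java's Math.floorDiv (Java 8+) floors like Haskell's div, and Haskell's quot truncates toward zero like Java's /. A quick Java check:

public class FloorDiv {
    public static void main(String[] args) {
        System.out.println(-9 / 10);               // 0: truncates toward zero
        System.out.println(Math.floorDiv(-9, 10)); // -1: floors, matching Haskell's div
    }
}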
Related
I'm using the below two statements:
double foo = 20.00
float bar = 20.00
println foo == bar
And
double foo = 20.01
float bar = 20.01
println foo == bar
It gives the output as:
true
false
Does anyone know what makes the difference between these two statements?
double and float values don't have an exact internal representation for every value. The only values with two decimal places between 0 and 1 that can be represented exactly in IEEE-754 binary floating point are 0, 0.25, 0.5, 0.75 and 1. All other two-decimal values are stored slightly off, and the small differences between the double and float approximations create this inequality behaviour.
This is not just valid for Groovy, but for Java as well.
For example:
double foo = 20.25
float bar = 20.25
println foo == bar
Output:
true
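Groovy's float and double are Java's, so the same comparisons in plain Java give identical results; a small check:

public class CompareWidths {
    public static void main(String[] args) {
        System.out.println(20.25f == 20.25); // true: 20.25 is exact in both float and double
        System.out.println(20.01f == 20.01); // false: each type rounds 20.01 differently
    }
}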
The .01 part of 20.01 repeats infinitely in binary; 20.01 =
10100.00000010100011110101110000101000111101011100001010001111010111...
floats are rounded (to nearest) to 24 significant bits; doubles are rounded to 53. That makes the float
10100.0000001010001111011
and the double
10100.000000101000111101011100001010001111010111000011
In decimal, those are
20.0100002288818359375 and
20.010000000000001563194018672220408916473388671875, respectively.
(You could see this directly using my decimal to floating-point converter.)
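You can also print those exact values on the JVM itself: BigDecimal's double constructor preserves the exact binary value rather than a rounded decimal string. A small sketch:

import java.math.BigDecimal;

public class ExactBits {
    public static void main(String[] args) {
        // Widening 20.01f to double keeps the float's exact value
        System.out.println(new BigDecimal((double) 20.01f)); // 20.0100002288818359375
        System.out.println(new BigDecimal(20.01)); // 20.010000000000001563194018672220408916473388671875
    }
}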
Groovy Float values aren't kept in memory precisely. That is the main cause of the differences you see.
In Groovy, truncating a Float to a given number of digits after the decimal point can be done with the following method:
public float trunc(int precision)
precision - the number of decimal places to keep.
For more details please follow the Class Float documentation.
It is preferable to use the BigDecimal class for floating-point numbers in the Groovy language. The conversion from Number to String is much easier, and there is the option to define the precision of the number in the constructor:
BigDecimal(BigInteger unscaledVal, int scale)
Translates a BigInteger unscaled value and an int scale into a BigDecimal.
For more details please follow the Java BigDecimal documentation, as the Groovy language is based on Java. Moreover, BigDecimal represents the exact value of the number.
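As an illustration of that constructor, an unscaled value of 2001 with a scale of 2 gives exactly 20.01 (a minimal Java sketch; it works the same from Groovy):

import java.math.BigDecimal;
import java.math.BigInteger;

public class ScaledValue {
    public static void main(String[] args) {
        // unscaledVal = 2001, scale = 2  =>  2001 * 10^-2 = 20.01, exactly
        BigDecimal exact = new BigDecimal(BigInteger.valueOf(2001), 2);
        System.out.println(exact); // 20.01
    }
}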
What is the default scale of BigDecimal in groovy? And Rounding?
So when trying to do calculations:
def x = 10.0/30.0 //0.3333333333
def y = 20.0/30.0 //0.6666666667
Based on this, I can assume that it uses scale 10 and rounding half up.
Having trouble finding an official documentation saying that though.
You can find it in the official documentation: The case of the division operator
5.5.1. The case of the division operator
The division operators / (and /= for division and assignment) produce
a double result if either operand is a float or double, and a
BigDecimal result otherwise (when both operands are any combination of
an integral type short, char, byte, int, long, BigInteger or
BigDecimal).
BigDecimal division is performed with the divide() method if the
division is exact (i.e. yielding a result that can be represented
within the bounds of the same precision and scale), or using a
MathContext with a precision of the maximum of the two operands'
precision plus an extra precision of 10, and a scale of the maximum of
10 and the maximum of the operands' scale.
And check it in BigDecimalMath.java:
public Number divideImpl(Number left, Number right) {
    BigDecimal bigLeft = toBigDecimal(left);
    BigDecimal bigRight = toBigDecimal(right);
    try {
        return bigLeft.divide(bigRight);
    } catch (ArithmeticException e) {
        // set a DEFAULT precision if otherwise non-terminating
        int precision = Math.max(bigLeft.precision(), bigRight.precision()) + DIVISION_EXTRA_PRECISION;
        BigDecimal result = bigLeft.divide(bigRight, new MathContext(precision));
        int scale = Math.max(Math.max(bigLeft.scale(), bigRight.scale()), DIVISION_MIN_SCALE);
        if (result.scale() > scale) result = result.setScale(scale, BigDecimal.ROUND_HALF_UP);
        return result;
    }
}
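Here is a standalone sketch of that fallback path with the two constants inlined as 10 (their values according to the documentation quoted above); it reproduces the 0.3333333333 from the question:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class DivideLikeGroovy {
    public static void main(String[] args) {
        BigDecimal left = new BigDecimal("10.0");
        BigDecimal right = new BigDecimal("30.0");
        // Non-terminating division, so use max operand precision + 10 extra digits
        int precision = Math.max(left.precision(), right.precision()) + 10;
        BigDecimal result = left.divide(right, new MathContext(precision));
        // Cap the scale at max(operand scales, 10), rounding half up
        int scale = Math.max(Math.max(left.scale(), right.scale()), 10);
        if (result.scale() > scale) result = result.setScale(scale, RoundingMode.HALF_UP);
        System.out.println(result); // 0.3333333333
    }
}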
This Groovy:
float a = 1;
float b = 2;
def r = a + b;
Creates this Java code when reversed from .class with IntelliJ:
float a = (float)1;
float b = (float)2;
Object r = null;
double var7 = (double)a + (double)b;
r = Double.valueOf(var7);
So r contains a Double.
If I do this:
float a = 1;
float b = 2;
float r = a + b;
It generates code that performs the addition with doubles and converts back to float:
float a = (float)1;
float b = (float)2;
float r = 0.0F;
double var7 = (double)a + (double)b;
r = (float)var7;
So should one abandon floats with Groovy, as it seems not to want to use them anyway?
Groovy decided to fall back to five standard result types for numeric operations: int, long, BigInteger, double and BigDecimal. Thus adding or multiplying two floats returns a double. Division and power are special.
From http://www.groovy-lang.org/syntax.html
Division and power binary operations aside,
binary operations between byte, char, short and int result in int
binary operations involving long with byte, char, short and int result in long
binary operations involving BigInteger and any other integral type result in BigInteger
binary operations between float, double and BigDecimal result in double
binary operations between two BigDecimal result in BigDecimal
As for whether you should abandon float... normally it is good enough to convert the double back to float, especially since Groovy is doing that automatically for you.
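For contrast, plain Java keeps float + float as float, so the widening to double really is a Groovy design decision. A quick check:

public class FloatStaysFloat {
    public static void main(String[] args) {
        float a = 1, b = 2;
        Object r = a + b;                 // float + float is float in Java, boxed to Float
        System.out.println(r.getClass()); // class java.lang.Float
    }
}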
.NET (C#) does something similar with 16-bit integers: addition of Bytes or Int16s yields Int32, possibly to prevent overflows.
Operations with "smaller" data types may result in the "bigger" data types. And with bigger, I mean more bits.
As illustrated in this example (more digits also means more bits)
15 (2 digits) x 15 (2 digits) = 225 (3 digits)
1.5 (2 digits) x 1.5 (2 digits) = 2.25 (3 digits)
However, adding two 32-bit integers just returns a 32-bit integer, and adding two doubles just returns a double. This is because the (virtual) machine is optimized for working with these sizes, which is because physical processors used to be optimized for working with these sizes. Some of them still are: 32-bit operations are often still faster than 64-bit operations, even on 64-bit processors, whereas 16-bit operations are not, or barely.
Your compiler attempts to protect you against overflows and allows you to check for them explicitly. So unless you have a good reason not to, I'd default to using these types, and optionally truncate to a more compact type when storing the data.
Good reasons not to do so include scenarios where you process large amounts (thousands) of numbers, e.g. for graphics processing.
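Java does the same promotion with its small integer types, which makes the overflow protection easy to see (a small sketch):

public class BytePromotion {
    public static void main(String[] args) {
        byte a = 100, b = 100;
        int sum = a + b;                // byte + byte is promoted to int: 200, no overflow
        byte narrowed = (byte) (a + b); // forcing it back into 8 bits wraps to -56
        System.out.println(sum + " " + narrowed);
    }
}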
I need to convert a UInt32 type to a float without having it rounded. Say I do
float num = 4278190335;
uint num1 = (uint) num;
The value instantly gets changed to 4278190336. Is there any way around this?
I need to convert a UInt32 type to a float without having it rounded.
That can't be done.
There are 2³² possible uint values. There are fewer than 2³² float values (there are 2³² bit patterns, but that includes various NaN values). Add to that the fact that there are obviously a lot of float values which can't be represented as uint (e.g. 0.5) and it becomes clear that you can't represent every uint value exactly in a float. However, every uint (and every int) can be represented exactly as a double, so that might be a solution to your problem.
The problem you're seeing in your original source code is that 4278190335 isn't exactly representable as a float; the closest float value is 4278190336. This isn't a problem with the conversion from float to uint - it's a problem with the conversion from the exact value you've specified in your source code into a float; the float to uint conversion happens separately (and again, can easily lose information).
float has only 23 bits of mantissa. Along with the implicit 1 bit, it can represent exactly only numbers that fit in 24 bits. For numbers larger than that it can only store the nearest representable value. 4278190335 = 0xFF0000FF > 2²⁴, so it'll be rounded to 4278190336 when converting to float.
Similarly, double has 52 bits of mantissa and can represent all integers within the range [-2⁵³, 2⁵³] exactly, so it can store any value that fits in a 32-bit int, including 4278190335. But again, double can't store all numbers in long's range, although they have the same size (64 bits).
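This is easy to reproduce in Java, whose float and double use the same IEEE-754 formats:

public class MantissaLimit {
    public static void main(String[] args) {
        long value = 4278190335L;     // 0xFF0000FF needs 32 significant bits
        float f = value;              // rounded to the nearest float
        double d = value;             // every 32-bit integer fits in a double exactly
        System.out.println((long) f); // 4278190336
        System.out.println((long) d); // 4278190335
    }
}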
Your question is worded backward; I think what you are saying is:
You need to get the integer portion of a float value, i.e. its whole-number part, not its decimal part. In that case you can simply cast the float to an int; casting does not round, it truncates toward zero.
e.g.
float myFloat = 1.5f;
uint myInt = (uint) myFloat; // myInt == 1
Keep in mind, though, that this isn't always clear to others reading your code. To help, there are Math.Floor and Math.Ceiling: Floor returns the whole number below the current value, and Ceiling returns the whole number above it.
e.g.
float myFloat = 1.5f;
uint myFloorInt = (uint) Math.Floor(myFloat);     // myFloorInt == 1
uint myCeilingInt = (uint) Math.Ceiling(myFloat); // myCeilingInt == 2
You will need to cast or convert the value from float to uint, int, etc. as your needs dictate. Many frown on casting because the resulting value isn't always clear to people; the Convert class has various methods to help you convert one value to another in a nice, clearly understandable way.
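One caveat: for negative values, a cast and Floor disagree, because casting truncates toward zero. The same semantics shown in Java:

public class TruncateVsFloor {
    public static void main(String[] args) {
        System.out.println((int) -1.5);       // -1: a cast truncates toward zero
        System.out.println(Math.floor(-1.5)); // -2.0: floor always rounds down
        System.out.println(Math.ceil(-1.5));  // -1.0: ceiling always rounds up
    }
}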
There is no way to get the original value back with your method. I suggest copying byte for byte so the data can be retrieved later; typecasting through float can change the original value.
If you are on a platform where float is 32 bits wide, this could help:
uint32_t x;               /* needs <stdint.h> */
float y;
memcpy(&y, &x, sizeof y); /* needs <string.h>; copies the raw bits, no value conversion */
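On the JVM, the same bit-for-bit reinterpretation is available without memcpy via Float.intBitsToFloat and Float.floatToRawIntBits; a sketch:

public class BitCopy {
    public static void main(String[] args) {
        int bits = (int) 4278190335L;          // the bit pattern from the question
        float f = Float.intBitsToFloat(bits);  // reinterpret the bits; no rounding
        int back = Float.floatToRawIntBits(f); // recover the exact pattern
        System.out.println(Integer.toUnsignedLong(back)); // 4278190335
    }
}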
I am writing a function in which I need to read a string containing a floating-point number and turn it into a Rational. But when I do toRational (read input :: Double), it does not turn e.g. 0.9 into 9 % 10 as expected, but instead into 81..... % 9007...
Thanks
This is correct behavior. The number 0.9 is not representable as a Double, not in Haskell, C, or Java. This is because Double and Float use base 2: they can only represent a certain subset of the dyadic fractions exactly.
To get the behavior you want, import the Numeric module and use the readFloat function. The interface is fairly wonky (it uses the ReadS type), so you'll have to wrap it a little. Here's how you can use it:
import Numeric

myReadFloat :: String -> Rational -- type signature is necessary here
myReadFloat str =
    case readFloat str of
        ((n, []):_) -> n
        _           -> error "Invalid number"
And, the result:
> myReadFloat "0.9"
9 % 10
Binary floating point numbers cannot precisely represent all the numbers that base-10 can. The number you see as 0.9 is not precisely 0.9 but something very close to it. Never use floating-point types where decimal precision is needed — they just can't do it.
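The same point can be demonstrated with Java's BigDecimal, which can display the exact value of the double nearest to 0.9, while parsing the decimal string (as readFloat does) stays exact:

import java.math.BigDecimal;

public class NotQuiteNineTenths {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(0.9));   // the double's exact value: 0.9000000000000000222...
        System.out.println(new BigDecimal("0.9")); // exactly 0.9, parsed from the decimal string
    }
}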