I have created a Map in Groovy:
def measureMap = [:]
measureMap.put('engine.temperature', 70.0)
measureMap.put('fuel.level', 71.0)
The values in this map are treated as BigDecimal.
Is there a reason for this?
I'm asking because I know the hierarchy:
java.lang.Object
java.lang.Number
java.math.BigDecimal
so I thought it would be treated as a double/float by default.
By default, Groovy uses BigDecimal for decimal numbers. This is documented:
Conveniently for exact decimal number calculations, Groovy chooses java.math.BigDecimal as its decimal number type. In addition, both float and double are supported, but require an explicit type declaration, type coercion or suffix. Even if BigDecimal is the default for decimal numbers, such literals are accepted in methods or closures taking float or double as parameter types.
If you need your values to be double-typed in the map, you can add the familiar D or F suffixes for double/float literals:
measureMap.put('engine.temperature', 70.0d) //java.lang.Double
measureMap.put('fuel.level', 71.0f) //java.lang.Float
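For contrast, plain Java never makes this choice for you: a bare 70.0 literal is always a double, and you only get a BigDecimal by constructing one explicitly. A minimal Java sketch of the same map (the class name is illustrative):

import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

public class MeasureMapDemo {
    public static void main(String[] args) {
        Map<String, Object> measureMap = new HashMap<>();
        measureMap.put("engine.temperature", 70.0);           // autoboxed to java.lang.Double
        measureMap.put("fuel.level", new BigDecimal("71.0")); // BigDecimal only when constructed explicitly
        measureMap.forEach((k, v) ->
                System.out.println(k + " -> " + v.getClass().getName()));
    }
}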
Related
How come, in the following snippet,
int a = 7;
int b = 3;
double c = 0;
c = a / b;
c ends up having the value 2, rather than 2.3333, as one would expect. If a and b are doubles, the answer does turn to 2.333. But surely because c already is a double it should have worked with integers?
So how come int/int=double doesn't work?
This is because you are using the integer-division version of operator/, which takes two ints and returns an int. In order to use the double version, which returns a double, at least one of the ints must be explicitly cast to a double.
c = a/(double)b;
Here it is:
a) Dividing two ints always performs integer division. So the result of a/b in your case can only be an int.
If you want to keep a and b as ints, yet divide them fully, you must cast at least one of them to double: (double)a/b or a/(double)b or (double)a/(double)b.
b) c is a double, so it can accept an int value on assignment: the int is automatically converted to double and assigned to c.
c) Remember that on assignment, the expression to the right of = is computed first (according to rule (a) above, and without regard to the variable to the left of =) and only then assigned to the variable to the left of = (according to (b) above). I believe this completes the picture.
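Since the same integer-division and widening rules hold on the JVM, rules (a)-(c) are easy to see in a small self-contained Java sketch (the class name is just for illustration):

public class IntDivision {
    public static void main(String[] args) {
        int a = 7;
        int b = 3;
        double c = a / b;          // (a) 7 / 3 is integer division -> 2; (b) the int 2 is then widened to 2.0
        double d = (double) a / b; // casting one operand first gives floating-point division
        System.out.println(c);     // 2.0
        System.out.println(d);     // 2.3333333333333335
    }
}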
With very few exceptions (I can only think of one), C++ determines the
entire meaning of an expression (or sub-expression) from the expression
itself. What you do with the results of the expression doesn't matter.
In your case, in the expression a / b, there's not a double in
sight; everything is int. So the compiler uses integer division.
Only once it has the result does it consider what to do with it, and
convert it to double.
When you divide two integers, the result will be an integer, irrespective of the fact that you store it in a double.
c is a double variable, but the value being assigned to it is an int value because it results from the division of two ints, which gives you "integer division" (dropping the remainder). So what happens in the line c=a/b is
a/b is evaluated, creating a temporary of type int
the value of the temporary is assigned to c after conversion to type double.
The value of a/b is determined without reference to its context (assignment to double).
In the C++ language, the result of a subexpression is never affected by the surrounding context (with some rare exceptions). This is one of the principles that the language carefully follows. The expression c = a / b contains an independent subexpression a / b, which is interpreted independently of anything outside it. The language does not care that you will later assign the result to a double. a / b is an integer division; anything else does not matter. You will see this principle followed in many corners of the language specification. That's just how C++ (and C) works.
One example of the rare exceptions mentioned above is function pointer assignment/initialization in situations with function overloading:
void foo(int);
void foo(double);
void (*p)(double) = &foo; // automatically selects `foo(double)`
This is one context where the left-hand side of an assignment/initialization affects the behavior of the right-hand side. (Also, reference-to-array initialization prevents array type decay, which is another example of similar behavior.) In all other cases the right-hand side completely ignores the left-hand side.
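For what it's worth, Java has a loosely analogous corner: the target type of a method reference selects the overload, one of the few places where the left-hand side influences the meaning of the right-hand side. A small illustrative sketch (names are hypothetical):

import java.util.function.DoubleConsumer;

public class OverloadTarget {
    static void foo(int x)    { System.out.println("foo(int)"); }
    static void foo(double x) { System.out.println("foo(double)"); }

    public static void main(String[] args) {
        DoubleConsumer p = OverloadTarget::foo; // the target type DoubleConsumer selects foo(double)
        p.accept(1.5);                          // prints "foo(double)"
    }
}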
The / operator can be used for integer division or floating point division. You're giving it two integer operands, so it's doing integer division and then the result is being stored in a double.
This is technically language-dependent, but almost all languages treat this subject the same way. When there is a type mismatch between two data types in an expression, most languages will try to cast the data on one side of the = to match the data on the other side, according to a set of predefined rules.
When dividing two numbers of the same type (integers, doubles, etc.) the result will always be of the same type (so 'int/int' will always result in int).
In this case you have
double var = integer result
which casts the integer result to a double after the calculation in which case the fractional data is already lost. (most languages will do this casting to prevent type inaccuracies without raising an exception or error).
If you'd like to keep the result as a double you're going to want to create a situation where you have
double var = double result
The easiest way to do that is to force the expression on the right side of an equation to cast to double:
c = a/(double)b
Division between an integer and a double will result in the integer being cast to a double (note that when doing math, the compiler will often "upcast" to the wider data type; this is to prevent data loss).
After the upcast, a will wind up as a double and now you have division between two doubles. This will create the desired division and assignment.
AGAIN, please note that this is language specific (and can even be compiler specific), however almost all languages (certainly all the ones I can think of off the top of my head) treat this example identically.
For the reasons given above, you'll have to convert one of a or b to a double type. Another way of doing it is:
double c = (a+0.0)/b;
The numerator is (implicitly) converted to a double because we have added a double to it, namely 0.0.
The important thing is that at least one operand in the calculation be a floating-point (float or double) type. Then, to get a double result, you need to cast that operand as shown below:
c = static_cast<double>(a) / b;
or
c = a / static_cast<double>(b);
Or you can create it directly:
c = 7.0 / 3;
Note that one of the operands must carry the '.0' to force a floating-point division by an integer. Otherwise, even though the c variable is a double, the result will still be the truncated integer value (here 2).
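For completeness, the three fixes suggested in the answers above translate directly to Java, where they behave the same way; a quick sketch:

public class DivisionFixes {
    public static void main(String[] args) {
        int a = 7;
        int b = 3;
        double c1 = (double) a / b; // cast one operand
        double c2 = (a + 0.0) / b;  // promote the numerator by adding 0.0
        double c3 = 7.0 / 3;        // use a floating-point literal directly
        System.out.println(c1 + " " + c2 + " " + c3); // 2.3333333333333335, three times
    }
}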
For example, this code
val stringTuple = ("BLACK", "GRAY", "WHITE")
firstInAlphabet(stringTuple)
Should return "BLACK". How would you define firstInAlphabet?
Personally, I prefer simple and fast implementations over complicated ones that try to cover every case.
t.productIterator.map(_.asInstanceOf[String]).min
productIterator gives an iterator over the tuple's elements. This loses the type information, so we have to cast the elements back to String, and then we use min to find the first one alphabetically.
If you have non-String elements in your tuple this version should do the trick:
t.productIterator.map(_.toString).min
instead of casting each element to String, it converts each one to a String.
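For comparison, the underlying idea (a compareTo-based minimum over string elements) can be written in Java like this; the names are illustrative:

import java.util.Comparator;
import java.util.stream.Stream;

public class FirstInAlphabet {
    public static void main(String[] args) {
        String first = Stream.of("BLACK", "GRAY", "WHITE")
                .min(Comparator.naturalOrder()) // String's natural ordering is alphabetical
                .orElseThrow();
        System.out.println(first);              // BLACK
    }
}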
int lua_isstring (lua_State *L, int index);
This function returns 1 if the value at the given acceptable index is
a string or a number (which is always convertible to a string), and 0
otherwise. (Source)
Is there a (more elegant) way to check whether a given value really is a string and not a number in Lua? This function makes absolutely no sense to me!
My first idea was to additionally examine the string length:
if string.len(s) > 1 then --[[ this must be a string ]] end
...but that does not feel right.
You can replace
lua_isstring(L, i)
which returns true for either a string or a number by
lua_type(L, i) == LUA_TSTRING
which yields true only for an actual string.
Similarly,
lua_isnumber(L, i)
returns true either for a number or for a string that can be converted to a number; if you want more strict checking, you can replace this with
lua_type(L, i) == LUA_TNUMBER
(I've written wrapper functions, lua_isstring_strict() and lua_isnumber_strict().)
This function makes absolutely no sense to me!
It makes sense in light of Lua's coercion rules. Any function that accepts a string should also accept a number, converting that number to a string. That's just how the language semantics are defined. The way lua_isstring and lua_tostring work allows you to implement those semantics in your C bindings automatically, with no additional effort.
If you don't like those semantics and want to disable automatic conversion between string and number, you can define LUA_NOCVTS2N and/or LUA_NOCVTN2S in your build. In particular, if you define LUA_NOCVTN2S, lua_isstring will return false for numbers.
The Java way to compare two BigDecimals is to use the compareTo() method, and check if the result is -1, 0 or 1.
BigDecimal a = new BigDecimal("1.23")
BigDecimal b = new BigDecimal("3.45")
if (a.compareTo(b) > 0) { }
I have seen that some people are using this format in grails:
if (a > b) { }
Does this work correctly? I.e. will it get the decimals correct, or is it converting to float or similar and comparing that?
How about using "==" vs using equals()?
What is the consequence of something like this:
BigDecimal a = new BigDecimal("1.00")
BigDecimal b = new BigDecimal("1")
assert (a==b)
It seems to work, but it has been so ingrained in us from Java not to do this kind of thing.
How about +=? e.g.
a+=b?
Would this be the same as
a = a.add(b)
Where does one find this kind of thing out? I have two groovy books, and unfortunately neither mention BigDecimal comparison or arithmetic, only the conversion/declaration.
Groovy allows overloading of operators. When a type implements certain methods then you can use a corresponding operator on that type.
For + the method to implement is plus, not add.
For greater-than or less-than comparisons, Groovy looks for a compareTo method on the object, and for == it uses equals (or compareTo() == 0 when the objects are Comparable). (If you want to compare references, as == does in Java, you have to use is.)
Here's a table of common math operators and the method used to overload them:
Operator        Method
a + b           a.plus(b)
a - b           a.minus(b)
a * b           a.multiply(b)
a / b           a.divide(b)
a++ or ++a      a.next()
a-- or --a      a.previous()
a << b          a.leftShift(b)
You can see that BigDecimal implements some of these methods (so you get operator overloading for plus, minus, multiply, and divide, but not for next, previous, or leftShift):
groovy:000> BigDecimal.methods*.name
===> [equals, hashCode, toString, intValue, longValue, floatValue, doubleValue,
byteValue, shortValue, add, add, subtract, subtract, multiply, multiply,
divide, divide, divide, divide, divide, divide, remainder, remainder,
divideAndRemainder, divideAndRemainder, divideToIntegralValue,
divideToIntegralValue, abs, abs, max, min, negate, negate, plus, plus,
byteValueExact, shortValueExact, intValueExact, longValueExact,
toBigIntegerExact, toBigInteger, compareTo, precision, scale, signum, ulp,
unscaledValue, pow, pow, movePointLeft, movePointRight, scaleByPowerOfTen,
setScale, setScale, setScale, stripTrailingZeros, toEngineeringString,
toPlainString, round, compareTo, getClass, notify, notifyAll, wait, wait,
wait, valueOf, valueOf, valueOf]
You get ==, >, <, >=, and <= based on how your object implements equals and compareTo.
So the operator is causing methods already declared in BigDecimal, or added to BigDecimal by Groovy, to get called. It is definitely not doing any kind of conversion to a primitive type like float in order to be able to use the operators on primitives.
The table is taken from this developerworks article by Andrew Glover and Scott Davis, which has more details and includes example code.
Groovy does a wonderful job of managing numbers with arbitrary precision. The first thing you should know is that any literal with a decimal point in it is by default a BigDecimal -- the reason for that precision. Here is an example of what this means exactly. Consider this snippet:
System.out.println(2.0 - 1.1);
System.out.println(new BigDecimal(2.0).subtract(new BigDecimal(1.1)));
System.out.println(new BigDecimal("2.0").subtract(new BigDecimal("1.1")));
// the above will give these:
0.8999999999999999
0.899999999999999911182158029987476766109466552734375
0.9
This shows the lengths we have to go to in Java to get something decent. In Groovy, this is all you have to do:
println 2 - 1.1
to get your 0.9! Try this in the Groovy web console. Here, the second operand is a BigDecimal, so the entire calculation is done in BigDecimal, and the result comes out clean.
But how? This is because almost every operator in Groovy is mapped onto method calls on objects under the hood, so a + b is a.plus(b), and a == b translates to a.compareTo(b) == 0 for Comparable objects. It is therefore safe to assume what you assumed; this is the Groovy way of doing things: write less, expressively, and Groovy does the work for you. You can learn about all this in the Groovy-lang documentation, with examples throughout.
The short answer is yes, it's safe to use == for BigDecimal comparison in Groovy.
From the Groovy documentation Behaviour of == section:
In Java == means equality of primitive types or identity for objects. In Groovy == translates to a.compareTo(b)==0, if they are Comparable, and a.equals(b) otherwise. To check for identity, there is is. E.g. a.is(b).
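If you want to see exactly what Groovy delegates to, the scale question from the original post is easy to check against the underlying Java methods; a minimal sketch:

import java.math.BigDecimal;

public class BigDecimalCompare {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.00");
        BigDecimal b = new BigDecimal("1");
        System.out.println(a.equals(b));         // false: equals compares value AND scale
        System.out.println(a.compareTo(b) == 0); // true: compareTo compares numeric value only
        System.out.println(a.add(b));            // 2.00: the addition behind Groovy's a += b
    }
}

This is why Groovy's == (compareTo-based for Comparables) gives the answer you expect for 1.00 and 1, while Java's equals does not.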
I wish to ask a conceptual question. My code prints an array of Float values, formatted to 5 decimal places, onto the console. Why must it be String instead of Float? (Ans[y] is an element of a Float array.)
println(String(format: "%.5f", Ans[y]))
Using Float instead,
println(Float(format: "%.5f", Ans[y]))
gives the error "extra argument 'format' in call".
You can use map() to format your Float array as a string array. By the way, you should give the array a name starting with a lowercase letter. Try the following:
let floatArray: [Float] = [1.23456, 3.21098, 2.78901]
let formattedArray = floatArray.map{String(format: "%.5f", $0)}
println(formattedArray) // "[1.23456, 3.21098, 2.78901]"
It's just a matter of understanding what your words mean. String is an object type (a struct). Float is an object type (a struct). The syntax Thing(...) calls a Thing initializer - it creates a new object of type Thing and calls an initializer method init(...). That's what you're doing when you say String(...) and Float(...).
Well, there is a String init(format:) initializer (it actually comes from Foundation's NSString, to which String is bridged), but there is no Float init(format:) initializer; the Float struct doesn't declare any such thing. So in the second snippet you're calling a nonexistent initializer.
You can use NSLog instead of println. NSLog is part of Foundation and gives you the flexibility of specifying the exact format you need.
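Incidentally, the same printf-style format string works on the JVM as well; for comparison, a Java version of the formatting loop (the array contents are taken from the answer above):

public class FormatFloats {
    public static void main(String[] args) {
        float[] floatArray = {1.23456f, 3.21098f, 2.78901f};
        for (float v : floatArray) {
            // String.format understands the same "%.5f" conversion as Swift's String(format:)
            System.out.println(String.format("%.5f", v));
        }
    }
}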