autocasting and type conversion in specman e

Consider the following example in e:
var a : int = -3
var b : int = 3
var c : uint = min(a,b); print c
c = 3
var d : int = min(a,b); print d
d = -3
The arguments inside min() are autocasted to the type of the result expression.
My questions are:
Are there other programming languages that use type autocasting, how do they treat functions like min() and max()?
Is this behavior logical? I mean, this behavior is not consistent with the following possible definition of min:
a < b ? a : b
thanks

There are many languages that do type autocasting and many that do not. I should say up front that I don't think it's a very good idea; I prefer more principled behavior in a language, but some people prefer the convenience and lack of verbosity of autocasting.
Perl
Perl is an example of a language that does type autocasting and here's a really fun example that shows when this can be quite confusing.
print "bob" eq "bob";
print "nancy" eq "nancy";
print "bob" == "bob";
print "nancy" == "nancy";
The above program prints 1 (for true) three times, not four. Why not the fourth? Well, "nancy" is not == "nancy". Why not? Because == is the numerical equality operator; eq is the one for string equality. With eq the strings are compared the way you'd expect, but with == they are converted to numbers automatically. What number is "bob" equal to? Zero, of course. Any string that doesn't start with a number is interpreted as zero, which is why "bob" == "chris" is true. So why isn't "nancy" == "nancy"? Why isn't "nancy" zero? Because its prefix is read as "nan", as in "not a number", and NaN is never equal to another NaN. I, personally, find this confusing.
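Incidentally, the NaN-never-equals-NaN rule isn't a Perl invention; it's standard IEEE 754 floating-point behaviour. Here is a quick check of just that rule, sketched in Haskell for illustration:

main :: IO ()
main = do
  let nan = 0 / 0 :: Double   -- 0/0 produces NaN
  print (isNaN nan)           -- True
  print (nan == nan)          -- False: NaN never compares equal, even to itself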
Javascript
Javascript is another example of a language that will do type autocasting.
'5' - 3
What's the result of that expression? 2! Very good.
'5' + 3
What's the result of that one? 8? Wrong! It's '53'.
Confused? Good. Have a look at "Wow Javascript, Such Good Conventions, Much Sense", a series of really funny JavaScript evaluations based on automatic casting of types.
Why would anyone do this!?
After seeing some carefully selected horror stories you might be asking yourself why people who are intelligent enough to invent two of the most popular programming languages in the entire world would do something so silly. I don't like type autocasting but to be fair, there is an argument for it. It's not purely a bug. Consider the following Haskell expression:
length [1,2,3,4] / 2
What does this equal? 2? 2.0? Nope! It doesn't compile, due to a type error: length returns an Int, and fractional division (/) isn't defined for Int. You must explicitly convert the length to a fractional type with fromIntegral in order for this to work.
fromIntegral (length [1,2,3,4]) / 2
That can definitely get quite annoying if you're doing a lot of math with integers and floats moving about in your program. But I would greatly prefer that to having to understand why nancy isn't equal to nancy but chris is, in fact, equal to bob.
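To tie this back to your min question: Haskell's min has the type Ord a => a -> a -> a, so both arguments and the result share a single type and nothing is converted behind your back. A small sketch (the Int32 is just an illustrative choice):

import Data.Int (Int32)

main :: IO ()
main = do
  print (min (-3) 3 :: Int32)   -- -3: arguments and result all have the same type
  -- min (-3 :: Int32) (3 :: Word32) would simply not compile; the types differ,
  -- so nothing gets silently reinterpreted.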
Happy medium?
Scheme has only one number type. It converts between floats, fractions, and ints automatically and treats them just as you'd expect. It will let you divide the length of a list without explicit casting, because it knows what you mean. But no, it will never secretly convert a string to a number. Strings aren't numbers. That's just silly. ;)
Your example
As for your example, it's hard to say whether it is or is not logical. Confusing, yes. I mean, you're here on Stack Overflow, aren't you? The reason you're getting 3, I think, is either that -3 is being interpreted, like Ross said, as a uint in 2's complement and is therefore a much higher number, or that the result of the min is -3 and it then gets turned into an unsigned int by dropping the negative. The problem is that you asked it to put the result into an unsigned int, but the result is negative. So I would say that what it did is logical in the context of type autocasting, but that type autocasting is confusing. Presumably, you're being saved from having to do explicit type casting all over the place and paying for it with weird behavior like this on occasion.
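To make the first explanation concrete: if -3 is reinterpreted as an unsigned value before the comparison, it becomes a very large number and 3 wins the min. A sketch in Haskell, assuming a 32-bit uint (the actual width in e may differ):

import Data.Int  (Int32)
import Data.Word (Word32)

main :: IO ()
main = do
  let a = fromIntegral (-3 :: Int32) :: Word32   -- reinterpret -3 as unsigned
  print a                                        -- 4294967293
  print (min a 3)                                -- 3, matching the c = 3 you saw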

Related

Proving the inexpressibility of a function in a given language

I'm currently reading through John C. Mitchell's Foundations for Programming Languages. Exercise 2.2.3, in essence, asks the reader to show that the (natural-number) exponentiation function cannot be implicitly defined via an expression in a small language. The language consists of natural numbers and addition on said numbers (as well as boolean values, a natural-number equality predicate, & ternary conditionals). There are no loops, recursive constructs, or fixed-point combinators. Here is the precise syntax:
<bool_exp> ::= <bool_var> | true | false | Eq? <nat_exp> <nat_exp> |
if <bool_exp> then <bool_exp> else <bool_exp>
<nat_exp> ::= <nat_var> | 0 | 1 | 2 | … | <nat_exp> + <nat_exp> |
if <bool_exp> then <nat_exp> else <nat_exp>
Again, the object is to show that the exponentiation function n^m cannot be implicitly defined via an expression in this language.
Intuitively, I'm willing to accept this. If we think of exponentiation as repeated multiplication, it seems like we "just can't" express that with this language. But how does one formally prove this? More broadly, how do you prove that an expression from one language cannot be expressed in another?
Here's a simple way to think about it: the expression has a fixed, finite size, and the only arithmetic operation it can do to produce numbers not written as literals or provided as the values of variables is addition. So the largest number it can possibly produce is limited by the number of additions plus 1, multiplied by the largest number involved in the expression.
So, given a proposed expression, let k be the number of additions in it, let c be the largest literal (or 1 if there is none) and choose m and n such that n^m > (k+1)*max(m,n,c). Then the result of the expression for that input cannot be the correct one.
Note that this proof relies on the language allowing arbitrarily large numbers, as noted in the other answer.
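If it helps to see the counting argument written down, here is a sketch in Haskell; the AST mirrors the grammar in the question, and the bound function is only an illustration, not part of Mitchell's exercise:

-- Boolean and natural-number expressions from the grammar above.
data BoolExp = BVar String | BTrue | BFalse | Eq NatExp NatExp
             | IfB BoolExp BoolExp BoolExp

data NatExp  = NVar String | Lit Integer | Add NatExp NatExp
             | IfN BoolExp NatExp NatExp

-- Upper bound on the value of an expression when every variable is <= v:
-- the result is a sum of at most (additions on the taken path + 1) leaves,
-- each leaf being at most max(v, largest literal).
upperBound :: Integer -> NatExp -> Integer
upperBound v e = (numAdds e + 1) * max v (maxLit e)
  where
    numAdds (Add x y)   = 1 + numAdds x + numAdds y
    numAdds (IfN _ x y) = max (numAdds x) (numAdds y)  -- only one branch runs
    numAdds _           = 0
    maxLit (Lit n)      = n
    maxLit (Add x y)    = max (maxLit x) (maxLit y)
    maxLit (IfN _ x y)  = max (maxLit x) (maxLit y)
    maxLit _            = 1

main :: IO ()
main = print (upperBound 5 (Add (NVar "n") (Add (Lit 3) (NVar "m"))))  -- 15

The bound is fixed once the expression is fixed, while n^m is not, so no single expression can be correct for every m and n.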
No solution, only hints:
First, let me point out that if there are finitely many numbers in the language, then exponentiation is definable as an expression. (You'd have to define what it should produce when the true result is unrepresentable, eg wraparound.) Think about why.
Hint: Imagine that there are only two numbers, 0 and 1. Can you write an expression involving m and n whose result is n^m? What if there were three numbers: 0, 1, and 2? What if there were four? And so on...
Why don't any of those solutions work? Let's index them and call the solution for {0,1} partial_solution_1, the solution for {0,1,2} partial_solution_2, and so on. Why isn't partial_solution_n a solution for the set of all natural numbers?
Maybe you can generalize that somehow with some metric f : Expression -> Nat so that every expression expr with f(expr) < n is wrong somehow...
You may find some inspiration from the strategy of Euclid's proof that there are infinitely many primes.

is it safe to use comparison operators on BigDecimal in groovy/grails?

The Java way to compare two BigDecimals is to use the compareTo() method, and check if the result is -1, 0 or 1.
BigDecimal a = new BigDecimal("1.23")
BigDecimal b = new BigDecimal("3.45")
if (a.compareTo(b) > 0) { }
I have seen that some people are using this format in grails:
if (a > b) { }
Does this work correctly? I.e. will it get the decimals correct, or is it converting to float or similar and comparing that?
How about using "==" vs using equals()?
What is the consequence of something like this:
BigDecimal a = new BigDecimal("1.00")
BigDecimal b = new BigDecimal("1")
assert (a==b)
It seems to work, but it has been so ingrained in us from Java not to do this kind of thing.
How about +=? e.g.
a+=b?
Would this be the same as
a = a.add(b)
Where does one find this kind of thing out? I have two groovy books, and unfortunately neither mention BigDecimal comparison or arithmetic, only the conversion/declaration.
Groovy allows overloading of operators. When a type implements certain methods then you can use a corresponding operator on that type.
For + the method to implement is plus, not add.
For greater than or less than comparisons, Groovy looks for a compareTo method on the object, and for == it looks for a method named equals. (If you want to compare references as if using == in Java, you have to use is.)
Here's a table of common math operators and the method used to overload them:
Operator Method
a + b a.plus(b)
a - b a.minus(b)
a * b a.multiply(b)
a / b a.divide(b)
a++ or ++a a.next()
a-- or --a a.previous()
a << b a.leftShift(b)
You can see that BigDecimal overloads some of these methods (you get operator overloading for plus, minus, multiply, and divide, but not for next, previous, or leftShift):
groovy:000> BigDecimal.methods*.name
===> [equals, hashCode, toString, intValue, longValue, floatValue, doubleValue,
byteValue, shortValue, add, add, subtract, subtract, multiply, multiply,
divide, divide, divide, divide, divide, divide, remainder, remainder,
divideAndRemainder, divideAndRemainder, divideToIntegralValue,
divideToIntegralValue, abs, abs, max, min, negate, negate, plus, plus,
byteValueExact, shortValueExact, intValueExact, longValueExact,
toBigIntegerExact, toBigInteger, compareTo, precision, scale, signum, ulp,
unscaledValue, pow, pow, movePointLeft, movePointRight, scaleByPowerOfTen,
setScale, setScale, setScale, stripTrailingZeros, toEngineeringString,
toPlainString, round, compareTo, getClass, notify, notifyAll, wait, wait,
wait, valueOf, valueOf, valueOf]
You get ==, >, <, >=, and <= based on how your object implements equals and compareTo.
So the operators end up calling methods that are either already declared on BigDecimal or added to BigDecimal by Groovy. It is definitely not converting to a primitive type like float in order to use the primitive operators.
The table is taken from this developerworks article by Andrew Glover and Scott Davis, which has more details and includes example code.
Groovy does a wonderful job of managing numbers, with arbitrary precision. The first thing you should know is that any literal with a decimal point in it is a BigDecimal by default, which is where that precision comes from. Here is an example of what this means exactly. Consider this snippet:
System.out.println(2.0 - 1.1);
System.out.println(new BigDecimal(2.0).subtract(new BigDecimal(1.1)));
System.out.println(new BigDecimal("2.0").subtract(new BigDecimal("1.1")));
// the above will give these:
0.8999999999999999
0.899999999999999911182158029987476766109466552734375
0.9
This shows the lengths we have to go to in order to get something decent in Java. In Groovy, this is all you have to do:
println 2 - 1.1
to get your 0.9! Try this on the Groovy web console. Here, the second operand is a BigDecimal, so the entire calculation is done in BigDecimal, and the result comes out exact.
But how? Because almost every operator in Groovy is mapped onto a method call on an object under the hood: a + b is a.plus(b), and a == b translates to a.compareTo(b) == 0 for Comparable types (see the quote below). It is therefore safe to assume what you assumed, and this is the Groovy way of doing things: write less, expressively, and Groovy will do the work for you. You can learn about all of this in the Groovy language documentation, which has examples throughout.
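If it helps to see the underlying contrast outside of Groovy, here is a tiny sketch in Haskell of the same difference between binary Double arithmetic and exact rational arithmetic; it only illustrates why 2.0 - 1.1 is not 0.9 in a binary float:

import Data.Ratio ((%))

main :: IO ()
main = do
  print (2.0 - 1.1 :: Double)           -- 0.8999999999999999 (binary floating point)
  print (2 % 1 - 11 % 10 :: Rational)   -- 9 % 10, i.e. exactly 0.9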
The short answer is yes, it's safe to use == for BigDecimal comparison in Groovy.
From the Groovy documentation Behaviour of == section:
In Java == means equality of primitive types or identity for objects. In Groovy == translates to a.compareTo(b)==0, if they are Comparable, and a.equals(b) otherwise. To check for identity, there is is. E.g. a.is(b).

haskell implementation of a sequence

I just started Haskell and I'm struggling!!!
So I need to create a list in Haskell that has the formula
F(n) = (F(n-1)+F(n-2)) * F(n-3)/F(n-4)
and I have F(0) = 1, F(1) = 1, F(2) = 1, F(3) = 1.
So I thought of initializing the first 4 elements of the list and then creating a recursive function that runs for n > 4 and appends the values to the list.
My code looks like this
let F=[1,1,1,1]
fib' n F
| n<4="less than 4"
|otherwise = (F(n-1)+F(n-2))*F(n-3)/F(n-4) : fib (n-1) F
My code looks conceptually right to me (not sure though), but I get an incorrect indentation error when I compile it. And am I allowed to initialize the elements of the list in the way that I have?
First off, variables in Haskell have to be lower case. Secondly, Haskell doesn't let you mix integers and fractions as freely as you may be used to from untyped or barely-typed languages: if you want to convert from an Int or an Integer to, say, a Double, you'll need to use fromIntegral. Thirdly, you can't stick a string in a context where a number is expected. Fourthly, you may or may not have an indentation problem; be sure not to use tabs in your Haskell files, and turn on the GHC option -fwarn-tabs to catch them.
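To make the syntax points concrete, here is a minimal sketch (hypothetical names, and not the solution to your exercise) with lower-case identifiers, guards that return a number instead of a String, and fromIntegral applied before the fractional division:

f :: [Integer] -> Int -> Double
f xs n
  | n < 4     = 1                                   -- a numeric base case
  | otherwise = fromIntegral ((xs !! (n-1) + xs !! (n-2)) * xs !! (n-3))
                  / fromIntegral (xs !! (n-4))

main :: IO ()
main = print (f [1,1,1,1] 4)   -- 2.0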
Now we get to the heart of the matter: you're going about this all somewhat wrong. I'm going to give you a hint instead of a full answer:
thesequence = 1 : 1 : 1 : 1 : -- Something goes here that *uses* thesequence

Behavior of `=` in alloy fact

I was experimenting with alloy and wrote this code.
one sig s1{
vals: some Int
}{
#vals = 4
}
one sig s2{
vals: some Int
}{
#vals = 4
}
fact {
all a : s1.vals | a > 2
all i : s2.vals | i < 15
s1.vals = s2.vals
}
pred p{}
run p
It seems to me that {3,4,5,6} at least is a solution; however, Alloy says no instance found. When I comment out s1.vals = s2.vals or change i < 15 to i > 2, it finds instances.
Can anyone please explain me why? Thanks.
Alloy's relationship with integers is sometimes mildly strained; it's not designed for heavily numeric applications, and many uses of integers in conventional programming are better handled in Alloy by other signatures.
The default bit width for integers is 4 bits, and Alloy uses two's-complement integers, so your run p is asking for a world in which integers range in value from -8 to 7. In that world, the constraint i < 15 is subject to integer overflow, and turns out to mean, in effect, i < -1. (To see this, comment out both of your constraints so that you get some instances. Then (a) leaf through the instances produced by the Analyzer and look at the integers that appear in them; you'll see their range is as I describe. Also, (b) open the Evaluator and type the numeral "15"; you'll see that its value in this universe is -1.)
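If you want to convince yourself of that wraparound, here is a small sketch of reducing a value into the 4-bit two's-complement range (written in Haskell rather than Alloy, purely as an illustration):

-- Map an integer into the 4-bit two's-complement range [-8, 7].
toTwos4 :: Integer -> Integer
toTwos4 x = ((x + 8) `mod` 16) - 8

main :: IO ()
main = print (map toTwos4 [7, 8, 15])   -- [7,-8,-1]: 15 wraps around to -1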
If you change your run command to provide an appropriate bit width for integers (e.g. run p for 5 int), you'll get instances which are probably more like what you were expecting.
An alternative change, however, which leads to a more idiomatic Alloy model, is to abstract away from the specific kind of value by defining a sig for values:
sig value {}
Then change the declaration for vals in s1 and s2 from some Int to some value, and comment out the numeric constraints on them (or substitute some other interesting constraints for them). And then run p in a suitable scope (e.g. run p for 8 value).

Primitive Integer Operation

I am trying to compare two sets of elements related by a binary relation, which has the effect that
#set1 = #set0 + 2
Apparently, in this expression the 2 is interpreted as {} (that is what the evaluator tells me), so the expression returns true.
The book says that the + arithmetic operator is detected automatically, but apparently the problem is more about how to express the 2 as a number. In the book I saw an example which is exactly what I want to do.
Moreover, when I calculate #Set, which contains set1 + set0, the evaluator returns a negative value.
Does someone have an idea about this?
Thanks in advance.
Try this:
sig A {}
sig B {}
pred show{ #A = add[#B, 2]}
run show for 5
As far as I understand, there is a special function for adding integers.
Let me know if I understood you right.
