Behavior of `=` in Alloy fact

I was experimenting with alloy and wrote this code.
one sig s1 {
  vals: some Int
} {
  #vals = 4
}
one sig s2 {
  vals: some Int
} {
  #vals = 4
}
fact {
  all a : s1.vals | a > 2
  all i : s2.vals | i < 15
  s1.vals = s2.vals
}
pred p {}
run p
It seems to me that {3, 4, 5, 6} at least is a solution; however, Alloy says no instance found. When I comment out s1.vals = s2.vals, or change i < 15 to i > 2, it finds instances.
Can anyone please explain why? Thanks.

Alloy's relationship with integers is sometimes mildly strained; it's not designed for heavily numeric applications, and many uses of integers in conventional programming are better handled in Alloy by other signatures.
The default bit width for integers is 4 bits, and Alloy uses two's-complement integers, so your run p is asking for a world in which integers range in value from -8 to 7. In that world, the constraint i < 15 is subject to integer overflow and turns out to mean, in effect, i < -1. (To see this, comment out both of your constraints so that you get some instances. Then (a) leaf through the instances produced by the Analyzer and look at the integers that appear in them; you'll see their range is as I describe. Also, (b) open the Evaluator and type the numeral "15"; you'll see that its value in this universe is -1.)
If you change your run command to provide an appropriate bit width for integers (e.g. run p for 5 int), you'll get instances which are probably more like what you were expecting.
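For instance, keeping the model unchanged:

run p for 5 Int
-- Int now ranges over -16..15, so the literal 15 no longer wraps around,
-- and an instance like s1.vals = s2.vals = {3, 4, 5, 6} becomes findable.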
An alternative change, however, which leads to a more idiomatic Alloy model, is to abstract away from the specific kind of value by defining a sig for values:
sig value {}
Then change the declaration for vals in s1 and s2 from some Int to some value, and comment out the numeric constraints on them (or substitute some other interesting constraints for them). And then run p in a suitable scope (e.g. run p for 8 value).
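Putting those steps together, a minimal sketch of the reworked model (the numeric facts are dropped as suggested; #vals = 4 still works, since cardinalities this small fit within the default bit width):

sig value {}
one sig s1 {
  vals: some value
} {
  #vals = 4
}
one sig s2 {
  vals: some value
} {
  #vals = 4
}
fact {
  s1.vals = s2.vals
}
pred p {}
run p for 8 value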

Related

Why doesn't simple integer counterexample occur in Alloy?

I am trying to model a relationship between a numeric variable and a boolean variable, in which if the numeric variable is in a certain range then the boolean variable will change value. I'm new to Alloy, and am having trouble understanding how to constrain my scope sufficiently to yield the obvious counterexample. My code is as follows:
open util/boolean
one sig Object {
  discrete : one Bool,
  integer : one Int
}
fact { all o : Object | o.integer > 0 and o.integer < 10 }
fact { all o : Object | o.integer > 5 iff o.discrete = False }
assert discreteCondition { all o : Object | o.discrete = True }
check discreteCondition for 1000
Since o.integer is integer-valued and ranges from 0 to 10, it could only be one of 10 different choices. And I specified that each Object should have only one integer and one discrete. So it seems reasonable to me that there are really only 10 cases to check here: one case for each value of integer. And yet even with 1000 cases, I get
No counterexample found.
If I remove the integer variable and related facts then it does find the counterexample almost immediately. I have also tried using other solvers and increasing various depth and memory values in the Options, but this did not help, so clearly my code is at fault.
How can I limit my scope to make Alloy find the counterexample (by iterating over possible values of the integer)? Thanks!
By default, the bitwidth used to represent integers is 4, so only integers in the range [-8,7] are considered during instance generation; due to integer overflow, your first fact is thus unsatisfiable (as 10 is outside this range).
To fix the problem, increase the bitwidth used to at least 5:
check discreteCondition for 10 but 5 Int.
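To see the wraparound concretely, a short sketch of the arithmetic (plain 4-bit two's complement):

-- At bitwidth 4, Int covers [-8, 7], and the literal 10 wraps to 10 - 16 = -6.
-- Fact 1 therefore reads: all o : Object | o.integer > 0 and o.integer < -6
-- No integer satisfies that, so the facts admit no instances at all,
-- and an analysis with no instances can never produce a counterexample.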
Note that a scope of 1000 does not mean that you consider 1000 cases in your analysis. The scope is the maximum number of atoms of a given signature that can appear in a generated instance. In your case you have only one signature, with multiplicity one, so analyzing your model with a scope of 1 or 10000 doesn't change anything: there'll still be only one Object atom in the generated instance.
You might want to check this Q/A to learn more about scopes: Specifying A Scope for Sig in Alloy

Alloy - Dealing with unbounded universal quantifiers

Good afternoon,
I've been experiencing an issue with Alloy when dealing with unbounded universal quantifiers. As explained in Daniel Jackson's book 'Software Abstractions' (Section 5.3, 'Unbounded Universal Quantifiers'), Alloy has a subtle limitation regarding universal quantifiers and assertion checking: Alloy produces spurious counterexamples in some cases, such as the following check that sets are closed under union (shown in the aforementioned book):
sig Set {
  elements: set Element
}
sig Element {}
assert Closed {
  all s0, s1: Set | some s2: Set |
    s2.elements = s0.elements + s1.elements
}
check Closed for 3
Producing a counterexample such as:
Set = {(S0),(S1)}
Element = {(E0),(E1)}
s0 = {(S0)}
s1 = {(S1)}
elements = {(S0,E0), (S1,E1)}
where the analyser didn't populate Set with enough values (a missing Set atom, S2, containing the union of S0 and S1).
Two solutions to this general problem are suggested then in the book:
1) Declaring a generator axiom to force Alloy to generate all possible instances.
For example:
fact SetGenerator {
  some s: Set | no s.elements
  all s: Set, e: Element |
    some s': Set | s'.elements = s.elements + e
}
This solution, however, produces a scope explosion and may also lead to inconsistencies.
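To see roughly why the scope explodes: the generator axiom forces every subset of Element to be represented by some Set atom, so an honest check needs 2^|Element| Set atoms. A sketch, with the exponent worked out for 3 Elements:

check Closed for 8 Set, 3 Element  -- 2^3 = 8 Set atoms required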
2) Omitting the generator axiom and using the bounded-universal form for constraints. That is, the quantified variable's bounding expression must not mention the names of generated signatures. However, not every assertion can be expressed in such a form.
My question is: is there any specific rule to choose any of these solutions? It isn't clear to me from the book.
Thanks.
No, there's no specific rule (or at least none that I've come up with). In practice, this doesn't arise very often, so I would deal with each case as it comes up. Do you have a particular example in mind?
Also, bear in mind that sometimes you can formulate your problem with a higher-order quantifier (i.e. a quantifier over a set or relation), and in that case you can use Alloy*, an extension of Alloy that supports higher-order analysis.

Unexpected results in playing with relations

/*
sig a {}
sig b {}
*/
pred rel_test(r : univ -> univ) {
  #r = 1
}
run {
  some r : univ -> univ {
    rel_test [r]
  }
} for 2
Running this small test, $r contains one tuple in every generated instance. When sig a and sig b are uncommented, however, the first instance generated has $r containing 9 tuples, and still the predicate, which asks for a one-tuple relation, succeeds. Where am I wrong?
An auxiliary question: are these two declarations equivalent?
pred rel_test(r : univ -> univ)
pred rel_test(r : set univ -> univ)
The problem is that with the Forbid Overflow option set to No, the integer semantics in Alloy is wrap-around, and with the default scope of 3 (bits), indeed 9 = 1, as you can confirm in the evaluator.
With the signatures a and b commented out, the biggest relation that can be generated with scope 2 has 4 tuples (since the max size of univ is 2), so the problem does not occur.
It also does not occur in the latest build because I believe it comes with the Forbid Overflow option set to Yes by default, and with that option the semantics of integers rules out instances where overflows occur, precisely the case when you compute the size of the relation with 9 tuples. More details about this alternative integer semantics can be found in the paper "Preventing arithmetic overflows in Alloy" by Aleksandar Milicevic and Daniel Jackson.
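If you'd rather keep the wrap-around semantics, a minimal sketch of an alternative fix is to widen the bitwidth explicitly so that 9 is representable (standard but-clause syntax; scope 2 is kept from the original run):

run {
  some r : univ -> univ {
    rel_test [r]
  }
} for 2 but 5 Int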
On the main question: what version of Alloy are you using? I'm unable to replicate the behavior you describe (using Alloy 4.2 of 22 Feb 2015 on OS X 10.6.8).
On the auxiliary question: it appears so. (The language reference is not quite as explicit as one might wish, but it begins one part of its discussion of multiplicities with "If the right-hand expression denotes a unary relation ..." and (in what I take to be the context so defined) "the default multiplicity is one"; the conditional would make no sense if the default multiplicity were always one.)
On the other hand, the same interpretive logic would lead to the conclusion that the language reference believes that unary multiplicity keywords are only allowed before expressions denoting unary relations (which would appear to make r: set univ -> univ ungrammatical). But Alloy accepts the expression and parses it as set (univ -> univ). (The alternative parse, (set univ) -> univ, would be very hard to assign a meaning to.)

Thrust custom binary predicate parallelization

I am implementing a custom binary predicate used by the thrust::max_element search algorithm. It also keeps track of a value of interest which the algorithm alone cannot yield. It looks something like this:
struct cmp_class {
    cmp_class(int *val1) {
        this->val1 = val1;
    }
    bool operator()(int i, int j) {
        if (j < *val1) { *val1 = j; }  // track the running minimum of j
        return i < j;
    }
    int *val1;
};

int val1 = ARRAY[0];
std::max_element(ARRAY, ARRAY + length, cmp_class(&val1));
// .... use val1
My actual binary predicate is quite a bit more complex, but the example above captures the gist of it: I am passing a pointer to an integer to the binary predicate cmp_class, which then writes to that integer to keep track of some value (here the running minimum). Since max_element first calls the functor on (ARRAY[0], ARRAY[1]) and afterwards on (running maximum, ARRAY[i]), I only look at the value j inside the functor and therefore initialize val1 = ARRAY[0] to ensure ARRAY[0] is taken into account.
If I do the above on the host, using std::max_element for example, this works fine: the value of val1 is what I expect given known data. However, using Thrust to execute this on the GPU, its value is off. I suspect this is due to the parallelization of thrust::max_element, which is recursively applied to sub-arrays, the results of which form another array on which thrust::max_element is run, and so on. Does this hold water?
In general, the binary predicates used for thrust reductions are expected to be commutative. I'm using "commutative" here to mean that the predicate result should be valid regardless of the order in which the arguments are presented.
At the initial stage of a thrust parallel reduction, the arguments are likely to be presented in an order you might expect (i.e. in the order of the vectors passed to the reduce function, or in the order that the values occur in a single vector). However, later on in the parallel reduction process, the origin of the arguments may get mixed up during the parallel sweep. Any binary comparison functor that assumes an ordering to the arguments is probably broken for thrust parallel reductions.
In your case, the boolean result of your comparison functor should be valid regardless of the order of arguments presented, and in that respect it appears to be properly commutative.
However, regarding the custom storage functionality you have built in around val1, it seems pretty clear that the results in val1 could be different depending on the order in which arguments are passed to the functor. Consider this simple max-finding sequence amongst a set of values passed to the functor as (i,j) (assume val1 starts out at a large value):
values: 10 5 3 7 12
comparison pairs: (10,5) (10,3) (10,7) (10,12)
comparison result: 10 10 10 12
val1 storage: 5 3 3 3
Now suppose that we simply reverse the order that arguments are presented to the functor:
values: 10 5 3 7 12
comparison pairs: (5,10) (3,10) (7,10) (12,10)
comparison result: 10 10 10 12
val1 storage: 10 10 10 10
Another issue is that you have no atomic protection on val1 that I can see:
if (j < *val1) { *val1 = j; }
The above line of code may be OK in a serial realization. In a parallel multi-threaded algorithm, you have the possibility for multiple threads to be accessing (reading and writing) *val1 simultaneously, which will have undefined results.
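If the extra value you need really is just the running minimum, as in the example, one side-effect-free alternative is to let thrust compute both extremes in a single pass with thrust::minmax_element. A minimal sketch (the data values are taken from the sequence above):

#include <thrust/device_vector.h>
#include <thrust/extrema.h>
#include <cstdio>

int main() {
    int h[] = {10, 5, 3, 7, 12};
    thrust::device_vector<int> d(h, h + 5);
    // One pass over the data, no shared mutable state in the functor
    auto result = thrust::minmax_element(d.begin(), d.end());
    int min_val = *result.first;   // 3
    int max_val = *result.second;  // 12
    std::printf("min=%d max=%d\n", min_val, max_val);
    return 0;
}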

Autocasting and type conversion in Specman e

Consider the following example in e:
var a : int = -3
var b : int = 3
var c : uint = min(a,b); print c
c = 3
var d : int = min(a,b); print d
d = -3
The arguments inside min() are autocasted to the type of the result expression.
My questions are:
Are there other programming languages that use type autocasting, and if so, how do they treat functions like min() and max()?
Is this behavior logical? I mean, it is not consistent with the following possible definition of min:
a < b ? a : b
Thanks.
There are many languages that do type autocasting and many that do not. I should say upfront that I don't think it's a very good idea, and I prefer more principled behavior in a language, but some people prefer the convenience and lack of verbosity of autocasting.
Perl
Perl is an example of a language that does type autocasting and here's a really fun example that shows when this can be quite confusing.
print "bob" eq "bob";
print "nancy" eq "nancy";
print "bob" == "bob";
print "nancy" == "nancy";
The above program prints 1 (for true) three times, not four. Why not the fourth? Well, "nancy" is not == "nancy". Why not? Because == is the numerical equality operator; eq is for string equality. The strings are compared as you might think with eq, but with == they are converted to numbers automatically. What number is "bob" equal to? Zero, of course. Any string that doesn't have a number as its prefix is interpreted as zero. For this reason "bob" == "chris" is true. Why isn't "nancy" == "nancy"? Why isn't "nancy" zero? Because it is read as "nan", for "not a number". NaN is not equal to another NaN. I, personally, find this confusing.
Javascript
Javascript is another example of a language that will do type autocasting.
'5' - 3
What's the result of that expression? 2! Very good.
'5' + 3
What's the result of that one? 8? Wrong! It's '53'.
Confused? Good. "Wow Javascript, Such Good Conventions, Much Sense" is a series of really funny Javascript evaluations based on automatic casting of types.
Why would anyone do this!?
After seeing some carefully selected horror stories you might be asking yourself why people who are intelligent enough to invent two of the most popular programming languages in the entire world would do something so silly. I don't like type autocasting but to be fair, there is an argument for it. It's not purely a bug. Consider the following Haskell expression:
length [1,2,3,4] / 2
What does this equal? 2? 2.0? Nope! It doesn't compile due to a type error. length returns an Int and you can't divide that. You must explicitly cast the length to a fraction by calling fromIntegral in order for this to work.
fromIntegral (length [1,2,3,4]) / 2
That can definitely get quite annoying if you're doing a lot of math with integers and floats moving about in your program. But I would greatly prefer that to having to understand why nancy isn't equal to nancy but chris is, in fact, equal to bob.
Happy medium?
Scheme has only one number type. It automatically converts among floats and fractions and ints and treats them just as you'd expect. It will allow you to divide the length of a list without explicit casting because it knows what you mean. But no, it will never secretly convert a string to a number. Strings aren't numbers. That's just silly. ;)
Your example
As for your example, it's hard to say whether it is or is not logical. Confusing, yes; I mean, you're here on Stack Overflow, aren't you? The reason you're getting 3, I think, is either that -3 is being interpreted, like Ross said, as a uint in 2's complement (and is thus a much larger number), or that the result of the min is -3 and it's then turned into an unsigned int by dropping the negative. The problem is that you asked it to put the result into an unsigned int, but the result is negative. So I would say that what it did is logical in the context of type autocasting, but that type autocasting is confusing. Presumably, you're being saved from having to do explicit type casting all over the place and paying for it with weird behavior like this on occasion.
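To make the first reading concrete, a quick sketch of the arithmetic (assuming 32-bit values, which I believe is the default width of int and uint in e):

-3 as a 32-bit two's-complement bit pattern = 0xFFFFFFFD
0xFFFFFFFD read as a uint = 2^32 - 3 = 4294967293
min(4294967293, 3) = 3, matching the printed c = 3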
