Meaning of Alloy predicate in relational join

Consider the following simple variant of the Address Book example
sig Name, Addr {}
sig Book { addr : Name -> Addr } // no lone on Addr
pred show(b:Book) { some n : Name | #addr[b,n] > 1 }
run show for exactly 2 Book, exactly 2 Addr, exactly 2 Name
In some model instances, I can get the following results in the evaluator
all b:Book | show[b]
--> yields false
some b:Book | show[b]
--> yields true
show[Book]
--> yields true
If show were a relation, then one might expect to get an answer like: { true, false }. Given that it is a predicate, a single Boolean value is returned. I would have expected show[Book] to be a shorthand for the universally quantified expression above it. Instead, it seems to be using existential quantification to fold the results. Does anyone know the rationale for this, or have another explanation for the meaning of show[Book]?

(I'm not sure I have the correct words for this, so bear with me if this seems fuzzy.)
Bear in mind that all expressions in Alloy that denote individuals denote sets of individuals, and that there is no distinction available in the language between 'individual X' and 'the singleton set whose member is the individual X'. ([Later addendum:] In the terms more usually used: the general rule in Alloy's logic is that all values are relations. Binary relations are sets of pairs, n-ary relations sets of n-tuples, sets are unary relations, and scalars are singleton sets. See the discussion in sec. 3.2.2 of Software Abstractions, or the slide "Everything's a relation" in the Alloy Analyzer 4 tutorial by Greg Dennis and Rob Seater.)
Given the declaration you give of the 'show' predicate, it's easy to expect that the argument of 'show' should be a single Book -- or more correctly, a singleton set of Book --, and then to expect further that if the argument is not actually a singleton set (as in the expression show[Book] here) then the system will coerce it to being a singleton set, or interpret it with some sort of implicit existential or universal quantification. But in the declaration pred show(b:Book) ..., the expression b:Book just names an object b which will be a set of objects in the signature Book. (To require that b be a singleton set, write pred show(one b: Book) ....) The expression which constitutes the body of show is evaluated for b = Book just as readily as for b = Book$0.
The appearance of existential quantification is a consequence of the way the dot operator at the heart of the expression addr[b,n] (or equivalently n.(b.addr)) is defined. Actually, if you experiment you'll find that show[Book] is true whenever there is any name for which the set of all books contains a mapping to two different addresses, even in cases where an existential interpretation would fail. Try adding this to your model, for example:
pred hmmmm { show[Book] and no b: Book | show[b] }
run hmmmm for exactly 2 Book, exactly 2 Addr, exactly 2 Name
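A minimal sketch of what is actually being evaluated (the predicate showOnWholeSet and the check below are mine, not part of the original answer): with b bound to the whole set Book, the join addr[b,n] is n.(Book.addr), i.e. the union of every book's mappings for n, which is why the count can exceed 1 even when no single book maps n to more than one address.
pred showOnWholeSet { some n : Name | #(n.(Book.addr)) > 1 }
// this check should find no counterexample: show[Book] is just the body of show
// evaluated with b = Book
check { showOnWholeSet <=> show[Book] } for exactly 2 Book, exactly 2 Addr, exactly 2 Name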

Alloy - Dealing with unbounded universal quantifiers

Good afternoon,
I've been experiencing an issue with Alloy when dealing with unbounded universal quantifiers. As explained in Daniel Jackson's book 'Software Abstractions' (Section 5.3, 'Unbounded Universal Quantifiers'), Alloy has a subtle limitation regarding universal quantifiers and assertion checking: it produces spurious counterexamples in some cases, such as the following check that sets are closed under union (shown in the aforementioned book):
sig Set {
  elements: set Element
}
sig Element {}
assert Closed {
  all s0, s1: Set | some s2: Set |
    s2.elements = s0.elements + s1.elements
}
check Closed for 3
Producing a counterexample such as:
Set = {(S0),(S1)}
Element = {(E0),(E1)}
s0 = {(S0)}
s1 = {(S1)}
elements = {(S0,E0), (S1,E1)}
where the analyser didn't populate Set with enough values (a missing Set atom, S2, containing the union of S0 and S1).
Two solutions to this general problem are then suggested in the book:
1) Declaring a generator axiom to force Alloy to generate all possible instances.
For example:
fact SetGenerator {
  some s: Set | no s.elements
  all s: Set, e: Element |
    some s': Set | s'.elements = s.elements + e
}
This solution, however, produces a scope explosion and may also lead to inconsistencies.
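For a concrete sense of the scope explosion (the numbers below are mine, not from the book): the generator axiom forces every subset of Element to appear as the elements of some Set, so an instance needs 2^#Element Set atoms; with a smaller Set scope the fact is unsatisfiable and the check only passes vacuously. Already with 3 elements that means a scope of 8 Sets:
check Closed for 8 Set, 3 Element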
2) Omitting the generator axiom and using the bounded-universal form for constraints. That is, each quantified variable's bounding expression must not mention the names of generated signatures. However, not every assertion can be expressed in such a form.
My question is: is there any specific rule to choose any of these solutions? It isn't clear to me from the book.
Thanks.
No, there's no specific rule (or at least none that I've come up with). In practice, this doesn't arise very often, so I would deal with each case as it comes up. Do you have a particular example in mind?
Also, bear in mind that sometimes you can formulate your problem with a higher-order quantifier (i.e. a quantifier over a set or relation), and in that case you can use Alloy*, an extension of Alloy that supports higher-order analysis.

alloy predicate calculus post production

As I'm new to Alloy, this is most likely a simple question. I've been through the online tutorials and am now reading Software Abstractions, revised edition. At the bottom of page 34 there is this example:
r' = {b:B, a:A, c:C | a->b->c in r}
where the text says that this defines a new relation of type B->A->C. I don't see how an explicit order for r' is achieved by this statement.
It's a property of set comprehension:
{a: A | somePredicate1[a]} is of type A and returns a set containing all atoms for which somePredicate1 holds;
{a: A, b: B | somePredicate2[a, b]} is of type A->B and returns a relation containing all a->b tuples for which somePredicate2 holds;
and so on
The syntax of set comprehension basically consists of two parts: (1) a declaration part (before the | character), which fixes the number and order of the columns of the result, and (2) a predicate which must hold for every tuple in the returned relation.
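A small runnable sketch of the page-34 example (the sig names and the wrapper sig S are mine, added only to give the relation r a home): the declaration part b: B, a: A, c: C fixes the column order of the result, so every tuple has the form b->a->c and the comprehension has type B->A->C.
sig A, B, C {}
one sig S { r: A -> B -> C }
// the declaration order b, a, c makes the result a B->A->C relation
fun flipped : B -> A -> C {
  { b: B, a: A, c: C | a->b->c in S.r }
}
run { some S.r } for 3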

Unexpected results in playing with relations

/*
sig a {
}
sig b {
}
*/
pred rel_test(r : univ -> univ) {
  # r = 1
}
run {
  some r : univ -> univ {
    rel_test [r]
  }
} for 2
Running this small test, $r contains one element in every generated instance. When sig a and sig b are uncommented, however, the first instance is this:
By my understanding, $r has 9 tuples here and still the predicate, which asks for a one-tuple relation, succeeds. Where am I wrong?
An auxiliary question: are these two declarations equivalent?
pred rel_test(r : univ -> univ)
pred rel_test(r : set univ -> univ)
The problem is that with the Forbid Overflow option set to No, the integer semantics in Alloy is wrap-around; with the default scope of 3 bits, Int ranges over -4..3, so 9 wraps around to 1 (9 modulo 8), and indeed 9 = 1 holds, as you can confirm in the evaluator.
With the signatures a and b commented out, the biggest relation that can be generated with scope 2 has 4 tuples (since the max size of univ is 2), so the problem does not occur.
It also does not occur in the latest build because I believe it comes with the Forbid Overflow option set to Yes by default, and with that option the semantics of integers rules out instances where overflows occur, precisely the case when you compute the size of the relation with 9 tuples. More details about this alternative integer semantics can be found in the paper "Preventing arithmetic overflows in Alloy" by Aleksandar Milicevic and Daniel Jackson.
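As a hedged side note (this check is mine, not from the answer), the difference between the two semantics can be seen on a tiny arithmetic check. With Forbid Overflow set to No it should yield a counterexample, because with 3-bit integers (the scope given below) 3.plus[1] wraps around to -4; with Forbid Overflow set to Yes it should pass, since instances in which the addition overflows are ruled out.
open util/integer
// overflow demo: fails under wrap-around semantics, passes when overflows are forbidden
check { all i: Int | i.plus[1] > i } for 3 Int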
On the main question: what version of Alloy are you using? I'm unable to replicate the behavior you describe (using Alloy 4.2 of 22 Feb 2015 on OS X 10.6.8).
On the auxiliary question: it appears so. (The language reference is not quite as explicit as one might wish, but it begins one part of its discussion of multiplicities with "If the right-hand expression denotes a unary relation ..." and (in what I take to be the context so defined) "the default multiplicity is one"; the conditional would make no sense if the default multiplicity were always one.)
On the other hand, the same interpretive logic would lead to the conclusion that the language reference believes that unary multiplicity keywords are only allowed before expressions denoting unary relations (which would appear to make r: set univ -> univ ungrammatical). But Alloy accepts the expression and parses it as set (univ -> univ). (The alternative parse, (set univ) -> univ, would be very hard to assign a meaning to.)

Polymorphic empty relation in Alloy?

I run an Alloy command that involves finding witnesses for some existentials, like this one:
pred foo {
  some x, y : E -> E |
    baz[x,y] || qux[x,y]
}
Alloy comes up with a model where foo is true. I look at the model in the Visualizer, and find that y happens to be the empty relation. I want to dig deeper into the model and see whether baz or qux is true. So I fire up the Evaluator window and type baz[$foo_x, ???]. But what can I type for ???? Since y is empty, there is no variable with the name $foo_y. And typing none or {} gives a type-checking error.
Does Alloy provide an empty relation that can be used at any type? Or is there any way to get at the y witness even though it's empty?
I believe baz[$foo_x, none->none] should work. The relation none has arity 1, and by using the cross product you can get empty relations of the desired arity. The explanation for this can be found in the paper "A Type System for Object Models" by Jonathan Edwards, Daniel Jackson and Emina Torlak.
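For instance (a small sketch based on that remark; $foo_x and the predicate names come from the question), the arity is raised one step at a time with ->, and the binary version can be supplied in the evaluator wherever an E -> E argument is expected:
none                    // empty unary relation (the empty set)
none -> none            // empty binary relation
none -> none -> none    // empty ternary relation
baz[$foo_x, none -> none]   // evaluator query standing in for the missing $foo_y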

Union types and Intersection types

What are the various use cases for union types and intersection types? There has lately been a lot of buzz about these type system features, yet somehow I have never felt the need for either of them!
Union Types
To quote Robert Harper, "Practical Foundations for Programming
Languages", ch 15:
Most data structures involve alternatives such as the distinction between a leaf and an interior node in a tree, or a choice in the outermost form of a piece of abstract syntax. Importantly, the choice determines the structure of the value. For example, nodes have children, but leaves do not, and so forth. These concepts are expressed by sum types, specifically the binary sum, which offers a choice of two things, and the nullary sum, which offers a choice of no things.
Booleans
The simplest sum type is the Boolean,
data Bool = True
| False
Booleans have only two valid values, True or False. So instead of representing them as numbers, we can use a sum type to encode more accurately the fact that there are only two possible values.
Enumerations
Enumerations are examples of more general sum types: ones with many, but finite, alternative values.
Sum types and null pointers
The best practical motivating example for sum types is discriminating between valid results and error values returned by functions, by distinguishing the failure case.
For example, null pointers and end-of-file characters are hackish encodings of the sum type:
data Maybe a = Nothing
| Just a
where we can distinguish between valid and invalid values by using the Nothing or Just tag to annotate each value with its status.
By using sum types in this way we can rule out null pointer errors entirely, which is a pretty decent motivating example. Null pointers are entirely due to the inability of older languages to express sum types easily.
Intersection Types
Intersection types are much newer, and their applications are not as widely understood. However, Benjamin Pierce's thesis ("Programming with Intersection Types and Bounded Polymorphism") gives a good overview:
The most intriguing and potentially useful property of intersection types is their ability to express an essentially unbounded (though of course finite) amount of information about the components of a program.
For example, the addition function (+) can be given the type Int -> Int -> Int ^ Real -> Real -> Real, capturing both the general fact that the sum of two real numbers is always a real and the more specialized fact that the sum of two integers is always an integer. A compiler for a language with intersection types might even provide two different object-code sequences for the two versions of (+), one using a floating point addition instruction and one using integer addition. For each instance of + in a program, the compiler can decide whether both arguments are integers and generate the more efficient object code sequence in this case.
This kind of finitary polymorphism or coherent overloading is so expressive, that ... the set of all valid typings for a program amounts to a complete characterization of the program's behavior.
They let us encode a lot of information in the type, explaining via type theory what multiple inheritance means and giving types to type classes.
Union types are useful for typing dynamic languages or otherwise allowing more flexibility in the types passed around than most static languages allow. For example, consider this:
var a;
if (condition) {
  a = "string";
} else {
  a = 123;
}
If you have union types, it's easy to type a as int | string.
One use for intersection types is to describe an object that implements multiple interfaces. For example, C# allows multiple interface constraints on generics:
interface IFoo {
  void Foo();
}
interface IBar {
  void Bar();
}
void Method<T>(T arg) where T : IFoo, IBar {
  arg.Foo();
  arg.Bar();
}
Here, arg's type is the intersection of IFoo and IBar. Using that, the type-checker knows both Foo() and Bar() are valid methods on it.
If you want a more practice-oriented answer:
With union and recursive types you can encode regular tree types and therefore XML types.
With intersection types you can type both overloaded functions and refinement types (what the previous post calls coherent overloading).
So, for instance, you can write the function add (which overloads integer sum and string concatenation) as follows:
let add ( (Int,Int)->Int ; (String,String)->String )
| (x & Int, y & Int) -> x+y
| (x & String, y & String) -> x#y ;;
Which has the intersection type
(Int,Int)->Int & (String,String)->String
But you can also refine that type and give the same function the type
(Pos,Pos) -> Pos &
(Neg,Neg) -> Neg &
(Int,Int)->Int &
(String,String)->String.
where Pos and Neg are positive and negative integer types.
The code above is executable in the language CDuce ( http://www.cduce.org ) whose type system includes union, intersections, and negation types (it is mainly targeted at XML transformations).
If you want to try it and you are on Linux, then it is probably included in your distribution (apt-get install cduce or yum install cduce should do the work) and you can use its toplevel (a la OCaml) to play with union and intersection types. On the CDuce site you will find a lot of practical examples of use of union and intersection types. And since there is a complete integration with OCaml libraries (you can import OCaml libraries in CDuce and export CDuce modules to OCaml) you can also check the correspondence with ML sum types (see here).
Here is a more complex example that mixes union and intersection types (explained on the page http://www.cduce.org/tutorial_overloading.html#val), but to understand it you need to understand regular expression pattern matching, which requires some effort.
type Person = FPerson | MPerson
type FPerson = <person gender = "F">[ Name Children ]
type MPerson = <person gender = "M">[ Name Children ]
type Children = <children>[ Person* ]
type Name = <name>[ PCDATA ]
type Man = <man name=String>[ Sons Daughters ]
type Woman = <woman name=String>[ Sons Daughters ]
type Sons = <sons>[ Man* ]
type Daughters = <daughters>[ Woman* ]
let fun split (MPerson -> Man ; FPerson -> Woman)
<person gender=g>[ <name>n <children>[(mc::MPerson | fc::FPerson)*] ] ->
(* the above pattern collects all the MPerson in mc, and all the FPerson in fc *)
let tag = match g with "F" -> `woman | "M" -> `man in
let s = map mc with x -> split x in
let d = map fc with x -> split x in
<(tag) name=n>[ <sons>s <daughters>d ] ;;
In a nutshell, it transforms values of type Person into values of type (Man | Woman) (where the vertical bar denotes a union type) while keeping the correspondence between genders: split is a function with intersection type
MPerson -> Man & FPerson -> Woman
For instance, with union types one could describe a JSON domain model without introducing actual new classes, using only type aliases.
type JObject = Map[String, JValue]
type JArray = List[JValue]
type JValue = String | Number | Bool | Null | JObject | JArray
type Json = JObject | JArray
def stringify(json: JValue): String = json match {
  case _: String | _: Number | _: Bool | null => json.toString
  case o: JObject => "{" + o.map((k, v) => k + ": " + stringify(v)).mkString(", ") + "}"
  case a: JArray  => "[" + a.map(stringify).mkString(", ") + "]"
}
