In the following code from section 4.7.2 of the Alloy book, what does the this keyword refer to?
module library/list [t]
sig List {}
sig NonEmptyList extends List {next: List, element: t}
...
fun List.first : t {this.element}
fun List.rest : List {this.next}
fun List.addFront (e: t): List {
{p: List | p.next = this and p.element = e}
}
I would appreciate it if you could give me an elaborate description of this usage in Alloy.
Section 4.5.2 of Software Abstractions describes (among other things) what it calls the 'receiver' convention, that is, the syntactic shorthand of writing functions and predicates as
fun X.f[y : Y, ...] { ... this ... }
instead of
fun f[x : X, y : Y, ...] { ... x ... }
That is, the declaration
fun List.first : t {this.element}
is equivalent to (and shorthand / syntactic sugar for)
fun first[x : List] : t {x.element}
And similarly for the other examples you give. The parallel would be even stronger if we said the long form was
fun first[this : List] : t {this.element}
but while this is a useful illustration, it's not legal: this is a keyword and cannot be used as an ordinary variable name.
You ask for an "elaborate description" of the usage of this in Alloy. Here's a survey. The keyword this can be used in the following situations:
In declarations and signature facts, this acts as a variable implicitly bound to each instance of the signature. So a declaration of the form
sig Foo { ... } { copacetic[this] }
is equivalent to
sig Foo { ... }
fact { all f : Foo | copacetic[f] }
In declarations and signature facts, every reference to a field f declared or inherited by the signature is implicitly expanded to this.f, where this is implicitly bound as described above, unless the reference is prefixed with @. The example at the end of 4.2.4 illustrates the semantics.
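For instance (a small illustration of my own, not taken from the book):
sig Node {
  next: set Node
} {
  some next     -- expands to: some this.next (every Node has a successor)
  -- writing @next instead would denote the whole next relation, so
  -- 'some @next' would merely say that the relation is nonempty somewhere
}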
In the declaration bodies of functions and predicates declared using the 'receiver' convention, the keyword this acts as a variable implicitly bound to the first argument of the function or predicate. The example at the end of 4.5.2 illustrates this, as does the example cited here by the OP.
The 'receiver' convention is defined in section B.7.5 of the language reference.
All of these are pointed to from the entry for this in the index of Software Abstractions; for more information, read the relevant passages.
Related
The following Alloy predicate p has a parameter t declared as a singleton of type S. Invoking run p gives the correct result (no instance found), because the predicate body requires that t contain two different elements s and s', while the declaration says t is a singleton. However, the second run command passes a set of two disjoint elements of type S into predicate p, and that command does find an instance. Why is that the case?
sig S {}
pred p(t: one S) {
some s, s': t | s != s'
}
r1: run p -- no instance found
r2: run { -- instance found
some disj s0, s1: S {
S = s0 + s1
p[S]
}
}
See https://stackoverflow.com/a/43002442/1547046. Same issue, I think.
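If I read that answer correctly, the declared multiplicity is simply not enforced at the call site: the body of p is evaluated with t bound to whatever set is actually passed. Here is a sketch of what r2 effectively checks, with p[S] inlined by hand:
r3: run { -- same as r2, with p[S] inlined
  #S = 2
  some s, s': S | s != s'
}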
BTW, there's a nice research problem here. Can you define a coherent semantics for argument declarations that would be better (that is, simpler, unsurprising, and well defined in all contexts)?
Sometimes, when I would like to use recursion in Alloy, I find I can get by with transitive closure, or sequences.
For example, in a model of context-free grammars:
abstract sig Symbol {}
sig NT, T extends Symbol {}
// A grammar is a tuple (V, Sigma, Rules, Start)
one sig Grammar {
V : set NT,
Sigma : set T,
Rules : set Prod,
Start : one V
}
// A production rule has a left-hand side
// and a right-hand side
sig Prod {
lhs : NT,
rhs : seq Symbol
}
fact tidy_world {
// Don't fill the model with irrelevancies
Prod in Grammar.Rules
Symbol in (Grammar.V + Grammar.Sigma)
}
One possible definition of reachable non-terminals would be "the start symbol, and any non-terminal appearing on the right-hand side of a rule for a reachable symbol." A straightforward translation would be
// A non-terminal 'refers' to non-terminals
// in the right-hand sides of its rules
pred refers[n1, n2 : NT] {
let r = (Grammar.Rules & lhs.n1) |
n2 in r.rhs.elems
}
pred reachable[n : NT] {
n in Grammar.Start
or some n2 : NT
| (reachable[n2] and refers[n2,n])
}
Predictably, this blows the stack. But if we simply take the transitive closure of Grammar.Start under the refers relation (or, strictly speaking, a relation formed from the refers predicate), we can define reachability:
// A non-terminal is 'reachable' if it's the
// start symbol or if it is referred to by
// (rules for) a reachable symbol.
pred Reachable[n : NT] {
n in Grammar.Start.*(
{n1, n2 : NT | refers[n1,n2]}
)
}
pred some_unreachable {
some n : (NT - Grammar.Start)
| not Reachable[n]
}
run some_unreachable for 4
In principle, the definition of productive symbols is similarly recursive: a symbol is productive iff it is a terminal symbol, or it has at least one rule in which every symbol in the right-hand side is productive. The simple-minded way to write this is
pred productive[s : Symbol] {
s in T
or some p : (lhs.s) |
all r : (p.rhs.elems) | productive[r]
}
Like the straightforward definition of reachability, this blows the stack. But I have not yet found a relation I can define on symbols which will give me, via transitive closure, the result I want. Have I found a case where transitive closure cannot substitute for recursion? Or have I just not thought hard enough to find the right relation?
There is an obvious, if laborious, hack:
pred p0[s : Symbol] { s in T }
pred p1[s : Symbol] { p0[s]
or some p : (lhs.s)
| all e : p.rhs.elems
| p0[e]}
pred p2[s : Symbol] { p1[s]
or some p : (lhs.s)
| all e : p.rhs.elems
| p1[e]}
pred p3[s : Symbol] { p2[s]
or some p : (lhs.s)
| all e : p.rhs.elems
| p2[e]}
... etc ...
pred productive[n : NT] {
p5[n]
}
This works, as long as one doesn't forget to add enough predicates to handle the longest possible chain of non-terminal references whenever one raises the scope.
Concretely, I seem to have several questions; answers to any of them would be welcome:
1. Is there a way to define the set of productive non-terminals in Alloy without resorting to the p0, p1, p2, ... hack?
2. If one does have to resort to the hack, is there a better way to define it?
3. As a theoretical question: is it possible to characterize the set of recursive predicates that can be defined using transitive closure, or sequences of atoms, instead of recursion?
Have I found a case where transitive closure cannot substitute for recursion?
Yes, that is the case. More precisely, you have found a recursive relation that cannot be expressed with first-order transitive closure (which is what Alloy supports).
Is there a way to define the set of productive non-terminals in Alloy without resorting to the p0, p1, p2, ... hack? If one does have to resort to the hack, is there a better way to define it?
There is no way to do this in Alloy. However, there might be a way to do it in Alloy*, which supports higher-order quantification. (The idea would be to describe the set of productive elements via a closure over the relation of "productiveness", which would use second-order quantification over the set of productive symbols, constraining that set to be minimal. A similar idea is described in "A.1.9 Axiomatizing Transitive Closure" in the Alloy book.)
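For instance, something along these lines might work (a hypothetical sketch reusing the grammar model from the question above; I have not checked it against Alloy*'s actual syntax, and closed/productiveHO are names of my own):
-- P contains the terminals and is closed under the production rule
pred closed[P: set Symbol] {
  T in P
  all n: NT | (some p: lhs.n | p.rhs.elems in P) implies n in P
}
-- a symbol is productive iff it lies in the least closed set;
-- quantifying over 'set Symbol' is higher-order, hence Alloy*
pred productiveHO[s: Symbol] {
  some P: set Symbol {
    closed[P]
    all Q: set Symbol | closed[Q] implies P in Q
    s in P
  }
}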
As a theoretical question: is it possible to characterize the set of recursive predicates that can be defined using transitive closure, or sequences of atoms, instead of recursion?
This is an interesting question. The relevant Wikipedia article mentions the relative expressiveness of second-order logic when transitive closure and a fixed-point operator are added (the latter being able to express forms of recursion).
Consider the following simple variant of the Address Book example
sig Name, Addr {}
sig Book { addr : Name -> Addr } // no lone on Addr
pred show(b:Book) { some n : Name | #addr[b,n] > 1 }
run show for exactly 2 Book, exactly 2 Addr, exactly 2 Name
In some model instances, I can get the following results in the evaluator
all b:Book | show[b]
--> yields false
some b:Book | show[b]
--> yields true
show[Book]
--> yields true
If show were a relation, one might expect to get an answer like { true, false }. Given that it is a predicate, a single Boolean value is returned. I would have expected show[Book] to be shorthand for the universally quantified expression above it. Instead, it seems to be using existential quantification to fold the results. Does anyone know the rationale for this, or have another explanation for the meaning of show[Book]?
(I'm not sure I have the correct words for this, so bear with me if this seems fuzzy.)
Bear in mind that all expressions in Alloy that denote individuals denote sets of individuals, and that there is no distinction available in the language between 'individual X' and 'the singleton set whose member is the individual X'. ([Later addendum:] In the terms more usually used: the general rule in Alloy's logic is that all values are relations. Binary relations are sets of pairs, n-ary relations sets of n-tuples, sets are unary relations, and scalars are singleton sets. See the discussion in sec. 3.2.2 of Software Abstractions, or the slide "Everything's a relation" in the Alloy Analyzer 4 tutorial by Greg Dennis and Rob Seater.)
Given the declaration you give of the 'show' predicate, it's easy to expect that the argument of 'show' should be a single Book -- or more correctly, a singleton set of Book --, and then to expect further that if the argument is not actually a singleton set (as in the expression show[Book] here) then the system will coerce it to being a singleton set, or interpret it with some sort of implicit existential or universal quantification. But in the declaration pred show(b:Book) ..., the expression b:Book just names an object b which will be a set of objects in the signature Book. (To require that b be a singleton set, write pred show(one b: Book) ....) The expression which constitutes the body of show is evaluated for b = Book just as readily as for b = Book$0.
The appearance of existential quantification is a consequence of the way the dot operator at the heart of the expression addr[b,n] (or, equivalently, n.(b.addr)) is defined. Actually, if you experiment, you'll find that show[Book] is true whenever there is any name for which the set of all books contains a mapping to two different addresses, even in cases where an existential interpretation would fail. Try adding this to your model, for example:
pred hmmmm { show[Book] and no b: Book | show[b] }
run hmmmm for exactly 2 Book, exactly 2 Addr, exactly 2 Name
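To see that concretely: hmmmm is satisfiable in instances shaped like the following sketch, where each book maps the name to just one address, but the union over all books maps it to two:
run {
  some n: Name, disj a0, a1: Addr, disj b0, b1: Book |
    b0.addr = n->a0 and b1.addr = n->a1
} for exactly 2 Book, exactly 2 Addr, exactly 2 Name
In such an instance n.(Book.addr) = a0 + a1, so show[Book] holds even though show[b] fails for every single book b.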
Given a datatype
data Foo = Foo { one :: Int, two :: String } deriving (Show)
An incomplete expression passes typechecking -- e.g.
foo :: Foo
foo = Foo { one = 5 }
main = print foo
It typechecks (emitting a warning about the incomplete record), then (obviously) crashes at run time when the missing field is demanded. Why does it pass? Without record syntax it doesn't (e.g. bar = Foo 5 :: Foo is rejected).
The Haskell 2010 report says in section 3.15.2 Construction Using Field Labels
A constructor with labeled fields may be used to construct a value in which the components are specified by name rather than by position. Unlike the braces used in declaration lists, these are not subject to layout; the { and } characters must be explicit. (This is also true of field updates and field patterns.) Construction using field labels is subject to the following constraints: [...]
Fields not mentioned are initialized to ⊥.
A compile-time error occurs when any strict fields (fields whose declared types are prefixed by !) are omitted during construction.
So it's part of the language specification, and a compiler must accept the code. All fields are initialized; some are just initialized with undefined.
foo = Foo{ one = 5 }
is equivalent to
foo = Foo 5 undefined
A nice compiler will warn you about it if you ask it to. If you want an error, make the fields strict.
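For instance (a separate sketch, with primed field names to avoid clashing with Foo in the same module):
data Bar = Bar { one' :: Int, two' :: !String }

bar :: Bar
bar = Bar { one' = 5 }  -- compile-time error: the strict field two' is omitted
The warning in the non-strict case is GHC's -Wmissing-fields, which I believe is on by default.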
Groovy supports both default and named arguments. I just don't see them working together.
I need some classes to support construction using simple non-named arguments as well as named arguments, like below:
def a1 = new A(2)
def a2 = new A(a: 200, b: "non default")
class A extends SomeBase {
def props
A(a=1, b="str") {
_init(a, b)
}
A(args) {
// use the values in the args map:
_init(args.a, args.b)
props = args
}
private _init(a, b) {
}
}
Is it generally good practice to support both at the same time? Is the above code the only way to do it?
The given code will cause some problems. In particular, it'll generate two constructors with a single Object parameter. The first constructor generates bytecode equivalent to:
A() // a,b both default
A(Object) // a set, b default
A(Object, Object) // pass in both
The second generates this:
A(Object) // accepts any object
You can get around this problem by adding some types. Even though Groovy has dynamic typing, the type declarations in methods and constructors still matter. For example:
A(int a = 1, String b = "str") { ... }
A(Map args) { ... }
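With those signatures, the two call styles from the question resolve unambiguously (a quick sketch of the dispatch):
def a1 = new A(2)                         // matches A(int, String) with b defaulted
def a2 = new A(a: 200, b: "non default")  // named arguments become a Map, matching A(Map)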
As for good practices, I'd simply use one of the groovy.transform.Canonical or groovy.transform.TupleConstructor annotations. They will provide correct property-map and positional-parameter constructors automatically. TupleConstructor provides the constructors only; Canonical also applies some other best practices with regard to equals, hashCode, and toString.
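For instance, something like this should cover both call styles from the question (a sketch that drops the SomeBase superclass; typed properties keep the generated constructors distinct):
import groovy.transform.TupleConstructor

// generates A(), A(int), and A(int, String), with the defaults
// taken from the property initializers
@TupleConstructor
class A {
    int a = 1
    String b = "str"
}

assert new A(2).b == "str"                       // positional; b takes its default
assert new A(a: 200, b: "non default").a == 200  // named arguments use Groovy's
                                                 // map-based construction via A()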