Consider an upgrades relationship:
I need to make sure that upgrades cannot be circular. How can I do that in Alloy?
It is sufficient to enforce transitivity and antireflexivity.
fact {
  no a: Item | a in a.upgrades
}

fact {
  all a, b, c: Item |
    a in b.upgrades and b in c.upgrades implies
      a in c.upgrades
}
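A quick way to convince yourself of that claim (a sketch; I'm assuming a sig Item with an upgrades field, which the question doesn't show) is to check acyclicity as an assertion under those two facts:

assert trans_irrefl_gives_acyclic {
  // with the two facts above, no item should reach itself via upgrades
  no a: Item | a in a.^upgrades
}
check trans_irrefl_gives_acyclic for 6

If Alloy reports no counterexample, the two facts together rule out cycles in that scope.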
From your example, I infer that the upgrades relation is not intended to be transitive: in the example, a diamond sword upgrades a stone sword, and a stone sword upgrades a wooden sword, but the pair WoodSword -> DiamondSword is not in the upgrades relation.
So what you want to say is something like
fact upgrades_acyclic {
  no x : univ | x in x.^upgrades
}
Some modelers prefer the more succinct formulation in terms of relations:
fact upgrades_acyclic { no ^upgrades & iden }
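Putting it together (a sketch: the Item signature and the sword names are my guesses at the question's declarations, not code taken from it):

sig Item { upgrades: set Item }

one sig WoodSword, StoneSword, DiamondSword extends Item {}

fact upgrades_acyclic { no ^upgrades & iden }

// the intended, non-transitive chain from the example
run { upgrades = WoodSword->StoneSword + StoneSword->DiamondSword } for 3

Both formulations of the fact accept this instance, and both reject any instance in which some item reaches itself through one or more upgrades steps.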
I'm learning Alloy and experimenting with creating predicates for relations being injective and surjective. I tried this in Alloy using the following model:
sig A {}
sig B {}

pred injective(r: A -> B) {
  all disj a, a': r.B | no (a.r & a'.r)
}

pred inj {
  no r: A -> B | injective[r]
}

run inj for 8
However, I get this error on no r:
Analysis cannot be performed since it requires higher-order
quantification that could not be skolemized.
I've read the portions of Software Abstractions about skolemization and some other SO questions but it's not clear to me what the issue is here. Can I fix this by rephrasing or have I hit a fundamental limitation?
EDIT:
After some experimenting the issue seems to be associated with the negation. Asking for some r: A -> B | injective[r] immediately produces an injective example. This makes conceptual sense as a generally harder problem, but they seem like more or less isomorphic questions in a small scope.
EDIT2:
I've found using the following model that Alloy gives me examples which also satisfy the commented predicate that gives me the error.
sig A {}
sig B {}

pred injective(r: A -> B) {
  all disj a, a': r.B | no (a.r & a'.r)
}

pred surjective(r: A -> B) {
  B = A.r
}

pred function(f: A -> B) {
  all a: f.B | one a.f
}

pred inj {
  some s: A -> B | function[s] && surjective[s]
  // no r: A -> B | function[r] && injective[r]
}

run inj for 8
Think of it like this: every Alloy analysis involves model finding, in which the solution is a model (called an instance in Alloy) that maps names to relations. To show that no model exists at all for a formula, you run it; if Alloy finds no solution, then there is no model (in that scope). So if you run
some s: A -> B | function[s] && surjective[s]
and you get no solution, you know there is no surjective function in that scope.
But if you ask Alloy to find a model such that no relation exists with respect to that model, that's a very different question, and requires a higher order analysis, because the solver can't just find a solution: it needs to show that there is no extension of that solution satisfying the negated constraint. Your initial example is like that.
So, yes, it's a fundamental limitation in the sense that higher order logic is fundamentally less tractable. But whether that's a limitation for you in practice is another matter. What analysis do you actually need?
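For example, if the analysis you actually need concerns the one relation in each instance rather than all conceivable relations, a common first-order workaround (a sketch under that assumption, not code from your post) is to reify the relation as a field, so the solver searches over its value directly:

sig B {}
sig A { r: set B }  // the relation under test, reified as a field

pred injective { all disj a, a': A | no (a.r & a'.r) }
pred function  { all a: A | one a.r }

// first-order: find an instance whose r is a total function but not injective
run { function and not injective } for 8

Here there is no quantification over relations at all; r is just part of the instance Alloy is already searching for.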
One compelling use of higher order formulas is synthesis. A typical synthesis problem takes the form "find me a program structure P such that there is no counterexample to the specification S". This requires higher order analysis, and is the kind of problem that Alloy* was designed to solve.
But there's generally no reason to use Alloy* for standard verification/simulation problems.
The following model produces instances with exactly 2 address relations when the number of Books is limited to 1, however, if more Books are allowed it will create instances with 0-3 address relations. My misunderstanding of how Alloy works?
sig Name{}
sig Addr{}
sig Book { addr: Name -> lone Addr }
pred show(b:Book) { #b.addr = 2 }
// nr. of address relations in every Book should be 2
run show for 3 but 2 Book
// works as expected with 1 Book
Each instance of show should include one Book, labeled as being the b of show, which has two address pairs. But show does not say that every book must have two address pairs, only that at least one must have two address pairs.
[Postscript]
When you ask Alloy to show you an instance of a predicate, for example by the command run show, then Alloy should show you an instance: that is (quoting section 5.2.1 of Software Abstractions, which you already have open) "an assignment of values to the variables of the constraint for which the constraint evaluates to true." In any given universe, there may be many other possible assignments of values to the variables for which the constraint evaluates to false; the existence of such unsuitable assignments is unavoidable in any universe with more than one atom.
Informally, we can think of a run command for a predicate P with arguments X, Y, Z as requesting that Alloy show us universes which satisfy the expression
some X, Y, Z : univ | P[X, Y, Z]
The run command does not amount to the expression
all X, Y, Z : univ | P[X, Y, Z]
If you want to see only universes in which every book has two pairs in its addr relation, then say so:
pred all_books_have_2 { all b : Book | #b.addr = 2 }
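For instance (with the same Book, Name and Addr declarations as above), you would then run

run all_books_have_2 for 3 but 2 Book

and every instance shown will have exactly two addr pairs in each Book.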
I think it's better that run have implicit existential quantification, rather than implicit universal quantification. One way to see why is to imagine a model that defines trees, such as:
sig Node { parent : lone Node }
fact parent_acyclic { no n : Node | n in n.^parent }
Suppose we get tired of seeing universes in which every tree is trivial and contains a single node. I'd like to be able to define a predicate that guarantees at least one tree with depth greater than 1, by writing
pred nontrivial[n : Node]{ some n.parent }
Given the constraint that trees be acyclic, there can never be a non-empty universe in which the predicate nontrivial holds for all nodes. So if run and pred had the semantics you have been supposing, we could not use nontrivial to find universes containing non-trivial trees.
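With the existential reading of run, a command such as

run nontrivial for 4 Node

asks only for a universe containing at least one node with a parent; the root of a tree is still free to have none.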
I hope this helps.
For a university project I'm trying to write the Chinese game of Go (http://en.wikipedia.org/wiki/Go_%28game%29) in Alloy. (I'm using version 4.2.)
I managed to write the base structure. Go is played on a 9 x 9 board, but I'm using a smaller 3 x 3 board so that checking is faster.
The board is made of crosses which can either be empty or occupied by black or white stones.
abstract sig Colour {}
one sig White, Black, Empty extends Colour {}
abstract sig Cross {
  Status: one Colour,
  near: some Cross,
  group: lone Group
}

one sig C11, C12, C13,
        C21, C22, C23,
        C31, C32, C33 extends Cross {}

sig Group {
  stones : some Cross,
  freedom : some Cross
}
pred closeStones {
  near =
    C11->C12 + C11->C21 +
    C12->C11 + C12->C13 + C12->C22 +
    C13->C12 + C13->C23 +
    C21->C22 + C21->C11 + C21->C31 +
    C22->C21 + C22->C23 + C22->C12 + C22->C32 +
    C23->C22 + C23->C13 + C23->C33 +
    C31->C32 + C31->C21 +
    C32->C31 + C32->C33 + C32->C22 +
    C33->C32 + C33->C23
}
fact stones2 {
  all g : Group |
    all c : Cross |
      (c.group = g) iff c in g.stones
}

fact noGroup {
  all c : Cross | (c.Status = Empty) iff c.group = none
}

fact groupNearStones {
  all disj c, d : Cross |
    ((d in c.near) and c.Status = d.Status)
    iff
    d.group = c.group
}
The problem is: following Go rules, every stone must be considered part of a group. This group is made of all the adjacent stones with the same colour.
My fact "groupNearStones" should be sufficient to describe that condition, but this way I can't get groups made of more than 3 stones.
I've tried rewriting it in different ways, but either the analyzer says it found "0 variables" or it groups up all the stones with the same status, regardless of whether they're near each other or not.
If you could give me any insight I will be grateful, since I've been breaking my head over this simple matter for days.
Ask yourself two questions.
First: in Go, what constitutes a group? You say yourself: it is a set of adjacent stones with the same color. Not that every stone in the group must be adjacent to every other; it suffices for every stone to be adjacent to another stone in the group.
So from a formal point of view: given a stone S, the set of stones in the same group as S is the set of stones reachable from S through the relation same_color_and_adjacent, that is, the reflexive transitive closure S.*same_color_and_adjacent.
Second: what constitutes being the same color and adjacent? I think you can define this easily, with what you have.
On a side issue: you may find it easier to scale the model to arbitrary board sizes if you reify the notion of rows and columns.
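That reification might look roughly like this (a sketch with made-up names, not part of your model, and not yet wired up to the Group machinery):

open util/ordering[Row] as rows
open util/ordering[Col] as cols

sig Row {}
sig Col {}

sig Cross {
  row: one Row,
  col: one Col
}

// two crosses are near iff they agree on one coordinate and are
// consecutive on the other
fun near[c: Cross] : set Cross {
  { d: Cross |
      (d.row = c.row and d.col in cols/next[c.col] + cols/prev[c.col]) or
      (d.col = c.col and d.row in rows/next[c.row] + rows/prev[c.row]) }
}

Changing the board size then becomes a matter of changing the scopes of Row and Col rather than editing an enumeration like closeStones.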
I hope this helps.
[Addendum:] Apparently it doesn't help enough. I'll try to be a bit more explicit, but I want the full solution to come from you and not from me.
Note that the point of defining a relation like same_color_and_adjacent is not to eliminate the formulation of facts or predicates in your model, but to make them easier to write and to write correctly. It's not magic.
Consider first a reformulation of your fact groupNearStones in terms of a single relation that holds for pairs of stones which are adjacent and have the same color. The relation can be defined by modifying your declaration for Cross:
abstract sig Cross {
  Status: one Colour,
  near: some Cross,
  group: lone Group,
  near_and_similar : some Cross
}{
  // @Status names the Status field itself; a bare Status in this
  // appended fact would mean this.Status
  near_and_similar = near & { c : Cross | c.@Status = Status }
}
Now your existing fact can be written as:
fact groupNearStones2 {
all disj c,d : Cross |
d in c.near_and_similar
iff
d.group=c.group
}
Actually, I would write both versions of groupNearStones as predicates, not facts. That would allow you to check that the new formulation is really equivalent to the old one by running a check like:
pred GNS_equal_GNS2 {
groupNearStones iff groupNearStones2
}
(I have not run such a check; I'm being a little lazy.)
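(If you do want to run it, one way, assuming both formulations really are zero-argument predicates as suggested above, is to phrase the equivalence as an assertion:

assert GNS_equivalent { groupNearStones iff groupNearStones2 }
check GNS_equivalent

A counterexample, if Alloy finds one, is an instance satisfying one formulation but not the other.)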
Now, let us consider the problems you mention:
You never get groups containing more than three stones. Actually, given the formulation of groupNearStones, I'm surprised you get groups with more than two. Consider what groupNearStones says: any two stones in a group are adjacent and have the same color. Draw a board on a piece of paper and draw a group of five stones. Now ask whether such a group satisfies the fact groupNearStones. Say the group is C11, C12, C13, C21, C22. What does groupNearStones say about the pair C21, C13?
Do you see the problem? Are the relations near and 'close enough to be in the same group' really the same? If they are not the same, are they related?
Hint: think about transitive closure.
You never get groups containing a single stone.
How surprising is this, given that groupNearStones says that c.group = d.group only if c and d are disjoint? If you never get single-stone groups, then every stone that should be a single-stone group is not classed as being in any group at all, since such a stone must not satisfy the expression s.group = s.group.
Do you see the problem?
Hint: think about reflexive transitive closure.
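As a generic illustration of the two closure operators (deliberately not written against your Go model; just a throwaway sketch):

sig N { r: set N }

run {
  some n: N {
    n in n.*r       // always holds: *r is the reflexive transitive closure
    n not in n.^r   // holds when n cannot reach itself in one or more r steps
  }
} for 4

In other words, n.*r always contains n itself, while n.^r contains it only if n lies on an r-cycle.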
I bought and read the Software Abstractions book (a great book, actually) a couple of months ago, if not a year and a half ago. I've read online tutorials and slides on Alloy, etc. Of course, I've also done exercises and a few models of my own. I've even preached about Alloy at some conferences. Congrats on Alloy, btw!
Now, I am wondering if one can model and solve maximizing problems over integers in Alloy. I don't see how it could be done but I thought asking real experts could give me a more definitive answer.
For instance, say you have a model similar to this:
open util/ordering[State] as states
sig State {
  i, j, k: Int
}{
  i >= 0
  j >= 0
  k >= 0
}

pred subi (s, s': State) {
  s'.i = minus[s.i, 2]
  s'.j = s.j
  s'.k = s.k
}

pred subj (s, s': State) {
  s'.i = s.i
  s'.j = minus[s.j, 1]
  s'.k = s.k
}

pred subk (s, s': State) {
  s'.i = s.i
  s'.j = s.j
  s'.k = minus[s.k, 3]
}

pred init (s: State) {
  // one example
  s.i = 10
  s.j = 8
  s.k = 17
}

fact traces {
  init[states/first]
  all s: State - states/last | let s' = states/next[s] |
    subi[s, s'] or subj[s, s'] or subk[s, s']
  let s = states/last |
    (s.i > 0 => (s.j = 0 and s.k = 0)) and
    (s.j > 0 => (s.i = 0 and s.k = 0)) and
    (s.k > 0 => (s.i = 0 and s.j = 0))
}
run {} for 14 State, 6 Int
I could have used Naturals but let's forget it. What if I want the trace which leads to the maximal i, j or k in the last state? Can I constrain it?
Some intuition is telling me I could do it by trial and error, i.e., find one solution and then manually add a constraint in the model requiring the variable to be strictly greater than the value I just found, and repeat until it is unsatisfiable. But can it be done more elegantly and efficiently?
Thanks!
Fred
EDIT: I realize that for this particular problem, the maximum is easy to find, of course. Keep the maximal value in the initial state as-is and only decrease the other two and you're good. But my point was to illustrate one simple problem to optimize so that it can be applied to harder problems.
Your intuition is right: trial and error is certainly a possible approach, and I use it regularly in similar situations (e.g. to find minimal sets of axioms that entail the properties I want).
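Concretely, one round of that loop against the model above might look like this (the bound 12 is just a value I'm pretending an earlier run produced, not anything canonical):

// an earlier run found a trace whose last state had i = 12;
// now demand something strictly better and solve again
run { states/last.i > 12 } for 14 State, 6 Int

You repeat until the command becomes unsatisfiable; the last value for which it was satisfiable is the maximum reachable in that scope.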
Whether it can be done more directly and elegantly depends, I think, on whether a solution to the problem can be represented by an atom or must be a set or other non-atomic object. Given a problem whose solutions will all be atoms of type T, a predicate Solution which is true of atomic solutions, and a comparison relation gt which holds over atoms of the appropriate type(s), you can certainly write
pred Maximum[ a : T ] {
Solution[a]
and
all s : T | Solution[s] implies
(gt[a,s] or a = s)
}
run Maximum for 5
Since Alloy is resolutely first-order, you cannot write the equivalent predicate for solutions which involve sets, relations, functions, paths through a graph, etc. (Or rather, you can write them, but the Analyzer cannot analyze them.)
But of course one can also introduce signatures called MySet, MyRelation, etc., so that one has one atom for each set, relation, etc., that one needs in a problem. This sometimes works, but it does run into the difficulty that such problems sometimes need all possible sets, relations, functions, etc., to exist (as in set theory they do), while Alloy will not, in general, create an atom of type MySet for every possible set of objects in univ. Jackson discusses this technique in sections 3.2.3 (see "Is there a loss of expressive power in the restriction to flat relations?"), 5.2.2 "Skolemization", and 5.3 "Unbounded universal quantifiers" of his book, and the discussion has thus far repaid repeated rereadings. (I have penciled in an additional index entry in my copy of the book pointing to these sections, under the heading 'Second-order logic, faking it', so I can find them again when I need them.)
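For concreteness, that reification looks roughly like this (a sketch with made-up names, meant to illustrate the limitation just described rather than to solve it):

sig Elem {}

sig MySet { members: set Elem }

// make distinct MySet atoms stand for distinct sets of elements
fact canonical { no disj s, t: MySet | s.members = t.members }

// in a scope of n MySet, at most n of the 2^#Elem possible subsets of Elem
// are represented; that is exactly the gap discussed above
run { some MySet } for 4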
All of that said, however: in section 4.8 of his book, Jackson writes "Integers are not actually very useful. If you think you need them, think again; ... Of course, if you have a heavily numerical problem, you're likely to need integers (and more), but then Alloy is probably not suitable anyway."
Can someone please help me understand predicates using the following example:
sig Light{}
sig LightState { color: Light -> one Color}
sig Junction {lights: set Light}
fun redLigths(s:LightState) : set Light{ s.color.Red}
pred mostlyRed(s:LightState, j:Junction){
lone j.lights - redLigths(s)
}
I have the below questions about the above code:
1) What happens if the above predicate is true?
2) What happens if it is false?
3) Can someone show me a bit of Alloy code that uses the above code and clarifies the meaning of predicates?
I am just trying to understand how do we use the above predicate.
Nothing "happens" until you place a call to a predicate or a function in a command to find an example or counterexample.
First, use the right terminology: nothing 'happens' when a predicate is true; it's more like the other way around: an instance (an allocation of atoms to sets) satisfies (or doesn't satisfy) some condition, making the predicate true (or false).
Also, your model is incomplete, because there is no sig declaration for Color (which should include an atom called Red).
I assume you want to model a world with crossroads containing traffic lights, if so I would use the following model:
abstract sig Color {}
one sig Red,Yellow,Green extends Color {}
sig Light {
color: Color
}
sig Junction {
lights : set Light
}
// This is just for realism, make sure each light belongs to exactly one junction
fact {
Light = Junction.lights
no x,y:Junction | x!=y and some x.lights & y.lights
}
fun count[j:Junction, c:Color] : Int {
#{x:Light | x in j.lights and x.color=c}
}
pred mostly[j:Junction, c:Color] {
no cc:Color | cc!=c and count[j,cc]>=count[j,c]
}
run{
some j:Junction | mostly[j,Red]
} for 10 Light, 2 Junction, 10 int
Looking at the above: I'm using the # operator to count the number of atoms in a set, and I'm specifying a bitwidth of 10 for integers just so that I don't stumble into an overflow when using the # operator on large sets.
When you execute this, you will get an instance with at least one junction that has mostly red lights, it will be marked as $j in the visualizer.
Hope this helps.
What the predicate means in your example is this: it takes the set of lights at the junction, j.lights (call it A), and subtracts from it the set returned by the function redLigths, i.e. the lights that are red in state s (call it B).
The multiplicity lone is then applied to the result of that difference A - B: the predicate mostlyRed holds exactly when at most one light at the junction is not red. I hope my explanation was helpful; I welcome constructive criticism. Thanks
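To see the predicate in action, one could complete the model and run it roughly like this (a sketch: the Color signature and its Red atom are assumptions, since the original snippet never declares them):

abstract sig Color {}
one sig Red, Yellow, Green extends Color {}

sig Light {}
sig LightState { color: Light -> one Color }
sig Junction { lights: set Light }

fun redLigths(s: LightState) : set Light { s.color.Red }

pred mostlyRed(s: LightState, j: Junction) {
  lone j.lights - redLigths[s]
}

// look for a junction with several lights, at most one of which is not red
run { some s: LightState, j: Junction | #j.lights > 2 and mostlyRed[s, j] } for 5

In every instance of this run, the chosen junction has at least three lights, and all of them except possibly one are red in the chosen state.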