Cannot understand the output of this Alloy code:
abstract sig Name{}
one sig N0, N1, N2 extends Name{}
abstract sig Book{}
one sig b0 extends Book { addr : Name -> Name}
abstract sig E{}
one sig e0 extends E{}
pred show(){
some *(b0.addr)
}
run show
I am curious whether the output will contain (e0,e0) and (b0,b0). I have attached the output of the Analyzer but don't know how to interpret it. Does it mean that (e0,e0) is in the solution?
What do you mean by (e0, e0) being "in the solution"? I'd recommend that you read the Alloy book (Software Abstractions, MIT Press, 2012) for an explanation of all the basic notions.
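As a side note (my own addition, not part of the answer above): in Alloy, *r is defined as ^r + iden, and iden is the identity relation over the entire universe, so pairs such as (e0,e0) and (b0,b0) are always contained in *(b0.addr). You can confirm this by adding the following check to the model above; it finds no counterexample:
check {
e0->e0 + b0->b0 in *(b0.addr)
} for 3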
Related
In the dining philosophers problem we have a table with Philosophers and Forks.
sig P {}
sig F {}
For this problem I want the following relation that represents the table:
P1 -> F1
F1 -> P2
P2 -> F2
F2 -> P3
P3 -> F3
F3 -> P1
I.e. each P would point to an F and each F to a P, and this would form a circle. I would like to call a function to get this relation:
fun table : (P+F) one -> one (P+F) { ... }
I've been trying hard to make this work, but it feels like I am missing something fundamental that is also relevant to other problems I am having. Somehow I am missing a 'constructor'.
Any pointers?
Additional
Hovercouch gave a working solution with a helper sig. However, it required an unnatural extension of P and F and introduced a new sig. This can also be solved by:
sig P, F {}
one sig Table {
setting : (P+F) one -> one (P+F)
} {
# P = # F
all p : P, f : F | P in p.^setting and F in f.^setting
}
run {} for 6
This addresses the concern about unnatural inheritance.
However, it still seems very global and a lot of work for what is, IMHO, a very simple problem. I'm keeping the question open to see whether there are other solutions.
If you're willing to add a helper object, we can do this by making an abstract sig Thing and then making both P and F instances of Thing:
abstract sig Thing {
next: Thing
} {
Thing = this.^@next
}
sig F extends Thing {} {
next in P
}
sig P extends Thing {} {
next in F
}
fact SameNumberOfThings {
#P = #F
}
run {} for 6
There may be a design tradeoff involved here, between expressive power and tractability.
There is certainly an issue of what counts as clean or intuitive; you say that the 'next'-ness of P and F is "an aspect of the table setting" and not "an aspect of P or F". I think I understand your thinking, but I don't think you are likely to have any more success defining a principled way to distinguish between "aspects" of P and F and relations in whose domain or range they appear than any of the philosophers who have tried, over the last couple of thousand years, to distinguish reliably between essence and accidence.
And if we accept that the distinction is unreliable, but we nevertheless find it useful, then the question becomes "who made the rule that a relation defined as part of a signature must relate to an (intrinsic) aspect of the individuals involved, and not to an extrinsic relation which is not an aspect of the individuals?" The answer is: you did, not [the creators of] Alloy. If one insists too strongly on one's intuitions about the constructs one wants to use to express something, there is a certain risk of insisting not just that the thing should be expressible but that we should be able to express it using a particular construct. That kind of insistence can teach us a lot about a notation, but sometimes it's easier to accept that the designers of the notation also had intuitions.
This general topic is discussed in Daniel Jackson's Software Abstractions under the questions Does Alloy allow freestanding declarations? (in the discussion following section 3.5.3 on higher-order quantification) and Must all relations be declared as fields? (in the discussion following section 4.2.2 on basic field declarations). The nut of the discussion is "If you want to declare some relations that don't belong naturally to any existing signatures, you can simply declare them as fields of a singleton signature." Mutatis mutandis, the example given looks a lot like the Table sig in your addendum.
TL;DR: yes, you may find it a bit cumbersome, but a singleton sig holding a relation you don't want to declare as a field of its first member really is as close to an established idiom as there is for this sort of thing.
I am trying to learn how ordering works in Alloy. I have a Time signature which I have used to instantiate the ordering module. I want the predicate addPage to add a page to the book at time t', where t' = t.next (basically, add a page to the Book at the next time). However, it is not working as expected: instead, Time2 has fewer pages than Time1. Can someone explain to me why this is happening? Thanks.
open util/ordering[Page] as P0
open util/ordering[Time] as T0
sig Page {}
sig Time {}
sig Book
{
pages: Page -> Time
}
pred addPage(b:Book, p:Page, t: Time)
{
t != T0/last implies
{
let t' = t.next |
b.pages.t' = b.pages.t + p
}
}
run addPage {} for 3
The problem is the extra curly braces in the run statement.
I think Alloy runs an empty predicate in this case.
Try:
run addPage for 3
instead. You will see a visualization where the selected instances for b, t and p are marked.
You're trying to change state, which can only be simulated in constraint logic.
Please notice that the expression in addPage is basically ineffective (run your model without it) and that there's only one Book atom in the solution.
Here's a model you can start with and gradually refine.
open util/ordering[Time]
sig Page {}
sig Time {}
sig Book {
pages : Page lone -> Time // each Time atom is mapped to at most one Page atom
}
pred addPage(b0, b1 : Book, pg : Page, t0, t1 : Time) {
one pg and // one page at a time (it's likely redundant)
not pg in b0.pages.Time and // it's a 'new' page
b0.pages + pg->t1 = b1.pages and // 'new state' of b0
t1 = t0.next // pg is 'added' with the next time stamp
}
run addPage for 3 but 2 Book
I used the optional 'and' operators, placed t1 = t0.next at the end of the constraint, positioned b1.pages (representing the 'new state') on the right, and used quotes in the comments to emphasize that there is no real state change or sequence of operations in the sense of imperative programming.
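If you want to go one step further (this is my own sketch, not part of the answer above), a common next refinement is to keep a single Book and let the ordering on Time drive a trace, constraining every pair of consecutive time stamps:
open util/ordering[Time]
sig Page {}
sig Time {}
one sig Book {
pages: Page -> Time
}
pred addPage[b: Book, p: Page, t, t': Time] {
p not in b.pages.t // p is not yet in the book at time t
b.pages.t' = b.pages.t + p // at t' the book has exactly the old pages plus p
}
fact trace {
no Book.pages.first // the book starts out empty
all t: Time - last | some p: Page | addPage[Book, p, t, t.next] // a page is added at every step
}
run {} for 4
Here the 'state change' is still just a constraint relating b.pages.t and b.pages.t'; the trace fact simply strings those constraints along the whole Time ordering.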
I have the following specification in Alloy:
sig A {}
sig Q{isA: one A}
fact {
all c1,c2:Q | c1.isA=c2.isA => c1=c2 // injective mapping
all a1:A | some c1:Q | c1.isA=a1 //surjective
}
In my models the above fact repeats similarly between different signatures. I tried to factor it out as a separate module, so I created the module below:
module library/copy [A,Q]
fact {
all c1,c2:Q | c1.isA=c2.isA => c1=c2 // injective mapping
all a1:A | some c1:Q | c1.isA=a1 //surjective
}
Then I tried to use it as below:
module family
open library/copy [Person,QP]
sig Person {}
sig QP{isA:Person}
run {} for 4
but Alloy complains that "The name "isA" cannot be found." in the module.
What is wrong with my approach, and why does Alloy complain?
In my previous answer I tried to address your "similarly between different signatures" point; that is, I thought your main goal was to have a module that somehow enforces that there is a field named isA in the sig associated with parameter Q, and that isA is both injective and surjective. I realize now that what you probably want is reusable predicates that assert that a given binary relation is injective/surjective; this you can achieve in Alloy:
library/copy.als
module copy [Domain, Range]
pred inj[rel: Domain -> Range] {
all c1,c2: Domain | c1.rel=c2.rel => c1=c2 // injective mapping
}
pred surj[rel: Domain -> Range] {
all a1: Range | some c1: Domain | c1.rel=a1 //surjective
}
family.als
open copy[QP, Person]
sig Person {}
sig QP{isA:Person}
fact {
inj[isA]
surj[isA]
}
run {} for 4
In fact, you can open the built-in util/relation module and use its injective and surjective predicates to achieve the same thing, e.g.:
family.als
open util/relation
sig Person {}
sig QP{isA:Person}
fact {
injective[isA, Person]
surjective[isA, Person]
}
run {} for 4
You can open the util/relation file (File -> Open Sample Models) and see a different way to implement these two predicates. You can then even check that your way of asserting injective/surjective is equivalent to the built-in way:
open copy[QP, Person]
open util/relation
sig Person {}
sig QP{isA:Person}
check {
inj[isA] <=> injective[isA, Person]
surj[isA] <=> surjective[isA, Person]
} for 4 expect 0 // no counterexample is expected to be found
Modules in Alloy are treated as independent units (i.e., a module can access only the stuff defined in that module itself and the modules explicitly opened in that module), so when compiling the "copy" module, isA is indeed undefined. A theoretical solution would be to additionally parametrize the "copy" module by the isA relation, but in Alloy module parameters can only be sigs.
A possible solution for your problem would be to define abstract sigs A and Q in module "copy", and then in other modules define concrete sigs that extend A and Q, e.g.,
copy.als:
module library/copy
abstract sig A {}
abstract sig Q {isA: one A}
fact {
all c1,c2:Q | c1.isA=c2.isA => c1=c2 // injective mapping
all a1:A | some c1:Q | c1.isA=a1 //surjective
}
family.als:
open library/copy
sig Person extends A {}
sig QP extends Q {} {
this.@isA in Person // restrict the content of isA to Person
}
run {} for 4
Using inheritance to achieve this kind of code reuse is conceptually not ideal, but in practice it is often good enough, and I can't think of another way to do it in Alloy.
I am writing a simple Alloy model but cannot work out how to say that AT MOST one A is associated with p.D (so AT MOST means one or zero). I wrote the code below, but the assertion produces no counterexample containing an instance of P1 without a D. Could you help me define my fact so that p.D has at most one instance, and so that I can see a counterexample in which p has no connection for its D?
abstract sig A {}
sig A1,A2,A3 extends A{}
abstract sig P {}
sig P1 extends P {D: A}
fact
{
all p: P1 | lone (p.D & A)
}
assert asr
{no p: P1 | no (p.D & A)}
check asr for 5
Your specification (the declaration of sig P1) says that each p in P1 is always related by D to exactly one a in A. Your fact is redundant ("0 or 1" is implied by "exactly 1").
You could instead declare "sig P1 extends P {D : lone A}". (The fact would still be redundant.)
Also note that the "& A"s in your fact and assertion are redundant.
You might have meant the fact to be
fact {lone P1.D}
which says that all those instances of P1 which are related to an A are related to the same A.
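For reference, a minimal sketch of the lone declaration suggested above, with the redundant "& A" dropped; with it, check asr yields a counterexample containing a P1 atom that has no D:
abstract sig A {}
sig A1, A2, A3 extends A {}
abstract sig P {}
sig P1 extends P {D: lone A} // each P1 has at most one A, possibly none
assert asr {no p: P1 | no p.D}
check asr for 5 // expect a counterexample: some P1 with empty D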
The Alloy 4 grammar allows signature declarations (and some other things) to carry a private keyword. It also allows Alloy specifications to contain enumeration declarations of the form
enum nephews { hughie, louis, dewey }
enum ducks { donald, daisy, scrooge, nephews }
The language reference doesn't (as far as I can tell) describe the meaning of either the private keyword or the enum construct.
Is there documentation available? Or are they in the grammar as constructs that are reserved for future specification?
This is my unofficial understanding of those two keywords.
enum nephews { hughie, louis, dewey }
is semantically equivalent to
open util/ordering[nephews] as nephewsOrd
abstract sig nephews {}
one sig hughie extends nephews {}
one sig louis extends nephews {}
one sig dewey extends nephews {}
fact {
nephewsOrd/first = hughie
nephewsOrd/next = hughie -> louis + louis -> dewey
}
The private keyword means that if a sig has the private attribute, its label is visible only within the module in which it is declared. The same applies to private fields and private functions.
In addition to the accepted answer above, I'd like to add some useful insights from a week of experience with Alloy enums, in particular the main differences from standard sigs.
If you use abstract sig + extends, you'll end up with a model in which there can be many atoms corresponding to the same concept. An example might make this clearer.
Suppose something like
sig Car {
damages: set Damage
}
You have the choice to use
abstract sig Damage {}
sig MajorDamage, MinorDamage extends Damage {}
vs
enum Damage {
MajorDamage, MinorDamage
}
In the first case we can end up with a model that has several distinct MinorDamage atoms (MinorDamage0, MinorDamage1, ...) associated with Cars, while in the second case there is always exactly one MinorDamage atom, to which different Cars can refer.
It can make sense in this case to use the abstract sig + extends form (because you can then track distinct MinorDamage or MajorDamage elements).
On the other hand, if you want to have a currentState: set State, it could be better to use an
enum State {Damaged, Parked, Driven}
to model the concept, so that there are exactly three State atoms to which each Car can refer. That way, in the Visualizer, you can project your model on exactly one of the states and it will highlight all the Cars associated with that state. You can't do that with the abstract + extends construct, of course, because projecting over MajorDamage0 will highlight only the Car associated with that particular Damage atom and nothing else.
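Put together, a minimal runnable sketch of this second alternative (reusing the Car example from above; the run constraint is only for illustration):
enum State {Damaged, Parked, Driven}
sig Car {
currentState: set State
}
// every Car draws its states from the same three State atoms, so the
// Visualizer can project on State and step through Damaged, Parked and Driven
run {some c: Car | Parked in c.currentState} for 3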
So, in conclusion, it really depends on what you have to do.
Also, keep in mind that if you have an enum composed of X elements and execute
run some_predicate for Y
where Y < X, Alloy produces no instance at all.
So, in our last example, we can't have Y < 3.
As a last note, enums don't always appear in the Visualizer if you use the Magic Layout button, but as I said previously you can "project" your model over the enum and switch between the different elements of the enum.